V800R010C00
Feature Description
Issue 01
Date 2012-01-17
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or representations
of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: support@huawei.com
Intended Audience
This document describes the key features (including GPON access, EPON access, VLAN, ACL,
QoS, and security features) of the SmartAX MA5600T (hereinafter referred to as the
MA5600T) in detail from the following aspects:
- Definition
- Purpose
- Specification
- Availability
- Principle
- Reference
After reading this document, you will understand the definition and purpose of each MA5600T feature,
which features the MA5600T supports, and where to find the related reference documents. This gives
you an overview of the MA5600T feature list and of how these features are implemented on the
MA5600T.
Symbol Conventions
The following symbols may be found in this document. They are defined as follows:
Update History
Updates between document issues are cumulative. Therefore, the latest document issue contains
all updates made in previous issues.
Contents
2 EPON Access
2.1 EPON Feature
2.1.1 Introduction
2.1.2 Specifications
2.1.3 Reference Standards and Protocols
2.1.4 Availability
2.1.5 Principle
2.1.6 Glossary, Acronyms, and Abbreviations
2.2 EPON Terminal Management
2.2.1 Introduction
2.2.2 Specifications
2.2.3 Reference Standards and Protocols
2.2.4 Availability
2.2.5 Principle
2.2.6 Glossary, Acronyms, and Abbreviations
3.4 Availability
3.5 Network Applications
4 MPLS
4.1 Overview
4.2 Reference Standards and Protocols
4.3 Availability
4.4 MPLS
4.4.1 Introduction
4.4.2 Specifications
4.4.3 Principle
4.5 MPLS RSVP-TE
4.5.1 Introduction
4.5.2 Specifications
4.5.3 Principle
4.6 MPLS OAM
4.6.1 Introduction
4.6.2 Specifications
4.6.3 Principle
4.7 Glossary, Acronyms, and Abbreviations
5 PWE3
5.1 Introduction
5.2 Specifications
5.3 Reference Standards and Protocols
5.4 Availability
5.5 Principle
5.5.1 Basic Principle of PWE3
5.5.2 Principle of ETH PWE3
5.5.3 Traffic Label Principle
5.6 Network Applications
5.7 Glossary, Acronyms and Abbreviations
7.2.2 Specifications
7.2.3 Availability
7.2.4 Principle
7.3 VLAN Management
7.3.1 VLAN Overview
7.3.2 Specifications
7.3.3 Reference Standards and Protocols
7.3.4 Availability
7.3.5 Types of VLAN
7.3.6 VLAN Attribute
7.3.7 VLAN Processing
7.3.8 VLAN Aggregation
7.3.9 Special Applications of VLANs
7.4 VLAN Translation Policy
7.4.1 Introduction
7.4.2 Specifications
7.4.3 Availability
7.4.4 Principle
7.5 Forwarding Policy Management
7.5.1 Introduction
7.5.2 Specifications
7.5.3 Availability
7.5.4 Principle
7.6 Bridging
7.6.1 Introduction
7.6.2 Specifications
7.6.3 Reference Standards and Protocols
7.6.4 Availability
7.6.5 Principle
7.7 Glossary, Acronyms and Abbreviations
8 QoS
8.1 QoS Processing
8.2 Traffic Classification
8.2.1 Overview
8.2.2 Specifications
8.2.3 Availability
8.2.4 Principle
8.3 Priority Processing
8.3.1 Overview
8.3.2 Specifications
8.3.3 Availability
8.3.4 Principle
9 L3 Features
9.1 ARP
9.1.1 Introduction
9.1.2 Specifications
9.1.3 Reference Standards and Protocols
9.1.4 Availability
9.1.5 Principle
9.2 ARP Proxy
9.2.1 Introduction
9.2.2 Specifications
9.2.3 Reference Standards and Protocols
9.2.4 Availability
9.2.5 Principle
9.3 DHCP Relay
9.3.1 Introduction
9.3.2 Specifications
9.3.3 Reference Standards and Protocols
9.3.4 Availability
9.3.5 Principle
9.4 DHCP Proxy
9.4.1 Introduction
9.4.2 Specifications
9.4.3 Reference Standards and Protocols
9.4.4 Availability
9.4.5 Principle
9.5 IP-aware Bridge
9.5.1 Introduction
9.5.2 Specifications
9.5.3 Availability
9.5.4 Principle
9.6 Routing
9.6.1 Introduction
9.6.2 Reference Standards and Protocols
9.6.3 Availability
9.6.4 Specifications
9.6.5 Principle
9.6.6 Static Route
9.6.7 RIP
9.6.8 OSPF
9.6.9 IS-IS
9.6.10 BGP
9.6.11 VRF
9.6.12 ECMP
9.6.13 Glossary and Abbreviations
10 Multicast
10.1 Introduction
10.2 Specifications
10.3 Reference Standards and Protocols
10.4 Availability
10.5 Multicast Overview
10.6 Implementation Principles
10.6.1 Basic Managed Objects
10.6.2 Forwarding Framework on the Device
10.6.3 IGMP Control Framework
10.6.4 Multicast Forwarding Flow
10.7 Advanced Multicast Technologies
10.7.1 Service Provisioning
10.7.2 Protocol Interoperation
10.7.3 Network-side Interoperating Technologies
10.7.4 User-Side Interoperating Technologies
10.7.5 Interoperating Technologies Between Specific Ends
10.8 Network Application
11.1 MSTP
11.1.1 Introduction
11.1.2 Specifications
11.1.3 Reference Standards and Protocols
11.1.4 Availability
11.1.5 Principle
11.2 RRPP
11.2.1 Introduction
11.2.2 Specifications
11.2.3 Reference Standards and Protocols
11.2.4 Availability
11.2.5 Principle
11.2.6 Network Applications
11.2.7 Glossary, Acronyms and Abbreviations
11.3 Smart Link and Monitor Link
11.3.1 Introduction
11.3.2 Specifications
11.3.3 Availability
11.3.4 Principle
11.3.5 Network Applications
11.3.6 Glossary, Acronyms, and Abbreviations
11.4 Ethernet Link Aggregation
11.4.1 Introduction
11.4.2 Specifications
11.4.3 Reference Standards and Protocols
11.4.4 Availability
11.4.5 Principle
11.4.6 Network Applications
11.4.7 Terms, Acronyms, and Abbreviations
11.5 Protection Group of Uplink Ports
11.5.1 Introduction
11.5.2 Specifications
11.5.3 Availability
11.5.4 Principle
11.6 BFD
11.6.1 Introduction
11.6.2 Specifications
11.6.3 Principle
11.6.4 Glossary, Acronyms, and Abbreviations
11.7 GPON Type B Protection
11.7.1 Introduction
11.7.2 Specifications
12 Application Security
12.1 Introduction
12.2 Relevant Standards and Protocols
12.3 Availability
12.4 HWTACACS
12.4.1 Introduction
12.4.2 Specifications
12.4.3 Principle
12.5 RAIO
12.5.1 Introduction
12.5.2 Specifications.......................................................................................................................................305
12.5.3 Principle...............................................................................................................................................305
12.6 PITP..............................................................................................................................................................311
12.6.1 Introduction.........................................................................................................................................311
12.6.2 Specifications.......................................................................................................................................312
12.6.3 Principle...............................................................................................................................................312
12.7 DHCP Option 82............................................................................................................................................314
12.7.1 Introduction.........................................................................................................................................314
12.7.2 Specifications.......................................................................................................................................315
12.7.3 Principle...............................................................................................................................................315
12.8 802.1X..........................................................................................................................................................317
12.8.1 Introduction.........................................................................................................................................317
12.8.2 Specifications.......................................................................................................................................317
12.8.3 Principle...............................................................................................................................................318
12.9 Anti MAC Spoofing.....................................................................................................................................319
12.9.1 Introduction.........................................................................................................................................320
12.9.2 Specifications.......................................................................................................................................320
13 Network Security.....................................................................................................................328
13.1 Introduction..................................................................................................................................................329
13.2 Availability...................................................................................................................................................329
13.3 Anti-DoS Attack...........................................................................................................................................330
13.3.1 Introduction.........................................................................................................................................330
13.3.2 Specifications.......................................................................................................................................331
13.3.3 Principle...............................................................................................................................................331
13.4 Anti-ICMP/IP Attack....................................................................................................................................331
13.4.1 Introduction.........................................................................................................................................331
13.4.2 Principle...............................................................................................................................................332
13.5 Source Route Filtering..................................................................................................................................332
13.5.1 Introduction.........................................................................................................................................332
13.5.2 Principle...............................................................................................................................................332
13.6 MAC Address Filtering................................................................................................................................332
13.6.1 Introduction.........................................................................................................................................333
13.6.2 Specifications.......................................................................................................................................333
13.6.3 Principle...............................................................................................................................................333
13.7 Firewall Blacklist..........................................................................................................................................333
13.7.1 Introduction.........................................................................................................................................333
13.7.2 Specifications.......................................................................................................................................334
13.7.3 Principle...............................................................................................................................................334
13.8 Configuration of Acceptable or Refused Address Segments.......................................................................334
13.8.1 Introduction.........................................................................................................................................334
13.8.2 Specifications.......................................................................................................................................335
13.8.3 Principle...............................................................................................................................................335
13.9 Service Overload Control.............................................................................................................................335
13.9.1 Introduction.........................................................................................................................................335
13.9.2 Principle...............................................................................................................................................336
13.10 Acronyms and Abbreviations.....................................................................................................................337
16 Ethernet OAM..........................................................................................................................385
16.1 Introduction..................................................................................................................................................386
16.2 Reference Standards and Protocols..............................................................................................................386
16.3 Ethernet CFM OAM.....................................................................................................................................386
16.3.1 Introduction.........................................................................................................................................386
16.3.2 Specifications.......................................................................................................................................387
16.3.3 Availability..........................................................................................................................................387
16.3.4 Principle...............................................................................................................................................388
16.4 Ethernet EFM OAM.....................................................................................................................................391
16.4.1 Introduction.........................................................................................................................................391
16.4.2 Availability..........................................................................................................................................392
16.4.3 Principle...............................................................................................................................................392
16.5 Glossary, Acronyms, and Abbreviations......................................................................................................395
17 Clock Feature............................................................................................................................398
17.1 NTP...............................................................................................................................................................399
17.1.1 Introduction.........................................................................................................................................399
17.1.2 Specifications.......................................................................................................................................399
17.1.3 Reference Standards and Protocols.....................................................................................................399
17.1.4 Availability..........................................................................................................................................400
17.1.5 Principle...............................................................................................................................................400
17.2 Clock and Time System................................................................................................................................401
17.2.1 Introduction.........................................................................................................................................401
17.2.2 Specifications.......................................................................................................................................402
17.2.3 Reference Standards and Protocols.....................................................................................................403
17.2.4 Availability..........................................................................................................................................404
17.2.5 Principle of the Clock and Time System.............................................................................................404
17.2.6 Scenarios of Clock/Time Synchronization..........................................................................................408
17.2.7 Glossary, Acronyms, and Abbreviations.............................................................................................414
1 GPON Features
Gigabit passive optical network (GPON) is one of the PON technologies. A GPON-capable
device supports high-bandwidth transmission. GPON effectively solves the bandwidth
bottleneck problem of twisted-pair access and meets users' demands for high-bandwidth
services.
1.1 Introduction
1.2 Specifications
1.3 Reference Standards and Protocols
1.4 Availability
1.5 Overview of the GPON System
1.6 GPON Principle
1.7 Key GPON Technologies
1.8 GPON Terminal Authentication and Management
1.9 Continuous-Mode ONU Detection
1.10 GPON Network Applications
1.11 Glossary, Acronyms, and Abbreviations
1.1 Introduction
Definition
GPON is a type of point-to-multipoint (P2MP) passive optical network (PON).
Purpose
GPON adopts the passive optical transmission technology and is mainly applicable to such
scenarios as Fiber To The Home (FTTH), Fiber To The Building (FTTB), Fiber To The Office
(FTTO), and Fiber To The Mobility Base Station (FTTM) to provide various services:
l Voice
l Data
l Video
l Leased line
GPON supports high-bandwidth transmission. This helps break the bandwidth bottleneck of
access over twisted pairs and supports bandwidth-intensive services, such as high-definition TV
(HDTV) and live broadcasts.
In addition, GPON supports long-reach access, which helps extend the coverage and reduce
network nodes.
1.2 Specifications
The specifications of the GPON boards and ports are as follows:
l The system supports the service shelf to be fully configured with the GPBD board (every
GPBD board supports eight GPON ports).
l Every GPBD supports up to 8K service streams.
l Every GPON port on GPBD supports up to 128 ONUs/ONTs.
l The system supports up to 8,192 ONUs/ONTs.
l The GPON port supports maximum downstream and upstream rates of 2.5 Gbit/s and 1.25
Gbit/s respectively.
l The system supports a maximum physical transmission distance of 20 km and a maximum
logical transmission distance of 60 km.
The system supports the following GEM port and T-CONT specifications:
l The system supports the GEM encapsulation. Every GPON port supports up to 4096 GEM
ports and the maximum number of GEM ports supported in the system is 32K.
l The system supports up to 512 DBA profiles and 32K T-CONTs.
l The system supports the loop line detection for the remote GEM port and the line detection
for the ONT UNI port.
l The system can automatically allocate GEM port IDs.
1.3 Reference Standards and Protocols
l ITU-T G.984.1: General Characteristics. This protocol mainly describes the basic features
and major protection modes of GPON.
l ITU-T G.984.2: Physical Media Dependent (PMD) Layer Specification. This protocol
mainly describes the PMD layer parameters, including physical parameters (such as the
transmit optical power, receiver sensitivity, and overload optical power) of optical
transceivers, and also defines optical budget of different levels, for example, the most
common Class B+.
l ITU-T G.984.3: Transmission Convergence Layer Specification. This protocol mainly
describes the TC layer specifications, including the upstream and downstream frame
structures and GPON principle.
l ITU-T G.984.4: ONT Management And Control Interface Specification. This protocol
mainly describes the GPON management and maintenance protocols, such as OAM,
PLOAM, and OMCI.
l ITU-T G.984.5: Enhancement Band. This protocol mainly describes the GPON wavelength
planning, including reserving bands for next-generation PON.
l ITU-T G.984.6: Reach Extension. This protocol mainly describes several long reach PON
schemes for extending GPON transmission distance.
l TR-156: Using GPON Access in the context of TR-101.
1.4 Availability
License Support
The number of remote ONT ports supported by the MA5600T is licensed. Therefore, the
corresponding service is also licensed.
Version Support
1.5 Overview of the GPON System
Mainstream PON technologies include broadband passive optical network (BPON), Ethernet
passive optical network (EPON), and gigabit passive optical network (GPON). Adopting the
ATM encapsulation mode, BPON is mainly used for carrying ATM services. With the
obsolescence of the ATM technology, BPON also drops out. EPON is an Ethernet passive optical
network technology. GPON is a gigabit passive optical network technology and is to date the
most widely used mainstream optical access technology.
In the GPON network, the OLT is connected to the optical splitter through a single optical fiber,
and the optical splitter is then connected to ONUs. Different wavelengths are adopted in the
upstream and downstream directions for transmitting data. The upstream wavelength is 1310
nm and downstream wavelength is 1490 nm. The GPON adopts WDM to transmit data of
different upstream/downstream wavelengths over the same ODN. Data is broadcast in the
downstream direction and transmitted in the TDMA mode (based on timeslots) in the upstream
direction.
All data is broadcast to all ONUs from the OLT. The ONUs then select and receive their
respective data and discard the other data. Figure 1-3 shows the details.
Figure 1-3 Downstream transmission in the GPON system (the OLT broadcasts all frames through the splitter; ONU1, ONU2, and ONU3 each keep only their own data)
In the upstream direction, each ONU can send data to the OLT only in the timeslot permitted
and allocated by the OLT. This ensures that each ONU sends data in a given sequence, thus
avoiding upstream data conflicts. Figure 1-4 shows the details.
Figure 1-4 Upstream transmission in the GPON system (TDMA; each ONU transmits through the splitter only in its allocated timeslot)
T-CONT: a service carrier in the upstream direction in the GPON system. All GEM ports are
mapped to T-CONTs. Then, service streams are transmitted upstream by means of the OLT's
DBA scheduling. T-CONT is the basic control unit of the upstream service stream in the GPON
system. Every T-CONT is identified by Alloc-ID. The Alloc-ID is globally allocated by the
OLT. That is, every T-CONT can be used by only one ONU connected to the OLT.
There are five types of T-CONTs; therefore, T-CONT selection varies during the scheduling of
different types of upstream service streams. Every T-CONT bandwidth type has its own quality
of service (QoS) characteristics. QoS is mainly represented by the bandwidth guarantee, which
can be fixed, assured, non-assured, best-effort, or hybrid (corresponding to type 1 to type 5 in
Table 1-2).
Table 1-2 T-CONT bandwidth types (excerpt)
Bandwidth Type       Type 1   Type 2   Type 3   Type 4   Type 5
Fixed bandwidth      X        No       No       No       X
Assured bandwidth    No       Y        Y        No       Y
NOTE
In Table 1-2, X indicates the fixed bandwidth value, Y the assured bandwidth value, and Z the maximum
bandwidth value.
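The composition of the five T-CONT bandwidth types can be sketched as follows. This is an illustrative model, not the MA5600T implementation; it simply combines the fixed (X), assured (Y), and maximum (Z) values per Table 1-2.

```python
# Illustrative sketch of how the five T-CONT bandwidth types combine the
# fixed (X), assured (Y), and maximum (Z) bandwidth values in Table 1-2.

def tcont_bandwidth(tcont_type, x=0, y=0, z=0):
    """Return (guaranteed_bw, max_bw) for a T-CONT type (1-5)."""
    if tcont_type == 1:   # fixed bandwidth only
        return x, x
    if tcont_type == 2:   # assured bandwidth only
        return y, y
    if tcont_type == 3:   # assured, plus non-assured up to the maximum
        return y, z
    if tcont_type == 4:   # best effort only
        return 0, z
    if tcont_type == 5:   # hybrid: fixed + assured, up to the maximum
        return x + y, z
    raise ValueError("T-CONT type must be 1-5")

# Example: a type 3 T-CONT with 10 Mbit/s assured and 100 Mbit/s maximum
print(tcont_bandwidth(3, y=10, z=100))   # (10, 100)
```

For example, a type 5 T-CONT with X = 2, Y = 3, and Z = 20 is guaranteed 5 Mbit/s and may burst up to 20 Mbit/s.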
Figure 1-5 shows the principle of service multiplexing in the GPON system. On ONUs, all
service streams are mapped to different GEM ports and then to different types of T-CONTs for
upstream transmission (the T-CONT is the basic carrier in the upstream direction over GPON
lines). On the OLT, GEM ports are demultiplexed from each T-CONT and sent to the GPON
MAC chip. The MAC chip extracts the service streams from the GEM port payload and sends
them to a proper service processing unit for processing. In the downstream direction, all service
streams are encapsulated by the GPON service processing unit into GEM ports and then GEM
ports are broadcast to all ONUs connected to the GPON port. Then, every ONU filters data
according to GEM port ID, reserving the GEM port corresponding to itself. After that, every
ONU decapsulates service streams from the GEM port and sends them to the user-side equipment
through the service interface of the ONU.
Figure 1-6 shows the mapping between service stream, GEM port, and T-CONT. The GEM
port is the smallest service unit in the GPON system. Every GEM port can carry one or more
types of service stream. The GEM port, after carrying service streams, must be mapped to a T-
CONT before upstream service scheduling. Every ONU supports multiple T-CONTs and can
be configured with different service types. A T-CONT can be bound with one or more GEM
ports, depending on the user's configuration. On the OLT, GEM ports are demodulated from the
T-CONT and then service streams are demodulated from the GEM port payload for further
processing.
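The mapping described above can be modeled in a few lines. This is a simplified illustration only (the class and attribute names are assumptions, not Huawei data structures): each GEM port carries one or more service streams, and each T-CONT aggregates one or more GEM ports.

```python
# Simplified data model of the service stream -> GEM port -> T-CONT
# mapping. Names are illustrative, not actual MA5600T structures.

class GemPort:
    def __init__(self, port_id):
        self.port_id = port_id        # smallest service unit in GPON
        self.service_streams = []     # one or more carried streams

class TCont:
    def __init__(self, alloc_id):
        self.alloc_id = alloc_id      # globally allocated by the OLT
        self.gem_ports = []           # one or more bound GEM ports

    def bind(self, gem_port):
        self.gem_ports.append(gem_port)

# An ONU maps voice and data streams to separate GEM ports, then binds
# both GEM ports to one T-CONT for upstream scheduling.
voice, data = GemPort(100), GemPort(200)
voice.service_streams.append("voice")
data.service_streams.append("internet")
tcont = TCont(alloc_id=257)
tcont.bind(voice)
tcont.bind(data)
print([g.port_id for g in tcont.gem_ports])   # [100, 200]
```

On the OLT side, the reverse walk (T-CONT to GEM ports to streams) recovers the individual service streams for processing.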
Figure 1-7 Upstream framing (T-CONT 1 of ONT 1, Alloc-ID 257, carries GEM ports 100 and 200; T-CONT 2 of ONT 2, Alloc-ID 258, carries GEM ports 300 and 500; each GEM port occupies a slot in the upstream frame)
NOTE
The lengths of the upstream frame and downstream frame at each GPON rate are the same. Every
upstream frame contains the content carried by one or more T-CONTs. The BWmap in each
downstream frame identifies the start time and end time of each T-CONT transmission. An ONU
must send a PLOu each time it takes over the PON media access right from another ONU. If an
ONU is allocated two consecutive Alloc-IDs (the end time of the first is smaller by 1 than the
start time of the second), the ONU does not send a PLOu for the second Alloc-ID.
The payload of an upstream frame may contain three types of content: the ATM cell, the GEM
frame, and the DBA report.
Figure 1-8 shows the GPON upstream frame structure.
Figure 1-8 GPON upstream frame structure (bursts from ONT A and ONT B, each consisting of a PLOu followed by optional PLOAMu, PLSu, and DBRu fields and the payload)
The GPON upstream frame consists of the PLOu, PLOAMu, PLSu, DBRu, and Payload fields
and the meanings of these fields are described as follows:
l PLOu: physical layer overhead (upstream), mainly used for burst delimitation,
synchronization, and identification of the ONU that sends the current burst.
l PLOAMu: PLOAM message of upstream data, mainly used for reporting management
information such as ONU maintenance and management status. (This field is not present in
every frame; whether it is sent is negotiated.)
l PLSu: Power Levelling Sequence upstream. It is a 120-byte field and is used for power
control measurements by the ONU.
l DBRu: mainly used for reporting the T-CONT status so that the OLT can perform dynamic
bandwidth allocation for the ONU in the next round. (This field is not present in every frame;
whether it is sent is negotiated.)
l Payload: DBA status report or data frames, where each data frame consists of a GEM header and a GEM payload.
l GEM header: mainly used for differentiating data of different GEM ports. The GEM port
is the smallest unit for data transmission in the GPON system, which is similar to the PVC
of ATM. Every type of upstream service stream must be mapped to the GEM port and then
to the T-CONT for transmission. The GEM header field consists of PLI, Port ID, PTI, and
HEC.
PLI: Indicates the length of data payload.
Port ID: Uniquely identifies a GEM port.
PTI: Identifies the payload type. It is mainly used for identifying the status and type of
data that is being transmitted (for example, whether the OAM message is being
transmitted and whether data transmission is complete).
HEC: header error check, which provides error detection and correction for the GEM
header to ensure transmission quality.
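The 40-bit GEM header layout described above (PLI 12 bits, Port ID 12 bits, PTI 3 bits, HEC 13 bits) can be packed and parsed as follows. This is a sketch of the field layout only; the real HEC is an error-correcting code, which is replaced here by a placeholder value.

```python
# Pack/unpack the 5-byte GEM header field layout:
# PLI (12 bits) | Port ID (12 bits) | PTI (3 bits) | HEC (13 bits).
# The real HEC is an error-correcting code; a placeholder is used here.

def pack_gem_header(pli, port_id, pti, hec=0):
    assert 0 <= pli < 4096 and 0 <= port_id < 4096 and 0 <= pti < 8
    word = (pli << 28) | (port_id << 16) | (pti << 13) | hec
    return word.to_bytes(5, "big")

def unpack_gem_header(header):
    word = int.from_bytes(header, "big")
    return {
        "pli": (word >> 28) & 0xFFF,      # payload length indicator
        "port_id": (word >> 16) & 0xFFF,  # identifies the GEM port
        "pti": (word >> 13) & 0x7,        # payload type indicator
        "hec": word & 0x1FFF,             # header error check
    }

hdr = pack_gem_header(pli=1500, port_id=100, pti=0b001)
print(unpack_gem_header(hdr)["port_id"])   # 100
```

Each ONU applies exactly this kind of Port ID check to decide whether to keep or discard a received GEM frame.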
GPON supports a downstream transmission rate of 2.488 Gbit/s, a frame length of 38880 bytes,
and a frequency of one frame every 125 μs, as shown in Figure 1-9 and Figure 1-10.
Figure 1-9 GPON downstream frame structure (a PCBd header followed by the payload)
The OLT broadcasts PCBd to all ONUs. Every ONU receives the entire PCBd and then acts
upon the relevant information contained therein.
A PCBd contains information such as frame synchronization information, physical layer OAM
information, and BIP check field. US BWMap (upstream bandwidth map) is the upstream
transmission bandwidth map sent to each T-CONT by the OLT. The bandwidth map is
transmitted through the US BW Map field in the PCBd of the downstream frame. In this way,
MAC control is implemented.
GPON uses TDM for the upstream transmission. Therefore, when multiple ONUs transmit data
upstream concurrently, transmission conflicts occur. The avoidance mechanism for such a
conflict is that the OLT sends a notification through the downstream frame, informing each ONU
of its corresponding timeslot for upstream transmission.
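The conflict-avoidance mechanism above can be sketched as a simple grant check. This is illustrative only and does not reproduce the actual BWmap encoding: the ONU looks up the start/stop window the OLT granted to its Alloc-ID and transmits only inside that window.

```python
# Illustrative sketch (not the real BWmap encoding) of how an ONU uses
# the per-Alloc-ID start/stop window from the downstream BWmap to decide
# when it may transmit upstream, avoiding collisions.

def may_transmit(bwmap, alloc_id, slot):
    """Return True if `slot` falls within the grant window for `alloc_id`."""
    start, stop = bwmap[alloc_id]
    return start <= slot <= stop

# The OLT grants non-overlapping windows to two T-CONTs:
bwmap = {257: (0, 99), 258: (100, 199)}
print(may_transmit(bwmap, 257, 50))    # True
print(may_transmit(bwmap, 258, 50))    # False
```

Because the OLT hands out non-overlapping windows, no two ONUs ever transmit in the same timeslot.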
Ranging
The GPON system has strict requirements for synchronization. To make every ONU send data
normally at the specified timeslot, the entire GPON system must be highly synchronous and all
ONUs must follow uniform clock requirements. The distance of ONUs to the OLT, however,
varies. To achieve synchronization throughout all ONUs, the actual physical distance of every
ONU to the OLT must be calculated first and then the varying delay compensation is performed
on all ONUs according to the farthest logical distance. This ensures the logical distance of all
ONUs to the OLT is the same. Hence, synchronization is achieved among all the ONUs
connected to the GPON port. In the GPON system, the ranging principle is mainly as follows:
Test the OLT-to-ONU delay parameter and then perform delay compensation according to the
maximum logical distance and the round-trip time of every ONU, thereby achieving
synchronization over the entire GPON system.
Figure 1-11 shows the ranging method.
l The basic formula is EqD(n) = Teqd - RTD(n). Here, Teqd is a constant value indicating
the possible maximum delay of the GPON system (for example, if the maximum distance
between the OLT and the ONU is 20 km, Teqd is 200 μs + 50 μs = 250 μs), and the round-
trip delay (RTD) is the round-trip transmission delay of each ONU measured by the OLT.
l To measure RTD, the OLT sends a ranging request to the ONU, and then the ONU responds
to the request.
l RTD is the time from the moment the OLT sends the first bit/byte of the ranging request
in the downstream frame to the moment the OLT receives the last bit/byte of the ranging
response.
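The formula above translates directly into code. The values below are in microseconds; Teqd = 250 μs corresponds to the 20 km example in the text (about 200 μs round trip plus a 50 μs margin).

```python
# EqD(n) = Teqd - RTD(n): the equalization delay assigned to ONU n so
# that all ONUs appear to be at the same logical distance from the OLT.

def equalization_delay(rtd_us, teqd_us=250.0):
    """Return the extra delay (in microseconds) padded onto an ONU."""
    if rtd_us > teqd_us:
        raise ValueError("ONU is beyond the maximum supported distance")
    return teqd_us - rtd_us

# An ONU about 5 km away (RTD roughly 50 us) is padded with 200 us:
print(equalization_delay(50.0))   # 200.0
```

An ONU at the maximum distance (RTD = Teqd) receives zero padding, while closer ONUs receive proportionally more, so all upstream bursts arrive aligned at the OLT.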
Figure 1-11 Ranging method (the OLT records timestamp t0 when it sends the ranging request and timestamp t1 when it receives the ONU response; TRESPONSE is the ONU's response delay)
FEC
Forward error correction (FEC) is mainly used for improving the transmission quality of a line.
FEC uses RS(255, 239), encoding all downstream packets in 255-byte FEC blocks. This ensures
the correctness of data received by the ONUs. By using the FEC algorithm, the GPON system
reduces the bit error rate of line transmission to 10^-15 at the transport layer, avoids data
retransmission, and improves the optical power budget by 2-3 dB. Upstream FEC and
downstream FEC are supported in the GPON system.
Line Encryption
In the GPON system, downstream data is broadcast to all ONUs. As a result, downstream data
destined for certain ONUs may be intercepted by illegal users. Upstream transmission, in
contrast, is highly directional: an ONU cannot intercept the upstream data of other ONUs, so
private information (such as a key) can be safely transmitted in the upstream direction. The
GPON system uses AES-128 encryption for line security control, thereby effectively preventing
security issues such as data theft. In the AES-128 encryption system, the OLT supports key
exchange and switchover. The key exchange is initiated by the OLT, which sends a key
exchange request. The ONU responds by generating a key and sending it to the OLT. Because
the PLOAM message is limited in length, the key is sent in two parts, and the two parts are sent
three times repeatedly. If the three received copies of the key are not identical, the OLT re-sends
the key exchange request until it receives the same key in all three copies. When the OLT
receives a new key, it starts the key switchover: the OLT notifies the ONU by sending a
command containing the frame number at which the new key takes effect. This command is
sent three times. As long as the ONU receives the command once, it switches over to the new
key on the proper data frames.
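The repeat-until-consistent logic of the key exchange can be sketched as follows. This is a simplified illustration of the behavior described above, not the PLOAM message handling itself; the function names are assumptions.

```python
# Sketch of the key-exchange retry logic: the ONU's key is received in
# three copies per round, and the OLT accepts it only when all three
# copies match; otherwise it re-sends the key exchange request.

def receive_key(request_key_from_onu, max_rounds=10):
    """request_key_from_onu() models one exchange round and returns the
    three key copies received from the ONU."""
    for _ in range(max_rounds):
        k1, k2, k3 = request_key_from_onu()
        if k1 == k2 == k3:
            return k1          # consistent key: start the key switchover
    raise RuntimeError("key exchange failed")

# Simulated ONU: the first round is corrupted, the second round is clean.
rounds = iter([(b"AA", b"AB", b"AA"), (b"AA", b"AA", b"AA")])
print(receive_key(lambda: next(rounds)))   # b'AA'
```

The triple transmission trades bandwidth for robustness: a single corrupted copy triggers a retry rather than a silent acceptance of a wrong key.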
DBA
In the GPON system, the OLT controls an ONU's upstream data traffic by sending authorization
signals to the ONU. PON requires an effective TDMA mechanism to control the upstream traffic,
so that data packets from multiple ONUs do not collide when packets are transmitted upstream.
Nevertheless, a collision-based mechanism would require QoS management inside the optical
distribution network (ODN), which is a passive network. This is physically impossible, or would
severely reduce efficiency. For this reason, a mechanism for managing the upstream GPON
traffic has been a primary focus in the standardization of GPON traffic management. It drove
the development of the ITU-T G.983.4 Recommendation, which defines the dynamic bandwidth
allocation (DBA) protocol for managing the upstream PON traffic.
Figure 1-12 shows the DBA principle. The GPON system controls the upstream traffic by
allocating data authorization to each transmission container (T-CONT) inside the ONU. The
OLT needs to know the traffic status of a T-CONT to determine the authorized amount to be
allocated to the T-CONT. By using the DBRu field or the Payload field in the upstream frame,
the ONUs report their data statuses to the OLT. After receiving ONUs' data statuses, the OLT
uses DBA to periodically update the upstream BWmap information according to the status of
ONU data waiting to be sent and notifies all ONUs of the updates through the downstream frame.
Thus, every ONU can dynamically adjust its upstream bandwidth according to the actual data
traffic to be sent, thereby improving the utilization of upstream bandwidth.
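A toy DBA round illustrating the loop above might look as follows. This is purely illustrative (real DBA algorithms also honor the per-type fixed/assured guarantees from Table 1-2): each T-CONT reports its queue occupancy via DBRu, and the OLT divides the frame capacity in proportion to the reports, capped by what each queue actually holds.

```python
# Toy DBA round: divide the upstream frame capacity among T-CONTs in
# proportion to their reported queue sizes. Illustrative only; real DBA
# also honors fixed/assured bandwidth guarantees per T-CONT type.

def dba_round(reports, frame_capacity):
    """reports: {alloc_id: queued_bytes}; returns {alloc_id: grant_bytes}."""
    grants = {}
    remaining = frame_capacity
    total = max(sum(reports.values()), 1)
    # Serve T-CONTs in a fixed order; cap each grant at the queue size.
    for alloc_id, queued in sorted(reports.items()):
        fair_share = frame_capacity * queued // total
        grant = min(queued, fair_share, remaining)
        grants[alloc_id] = grant
        remaining -= grant
    return grants

print(dba_round({257: 3000, 258: 1000}, frame_capacity=2000))
# {257: 1500, 258: 500}
```

The grants computed here would then be written into the BWmap of the next downstream frame, closing the report-allocate loop described above.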
Figure 1-12 DBA principle (the ONU's control-plane logic sends a DBA report for each T-CONT; the OLT's DBA algorithm computes the BWmap, and the data-plane scheduler grants upstream timeslots accordingly)
1.8 GPON Terminal Authentication and Management
In the GPON system, only authenticated ONUs can access the system. Implementing
authentication meets the carriers' requirements for flexible management and easy maintenance.
A GPON ONU has three types of authentication: SN authentication, SN+password
authentication, and password authentication. After passing the authentication, the ONU can go
online and transmit data. The ONU selectively receives downstream data based on the GEM
port. Each ONU checks the GEM port ID of the received frame, and accepts the frame if the
GEM port ID of the frame is the same as its own GEM port ID or if the GEM port ID is a multicast
GEM port ID (4095 by default, and configurable from 4000 to 4095). Otherwise, the ONU
discards the frame. The authentication process applies to the ONU that is pre-configured on the
OLT. In the case of an ONU that is not pre-configured on the OLT, see the processing flowchart
as shown in Figure 1-13.
Figure 1-13 Flowchart of registering an ONU that is not pre-configured on the OLT
(ONU-OLT message exchange: the OLT finds that the SN is not configured and assigns a temporary ONU ID in the O3 state; ranging request/response messages are exchanged and the ranging time is delivered in the O4 state; the OLT then requests and receives the password.)
The registration and going-online processes of a pre-configured ONU and the registration process of an ONU that is not pre-configured are as follows.
After receiving the SN response message of the ONU, the OLT checks whether it is a pre-configured ONU. If it is a pre-configured ONU, the OLT checks whether an online ONU with an identical SN exists on the OLT. If such an ONU exists, the OLT reports an SN conflict alarm to the CLI and to the U2000. If such an ONU is not detected, the OLT directly allocates the user-defined ONU ID to the ONU.

After the ONU enters the operation state, if the ONU adopts SN authentication, the OLT does not send the Request Password message to this ONU. Instead, the OLT directly configures the GEM port for the ONU for carrying OMCI messages, and allows the ONU to go online. The GEM port can be automatically configured by the OLT so that the OMCI-carrying GEM port has the same ID as the ONU ID. In addition, the OLT reports an ONU online alarm to the CLI or the U2000.

If the ONU adopts SN+password authentication, the OLT sends the Request Password message to the ONU, and compares the password reported by the ONU with the password configured on the OLT. If the compared passwords are the same, the OLT checks whether an online ONU authenticated by the same SN+password exists on the OLT. If such an ONU exists, the OLT reports a password conflict alarm to the CLI or the U2000. If such an ONU is not detected, the OLT directly configures a GEM port for the ONU for carrying OMCI messages, and allows the ONU to go online. In addition, the OLT reports an ONU online alarm to the CLI or the U2000. If the compared passwords are not the same, the OLT does not report the auto-found ONU even if the ONU auto-find function is enabled on the PON port. In this case, the OLT sends the Deactivate_ONU-ID PLOAM message to deregister the ONU.
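The OLT-side decision described above can be summarized in a short sketch. The data structures, names, and action strings are illustrative assumptions that mirror the prose, not an actual OLT implementation.

```python
# Sketch of the OLT-side SN / SN+password authentication decision.
def authenticate(onu, configured, online):
    """onu: dict with 'sn' and 'password' reported by the ONU.
    configured: {sn: {'mode': 'sn' | 'sn+password', 'password': ...}}.
    online: set of (sn, password) pairs of ONUs already online.
    Returns an action string mirroring the documented behaviour."""
    entry = configured.get(onu['sn'])
    if entry is None:
        return 'not-preconfigured'      # handled per the Figure 1-13 flow
    if entry['mode'] == 'sn':
        if any(sn == onu['sn'] for sn, _ in online):
            return 'sn-conflict-alarm'  # reported to the CLI and the U2000
        return 'allow-online'           # OMCI GEM port set up, no password request
    # SN+password mode: compare reported and configured passwords.
    if onu['password'] != entry['password']:
        return 'deactivate'             # Deactivate_ONU-ID PLOAM message
    if (onu['sn'], onu['password']) in online:
        return 'password-conflict-alarm'
    return 'allow-online'
```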
l Password authentication
Password authentication includes two modes: always-on and once-on. An ONU that adopts
password authentication is added to the OLT in advance. Then, when this ONU is connected
to the PON port on the OLT, before password authentication, the ONU undergoes the same
process as that for an ONU that is not pre-configured on the OLT.
If the once-on mode is selected, the aging-time can be selected (the aging-time can be
set to 1 to 168 hours). After the aging-time is set, the ONU must register with the OLT
and go online within the preset aging time. Otherwise, the ONU is not allowed to register
with the OLT or go online. If the always-on mode is used, the ONU can register with
the OLT and go online at any time.
In the once-on authentication mode, the ONU must be authenticated within the specified
duration and will fail to be authenticated when the time expires. In addition, once the
ONU is authenticated, its SN must not be changed. In other words, in the once-on mode,
only the initial authentication of the ONU is by password; in subsequent authentications,
the ONU is actually authenticated by SN+password. The scenario of the once-on mode
is one in which the carrier allocates a password to the user, and the user must go online
within the specified duration. After going online, the user must not change the ONU.
To change the ONU, the user needs to notify the carrier of this requirement.
In the always-on mode, there is no restriction on the time when the user goes online.
An ONU is authenticated by password when it goes online for the first time. After the
ONU passes the password authentication and goes online successfully, the OLT
generates an SN+password entry according to the SN and password of the ONU. If it
is not the first time that an ONU goes online, and if the SN and password of the ONU
are the same as the SN and password of the ONU that successfully goes online for the
first time, the ONU is authenticated by SN+password. If the user needs to replace the
ONU with an ONU that has the same password but a different SN, the ONU after the
replacement will be authenticated by password. After this ONU passes authentication
and goes online successfully, the original SN+password entry is updated. Therefore, in
the always-on mode, the ONU can go online at any time if its password is correct. The
scenario of the always-on mode is one in which the carrier allocates a password to the
user, and the user can use different ONUs with different SNs, as long as the user uses
the same password. As such, the user can change the ONU without informing the carrier.
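The always-on bookkeeping described above can be sketched as follows: the first login is by password, subsequent logins by SN+password, and replacing the ONU (same password, new SN) refreshes the stored entry. The class and method names are assumptions for illustration.

```python
# Minimal sketch of always-on password authentication on the OLT side.
class AlwaysOnAuth:
    def __init__(self, allocated_password):
        self.password = allocated_password  # password allocated by the carrier
        self.entry = None                   # (sn, password) entry after first login

    def login(self, sn, password):
        if password != self.password:
            return 'reject'
        if self.entry is None or self.entry[0] != sn:
            # First login, or the ONU was replaced with one having a new
            # SN: authenticate by password and update the stored entry.
            self.entry = (sn, password)
            return 'password-auth'
        return 'sn+password-auth'           # same ONU returning
```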
When an ONU is authenticated by password, if the board software finds that the SN or
password of the ONU conflicts with that of an online ONU, the OLT deregisters the new
ONU and reports an SN conflict alarm or a password conflict alarm to the CLI and the
U2000. This does not affect the online ONU. For details on how to process the ONU that
fails to pass the password authentication, see the flowchart of registering an ONU that is
not pre-configured on the OLT.
When an ONU is authenticated in the once-on mode, after the configuration of the GPON
board recovers, the board software starts the registration timeout timer. If the GPON board
is reset before the ONU registration times out, the ONU registration timeout time will be
reset to 0 and counting will start again. Before the registration of the ONU times out or
before the ONU successfully registers with the OLT for the first time, the ONU discovery
status is ON. In addition, only the ONU whose discovery status is ON is allowed to register
with the OLT and go online. After the registration of the ONU times out or after the ONU
successfully registers with the OLT for the first time, the OLT sets the discovery status of
the ONU to OFF. The ONU whose registration times out is not allowed to register with the
OLT or go online. In this case, the registration timeout flag of the ONU needs to be reset
at the CO, and then the ONU can go online. An ONU that successfully registers for the first
time is allowed to register and go online again. If the ONU registration times out, the board
reports an alarm to the CLI and the U2000. In addition, the discovery status of the ONU
supports the active/standby switchover and configuration recovery.
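The once-on discovery window above can be modelled as a small state machine: registration is allowed only while the discovery status is ON, the status turns OFF on timeout or on first successful registration, and only a reset at the CO re-opens the window. Names are illustrative assumptions.

```python
# Sketch of the once-on registration window and discovery status.
class OnceOnWindow:
    def __init__(self):
        self.discovery_on = True   # discovery status starts ON
        self.registered = False

    def on_timeout(self):
        """Registration timeout timer expiry (an alarm is also reported)."""
        if not self.registered:
            self.discovery_on = False

    def try_register(self):
        if self.registered:
            return 'online'        # re-registration allowed after first success
        if not self.discovery_on:
            return 'denied'        # timed out: flag must be reset at the CO
        self.registered = True
        self.discovery_on = False  # window closes on first successful login
        return 'online'

    def reset_at_co(self):
        """Resetting the registration timeout flag re-opens the window."""
        self.discovery_on = True
```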
NOTE
The PLOAM protocol is defined in ITU-T G.984.3 and is used for maintenance and management at the physical
layer.
OMCI is a master-slave management protocol. The OLT is the master device and the ONU is the slave device.
The OLT controls multiple downstream ONUs through OMCI channels.
There are the following two methods of detecting the continuous-mode ONU:
l ONU auto-detection
l OLT detection
GPON uses time division multiplexing (TDM) mechanism in the upstream direction. Every
ONU sends data upstream to the OLT at its own timeslot allocated by the OLT. If an ONU sends
optical signals at other ONUs' timeslots, the optical signals of the ONU will conflict with those
sent by other ONUs. As a result, communication of a certain other ONU or all the ONUs is
affected. Such an ONU that sends optical signals upstream not at its allocated timeslot is called
a rogue ONU.
There are two types of rogue ONUs: random-mode ONU and continuous-mode ONU. The
random-mode ONU is generally detected in the ONU auto-detection mode, and the continuous-
mode ONU can be detected by means of ONU auto-detection or OLT detection.
When an ONU detects that it is working in the continuous mode, the ONU isolates itself
automatically.
After detecting that an ONU is working in the continuous mode, the OLT sends proper
commands to isolate the ONU.
(Flowchart: if no optical signal is received in the blank window, the OLT deletes the error alarm; otherwise, the OLT checks the ONUs one by one until the check is completed and the faulty ONU is identified.)
1. The OLT opens a blank window in the GPON upstream direction every five seconds to
detect upstream optical signals sent by ONUs. At this moment, the OLT starts the rogue
ONU detection process if still receiving upstream optical signals. If the OLT does not
receive any upstream optical signals, it indicates that no rogue ONU exists in the system
or that the previously reported alarm is an error.
2. In the rogue ONU detection process, the OLT broadcasts messages to all the ONUs
connected to a PON port to disable the optical transceivers of the ONUs, that is, to instruct
them not to send upstream optical signals. Then, the OLT opens a window to detect
upstream optical signals again. If the OLT still receives upstream optical signals sent by
ONUs, it indicates that a third-party ONU is connected to the PON port and that this ONU
does not respond to the instruction issued by the OLT. In this case, the OLT enters the
special processing state and clears the alarm. If the OLT does not receive any upstream
optical signals, it starts to check the ONUs one by one.
3. The OLT issues proper commands to the ONUs, instructing their optical transceivers to
send upstream optical signals one by one. In this way, the OLT checks whether it can receive
upstream optical signals and whether the other ONUs go offline after an ONU starts to send
optical signals. If the other ONUs all go offline after an ONU starts to send optical signals,
it indicates that the ONU is a continuous-mode ONU, that is, a rogue ONU. Continuous-mode
ONU detection is performed on all the ONUs connected to the PON port. This ensures
that all rogue ONUs are detected.
4. After spotting a rogue ONU, the OLT issues proper commands to disable the optical
transceiver of the ONU so that the ONU does not send upstream optical signals. If the
optical transceiver of an ONU is disabled by the OLT, the ONU cannot send upstream
optical signals permanently (even after the ONU is reset or is restarted after power-off) until
the OLT issues proper commands to enable the ONU to resume sending upstream optical
signals. This mechanism ensures that rogue ONUs are thoroughly isolated.
5. Troubleshoot the faulty ONU.
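Steps 2 to 4 above can be sketched as a detection loop: silence all ONUs, then let them transmit one by one and watch whether the others drop offline. `pon` is a hypothetical interface object; its method names are assumptions standing in for the OLT commands the text describes.

```python
# Sketch of the continuous-mode rogue ONU detection loop.
def find_rogue_onus(pon, onu_ids):
    pon.broadcast_tx_disable(onu_ids)      # step 2: silence every ONU
    if pon.light_detected():
        return 'third-party-onu'           # an ONU ignored the instruction
    rogues = []
    for onu in onu_ids:                    # step 3: check ONUs one by one
        pon.tx_enable(onu)
        if pon.light_detected() and pon.others_offline(onu):
            rogues.append(onu)             # continuous-mode (rogue) ONU found
            pon.tx_disable_persistent(onu) # step 4: isolation survives reset/power-off
        else:
            pon.tx_disable(onu)
    return rogues
```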
NOTE
If the ONU nearest to the PON port of the OLT is working in the continuous mode, the other ONUs connected
to the PON port will go offline.
If an ONU relatively far away from the PON port of the OLT is working in the continuous mode, the other ONUs
that have slightly weaker optical signal strength than the ONU will go offline.
(Figure: FTTM and FTTH network applications, showing a BTS and an enterprise HQ connected over E1, FE/GE, and an SDH/metro network through the ONU and ONT, with the network managed by the iManager U2000.)
GPON Protection
In GPON applications, type B, type B dual-homing, and type C protections can be implemented,
as shown in Figure 1-16, Figure 1-17, and Figure 1-18 respectively.
(Figures 1-16, 1-17, and 1-18: type B protection provides active/standby backbone optical fiber protection between the optical splitter and the OLT; type B dual-homing adds a standby OLT (OLT2); type C protection additionally provides tributary optical fiber protection with duplicated optical splitters.)
Type C protection protects the OLT, ONUs, and optical splitters. It is similar to a ring network
protection.
2 EPON Access
The EPON protocol is an extension of the Ethernet basic MAC protocol and an EPON device
can easily integrate or interoperate with other Ethernet ports or devices. This helps to reduce the
cost of the access system or access network.
2.1.1 Introduction
Definition
As one of the multiple passive optical network (PON) technologies, Ethernet Passive Optical
Network (EPON) is defined and standardized by the IEEE, and is gradually becoming an
important supporting technology of the next-generation optical access network. The EPON
protocol is an extension of the Ethernet basic MAC protocol and an EPON device can easily
integrate or interoperate with other Ethernet ports or devices. This helps to reduce the cost of
the access network.
An EPON network uses one optical fiber and two wavelengths to transmit the bi-directional
digital signals at a rate of 1.25 Gbit/s. In the upstream direction, the 1310 nm wavelength window
is used and in the downstream direction, the 1490 nm wavelength window is used. An EPON
network is composed of the optical line terminal (OLT), optical network unit (ONU) and optical
distribution network (ODN). The physical EPON topology is a P2MP tree network and the
logical EPON network consists of multiple P2P links from the OLT to the ONUs.
Purpose
As one of PON technologies, EPON has the common features such as high bandwidth, long
transmission distance, flexible networking, and passive ODN nodes. When EPON is applied to
a broadband access network, it can increase network bandwidth, improve network performance,
and reduce the maintenance cost. Hence, mainstream carriers prefer EPON as the next generation
optical access technology.
In 2005, China Telecom initiated the draft of the EPON interoperation standard. In 2007, the
China Telecom Technical Requirements on an EPON Device was completed. At the same time, China
Telecom led the drafting of the standard for the China telecommunications industry association,
namely, Technical Requirements of the Access Network--Interoperation Requirements of an
EPON System. The technical requirements in these two standards are basically the same.
The massive application of EPON in China, Japan, and Korea indicates that EPON is one of the
mainstream technologies applying to the next generation optical access network. The
MA5600T supports the EPON access feature. Therefore, the MA5600T can provide a complete
access solution for carriers.
2.1.2 Specifications
The MA5600T supports the following EPON access specifications:
l A maximum optical split ratio of 1:64. In this way, 64 ONUs can simultaneously connect
to one EPON port.
l Upstream and downstream forward error correction (FEC). The FEC function can be
enabled or disabled by configuring the ONT.
l Downstream encryption. The encryption algorithm can be CTC triple-churning, with
periodical update of the key based on the configurable update time.
l Upstream dynamic bandwidth allocation (DBA). You can set the parameters such as
assured bandwidth, maximum bandwidth, and fixed bandwidth in the DBA profile, and can
bind the DBA profile to an ONT.
l Pre-configuration of the ONT, authentication of the ONT by MAC address, and manual
configuration of the description of the ONT.
l Automatic discovery and manual confirmation of the MAC address of the ONT.
l Remote reset of the ONT.
l The configuration of an offline ONT can be resumed when the ONT registers with the OLT
and gets online, and the configuration resume strategy can be set.
l The ONT capability set profile that describes the parameters such as the ONT port type
and the number of ONT ports. Up to 4096 ONT service profiles and 4096 ONT line profiles
are supported.
l Display of the vendor and version of the chip, the port type, and the capability parameters
such as the number of the ports, and upstream and downstream queues of a remote ONT.
l Enable switch, traffic control switch, auto-negotiation switch, multicast tag selection,
policing parameter, and traffic classification rule of the ONT Ethernet port to implement
the remote management of the ONTs.
l Setting of the VLAN mode and VLAN entry for the ONT port to implement service
provisioning.
l Display of all configurations of remote ONTs.
l Software upgrade of remote ONTs.
l Collection of all performance statistics of the EPON OLT and ONTs to facilitate the
network maintenance and fault diagnosis.
l Fault diagnosis for the EPON network, including detection of all types of alarms and
loopback tests, and also processing of the dying gasp alarm.
l Protection for the PON port.
l Authentication by key.
l POTS port management.
l PRF port management.
l Flexible QinQ VLAN whose outer VLAN is marked by the service type.
l EPON MDU authentication by key.
l ONU-based LOID authentication
l ONU-based LOID+password authentication
2.1.4 Availability
Version Support
2.1.5 Principle
EPON Topology and Basic EPON Principle
The EPON standard is one of several TDM-PON standards and an EPON network has the basic
features of a TDM-PON network. That is, an EPON network is a tree topology composed of the
OLT, ONU, and optical distribution network (ODN), where the ODN is composed of passive
optical components such as trunk optical fibers, optical splitters, and branch optical fibers.
Figure 2-1 shows the physical topology of an EPON network.
(Figure 2-1: the OLT connects through an optical splitter to multiple ONTs/ONUs, using the 1490 nm wavelength downstream and the 1310 nm wavelength upstream.)
In the logical topology, the EPON protocol sets up a logical link from the OLT to each ONU.
The leading bytes of the Ethernet data frame carry the logical link identity (LLID).
The downstream data streams from the OLT to the ONUs are encapsulated as the Ethernet
packets, added with the mapping LLID, and then transmitted in the tree PON network. Optical
splitters in the ODN broadcast data streams to every branch and then all the ONUs can receive
the downstream Ethernet data frames. After that, the ONUs select their data by the LLID.
In the upstream direction from the ONU to the OLT, each ONU uses the TDM mechanism to
share the upstream bandwidth. The OLT allocates the upstream time slots to each ONU through
the Multi-point Control Protocol (MPCP) packet and dynamically assigns the bandwidth
according to the bandwidth request and link conditions to improve the network utilization.
Through MPCP, EPON defines a series of mechanisms in which the ONUs register with the
OLT, the OLT assigns time slots to the ONUs for authentication, and the ONUs report bandwidth
requests to the OLT. This helps to form an effective and clear TDM-PON model and enables
the EPON access technology to be widely used.
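The downstream selection described above can be sketched as follows: the splitter broadcasts every frame, and each ONU keeps only frames whose LLID matches its own logical link or the broadcast LLID. The frame model is a simplification of the 802.3ah format, not an implementation of it.

```python
# Sketch of ONU-side downstream filtering by LLID.
BROADCAST_LLID = 0x7FFF  # broadcast LLID (15-bit all-ones) in 802.3ah

def onu_receive(frames, my_llid):
    """frames: list of (llid, payload) tuples seen on the branch fiber.
    Returns only the payloads addressed to this ONU's logical link."""
    return [payload for llid, payload in frames
            if llid == my_llid or llid == BROADCAST_LLID]
```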
(Figure: EPON layered model — the data link layer connects through the GMII to the physical layer, whose PHY comprises the PCS, PMA, and PMD sublayers, down to the MDI and the passive optical network medium.)
The maintenance mode of an EPON access network is different from that of an xDSL access network:
the OLT configures and manages the services of the ONUs, so that multiple services
can share the access channel.
The MA5600T configures and manages the remote ONUs through the extended Ethernet OAM
protocol, and can interconnect with other ONUs that comply with the standard. The
MA5600T is responsible for the storage of the configuration data of every ONU. After an ONU
gets online, the MA5600T configures the data for it. In this way, you do not have to configure
the ONUs locally. This facilitates service provisioning.
The MA5600T can perform the sorting management on the ONUs through the ONU profile and
then the ONUs can be pre-configured offline for service provisioning. The bandwidth control
of the ONUs can also be implemented through the DBA profile. The OLT performs bandwidth
assignment and scheduling for each ONT through its EPON MAC chip.
The MA5600T authenticates an ONU by its pre-configured MAC address. That is, an ONU can
register with the OLT and use the services successfully if its MAC address matches the one
configured on the OLT. Otherwise, the ONU registration with the OLT is rejected. In this way,
network security is guaranteed. In addition, the MA5600T supports the automatic discovery of
the ONUs and recording of the MAC addresses of the rejected ONUs, which are allowed to
connect to the EPON network after being confirmed by an operator.
Glossary
None
2.2.1 Introduction
Definition
The EPON terminal management feature refers to the service configuration and management of
the EPON ONT on the EPON OLT through the extended OAM protocol. Through the CLI of
the OLT or the NMS, an operator can perform operations such as configuring the port attribute
and configuring the port VLAN on the ONT.
The CTC OAM protocol is an ONT management and interface control protocol drafted by China
Telecom. The protocol defines the format of the exchanged messages and the mechanism for
exchanging messages between the OLT and the ONT.
The MA5600T manages and configures the ONT through the OAM protocol, and supports the
configuration of offline ONTs and automatic configuration resume when the ONTs get online.
With this mechanism, the ONTs do not need to save the configuration. This helps service
provisioning and terminal maintenance.
Purpose
The EPON terminal management feature is mainly applicable to automatic provisioning of
EPON services and maintenance of EPON ONTs. The MA5600T supports configuration and
management of the following services of the ONTs through the OAM protocol:
l Data service
l Voice service
l Video service
For an operator, the ONT is like a remote module of the OLT. In this management mode, the
OLT and the ONT are integrated into one network entity, and are reflected as a network element
on the NMS. This facilitates service provisioning, maintenance, and management.
2.2.2 Specifications
The MA5600T supports the following EPON terminal management specifications:
l The system supports up to 4096 ONT service profiles and 4096 ONT line profiles.
l The ONT profile name is a character string consisting of up to 32 characters.
l The ONT profile supports up to 8 POTS ports.
l The ONT profile supports up to 8 TDM ports.
l The ONT profile supports up to 8 ETH ports.
l Each Ethernet port on an ONT can be added to 8 VLANs.
l For an ONT, the multicast mode can be set to the CTC mode, the snooping mode, the
transparent transmission mode, or the global default mode on the OLT. The transparent
transmission mode cannot be used as the global default mode.
l The system supports the ONT software upgrade and loading process defined in CTC2.1
and supports taking effect immediately or upon next startup.
l The system supports adaptation between CTC2.1 and CTC2.0.
2.2.4 Availability
License Support
The number of remote ONT ports supported by the MA5600T is under license. Therefore, the
license is required for accessing the corresponding service.
Version Support
Hardware Support
The EPBD board supports the feature of EPON terminal management.
2.2.5 Principle
Introduction to the OAM Protocol
The EPON terminal management feature refers to the management of the EPON ONT on the
EPON OLT through the extended OAM protocol. The EPON system supports the OAM function
defined in IEEE802.3-2005 clause 57 and the management objects, attributes and operations
defined in IEEE802.3-2005 clause 30. The OAM protocol defines the packet format, the
acknowledgment and retransmission mechanism for the messages between the OLT and the
ONT. In this way, the OAM provides a logical channel for communications.
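The acknowledgment-and-retransmission mechanism mentioned above can be sketched in a few lines. The retry count and the `channel.send()` interface are assumptions for illustration, not the CTC OAM specification.

```python
# Hedged sketch of send-with-retransmission over a logical OAM channel.
def send_with_retransmit(channel, message, max_retries=3):
    """channel.send() returns True when the peer acknowledges the message.
    Retransmits until acknowledged or the retry budget is exhausted."""
    for attempt in range(1, max_retries + 1):
        if channel.send(message):
            return attempt                  # number of transmissions used
    raise TimeoutError('no acknowledgment from peer')
```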
In addition, the OAM protocol abstracts the ONT service module, and defines a large number
of management entities to describe the inner structure, external features, and capability of an
ONT.
l The correlation between these management entities forms a complicated topology model,
indicating the directions of data flow and control traffic in the ONT.
l Each management entity maps an internal hardware or software module of the ONT and
has specific data members representing the attribute parameters of the module.
l Data members have different access attributes: the read-only attribute parameters indicate
the inherent features of the module, while the OLT can set the writable attribute parameters.
Mechanism for Exchanging Messages Between the OLT and the ONT
After startup, the ONT automatically generates various management entities that map the
modules of the ONT. The set of these entities is also called management information base (MIB).
By obtaining the MIB of every ONT, the OLT learns the type and capability set of the ONTs.
This helps an operator to learn the details of the ONTs and to configure services.
An operator can configure a specified ONT through the NMS or CLI of the OLT. For example,
an operator can set the native VLAN of the Ethernet port on the ONT or add an Ethernet port
on the ONT to a specified VLAN. After the OLT resolves these configuration commands, the
generated OAM messages are used for configuring the attributes of the management entities in
the ONT. In this way, a specific topology is formed.
The ONT attributes can be set on the MA5600T in the OAM mode and the ONT status can be
reported to the MA5600T.
l The terminal configuration and management information of the MA5600T is delivered to
the ONT through the OAM channel.
l The status and the alarm information of the ONTs are reported to the MA5600T through
the OAM channel.
Figure 2-3 shows the mechanism for exchanging messages between the OLT and the ONT
through the OAM protocol.
Figure 2-3 Mechanism for exchanging messages between the OLT and the ONT
(The ONT reports Serial_Number_ONT and PLOAM information to the OLT; the OLT then delivers configuration and management information to the ONT.)
The procedure for setting up the OAM channel between the MA5600T and the ONT is as follows:
1. After the ONT gets online, the ONT and the MA5600T exchange the OAM message and
then complete the registration.
2. Through the OAM channel, the MA5600T sends the configuration and management
messages to the ONT.
3. Through the OAM channel, the ONT reports the status and the alarm information to the
MA5600T.
Table 2-4 Acronyms and abbreviations of the EPON terminal management feature
The OPGD and ETHB boards support P2P optical access and implement GE optical access.
The following description focuses on the applications of the OPGD board.
3.1 Introduction
3.2 Specifications
3.3 Reference Standards and Protocols
3.4 Availability
3.5 Network Applications
3.1 Introduction
Definition
GE point-to-point (P2P) Ethernet optical access is a mode in which P2P Ethernet optical access
boards provide GE ports and coordinate with downstream devices to implement various optical
access solutions for users. The solutions include FTTC/FTTB, FTTH, FTTO, and FTTM.
Purpose
P2P optical access boards prior to OPGD include ETHB. The following table lists the ports
provided and scenarios supported by each board. Compared with other P2P optical access boards,
the OPGD board features more advantages for the access and the subtending scenarios.
The OPGD board provides GE P2P Ethernet optical access for more flexible FTTx solutions at
higher bandwidth, lower costs, and higher reliability.
l Higher bandwidth. Traditional FE P2P optical access provides only a 100 Mbit/s
transmission rate, but GE P2P optical access allows for 1000 Mbit/s. The FTTH solution
implemented through GE P2P optical access can provide a higher bandwidth for users, thus
meeting the requirements of high-end users.
l Lower costs. Compared with the ETHB board, which is capable of both upstream transmission and
subtending, the OPGD board is specially designed for subtending and access scenarios.
The OPGD board provides 48 GE ports, so it can be subtended to more DSLAMs and hence
reduces the costs of FTTC/FTTB networking.
l Higher reliability. The OPGD board allows a higher reliability in the DSLAM subtending
scenario through features such as inter-board aggregation, smart link, and ring check.
l More flexible scenarios. The OPGD board coordinates with a variety of downstream
devices (such as the DSLAM, ONT, SBU, and CBU) to implement FTTC/FTTB, FTTH,
FTTO, and FTTM. An MA5600T configured with the OPGD board can not only be directly
connected to access terminals but also subtend DSLAMs in order to converge a large
number of users.
Benefit
Benefits to carriers
One MA5600T can support multi-access such as GPON and P2P. Such an All-in-one solution
reduces the equipment CAPEX as well as OPEX for carriers.
Benefits to users
Because the OPGD board can provide high-density GE ports for subtending DSLAMs, which
converge a massive number of users, end-to-end service guarantees can be provided for
VIP household and enterprise users at lower costs. In residential communities where optical fibers are already
deployed, a 1000 Mbit/s bandwidth can be provided for high-end users exclusively, meeting the
user needs for HD video, voice, and data integrated services.
3.2 Specifications
The OPGD board supports two application scenarios: access and subtending.
l In the access scenario, the OPGD board is connected to the ONU to implement FTTH.
l In the subtending scenario, the OPGD board is connected to the DSLAM, CBU, or SBU
to implement FTTC/FTTB, FTTO, or FTTM respectively.
l The two application scenarios cannot be implemented on the same OPGD board at the same
time but can be implemented on different OPGD boards at the same time on the same OLT.
To be specific, FTTH and other FTTx services such as FTTC cannot run on the same OPGD
board at the same time, but FTTC/FTTB, FTTO, and FTTM services can run on the same
OPGD board at the same time. FTTH and other FTTx services such as FTTC can run in
the same OLT system at the same time.
l The scenarios can be switched by running the network-role command. By default, the
OPGD board in the system runs in the access scenario.
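The per-board scenario rule above can be captured in a small validity check: FTTH (the access scenario) and the subtending FTTx services cannot share one OPGD board, though they can coexist on different boards in the same OLT. The names below are illustrative.

```python
# Sketch of the OPGD per-board scenario constraint.
ACCESS = {'FTTH'}
SUBTENDING = {'FTTC', 'FTTB', 'FTTO', 'FTTM'}

def board_config_valid(services):
    """services: set of FTTx services configured on one OPGD board.
    A board may run access services or subtending services, not both."""
    return not (services & ACCESS and services & SUBTENDING)
```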
The OPGD board supports different functions when running in the access scenario and
subtending scenario.
l In the IPoE mode, a static IP address is directly specified for a user, and user packets are
IPoE-encapsulated and sent to the access network.
l In the PPPoE mode, the OPGD board supports the PPPoE+ protocol, single-MAC mode,
and multi-MAC mode.
l In the DHCP mode, the OPGD board supports L2 forwarding, L3 forwarding, DHCP proxy,
and DHCP option 82.
l In the 802.1x mode, the OPGD board supports re-authentication, keep-alive handshake,
quiet period, RFC 4014, EAP trunk and termination, 802.1x packet statistics collection,
user traffic real-time statistics measurement, and RADIUS real-time accounting.
l The OPGD board supports logging of the last 1000 going online/offline events of DHCP
and PPPoE users. The 1000 log entries can be shared systemwide.
l Anti-DoS attack. The OPGD board limits the number of upstream protocol packets from
users based on port to prevent users from attacking the network by DoS.
l Anti-MAC spoofing. The OPGD board limits the number of MAC addresses that a user
can change within a short time.
l MAC address filter. The OPGD board limits the user packets carrying specified MAC
addresses.
l VMAC. The OPGD board replaces untrusted user MAC addresses with trusted ones by
means of 1:1 VMAC or N:1 VMAC.
l Anti-IP spoofing. The OPGD board limits the number of IP addresses that a user can change
within a short time.
l IP address filter. The OPGD board permits or denies a user the access to the device
according to the user IP address.
l Anti-ICMP attack. The OPGD board prevents users from attacking the network with ICMP
packets.
l Anti-IP attack. The OPGD board prevents users from attacking the network with IP packets.
l IP binding based on stream. The OPGD board supports 2K service streams for IP binding.
l Supports aggregation of the OPGD boards in adjacent slots according to the following rules:
On the MA5600T, the ID of the slot for a service board starts from 1. Therefore, two
OPGD boards in slots 1-2, 3-4, or 5-6, ... can be aggregated.
l Supports aggregation groups. Multiple user ports can be added to an aggregation group.
Each OPGD board supports up to 48 aggregation groups.
l Supports inter-board aggregation. User ports on aggregated OPGD boards can be added to
the same aggregation group.
l Supports static LACP.
l Supports protect group, and supports inter-board protect group (including 1:1 protect
group) for the ports on boards of the same type.
l Supports STP and MSTP.
l Supports ring check. This feature prevents broadcast packets from generating a storm in a
ring network.
l Supports smart link and monitor link.
Supports synchronous Ethernet clock; does not support IEEE1588 V2 recovered clock.
Supports transparent transmission of the following protocol packets when the packets are not
QinQ-encapsulated: BPDU, OSPF, RIP, VTP-CDP, ARP, IGMP, VBAS, PPPoE+, BGP, NTP,
PIM, MPLS, ETHOAM, and LDP.
Supports the following types of traffic streams on the multicast subtending port:
l Port+VLAN traffic streams
The following functions are supported in both the access and the subtending scenarios. Unless specified
otherwise, the same function has the same specifications in both scenarios.
Each OPGD board supports 48 GE optical ports, providing 48 one-fiber bidirectional 1GE
physical links or 24 two-fiber bidirectional 1GE physical links. Ports are numbered in different
manners in the one-fiber mode and in the two-fiber mode. For details on the numbering methods,
see "OPGD Board" in the Hardware Description.
The OPGD board supports a 20 Gbit/s forwarding capacity. When the control boards work in
the active/standby mode, the OPGD board supports an upstream bandwidth of 10 Gbit/s; when
the control boards work in the load balancing mode, the OPGD board supports an upstream
bandwidth of 20 Gbit/s.
The OPGD board supports the IPoE, PPPoE, and 2000-byte super-long frame encapsulation formats for interface data, and does not support the IPoA, PPPoA, or over-2000-byte jumbo-frame encapsulation formats.
The OPGD board supports the following specifications for the traffic classification feature.
Translate-and-delete (S+C <--> C'). The CVLAN of a packet is translated and the
SVLAN of the packet is deleted.
l Supports the following VLAN forwarding modes:
VLAN+MAC: Identifies the target port according to the SVLAN and DMAC of a
packet.
SVLAN+CVLAN: Identifies the target port according to the SVLAN and CVLAN of
a packet.
NOTE
The VLAN+MAC and SVLAN+CVLAN forwarding modes take effect only on switch-oriented service
streams. In the case of connection-oriented service streams, the target egress port is identified according
to the stream information.
l Supports inner tag check on downstream broadcast packets.
l Supports configuration of bridging based on VLAN. The bridging between user ports of
the OPGD board is implemented through the control board. Users of the OPGD board
cannot be bridged directly.
l Supports an isolation switch for configuring the isolation status of the ports on the OPGD
board. By default, the ports are isolated from each other. The isolated ports cannot be
bridged directly.
l CAR specifications:
Supports single rate three color marker (srTCM) and two rate three color marker
(trTCM).
Colors packets according to CAR results.
Supports stream-based CAR and port+CoS-based CAR (only in the access scenario).
l Supports weighted random early detection (WRED) and early drop.
l Supports PQ, WRR, and PQ+WRR queue scheduling; supports eight queues on each user port.
l Supports line rate (only in the subtending scenario) to implement rate limitation on egress
port and ingress port.
l Supports IP traffic profile and inner and outer priority mapping.
l Supports queue shaping (only in the access scenario).
l Supports basic ACL, advanced ACL, link ACL, and user-defined ACL.
l Supports rate limitation, priority adjustment and statistics collection, and traffic suppression
on broadcast, unknown multicast, and unknown unicast packets.
3.4 Availability
Relevant NE
Implementing GE P2P Ethernet optical access requires the coordination between the OLT and
ONUs. ONUs include ONT, DSLAM, CBU, and SBU.
License Support
GE P2P optical access is a basic feature of the MA5600T. Therefore, the corresponding service
is provided without a license.
Version Support
Hardware Support
The ONU must support upstream transmission through GE.
Figure 3-1 Network application in the GE P2P Ethernet optical access mode
To meet the requirements of different scenarios, the OLT works with ONUs of various types to
implement network applications in multiple optical access modes, such as FTTC/FTTB, FTTH,
FTTO, and FTTM.
The FTTx network applications in GE P2P Ethernet optical access have the following in
common: The data, voice, and video signals of terminal users are sent to ONUs, where the signals
are converted into Ethernet packets and then transmitted over optical fibers to the OLT through
the GE upstream ports of the ONUs. Then, the Ethernet packets are forwarded to the upper-layer
IP network through the upstream port of the OLT.
The differences of the FTTx network applications in GE P2P Ethernet optical access are as
follows:
l FTTH: The OLT is connected to the ONUs at user premises through GE P2P Ethernet
optical access. In this way, gigabit bandwidth is exclusively provided to each household.
FTTH is applicable to new apartments or villas in loose distribution. In this scenario, FTTH
provides services of higher bandwidth for high-end users.
l FTTB/FTTC: The OLT is connected to DSLAMs in corridors (FTTB) or by the curb
(FTTC) through GE P2P Ethernet optical access. The DSLAMs are then connected to user
terminals through xDSL. With the aggregation provided by the DSLAMs, one port on the
OPGD board can be connected to a large number of users. FTTB/FTTC is applicable to
Network Protection
FTTC/FTTB, FTTO, and FTTM, compared with FTTH, involve a larger number of access users.
Hence, network reliability must be ensured. The ONU provides dual upstream ports to implement
link redundancy backup. With the coordination of the ONU, the OPGD board on the OLT
supports the following link backup modes: inter-board aggregation, smart link, and monitor link.
Inter-board aggregation: Two upstream ports of the ONU are respectively connected to two
adjacent OPGD boards of the OLT. Dual upstream link aggregation is configured on the ONU,
and a protect group is configured on the OLT. Thus, 1:1 backup of GE links can be implemented
through inter-board aggregation. Figure 3-2 shows the network topology of the OLT subtending
the ONU to implement inter-board aggregation. For more details on the network application of
inter-board aggregation, see 11.4.6 Network Applications.
Smart link and monitor link: Two upstream ports of the ONU are respectively connected to the
OPGD board on two OLTs. Monitor link is configured on the OLTs, and smart link is configured
on the ONU. 1:1 GE link backup is implemented through a mode similar to type B dual homing
of GPON ports. Figure 3-3 shows the network topology of the OLTs subtending the ONUs to
implement smart link and monitor link. For more details on smart link and monitor link, see
11.3 Smart Link and Monitor Link.
4 MPLS
Multiprotocol Label Switching (MPLS) was introduced to improve the forwarding speed. However, because of its excellent performance in two critical technologies, traffic engineering (TE) and virtual private networks (VPNs), MPLS is becoming an important standard for extending the IP network.
4.1 Overview
4.2 Reference Standards and Protocols
4.3 Availability
4.4 MPLS
Multiprotocol Label Switching (MPLS) was introduced to improve the forwarding speed. However, because of its excellent performance in two critical technologies, traffic engineering (TE) and virtual private networks (VPNs), MPLS is becoming an important standard for extending the IP network. This topic provides the introduction, availability, principle, and references of the MPLS feature.
4.5 MPLS RSVP-TE
MPLS RSVP-TE is a technology that integrates TE and the MPLS superimposed model. It provides high quality of service (QoS) and TE capabilities for users by establishing LSPs based on TE. This topic provides an introduction to this feature and describes its principle and reference documents.
4.6 MPLS OAM
MPLS OAM uses a detection mechanism to check whether an LSP is in the normal state, and reports alarm information if the LSP fails. This topic provides an introduction to this feature and describes its principle and reference documents.
4.7 Glossary, Acronyms, and Abbreviations
This topic provides the glossary, acronyms, and abbreviations of the MPLS feature.
4.1 Overview
Multi-protocol Label Switching (MPLS) operates between the data link layer and the network layer in the TCP/IP protocol stack. A label of a short fixed length is used to encapsulate IP packets. On the data plane, fast label forwarding is implemented. On the control plane, MPLS can meet the requirements that various new applications impose on the network, with the help of the powerful and flexible routing functions of the IP network.
l Functioning as a P device
l Capability of 100 pps for processing LDP and RSVP packets when functioning as a P device
l MPLS label switching
l Penultimate hop popping (PHP)
l Query of LSP packet statistics by label
1. PWE3
l RFC3985: Pseudo Wire Emulation Edge-to-Edge (PWE3) Architecture
l RFC4447: Pseudowire Setup and Maintenance Using the Label Distribution Protocol
(LDP)
l RFC3916: Requirements for Pseudo-Wire Emulation Edge-to-Edge (PWE3)
l RFC4446: IANA Allocations for Pseudowire Edge to Edge Emulation (PWE3)
l RFC4717: Encapsulation Methods for Transport of Asynchronous Transfer Mode
(ATM) over MPLS Networks
4.3 Availability
License Support
The MPLS feature is an optional feature, and the corresponding service is controlled by the
license.
Version Support
Product Version
Feature Dependency
l The MA5600T does not support L3 VPN.
l One shelf supports up to two SPUB boards. It is recommended that you insert these two
boards into two adjacent slots and set them to work in the active-standby mode.
l The CR-LDP is not supported.
l Auto TE FRR is not supported.
l FDI is not supported.
l OAM for the external LSP that is set up by the LDP is not supported.
l The reverse channel of MPLS OAM must be a dedicated LSP; it cannot be a shared LSP or a non-MPLS channel.
l Facility Backup is supported, but one-to-one backup protection is not supported.
l The MA5600T can function as the label switching router (LSR).
l The load sharing among LDP LSPs is supported.
l The MA5600T can function as the P node on the network.
l VCCV detecting for the PW is supported.
Hardware Support
The SPUB board is required for this feature.
4.4 MPLS
Multiprotocol Label Switching (MPLS) was introduced to improve the forwarding speed. However, because of its excellent performance in two critical technologies, traffic engineering (TE) and virtual private networks (VPNs), MPLS is becoming an important standard for extending the IP network. This topic provides the introduction, availability, principle, and references of the MPLS feature.
4.4.1 Introduction
Definition
Basic MPLS features mainly refer to the MPLS Label Distribution Protocol (LDP) and LSP
management function.
LDP is a standard MPLS label distribution protocol defined by the IETF. It is mainly used by LSRs to negotiate label allocation so that label switched paths (LSPs) can be set up, and it regulates the messages and related processing used in the label distribution process. Based on the local forwarding table, which correlates the incoming label, next hop, and outgoing label of each specific FEC, the LSRs form an LSP that crosses the entire MPLS domain.
With the LSP management function, the MA5600T can manage and maintain the LSPs generated by various LDPs and can deliver them to the hardware forwarding module.
Purpose
MPLS was initially put forward to improve the forwarding speed of routers. Compared with the traditional IP routing mode, MPLS analyzes the IP packet header only at the edge of the network during data forwarding, rather than at each hop. This saves processing time.
With the development of ASIC technology, route lookup speed is no longer a bottleneck for network development. Thus, MPLS has no obvious advantage in forwarding speed. MPLS, however, is widely applied to virtual private networks (VPNs), traffic engineering, and quality of service (QoS) because it supports multi-layer labels and its forwarding plane is connection-oriented. Therefore, MPLS is becoming an increasingly important standard for expanding the scale of the IP network.
4.4.2 Specifications
MPLS can use multiple LDPs, including the following protocols:
l The protocols dedicated for label distribution, such as LDP and constraint-based routing
using LDP (CR-LDP)
l The extended label distribution protocols based on existing protocols, such as Border
Gateway Protocol (BGP) and Resource Reservation Protocol (RSVP)
The MA5600T supports the LDP and RSVP protocols and manual configuration of the static
LSP. The MA5600T does not support the CR-LDP protocol and the BGP label distribution
protocol.
NOTE
The MA5600T cannot use the BGP protocol to distribute labels; however, the MA5600T supports the BGP
routing protocol.
l Downstream unsolicited (DU) label distribution
l Ordered label control mode
l Liberal label retention mode
l Penultimate hop popping function, and implicit and explicit NULL labels
l Functioning as the LER and the transit LSR
4.4.3 Principle
Basic MPLS Concepts
l Forwarding equivalence class (FEC)
An FEC refers to a group of data streams that are forwarded by the LSR in the same manner. Theoretically, FECs can be classified according to the IP address, service type, or QoS. For example, in conventional IP forwarding using the longest match algorithm, all the packets routed to the same destination belong to one FEC. Currently, FECs are generally classified based on the address. The MA5600T supports only address-based FECs.
l Label
A label is a short, fixed-length, physically contiguous identifier that is used to identify an FEC; it is usually of local significance. In certain conditions, for example, when load sharing is required, one FEC may map to multiple labels. On one device, however, one label can represent only one FEC.
Label encapsulation is performed between the link layer and the network layer; therefore, labels can be supported by any link-layer protocol.
l Penultimate hop popping
On the last-hop node, the label no longer has any function. In this case, the label stack may be popped at the penultimate LSR of the LSP, rather than at the LSP egress, to reduce the load of the last-hop LSR. The last-hop LSR then directly forwards IP packets or the next-layer label. PHP is configured on the egress node.
l Label switching router (LSR)
An LSR, also called an MPLS node, is a network device which is capable of exchanging
and forwarding MPLS labels. LSRs are the basic elements in an MPLS network. All LSRs
support the MPLS protocol.
l Label edge router (LER)
An LSR on the edge of the MPLS domain is called the LER. If an LSR has a neighbor node
that does not run the MPLS protocol, the LSR is an LER.
The LER is responsible for classifying the packets that enter the MPLS domain into FECs and adding labels to these packets for forwarding in the MPLS domain. When the packets leave the MPLS domain, the LER pops the labels, restores the original packets, and forwards them accordingly.
l Label switched path (LSP)
The path that a packet in a particular FEC traverses in an MPLS network is called the LSP.
The LSP, similar to the ATM virtual circuit in function, is a unidirectional path from the
ingress to the egress.
l Label distribution protocol (LDP)
LDP, also called the signaling protocol, is the MPLS control protocol. LDP is responsible
for series of operations such as FEC classification, label distribution, and LSP establishment
and maintenance.
MPLS can use multiple label distribution protocols, such as the Label Distribution Protocol
(LDP) and Resource Reservation Protocol Traffic Engineering (RSVP-TE).
l Label distribution mode
In an MPLS system, the downstream LSR determines the label to be advertised to a specific
FEC, and then notifies the upstream LSR. That is, the label is specified by the downstream
LSR, and is advertised from the downstream LSR to the upstream LSR.
The label advertisement modes on the upstream and downstream LSRs with label
advertisement adjacencies must be the same. Otherwise, the LSP cannot be set up.
The two label advertisement modes are as follows:
Downstream unsolicited (DU) mode
In the DU mode, the LSR allocates labels to a specific FEC without asking for the label
request message from upstream LSRs.
Downstream on demand mode
In the DoD mode, the LSR allocates labels to a specific FEC only after obtaining the
label request message from upstream LSRs.
NOTE
The time at which a downstream LSR feeds back the label mapping information is determined by the label control mode used by the LSR.
l When an LSR supports the ordered label control mode, it sends the label mapping information
to the upstream LSR only when it receives the label mapping message returned by the downstream
LSR, or when it is the egress node of the FEC.
l When an LSR supports the independent label distribution control mode, it sends the label mapping
message to the upstream LSR regardless of whether it receives the label mapping message
returned by the downstream LSR.
l Label distribution control mode
The label distribution control mode is the mode used by the LSR to allocate labels during
the establishment of LSPs.
The two label distribution control modes are as follows:
Independent label distribution control mode
In the independent label distribution control mode, the local LSR can independently allocate a label to an FEC, bind the label to the FEC, and notify the upstream LSR of the label, without waiting for the label from the downstream LSR.
Ordered label control mode
In the ordered label control mode, the LSR can send the label mapping message of an
FEC to the upstream LSR only when the LSR has the label mapping message of the
next hop of the FEC, or when the LSR is the egress node of the FEC.
l Label retention mode
The label retention mode is the mode adopted by the LSR to process received label mapping messages that are temporarily not in use.
The two label retention modes are as follows:
Liberal retention mode
If an LSR supports the liberal retention mode, it maintains the label mapping received
from the neighbor LSR regardless of whether the neighbor LSR is its own next hop.
When the next hop neighbor changes due to the change of network topology, the LSR
that supports the liberal retention mode can use the label sent from the non next hop
neighbor to set up LSPs quickly. This, however, requires more memory and label space.
Conservative retention mode
If an LSR supports the conservative retention mode, it maintains the label mapping
received from the neighbor LSR only when the neighbor LSR is its next hop.
When the next hop neighbor changes due to the change of network topology, the LSR
that supports the conservative retention mode can save memory and label space because
the LSR maintains only the label from the next hop neighbor. The re-establishment of
LSPs, however, lasts a long time.
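The label described in the concepts above is carried in a 4-byte shim header inserted between the link-layer and network-layer headers (per RFC 3032): 20 bits of label, 3 bits of traffic class, a bottom-of-stack bit, and 8 bits of TTL. The following sketch of encoding and decoding such an entry is illustrative only; the function names are not from any MA5600T interface.

```python
import struct

def encode_label(label, tc=0, s=1, ttl=64):
    """Pack one MPLS label stack entry (RFC 3032):
    label(20 bits) | TC(3 bits) | S(1 bit) | TTL(8 bits)."""
    assert 0 <= label < 1 << 20
    word = (label << 12) | (tc << 9) | (s << 8) | ttl
    return struct.pack("!I", word)  # network byte order, 4 bytes

def decode_label(entry):
    """Unpack a 4-byte label stack entry into its fields."""
    (word,) = struct.unpack("!I", entry)
    return {
        "label": word >> 12,
        "tc": (word >> 9) & 0x7,
        "s": (word >> 8) & 0x1,
        "ttl": word & 0xFF,
    }

# Reserved label 3 (implicit NULL) is advertised by the egress to request PHP.
IMPLICIT_NULL = 3
```

Implicit NULL (label 3) never appears on the wire: when the egress advertises it, the penultimate LSR pops the stack instead of swapping, which is exactly the penultimate hop popping behavior described above.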
1. First, enable MPLS and LDP on each router on the network, and enable LDP on the
interconnected interfaces.
2. Consequently, LDP automatically sets up an LDP session between any two routers. The
LDP packets are carried on this session.
3. LDP works with the traditional routing protocol such as OSPF and RIP to set LSPs in each
LSR for the FEC with service requirements.
4. LDP does not need to be enabled for the establishment of static LSPs. Configure the FEC,
and inbound and outbound labels on each MPLS router that the static LSP travels.
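The dynamic setup steps above can be sketched as a toy model of ordered, downstream-unsolicited label distribution for one FEC along a route already chosen by the routing protocol. Labels propagate from the egress upstream, and each LSR allocates its own incoming label; all names are illustrative.

```python
def build_lsp(fec, path):
    """Simulate ordered label distribution for one FEC along `path`
    (a list of LSR names, ingress first).  Each LSR allocates an
    incoming label and advertises it to its upstream neighbor,
    starting from the egress (ordered control mode).
    Returns {lsr: (in_label, out_label)}; None marks push/pop ends."""
    next_label = iter(range(16, 16 + len(path)))  # labels below 16 are reserved
    tables = {}
    downstream_label = None  # the egress pops, so it has no outgoing label
    for lsr in reversed(path):
        in_label = None if lsr == path[0] else next(next_label)
        tables[lsr] = (in_label, downstream_label)
        downstream_label = in_label  # advertised upstream in a label mapping
    return tables
```

The ingress entry has no incoming label (it pushes the label learned from downstream), transit LSRs swap, and the egress pops, matching the forwarding-table description in the Definition above.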
Figure 4-2 Working principle of active and standby protection for the MPLS service
The user-side MPLS data is transmitted to the SPUB board for processing
through the control board, and then transmitted to the upstream network
through the control board again after being processed by the SPUB board.
Port B of the two internal 10GE ports on the active SPUB board is connected to port A on the
active control board. Ports A and B are used to receive and transmit the network-side and user-
side packets. The other port (port F) is connected to port E on the standby control board.
Port D of the two internal 10GE ports on the standby SPUB board is connected to port C of the
active control board. Ports C and D are used to receive and transmit the network-side and user-
side packets. The other port (port H) is connected to port G on the standby control board.
Therefore, after the active and standby SPUB boards form a protection group, the system
automatically switches the MPLS services to the standby SPUB board when the active SPUB
board fails, thus implementing active and standby protection for the MPLS services.
LDP GR
GR is a key technology for implementing high availability (HA). With GR, a restarting device recovers information about the protocol control plane from its neighbors or remote peers, rather than learning the control-plane information through a fresh handshake and full protocol exchange.
The LDP GR function ensures normal forwarding of the MPLS service during the active and
standby switchover or upgrade of the system. In addition, the LDP GR function resumes the
LDP session and completes the LSP establishment after the active and standby switchover or
upgrade of the system.
NOTE
In actual application, to prevent services from being affected by the failure of the active control board,
configure the system-level GR in the environment where both active and standby control boards are
configured.
4.5.1 Introduction
Definition
MPLS RSVP-TE is a technology that integrates TE with the MPLS technology. MPLS RSVP-
TE establishes label switched path (LSP) tunnels along specified paths for resource reservation,
enabling network traffic to bypass congested nodes and balancing the traffic load.
To establish constraint-based LSPs in MPLS TE, RSVP is extended. The extended RSVP
signaling protocol is called the RSVP-TE signaling protocol.
Purpose
To deploy engineered traffic on a large-scale backbone network, a simple solution with good
expansibility must be adopted. MPLS, as a stacking model, can easily establish a virtual topology
over a physical network and map traffic to this topology.
MPLS TE establishes the LSP tunnel along a specified path through RSVP-TE and reserves
resources. Thus, carriers can accurately control the path that traffic traverses to avoid the node
where congestion occurs. This solves the problem that certain paths are overloaded and other
paths are idle, utilizing the current bandwidth resources sufficiently. At the same time, MPLS
TE can reserve resources during the establishment of LSP tunnels to ensure the QoS.
To ensure continuity of services, MPLS TE also introduces route backup and fast reroute (FRR)
to implement quick switching in case of link failure.
4.5.2 Specifications
l The RSVP-TE protocol
l Opaque Type 10 LSA (OSPF TE extension)
l The CSPF protocol
l Strict and loose explicit paths
l Active and standby TE LSPs
l Functioning as the ingress or egress LER on an MPLS RSVP-TE network
l 64 ingress TE LSPs
l 64 egress TE LSPs
4.5.3 Principle
The MPLS TE signaling can carry the strict or loose attributes of an explicit path, and establish a
CR-LSP along a specified path.
1. The ingress LSR generates the Path message and transmits it to the egress LSR.
2. After the egress LSR receives the Path message, it generates the Resv message and transmits it to the ingress LSR. At the same time, the LSRs on the LSP reserve resources for the LSP through the Resv message.
3. When the ingress LSR receives the Resv message, it indicates that the LSP is successfully
established.
RSVP-TE GR
RSVP-TE graceful restart (GR) is a status recovery mechanism of RSVP-TE. When the control
plane performs active/standby switchover, RSVP-TE GR can ensure the continuity of data
transmission on the forwarding plane. At the same time, neighbor nodes help the GR node to
recover in time.
RSVP-TE GR is based on the Hello mechanism of RSVP. The recovery of the local status
depends on the upstream Path message or the downstream Recovery Path message.
RSVP GR has the following features: shortening the information recovery time of the control plane, reducing temporary route changes, and ensuring the continuity of service forwarding on the forwarding plane.
4.6.1 Introduction
Definition
Operation Administration & Maintenance (OAM) has the following features:
l Simplifying network operations
l Checking the network performance anytime
l Reducing OPEX of the network
Deployment of an effective OAM mechanism is crucial to the running of the network, especially
to the network with certain QoS requirements, namely, certain performance and usability
requirements.
MPLS, as the key bearer technology for the next-generation network, provides multiple services with QoS guarantee. In addition, MPLS introduces a unique network layer, and therefore there will be faults that are relevant only to this new network layer. Therefore, an MPLS network must have the OAM capability.
MPLS OAM provides both detection tools and mature protection switching mechanisms. In this
way, MPLS can perform switching when a fault occurs on the MPLS layer. This minimizes the
loss of user data.
Purpose
The MPLS OAM functions are as follows:
l Fault detection: On-demand query and continuous detection are provided, so that you can learn at any time whether faults exist on the monitored LSP.
l Protection switching: After a fault occurs, it can be detected, analyzed, and located, and an
alarm will be reported. In addition, the corresponding measures can be taken according to
the fault type.
4.6.2 Specifications
l OAM and protection switching for static tunnels and dynamic tunnels (dynamic tunnels
are set up through the RSVP-TE signaling)
l 1:1 LSP protection mode
l 32 LSP protection groups
l Transmission and processing of the CV, FFD, and BDI packets in MPLS OAM
l Transmitting CV packets at an interval of 1s
l Transmitting FFD packets at an interval of 10 ms, 20 ms, 50 ms, 100 ms, 200 ms, or 500
ms
l Transmitting BDI packets at an interval of 1s
4.6.3 Principle
Background Knowledge for MPLS OAM
1. MPLS OAM packets are classified as follows:
l Connectivity detection (CD) packets. The two types of CD packets are as follows:
Connectivity verification (CV)
Fast failure detection (FFD)
l Forward defect indication (FDI)
l Backward defect indication (BDI)
MPLS OAM is implemented by periodically transmitting detection packets CV or FFD
over the detected LSPs.
2. Basic detection process
MPLS OAM is implemented by periodically transmitting detection packets CV and FFD
over the detected LSPs.
l When the CV packet is used for detection, a sliding window 3s wide is set on the destination, and the LSP status is checked by using the CV packets received in the sliding window.
l When the FFD packet is used for detection, a sliding window three times the FFD transmit interval wide is set on the destination, and the LSP status is checked by using the FFD packets received in the sliding window.
3. CV and FFD
The FFD and CV detection packets are mutually exclusive. That is, only the FFD or CV
detection packets can be applied to one LSP at a time.
4. Backward path
BDI packets are transmitted through the backward path. The ingress of a backward path is
the egress of the detected LSP, and the egress of the backward path is the ingress of the
detected LSP. That is, each forward LSP has a backward path.
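The sliding-window check described above can be sketched as follows: the destination declares the LSP faulty when no correct CV/FFD packet arrives within a window of three transmit intervals (3s for CV at its 1s interval, three times the configured interval for FFD). This is a simplified illustration, not the device's actual state machine.

```python
def lsp_state(arrival_times, interval, now):
    """Judge LSP connectivity at time `now` (seconds) from the
    timestamps of correctly received CV/FFD packets.
    The detection window is three transmit intervals wide:
    3 s for CV (1 s interval), 3x the interval for FFD."""
    window = 3 * interval
    recent = [t for t in arrival_times if now - window <= t <= now]
    return "up" if recent else "down"
```

With FFD at a 10 ms or 50 ms interval the window shrinks accordingly, which is why FFD detects failures much faster than CV.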
l The source transmits the CV/FFD packets to the destination through the detected LSP.
l The destination checks the correctness of the type and frequency information carried in the
received detection packets and measures the number of correct and errored packets that are
received within the detection period to monitor the connectivity of the LSP in real time.
l When the LSP fails, the destination detects the defect quickly and analyzes the defect type.
Bind a backward LSP to the detected LSP when configuring the OAM function for the detected
LSP. A backward path is an LSP that has the opposite source and destination of the detected
LSP, or a non MPLS path that can be connected to the source and destination of the detected
LSP.
After the destination detects a defect, the destination transmits the BDI packets that carry the
defect information to the source through the backward path. The source learns about the status
of the defect, and triggers the corresponding protection switching when the protect group is
correctly configured. Figure 4-4 shows the MPLS OAM CD.
To ensure that the standby LSP still works when the active LSP fails, the standby LSP needs to use a physical path totally different from that of the active LSP.
The working mode of MPLS OAM protection switching is 1:1 protection mode. In this mode,
each active LSP has a standby LSP.
l In normal conditions, data is transmitted through the active LSP and no traffic is transmitted
through the standby LSP.
l When the destination detects a failure on the active LSP through the detection mechanism,
the destination switches to the standby LSP, and then transmits the BDI packet to the source
through the backward path, instructing the ingress to switch the traffic on the active LSP
to the standby LSP. Thus, 1:1 protection switching is implemented.
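The 1:1 switching behavior above can be sketched as a small state machine: the destination detects the defect, moves its own selector to the standby LSP, and sends a BDI packet over the backward path so that the ingress switches as well. Class and method names are illustrative.

```python
class ProtectionGroup:
    """1:1 LSP protection: traffic uses the active LSP until a
    defect is detected, after which both ends select the standby."""

    def __init__(self):
        self.ingress_selected = "active"
        self.egress_selected = "active"

    def on_defect_at_egress(self):
        """The destination detects a defect on the active LSP."""
        self.egress_selected = "standby"  # egress switches immediately
        return "BDI"  # carried to the ingress over the backward path

    def on_bdi_at_ingress(self, packet):
        """The ingress receives the BDI and completes the switchover."""
        if packet == "BDI":
            self.ingress_selected = "standby"
```

Because each active LSP has its own standby LSP, switching one protection group does not disturb traffic on the others.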
Glossary
The packets with the same destination address are assigned to an FEC
and a label is taken out of the label resource pool and is allocated to this
FEC. The label switching node records the relationship between the label
Label distribution
and the FEC, encapsulates the relationship into the message packet, and
notifies the upstream label switching node of it. This process is called
label distribution.
Label space The value range of the allocated labels is called the label space.
Term Description
LSRs are the basic elements in an MPLS network. All LSRs support the
MPLS protocol.
Label switching An LSR consists of a control unit and a forwarding unit. The control unit
router (LSR) is responsible for label distribution, route selection, setup of the label
forwarding table, and setup and release of the LSP. The forwarding unit
forwards the received packet according to the label forwarding table.
An LER provides the traffic classification, and label mapping (in this
case, the LER is an ingress) and label removal functions. An LER (called
Label switching
the ingress LER), on the edge of the MPLS network, assigns the traffic
edge router (LER)
that enters the MPLS network to different FECs, and applies for
corresponding labels for these FECs.
The path that an FEC traverses in an MPLS network is called the LSP.
Label switched The LSP, whose function is the same as the virtual circuit in ATM and
path (LSP) frame relay, is a unidirectional path from the ingress to the egress. Each
node on the LSP is an LSR.
The static LSP is the label forwarding path manually set up by the user
Static LSP
for label distribution to each FEC.
PEs refer to the edge devices on the service provider's network. In the
basic architecture of the MPLS-based VPN, PEs are located in the
backbone network. PEs are responsible for the management of VPN
users, establishment of LSPs among PEs, and route assignment among
PE
the tributaries within a VPN user. A PE maps and forwards packets from
the private network to the public network tunnel or from the public
network tunnel to the private network. PEs can be classified into U-PEs,
S-PEs, and N-PEs.
Term Description
The path from a PE to another PE, and then to another AC can be a point-
PW
to-point or point-to-multipoint connection between PEs.
Multi-hop PW Multi-hop PWs refer to the multiple PWs existing between two T-PEs.
PWE3 is a general name for all the services that traverse the PSN to the
peer CE. The intermediate transmission media of the services can be the
PWE3
same or not, and end-to-end management of the services can be
implemented.
Term Description
FDI packets are used to respond to the detected failure events. The major
function of the FDI packet is to suppress the alarms on the network layer
FDI packet
that occur after failure. Its primary purpose is to suppress alarms being
raised at affected higher level client LSPs and (in turn) their client layers.
The purpose of the BDI OAM function is to inform the upstream end of
BDI packet an LSP of a downstream defect. The BDI packet can be used in the 1:1
or 1:n protection switching service.
Protection Protection switching refers to the function that MPLS OAM exchanges
switching or duplicate traffic between the active tunnel and the standby tunnel.
Bypass LSP: An LSP that is used to protect the active LSP. A bypass LSP is generally in the idle state and does not carry services. When the active LSP fails, the service data continues to be forwarded by the bypass LSP.
Node protection: A TE FRR mode of protecting a node that is on the active LSP between the PLR and the MP. When this node fails, traffic can be switched to the bypass LSP.
Link protection: A TE FRR mode of protecting the direct link along the active LSP between the PLR and the MP. When this link fails, traffic can be switched to the bypass LSP.
Point of local repair (PLR): The point of local repair is the ingress of the bypass LSP.
GR: The IETF extends the protocols related to IP/MPLS forwarding (such as OSPF, IS-IS, BGP, LDP, and RSVP) to implement uninterrupted forwarding during a protocol restart, thus suppressing changes at the control layer to a certain extent during an active/standby switchover of the system. This series of standards is generally termed the graceful restart (GR) extension of each protocol.
MP Merge Point
DoD Downstream-on-Demand
DU Downstream Unsolicited
FR Frame Relay
TE Traffic Engineering
TEDB TE DataBase
AC Attachment Circuit
CE Customer Edge
PE Provider Edge
PW Pseudo wire
GR Graceful Restart
HA High Availability
5 PWE3
5.1 Introduction
Definition
Pseudo-wire emulation edge to edge (PWE3) is a type of L2 service carrying technology. It is
mainly used to emulate the essential behavior and characteristics of the services such as the
ATM, frame relay, Ethernet, low-rate time division multiplexing (TDM) circuit, and
synchronous optical network (SONET)/synchronous digital hierarchy (SDH) as faithfully as
possible in a packet switched network (PSN).
PWE3 is implemented on access devices through MPLS and IP technologies. MPLS supports
PWE3 by using the LDP or RSVP-TE protocol as signaling.
Purpose
PWE3 can interconnect the traditional network with PSN to share resources and expand the
reach of networks. For example, PWE3 can emulate services such as TDM, ATM, and Ethernet,
and can implement service interoperation by using existing PSN (IP/MPLS) as the bearer
network.
Benefit
PWE3 connects the traditional TDM, ATM, and Ethernet networks with the PSN (IP/MPLS). In this way, PWE3 protects the investment in the traditional TDM, ATM, and Ethernet networks, and also implements the all-IP network architecture.
5.2 Specifications
The MA5600T supports the following PWE3 specifications:
l Supports 896 static PWs.
l Supports 2048 dynamic PWs.
l Supports 256 PW templates.
l Supports single-hop PWs.
l Supports functioning as a UPE and does not support functioning as an SPE in the multi-
hop scenario.
l The dynamic PW supports the LDP protocol.
l Supports PW CAR.
l Supports functioning as a PE.
l Supports functioning as a P device.
l Supports the following ETH PWE3 specifications:
Supports two PW encapsulation formats: tagged mode and raw mode.
Supports mapping the VLAN priority to the EXP field in the MPLS label.
Supports only the MPLS over MPLS encapsulation format for ETH PWE3.
l Supports traffic label for ETH PW, and does not support traffic label for ATM PW or TDM
PW.
5.4 Availability
License Support
The PWE3 feature is a basic feature of the MA5600T. Therefore, the corresponding service is
provided without a license.
Version Support
Product Version
5.5 Principle
5.5.1 Basic Principle of PWE3
Basic PWE3 Transmission Components
PWE3, which uses LDP and RSVP-TE as the signaling protocols, carries various types of L2 services (that is, various types of L2 data packets) from the customer edge (CE), and transparently transmits the L2 data through tunnels (such as MPLS LSP or TE tunnels). As shown in Figure 5-1, the basic PWE3 transmission components include the following:
l Attachment circuit (AC): the link between CE and PE. All user packets (including L2 and
L3 protocol packets of users) on the AC are transparently forwarded to the peer end.
l Pseudo wire (PW): a virtual connection, that is, a virtual connection (VC) plus a tunnel. The tunnel can be an LSP, L2TPv3, GRE, or TE tunnel. A PW is directional and conveys VC information by signaling (LDP or RSVP-TE). The system manages VC information and tunnels to form PWs. For the PWE3 system, a PW is like a direct channel between a local AC and a peer AC and is used for transparently transmitting the L2 data of users.
l Forwarder: After a PE receives data frames from an AC, the forwarder selects a PW for
forwarding the frames. In fact, the forwarder is a forwarding table of PWE3.
l Tunnel: A tunnel is a direct channel between a local PE and a peer PE and is used for
transparently transmitting data between the PEs. Tunnels are used for carrying PWs. A
tunnel can carry multiple PWs. Generally, the tunnel refers to an MPLS tunnel.
l PW signaling protocol: A PW signaling protocol is the basis for implementing PWE3 and
is used to create and maintain PWs. Current PW signaling protocols are mainly LDP and
RSVP-TE.
l Encapsulation: The packets transmitted through the PW use the standard PW encapsulation
format and technology. There are multiple PWE3 encapsulation types on a PW. The formats
are defined in detail in draft-ietf-pwe3-iana-allocation-x.
l Quality of service (QoS): The priority information at the header of L2 user packets is
mapped to the QoS priority for transmitting the packets in the public network. In general,
support for MPLS QoS is required.
Assume that the VPN1 packet stream travels from CE1 to CE3. The basic data flow would be
as follows.
1. CE1 transmits an L2 packet to PE1 through an AC.
2. After PE1 receives the packet, the forwarder selects a PW for forwarding the packet.
3. PE1 generates two MPLS labels according to the PW forwarding entry. The private network label is used for identifying the PW, and the public network label is used for transmitting the packet to PE2 through the tunnel.
4. The L2 packet arrives at PE2 through the public network tunnel. The system extracts the private network label (the public network label is extracted by the penultimate P device).
5. The forwarder of PE2 selects an AC for forwarding the packet, and then PE2 forwards the packet to CE3.
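The label operations in these five steps can be sketched as follows. This is a minimal Python illustration of the two-label forwarding model only; the forwarding table, label values, and names (PW_FORWARDING_TABLE, "AC-CE1", and so on) are invented for the example and are not real MA5600T data structures.

```python
# Sketch of the two-label PWE3 data flow: ingress PE pushes both labels,
# the penultimate P pops the public (outer) label, the egress PE maps the
# private (PW) label to an AC. All values are illustrative.

PW_FORWARDING_TABLE = {
    # incoming AC -> PW label (identifies the PW) and tunnel label (reaches PE2)
    "AC-CE1": {"pw_label": 1025, "tunnel_label": 3001, "peer_pe": "PE2"},
}

def pe_ingress(ac, l2_frame):
    """PE1: the forwarder selects a PW for the frame and pushes both labels."""
    entry = PW_FORWARDING_TABLE[ac]
    return {
        "labels": [entry["tunnel_label"], entry["pw_label"]],  # outer label first
        "payload": l2_frame,
        "next_hop": entry["peer_pe"],
    }

def penultimate_p(packet):
    """Penultimate P node: pop the public (outer) tunnel label."""
    packet["labels"] = packet["labels"][1:]
    return packet

def pe_egress(packet, pw_to_ac):
    """PE2: read the private PW label and map it to the outgoing AC."""
    pw_label = packet["labels"][0]
    return pw_to_ac[pw_label], packet["payload"]

pkt = pe_ingress("AC-CE1", b"l2-frame")
pkt = penultimate_p(pkt)
ac, frame = pe_egress(pkt, {1025: "AC-CE3"})
```

The point of the sketch is the division of labor: only the edge PEs know about PWs and ACs; the P nodes see nothing but the outer label.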
(Diagram: pseudo-wire reference model — a PSN tunnel between the two PW ends carries the native Ethernet services attached at each end.)
The channel set up in a PWE3 network is a point-to-point channel. Channels are isolated from
each other. L2 user packets are transparently transmitted between PWs. The following provides
a detailed description.
l According to the service requirements of the CE, one or more PWs are set up between PE1 and PE2. Multiple PWs can be carried on one PSN tunnel.
l For the PEs, after the PW is set up, the mapping between the user access interface (AC)
and virtual link (PW) is determined.
l The PSN device only needs to forward the MPLS packet according to the MPLS label,
regardless of the L2 user packet encapsulated inside the MPLS packet.
The generic PWE3 protocol stack, from top to bottom, consists of the following layers: payload; encapsulation (may be null); PW demultiplexer; PSN convergence (may be null); PSN; data link; physical.
l Figure 5-4 shows the PWE3 protocol stack in the MPLS over MPLS encapsulation mode.
Figure 5-4 PWE3 protocol stack in the MPLS over MPLS encapsulation mode
(Diagram: PE-P-P-PE topology with xDSL-attached modems at both ends; PW1 is carried over LSP1, LSP2, and LSP3 between the PEs.)
l Figure 5-5 shows the PWE3 protocol stack in the MPLS over IP encapsulation mode.
Figure 5-5 PWE3 protocol stack in the MPLS over IP encapsulation mode
(Diagram: PE-P-P-PE topology with xDSL-attached modems at both ends; PW1 is carried over an IP tunnel (IP1) between the PEs.)
l Figure 5-6 shows the PWE3 protocol stack in the UDP over IP encapsulation mode.
Figure 5-6 PWE3 protocol stack in the UDP over IP encapsulation mode
(Diagram: PE-P-P-PE topology with xDSL-attached modems at both ends; the PW is carried over UDP (UDP1) over an IP tunnel (IP1) between the PEs.)
Upstream direction
Figure 5-9 QoS processing flow of upstream ETH PWE3 service (MPLS over MPLS
encapsulation)
(Diagram, user side to network side: the ETH CoS is mapped to the PW EXP; the PW EXP is copied to the LSP EXP; at the network side, the LSP EXP is mapped back to the ETH CoS, and packets are sent to 8 queues for PQ, WRR, and PQ+WRR scheduling.)
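The upstream QoS mapping chain in Figure 5-9 can be sketched as follows. The 1:1 CoS-to-EXP table and the queue-selection rule here are illustrative assumptions for the example, not the device's actual default configuration.

```python
# Sketch of the upstream marking chain: ETH CoS -> PW EXP -> (copied to)
# LSP EXP -> one of 8 queues. The mapping table and queue choice are
# illustrative, not real MA5600T defaults.

COS_TO_PW_EXP = {cos: cos for cos in range(8)}  # example: identity mapping

def mark_upstream(eth_cos):
    pw_exp = COS_TO_PW_EXP[eth_cos]   # step 1: map ETH CoS to PW EXP
    lsp_exp = pw_exp                  # step 2: copy PW EXP to LSP EXP
    queue = lsp_exp                   # step 3: pick one of the 8 queues (assumed rule)
    return {"pw_exp": pw_exp, "lsp_exp": lsp_exp, "queue": queue}

marking = mark_upstream(5)
```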
Application Description
Figure 5-10 shows an application over an existing network where a large amount of the legacy
equipment does not support the traffic label. In this application, the PE supports generating the
traffic label and performing load balancing but the P has no traffic label capability (for example,
the legacy equipment over the network).
l PE1 generates traffic label (FL) and at the same time performs load balancing (flow1 and
flow2). PE2 removes the FL.
l P1, P2, and P3 do not support traffic label for load balancing and they only forward data
like a common P.
Figure 5-10 Traffic label application (P equipment does not support traffic label for load
balancing)
(Diagram: PE1 as the ingress load-balances Flow 1 and Flow 2 over paths through P1/P3 and P2/P4/P5 toward PE2 as the egress.)
Figure 5-11 Network application: implementing PWE3 private line upstream transmission in
FE/GE access
(Diagram: ONU-OLT attachment over FE/GE and GE at both ends; ETH PWE3 is carried across the PSN between the OLTs.)
CW Control word
This topic describes the feature of directly connecting the MA5600Ts through the GE ports of
the upstream board in the GIU slot (such a board is referred to as the GIU upstream board).
6.1 Introduction
6.2 Specifications
6.3 Reference Standards and Protocols
6.4 Availability
6.5 Principle
6.6 Glossary, Acronyms and Abbreviations
6.1 Introduction
Definition
Subtending through GIU boards is a networking mode in which the MA5600T series are directly
connected to each other through the GE ports on the GIU upstream board.
Purpose
Subtending through GIU boards makes the networking of the MA5600Ts more flexible and
saves the upstream optical fiber resources for the access node. In addition, remote subtending
allows for fewer convergence devices at the central office (CO), simplifies the topology, and
facilitates service configuration.
6.2 Specifications
l The MA5600Ts can be subtended through the ports of the GIU board, the SCU control
board, or the ETHB board.
The ports provided by GIU boards can be used as upstream ports or subtending ports.
The SCUN board provides four GE optical ports, which can be used as upstream ports
or subtending ports.
The ETHB board provides eight ports, which can be used as subtending ports or
upstream ports. When the ports are used as upstream ports, they receive multicast
streams.
l For subtending in an RSTP/MSTP ring topology, it is recommended that the ring contain fewer than seven nodes.
6.4 Availability
License Support
The feature of subtending through GIU boards is a basic feature of the MA5600T. Therefore,
the corresponding service is provided without a license.
Version Support
Feature Dependence
A GIU upstream board has the following feature dependencies:
l The X2CS board works with the control board of the 10GE platform and supports Ethernet
clock synchronization. Each X2CS board provides two 10GE small form-factor pluggable
(SFP) optical ports.
l The GICK board works with the control board of the 10GE platform and supports Ethernet
clock synchronization through the GE port. Each GICK board provides two GE SFP optical
ports.
6.5 Principle
The MA5600T supports local subtending and remote subtending, depending on the location of the subtended devices.
Local Subtending
Local subtending is the subtending of multiple MA5600T shelves that are in the same cabinet
or in multiple local cabinets.
The MA5600Ts can be locally subtended through the control board or GIU upstream board.
Each GIU upstream board provides up to two GE optical ports for upstream transmission or
subtending. The number of ports for subtending depends on the bandwidth requirement. If an
active/standby configuration is required, configure two GIU upstream boards.
l According to the connection mode, local subtending can be in a star topology or daisy chain
topology.
l According to the configuration of the GIU upstream board (one or two boards configured),
local subtending can be implemented through a single GIU upstream board or through dual
GIU upstream boards.
l Local subtending in a star topology is shown in Figure 6-1. Local subtending in a daisy
chain topology is shown in Figure 6-2.
Such a remote subtending mode can also be implemented in an MSTP/RSTP ring topology, as
shown in Figure 6-4. In this application, the control board, GIU board, and ETHB board all
support the subtending in an MSTP/RSTP ring topology; however, the ETHB board can be
configured only on the device that is connected to the CO device and provides upstream ports.
A more complex subtending network is an MSTP/RSTP ring network in which each node is
involved in local subtending or remote subtending.
(Diagram for Figure 6-4: MA5600T shelves subtended in an MSTP/RSTP ring topology.)
Layer 2 protocol processing refers to link layer protocol management and includes a number of
sub-features. This topic describes the sub-features in detail.
7.1 Overview
7.2 MAC Address Management
This topic provides the definition, specifications, availability, and principle of the MAC address
management feature.
7.3 VLAN Management
This topic provides the definition, specifications, reference standards and protocols, availability,
and principle of the VLAN management feature.
7.4 VLAN Translation Policy
This topic provides the definition, specifications, availability, and principle of VLAN translation
policies.
7.5 Forwarding Policy Management
This topic provides the definition, specifications, availability, and principle of the forwarding
policies.
7.6 Bridging
On the MA5600T enabled with the bridging feature, the access users of the same MA5600T can
communicate with each other at Layer 2.
7.7 Glossary, Acronyms and Abbreviations
7.1 Overview
Layer 2 protocol processing refers to link layer protocol management and includes the following
sub-features: MAC address management, VLAN management, VLAN translation, forwarding
policies, and bridging.
7.2.1 Introduction
Definition
MAC address management is a basic Layer 2 management feature. This feature allows users to
set the MAC address aging time, limit the number of learnable dynamic MAC addresses, and
set a static MAC address.
Purpose
l To set the MAC address aging time
After the MAC address aging time is set, the system periodically checks for aged dynamic MAC addresses. If no packet carrying a specific source MAC address is transmitted or received within one to two times the aging time, the system deletes the MAC address from the MAC address table.
l To limit the number of learnable dynamic MAC addresses
Users can manually configure the number of learnable dynamic MAC addresses. When the
learned MAC addresses reach the preset number, a user port does not learn new MAC
addresses.
l To set a static MAC address
Users configure a static MAC address so that the system can connect to a device with a
specified MAC address through a port. The system will directly forward data according to
the static MAC address.
Benefits
Benefits to Carriers
l Limiting the number of learnable dynamic MAC addresses can limit the number of the
MAC addresses that enter the network and lighten the burden of a network device.
l Setting a static MAC address prevents MAC address transfer.
Benefits to Subscribers
After users set a static MAC address for a service port and set the maximum number of learnable dynamic MAC addresses to 0, the service port receives only the user data that carries the preset static MAC address. This effectively binds the MAC address to the service port, which improves user security.
7.2.2 Specifications
The specifications of the MAC address management feature are as follows:
l The SCUN control board supports a maximum of 32K MAC addresses.
l The GPBD/EPBD/ETHB board supports a maximum of 32K MAC addresses.
l The maximum number of learnable MAC addresses can be configured based on service
port.
The maximum number of learnable MAC addresses for a PON service board: 1023
(1023 indicates no limitation. In this case, the number of learnable MAC addresses is
limited by the capacity of the MAC address table of the board.)
l The system supports a maximum number of 1K static MAC addresses.
l The aging time of a dynamic MAC address is configurable, ranging from 10s to 1000000s, with a default of 300s.
l A dynamic MAC address can be set to non-aging.
CAUTION
MAC address tables adopt the hash algorithm, which may result in hash collisions.
l When the SCUN control board is used, the maximum number of concurrent online users
(MAC addresses) in the system is recommended not to exceed 16K.
l The maximum number of concurrent online users (MAC addresses) on each GPBD/EPBD
board is recommended not to exceed 4K.
l The maximum number of concurrent online users (MAC addresses) on each ETHB board is
recommended not to exceed 8K.
7.2.3 Availability
License Support
The corresponding service is provided without a license.
Version Support
7.2.4 Principle
Setting the MAC Address Aging Time
l If the aging time is too short, a dynamic MAC address will be deleted from the MAC address
table prematurely. When the device receives a data packet carrying such a MAC address,
the MAC address will be considered as an unknown address, and the device will broadcast
this data packet to all the ports in a VLAN. Such unnecessary broadcast affects the operation
performance of the system.
l If the aging time is too long, the device cannot update the MAC address table promptly in response to network changes. As a result, new MAC addresses cannot be learned, and packets destined for those addresses are broadcast because their destination addresses are considered unknown.
l Dynamic MAC addresses are periodically aged to release MAC address resources and
prevent the failure to learn new MAC addresses.
l The aging time takes effect on dynamic MAC addresses but not on static MAC addresses.
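The aging and learning-limit behavior described above can be illustrated with a toy MAC address table. The class, its method names, and the timestamps are invented for this sketch; only the 300s default aging time and the 1023 learning limit come from the specifications earlier in this section.

```python
# Toy dynamic MAC address table with aging and a learning limit.
# Structure and names are illustrative, not the MA5600T implementation.

class MacTable:
    def __init__(self, aging_time=300.0, max_dynamic=1023):
        self.aging_time = aging_time
        self.max_dynamic = max_dynamic
        self.dynamic = {}        # MAC -> last-seen timestamp
        self.static = set()      # static entries are never aged

    def learn(self, mac, now):
        """Learn a dynamic MAC address, honoring the learning limit."""
        if mac in self.static:
            return True
        if mac not in self.dynamic and len(self.dynamic) >= self.max_dynamic:
            return False         # limit reached: no new addresses learned
        self.dynamic[mac] = now  # refresh the last-seen time
        return True

    def age_out(self, now):
        """Periodic check: delete entries idle longer than the aging time."""
        self.dynamic = {m: t for m, t in self.dynamic.items()
                        if now - t <= self.aging_time}

table = MacTable(aging_time=300.0)
table.learn("00:11:22:33:44:55", now=0.0)
table.age_out(now=301.0)   # idle longer than the aging time: entry removed
```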
Definition
Virtual local area network (VLAN) is a technology used for logically grouping devices in the
same LAN into different subnets in order to form virtual workgroups. VLAN is a basic
technology that is widely applied to various access modes and services, such as multicast, triple
play, wholesale, and private line services.
The IEEE issued the 802.1Q standard in 1998 to standardize VLAN implementations, and revised it in 2003 and 2005. The IEEE issued the 802.1ad standard in 2005 to standardize provider bridge (QinQ) implementations.
Purpose
The VLAN management feature facilitates carriers' service planning.
l The standard VLAN is primarily used for subtending. The MA5600T supports the Ethernet
subtending networking. Several access devices at different levels can be subtended through
the GE/FE ports, which can expand the network coverage and address the requirements for
large access capacity.
l The smart VLAN is primarily used for saving the VLAN resources of the system or isolating
users.
l The QinQ VLAN is primarily used for transparently transmitting private network VLAN
tags to implement the L2 VPN application.
l The stacking VLAN can identify users and services. In some scenarios, certain BRASs
need to authenticate two VLAN tags. Therefore, the packets that are transmitted to the
upstream BRAS must carry two VLAN tags. In this case, it is required that the device
support the stacking VLAN.
Packet Format
To learn more about VLAN processing, see the differences between untagged, 802.1q, and QinQ
packet formats, as shown in Figure 7-1.
Figure 7-1 Differences between untagged, 802.1q, and QinQ packet formats
Untagged:
Dest Addr (6) | Src Addr (6) | Length/Type (2) | Data (0-1500 bytes) | FCS (4)
802.1Q encapsulation:
Dest Addr (6) | Src Addr (6) | EType (2) | Tag (2) | Length/Type (2) | Data (0-1500 bytes) | FCS (4)
Q-in-Q encapsulation:
Dest Addr (6) | Src Addr (6) | EType (2) | Tag (2) | EType (2) | Tag (2) | Length/Type (2) | Data (0-1500 bytes) | FCS (4)
(Field widths are in bytes.)
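The three layouts can be distinguished programmatically by walking the EType/Tag fields after the two 6-byte address fields. The following sketch recognizes only the 0x8100 TPID for simplicity; real provider bridges commonly use 0x88a8 for the outer (service) tag.

```python
import struct

TPID_8021Q = 0x8100  # common C-tag EtherType; 0x88a8 is typical for the S-tag

def count_vlan_tags(frame):
    """Count 802.1Q tags in an Ethernet header: untagged = 0,
    802.1Q = 1, Q-in-Q = 2. Simplified: only 0x8100 TPIDs recognized."""
    offset = 12  # skip the 6-byte destination and 6-byte source addresses
    tags = 0
    while True:
        (etype,) = struct.unpack_from("!H", frame, offset)
        if etype != TPID_8021Q:
            return tags          # reached the Length/Type field
        tags += 1
        offset += 4              # 2-byte TPID + 2-byte tag control information

# Example Q-in-Q header: dst, src, outer tag (VLAN 100), inner tag (VLAN 2), IPv4
frame = (b"\xff" * 6 + b"\x00\x11\x22\x33\x44\x55"
         + struct.pack("!HH", 0x8100, 100)
         + struct.pack("!HH", 0x8100, 2)
         + struct.pack("!H", 0x0800))
```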
7.3.2 Specifications
The MA5600T complies with the following VLAN management standards and protocols:
l IEEE 802.1q: IEEE standards for Local and metropolitan area networks-Virtual Bridged
Local Area Networks
l IEEE P802.1ad: Virtual Bridged Local Area Networks Amendment 4: Provider Bridges
l RFC3069: VLAN Aggregation for Efficient IP Address Allocation
7.3.4 Availability
Related NEs
This feature is a basic feature of the MA5600T and is not related to any other NE.
License Support
The attribute of a VLAN can be changed to stacking only when the VLAN stacking authority
status is Permit.
Version Support
Product Version
Miscellaneous
l Standard VLAN
If a VLAN contains an upstream port, delete the upstream port before deleting the
VLAN.
If a VLAN contains a Layer 3 interface, delete the interface before deleting the VLAN.
l Smart VLAN
If a Layer 3 interface has been created in a VLAN, delete the Layer 3 interface before
deleting the VLAN.
If a VLAN already contains an upstream port, delete the upstream port before deleting
the VLAN.
If a service port has been created in a VLAN, delete the service port before deleting the
VLAN.
The smart VLAN is a special type of VLAN. In addition to the characteristics of a standard
VLAN, the smart VLAN has the following characteristics:
In a smart VLAN, ports have different roles. Specifically, the ports in a smart VLAN
are divided into upstream ports and service ports.
The service ports in a smart VLAN are isolated from each other.
The upstream ports in a smart VLAN can communicate with each other.
A service port and an upstream port in a smart VLAN can communicate with each other.
The broadcast domain of every port in a standard VLAN covers all ports in the VLAN.
The broadcast domain of an upstream port in a smart VLAN covers all ports in the
VLAN. The broadcast domain of a service port, however, covers only the upstream
ports in the VLAN.
l MUX VLAN
If a Layer 3 interface has been created for a VLAN, delete the Layer 3 interface before
deleting the VLAN.
If a VLAN contains an upstream port, delete the upstream port before deleting the
VLAN.
If a service port has been created for a VLAN, delete the service port before deleting
the VLAN.
l VLAN profile
When a VLAN profile is bound to a QinQ VLAN, both the anti-IP spoofing and anti-
MAC spoofing functions are unavailable.
In a VLAN profile, the S-VLAN+C-VLAN forwarding and anti-MAC spoofing
functions cannot be enabled at the same time.
When a VLAN profile is bound to a VLAN, BPDU transparent transmission can be
enabled only when the VLAN is a QinQ VLAN.
When a VLAN profile is bound to a VLAN, the packet forwarding mode cannot be set
to the S-VLAN+C-VLAN mode when the VLAN is a common VLAN.
A smart VLAN can contain multiple upstream ports and multiple service ports. The
service ports in a smart VLAN are isolated from each other.
l MUX VLAN
A MUX VLAN is a VLAN that contains upstream ports and one service port.
One MUX VLAN can contain multiple upstream ports but only one service port.
A service port in a MUX VLAN is isolated from a service port in another MUX VLAN.
One-to-one mapping can be set up between a MUX VLAN and an access user.
Therefore, a MUX VLAN can uniquely identify an access user.
l Super VLAN
The concept of super VLAN is proposed to save IP address resources, and it is an L3-
based VLAN.
A super VLAN is formed by aggregating multiple sub VLANs. Through the L3 interface
of the super VLAN, services of different sub VLANs can be forwarded at L3. In this
way, the usage efficiency of IP addresses is improved. A sub VLAN can be a smart
VLAN or MUX VLAN but cannot be a QinQ VLAN or stacking VLAN. Different sub
VLANs in a super VLAN are isolated at L2, but they can communicate with each other
through the Address Resolution Protocol (ARP) proxy.
QinQ VLAN
Figure 7-2 shows QinQ VLAN service processing.
Through a QinQ VLAN, the access node can implement communication between users of the same private network (VLAN 1 or VLAN 2) who are located in different regions. The service packets of the users are processed as follows:
6. The LAN switch identifies and removes the private network VLAN tag (VLAN 1 or VLAN
2), and forwards the untagged packets to the users in the private network VLAN.
As described in the preceding section, communication between user 1 and user 2 in VLAN 2 or
communication between user 3 and user 4 in VLAN 1 is implemented through the QinQ VLAN.
VLAN Stacking
If VLAN stacking is used for increasing the VLAN quantity or identifying users, the cooperation
of the BRAS is required.
If VLAN stacking is used for providing the private line wholesale service, the upper-layer
network must work in the L2 mode and packets are forwarded according to VLAN+MAC
directly.
(Diagram: users of enterprise A and enterprise B connect through modems to the access node, across the MAN, and then to ISP 1 and ISP 2 respectively.)
The users of enterprise A are connected to ISP 1 through the access node by using a stacking
VLAN and the users of enterprise B are connected to ISP 2 through the access node by using
another stacking VLAN. The service packets of the users are processed as follows:
1. The user transmits untagged packets in the upstream direction. The packets then reach the
access node through the modem.
2. The access node adds two VLAN tags to the untagged user packets. Users belonging to different ISPs are mapped to different outer SP VLANs.
l The outer VLAN tag that is added to the user packets of enterprise A is SP VLAN 1
and the inner VLAN tag is the tag of the corresponding customer VLAN.
l The outer VLAN tag that is added to the user packets of enterprise B is SP VLAN 2
and the inner VLAN tag is the tag of the corresponding customer VLAN.
3. The MAN switching device forwards packets to different ISPs according to the SP VLAN.
4. After receiving the user packets, ISP 1 and ISP 2 remove the outer SP VLAN tag, and
differentiate users according to inner customer VLAN tags.
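Steps 2 and 4 above can be sketched as follows. The SP VLAN IDs, enterprise names, and function names are illustrative, not a real network plan.

```python
# Sketch of VLAN stacking: the access node pushes an inner customer (C) VLAN
# tag and an outer SP VLAN tag; the ISP strips the outer tag and identifies
# the user by the inner tag. All values are illustrative.

SP_VLAN_BY_ENTERPRISE = {"A": 1, "B": 2}   # enterprise -> outer SP VLAN

def push_stacking_tags(payload, enterprise, c_vlan):
    """Access node (step 2): add outer SP tag and inner customer tag."""
    return {
        "outer_sp_vlan": SP_VLAN_BY_ENTERPRISE[enterprise],
        "inner_c_vlan": c_vlan,
        "payload": payload,
    }

def isp_receive(pkt):
    """ISP side (step 4): remove the outer SP VLAN tag and differentiate
    the user by the inner customer VLAN tag."""
    return pkt["inner_c_vlan"], pkt["payload"]

pkt_a = push_stacking_tags(b"user-data", enterprise="A", c_vlan=30)
user_vlan, payload = isp_receive(pkt_a)
```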
Common VLAN
Common is the default attribute of a VLAN. A common VLAN does not contain the attribute
of QinQ or stacking. A common VLAN can serve as a common L2 VLAN or be used to create
an L3 interface for L3 forwarding.
l If the VLAN ID of the Ethernet port is the same as its native VLAN ID, the packet
transmitted upstream through the Ethernet port does not carry any VLAN tag (untagged).
l If the VLAN ID of the Ethernet port is different from its native VLAN ID, the packet
transmitted upstream through the Ethernet port carries the VLAN tag.
l When receiving an untagged packet, the Ethernet port attaches the native VLAN tag to the
packet before transmitting it.
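The three native VLAN rules above amount to two small decisions, sketched here with illustrative helper names:

```python
# Sketch of the native VLAN rules; function names are invented for the example.

def egress_tag(port_vlan, native_vlan):
    """Whether a packet transmitted upstream through the port is tagged:
    untagged when the port VLAN equals the native VLAN, tagged otherwise."""
    return "untagged" if port_vlan == native_vlan else "tagged"

def ingress_vlan(packet_vlan, native_vlan):
    """An untagged received packet (None) gets the native VLAN tag;
    a tagged packet keeps its own VLAN."""
    return native_vlan if packet_vlan is None else packet_vlan
```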
VLAN aggregation can implement L3 interoperation and save IP addresses. A super VLAN needs to be configured with sub VLANs, and a sub VLAN can be added to a specified super VLAN. A sub VLAN can only be a smart VLAN or a MUX VLAN, and its attribute must be common.
Figure 7-4 illustrates VLAN aggregation. When a super VLAN is used to aggregate multiple
VLANs, ports in these VLANs can communicate with each other. If these VLANs are not
aggregated, ports in these VLANs cannot communicate with each other.
(Diagram for Figure 7-4: sub VLAN 1 and sub VLAN 2, each containing PCs, are aggregated into super VLAN 1.)
Reserved VLANs
The range of reserved VLANs in the system is configurable. After reserved VLANs are changed,
the range of reserved VLANs is also changed. Currently, only consecutive reserved VLANs (for
example, VLANs 3-18) are supported, and discontinuous reserved VLANs (for example,
VLANs 3, 10, and 100) are not supported.
The configuration of reserved VLANs takes effect only after the corresponding data is saved
and the device is restarted. After reserved VLANs are changed, the system does not allow other
services to use the VLANs that are taking effect currently or will take effect after system
restarting.
If the start reserved VLAN is not configured, the system uses VLAN 4079 as the start reserved
VLAN and the 15 VLANs starting from VLAN 4079 as reserved VLANs by default. That is,
reserved VLANs are VLANs 4079-4093. The configurable range of the start reserved VLAN is
VLANs 2-4079. After the user configures a start reserved VLAN within this range, the system
automatically uses the 15 VLANs starting from the configured VLAN as reserved VLANs.
VLANs 4094 and 4095 are fixedly reserved, and VLAN 1 is the native VLAN. Therefore, these
three VLANs cannot be configured as reserved VLANs.
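The reserved-VLAN range described above can be sketched as follows. The function name and the validation wrapper are illustrative; the 2-4079 configurable range, the 15-VLAN window, and the 4079 default come from the text.

```python
# Sketch of the reserved VLAN computation: 15 consecutive VLANs starting
# from a configurable start VLAN (default 4079). VLAN 1 (native) and
# VLANs 4094-4095 (fixed reserved) can never be part of the range.

def reserved_vlans(start=4079):
    if not 2 <= start <= 4079:
        raise ValueError("start reserved VLAN must be in the range 2-4079")
    return list(range(start, start + 15))   # 15 consecutive VLAN IDs

default_range = reserved_vlans()            # VLANs 4079-4093 by default
```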
l In the service-board-based mode, traffic statistics collection can be performed on the service
ports in the VLAN or all VLANs in the system but cannot be performed on the standard
ports in the VLAN.
l In the ACL-based mode, traffic statistics collection can be performed on the service ports
and standard ports in the VLAN but can be performed on only a maximum of 64 VLANs
concurrently.
7.4.1 Introduction
Definition
VLAN translation refers to translating VLAN tags between the user side and the network side.
Purpose
VLAN planning is a part of network planning. Flexible VLAN translation policies enable carriers
to identify users or services by VLANs, making network planning easier for carriers.
7.4.2 Specifications
The MA5600T supports the following specifications for the VLAN translation feature:
7.4.3 Availability
License Support
The corresponding service is provided without a license.
Version Support
Product Version
Miscellaneous
Network-side packets of a transparent traffic stream must not carry the ID of a VLAN that contains the upstream port of the device.
NOTE
A transparent traffic stream refers to a service port whose tag-transform parameter is set to transparent during
creation of the service port. Protocol packets can be transparently transmitted by transparent service ports.
7.4.4 Principle
After packets undergo traffic classification, the VLANs of the packets need to be translated. For
details, see Figure 7-5.
(Diagram for Figure 7-5: after traffic classification, a packet undergoes one of the following VLAN operations: adding one VLAN tag, adding two VLAN tags, switching one VLAN tag, switching two VLAN tags, or transparent transmission. The packet then undergoes CAR and priority processing, enters one of the queues, and is transmitted after PQ, WRR, or PQ+WRR scheduling.)
If a packet matches a traffic rule, the MA5600T adds a VLAN tag to the packet or translates the
VLAN of the packet according to the rule. If the packet does not match any traffic rule, the
MA5600T drops the packet, as shown in Figure 7-6.
Figure 7-6 Traffic classification of a packet: by VLAN, by 802.1p, by VLAN+802.1p, or by ETH type
Table 7-4 shows the VLAN translation policies for the traffic stream with a single service.
Table 7-4 VLAN translation policies for the traffic stream with a single service
VLAN Type | Does Upstream Packet Carry a VLAN Tag | How to Process Packet
l "VLAN Type": Refers to the type of the VLAN that can be configured on the MA5600T.
l "Does Upstream Packet Carry a VLAN Tag": Indicates whether the packet transmitted
upstream carries a VLAN tag. "untag" indicates that the upstream packet does not carry a
VLAN tag. "tag" indicates that the upstream packet carries a VLAN tag. "priority-tagged"
indicates that the upstream packet carries VLAN ID 0.
l "How to Process Packet": Indicates how the MA5600T processes the packet.
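The tag actions available to a translation policy (see Figure 7-5) can be sketched as follows. This is an illustrative model only, not device code: a frame's tags are represented as a list of VLAN IDs, outermost first, and all function names are assumptions.

```python
# Illustrative sketch of the tag actions from Figure 7-5.
# A frame's VLAN tags are modeled as a list of VLAN IDs, outermost first.

def add_tag(tags, svlan):
    """Add one (S-)VLAN tag to an untagged or tagged frame."""
    return [svlan] + tags

def switch_one_tag(tags, svlan):
    """Translate the outer VLAN tag to the network-side S-VLAN."""
    return [svlan] + tags[1:]

def switch_two_tags(tags, svlan, cvlan):
    """Translate both tags (S-VLAN+C-VLAN) of a double-tagged frame."""
    return [svlan, cvlan] + tags[2:]

def transparent(tags):
    """Transparent transmission: tags are left unchanged."""
    return list(tags)
```

For example, an untagged upstream frame handled by `add_tag` leaves the device carrying only the network-side S-VLAN, while a double-tagged frame handled by `switch_two_tags` keeps two tags but with translated IDs.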
Table 7-5 shows the VLAN translation policies for the traffic stream with multiple services.
Table 7-5 VLAN translation policies for the traffic stream with multiple services
VLAN Type | Traffic Type | Does Upstream Packet Carry a VLAN Tag | How to Process Packet
user-vlan(tag) | Unavailable | -
l "VLAN Type": Refers to the type of the VLAN that can be configured on the MA5600T.
l "Traffic Type": Refers to the type of the traffic stream that can be configured on the
MA5600T.
l "Does Upstream Packet Carry a VLAN Tag": Indicates whether the packet transmitted
upstream carries a VLAN tag. "untag" indicates that the upstream packet does not carry a
VLAN tag. "tag" indicates that the upstream packet carries a VLAN tag.
l "How to Process Packet": Indicates how the device processes the packet.
7.5.1 Introduction
Definition
On a Layer 2 device, a packet is generally forwarded based on the VLAN and MAC address
information contained in the packet. That is, forwarding is based on VLAN+MAC. The
MA5600T can forward packets based on VLAN, specifically, based on S-VLAN+C-VLAN.
Purpose
Forwarding based on S-VLAN+C-VLAN addresses the issue that the Layer 2 forwarding of the
MA5600T depends on MAC address learning. Such a forwarding feature has the following
advantages:
7.5.2 Specifications
The MA5600T supports the following specifications of forwarding policies:
7.5.3 Availability
License Support
No license is required to access the corresponding service.
Version Support
Product Version
Miscellaneous
The restrictions and possible impacts brought by forwarding based on S-VLAN+C-VLAN are
as follows:
l The VLAN must be smart VLAN or MUX VLAN and cannot be standard VLAN.
l A MUX VLAN of any attribute (common, stacking, or QinQ) supports forwarding based
on S-VLAN+C-VLAN.
l A smart VLAN of the common attribute does not support forwarding based on S-VLAN
+C-VLAN. In this case, forwarding based on VLAN+MAC is used.
l A smart VLAN of the stacking or QinQ attribute supports forwarding based on S-VLAN
+C-VLAN.
l If the packets of a VLAN are forwarded based on S-VLAN+C-VLAN, the system does not support the broadcast suppression function for that VLAN.
l The system does not support the virtual MAC address (VMAC) function when the packets
of a VLAN in the system are forwarded based on S-VLAN+C-VLAN.
l The traffic forwarded based on S-VLAN+C-VLAN does not support the anti-MAC
spoofing function. The traffic forwarded based on VLAN+MAC, however, supports this
function.
l The traffic forwarded based on S-VLAN+C-VLAN does not support static MAC addresses.
The traffic forwarded based on VLAN+MAC, however, supports static MAC addresses.
l The S-VLAN+C-VLAN forwarding feature can be enabled for a VLAN. After this feature
is enabled, the packet suppression function is disabled. The packet suppression function
cannot be enabled when there is traffic forwarded based on VLAN+MAC in the system.
7.5.4 Principle
VLAN+MAC Forwarding
In general, the LAN switch forwards packets based on VLAN+MAC. With the VLAN+MAC
forwarding policy, the LAN switch automatically learns the mapping between the VLAN, source
MAC address, and incoming port when packets enter the LAN switch, and according to the
VLAN and destination MAC address, searches for the corresponding outgoing port and transmits
the packets through this port.
In the VLAN+MAC forwarding mechanism, packets with a broadcast MAC address or an unknown unicast MAC address are broadcast in the VLAN. That is, the packets are duplicated and transmitted to every port in the VLAN.
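The learn-then-forward behavior described above can be sketched as follows. This is a minimal illustrative model, not the device implementation; the class and field names are assumptions.

```python
# Minimal sketch of VLAN+MAC forwarding: the switch learns
# (vlan, src_mac) -> ingress port, looks up (vlan, dst_mac) for the egress
# port, and floods unknown destinations to all other ports in the VLAN.

class VlanMacSwitch:
    def __init__(self, vlan_ports):
        self.vlan_ports = vlan_ports        # vlan -> set of member ports
        self.fdb = {}                       # (vlan, mac) -> learned port

    def forward(self, vlan, src_mac, dst_mac, in_port):
        self.fdb[(vlan, src_mac)] = in_port  # learn the source address
        out = self.fdb.get((vlan, dst_mac))
        if out is not None and out != in_port:
            return [out]                     # known unicast: one egress port
        # unknown unicast/broadcast: flood within the VLAN, excluding ingress
        return sorted(self.vlan_ports[vlan] - {in_port})
```

A frame toward an unlearned MAC is flooded; once the destination host has sent a frame of its own, subsequent traffic goes out a single port.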
S-VLAN+C-VLAN Forwarding
The two-tagged VLAN (S-VLAN+C-VLAN) is an extension of VLANs. It expands the VLAN
identification range. In addition, S and C generally have special meanings. For example, S
identifies the service and C identifies the customer. Hence, each "S-VLAN+C-VLAN" uniquely
identifies one type of service of one customer and makes S-VLAN+C-VLAN forwarding
possible.
S-VLAN+C-VLAN forwarding refers to the feature with which a unique outgoing port (or service port) is looked up according to the Layer 2 mapping constituted by the two VLAN IDs (S-VLAN and C-VLAN) in order to forward the packets of a VLAN.
NOTE
Only one service port can be set up in a MUX VLAN. Therefore, a MUX VLAN with the common attribute can
also support VLAN-based forwarding. A smart VLAN supports VLAN-based forwarding only when its attribute
is QinQ or stacking.
The S-VLAN+C-VLAN forwarding entry does not need to be learned dynamically. The system
automatically creates the forwarding entry during setup of the service port. According to the
forwarding entry, upstream packets are transmitted through the corresponding upstream port
and downstream packets are transmitted through the corresponding service port.
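The static forwarding entries described above can be sketched as follows; this is an illustrative model under the stated behavior (entries created at service-port setup, no MAC learning), and all names are assumptions.

```python
# Sketch of static S-VLAN+C-VLAN forwarding: each entry is created when a
# service port is set up, keyed by the (S-VLAN, C-VLAN) pair; no dynamic
# MAC learning is involved.

class SvlanCvlanTable:
    def __init__(self, upstream_port):
        self.upstream_port = upstream_port
        self.entries = {}                      # (svlan, cvlan) -> service port

    def add_service_port(self, svlan, cvlan, service_port):
        self.entries[(svlan, cvlan)] = service_port

    def forward_downstream(self, svlan, cvlan):
        # Downstream: the tag pair uniquely selects one service port.
        return self.entries.get((svlan, cvlan))

    def forward_upstream(self):
        # Upstream: all matched traffic goes to the upstream port.
        return self.upstream_port
```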
VLAN+MAC+CoS Forwarding
When the ONT or CPE accesses the MA5600T in the Layer 3 mode, one VLAN may be used
to identify users and the 802.1p priority may be used to identify services in the upstream
direction; after the MAC NAT function is enabled on the ONT or CPE, the MAC addresses
carried in multiple types of services may be the same. To solve this problem, the MA5600T
needs to support VLAN+MAC+CoS forwarding.
VLAN+MAC+CoS forwarding (service flow bundle) can be divided into two steps: identifying
the unique user according to VLAN+MAC; finding out the corresponding service flow according
to the 802.1p priority.
The MA5600T implements the service flow bundle as follows: the MA5600T sets up a service flow for each service of a user and transmits all the services of the user to the same S-VLAN (or S-VLAN+C-VLAN) in the upstream direction.
l If this S-VLAN is an N:1 VLAN, that is, if the user may share the upstream VLAN with other users, set the VLAN+MAC forwarding mode for the S-VLAN.
l If this S-VLAN or S-VLAN+C-VLAN uniquely identifies the user, set the S-VLAN+C-VLAN forwarding mode for the S-VLAN.
7.6 Bridging
On the MA5600T enabled with the bridging feature, the access users of the same MA5600T can
communicate with each other at Layer 2.
7.6.1 Introduction
Definition
Bridging for access users is a feature with which the access users on the same MA5600T can
communicate with each other at Layer 2. It can be intra-board bridging or inter-board bridging.
Purpose
When an access device (such as an MA5600T) provisions common access services, all access
users are isolated from each other at Layer 2 for the sake of security. The following two conditions
may arise:
l For the QinQ service, Layer 2 forwarding is required; however, ports are isolated from each
other at Layer 2 on one MA5600T and hence Layer 2 forwarding cannot be implemented.
Therefore, the QinQ service can only be implemented between different MA5600Ts.
l In common access service, the IP addresses of two users on one MA5600T are generally
in the same network segment. These two users, however, cannot communicate with each
other in this network segment due to Layer 2 isolation. In this case, the upper-layer gateway
is required to support Layer 3 forwarding and ARP proxy. That is, more requirements are
posed on the upper-layer gateway.
All the preceding service application problems originate from the failure in bridging among all
access users on one MA5600T. The purpose of this feature is just to implement bridging among
the access users on one MA5600T.
7.6.2 Specifications
The GPBD board works with the SCUN control board to implement bridging among the access
users of the GPBD board.
l The GPBD board provides 8 GPON ports on its front panel, and each port supports a 1:128
split ratio.
l The SCUN control board, core of system control and service switching, provides 4 GE
ports on its front panel.
7.6.4 Availability
License Support
The bridging feature is an optional feature of the MA5600T, and the corresponding service is
controlled by the license.
Version Support
Hardware Support
Currently, the EPBD, GPBD and SCUN boards on the MA5600T support the bridging feature.
The service board needs to work with the SCUN control board to implement the bridging feature.
Feature Dependency
l In an S-VLAN, the bridging feature and the S-VLAN+C-VLAN forwarding feature are
mutually exclusive.
l In an S-VLAN, the bridging feature and the ARP proxy feature are mutually exclusive.
7.6.5 Principle
Architecture Model of the Bridging Feature
The following section describes the architecture model of the bridging feature.
Figure 7-7 Bridging among access users (user 1, user 2, and user 3 connect to the GPON ports of a GPBD board through splitters and share one VLAN and one IP network segment)
As shown in Figure 7-7, the following bridging functions are implemented on the MA5600T
after the bridging feature is enabled for a VLAN.
l Bridging among different ONUs that are connected to one GPON port, such as user 1 and
user 2
l Bridging among different ONUs that are connected to different GPON ports on one GPON
board, such as user 1, user 2, and user 3
(Figure: logically, the GPBD board acts as a LAN switch for the bridged users)
Network Applications
Figure 7-9 Network application of the bridging feature (ONUs connect to the access node through splitters; bridging within the VLAN interconnects common access users, while a separate VLAN carries the enterprise VPN traffic)
As shown in Figure 7-9, on one MA5600T, both bridging of the enterprise private line users (the red lines) and bridging of the common access users (the yellow lines) are implemented. These applications are differentiated by VLANs. A QinQ VLAN can be used for the enterprise private line service; to achieve this, enable the bridging function for this VLAN so that the private line users on one MA5600T are bridged. If bridging among the users in a public network VLAN is also required, enable the bridging function for that VLAN directly.
User board: in this document, a board that provides users with access services.
8 QoS
Quality of service (QoS) provides end-to-end service quality assurance for users by setting a
series of QoS parameters, such as service availability, delay, jitter, and packet loss ratio. It
includes technologies such as priority processing, traffic policing, ACL policy, and congestion
avoidance and management.
The following briefly describes QoS processing on the MA5600T. For details about each QoS
action, see the relevant topics.
(Figure: QoS processing on the MA5600T. On the service board, traffic streams undergo traffic classification, priority processing, traffic policy, congestion avoidance, and congestion management over priority queues 0-7; on the control board, packets undergo ACL processing, traffic policy, congestion avoidance, and congestion management over priority queues 0-7.)
1. After entering the service board from the user port, user packets undergo QoS processing as follows:
a. Traffic classification: User services are differentiated according to the characteristics
of user Ethernet packets and different services achieve different QoS guarantees.
b. Priority processing: Different priority processing policies are set for different traffic
streams so that these traffic streams are scheduled according to their priorities when
congestion occurs on the local device or upper-layer network.
c. Traffic policing: It is used to limit the traffic volume and address the burst of a certain
incoming connection on a network. When the packets meet certain conditions, for
example, when the traffic of a connection is too heavy, traffic policing takes different
actions, such as dropping the packets, or coloring the packets (re-setting the priority
of the packets). In this way, the port can maintain a stable rate, which avoids impact
on the upper-layer devices. Generally, CAR is used to limit the traffic of a certain type
of packets.
d. Congestion avoidance: When congestion occurs, unqualified packets are dropped in
advance using an early drop algorithm (RED or WRED) to avoid further congestion.
e. Congestion management: Outgoing packets with different priorities enter different
priority queues through PQ or WRR scheduling so as to manage traffic on the device.
2. After entering the control board, packets undergo QoS processing as follows:
a. ACL policy: A series of match rules are configured to identify and filter data packets
that match the rules. After the specific objects are identified, the system permits or
refuses the corresponding data packets to pass according to the preset rules. ACL-
based traffic filtering is a prerequisite for QoS. ACL together with QoS improves the
system security.
b. Congestion avoidance: When congestion occurs on a port, the early drop algorithm is
used to avoid further congestion.
c. Congestion management: Outgoing packets with different priorities enter different
priority queues through queue scheduling.
8.2.1 Overview
Definition
Traffic classification is a technology that differentiates services by classifying packets according to the characteristics of user Ethernet packets and certain rules, so as to apply different processing operations and provide different services.
Purpose
The purpose of traffic classification is to differentiate traffic streams to provide different QoS
guarantees for various services of users. The system implements traffic-stream-based service
mapping and makes preparations for the subsequent QoS actions, for example, transforming
between user VLANs and network VLANs, upstream and downstream CAR, priority marking,
and queue scheduling.
8.2.2 Specifications
Traffic streams can be classified by (for details, see Table 8-1):
l Physical port/logical port. In this mode, the "Any" rule is adopted, and traffic on the entire
port is classified as one traffic stream. Hence, a single port carries a single service.
l EtherType. In this mode, traffic classification differentiates between the IPoE and PPPoE
encapsulation types.
l CVLAN.
l CVLAN+802.1p priority.
l CVLAN+EtherType.
l other-all (any other): packets that do not match any other rule match this rule.
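The first-match behavior over the rule types listed above can be sketched as follows. This is an illustrative classifier only, under the assumption that rules are evaluated in order and that unset fields act as wildcards; the frame and rule field names are assumptions.

```python
# Illustrative first-match traffic classifier over the rule types listed
# above (CVLAN, 802.1p priority, EtherType). A frame is a dict; rule fields
# set to None act as wildcards.

def classify(frame, rules):
    """Return the name of the first rule the frame matches, else 'other-all'."""
    for rule in rules:
        if rule.get("cvlan") is not None and frame["cvlan"] != rule["cvlan"]:
            continue
        if rule.get("pri") is not None and frame["pri"] != rule["pri"]:
            continue
        if rule.get("ethtype") is not None and frame["ethtype"] != rule["ethtype"]:
            continue
        return rule["name"]
    return "other-all"
```

A stream matching CVLAN+802.1p falls through to a plain CVLAN rule when the priority differs, and to "other-all" when nothing matches.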
8.2.3 Availability
License Support
The traffic classification feature does not require a license.
Version Support
Product | Version
MA5600T | V800R203C00 and V800R010C00
8.2.4 Principle
The DiffServ QoS model classifies traffic and performs the traffic adjustment function at the network edge. The traffic adjustment function includes metering, marking, shaping, and policing. The DiffServ QoS model implements complicated classification and adjustment functions only on the edge node of the network. By marking a proper priority (IP precedence) in the DS field of the IPv4 or IPv6 header, traffic streams are aggregated into behavior aggregates (BAs), and different forwarding behaviors are adopted according to the marked priority.
The MA5600T implements the following QoS technologies:
l Classification and marking
l Traffic policing and shaping
l Queuing
Figure 8-3 shows the QoS processing model of the MA5600T.
Figure 8-3 QoS processing model of the MA5600T (after traffic classification, one tag action is applied: add a one-layer VLAN, add a two-layer VLAN, switch a one-layer VLAN, switch a two-layer VLAN, or transparent transmission; the stream then undergoes CAR and priority processing, and queues 1-n are scheduled in PQ, WRR, or PQ+WRR mode before transmission)
Traffic Classification
Traffic classification on the MA5600T is a technology that differentiates user services according
to the characteristics of user's Ethernet packets. The major purpose of traffic classification is to
support multi-service applications and guarantee QoS for each service (each traffic stream) of
each user.
After packets enter the MA5600T, the MA5600T classifies them and then provides different
QoS services for different traffic streams.
Figure 8-4 shows the traffic classification process.
8.3.1 Overview
Definition
Priority processing of the MA5600T mainly includes remarking the VLAN priority, trusting the
user-side CoS priority, and trusting the user-side ToS priority for packets.
Purpose
According to different priority processing policies, the inner and outer VLAN priorities are
configured or the user-side priority is trusted for traffic streams. In this way, packets are
scheduled according to their priorities when congestion occurs on the local device or upper-layer
network.
8.3.2 Specifications
The specifications of priority processing are as follows:
8.3.3 Availability
Relevant NEs
The priority processing feature involves only the MA5600T.
License Support
The priority processing feature does not require a license.
Version Support
Product Version
8.3.4 Principle
With the priority processing feature, the 802.1p priority is remarked for packets according to
certain rules. Priority processing is a preparation for the queue scheduling of the MA5600T. The
MA5600T schedules packets to enter queues according to the outer-VLAN priority. At the same
time, priority processing is also a preparation for the scheduling of the upper-layer network. For
details, see the overall QoS model of the MA5600T in Figure 8-5.
Figure 8-5 Overall QoS model of the MA5600T (traffic classification; add or switch one- or two-layer VLAN tags, or transparent transmission; CAR; priority processing; queues 1-n scheduled in PQ, WRR, or PQ+WRR mode; transmission)
(Figure: the four bytes of the 802.1q tag, with bit positions 7-0 marked in each byte)
The preceding figure shows the Ethernet frame format defined in 802.1q. The four-byte
802.1q header contains the following contents:
l Tag protocol identifier (TPID): two-byte tag protocol identifier, with the value of 0x8100.
l Tag control information (TCI): two-byte tag control information defined by IEEE; it is carried in frames tagged with the 802.1q label.
l The TCI is divided into the following three fields:
VLAN identifier (VLAN ID): 12-bit; indicates the VLAN ID. Up to 4096 VLANs
are supported. All the data packets transmitted from the host that supports 802.1q
contain this field, indicating the VLAN to which the data packets belong.
Canonical format indicator (CFI): one bit; it is used in frames exchanged between bus-type Ethernet networks and FDDI or token ring networks.
Priority: three-bit; indicates the priority of the frame. Up to eight priorities are
supported. It determines the data packet to be transmitted first in case of switch
congestion.
The local media IP address and signaling IP address of the MA5600T can be configured
in one VLAN or different VLANs according to the networking requirements. The 802.1p
priorities (in the range of 0-7) can be set for the media IP address and signaling IP address
respectively. By default, the priority for either the media IP address or the signaling IP
address is 6.
2. TOS
As defined in the IP protocol, the DSCP and TOS occupy the same field (one-byte) in the
IP header. The IP bearer network device identifies whether DSCP or TOS is filled in, and
schedules and forwards it according to the settings to ensure the QoS for different services.
The type of service (TOS) field contains a three-bit precedence sub field (ignored currently),
a four-bit TOS sub field, and one unused bit (it must be set to 0). The four bits in the TOS
sub field represent the minimum delay, the maximum throughput, the maximum reliability,
and the minimum cost respectively. Only one of the four bits can be set. If all four bits are
set to 0, it indicates the common service.
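The shared one-byte field described above can be sketched by extracting each interpretation from the same byte (bit layout per RFC 791 and RFC 2474); the function name is an assumption.

```python
# Sketch of how precedence, ToS, and DSCP share the same one-byte field of
# the IP header: precedence is the top 3 bits, the ToS sub field the next
# 4 bits, while DSCP reinterprets the top 6 bits of the same byte.

def split_tos_byte(b):
    precedence = (b >> 5) & 0x7    # 3-bit precedence sub field
    tos_bits   = (b >> 1) & 0xF    # 4-bit ToS sub field (delay/throughput/
                                   # reliability/cost)
    dscp       = (b >> 2) & 0x3F   # same byte read as a 6-bit DSCP
    return precedence, tos_bits, dscp
```

For example, the byte 0xB8 reads as precedence 5 under the ToS interpretation and as DSCP 46 under the DiffServ interpretation.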
(Figure: the one-byte field in the IP header, interpreted either as DSCP (6 bits) plus 2 unused bits, or as Precedence (3 bits), ToS (4 bits), and 1 unused bit)
All upstream packets support these three priority processing modes. Downstream packets, however, support only the trust-user-CoS and local-priority modes, and do not support the trust-user-ToS mode.
When the VLAN is a stacking VLAN, the priority of the inner VLAN is configurable. If the
priority of the inner VLAN is not configured, the default value (priority 0) is used.
8.4.1 Overview
Definition
Traffic policing (also called traffic policy) is used to limit the traffic volume and address the burst of a certain incoming connection on a network by measuring the arrival rate of traffic streams. When the packets meet certain conditions, for example, when the traffic of a connection is too heavy, traffic policing takes different actions, such as dropping the packets or coloring the packets (re-setting the priority of the packets). The common method is to limit the traffic of one type of packets using CAR, for example, restricting HTTP packets to no more than 50% of the network bandwidth.
In a PON system, upstream bandwidth conflict between ONUs is resolved by the DBA
technology.
Purpose
The purposes of traffic policing are as follows:
l To ensure that the user traffic meets the service level agreement (SLA).
l To adjust the outgoing traffic and suppress the burst traffic for QoS guarantee.
l To control the rate of broadcast packets through packet suppression.
8.4.2 Specifications
The specifications of traffic policing are as follows:
l CAR based on the port
l CAR based on the traffic stream
l CAR based on port+priority
l CAR using the trTCM algorithm (RFC2698)
l Priority-based CAR (enhanced based on CAR using the trTCM algorithm)
l Color-based early drop
l A maximum of 512 IP traffic profiles defined in MEF10
8.4.3 Availability
Relevant NEs
The traffic policing feature involves only the MA5600T.
License Support
The traffic policing feature does not require a license.
Version Support
CAR uses the token bucket (TB) for traffic control. Each packet must use tokens equal to the packet length for transmission. As shown in the following figure, traffic policing is implemented as follows:
Packets are classified. Then, packets of a certain type, after being specified with the traffic
feature, enter the TB for processing. If the TB stores sufficient tokens, packets are transmitted.
If the TB stores insufficient tokens, packets are dropped. In this way, the system controls the
traffic of packets of a certain type.
The system generates tokens to the TB at a specified rate. In addition, the TB has a specified
capacity. When the TB is full of tokens, the system suspends token generation until a token is
used. Packet transmission consumes a certain number of tokens. The consumption of tokens
depends on the packet length. When the remaining tokens in the TB are insufficient for
transmitting a packet, the system drops the packet.
A TB is a good tool for traffic control. When the TB is full of tokens, the system can transmit
all the packets represented by the tokens. In this way, the system allows for burst transmission.
When the TB is empty of tokens, the system transmits no packets. The system resumes the
transmission only after a new token is generated. In this way, the rate of traffic transmission is
limited to be lower than or equal to that of token generation.
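The token bucket behavior described above can be sketched as follows. This is a minimal single-bucket model under the stated rules (refill at a fixed rate up to the capacity, transmit only when enough tokens remain); the class and parameter names are assumptions.

```python
# Minimal single-rate token bucket sketch: tokens accrue at `rate` units
# per second up to `capacity`; a packet is transmitted only when enough
# tokens remain, otherwise it is dropped.

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity        # the bucket starts full
        self.last = 0.0

    def offer(self, now, length):
        # Refill for the elapsed interval, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= length:
            self.tokens -= length     # transmit: consume tokens = packet length
            return True
        return False                  # insufficient tokens: drop
```

A full bucket allows a burst up to its capacity; once drained, traffic is limited to the token generation rate.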
(Figure: packets are classified; a stream passes through the token bucket and is transmitted if sufficient tokens remain, or discarded otherwise)
l For the Ethernet port, run the line-rate command to limit the upstream and downstream
rates of the port.
l For the xDSL port, change the upstream and downstream rates in the line profile to limit
the rate of the port.
(Figure: trTCM with two token buckets: bucket P of size PBS is filled at PIR and bucket C of size CBS is filled at CIR; in color-blind or color-aware mode, packets are colored green, yellow, or red by comparing the packet length with the tokens in each bucket)
Assume that there are two independent TBs, P and C, with sizes PBS and CBS respectively. Tp(t) and Tc(t) represent the number of tokens in P and C respectively at time t. Initially (t = 0), P and C are full, that is, Tp(0) = PBS and Tc(0) = CBS.
Then, Tp is incremented PIR times per second until reaching PBS, and Tc is incremented CIR times per second until reaching CBS.
l In the color-blind mode, when a packet of B bytes arrives at time t, the following operations are performed:
1. If Tp(t) - B < 0, the packet is marked red. Otherwise, the device proceeds to the next step.
2. If Tc(t) - B < 0, the packet is marked yellow and Tp is decreased by B. Otherwise, the device proceeds to the next step.
3. The packet is marked green and both Tp and Tc are decreased by B.
l In the color-aware mode, when a packet of B bytes arrives at time t, the following operations are performed:
1. If the packet has been marked red or if Tp(t) - B < 0, the packet is marked red. Otherwise, the device proceeds to the next step.
2. If the packet has been marked yellow or if Tc(t) - B < 0, the packet is marked yellow and Tp is decreased by B. Otherwise, the device proceeds to the next step.
3. The packet is marked green and both Tp and Tc are decreased by B.
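The three color-blind marking steps above can be sketched directly. Refill of the buckets is omitted so the marking logic stands out; this is an illustrative sketch of the RFC 2698 steps, and the function name is an assumption.

```python
# Sketch of the trTCM color-blind marking steps (RFC 2698). tp/tc are the
# current token counts of the P (PIR/PBS) and C (CIR/CBS) buckets; token
# refill is handled elsewhere and omitted here.

def trtcm_color_blind(tp, tc, length):
    """Return (color, new_tp, new_tc) for a packet of `length` bytes."""
    if tp - length < 0:
        return "red", tp, tc                   # step 1: P bucket is short
    if tc - length < 0:
        return "yellow", tp - length, tc       # step 2: only Tp is decremented
    return "green", tp - length, tc - length   # step 3: both buckets pay
```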
Packet Suppression
Packet suppression refers to the suppression of broadcast, multicast, and unknown unicast
packets. In normal conditions, broadcast, multicast, and unknown unicast packets are broadcast
in a VLAN. The purpose of suppressing these packets is to prevent them from exhausting the
network resources so as to avoid network congestion.
The traffic-suppress command can be executed to set the suppression level of broadcast,
multicast, or unknown unicast packets on a port. After the suppression level is set successfully,
the system limits the traffic of the port according to the threshold of the corresponding traffic
suppression level if the traffic control is enabled on the port. Then, the system will drop the
traffic that exceeds the threshold.
(Figure: the ONU reports its queue status to the OLT in a DBA report; the DBA algorithm in the OLT control plane returns a BW Map, and the data-plane scheduler of the ONU transmits T-CONT traffic in the granted time slots)
DBA Profile
ONU upstream bandwidth control is implemented through the DBA profile bound to the T-CONT. There are five types of T-CONTs. In upstream service scheduling, different types of T-CONTs are selected according to the service type. Each T-CONT bandwidth type has its own QoS feature, which is mainly represented by bandwidth guarantee, including fixed, assured, assured+maximum, maximum, and hybrid mode (corresponding to type1 to type5 in Table 8-5).
Fixed bandwidth | X | No | No | No | X
Assured bandwidth | No | Y | Y | No | Y
NOTE
In Table 8-5, "X" indicates the fixed bandwidth, "Y" assured bandwidth, and "Z" maximum bandwidth.
8.5.1 Overview
Definition
The access control list (ACL) is a feature in which a series of match rules are configured to
identify and filter data packets that match the rules. After the specific objects are identified, the
system permits or refuses the corresponding data packets to pass according to the preset rules.
Purpose
ACL-based traffic filtering is a prerequisite for QoS. ACL together with QoS improves the
system security.
8.5.2 Specifications
The specifications of the ACL feature are as follows:
l ACLs are numbered from 2000 to 5999, and an ACL can be defined with any of these 4000 IDs.
l The system supports a maximum of 64 ACLs, each supporting a maximum of 32 rules.
Table 8-6 describes each type of ACL.
l The user can use any of the first 80 bytes in the packet to define the ACL rules. Multiple
fields can be configured at the same time.
l The system supports setting of the ACL time segment. A maximum of 256 time segments
can be set.
Basic ACL (2000-2999): the rules of a basic ACL can be defined only according to the L3 source IP address and the fragment field for analyzing and processing data packets.
Link layer ACL (4000-4999): the rules of a link layer ACL can be defined according to the following information:
l MAC address
l Source VLAN ID
l L2 protocol type
l Destination MAC address
l Link layer information such as QoS
8.5.3 Availability
License Support
The ACL function of the MA5600T is under license. Therefore, the license is required for
accessing the corresponding service.
Version Support
Product Version
Feature Dependency
When the ACL rules do not conflict with each other, the priority of the ACL rule activated earlier
is lower, and the priority of the ACL rule activated later is higher.
Hardware Support
No additional hardware is required for supporting the ACL feature.
8.5.4 Principle
The system matches and processes the input packets according to the ACL rules:
l If the packets match an ACL rule, they undergo further QoS actions, including packet filtering, priority marking, port rate limitation, traffic control, traffic measurement, packet redirection, and packet mirroring. After being processed using the preceding QoS actions, the packets are forwarded and output.
Packet filtering: determines whether to drop the packets according to whether they match an ACL rule.
Priority marking: marks the priority (including ToS and 802.1p) of the packets that match an ACL rule.
Traffic control: controls the traffic of the packets that match an ACL rule.
Port rate limitation: limits the rate at which the Ethernet port transmits packets.
Traffic measurement: measures the traffic of the packets that match an ACL rule.
Packet redirection: redirects the packets that match an ACL rule, that is, re-specifies the port that forwards the packets (the original port no longer receives or forwards them).
Packet mirroring: performs traffic mirroring on the packets that match an ACL rule, that is, packet streams that match an ACL rule can be copied and output to other ports.
l If the packets do not match an ACL rule, the packets are dropped or forwarded according
to the definition of the ACL rule.
(Figure: matched packets undergo packet filtering, priority tagging, traffic limiting, and port rate limiting; packets that fail the filter are discarded)
8.6.1 Overview
Definition
When congestion occurs, the system takes a series of QoS actions to process the packets that
cause congestion. Such a series of actions is congestion avoidance and management. Generally,
congestion avoidance is implemented using the early drop algorithm, and congestion
management is implemented through queue scheduling.
Purpose
Congestion avoidance and management is to differentiate the priorities of services and process
packets with higher priorities first when congestion occurs in the system.
8.6.2 Specifications
The specifications of congestion avoidance and management are as follows:
Congestion avoidance:
Congestion management:
l Three queue scheduling modes: priority queuing (PQ), weighted round robin (WRR), and
PQ+WRR
8.6.3 Availability
Relevant NEs
The congestion avoidance and management feature involves only the MA5600T.
License Support
The congestion avoidance and management feature does not require a license.
Version Support
8.6.4 Principle
Priority-based Early Drop
Packets with higher priorities can enter greater-depth, more burst-tolerant queues than packets with lower priorities and hence are less likely to be dropped.
Currently, only the GPBD board supports priority-based early drop. The early drop depth of
each queue is configurable.
Color-based WRED
After the system marks packets in different colors (yellow and green; red packets are directly
dropped) using the trTCM algorithm, these packets have different drop thresholds when entering
a port queue. In this way, when the queue is not full although port congestion occurs, traffic
configured with CIR can pass while traffic configured with PIR is dropped early.
Currently, only the SCUN board and the OPGD board support color-based WRED. The start
drop threshold, maximum drop threshold, and drop percentage of yellow and green packets are
configurable.
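The color-based WRED behavior described above can be sketched with a per-color drop curve. This is an illustrative model only: the thresholds and drop percentages below are made-up examples, not device defaults, and the function name is an assumption.

```python
# Sketch of color-based WRED: each color has a (start threshold, maximum
# threshold, maximum drop probability) profile. Below the start threshold
# nothing is dropped; between the thresholds the drop probability ramps up
# linearly; at or above the maximum threshold everything is dropped.

def wred_drop_probability(queue_len, color, profile):
    start, maximum, max_pct = profile[color]
    if queue_len < start:
        return 0.0                              # below start threshold: keep
    if queue_len >= maximum:
        return 1.0                              # above max threshold: tail-drop
    # linear ramp from 0 to max_pct between the two thresholds
    return max_pct * (queue_len - start) / (maximum - start)
```

Giving yellow (PIR) traffic a lower start threshold than green (CIR) traffic reproduces the behavior above: under moderate congestion, CIR-conformant traffic still passes while excess PIR traffic is dropped early.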
Drop policy Defines the rules for the device Packet loss
to drop packets. The
commonly used drop policies
are tail drop policy and
WRED.
Scheduling mode in one Packets may be re-prioritized Bandwidth, delay, jitter, and
queue in a queue. In most cases, FIFO packet loss
is used.
Scheduling mode Defines from which queue Bandwidth, delay, jitter, and
between queues packets are taken out to the packet loss
outgoing queue.
PQ puts packets with different priorities into different queues for scheduling. All boards in
the system support eight PQs; they also support configuring the weights of PQs and mapping
packets with different priorities to PQs.
PQ
PQ classifies packets and puts packets into the corresponding queues according to the packet
classification result. PQ queues are classified into high-priority queues, medium-priority queues,
normal-priority queues, and low-priority queues. PQ takes out all packets from a high-priority
queue and transmits them. After such a transmission is completed, PQ performs the same on all
packets in a medium-priority queue, a normal-priority queue, and a low-priority queue one by
one.
In this way, packets in a queue with a higher priority precede packets in a queue with a lower
priority and therefore are processed preferentially, even in case of congestion. This ensures that
packets for key services are processed first. Packets of non-key services (such as email) are
processed only when the network is idle after key services are processed, thereby utilizing
network resources efficiently.
Figure 8-12 PQ (data packets to be transmitted through the interface are classified into high-,
medium-, normal-, and low-priority queues; queue scheduling takes packets out of the queues in
priority order)
When packets reach a port, they are classified first and are then put into the tail of the queues to
which they belong according to the packet classification result. During packet transmission, the
packets in the queue with a higher priority are always transmitted first. After that, the packets
in the queue with a lower priority are transmitted. In this way, a short delay is ensured for the
packets with a higher priority.
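The strict-priority behavior described above can be sketched as follows (the class name, queue
names, and API are illustrative, not the device's implementation):

```python
from collections import deque

# PQ: four queues; the scheduler always serves the highest-priority
# non-empty queue before looking at lower ones.
PRIORITIES = ("high", "medium", "normal", "low")

class PQScheduler:
    def __init__(self):
        self.queues = {p: deque() for p in PRIORITIES}

    def enqueue(self, packet, priority):
        # classify the packet, then insert it at the tail of its queue
        self.queues[priority].append(packet)

    def dequeue(self):
        # scan queues in strict priority order
        for p in PRIORITIES:
            if self.queues[p]:
                return self.queues[p].popleft()
        return None  # all queues empty
```

A low-priority packet is transmitted only once every higher-priority queue is empty, which is
exactly why PQ can starve non-key services under sustained load.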
WRR
WRR (Weighted Round Robin) classifies packets and puts them into the corresponding queues
according to the packet classification result. WRR queues are assigned bandwidth on a port
according to the bandwidth percentages defined by the user. When packets travel out of queues,
WRR takes a certain number of packets from the queue and transmits them from the port
according to the pre-defined bandwidth percentage.
PQ+WRR
PQ+WRR is a combination of the PQ scheduling mode and the WRR scheduling mode. When the weight
value of one or more queues is set to 0, the queue scheduling mode is PQ+WRR. In this mode, the
system first schedules the queues with the weight value 0 in the PQ mode, and then schedules the other queues
in the WRR mode. With this flexible scheduling mode, the services that must be guaranteed are
scheduled in the PQ mode, and the services with lower priorities are scheduled in the WRR mode
when there is remaining bandwidth. In this way, services with higher priorities are ensured and
those with lower priorities can obtain bandwidth when there is remaining bandwidth.
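A minimal sketch of PQ+WRR scheduling: weight-0 queues are served strictly first, and the
remaining queues share service by weight. The class name, the credit-based WRR mechanism, and
the weights are illustrative assumptions, not the device's implementation:

```python
from collections import deque

class PqWrrScheduler:
    """PQ+WRR sketch: queues with weight 0 are strict-priority;
    the rest share the remaining bandwidth by weight (WRR)."""

    def __init__(self, weights):
        # weights: per-queue weights; 0 means the queue is scheduled by PQ
        self.queues = [deque() for _ in weights]
        self.weights = weights
        self._wrr_order = [i for i, w in enumerate(weights) if w > 0]
        self._credits = {i: 0 for i in self._wrr_order}

    def enqueue(self, qid, packet):
        self.queues[qid].append(packet)

    def dequeue(self):
        # 1. PQ pass: weight-0 queues are always served first
        for i, w in enumerate(self.weights):
            if w == 0 and self.queues[i]:
                return self.queues[i].popleft()
        # 2. WRR pass: serve by credits, refilling at most once per call
        for _ in range(2):
            for i in self._wrr_order:
                if self._credits[i] > 0 and self.queues[i]:
                    self._credits[i] -= 1
                    return self.queues[i].popleft()
            for i in self._wrr_order:
                self._credits[i] = self.weights[i]
        return None
```

With weights [0, 2, 1], queue 0 is always drained first (guaranteed services), after which
queues 1 and 2 are served roughly in a 2:1 ratio from the remaining bandwidth.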
l On the OLT, CAR is implemented for traffic streams of VoIP and Internet access services
and then CAR-group-based control is implemented for users, achieving HQoS.
l Generally, the priorities of the VoIP service, multicast service, and Internet access service
are configured in descending order.
(Figure: the HGW connects to the ONT UNI ports with VoIP, video, and PPPoE/DHCP Internet access
services on VLANs 400, 500, and 600; the ONT carries them in GEM ports 131, 132, and 133 within
T-CONT 2; the OLT applies S+C VLAN translation 900: 1, 901, and 902: 1 and connects through the
upstream port to the L2/L3 network, where VPN n and RSP n are configured.)
Based on the preceding network, Table 8-9 provides the VLAN data plan, Table 8-10 the QoS data
plan, and Table 8-11 the service bandwidth data plan.
Queue scheduling mode: PQ+WRR, with queue weights VoIP 6, multicast 4, and Internet access 2.
Service bandwidth: RSP 1, VoIP, 100 Mbit/s, CIR 128 kbit/s, PIR 128 kbit/s; RSP 2, VoIP,
100 Mbit/s, CIR 128 kbit/s, PIR 128 kbit/s.
(Figure: the HGW connects through the modem to the DSLAM UNI ports with VoIP, video, and
PPPoE/DHCP Internet access services on VLANs 100, 200, and 300 and VLANs 400, 500, and 600; the
OLT applies S+C VLAN translation (800: 1, 801, 802: 1 and 900: 1, 901, 902: 1) and connects
through the upstream port to the L2/L3 network, where VPN n and RSP n, including VPN 1 and
RSP 1, are configured.)
Based on the preceding network, Table 8-12 provides the VLAN data plan, Table 8-13 the QoS data
plan, and Table 8-14 the service bandwidth data plan.
Queue scheduling mode: PQ+WRR, with queue weights VoIP 6, multicast 4, and Internet access 2.
Service bandwidth: RSP 1, VoIP, 100 Mbit/s, CIR 128 kbit/s, PIR 128 kbit/s; RSP 2, VoIP,
100 Mbit/s, CIR 128 kbit/s, PIR 128/256 kbit/s.
Glossary
HQoS user: An HQoS user, which does not map an actual access user, is a bandwidth guarantee and
scheduling unit. An actual access user can map one or more HQoS users, depending on the specific
service planning.
Assured bandwidth: When a user is provided with the assured bandwidth, all the traffic within
this bandwidth is allowed to pass.
Burst bandwidth: Burst bandwidth refers to the user's traffic that is allowed to exceed the
assured bandwidth. Traffic within this bandwidth can pass a port when the port has remaining
bandwidth.
CP: content provider
PQ: priority queuing
9 L3 Features
This topic describes the network layer (L3) features implemented by the system.
9.1 ARP
The Address Resolution Protocol (ARP) is a protocol used to convert an IP address to a MAC
address. This topic introduces the feature and describes its principle and reference documents.
9.2 ARP Proxy
ARP proxy is a process of handling ARP requests. This topic introduces the feature and describes
its principle and reference documents.
9.3 DHCP Relay
The Dynamic Host Configuration Protocol (DHCP) relay is a process in which the DHCP clients
in different physical subnets can obtain IP addresses which are dynamically allocated from the
same DHCP server.
9.4 DHCP Proxy
DHCP proxy is a mechanism in which the MA5600T acts as a proxy for processing the DHCP
packets exchanged between a DHCP server and a DHCP client. That is, the MA5600T modifies
the DHCP packets based on the requirements.
9.5 IP-aware Bridge
IP-aware bridge is a feature in which an access node can implement L3 forwarding without being
configured with an IP address.
9.6 Routing
Routing is a common term used for describing the path through which the packets from a host
in a network travel to a host in another network.
9.1 ARP
The Address Resolution Protocol (ARP) is a protocol used to convert an IP address to a MAC
address. This topic introduces the feature and describes its principle and reference documents.
9.1.1 Introduction
Definition
The Address Resolution Protocol (ARP) is a protocol which is used to convert an IP address to
a MAC address. It belongs to the TCP/IP protocol suite.
Purpose
The IP address represents only the network layer address of a host. If a host in a network needs
to send the network layer data to a destination host, the host must know the physical address
(MAC address) of the destination host. Therefore, an IP address has to be translated into a MAC
address. ARP is used for translating an IP address to a MAC address.
9.1.2 Specifications
When using SCUN, the MA5600T supports 8192 ARP entries, including 512 static entries and
7680 dynamic entries.
9.1.4 Availability
License Support
The ARP feature is a basic feature of the MA5600T. Therefore, no license is required for the
corresponding service.
Version Support
Product Version
Hardware Support
No additional hardware is required for supporting the ARP feature.
9.1.5 Principle
The ARP mapping list of a host contains a series of mappings between IP addresses and
associated MAC addresses of other hosts that have communicated with this host recently.
Implementation of ARP
ARP enables two hosts in a network to interconnect with each other at L2.
Assume that there are two PCs: host A and host B with IP addresses IP_A and IP_B respectively.
Host A sends messages to host B in the following way:
1. Host A checks its ARP mapping list for the ARP mapping entry of IP_B.
l If host A finds the MAC address of host B, host A encapsulates the IP data packets
according to the MAC address and then sends them to host B.
l If host A does not find the MAC Address of host B, host A puts the data packets in the
ARP waiting queue, initiates an ARP request, and then broadcasts it on the Ethernet.
The ARP request contains the IP address of host B and the IP address and MAC address
of host A.
2. As the ARP request is broadcast, all the hosts on the Ethernet can receive it. Only the
requested host (host B), however, responds to the request.
3. Host B stores the IP and MAC addresses of the request initiator (host A) contained in the
request, in its own ARP mapping list.
4. Host B returns an ARP response containing the MAC address of host B to host A. Such a
response is no longer broadcast, but sent to host A directly.
5. After receiving the response, host A extracts the IP address and MAC address of host B,
and adds them to its own ARP mapping list. After that, host A transmits all the data packets
in the waiting queue destined for host B.
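Steps 1-5 above can be sketched as a toy simulation. The Host class, field names, and packet
representation are illustrative; real ARP frames carry more fields:

```python
class Host:
    def __init__(self, ip, mac):
        self.ip, self.mac = ip, mac
        self.arp_table = {}   # IP address -> MAC address mappings
        self.waiting = []     # packets queued while ARP is pending
        self.sent = []        # (destination MAC, payload) pairs delivered

def send_ip_packet(src, dst_ip, payload, hosts):
    """Model the ARP flow: check the table, else broadcast a request,
    let only the target learn and reply, then flush the waiting queue."""
    if dst_ip in src.arp_table:                      # step 1: cache hit
        src.sent.append((src.arp_table[dst_ip], payload))
        return
    src.waiting.append((dst_ip, payload))            # step 1: cache miss
    for h in hosts:                                  # step 2: broadcast request
        if h.ip == dst_ip:                           # only the target responds
            h.arp_table[src.ip] = src.mac            # step 3: target learns sender
            src.arp_table[h.ip] = h.mac              # steps 4-5: unicast reply learned
    for dst, pl in [w for w in src.waiting if w[0] in src.arp_table]:
        src.sent.append((src.arp_table[dst], pl))    # step 5: flush waiting queue
        src.waiting.remove((dst, pl))
```

A second packet to the same destination then takes the cache-hit path directly, which is the
point of keeping the ARP mapping list.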
In general, dynamic ARP is used. Static ARP is needed only when you need to adjust ARP entries
manually.
A static ARP entry remains valid as long as the MA5600T is running. The aging time of a dynamic
ARP entry is configurable; the default value is 20 minutes.
9.2.1 Introduction
Definition
When a host sends an ARP request to another host, the request is processed by the access device
connected to the two hosts. This process is called ARP proxy.
Purpose
On the MA5600T, ARP proxy is often used for interconnection between sub VLANs in a super
VLAN.
9.2.2 Specifications
The MA5600T supports ARP proxy.
9.2.4 Availability
License Support
The ARP proxy feature is a basic feature of the MA5600T. Therefore, no license is required for
the corresponding service.
Version Support
Product Version
Hardware Support
No additional hardware is required for supporting the ARP proxy feature.
9.2.5 Principle
As shown in Figure 9-1, PC 1 is in sub VLAN 1, and PC 2 is in sub VLAN 2. They are isolated
at L2. PC 1, PC 2 and the virtual L3 interface are in the same subnet.
(Figure 9-1: PC 1 (IP 1.1.1.2/24, MAC 00-e0-fc-00-00-02) in sub VLAN 1 and PC 2
(IP 1.1.1.15/24, MAC 00-e0-fc-00-00-15) in sub VLAN 2 are isolated at L2 but communicate
through the super VLAN.)
9.3.1 Introduction
Definition
The Dynamic Host Configuration Protocol (DHCP) relay is a process in which cross-subnet
forwarding of DHCP broadcast packets is implemented between the DHCP client and the DHCP
server. In this way, the DHCP clients in different physical subnets can obtain IP addresses which
can be dynamically allocated from the same DHCP server.
Purpose
DHCP works in client-server mode.
l The DHCP client dynamically requests the configuration data from the DHCP server.
l The DHCP server dynamically allocates the data including the IP address to the client.
Initially, DHCP was only suitable for the applications where the DHCP client and the DHCP
server were located on the same subnet and could not work across the subnet. In this case, each
subnet had to be configured with a DHCP server, which was uneconomical.
The introduction of the DHCP relay solves this problem. The DHCP relay serves as a relay
between the DHCP client and the DHCP server, which are located on different subnets. With
the DHCP relay, the DHCP packets can be relayed to the destination DHCP server or client
across subnets. In this way, multiple DHCP clients on different networks can use the same DHCP
server. This is economical and convenient for centralized management.
9.3.2 Specifications
The MA5600T supports the following DHCP relay specifications:
l Up to 20 DHCP server groups, with an active DHCP server and 1-3 standby DHCP servers
in each group
l Selection of a DHCP server in three modes:
Standard mode
DHCP option60 mode
MAC address segment mode
l Up to 128 DHCP option60 domains
A domain name is a case-insensitive character string of 1-32 characters.
l Up to 128 MAC address segments
The name of a MAC address segment is a case-insensitive character string of 1-32
characters.
9.3.4 Availability
License Support
The DHCP relay feature is an optional feature of the MA5600T. Therefore, a license is required
for the corresponding service.
Version Support
Product Version
Feature Dependency
The DHCP relay is based on the VLAN. The DHCP relay based on a board or a port is not
supported.
Hardware Support
No additional hardware is required for supporting the DHCP relay feature.
9.3.5 Principle
When a DHCP client starts up and initializes DHCP, it broadcasts the DHCP configuration
request packets on the LAN.
l If there is a DHCP server on the LAN, no DHCP relay is required because the DHCP server
can directly configure DHCP for the DHCP clients on the LAN.
l If there is no DHCP server on the LAN, the DHCP relay function must be enabled on the
MA5600T. The MA5600T processes the received broadcast packets from the DHCP client
as follows:
1. Selects the DHCP server group in a specified mode.
2. Converts the received broadcast packets into unicast IP packets.
3. Forwards the converted packets to the selected DHCP server group.
The MA5600T supports selection of the DHCP server group in the following three modes:
l Standard mode
This mode differentiates users by VLANs. It is the most commonly used and simplest
DHCP relay mode. However, it cannot differentiate the service types in the same VLAN.
This is the default mode in the system.
l DHCP option60 mode
It is a mode in which the DHCP server group is selected according to the character string
(domain name) in the option60 field in a DHCP packet. In this mode, you must configure
the option60 domain name and the DHCP server group bound with the domain name in
advance.
This mode differentiates users by the domain information of the packets. It is a commonly
used DHCP relay mode and can differentiate the service types in the same VLAN.
l MAC address segment mode
It is a mode in which a DHCP server group is selected according to the source MAC address
of the DHCP packets. In this mode, you must configure the MAC address segment and the
DHCP server group bound with the MAC address segment in advance.
This mode differentiates users by the source MAC address segment of the packets and can
differentiate the service types in the same VLAN.
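The three selection modes can be sketched as a dispatch function. The packet fields,
configuration keys, and server-group names below are illustrative assumptions, not the device's
actual data structures:

```python
def select_server_group(packet, mode, config):
    """Pick a DHCP server group for a client's request packet.

    `packet` is a dict sketch with 'vlan', 'option60', and 'mac';
    `config` holds the per-mode bindings configured in advance.
    """
    if mode == "standard":
        # standard mode: differentiate users by VLAN only
        return config["vlan"].get(packet["vlan"])
    if mode == "option60":
        # option60 mode: match the case-insensitive domain string
        domain = (packet.get("option60") or "").lower()
        return config["option60"].get(domain)
    if mode == "mac-segment":
        # MAC segment mode: match the source MAC against configured ranges
        for (lo, hi), group in config["mac"].items():
            if lo <= packet["mac"] <= hi:
                return group
    return None  # no binding matched

CONFIG = {
    "vlan": {100: "group1"},
    "option60": {"iptv": "group2"},
    "mac": {("00-e0-fc-00-00-00", "00-e0-fc-ff-ff-ff"): "group3"},
}
```

The option60 and MAC-segment modes can tell services apart inside one VLAN, which the standard
mode cannot, matching the comparison above.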
The DHCP server configures the DHCP client according to the received configuration request,
and forwards the configuration data to the DHCP client through the DHCP relay. In this way,
the DHCP server dynamically configures the DHCP client.
Figure 9-2 shows the DHCP relay networking.
(Figure 9-2: multiple DHCP clients connect through the access node to the DHCP servers.)
9.4.1 Introduction
Definition
DHCP proxy is a mechanism in which the MA5600T acts as a proxy for processing the DHCP
packets exchanged between a DHCP server and a DHCP client. That is, the MA5600T modifies
the DHCP packets based on the requirements.
The DHCP proxy functions are the server ID proxy and the lease time proxy.
l Server ID proxy
Option 54 in a DHCP packet is called a server identifier (Server ID). The value of the option
54 Server ID is the IP address of a DHCP server and is used to identify the DHCP server.
The server ID proxy is a function for modifying option 54 in a DHCP packet so that the IP
address of the DHCP server is unavailable to the client. This prevents the attacks initiated
by the DHCP client to the DHCP server.
l Lease time proxy
The lease time of an IP address that a DHCP client applies for is related to options 51, 58,
and 59 in a DHCP packet. The lease time proxy is a function for modifying these options
in a DHCP packet so that a lease time is available to a client. This lease time is shorter than
that directly allocated by the DHCP server, which facilitates the lease time management.
Purpose
Based on different proxy functions, the DHCP proxy addresses different requirements:
l Server ID proxy
The IP address of the DHCP server can be screened to prevent a DHCP client from attacking
the DHCP server.
l Lease time proxy
The lease time for an IP address available to a DHCP client is long (which is often the case).
Therefore, in such a long lease time, the MA5600T is incapable of quickly perceiving
whether a user is online. This obstructs the service provisioning.
The lease time proxy, however, enables a DHCP client to obtain a shorter lease time for an IP
address. The MA5600T with the DHCP proxy function enabled is capable of quickly perceiving
whether a user is online or not. Meanwhile, the request packets from the DHCP client for re-
leasing an IP address during a short lease time are processed by the MA5600T and are no longer
forwarded to the DHCP server. This decreases the load of the DHCP server in frequently
processing the request packets when the short lease time expires.
9.4.2 Specifications
The MA5600T supports the following DHCP proxy specifications:
l The MA5600T supports up to 4K DHCP clients.
l The MA5600T supports globally enabling or disabling the DHCP proxy function.
l The user port and the subtending port support the DHCP proxy function.
9.4.4 Availability
Version Support
Product Version
Feature Dependency
The MA5600T DHCP proxy has the following limitations:
l When a common security feature is enabled, the MA5600T supports up to 8K DHCP clients.
When the DHCP proxy function is enabled, the MA5600T supports only 4K DHCP clients.
l When the L3 DHCP relay function is enabled, the MA5600T supports the DHCP proxy.
When only the L2 DHCP relay function is enabled, the MA5600T does not support the
DHCP proxy.
9.4.5 Principle
Application Scenario
The MA5600T supports the DHCP proxy only when the L3 DHCP relay function is enabled.
Both the user port and the subtending port support the DHCP proxy. Figure 9-3 shows an
application scenario of the DHCP proxy.
(Figure 9-3: the MA5600T connects through an L2 LAN switch to the DHCP server.)
Server ID Proxy
The MA5600T with the DHCP proxy function enabled can monitor all the DHCP packets
exchanged between a DHCP client and a DHCP server.
After the DHCP proxy function is enabled on the MA5600T, the exchange of packets (in the
case of the server ID proxy) between the DHCP server and the DHCP client is as shown in
Figure 9-4.
l In the downstream direction, the MA5600T modifies the value of option 54 in the response
packets (including Offer and ACK) sent by the DHCP server to its own IP address. After
the DHCP client receives the packets, option 54 in these packets is the IP address of the
MA5600T, and the related field in the DHCP packets is always the IP address of the
MA5600T hereafter.
l In the upstream direction, the MA5600T recovers the value of option 54 in the DHCP
packets sent from the DHCP client to the IP address of the actual DHCP server.
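The option 54 rewrite in both directions can be sketched as follows. The IP addresses and the
dict-based packet representation are illustrative assumptions:

```python
MA5600T_IP = "10.0.0.1"          # illustrative access-node IP
REAL_SERVER_IP = "192.168.1.1"   # illustrative DHCP server IP

def proxy_downstream(dhcp_packet):
    """Offer/ACK toward the client: hide the real DHCP server behind
    the access node's own IP address (option 54 rewrite)."""
    pkt = dict(dhcp_packet)
    pkt["option54"] = MA5600T_IP
    return pkt

def proxy_upstream(dhcp_packet):
    """Client packets toward the server: restore the real server ID
    so the DHCP server still recognizes itself in option 54."""
    pkt = dict(dhcp_packet)
    if pkt.get("option54") == MA5600T_IP:
        pkt["option54"] = REAL_SERVER_IP
    return pkt
```

After the rewrite, the client only ever sees the access node's address in option 54, which is
what shields the real server from client-side attacks.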
Figure 9-4 Exchange of packets between a DHCP server and a DHCP client (server ID proxy)
After the DHCP proxy function is enabled on the MA5600T, the exchange of packets (in the
case of the lease time proxy) between the DHCP server and the DHCP client is as shown in
Figure 9-5.
At the stage of applying for an IP address:
1. The DHCP client applies for an IP address, and the DHCP server responds with an Offer packet
carrying the lease time L1.
2. The MA5600T captures the response packet from the DHCP server, modifies the value of
L1 in the packet to a shorter lease time (L2) (which is configurable on the MA5600T), and
then sends the Offer (L2) packet to the DHCP client. In this way, the lease time for the IP
address allocated to the DHCP client is L2.
At the stage of re-leasing an IP address:
1. When the lease time (L2) expires, to re-lease the IP address, the DHCP client sends a request
packet to the DHCP server.
2. The MA5600T captures the request packet and determines whether to send the request
packet to the DHCP server based on L1.
a. If it is unnecessary to send the request packet to the DHCP server, the MA5600T
directly responds to the request packet and allows the DHCP client to re-lease the IP
address.
b. If it is necessary to send the request packet to the DHCP server, the MA5600T forwards
the request packet sent by the DHCP client to the DHCP server.
3. After receiving the request packet, the DHCP server sends the response packet if it approves
to re-lease the IP address to the DHCP client.
4. The MA5600T forwards the response packet sent by the DHCP server to the DHCP client.
Thus, the DHCP client is allowed to re-lease the IP address.
At the stage of releasing an IP address:
l If a DHCP client sends a request for releasing the IP address, the MA5600T forwards the
request to the DHCP server.
l If the MA5600T detects that the lease time (L2) of the DHCP client expires, but fails to
receive any request for re-leasing the IP address from the DHCP client, the MA5600T
directly sends a request to the DHCP server for releasing the IP address.
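The renew decision at the proxy can be sketched as one function. The field names and the simple
time model are illustrative; the real device tracks L1 and L2 per client lease:

```python
def handle_renew(now, client):
    """Decide how the proxy handles a renew Request when the short
    lease L2 expires.

    `client` tracks when the server-granted lease L1 runs out and
    the configured short lease L2 (all fields illustrative).
    """
    if now < client["l1_expiry"]:
        # Server lease L1 still valid: answer locally with an ACK
        # and extend the client's short lease by L2. The request is
        # NOT forwarded, which offloads the DHCP server.
        client["l2_expiry"] = now + client["l2"]
        return "local-ack"
    # L1 exhausted: the request must reach the real DHCP server.
    return "forward-to-server"
```

Because L2 << L1, most renew requests are answered locally, and the access node still notices
quickly when a user stops renewing (i.e., goes offline).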
Figure 9-5 Exchange of packets between a DHCP server and a DHCP client (lease time proxy)
(L1: lease time allocated by the DHCP server; L2: lease time configured by the DHCP proxy;
L2 << L1. At the stage of applying for an IP address, the Discover and Request packets pass
through the proxy unchanged, while the Offer(L1) and ACK(L1) packets from the server are
rewritten to Offer(L2) and ACK(L2) toward the client. At the stage of re-leasing based on L2,
the proxy answers the client's Request with an ACK locally. At the stage of re-leasing based on
L1, the Request and ACK are exchanged with the DHCP server.)
Definition
IP-aware bridge is a feature in which an access node can implement L3 forwarding without being
configured with an IP address.
Purpose
l To implement L3 forwarding. In this feature, a large number of user MAC addresses can
be replaced with the system MAC address of a device for packet forwarding.
l To identify the destination IP address (IP-aware) of users' traffic streams, and send the
traffic streams to the corresponding next hop (traffic split) according to route information.
l To terminate user-side ARP requests, terminate network-side ARP requests, and respond
by using ARP proxy.
l To implement ARP proxy between users so that users who are in the same VLAN and
isolated at L2 can interoperate at L3.
Benefits to Users
L3 forwarding can be implemented without occupying IP addresses or requiring the
configuration of IP addresses.
9.5.2 Specifications
l Maximum number of VLANs supporting IP-aware bridge: 16
l Maximum number of static routes exclusively used for IP-aware bridge: 32
l Maximum number of virtual IP addresses supported by each VLAN: 8
l Interval supported for periodically sending ARP packets: 5-3600s (180s by default)
9.5.3 Availability
License Support
The IP-aware bridge feature is a basic feature of the MA5600T. Therefore, the corresponding
service is provided without a license.
Version Support
Product Version
Limitations
l The IP-aware bridge feature is applicable only to IPoE encapsulation.
l The IP-aware bridge feature is applicable only to the VLAN with single tag and is not
applicable to QinQ VLAN or stacking VLAN.
l The IP-aware bridge feature is applicable only to the DHCP mode (dynamic IP users).
l The IP-aware bridge feature does not support dynamic routing protocols (RIP, BGP, OSPF,
and IS-IS) or upper-layer protocols such as PIM and NTP.
l The IP-aware bridge feature is not applicable to the subtending scenario.
l ARP interoperation is applicable only to users in the same VLAN and is not applicable to
super VLAN.
l ARP interoperation is applicable only to users of the same access node and is not applicable
to users of different devices.
9.5.4 Principle
Application Scenario
Figure 9-6 shows the application scenario of IP-aware bridge.
Ethernet
Aggregation Network
Distribution switch
Principle Description
Figure 9-7 shows the flow of L3 forwarding of IP-aware bridge in the upstream direction, and
Figure 9-8 shows that in the downstream direction.
(Figures 9-7 and 9-8: L3 forwarding of IP-aware bridge. The access node uses its system MAC BB;
the next hop has MAC DD; the WAN gateway (IP Y, MAC AA) performs NAT; the user's source IP is X
and the destination host IP is D. In the upstream direction, the access node terminates the
user-side ARP request by proxy ARP (reply MAC = BB), resolves the next hop with its own ARP
request (reply MAC = DD), rewrites the source and destination MAC addresses, and forwards the
packet at L3. In the downstream direction, the access node terminates the network-side ARP
request (reply MAC = BB) and forwards the packet at L3 to the user.)
l VLAN-based IP-aware bridge is similar to L3 forwarding but does not occupy IP addresses.
l The access node has the interface MAC address (system MAC address).
l The access node supports static routes but does not support dynamic routing protocols.
l The VLAN can be associated with the VRF (VPN instance). The routing entry and IP
address take effect within a VRF.
DHCP snooping
l The access node performs DHCP L2 relay to monitor the IP address application process of
users and record the IP address information about users.
l User-side ARP entries are generated according to the DHCP snooping results.
l The first mode: sending ARP requests by using a user IP address (default mode)
NOTE
ARP requests are not sent to the next hop when a valid user IP address does not exist.
After a user goes offline (the IP address is released), the user IP address will not be used. Instead, another
valid user IP address will be used.
l The second mode: sending ARP requests by using a virtual IP address or all-zero IP address
(optional mode)
When the user RG and the access node next hop do not belong to the same subnet, some
network equipment does not respond to ARP requests. In this case, ARP requests need
to be sent using a virtual IP address or all-zero IP address as the source IP address.
Each VLAN enabled with IP-aware bridge can be configured with eight virtual IP
addresses (corresponding to eight subnets).
When a corresponding virtual IP address is not available, 0.0.0.0 is used as the source
IP address (this method is also called dummy ARP).
l For user-side ARP requests (destination IP address is the user gateway, that is, the network-
side equipment of the access node, such as the BRAS)
The access node terminates user-side ARP requests and responds by using its own MAC
address (system MAC address).
l For network-side ARP requests (destination IP address is the user IP address)
The access node terminates network-side ARP requests and responds by using its own MAC
address (system MAC address).
l By default, the users in the same VLAN do not interoperate with each other.
l After global ARP proxy is enabled, users can interoperate at L3.
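The ARP termination rules above can be sketched as one decision function. The system MAC,
packet fields, and snooping-table layout are illustrative assumptions:

```python
SYSTEM_MAC = "00-e0-fc-00-00-bb"   # illustrative system MAC of the access node

def answer_arp(request, snooped_users, gateway_ip, proxy_enabled=False):
    """Terminate ARP on behalf of the access node.

    `snooped_users` maps user IP -> access port, learned via DHCP
    snooping. User-side requests for the gateway and network-side
    requests for a known user are both answered with the system MAC.
    User-to-user requests are answered only when global ARP proxy is
    enabled; otherwise L2-isolated users stay isolated.
    """
    target = request["target_ip"]
    if request["side"] == "user" and target == gateway_ip:
        return SYSTEM_MAC                 # terminate user-side request
    if request["side"] == "network" and target in snooped_users:
        return SYSTEM_MAC                 # terminate network-side request
    if request["side"] == "user" and target in snooped_users:
        return SYSTEM_MAC if proxy_enabled else None  # user-to-user case
    return None                           # unknown target: no reply
```

Returning the system MAC in every answered case is what lets one device MAC stand in for many
user MACs, as the Purpose section describes.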
9.6 Routing
Routing is a common term used for describing the path through which the packets from a host
in a network travel to a host in another network.
9.6.1 Introduction
Definition
Routing is a common term used for describing the path through which the packets from a host
in a network travel to a host in another network.
Routers send packets on the Internet. A router selects a suitable path in a network according to
the destination address included in a received packet, and sends the packet to the next router on
the path. In this way, the packet travels over the Internet until it reaches the destination host.
Purpose
The access equipment, serving as a basic element in the entire telecom network, must support
the functions of remote operation, management and maintenance on the equipment itself.
With the development of small-size access equipment that can be managed remotely, the access
equipment needs to feature the functions of a BRAS, such as allocation of network addresses
and user management. In this way, the access equipment must support the routing feature.
9.6.3 Availability
Hardware Support
No additional hardware is required for supporting the routing feature.
License Support
The dynamic routing function of the MA5600T is under license control. Therefore, a license is
required for the corresponding service.
VRF Limitation
l Any two VRFs cannot communicate with each other.
l The L3 features such as AAA, RADIUS, voice features, MPLS, multicast, NTP, and ACL
do not support configuring VRF.
9.6.4 Specifications
The MA5600T supports both static routes and dynamic routes. The supported route types and
dynamic routing protocols are as follows:
l Static routes
l Default routes
l RIP
l OSPF
l IS-IS
l BGP
l Equal and Weighted Cost Multi-Path (ECMP)
9.6.5 Principle
As shown in Figure 9-9, the packets from Host PC_A travel through three networks and two
routers until they reach Host PC_C and the hop count is three. If one node is connected to another
through a network, the two nodes are adjacent on the Internet. Similarly, adjacent routers mean
that these routers are connected to the same network. The hop count from a router in a network
to a host in the same network is zero.
(Figure 9-9: PC_A, PC_B, and PC_C interconnected through multiple routers; the path between
adjacent routers is a route segment.)
Routing Table
Each router maintains a routing table. The routing table is key for forwarding packets. The route
entries in the table are used for the following:
l Through which physical interface of the router a packet can be forwarded to a specific
subnet or host so as to reach the next router along the path.
l Whether the packet can be sent to the destination host in an interconnected network without
passing through other routers.
l Destination address
The destination address is a 32-bit field that identifies the destination IP address or
destination network of an IP packet.
l Subnet mask
The subnet mask consists of a sequence of "1"s, and can be expressed in dotted decimal
format or as the total number of consecutive "1"s. The mask is used with the destination
address to identify the subnet address of the destination host or router.
To obtain the subnet address of the destination host or router, perform an AND operation
for the destination address and the subnet mask.
For example, if a router's destination address and subnet mask are 129.102.8.10 and
255.255.0.0, respectively, the router's subnet address is 129.102.0.0.
l Output interface
The output interface specifies the interface of a router for IP packet forwarding.
l Next hop IP address
The next hop IP address indicates the next router through which an IP packet will pass.
l Route priority
The route with the highest priority (smallest value) will be the optimal one. You can
configure multiple routes with different priorities to the same destination, but only one
route is selected based on the priority for IP packet forwarding.
l cost: Indicates the cost of reaching the destination.
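The AND operation from the 129.102.8.10 / 255.255.0.0 example above can be checked with a few
lines of Python, a minimal sketch using the standard ipaddress module:

```python
import ipaddress

def subnet_address(dest, mask):
    """AND the destination address with the subnet mask to obtain
    the subnet address of the destination host or router."""
    d = int(ipaddress.IPv4Address(dest))   # 32-bit integer form of the address
    m = int(ipaddress.IPv4Address(mask))   # 32-bit integer form of the mask
    return str(ipaddress.IPv4Address(d & m))
```

Applying it to the document's example yields 129.102.0.0, since the mask keeps only the first
16 bits of the destination address.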
Route Classification
Based on the destination, routes can be classified as:
l Subnet route: Its destination is a subnet.
l Host route: Its destination is a host.
Based on the connection between the destination and the router, routes can be classified as:
l Direct route: Its destination network is directly connected to the router.
l Indirect route: Its destination network is not directly connected to the router.
To avoid large routing tables, a default route can be assigned. Once a packet fails to find a
dedicated route in the routing table, the default route is selected for forwarding the packet.
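Route selection with a default-route fallback can be sketched as a longest-prefix match. The
table contents echo the example addresses nearby; the function and next-hop values are
illustrative:

```python
import ipaddress

def lookup(routing_table, dest_ip):
    """Longest-prefix match: among all entries that contain the
    destination, pick the most specific one. A 0.0.0.0/0 default
    route matches everything and so wins only as a last resort."""
    dest = ipaddress.IPv4Address(dest_ip)
    best = None
    for prefix, next_hop in routing_table.items():
        net = ipaddress.IPv4Network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0]):
            best = (net.prefixlen, next_hop)
    return best[1] if best else None

TABLE = {
    "12.0.0.0/8": "11.0.0.2",
    "11.0.0.0/8": "direct",
    "0.0.0.0/0": "10.0.0.2",   # default route: keeps the table small
}
```

Any destination without a dedicated entry falls through to the default route, which is exactly
how a default route avoids large routing tables.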
Figure 9-10 shows some interconnected networks. The digits in each network represent the IP
address of the network. Router 8 is connected to three networks. Therefore, it has three IP
addresses and three physical ports.
(Figure 9-10: interconnected networks and routers; the labels include R1 and R4 and the
interface addresses 11.0.0.2, 12.0.0.1, 12.0.0.2, 12.0.0.3, and 14.0.0.1 on network 12.0.0.0.)
Table 9-6 shows the routing table of Router 8 in this networking.
Destination network    Next hop              Interface
10.0.0.0               Directly connected    2
11.0.0.0               Directly connected    1
12.0.0.0               11.0.0.2              1
13.0.0.0               Directly connected    3
14.0.0.0               13.0.0.2              3
15.0.0.0               10.0.0.2              2
16.0.0.0               10.0.0.2              2
Each routing protocol has a default priority. When multiple route sources exist, the route
discovered by the routing protocol with the highest priority becomes the current route.
Table 9-7 lists various routing protocols and the default priorities of the routes discovered by
them.
Routing protocol    Default priority
DIRECT              0
OSPF                10
INTERNAL EIGRP      50
STATIC              60
RIP                 100
IBGP                256
EBGP                256
UNKNOWN             255
The smaller the value, the higher the priority. In this table, "0" indicates the direct route, and
"255" indicates any route from an untrusted source.
You can define the priorities for all dynamic routing protocols except the direct route (DIRECT)
and the BGP (IBGP, EBGP). In addition, the priorities of any two static routes can be different.
Route Sharing
Different routing protocols can discover different routes because they use different algorithms.
A problem therefore arises: how to share the routes discovered by the various routing protocols.
A routing protocol might need to import routes discovered by other protocols to diversify its
own routes. However, the protocol should import only qualified routes, which is achieved by
setting attributes of the routes to be imported.
To implement a route policy, you must define the attributes of the routes to which the route
policy applies, such as the destination address and the address of the router distributing the
routes. You can define the matching rules in advance so that they can be applied in a route
policy for route distribution, reception, and import.
The MA5600T supports importing the routes discovered by one protocol to another protocol.
Each protocol has its own route importing mechanism.
Filters
The following describes the filters used by the MA5600T.
l ACL
An ACL is defined with a specified IP address and subnet range for identifying routes with
the desired destination segment address or next hop address.
l Address prefix list
An address prefix list is similar to an ACL in function, but is more flexible and easier to
understand. When applied to filter routes, the address prefix list matches against the
destination address field of the routes.
Identified by name, an address prefix list contains multiple entries. Each entry specifies a
matching range and is identified by an index number, which also specifies the matching
order.
During matching, the router checks the entries in ascending order of index number. If the
route matches an entry, the route matches the address prefix list, and comparison with
subsequent entries is unnecessary.
l Route policy
Route policy is a sophisticated filter to identify routes with the desired attributes and modify
some attributes if conditions are satisfied. Route policy can define its own match rules using
other filters.
A route policy consists of several nodes (matching units). The node number is also the
matching order.
Every node consists of if-match clauses and apply clauses. The if-match clauses define the
matching conditions. The objects of the matching are some attributes of the routes.
The relationship between the if-match clauses of a node is "and": the match test is passed
only when all if-match clauses of the node are satisfied.
The apply clauses specify the actions to be taken when the node match test is passed, that
is, they set some attributes of the routes.
The relationship between the nodes of a route policy is "or." The system checks the nodes of
a route policy in order. If a route passes the match test of one node, it passes the route
policy, and the match test of the next node is not required.
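The node evaluation just described can be sketched in Python. The clause representation below is a deliberate simplification (if-match clauses as attribute equality checks, apply clauses as attribute assignments), and all names are illustrative, not the device configuration model.

```python
# Route-policy evaluation sketch: nodes are tried in ascending node-number
# order ("or" between nodes); within a node, every if-match clause must hold
# ("and"); the apply clauses of the first matching node set route attributes.
def run_route_policy(nodes, route):
    for _node_number, if_match, apply in sorted(nodes, key=lambda n: n[0]):
        if all(route.get(attr) == value for attr, value in if_match.items()):
            route.update(apply)      # apply clauses set route attributes
            return True              # later nodes are not checked
    return False                     # the route fails the policy

nodes = [
    (10, {"protocol": "RIP", "tag": 30}, {"cost": 120}),
    (20, {"protocol": "RIP"}, {"cost": 100}),
]
route = {"protocol": "RIP", "tag": 50}
run_route_policy(nodes, route)   # node 10 fails (tag mismatch), node 20 matches
```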
Introduction
Definition
The static route is a special route. It is configured manually by the network administrator.
Purpose
In a simple network, a router can work in the normal state as long as its static routes are
configured. Proper configuration and use of static routes can improve the network performance
and assure bandwidth for important applications.
Configuring static routes is easy. Static routes apply to small networks that are simple and stable.
However, static routes cannot change automatically when the network topology changes. They
have to be adjusted by the administrator.
Specifications
When using SCUN, the MA5600T supports up to 5120 routes, including 4096 static routes
(max).
Principle
An administrator adds static routes to the routing table through the CLI or SNMP. The forwarding
module follows the longest match algorithm for the route matching. If the destination address
of a packet matches an entry in the routing table, the module forwards the packet to the next
hop.
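The longest-match rule can be sketched with Python's standard ipaddress module. The routing-table contents below are hypothetical, and a real forwarding plane would use a trie rather than a linear scan; this is only an illustration of the matching rule.

```python
import ipaddress

# Longest-match forwarding sketch: among all entries whose prefix contains the
# destination address, the entry with the longest mask wins.
routing_table = {
    ipaddress.ip_network("0.0.0.0/0"): "10.0.0.2",    # default route
    ipaddress.ip_network("11.0.0.0/8"): "port1",
    ipaddress.ip_network("11.1.0.0/16"): "11.0.0.2",
}

def lookup(dest_ip):
    dest = ipaddress.ip_address(dest_ip)
    matches = [net for net in routing_table if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix
    return routing_table[best]

lookup("11.1.2.3")   # matched by /0, /8, and /16; the /16 entry wins
lookup("8.8.8.8")    # only the default route matches
```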
9.6.7 RIP
RIP is a dynamic routing protocol based on the V-D algorithm. Based on RIP, the routing
information is exchanged through UDP data packets. This topic provides introduction to this
feature and describes the principle of this feature.
Introduction
Definition
A dynamic route refers to a route that automatically changes when there is a change in network
topology or network traffic. RIP is a dynamic routing protocol based on the V-D algorithm, and
exchanges routing information through UDP data packets.
Purpose
The RIP protocol has its own routing algorithm, which enables a route to automatically adapt
to the change of a network topology. This protocol is applied to the network deployed with a
number of L3 devices. The configuration of RIP, however, is complicated. In addition, it has a
higher requirement on the system and utilizes more network resources than the static routing.
Specifications
When using SCUN, the MA5600T supports up to 5120 RIP routes.
Principle
RIP defines how routers exchange routing table information. RIP is based on the vector-
distance (V-D) algorithm. RIP falls into two versions: RIP 1 and RIP 2.
With RIP, routers exchange routing information using User Datagram Protocol (UDP) packets, and
send route updates every 30s. If a router does not receive any route update from the peer device
for 180s, it labels the routes from the peer device as unreachable, and deletes such routes if no
route update is received in the next 120s.
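The ageing behaviour above can be sketched as a small state function. This is an illustration of the 30s/180s/120s rule described in the text, driven explicitly by elapsed time rather than real timers; the names are not device code.

```python
# RIP route ageing sketch: a route is marked unreachable if no update arrives
# for 180 s, and deleted after a further 120 s of silence.
INVALID_AFTER = 180          # seconds without an update -> unreachable
FLUSH_AFTER = 180 + 120      # seconds without an update -> deleted

def route_state(seconds_since_last_update):
    if seconds_since_last_update < INVALID_AFTER:
        return "reachable"
    if seconds_since_last_update < FLUSH_AFTER:
        return "unreachable"   # still held, advertised as unreachable
    return "deleted"           # removed from the routing table
```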
l RIP 1
RIP 1 is a classful routing protocol. It supports broadcasting protocol packets. The RIP 1
protocol packets do not contain any masks. Therefore, RIP 1 can identify only the routes
of the natural network segments such as Class A, Class B and Class C. Thus, RIP 1 supports
neither route summary nor discontinuous subnet.
l RIP 2
RIP 2 is a classless routing protocol. Compared with RIP 1, RIP 2 supports the following:
Route tag
It controls routes flexibly based on the Tag in the route policy.
Packets containing masks
The packets contain masks for route summary and classless inter-domain routing
(CIDR).
The next hop selection
In broadcast networks, you can select the optimal next hop address.
Multicast route to send updates
Only RIP 2 routers can receive protocol packets, thus reducing resource consumption.
Protocol packet authentication
RIP 2 provides two authentication modes: authentication in plain text and MD5
authentication to enhance the security of the packets.
NOTE
l RIP 2 transmits packets in two modes: broadcast mode and multicast mode. By default, packets are
transmitted in multicast mode using the multicast address 224.0.0.9.
l When the interface runs in RIP 2 broadcast mode, it can also receive RIP 1 packets.
l Hop count
RIP uses hop count to measure the distance to the destination host, which is called routing
metric.
In RIP, the metric from a router to a directly connected network is 0 (some protocols define
it as 1), the metric to a network reachable through one other router is 1, and so on.
To restrict the convergence time, RIP prescribes that the metric is an integer ranging from
0 to 15. A hop count of 16 is regarded as infinite, that is, the destination is unreachable.
l Routing loop avoidance
RIP avoids routing loops by the following mechanisms:
Counting to infinity
RIP defines the metric of 16 as infinity. In case routing loops occur, when the cost of a
route reaches 16, this route is considered unreachable.
Split horizon
RIP does not send the routes learned from an interface to its adjacent routers through
this interface. This reduces bandwidth consumption and avoids routing loops.
Poison reverse
RIP learns a route from an interface, sets its metric to 16 (unreachable), and advertises
it to the adjacent routers through this interface. This helps to clear the unnecessary
information in the routing tables of its adjacent routers.
Triggered updates
RIP can avoid routing loops among multiple routers and speed up the network
convergence through triggered updates. After the metric of a route changes, a router
advertises updates to its adjacent routers rather than waiting until the period times out.
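The split horizon and poison reverse mechanisms above can be sketched by building the update a router sends on one interface. This is a simplified illustration (the advertised metric is the router's own metric, and the interface names are hypothetical), not the device's RIP implementation.

```python
INFINITY = 16   # RIP treats a metric of 16 as unreachable

# Poison reverse sketch: routes learned from the outgoing interface are
# advertised back on it with metric 16; other routes keep their own metric.
def build_update(routes, out_interface):
    """routes: {destination: (metric, interface_the_route_was_learned_on)}"""
    update = {}
    for dest, (metric, learned_on) in routes.items():
        if learned_on == out_interface:
            update[dest] = INFINITY   # poison reverse
        else:
            update[dest] = metric
    return update

routes = {"12.0.0.0": (1, "eth0"), "13.0.0.0": (2, "eth1")}
build_update(routes, "eth0")   # 12.0.0.0 is poisoned, 13.0.0.0 is advertised
```

With plain split horizon, the poisoned routes would simply be omitted from the update instead of being advertised with metric 16.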
9.6.8 OSPF
Open Shortest Path First (OSPF) is an interior gateway protocol (IGP) based on the link state
developed by the Internet Engineering Task Force (IETF). This topic provides introduction to
this feature and describes the principle of this feature.
Introduction
Definition
A dynamic route refers to a route that automatically changes in case of the change in network
topology or network traffic. OSPF is a dynamic routing protocol based on the link state developed
by the Internet Engineering Task Force (IETF).
Purpose
OSPF has its own routing algorithm, which enables a route to automatically adapt to the change
of a network topology. This protocol is applied to the network deployed with a number of L3
devices. The configuration of OSPF, however, is complicated. In addition, it has a higher
requirement on the system and utilizes more network resources than the static routing.
Specifications
When using SCUN, the MA5600T supports up to 5120 OSPF routes.
Principle
OSPF is an interior gateway protocol (IGP) based on the link state developed by the Internet
Engineering Task Force (IETF). The version in use is OSPF Version 2 (RFC 2328), which has
the following features:
l Application scope
It supports networks of various scales and hundreds of routers.
l Fast convergence
It enables an update to be sent immediately after the network topology changes so that the
change can be synchronized in the Autonomous System (AS).
l Loop-free
As OSPF calculates the route with the shortest path tree algorithm through the collected
link state, no loop route is generated from the algorithm itself.
l Area division
The network of the AS is divided into areas. The routes between the areas become more
abstract, reducing the occupation of bandwidth in the network.
l Equal route
It supports multiple equal routes to the same destination address.
l Routing hierarchy
Four types of routes are used in the order of preference: intra-area routes, inter-area routes,
external routes of type 1 and external routes of type 2.
l Authentication
It supports interface-based packet authentication to ensure the security of route calculation.
l Multicast
It supports multicast addresses.
The whole network can be regarded as an entity consisting of multiple ASs. Information of the
ASs can be synchronized through dynamic discovery and transmission of routes by collecting
and transmitting the AS link states.
Each AS can also be further divided into several areas. If the interfaces of a router are allocated
to multiple areas, this router is called an area border router (ABR). An ABR is located at the
area boundary and is connected to multiple areas.
The OSPF backbone area, a special area labeled 0.0.0.0, is responsible for exchanging
routing information between non-backbone areas. Because all non-backbone OSPF areas must be
logically connected to the backbone area, the concept of virtual link is introduced to
maintain logical connectivity between areas that are physically separated.
The Autonomous System Boundary Router (ASBR) is responsible for exchanging routing
information with other ASs and distributing external routes among the ASs.
9.6.9 IS-IS
The Intermediate System-to-Intermediate System (IS-IS) protocol is a dynamic routing protocol
initially designed by the International Organization for Standardization (ISO) for its
Connectionless Network Protocol (CLNP).
Introduction
Definition
The Intermediate System-to-Intermediate System (IS-IS) protocol is a dynamic routing protocol
initially designed by the International Organization for Standardization (ISO) for its
Connectionless Network Protocol (CLNP).
Purpose
To support IP routing, the Internet Engineering Task Force (IETF) has extended and modified
IS-IS in RFC 1195. This enables IS-IS to be applied to TCP/IP and OSI environments at the
same time. This type of IS-IS is called integrated IS-IS or dual IS-IS.
As an Interior Gateway Protocol (IGP), IS-IS is used in an autonomous system (AS).
IS-IS is a link state protocol. It uses the Shortest Path First (SPF) algorithm to calculate routes.
It resembles the Open Shortest Path First (OSPF) protocol.
Specifications
When using SCUN, the MA5600T supports up to 5120 IS-IS routes.
Principle
Two-Level Structure
To support the large-scale routing networks, IS-IS adopts a two-level hierarchical structure in a
routing domain. A routing domain is partitioned into multiple areas. In general, Level-1 routers
are located in an area, Level-2 routers are located among areas, and Level-1-2 routers are located
between Level-1 routers and Level-2 routers.
Level-1 router
A Level-1 router manages the intra-area routing. It establishes adjacencies only with Level-1
and Level-1-2 routers in the same area. It maintains a Level-1 link state database (LSDB). The
LSDB contains the routing information on the local area. A packet to a destination outside of
this area is forwarded to the nearest Level-1-2 router.
Level-2 router
A Level-2 router manages the inter-area routing. It can establish adjacencies with Level-2 routers
and Level-1-2 routers in the local area and other areas. It maintains a Level-2 LSDB that contains
the inter-area routing information.
All Level-2 routers form the backbone network of a routing domain. They are responsible for
communication between areas. Level-2 routers in the routing domain must be contiguous to
ensure the continuity of the backbone network.
Only Level-2 routers can exchange data packets or routing information directly with external
routers located outside of the routing domain.
Level-1-2 router
A router, which is both a Level-1 router and a Level-2 router, is called a Level-1-2 router. It can
establish Level-1 adjacencies with Level-1 routers and Level-1-2 routers in the same area.
A Level-1 router can be connected to other areas only through a Level-1-2 router. A Level-1-2
router maintains two LSDBs. The Level-1 LSDB is used for intra-area routing and the Level-2
LSDB is used for inter-area routing.
NOTE
Figure 9-11 shows a network that runs IS-IS. The network is similar to an OSPF topology with
multiple areas. The entire backbone area contains all routers in Area 1 and Level-1-2 routers in
other areas.
Interface Level
A Level-1-2 router may need to establish a Level-1 adjacency with a peer and set up a Level-2
adjacency with the other peer.
You can set the level of an interface to restrict the establishment of adjacencies on the interface.
For example, only a Level-1 adjacency can be established on a Level-1 interface and only a
Level-2 adjacency can be established on a Level-2 interface.
Route Leaking
In general, the intra-area routes are managed by Level-1 routers. All Level-2 and Level-1-2
routers form a successive backbone area. The Level-1 area can be connected only to the Level-2
area. Different Level-1 areas cannot be connected to each other.
The routing information in a Level-1 area is advertised to a Level-2 area through a Level-1-2
router, so Level-1-2 and Level-2 routers can know the routing information of the entire IS-IS
routing domain. However, Level-2 routers, by default, do not advertise the learned routing
information of other Level-1 areas and that of the backbone area to any Level-1 area. In this
way, Level-1 routers cannot know the routing information outside of their area. As a result,
Level-1 routers cannot select the optimal route to a destination address outside of their area.
To solve the earlier mentioned problem, IS-IS provides the route leaking function. Based on this
function, Level-2 routers can advertise the learned routing information of other Level-1 areas
and Level-2 areas to a specified Level-1 area.
Multi-Process
An IS-IS process can be associated with a set of interfaces so that the process is
performed only on the set of interfaces. In this way, multiple IS-IS processes can work on
a single router, and each process is responsible for a unique set of interfaces.
For the routers that support the VPN, each IS-IS process is associated with a specified VPN-
instance. As a result, all the interfaces attached to an IS-IS process should be associated with
the VPN-instance that this IS-IS process is associated with.
PRC
Similar to I-SPF, the Partial Route Calculation (PRC) is used to calculate only the changed
routes. PRC, however, is used to update leaves (routes) based on the SPT calculated by I-SPF,
without calculating the node path.
In route calculation, a route represents a leaf, and a router represents a node. If the SPT calculated
by I-SPF changes, PRC processes all the leaves on the changed node. If the SPT remains
unchanged, PRC processes only the changed leaves.
For example, if only IS-IS is enabled on an interface of a node, the SPT calculated by I-SPF
remains unchanged. In this case, PRC updates only the routes of this interface. This reduces the
CPU usage.
PRC working with I-SPF further improves the convergence performance of the network. It is
an improvement of the original SPF algorithm, and has replaced the original SPF algorithm.
LSP Fast Flooding
When an IS-IS router receives new LSPs from other routers, it starts a timer and floods the
LSPs in its own LSDB only periodically, as prescribed by the RFC. This periodic flooding
slows down the LSDB synchronization.
LSP fast flooding improves this mode. When a router configured with this function receives one
or more new LSPs, it floods them (up to a specified number) before calculating routes. This
significantly speeds up the LSDB synchronization.
This mode can significantly speed up the convergence of the entire network.
Intelligent Timer
Although the route calculation algorithm is improved, the long interval for triggering the route
calculation also affects the convergence speed. You can shorten the interval by using a
millisecond-level timer. Frequent network changes, however, also occupy too much CPU
resources. The SPF intelligent timer can solve the problems. It quickly responds to a small
number of burst events, and reduces the CPU usage.
The LSP generation intelligent timer is similar to the SPF intelligent timer. In IS-IS, when the
LSP generation intelligent timer times out, the system generates a new LSP according to the
current topology. In the original mechanism, a timer with a specified interval is adopted, and
thus cannot achieve fast convergence and low CPU usage. The LSP generation timer, therefore,
is designed as an intelligent timer to quickly respond to the burst events (such as interface Up
or Down) and to speed up the network convergence. At the same time, when the network changes
frequently, the interval of the intelligent timer is automatically extended to avoid occupation of
too much CPU resources.
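The back-off behaviour of such an intelligent timer can be sketched as follows. The interval values below are illustrative, not the device defaults: the first event is handled quickly, and the interval grows while events keep arriving, up to a ceiling.

```python
# Intelligent-timer sketch: respond quickly to a small burst of events, then
# automatically extend the interval while changes remain frequent.
def next_interval(current_ms, init_ms=50, max_ms=5000):
    if current_ms is None:
        return init_ms                     # fast response to the first event
    return min(current_ms * 2, max_ms)     # back off under frequent changes

intervals, cur = [], None
for _ in range(5):                         # five consecutive triggering events
    cur = next_interval(cur)
    intervals.append(cur)
# intervals grows as 50, 100, 200, 400, 800 ms
```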
9.6.10 BGP
The Border Gateway Protocol (BGP) is an inter-AS dynamic routing protocol.
Introduction
Definition
The Border Gateway Protocol (BGP) is an inter-AS dynamic routing protocol.
Purpose
As the de facto Exterior Gateway Protocol (EGP) of the Internet, BGP is used to transmit
routing information among ASs.
Specifications
When using SCUN, the MA5600T supports up to 5120 BGP routes.
Principle
BGP is an inter-AS dynamic routing protocol. The three earlier versions of BGP are BGP-1
(defined in RFC 1105), BGP-2 (defined in RFC 1163), and BGP-3 (defined in RFC 1267). The
present version of BGP is BGP-4, initially defined in RFC 1771 and updated in RFC 4271.
As the actual Internet exterior routing protocol, BGP-4 is widely applied among Internet service
providers (ISPs).
9.6.11 VRF
Virtual route forwarding instance (VRF) is a mechanism in which a device works as multiple
virtual routing devices. After the L3 interfaces of the device are divided into different VRFs,
multiple route forwarding instances can be emulated on the device.
Introduction
Definition
VRF is an L3 virtual private network (L3VPN). VRF is a mechanism in which a device works
as multiple virtual routing devices. After the L3 interfaces of the device are divided into different
VRFs, multiple route forwarding instances can be emulated on the device.
Purpose
Multiple virtual routing devices can be created on the MA5600T. That is, multiple L3VPNs can
be established to implement the L3 isolation and independent packet forwarding among different
VRFs. Moreover, in different VRFs, the IP address can be reused, and also DHCP relay multi-
instances, routing multi-instances, and independent route forwarding tables are supported.
The MA5600T categorizes VRFs by VLANs to provide L3VPN solutions. All the packets or
related protocols on the L3 interface of a VRF are processed only in this VRF, which is unrelated
to other VRFs. In this way, the services or users can be isolated, and the IP addresses can be
saved.
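The isolation described above can be sketched as follows: each VRF keeps an independent forwarding table, so the same IP prefix can be reused in different VRFs without conflict. The VRF names and next hops below are hypothetical; this is an illustration of the concept, not the device's forwarding implementation.

```python
# VRF isolation sketch: one forwarding table per VRF, with the same prefix
# reused in two VRFs. A lookup is confined to the VRF that the ingress L3
# interface is bound to.
vrfs = {
    "vrf-internet": {"192.168.1.0/24": "nexthop-A"},
    "vrf-voip":     {"192.168.1.0/24": "nexthop-B"},   # same prefix, reused
}

def forward(vrf_name, prefix):
    return vrfs[vrf_name].get(prefix)

forward("vrf-internet", "192.168.1.0/24")   # resolved within vrf-internet only
forward("vrf-voip", "192.168.1.0/24")       # same prefix, different next hop
```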
VRF has two application scenarios:
l When the triple play service is provisioned to xDSL access users or GPON access users,
different services are isolated from each other by VRF, and all services of the device are
carried and go upstream by the same physical link. One VLAN L3 interface can be bound
to only one VRF, and the upstream port belongs to multiple L3 interfaces. Different VLAN
L3 interfaces are bound to different VRFs, and each VRF forwards data according to the routes
learned by this VRF.
l When the triple play service is provisioned to xDSL access users or GPON access users,
different services are isolated from each other by VRF, and all services of the device are
carried and go upstream by two or more physical links. The links in this case are in the L3
mode, and different services are isolated from each other by VRF.
The difference between the two scenarios is that dual or multiple links are adopted for
upstream transmission in scenario 2, where different VRFs visibly take different physical
paths.
Specifications
The MA5600T supports the following VRF specifications:
l Fifteen private network VRFs and one public network VRF
l Up to 32 VLAN L3 interfaces in a VRF (but one VLAN L3 interface can be bound with
only one VRF)
l Ping and trace route functions within a VRF
l The VRF private network supports binding a static route to a BFD session.
Principle
VRF Compatibility
The VRF architecture is compatible with the virtual private routed network (VPRN) architecture
as defined in RFC2764.
VRF Architecture
VRF is an architecture of IP networks, as shown in Figure 9-12. When users are isolated by
service types or ISPs, or the users of different VPNs are prohibited from communicating with
each other, multiple L3VPNs must be established in an IP network.
Figure 9-12 VRF architecture: home gateway users of VPN1 and VPN2 connect to the access node,
where VRF1 and VRF2 each run OSPF/RIP/ISIS/BGP/static route/ARP/DHCP independently toward
their respective VPN routers, DHCP servers, and the IP core.
The ping function is used to check the connectivity and reachability of a remote host
by sending the ping packets to the host.
The trace route function is used to check the network connectivity and locate the network
faults by testing the route that the data packets pass through from the host to the
destination.
9.6.12 ECMP
Equal-Cost Multi-Path (ECMP) is a technique in which, if two or more equal-cost shortest
paths exist between two nodes, the traffic between the nodes is distributed among the
multiple equal-cost paths.
Introduction
Definition
Equal-Cost Multi-Path (ECMP) is a technique in which, if two or more equal-cost shortest
paths exist between two nodes, the traffic between the nodes is distributed among the
multiple equal-cost paths. That is, in packet transmission, if different routes to the same
destination network exist in the system, the packets can be transmitted to the destination
network through multiple next hops.
Purpose
In ECMP, the traffic to the same destination network can be distributed among multiple equal-
cost paths to reduce the network load, and the links in the network can back up each other. That
is, when a link in the network fails, the packets on this link can be forwarded to the destination
network through other links that are in the normal state.
Specifications
The MA5600T supports the following ECMP specifications:
Principle
In ECMP, according to different states of the network, the traffic to the same destination network
can be distributed among multiple equal-cost paths to reduce the network load or to implement
the link backup function.
As shown in Figure 9-13, assume that a packet is transmitted to the destination network
(192.16.5.0) through Routers A-D, and two routes to the destination network exist in Router A.
When receiving the packet from a user, Router A can select Router B or Router C as the next
hop to forward the packet to the destination network.
Figure 9-13 ECMP networking (PC - Router A - Router B/Router C - Router D; networks
192.16.1.0/24, 192.16.2.0/24, and 192.16.5.0/24)
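One common way to distribute flows over equal-cost paths is to hash a flow identifier and use the hash to pick a next hop, so that packets of one flow stay on one path while different flows spread across the paths. The sketch below illustrates this idea; the hashing scheme is an assumption for illustration, not necessarily what the MA5600T implements.

```python
import hashlib

# ECMP next-hop selection sketch: hash the flow identifier (e.g. source IP,
# destination IP, protocol) and pick one of the equal-cost next hops.
def pick_next_hop(flow, next_hops):
    digest = hashlib.md5(repr(flow).encode()).digest()
    return next_hops[digest[0] % len(next_hops)]

next_hops = ["RouterB", "RouterC"]        # equal-cost paths toward Router D
hop = pick_next_hop(("192.16.1.10", "192.16.5.7", 6), next_hops)
# The same flow always hashes to the same next hop, so packet order within a
# flow is preserved:
assert hop == pick_next_hop(("192.16.1.10", "192.16.5.7", 6), next_hops)
```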
Backbone area             An area that is used to forward the inter-area routes. The
                          routing information among non-backbone areas must be
                          forwarded through the backbone area.
Incremental SPF           Incremental SPF (I-SPF) calculates only the changed routes
                          at a time rather than all the routes.
Partial route calculation Similar to I-SPF, the Partial Route Calculation (PRC) is
                          used to update only leaves (routes) based on the SPT
                          calculated by I-SPF, without calculating the node path.
Glossary              Definition
Autonomous system     A set of routers under a single technical administration and
                      using the same routing policy. Each AS has a unique ID, which
                      is an integer ranging from 1 to 65535 and is allocated by the
                      IANA in a unified way. An AS can be divided into multiple areas.
AS                    Autonomous System
10 Multicast
10.1 Introduction
Definition
Multicast is a communication mode in which data is transmitted to multiple recipients at the
same time.
Purpose
The device employs multicast technology to provide IP video services such as live TV and QVoD
for carriers.
By incorporating multicast technology, the network device can manage, control, and forward IP
video services and meet carriers' requirements for provisioning IP video service.
10.2 Specifications
Multicast Protocols
l Supports IGMPv2
l Supports IGMPv3 but does not support the Exclude mode
l Supports IGMP proxy
l Supports IGMP snooping, including snooping with proxy
l Supports PIM-SSM
l Supports VLAN-based multicast (TR101 multicast)
l Supports CTC multicast
l Does not support IGMPv1
IGMP Performance
l Supports distributed IGMP protocol stack (by the GPBD/EPBD board)
l Supports dual-level multicast duplication
l Supports a join latency of 50 ms
l Supports a leave latency of 50 ms
Multicast Management
In terms of multicast management:
Multicast Networking
l Supports aggregation and protection on multicast upstream ports and multicast subtending
ports
l Supports double VLAN tags on multicast upstream ports
l Supports GPON type B, EPON type B, and EPON type D for multicast users
10.4 Availability
Hardware Support
No additional hardware is required to support this feature.
License Support
l The number of multicast users supported by the device is controlled by license. The
permitted number of multicast users can be configured only after the corresponding license
is obtained.
l The number of multicast programs that can be configured or ordered by the multicast users
of the device is controlled by license. The permitted number of multicast programs can be
configured or ordered only after the corresponding license is obtained.
l The device can be controlled by either of the two methods described above.
Version Support
Product Version
Unicast: a point-to-point (P2P) transmission mechanism. Unicast involves only one information
sender and one information recipient, as shown in Figure 10-1.
Compared with unicast and broadcast, the multicast mode transmits information over longer
distances and ensures that information is transmitted only to interested recipients. Hence,
information security can be guaranteed.
The preceding comparisons demonstrate that multicast effectively resolves certain issues in
P2MP transmission and efficiently transmits P2MP data in IP networks.
Multicast Terms
l Multicast group
A multicast group is identified by a multicast IP address. Any host (or any other receiving device)
joining a multicast group becomes a member of the group. Group members can identify and
receive IP messages destined to the multicast IP address.
l Multicast source
A signal source sending IP messages destined to a multicast address is called a multicast source.
A multicast source can send data to multiple multicast groups at the same time.
l Multicast group member
The members of a multicast group are dynamic. Hosts in a network can join or leave a multicast
group any time. Multicast group members may be widely dispersed across the network.
A multicast source is usually not a data recipient at the same time and is not the member of a
multicast group.
l Multicast duplication
Multicast duplication is a capability with which the network device duplicates a multicast
message from an ingress port into multiple copies and sends them to multiple egress ports. To
ensure effective transmission of large amounts of data, this function can only be implemented
by hardware.
These terms can be metaphorically explained by using watching TV programs as an example.
l The multicast group is an agreement between the sender and the recipient. For example, a
TV channel can be regarded as a multicast group.
l A TV station is like a multicast source as it sends data for the TV channel.
l The TV set is like a receiving host. When the user turns on the TV and chooses to watch a
program on a certain channel, it is like the host joining a multicast group. Then, the TV set
displays the program for the user, which means that the host has received the data sent to
this multicast group.
l The user can turn the TV set on or turn it off or switch between channels at any time, which
means that the host can dynamically join or leave a multicast group.
Multicast Address
To enable communication between a multicast source and its members, a network-layer
multicast address must be available, which is the multicast IP address. In addition, a technology
must also be available for mapping the multicast IP address to a link-layer multicast MAC
address. The following part of this section will describe the two types of multicast addresses.
l Multicast IP address
As specified by the Internet Assigned Numbers Authority (IANA), multicast messages use class-
D IP addresses (224.0.0.0-239.255.255.255) as their destination addresses, and the class-D IP
addresses must not appear in the source IP address field of the IP messages. Address segment
224.0.0.0-224.0.0.255 is reserved for the network protocols in the local network. Address
segment 239.0.0.0-239.255.255.255 is for management addresses. The purpose of defining
management addresses is to limit the multicast addresses within specified multicast domains so
that the addresses of different domains can be re-used.
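The class-D ranges above can be sketched as a small classifier using Python's standard ipaddress module. The range boundaries follow the text (224.0.0.0-224.0.0.255 reserved for local network protocols, 239.0.0.0-239.255.255.255 for management addresses); the return labels are illustrative.

```python
import ipaddress

# Classifier sketch for the class-D multicast address ranges described above.
def classify_multicast(ip):
    addr = ipaddress.ip_address(ip)
    if not addr.is_multicast:                      # outside 224.0.0.0/4
        return "not multicast"
    if addr in ipaddress.ip_network("224.0.0.0/24"):
        return "reserved"      # local network protocols
    if addr in ipaddress.ip_network("239.0.0.0/8"):
        return "management"    # reusable within a multicast domain
    return "general multicast"

classify_multicast("224.0.0.9")   # "reserved" (the RIP 2 multicast address)
classify_multicast("239.1.1.1")   # "management"
```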
Multicast addresses are not allocated to the receiving device or the multicast source device for
identifying their location on the network. For the multicast source device, the allocated multicast
address is used for generating and carrying multicast data; for the receiving device, the multicast
address is used for distinguishing multicast data.
In an actual multicast application, the multicast address usually does not need to be manually
input. For example, a menu interface is provided for common applications such as TV service.
When the user selects which program to watch by using a remote control, the application software
will automatically obtain the multicast IP address for that program.
l Multicast MAC address
When IP messages are unicast over an Ethernet, the destination MAC addresses used are the
MAC addresses of recipients. However, in the transmission of multicast messages, the
transmission destination is no longer a specific recipient. Instead, it is a group with an unspecified
number of members. In this case, the multicast MAC address is used.
As specified by IANA, the most significant 24 bits of a multicast MAC address are 0x01005e,
the 25th bit is 0, and the least significant 23 bits of the MAC address are the least
significant 23 bits of the multicast IP address. Figure 10-4 shows the mapping.
Figure 10-4 Mapping between multicast MAC address and multicast IP address
The first four bits of a multicast IP address are 1110, the class-D prefix that identifies a multicast
address, and only 23 of the remaining 28 bits are mapped to the MAC address. Five bits of the
IP address are therefore lost, with the direct result that 32 multicast IP addresses map to the
same MAC address.
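The IP-to-MAC mapping described above can be sketched in a few lines of Python (an illustrative sketch, not device code):

```python
import ipaddress

def multicast_ip_to_mac(ip: str) -> str:
    """Map a class-D IP address to its multicast MAC address.

    Per IANA, the MAC begins with 0x01005e (25th bit 0), and the
    low-order 23 bits of the IP address fill the remaining bits.
    """
    addr = int(ipaddress.IPv4Address(ip))
    if addr >> 28 != 0b1110:
        raise ValueError("not a class-D (multicast) address")
    low23 = addr & 0x7FFFFF           # keep only the low-order 23 bits
    mac = 0x01005E000000 | low23      # 0x01005e prefix + mapped bits
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))
```

Because the 5 bits after the 1110 prefix are discarded, 224.1.1.1 and 239.129.1.1, for example, map to the same MAC address 01:00:5e:01:01:01.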
Multicast Service
Multicast is an end-to-end service. Multicast applications are accomplished by various types of
devices that each serve a specific purpose in the network.
(Figure: multicast applications run on the multicast source and the recipients; multicast routing
protocols run between routers; group member protocols run between routers and hosts.)
Group member protocol: usually applied between a router and a host. The group member
protocol allows the host to dynamically join or leave a multicast group and implements multicast
member management.
Multicast routing protocol: usually applied between routers. The multicast routing protocol is
used for generating a message distribution tree for multicast routing. Messages are then
transmitted from the multicast source to recipients through the routes.
Multicast application: multicast application software, such as video software, that is based on
the TCP/IP protocol stack and used by the multicast source and the recipients. Channel
switching involves two protocol actions: sending a message to leave the current multicast group
and, at the same time, sending a message to join the new multicast group.
IGMP Protocol
The Internet Group Management Protocol (IGMP) maintains multicast group membership
between a host and a router. There are currently three IGMP versions: IGMPv1, IGMPv2, and
IGMPv3; each later version is backward compatible with the earlier ones. TR-101 deems
IGMPv1 obsolete and most systems no longer support it; accordingly, the device does not
support IGMPv1 and drops all IGMPv1 messages.
This section uses IGMPv2 as an example to describe the main contents of the protocol.
Sender | Message | Description
Router | General query | A router periodically sends this message to maintain the requirements
posed by all hosts connected to the router on all multicast groups. The router uses an aging
mechanism to detect hosts that have silently gone offline.
Host | Report | A host sends this message to actively join a multicast group or to respond to a
general query or a group-specific query.
IGMPv3 includes the basic concepts of IGMPv2. For details, see "IGMPv3."
Multicast VLAN
A multicast VLAN (also called an MVLAN) refers to the VLAN tag carried by multicast data.
MVLANs are usually divided by ISP. The forwarding plane, control plane, and management
plane are all implemented per VLAN instance, so multicast services can be provisioned to the
users of the same device while interference among the users is eliminated.
Except for super VLANs, any other types of VLANs with any kind of attributes configured on
the device can serve as an MVLAN. For details on MVLANs, see "Multi-instance
Multicast."
Multicast Program
A multicast program can be regarded as a multicast group. Its basic attribute is the multicast IP
address. The device can manage a multicast program at a finer grain, such as by rights control
and CAC.
According to whether the attributes (such as the multicast IP address) of each program are
configured before the service is provisioned, multicast programs can be classified into two types:
pre-configured programs and dynamic programs. For details on dynamic programs, see
"Dynamic Programs."
According to their dependency on the link-layer loop protocol, multicast upstream ports can be
classified into two types: manually configured (static) upstream ports and dynamic upstream
ports. For details on dynamic upstream ports, see "Ring Network of Upstream Ports."
Multicast User
A multicast user is a multicast data recipient. A service stream must be configured for the
multicast user so that multicast control messages can be carried in the upstream direction (the
device can distinguish the user by traffic classification). Therefore, a multicast user maps to a
unique terminal or service subscriber. Meanwhile, an MVLAN must be specified for the
multicast user to indicate to which ISP the service subscriber belongs.
The following figure shows the relationships between the basic managed objects.
(Figure: a terminal, such as an STB or TV, connects through an ONT and splitter to a service
board and is managed as a multicast user; the multicast user is bound to an MVLAN, which
contains multicast programs 1 to N and a multicast upstream port; the control board connects
the service boards.)
Multicast Forwarding Table
(Figure: three forwarding tables. The first-level forwarding table on the control board is indexed
by destination VLAN+GMAC and its duplication target is a service board list. The second-level
forwarding table on the service board is indexed by destination VLAN+GMAC and its
duplication target is a GPON port list. The third-level forwarding table on the ONT is indexed
by destination VLAN+GMAC and its duplication target is an ONT port list.)
The OLT supports a distributed 2+1-level duplication architecture. The first-level duplication
is implemented on the control board: using the "VLAN+GMAC" index, the control board
duplicates multicast data on demand, only to the service boards interested in the multicast
program, which effectively saves backplane bandwidth. The second-level duplication is
implemented on the service board: using the "VLAN+GMAC" index, the service board
duplicates multicast data on demand, only to the GPON ports interested in the multicast
program, which effectively saves the downstream bandwidth of each GPON port. The service
board then encapsulates and transmits the multicast data on the GPON port in multicast GEM
port mode (a configurable system-level parameter). The third-level duplication is implemented
on the ONT: using the "VLAN+GMAC" white list, the ONT filters out unneeded multicast data
to avoid bandwidth overflow at the downstream ingress, and then, using the "VLAN+GMAC"
index, duplicates the multicast data on demand to the ONT ports.
This section describes only the forwarding framework in the most common single-copy
duplication mechanism. For the hardware forwarding framework in the multi-copy duplication
mechanism, see "GPON Multi-Copy Duplication."
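The chained table lookups of the 2+1-level duplication can be sketched as follows. The table contents and names here are purely illustrative, not device internals:

```python
# Hypothetical forwarding tables, all indexed by (VLAN, group MAC).
control_table = {("MVLAN1", "01:00:5e:01:01:01"): ["service_board_1"]}
service_table = {("MVLAN1", "01:00:5e:01:01:01"): ["gpon_port_1"]}
ont_table     = {("MVLAN1", "01:00:5e:01:01:01"): ["ont_port_2"]}

def duplicate(frame_key):
    """Walk the 2+1-level duplication chain for one multicast frame."""
    path = []
    for level, table in (("control", control_table),
                         ("service", service_table),
                         ("ont", ont_table)):
        dests = table.get(frame_key, [])
        if not dests:      # no interested destination: filter the frame out
            return path    # (this is the ONT white-list behaviour)
        path.append((level, dests))
    return path
```

A frame for an uninterested group is dropped at the first level that has no entry, which is how each level saves the bandwidth of the segment below it.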
IGMP Proxy
IGMP proxy is a mode in which the device in a tree topology does not set up a route to forward
multicast messages, but only acts as a proxy for multicast protocol messages. Details are as
follows:
l From the perspective of a terminal, the device serves as a multicast router and implements
the router functions of the IGMP protocol. Specifically, the device always functions as the
IGMP querier on the user-side network (querier election is not supported, for security
reasons). The device receives and terminates the join and leave messages of all multicast
users, and duplicates each multicast program only to the interested multicast users
according to the maintained group membership table.
(The group membership table is indexed by VLAN+GIP and records the multicast user list,
for example multicast user 1 and multicast user 2.)
l From the perspective of a multicast router, the device serves as a multicast group member
that implements the functions of the host in the IGMP protocol. According to the changes
(addition or deletion) of the record in the group membership table, the device sends the join
message or leave message of a program to the upper layer through the multicast upstream
port. In addition, the device responds to the queries of the multicast router according to the
status of the group membership table.
Therefore, IGMP proxy effectively reduces the quantity of IGMP messages exchanged on the
network side and consequently lessens the load of multicast routers. The device can be
configured to send the IGMP general query to all multicast users or to only interested multicast
users.
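The proxy behaviour described above, terminating user joins and leaves while sending at most one join or leave upstream per group, can be sketched like this (an illustrative model, not device code; names are hypothetical):

```python
class IgmpProxy:
    """Minimal IGMP proxy bookkeeping, keyed by (MVLAN, group IP)."""

    def __init__(self):
        self.members = {}   # (vlan, gip) -> set of joined user ids
        self.upstream = []  # messages sent toward the multicast router

    def join(self, vlan, gip, user):
        key = (vlan, gip)
        first = key not in self.members
        self.members.setdefault(key, set()).add(user)
        if first:                      # only the first member triggers
            self.upstream.append(("join", key))   # an upstream join

    def leave(self, vlan, gip, user):
        key = (vlan, gip)
        self.members.get(key, set()).discard(user)
        if key in self.members and not self.members[key]:
            del self.members[key]      # last member left: send an
            self.upstream.append(("leave", key))  # upstream leave
```

However many users join and leave the same group, the multicast router sees only one join and one leave, which is how the proxy lessens the router's load.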
(Figure: IGMP messages traverse the ONT, the service board, and the control board; each of
the three runs both the host-side "H" and the router-side "R" functions of the IGMP protocol.)
In this figure, "R" represents the router functions of the IGMP protocol, and "H" represents the
host functions of the IGMP protocol.
The IGMP protocol stack is distributed over two levels. The first level is on the control board,
where both the user-side and the network-side operations are based on the MVLAN. The second
level is on the service board, where the network-side operation is based on the MVLAN and the
user-side operation is based on the multicast user, which ensures that users do not affect each
other on the control plane. Because the IGMP protocol stack on the service board converges the
user-side traffic, it lightens the processing load of the IGMP protocol stack on the control board;
given the same hardware, the system can therefore process channel switching for more multicast
users at the same time.
(Figure: join flow through the OLT. 1. The STB sends a join message; 2. the service board
receives it; 3. the service board sends a join message to the control board; 4. the control board
sends a join message upstream; 5. the multicast program flows from the multicast router back
through the control board, service board, splitter, and ONT to the TV.)
1. The multicast user switches a channel and sends a join message for ordering a new program
GIP1.
2. After receiving the join message, the service board enters the IGMP protocol stack of the
multicast user. After multicast control is implemented (for details, see "Multicast CAC"),
the following group membership table is generated on the service board.
(Table columns: Index | Online Member)
l At the same time, the following multicast forwarding table is generated on the service
board (for details on how to map GIP1 to GMAC1, see "Multicast Address").
(Table columns: Index | Duplication | Destination)
l According to MVLAN1 corresponding to the program, the service board serves as the
proxy for multicast user 1 and sends a join message to the control board.
3. After receiving the join message, the control board enters the IGMP protocol stack of
MVLAN1 and generates the following group membership table.
l At the same time, the control board generates the following multicast forwarding table.
4. The control board then sends a join message to the multicast router through the multicast
upstream port of MVLAN1.
5. After receiving the multicast stream, the device first duplicates the stream to service board
1 according to the multicast forwarding table of the control board, and then duplicates the
stream to GPON port 1 according to the multicast forwarding table of the service board.
NOTE
Though the S-VLAN of a multicast user is different from the MVLAN, the device can still implement the
mapping to the MVLAN according to the configuration relationship of the multicast member. In this way,
cross-VLAN multicast is supported without requiring additional configuration.
Leave Flow
The process flow of a leave message mirrors that of a join message; only the actions taken at
each step differ. For details, see "Fast Leave."
Query Flow
According to IGMP, the status of multicast users needs to be maintained through general queries.
This avoids the situation where an entry is not deleted after a multicast user leaves "quietly".
Therefore, the service board sends general queries to all multicast users at preset query intervals.
To reduce the occupied resources, the service board can be configured to query only online
multicast users. If the service board does not receive a report message from a multicast user
within the preset aging time (robustness variable x query interval + maximum response time),
the service board deletes the corresponding entry of the multicast user from both the multicast
relationship table and the multicast forwarding table.
the corresponding MVLAN. If a service board exists, the control board serves as the proxy for
the service board and sends a report message as a response to the multicast router.
The group-specific query message follows a similar process flow.
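The aging time used in the query flow is a simple formula over the configured parameters; the following sketch computes it (the default values in the comment are typical IGMPv2 defaults, not device-specific settings):

```python
def aging_time(robustness: int, query_interval_s: float,
               max_response_s: float) -> float:
    """Aging time after which a silent multicast user is removed:
    robustness variable x query interval + maximum response time."""
    return robustness * query_interval_s + max_response_s

# With IGMPv2-style defaults (robustness 2, 125 s query interval,
# 10 s maximum response time) a silent user ages out after 260 s.
```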
(Figure: multi-instance multicast. Each MVLAN maintains its own control-plane (C) entries,
such as (VLAN1, S1, G1), and forwarding-plane (F) entries, such as (VLAN1, *, G1).)
l Control plane
On the network side, each MVLAN has an independent IGMP protocol stack. Each ISP
can select the protocol version, message priority, and IGMP mode (IGMP proxy or IGMP
snooping). On the user side, each multicast user has an independent IGMP protocol stack
and is not affected by other multicast users.
l Forwarding plane
On the forwarding plane, all multicast forwarding tables use the MVLAN and the multicast
MAC address as indexes, which ensures that different MVLANs do not interfere with each
other. For details, see "Multicast Forwarding Table." On the control board, QoS scheduling
of the traffic of different MVLANs destined for the same service board port works the same
way as for unicast traffic. For details, see "8 QoS."
Dynamic Programs
In actual applications, if fine-grained management is not required on the device, dynamic
programs can be applied. This avoids maintenance troubles brought by frequent program
changes. In this case, program maintenance can be performed uniformly through the Electronic
Program Guide (EPG) system.
(Figure: the STB obtains the program menu from the EPG server; the user's IGMP messages
reach the device; the program stream flows from the multicast source.)
1. After it starts, the STB automatically obtains the program menu from the EPG server and
presents the menu to the multicast user.
2. When the user orders a program, a corresponding IGMP message is generated and sent to
the device. Hence, the program information on the device at this stage is not input by the
administrator. Instead, it is dynamically generated in the MVLAN (to which the multicast
user belongs) after the multicast group IP address and source IP address are extracted by
the device from the real-time IGMP message of the multicast user.
3. The multicast program of the multicast source reaches the STB.
To prevent the user from using an improper group IP address, a legal multicast address
segment can be configured on an MVLAN basis on the device for dynamic programs.
According to the configuration, a multicast program is generated only when the group IP
address is within the legal address segment; otherwise, the IGMP message of the user is
dropped. Apart from the restriction by the address segment, the number of programs that
can be dynamically generated is also controlled by hardware specifications and license.
The fine-grained management not supported by dynamic programs on the device includes
CAC, rights management, multicast preview, and pre-join.
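The admission check for a dynamically generated program can be sketched as follows. The parameters (legal ranges, program limit) are illustrative inputs standing in for the per-MVLAN configuration and the hardware/license limits:

```python
import ipaddress

def accept_dynamic_join(gip: str, legal_ranges, current_count: int,
                        program_limit: int) -> bool:
    """Sketch of dynamic-program admission: the group IP must fall
    inside a configured legal address segment, and the number of
    dynamically generated programs must stay under the limit."""
    ip = ipaddress.IPv4Address(gip)
    in_range = any(ip in ipaddress.IPv4Network(seg) for seg in legal_ranges)
    return in_range and current_count < program_limit
```

An IGMP join for a group outside every legal segment is dropped rather than creating a new program entry.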
Rights Management
Package-based rights management can be implemented on the device by configuring different
multicast programs in different rights profiles.
l Rights profile
The rights to any multicast program can be specified in each rights profile, and each rights
profile can be configured with a meaningful name. There are four types of rights:
Forbidden: It indicates that a multicast user is not allowed to watch or preview a multicast
program.
Preview: It indicates that a multicast user can order a multicast program but is restricted in
the duration and times that they are allowed to watch.
Watch: It indicates that a multicast user can order a multicast program normally without
any restriction.
Idle: It is the default value of the rights profile. It indicates that a specific right is not assigned
to a multicast program yet. The effect of "idle" equals that of "forbidden".
Carriers can plan the rights profiles according to user-defined rules. Usually, there are three
modes of planning.
The first one is topical planning, by program type, such as news, sports, and movies. In
this case, one multicast program belongs to only one rights profile and the programs of
different profiles do not overlap. Therefore, one user is usually bound to multiple
profiles. (Figure: programs M to X are grouped into news, sports, and movie profiles;
for example, user 3 is bound to the news, sports, and movie profiles at the same time.)
The second one is planning by rating level, such as basic, family, and adult packages. In
this case, one multicast program may belong to multiple rights profiles and the programs
of different profiles may overlap. Therefore, one user is usually bound to only one
profile. (Figure: program groups A, B, and C are nested into basic, family, and adult
profiles; for example, user 2 is bound to the family profile and user 3 to the adult
profile.)
The third one is a hybrid of the first and second types of planning and is the most
complicated as well as the most flexible mode. In this mode, the programs of different
rights profiles may overlap and one user may be bound to multiple profiles. The same
program may be configured with different rights in different rights profiles. To ensure
that these rights profiles work as expected by the carrier when it comes to a specific
program for a specific user, the rights of a program in the rights profiles must be
prioritized. It is recommended to plan the priorities before deployment to prevent any
incorrect results. The following are examples.
Table 10-4 Priority of rights: forbidden > preview > watch > idle
Rights profile 1: Program 1: watch
Rights profile 2: Program 1: forbidden
Result for User 1 (bound to both profiles): Program 1: forbidden
Table 10-5 Priority of rights: watch > preview > forbidden > idle
Rights profile 1: Program 1: watch
Rights profile 2: Program 1: forbidden
Result for User 1 (bound to both profiles): Program 1: watch
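The priority-based resolution shown in the two tables reduces to picking the highest-priority right among a user's bound profiles; a minimal sketch:

```python
def effective_right(rights, priority):
    """Resolve a program's effective right across the rights profiles
    a user is bound to.

    `rights` lists the rights the program has in those profiles;
    `priority` orders rights from strongest to weakest, for example
    ["forbidden", "preview", "watch", "idle"] as in Table 10-4.
    """
    rights = rights or ["idle"]             # program not in any profile
    return min(rights, key=priority.index)  # strongest right wins
```

With the Table 10-4 priority the user gets "forbidden"; with the Table 10-5 priority, the same configuration yields "watch".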
l Rights control
The rights of each multicast user can be configured by performing the following two steps:
1. Planning the rights profiles of all multicast programs.
2. Binding a multicast user to a rights profile according to the user's subscription.
The device provides open MIB interfaces to support such operations.
There is one more method of implementing rights control: configuring encryption on the
head-end system and the STB. In this way, the carrier does not need to perform rights
management on the device and only needs to enable or disable rights control at the system
level or at the multicast user level.
Multicast Preview
Allowing multicast viewers to preview certain channels is an effective selling strategy that allows
carriers to entice more users to subscribe to more programming.
The device manages the preview parameters of each multicast program by preview profiles. To
be specific, each multicast program can be bound to a preview profile which is configured with
preview parameters. Similar programs can be bound to the same preview profile so as to simplify
management.
A preview profile contains three preview parameters.
(Figure: preview timeline for program N. A preview starts at T1 and lasts for the preview
duration until T2; the preview interval then runs from T2 until the next preview may start at
T3.)
l Preview interval: It is the minimum interval between two previews. The interval is from
the end time of the previous preview to the start time of the current preview (from T2 to
T3 as shown in the preceding figure). If the interval between the two previews of a user
does not reach the specified preview interval, the user will not be allowed to preview the
program. Such a mechanism guards against any "rogue" behavior of users. Without the
restriction of the preview interval, a user may keep previewing the same program and is
actually "watching" a program without having to pay for it.
l Preview times: This parameter specifies how many times a multicast user is allowed to
preview the same program during a day. Each time the user leaves a previewed program,
the counter increases by 1. When the counter exceeds the maximum value, the user will
not be allowed to watch the program. However, the user will be allowed to resume their
preview of the channel the next day.
l Preview duration: This parameter specifies for how long a multicast user is allowed to
watch the same program each time. The duration starts from the beginning of the viewing
request (from T1 to T2 as shown in the preceding figure). After the duration expires, the
user will not be able to receive any data for the multicast program.
For details on how to control the preview of multicast users, see "Rights Management."
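The two preview parameters that gate a new preview request (the interval and the daily counter) can be sketched as a simple admission check; the function and parameter names are illustrative:

```python
def may_preview(now_s: float, last_preview_end_s: float,
                previews_today: int, interval_s: float,
                max_times_per_day: int) -> bool:
    """Check whether a new preview of the same program is allowed.

    The preview duration is not checked here: it is enforced while
    the preview runs, by cutting the stream after it expires.
    """
    if previews_today >= max_times_per_day:
        return False                       # daily counter exhausted
    return (now_s - last_preview_end_s) >= interval_s
```

Rejecting requests that arrive before the interval elapses is what blocks the "rogue" pattern of chaining previews into free continuous viewing.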
Multicast CAC
CAC is short for call admission control. Here, it means controlling the setup of IGMP sessions.
If an IGMP session fails to be set up, a multicast user will fail to receive the multicast program
ordered.
In a broad sense, implementing CAC starts with the first-level control in the system. Currently,
the system-level control includes the following:
l Anti-DoS attack. The rate of IGMP messages sent from the user side must not exceed the
value specified in the system. Otherwise, the system regards the traffic as a DoS attack and
drops the messages. This protection applies not only to IGMP messages, but also to control
packets such as DHCP and PPPoE packets. For details, see "13.3 Anti-DoS Attack."
l Anti-IP spoofing. When this function is enabled, the user must obtain a legal IP address
through DHCP before ordering any program. Only the IGMP messages using the legal IP
address as their source IP address will be accepted by the system; otherwise, the messages
will be regarded as coming from an unauthorized user and will be dropped by the system.
For details, see "13.4 Anti-ICMP/IP Attack."
l Broadband message overload. When a burst of service traffic occurs, the system resources
may not be able to support all services. The system then drops certain messages according
to specified policies to ensure that higher-priority services are not affected; in this case,
IGMP messages may be "sacrificed" to reduce the system load.
After the first-level system control comes the multicast-level control, which includes the
following:
l Concurrent number of programs of a multicast user. This parameter specifies how many
channels a multicast user is allowed to order at the same time. The parameter can be
configured on a multicast user basis.
l Rights control. For details, see "Rights Management."
l Bandwidth check. Though the system supports QoS control on various types of traffic,
packet loss (drop by priority or tail drop) may still occur when the transmission bandwidth
is overloaded. Because multicast programs are real-time and cannot be retransmitted, such
after-the-fact QoS directly causes visible artifacts on any program suffering packet loss
(not only on newly ordered programs), so the high-quality experience that IPTV requires
cannot be met. Bandwidth check lets the system control a newly ordered program
beforehand: the system ensures that the programs already ordered keep sufficient
bandwidth and are not affected by the new program. With bandwidth check, only the newly
ordered program is affected (if bandwidth is insufficient, the user simply cannot watch the
newly ordered program).
CAC can be classified into three types according to different control points and methods.
l Multicast user bandwidth CAC
First, each pre-configured program is configured with a bandwidth. The bandwidth is set
with reference to the video bit rate plus a margin for packet encapsulation and network
transmission jitter; if possible, actually measured network traffic is a better reference. Then,
each multicast user is configured with an available bandwidth, set with reference to the
actual line bandwidth or the service provisioning plan. When receiving the first IGMP join
message for a program, the device subtracts the bandwidth of the program from the
available bandwidth of the user; if the remainder is smaller than 0, the device rejects the
request. When receiving an IGMP leave message for a program, the device returns the
bandwidth of the program to the available bandwidth of the user. The device returns the
bandwidth only when it stops forwarding the multicast data, that is, when the program is
no longer ordered by any end user of the terminal. For details, see "Fast Leave."
(Figure: the user's available bandwidth over time, deducted at "Join program 1", returned
at "Leave program 1", and deducted again at the subsequent "Join program 1" and "Join
program 2".)
Multicast user bandwidth CAC can be configured at the system level or at the multicast
user level.
l GPON port bandwidth CAC [OLT]
GPON single-copy duplication function (default configuration): Under the same GPON
port, even if multiple multicast users order the same multicast program, the multicast data
is duplicated only once and sent to corresponding multicast users through the downstream
multicast channel. Therefore, this function ensures that the downstream multicast
bandwidth does not overflow the downstream line bandwidth of the GPON port.
To do so, the operator first needs to configure bandwidth for each pre-configured program
(see "Multicast user bandwidth CAC" in the preceding bullet), and then allocates the
available bandwidth for each GPON port (depending on the actual line bandwidth or the
service provisioning plan). In this way, after receiving the first IGMP join message of the
program, the device deducts the bandwidth of the corresponding program from the
remaining bandwidth of the GPON port. If the deduction result is smaller than 0, the device
rejects the order of the user. After receiving an IGMP leave message, the device returns
the bandwidth of the corresponding program to the GPON port. The device returns the
bandwidth when multicast data is no longer forwarded, that is, when no multicast user under
the GPON port orders this program.
GPON multi-copy duplication (see subsection "GPON Multi-Copy Duplication" under
section "User-Side Interoperating Technologies"): Because the duplication of multicast
programs is based on multicast user, the bandwidth control is also based on multicast user,
that is, based on the join and leave of each multicast user. The following figure shows the
difference between single-copy duplication and multi-copy duplication.
(Figure: under single-copy duplication, the bandwidth of program 1 is deducted from the
GPON port only once, when user 1 joins; the joins of users 2 and 3 cost no extra bandwidth.
Under multi-copy duplication, the bandwidth of program 1 is deducted once per joining
user.)
This function can be configured at the system level or GPON port level. It can be used
together with the multicast user bandwidth CAC.
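Under single-copy duplication, the port-level accounting is reference-counted: the program's bandwidth is deducted only for the first joining user and returned only when the last one leaves. A minimal sketch (names and units are illustrative):

```python
class GponPortCac:
    """Single-copy GPON port CAC sketch with per-program reference
    counting."""

    def __init__(self, available_kbps: int):
        self.remaining = available_kbps
        self.refs = {}                 # program -> number of joined users

    def join(self, program: str, kbps: int) -> bool:
        if program not in self.refs:
            if self.remaining < kbps:
                return False           # reject: port bandwidth exhausted
            self.remaining -= kbps     # first user: deduct once
        self.refs[program] = self.refs.get(program, 0) + 1
        return True

    def leave(self, program: str, kbps: int) -> None:
        self.refs[program] -= 1
        if self.refs[program] == 0:    # last user left: return bandwidth
            del self.refs[program]
            self.remaining += kbps
```

Under multi-copy duplication the reference counting would be dropped and the deduction made per joining user, as the figure above contrasts.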
l GPON port ANCP bandwidth CAC [OLT]
Generally, the IPTV service includes unicast stream for VoD services and multicast stream
for TV services. By using ANCP, this function can cooperate with the RACS and VOD
servers to implement bandwidth CAC (not only multicast bandwidth CAC) on all IPTV
traffic streams. (For the ANCP principle, see "15.5 ANCP.")
This function can be configured at the system level. It cannot be used with the previous
two bandwidth CAC functions.
Charging Mode
For multicast services, carriers or ISPs usually adopt two methods to charge for their services:
l Subscription packages. In this mode, programs are divided into different packages. The
user needs to pay a fixed amount for each package during the fixed period (such as by the
year or by the month). This method does not restrict the multicast user in the order count
or the ordered volume of traffic.
l Pay per view (PPV): In this mode, the user is charged according to the order count of
different programs.
In the first mode, since it is package-based, charges remain the same and are not dependent on
the viewing pattern of the user. Therefore, the first mode is inherently supported by the device.
In the second mode, the device can record the programs that each multicast user has watched
and provide the information in the form of a call detail record (CDR) to the accounting system
for settlement. The complete configuration of the CDR function consists of three steps:
1. Enabling the logging function. The function can be configured at the multicast user level,
the multicast program level (configurable for pre-configured programs, and enabled by
default for dynamic programs), or the system level. A log is generated when a user finishes
viewing a program (from the time the program was ordered to the time it ended), or when
the user fails to order a program because of failing to pass multicast CAC.
NOTE
The system can record up to 10K logs. When the logs reach the maximum capacity, new logs will
overwrite old ones. Therefore, to prevent heavy consumption of log resources in the case where the
user quickly browses through channels, the device supports a configurable flag time for generating
logs. If the multicast user watches a channel for a duration shorter than the flag time, the device will
not generate a log. Conversely, to log users who stay online for a long time in a timely manner,
the system automatically generates logs when the user's online time exceeds the flag time.
2. Configuring the file server. The operator needs to select a CDR transfer protocol (TFTP,
FTP, or SFTP) and set the IP addresses of the primary and secondary servers.
3. Enabling the CDR functions (at system level). After the CDR function is enabled, the device
automatically integrates the logs that need to be reported into a text file and transfers the
file to the server when either of the following conditions is met: when the reporting interval
expires, or when the number of logs reaches the reporting threshold.
The format of the text file name is HWCDR-host name-YYYYMMDDHHMMSS.txt.
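For illustration, the documented file-name format can be generated like this (the host name is an assumed example):

```python
from datetime import datetime

def cdr_filename(host: str, when: datetime) -> str:
    """Build a CDR file name in the documented format:
    HWCDR-host name-YYYYMMDDHHMMSS.txt"""
    return f"HWCDR-{host}-{when.strftime('%Y%m%d%H%M%S')}.txt"
```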
(CDR field example: field 7, ProgramName, 0..16 bytes, for example cctv1; if the program does
not exist, No-Name is displayed.)
Table 10-7 Pros and cons of CDR and syslog transfer modes
CDR - Pro: reliable transfer; TFTP, FTP, or SFTP can be selected as the transfer protocol.
Con: logs are reported to the file server only when the specified reporting conditions are met
(the reporting interval expires or the number of logs reaches the reporting threshold).
syslog - Pro: timely report; once a log is generated, it is uploaded to the syslog server.
Con: unreliable transfer; syslog uses the UDP protocol.
Multicast Acceptance
After the construction of a site is completed, the site usually needs to be tested for acceptance.
The major purposes of the acceptance test are to check the technique and quality (connectivity
of hardware) of engineering installation and verify device configuration (correctness of software
configuration and external interoperation parameters).
On the BMS, the operator can perform an efficient and cost-effective acceptance test by
performing the following steps:
(Figure: simulation flow. The BMS starts the simulation; the device sends a join message on
behalf of the user; the multicast program flows down; the device checks the multicast program
traffic and reports the simulation result.)
Limited by the solution, the following items support the multicast simulation test during
the acceptance test.
(Table columns: Acceptance Item | Simulation-Supporting Item | Non-Simulation-Supporting
Item)
At the same time, for the multicast simulation functions described above, the device
provides open MIB interfaces for secondary development by a third party.
l Batch report. The destination IP address of report messages is filled in as 224.0.0.22 for all
instances. Meanwhile, the IGMP payload can carry multiple group records. This function
reduces the number of report messages between devices. As shown in Figure 10-19, the
IGMP message captured by a packet capture tool carries the information about two groups
232.1.1.1 and 239.255.1.5. In the case of IGMPv2 messages, the destination IP address
must be filled in as the corresponding group IP address. Hence, one IGMPv2 message
cannot carry the information about multiple groups.
l Longer maximum response time for the query message. In IGMPv3, the maximum response
time for the query message is extended from 25.5s (IGMPv2) to 3174.4s. Hence, IGMPv3
is applicable to large-scale networks.
l Source filter. With the source filter function, the host can choose to receive or reject
multicast data from a specified multicast source IP address. This enables the device to
implement SSM better and to support multi-ISP scenarios, whereas IGMPv2 supports only
ASM. The following message types explain how source filtering is implemented.
Query messages
Group-specific query: The device sends this message to learn the reception status of an
interface for the multicast group with a specific group address. This is similar to the
group-specific query of IGMPv2.
Group-and-source-specific query: The device sends this message to learn the reception status
of an interface for the multicast group with a specific group address and source address. This
is a new message in IGMPv3.
Report messages
TO_IN(G, S): Changes the filter mode of the multicast group to the INCLUDE mode. The
source address list contains a new source address S. TO_IN(G, {}) indicates leaving all sources
of G; in this case the message is equivalent to the IGMPv2 leave message.
ALLOW(G, S): Changes the source address list. This message is triggered when the source
address changes. The record contains the source S that the system wishes to join.
BLOCK(G, S): Changes the source address list. This message is triggered when the source
address changes. The record contains the source S that the system does not wish to join.
IS_EX(G, S): Reports the current status, indicating that the current mode of the group is the
EXCLUDE mode. This message is triggered when the device receives a query message. The
source address list contains the source addresses S that the group does not wish to join.
IS_EX(G, {}) indicates that the device is interested in all sources of G; in this case the message
is equivalent to the IGMPv2 join message. The device does not support an IS_EX message with
an empty S.
TO_EX(G, S): Changes the filter mode of the multicast group to the EXCLUDE mode. The
source address list contains a new source address S that the device does not wish to join.
TO_EX(G, {}) indicates joining all sources of G; in this case the message is equivalent to the
IGMPv2 join message. The device does not support a TO_EX message with an empty S.
The IGMP version on the user side cannot be directly configured; it is determined by the earliest
IGMP version of the MVLAN to which the multicast user belongs. As shown in the following
table, the IGMP version on the device should be set to the recommended version according to
the IGMP version on the terminal, so as to avoid incompatibility. Incompatibility may cause
packet loss.

Terminal Version   Recommended Device Version   Result
v1                 v2/v3                        Incompatible
v2                 v2 (recommended)             Normal
v3                 v3 (recommended)             Normal
SSM
According to the multicast source control level, multicast has three models:
l Any-source multicast (ASM)
ASM is defined in RFC 1112. In this model, a recipient, by joining a group identified by the
multicast address, can receive data sent to the group. A recipient can join or leave a group at any
time, and the recipient location or quantity is not limited. In addition, any sender can serve as
the multicast source to send data to the group. Therefore, this model is applicable to the
multipoint-to-multipoint (MP2MP) multicast application.
[Figure: ASM model — sources S1, S2, and S3 all send to group G1; receivers R1 and R2, in the same group, receive traffic such as (G1, S1) and (G1, S2) from every source.]
[Figure: SFM model — the same sources, but receivers R1 and R2 filter by source, each accepting only the (G1, S) traffic it permits.]
[Figure: SSM model — receivers R1 and R2 subscribe to specific channels such as (G1, S1) and (G1, S2).]
The device, with its SSM mapping function (automatically supported), can help implement SSM
networking on the network side even if the user-side device does not support IGMPv3. With the
SSM mapping function, the device maps the received (*, G) message to an (S, G) message
according to the unique multicast program triplet, as shown in Figure 10-24. Note: (1) At a time,
a multicast user cannot belong to multiple MVLANs. (2) Dynamic programs do not support
SSM mapping.
[Figure 10-24: SSM mapping — the access node (AN) maps the (*, G) join received from the modem/home router into an (S, G) join toward the multicast router.]
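The (*, G)-to-(S, G) translation can be sketched as a lookup into the configured program triplets. The table contents here (source IP, MVLAN) are made-up example values, not product configuration:

```python
# Hypothetical SSM-mapping table keyed by group address: each entry is the
# (source IP, MVLAN) of the uniquely configured multicast program triplet.
SSM_MAP = {
    "232.1.1.1": ("10.10.10.1", 100),
}

def ssm_map(group: str):
    """Map an IGMPv2-style (*, G) join to an (S, G) join plus its MVLAN,
    as the SSM mapping function does; unmapped groups return None."""
    entry = SSM_MAP.get(group)
    if entry is None:
        return None
    source, mvlan = entry
    return source, group, mvlan
```

Because the mapping is per-program and per-MVLAN, a multicast user cannot belong to more than one MVLAN at a time, matching the note above.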
The following table describes whether the devices support the SSM and ASM modes.
IGMP Snooping
IGMP snooping falls into two types:
l IGMP transparent snooping
This is a snooping function without proxy. (The device selects the proxy, snooping,
or snooping-with-proxy function per MVLAN.)
The device learns the IGMP join and leave messages of multicast users to maintain the
multicast group membership table, and then forwards the multicast data received on the
multicast upstream port to the corresponding multicast users according to that table.
To age the entries of the multicast group membership table, the device also functions
as a querier.
The device processes the IGMP messages as follows:
Query message
After receiving a general query message or group-specific query message from the
multicast upstream port, the device triggers the local querier to immediately re-create
the query message and send it to the user side.
NOTE
l To ensure that the multicast user responds to the query in time, the maximum response time
configured on the device must be shorter than that configured on the upper-layer multicast router.
l The network-side IGMP version of the device is not affected by the multicast router.
Join/Leave message
The device transparently transmits the join/leave message received from the multicast user
to the MVLAN.
NOTE
An IGMPv3 message may contain multiple group records that match different MVLANs. In this
case, the device segments the message and transparently transmits the segmented messages
to the corresponding MVLANs.
l IGMP snooping with proxy
In upstream IGMP transmission, IGMP snooping with proxy behaves the same as IGMP proxy;
in downstream transmission, however, IGMP snooping with proxy does not suppress the
query message as IGMP proxy does.
Query message
After receiving the query message from the multicast upstream port, the device sends the
query message to the user and also responds to the multicast router's query according to its
multicast group membership table.
NOTE
Like IGMP proxy, the network-side IGMP version of the device is affected by the multicast router.
Join/Leave message
The device sends only the first join message from the multicast users to the MVLAN. The
device sends only the last leave message from the multicast users to the MVLAN.
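The first-join/last-leave behavior amounts to maintaining a per-group membership set and only passing messages upstream when the set becomes non-empty or empty. A minimal sketch (class and user identifiers are illustrative):

```python
from collections import defaultdict

class IgmpProxyTable:
    """Membership table that forwards only the first join and the last
    leave of each group to the MVLAN, as snooping with proxy does."""

    def __init__(self):
        self.members = defaultdict(set)  # group address -> set of user ids

    def join(self, group, user):
        """Record a join; return True if it must be sent to the MVLAN."""
        first = not self.members[group]
        self.members[group].add(user)
        return first

    def leave(self, group, user):
        """Record a leave; return True if it must be sent to the MVLAN."""
        self.members[group].discard(user)
        last = not self.members[group]
        if last:
            del self.members[group]
        return last
```

This suppression is what keeps the join/leave load on the upstream multicast router independent of the number of users behind the device.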
Global Leave
As defined in TR101, the global leave message is an IGMP message with an all-zero group IP
address, which indicates leaving all the groups.
l Network side
When the network topology changes, the device sends the global leave message to the upper-
layer multicast router. After receiving the message, the upper-layer multicast router immediately
sends the general query message, with the maximum response time set to the maximum time of
responding to the group-specific query message. The device, after receiving the query message,
responds to the upper-layer multicast router with the join message of the interested group. In
this way, the multicast service can recover more quickly. Here, the network topology change
events include ring network switching, line up/down, and active/standby port switching in a
protect group.
NOTE
l If the device is interconnected with a network device that does not support the global leave message,
the global leave function must be disabled on the device; otherwise, multicast services may be
interrupted during the network topology change.
l The device supports sending the global leave message only in IGMPv2.
l User side
When the STB is powered on again immediately after a sudden power-off, the STB cannot
remember the previously watched program, so that program does not release its bandwidth
and program resources until the general-query entry ages.
If the STB supports the global leave function, the STB can send a global leave message after it
is re-powered on. After receiving the message, the device releases all program resources of this
multicast user if confirming that the multicast user is a fast-leave or MAC-based fast-leave user.
Even if the user is a normal-leave user, the device sends a general query message, with the
maximum response time set to the maximum time of responding to the group-specific query
message. In this way, the program resources of the user can be released faster than waiting for
the general query aging.
[Figure: multicast subtending networking — access nodes interconnected over GE/10GE links; each node uses a multicast upstream port toward the network and multicast subtending ports toward downstream nodes.]
functions for multicast users: rights management, multicast preview, multicast CAC,
billing, and multicast service acceptance. The device supports only the fast leave
function.
On the subtending port, the IGMP protocol stack is based on different VLANs, as shown
in Figure 10-27.
[Figure 10-27: per-VLAN IGMP protocol stacks — host (H) and router (R) instances run on the control boards of the subtending and subtended nodes, with the service and subtending boards in the forwarding path.]
NOTE
If an Ethernet port is not configured as the multicast subtending port, the Ethernet port discards
the IGMP report message.
The (SIP, GIP) pair and the VLAN of the IGMP message are used for program
matching. The policy for processing unmatched messages can be configured per subtending
port.
Considering the IGMP processing performance of the source node, it is recommended that all
subtending devices adopt IGMP proxy instead of IGMP snooping.
Multicast data forwarding
Multicast data can be forwarded only in a VLAN. According to different subtending
boards, there are two forwarding architectures.
One-level forwarding architecture: GIU board as an example.
[Figure: one-level forwarding — the control board duplicates streams using a forwarding table indexed by VLAN+GMAC; each entry gives the destination Ethernet port list.]
Two-level forwarding architecture:
[Figure: two-level forwarding — a first-level table on the control board (index VLAN+GMAC, destination service board list) and a second-level table on each service/subtending board (index VLAN+GMAC, destination Ethernet port list).]
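The two-level duplication can be sketched as two nested lookups on the same (VLAN, group MAC) key. The board names, VLAN 100, and the group MAC below are made-up example values:

```python
# First level (control board): (VLAN, group MAC) -> destination board list.
first_level = {(100, "01:00:5e:01:01:01"): ["board2", "board5"]}

# Second level (per board): (VLAN, group MAC) -> destination port list.
second_level = {
    "board2": {(100, "01:00:5e:01:01:01"): [1, 3]},
    "board5": {(100, "01:00:5e:01:01:01"): [7]},
}

def duplicate(vlan: int, gmac: str):
    """Return, per destination board, the port list a frame is copied to."""
    out = {}
    for board in first_level.get((vlan, gmac), []):
        out[board] = second_level[board].get((vlan, gmac), [])
    return out
```

Splitting the table this way keeps the control board's entry count independent of the number of ports, since per-port fan-out happens on each board.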
Figure 10-31 IGMP protocol stack of the xPON multicast subtending port
[The OLT and the MxU each run an IGMP protocol stack: host (H) and router (R) instances on the OLT control board face the network and PON sides, and the MxU behind the PON board runs its own stack toward its users.]
[Figure: xPON two-level forwarding — a first-level table on the control board (index VLAN+GMAC, destination service board list) and a second-level table on the PON board (index VLAN+GMAC, destination PON port list).]
[Figure: MSTP ring networking — AN 1 holds the default multicast upstream port; one ring link is blocked; the ring ports of AN 2 and AN 3 serve as subtending ports.]
NOTE
l The root bridge must be the injection point of the multicast service.
l When using ETHB boards for upstream transmission, the device does not support MSTP
multicast.
In the event of a link or device failure, after MSTP selects a backup link, the multicast
VLAN-based IGMP protocol stack immediately sends the new root port (serving as the
multicast upstream port) the join message targeting the multicast group which the device
is interested in. In this way, fast recovery of the multicast service can be ensured.
[Figure: MSTP switching — the default multicast upstream port of AN 1 becomes faulty, and the new root port selected by MSTP serves as the multicast upstream port after switching.]
l RRPP
The mode of the multicast upstream port needs to be set to RRPP (system-level
configuration). In this case, the multicast upstream port of the device does not need to be
configured either; instead, the ring upstream port determined dynamically by RRPP serves
as the multicast upstream port. The RRPP master node, however, does not need to use the
RRPP multicast upstream port mode, but needs to be configured with the correct multicast
upstream port and multicast subtending port. In addition, the device ports on the ring need
to be configured as the multicast subtending ports. The actual multicast downstream ports
are determined by IGMP according to the multicast group membership table. Figure
10-35 shows the configuration of each role.
[Figure 10-35: RRPP ring networking — AN 1 (master node) holds the multicast upstream port; one ring link is blocked; the ring ports of AN 2 and AN 3 serve as subtending ports.]
NOTE
l The RRPP master node must be the injection point of the multicast service.
l When using ETHB boards for upstream transmission, the device does not support RRPP
multicast.
l The device supports only RRPP single ring, and the single ring must be a primary ring.
In the event of a link or device failure, after RRPP selects a backup link, the multicast
VLAN-based IGMP protocol stack immediately sends the new ring upstream port (serving
as the multicast upstream port) the join message targeting the multicast group which the
device is interested in. In this way, fast recovery of the multicast service can be ensured.
[Figure: upstream dual-homing — the access node (AN) sends the same report messages to routers R1 and R2 simultaneously.]
First, set the two access node ports connected to routers 1 and 2 as the multicast upstream ports
(the two ports must not be in the same aggregation group or protect group). After this setting,
when the access node transmits IGMP messages to router 1, it transmits the same IGMP messages
to router 2 at the same time. Then, router 2 can maintain the same multicast forwarding
entries as router 1 in a timely manner. Once switching occurs, router 2 already holds the
multicast forwarding entries and does not need to obtain them by other means, so multicast
services recover in a shorter time.
Note: If the router supports transferring the multicast forwarding entry using a proprietary
protocol, this can substitute for the upstream port dual-homing function. In this case, add the
two access node ports to one aggregation group. This function is more commonly used in actual
applications.
Prejoining a Program
The prejoin function is used to shorten the course of channel switching (reduce switching
latency), so as to improve user experience. Switching latency comprises the processing
consumption in each segment of a network, as shown in Figure 10-37. With the prejoin function
enabled, the network-side processing consumption (T1+T2) equals 0.
[Figure 10-37: switching latency is the sum of segments T1 through T5; with prejoin, the network-side segments T1 and T2 are eliminated, leaving only T3 through T5.]
The prejoin function applies to IGMP proxy scenarios. This function presumes that there are
always online users for a program.
l The flow of prejoining a program is the same as the flow of joining a program normally.
Once a multicast stream is successfully ordered, it is transmitted to the access node.
l In the flow of leaving a prejoined program, compared with the flow of leaving a normally
joined program, the access node does not transmit the leave message to the multicast router
even when the last multicast user leaves the program.
l In the flow of querying a prejoined program, compared with the flow of querying a normally
joined program, the access node responds to the multicast router's query as required by the
protocol regardless of whether or not the multicast group membership table of the program
contains a multicast user.
As described above, viewed from the router, there are always online users for a prejoined
program.
The prejoin function can be set for a program. In general, set the prejoin function for the program
that is most commonly ordered by users. A dynamic program does not support the prejoin
function.
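The three prejoin behaviors above (normal join, suppressed last leave, always-positive query response) can be sketched as a small state holder; the class and method names are mine:

```python
class PrejoinedProgram:
    """Leave and query handling for one program on the access node.

    With prejoin enabled, the node never sends the last leave upstream
    and always answers the router's query, so the router sees the
    program as permanently ordered."""

    def __init__(self, prejoin: bool):
        self.prejoin = prejoin
        self.users = set()

    def user_leaves(self, user) -> bool:
        """Return True if a leave message must be sent to the router."""
        self.users.discard(user)
        return not self.users and not self.prejoin

    def answer_query(self) -> bool:
        """Respond to the router's query if users exist or prejoined."""
        return bool(self.users) or self.prejoin
```

Keeping the stream pinned at the access node is what removes the network-side segments T1 and T2 from the switching latency.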
Second highest priority: map to the host IP address of the program (invalid for a
dynamic program, because the host IP addresses of dynamic programs are
unconfigurable).
Other solutions: Certain multicast routers can set their L3 interfaces to work in hybrid mode and
therefore process IGMP messages regardless of whether or not their source IP addresses are in
the same network segments as the IP addresses of their L3 interfaces.
[Figure: normal leave during channel switching — after the user joins program 2, the access node keeps forwarding program 1 until its entry ages, so the two program streams briefly overlap and their combined bandwidth can exceed the line's maximum bandwidth.]
Two IGMP messages are transmitted when channel switching occurs, one for leaving the
original multicast group and one for joining the new multicast group. Therefore, traffic of
two multicast groups exists on the subscriber line before the original multicast group is
stopped. If the subscriber line does not reserve sufficient bandwidth for carrying the traffic
of two multicast groups, traffic overflow (packet loss) will occur. For example, if video
streams are carried, fuzzy display will occur.
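The dimensioning rule implied here is simply that the line must briefly carry both streams at once. A one-line check (values are illustrative Mbit/s figures):

```python
def line_overflows(line_bandwidth: float,
                   old_program_bw: float,
                   new_program_bw: float) -> bool:
    """During normal-leave channel switching both streams are briefly on
    the subscriber line, so the line must carry their sum; return True
    if that sum overflows the line and packet loss is expected."""
    return old_program_bw + new_program_bw > line_bandwidth
```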
l Fast leave
Fast leave: When the device receives the leave message from a multicast user, it
immediately stops forwarding the messages of the user. Figure 10-39 illustrates the flow
of a fast leave.
[Figure 10-39: fast leave — the access node stops program 1 immediately upon the leave message and only then forwards program 2, so the maximum bandwidth of the line is never exceeded.]
[Figure: MAC-based fast leave — when STB 1 sends a leave message, only STB 1's entry is removed and program 1 is still forwarded to STB 2 (matched by MAC_STB2); when STB 2 then sends a leave message, the last entry is removed and no forwarding entry remains.]
In summary, the three leave modes have their advantages and disadvantages. The most
appropriate leave mode can be set for a multicast user according to actual situation.
The following configurations are recommended and can be adopted by different users according
to their house network topologies.
[Figure: multi-copy duplication — the OLT sends one copy of the multicast stream per ordering ONT through the splitter; single-copy duplication — the OLT sends a single copy to the GPON port, and the splitter broadcasts it to all ONTs.]
The following lists the differences between single-copy duplication and multi-copy
duplication.
l Bandwidth: With single-copy duplication, one GPON port carries only one copy of each
multicast stream. With multi-copy duplication, each multicast user has its own
multicast stream, so one GPON port may carry multiple streams.
l Security: With single-copy duplication, security depends on ONT filtering on the one
hand, and on the head-end and STB encryption system on the other. Multi-copy
duplication uses the GPON line AES-128 encryption system and the real-time key
conversion function, which is better than the common encryption system of the
head end.
l VLAN translation
If the carrier plans a home gateway at the user's house, the VLAN of the IPTV
service (also called the C-VLAN) generally needs to be planned. Because the OLT does not
directly support translation of the MVLAN, the operator can configure VLAN translation on
the ONT to meet the planning requirement (the OLT provides the corresponding CLI, and
the configuration is issued to the ONT through OMCI). The MVLAN of multicast streams
and IGMP messages can be handled in three ways: transparently transmitted, translated
to untagged, or translated to a specified VLAN.
l Controllable multicast
In single-copy duplication, GPON downstream multicast programs are broadcast, so once
an authorized multicast user orders a program, all users under the same GPON port can
receive it. Therefore, to implement complete rights control on the access device, the
OLT must configure the ONT to work in the "dynamic controllable" mode. In this mode,
the multicast filtering table (white list) on the ONT is issued by the OLT after
multicast control checking. If a downstream multicast program is not in the multicast
filtering table, the ONT cannot receive this multicast program.
If the ONT is configured to work in the "IGMP snooping" mode, the multicast filtering
table on the ONT is completely maintained by the ONT. In this case, multicast program
rights management is generally implemented by the encryption system of the IPTV
platform.
EPON Multicast
In addition to the multicast solution similar to GPON (referred to as MVLAN multicast or TR101
multicast), EPON of the device supports the China Telecom Corporation multicast mode
(referred to as CTC multicast). CTC multicast is based on IGMP, supplemented with certain
special requirements. Therefore, this document focuses on only the supplemented requirements
and the flow of cooperation with the ONU.
l IGMP snooping
Service provisioning
User management: For interconnection with the ONT, the device performs management
based on multicast user; for interconnection with the MxU, the device performs
management based on multicast subtending port.
Rights control: It is implemented by the IPTV platform and is independent of the device.
Program management: No special requirement is proposed.
IGMP message
CTC recommended network: OLT-IGMP proxy; ONU-IGMP snooping.
Query message: The OLT uses the MVLAN to issue query messages. For the ONT, different
from TR101 multicast, the OLT uses the C-VLAN to send query messages. For the MxU,
the requirements are the same.
Hardware forwarding
Duplication mechanism: The device supports single copy broadcast (SCB) only. On an
EPON port, the OLT distributes multicast contents to all ONUs through the broadcast LLID
channel in the SCB mode. The OLT uses a special MVLAN, which is isolated from other
services (including the upstream IGMP service).
Forwarding entry: The OLT uses the "MVLAN+GMAC" index and EPON port list; the
ONU uses the GIP index and maintains its entries through its IGMP snooping.
[Figure: CTC multicast join flow — the ONU receives an untagged IGMP join from the user, tags it with a VLAN equal to the port ID, and sends it to the OLT; the OLT issues an OAM PDU to the ONU and forwards the join upstream tagged with the MVLAN.]
After receiving a leave message, the OLT issues an OAM PDU to the ONU to delete the
corresponding multicast entry. In other words, the forwarding entries on the ONU are
completely controlled by the OLT. The preview times and preview duration are also
implemented in this way.
Other functions of the OLT in this solution are the same as those in the IGMP snooping
solution.
Hardware forwarding
Duplication mechanism: The device supports SCB only.
Forwarding entry: The OLT uses the "MVLAN+GMAC" index and EPON port list; the
ONU uses the "MVLAN+GMAC" index and its entries are maintained by the OLT through
OAM PDUs.
[Figure: multi-copy (unicast) duplication — multicast streams pass from the control board to the service board and reach the user over the unicast data channel.]
If the multicast duplication mode is unicast (multi-copy duplication), multicast streams are
forwarded on the control board according to the multicast forwarding entry and on the
service board according to the configured service port. The VLAN carried by the multicast
streams forwarded to the ONT is the C-VLAN. In this scenario, the inner VLAN is often
defined as the C-VLAN.
[Figure: the control board forwards the stream within the MVLAN; the service board translates it to the inner VLAN and sends it to the ONT over the unicast data channel.]
control board, IGMP messages are broadcast by the control board within the S-VLAN.
If transparent transmission is not allowed, IGMP messages are dropped.
In the downstream direction, unknown multicast data received from the network
side is broadcast by the control board within the S-VLAN. After the data arrives at
the service board, the service board determines whether to transparently transmit the
data according to the VLAN-level transparent transmission policy and the service-
port-level transparent transmission policy. If transparent transmission is allowed, the
service board translates the S-VLAN tag to the C-VLAN tag according to the configuration
of the traffic stream and then transmits the data to users. If transparent transmission is
not allowed, the unknown multicast data is dropped.
NOTE
To prevent the multicast data of the multicast user provisioned with multicast service from being
transmitted to an unauthorized upstream multicast source, make sure that the policy for
transmitting unknown multicast data is set to drop. The transparent transmission policies for
unknown multicast traffic have switches for two levels on a service board: the VLAN level and
the service port level. When the two switches are both set to transparent transmission, the policy
is transparent transmission. When either of the two switches is set to drop, the policy is drop.
(Only transparent transmission is supported for connection-oriented traffic and the policy is not
configurable in this case.)
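The two-switch rule in the note reduces to a logical AND; a trivial sketch (function name is mine):

```python
def unknown_mcast_policy(vlan_transparent: bool, port_transparent: bool) -> str:
    """Effective handling of unknown multicast on a service board:
    transparent transmission only when both the VLAN-level and the
    service-port-level switches allow it; otherwise drop."""
    return "transparent" if vlan_transparent and port_transparent else "drop"
```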
l Co-existence of IPTV service and transparent transmission of multicast data
Multi-service-port solution
IPTV service and multicast transparent transmission service are carried on two service
ports, and the S-VLAN of the service port that carries multicast transparent transmission
service must not be the MVLAN.
The service port that carries IPTV service processes the received IGMP messages
following the flow of processing IPTV service, and forwards the multicast data
according to the multicast forwarding entry.
The service port that carries multicast transparent transmission service transparently
transmits or drops the received IGMP messages according to the IGMP transparent
transmission policy of the traffic stream, and transmits or drops the received unknown
multicast data according to the unknown multicast transparent transmission policy for
the traffic stream.
Single-service-port solution
IPTV service and multicast transparent transmission service are carried on one service
port. The S-VLAN of the port must not be the MVLAN.
When the access node receives upstream IGMP messages, it matches the multicast
group address in IGMP messages to the programs in the MVLAN. If the group address
successfully matches a program, the access node processes the messages as IPTV
service. If the group address fails to match any program, the access node determines
whether to transparently transmit the messages according to the IGMP transparent
transmission policy of the S-VLAN and service port. The access node transparently
transmits the messages only when the IGMP transparent transmission policy is enabled
for both the S-VLAN and service port.
If the access node receives downstream IGMP messages that carry the MVLAN tag,
the access node processes the IGMP messages as IPTV service. If the messages carry
the S-VLAN tag, the access node forwards them according to the IGMP transparent
transmission policy of the S-VLAN and service port. The access node transparently
transmits the messages only when the IGMP transparent transmission policy is enabled
for both the S-VLAN and service port.
If the access node receives the multicast data of IPTV service, the access node forwards
the multicast data according to the multicast forwarding entry. If the multicast data is
unknown, the access node forwards the data according to the unknown multicast
transparent transmission policy of the VLAN and service port. The access node
transparently transmits the data only when the unknown multicast transparent
transmission policy is enabled for both the S-VLAN and service port.
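The single-service-port decision flow for upstream IGMP messages can be sketched as a classifier; the function name and the returned labels are illustrative:

```python
def classify_upstream_igmp(group: str, mvlan_programs: set,
                           vlan_transparent: bool,
                           port_transparent: bool) -> str:
    """Single-service-port handling of an upstream IGMP message:
    a group configured as an MVLAN program is processed as IPTV
    service; otherwise the two IGMP transparent-transmission switches
    decide between transparent transmission and dropping."""
    if group in mvlan_programs:
        return "iptv"
    if vlan_transparent and port_transparent:
        return "transparent"
    return "drop"
```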
[Figure: EPON FTTx MVLAN (TR101) multicast application — the OLT connects through an optical splitter and multicast routers to the multicast source; in FTTH, the ONT runs IGMP proxy; in FTTB/FTTC, the MxU runs IGMP proxy in multicast VLAN mode or IGMP snooping in global IP mode.]
Figure 10-47 EPON FTTx IGMP dynamic controllable multicast network application
[The OLT runs IGMP snooping and connects through an optical splitter and multicast routers to the multicast source; in FTTH, the ONT runs IGMP proxy; in FTTB/FTTC, the MxU runs OAM multicast in global IP mode.]
11.1 MSTP
The Multiple Spanning Tree Protocol (MSTP) is compatible with STP and RSTP.
11.2 RRPP
Rapid Ring Protection Protocol (RRPP) is a link-layer protocol specially used for protecting
Ethernet ring networks.
11.3 Smart Link and Monitor Link
Smart link is a solution applied in dual-uplink networks that provides reliable,
efficient backup and fast switching between the two uplinks. Monitor link, a
supplement to the smart link solution, is used to monitor the uplinks.
11.4 Ethernet Link Aggregation
Ethernet link aggregation refers to aggregation of multiple Ethernet ports to form one port to
provide higher bandwidth and link security.
11.5 Protection Group of Uplink Ports
After a protection group of upstream ports is configured, when the primary upstream port fails,
data can be transmitted upstream through the secondary upstream port. This
topic introduces this feature and describes its principle and reference documents.
11.6 BFD
The following topics provide the definition, purpose, specifications, principle, glossary, and
acronyms and abbreviations of BFD, as well as the standards that this feature complies
with.
11.7 GPON Type B Protection
This topic describes the introduction, principle, and reference documents of the feature GPON
Type B protection.
11.8 Type C Protection Switching of GPON Lines
This topic describes the type C protection switching of GPON lines.
11.1 MSTP
The Multiple Spanning Tree Protocol (MSTP) is compatible with STP and RSTP.
11.1.1 Introduction
Definition
The Spanning Tree Protocol (STP) applies to a loop network to realize path redundancy through
certain algorithms. STP prunes a loop network into a loop-free tree network, which avoids
proliferation and infinite looping of packets in the loop network.
The Rapid Spanning Tree Protocol (RSTP) is an improvement on STP. The rapidness of RSTP
relies on the greatly shortened delay for the designated port and the root port to turn into the
forwarding state in a certain condition. For details, see "Principle of RSTP" in "11.1.5
Principle." This helps to shorten the time for stabilizing the network topology.
The Multiple Spanning Tree Protocol (MSTP) is compatible with STP and RSTP.
Purpose
Although STP can prune a loop network into a loop-free network, it fails to transit fast. Even a
port in a point-to-point link or an edge port has to wait double Forward Delay time before it can
turn into the forwarding state.
RSTP features fast convergence; however, like STP, RSTP still has the following defects:
l All the bridges in a local area network (LAN) share the same spanning tree, so redundant
links cannot be blocked on a per-VLAN basis.
l The packets of all the VLANs are forwarded along the same spanning tree. Therefore, load
sharing of data traffic cannot be implemented between VLANs.
MSTP can be a remedy to the defects of STP and RSTP. It not only realizes fast convergence,
but also enables traffic of different VLANs to be forwarded along their respective paths. This
helps to provide a better load sharing mechanism for redundant links.
MSTP sets VLAN mapping tables (relation tables between VLANs and spanning trees) to
associate VLANs and spanning trees. MSTP divides a switching network into multiple regions.
Each region contains multiple spanning trees, and each spanning tree is independent from others.
MSTP prunes a loop network to a loop-free tree network to avoid proliferation and infinite loop
of packets in the loop network. It also provides multiple redundant paths for data forwarding to
realize load sharing of VLAN data during forwarding.
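The VLAN mapping tables can be sketched as a dictionary from VLAN to spanning-tree instance; the instance numbers and VLAN ranges below are illustrative, not product defaults:

```python
# Hypothetical MSTP VLAN-to-instance mapping: each spanning-tree instance
# carries its own set of VLANs, so their traffic can take different
# redundant paths and share the load.
vlan_to_instance = {}

def map_vlans(instance: int, vlans) -> None:
    """Associate a range of VLANs with one spanning-tree instance."""
    for v in vlans:
        vlan_to_instance[v] = instance

map_vlans(1, range(100, 200))
map_vlans(2, range(200, 300))

def instance_for(vlan: int) -> int:
    """Unmapped VLANs fall back to the common instance (0, the CIST)."""
    return vlan_to_instance.get(vlan, 0)
```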
11.1.2 Specifications
The MA5600T supports the following MSTP specifications:
l Loop protection
11.1.4 Availability
License Support
The MSTP feature is the basic feature of the MA5600T. Therefore, no license is required for
accessing the corresponding service.
Version Support
Product Version
Feature Dependency
Due to difference in protocols, RSTP and MSTP shall comply with the following limitations
when cooperating to realize fast transition:
Otherwise, when the network topology changes, fast transition of a port cannot be realized.
Hardware Support
The boards that support the MSTP feature are the SCUN board, the OPGD board, the ETHB
board, and the board in the GIU slot.
11.1.5 Principle
Principle of STP
STP determines the topology of a network by transmitting a certain special message
(configuration message as defined in IEEE 802.1D) between bridges. A configuration message
contains sufficient information to enable the bridge to complete the calculation of the spanning
tree.
The following defines the designated port and the designated bridge:
l For a bridge (such as bridge A), the designated bridge is a bridge that is directly connected
to bridge A and forwards data packets to bridge A. The designated port is the port in the
designated bridge through which the data packets are forwarded to bridge A.
l For a LAN, the designated bridge is a bridge that forwards data packets to the LAN. The
designated port is the port in the designated bridge through which the data packets are
forwarded to the LAN.
[Figure: switch A (ports AP1, AP2) connected in a loop to switch B (priority 1, ports BP1, BP2) and switch C (priority 2, ports CP1, CP2).]
1. In network initialization, all the bridges work as the root bridge of the spanning tree.
2. The designated port of a bridge sends its configuration messages at the hello-time
interval. If the port that receives a configuration message is a root port,
the bridge increments the message age carried in the configuration message
and starts a timer to age the configuration message.
3. If a path fails, the root port on this path no longer receives new configuration
messages, and the old configuration messages are discarded due to timeout. This
triggers recalculation of the spanning tree. A new path is then created to replace
the faulty path and restore network connectivity.
The new configuration messages produced by the recalculation, however, do not immediately
spread throughout the entire network. The old root ports and designated ports that have not
yet discovered the topology change keep forwarding data along the old paths. If the newly
selected root ports and designated ports forwarded data immediately, a temporary loop could
be created.
Therefore, STP adopts a state transition mechanism. That is, the root port and the designated
port have to experience a transition state before they can re-forward data. The transition state
turns into the forwarding state upon Forward Delay. This delay guarantees that the new
configuration message has spread throughout the entire network.
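As a rough sketch (hypothetical class and port names, not device code), the delayed state transition described above can be modeled as follows:

```python
import time

# Hedged sketch of the STP state transition described above: a newly
# selected root/designated port passes through the listening and learning
# states, waiting one Forward Delay in each, before it forwards data.
FORWARD_DELAY = 15  # seconds, the IEEE 802.1D default

class StpPort:
    def __init__(self, name):
        self.name = name
        self.state = "blocking"

    def become_designated(self, delay=FORWARD_DELAY):
        # Two Forward Delay intervals elapse before data is forwarded,
        # giving new configuration messages time to spread network-wide.
        for state in ("listening", "learning"):
            self.state = state
            time.sleep(delay)
        self.state = "forwarding"

port = StpPort("ge0/1")
port.become_designated(delay=0)  # delay=0 so the demo runs instantly
print(port.state)  # forwarding
```

This illustrates why restoring connectivity takes at least twice the Forward Delay, as noted in the defects below.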
Defects of STP
l In the case of a topology change or link failure, a port has to wait twice the Forward Delay
before it can move from the blocking state to the forwarding state. Therefore, at least twice
the Forward Delay (typically tens of seconds) is required to restore network connectivity
after a topology change.
l The entire bridged LAN uses a single spanning tree instance. Therefore, on a large network,
convergence may take longer and the topology may change frequently.
Principle of RSTP
RSTP is an improvement on STP. Its rapid convergence comes from greatly shortening the delay
for the designated port and the root port to enter the forwarding state under certain conditions.
This shortens the time required to stabilize the network topology.
l First improvement:
The alternate port and backup port are set for rapid switching of the root port and
designated port.
When the root port fails, the alternate port quickly switches to the new root port and
turns into the forwarding state without delay.
When the designated port fails, the backup port quickly switches to the new designated
port and turns into the forwarding state without delay.
l Second improvement:
In a point-to-point link connected with two switching ports, a designated port turns into
the forwarding state without delay after one handshake with the downstream bridge.
In a shared link connected with at least three bridges, the downstream bridge does not
respond to the handshake request sent from the upstream designated port, and the
designated port has to wait double Forward Delay time before it turns into the
forwarding state.
l Third improvement:
A port that is directly connected to a terminal and is not connected to any other bridge
is defined as an edge port. The edge port can directly turn into the forwarding state
without delay.
Because a bridge does not know whether a port is directly connected to a terminal, the
edge port must be configured manually.
The bridges that adopt RSTP are compatible with the bridges which adopt STP. The bridges that
adopt RSTP can identify both STP and RSTP packets and apply them to calculation of the
spanning tree.
Defects of RSTP
Although RSTP features fast convergence, it shares a defect with STP: all the bridges in a LAN
share a single spanning tree, so the packets of different VLANs cannot be forwarded along
separate, balanced paths. Furthermore, the packets of some VLANs may not be forwarded at all.
Principle of MSTP
MSTP can compensate for the defects of STP and RSTP. It not only realizes fast convergence,
but also enables traffic of different VLANs to be forwarded along their respective paths. This
helps to provide a better load sharing mechanism for redundant links.
MSTP uses VLAN mapping tables (tables that map VLANs to spanning trees) to associate
VLANs with spanning trees. MSTP divides a switching network into multiple regions. Each
region contains multiple spanning trees, and the spanning trees are independent of one another.
Multiple spanning trees can run on each bridge to forward the packets of different VLANs.
MSTP divides the entire L2 network into multiple spanning tree (MST) regions. These regions
and the other bridges and LANs are connected into a single common spanning tree (CST).
Multiple spanning trees are created in a region through calculation. Each spanning tree is defined
as a multiple spanning tree instance (MSTI). MSTI 0 is defined as an internal spanning tree
(IST). MSTP connects all bridges and LANs with a single common and internal spanning tree
(CIST), which consists of the CST and the IST. Like RSTP, MSTP calculates the spanning tree
according to configuration messages; the configuration message, however, also carries the
MSTP information of the bridge.
l Calculation of CIST
Select a bridge with the highest priority within the entire network as the CIST root by
comparing the configuration messages.
In each MST region, MSTP creates an IST through calculation. Meanwhile, MSTP
regards each MST region as a single bridge, and then creates a CST between regions.
The CST and the IST form the CIST that connects all the bridges in a bridged network.
l Calculation of MSTI
In an MST region, MSTP creates different MSTIs for different VLANs according to the
mapping relation between the VLANs and the spanning tree instances. Each spanning tree
is calculated independently. The process is similar to that in which the RSTP calculates the
spanning tree.
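The VLAN mapping tables described above can be sketched as follows. This is a minimal illustration with hypothetical VLAN IDs, port names, and a simplified per-instance state table, not the MA5600T configuration model:

```python
# VLAN mapping table: VLAN ID -> MST instance (MSTI). Each instance keeps
# its own spanning-tree result, so different VLANs can forward along
# different paths over the same physical links.
vlan_to_msti = {10: 1, 20: 1, 30: 2, 40: 2}

# Per-instance forwarding decision for a port (True = forwarding).
# MSTI 0 is the IST; unmapped VLANs fall back to it.
port_state = {
    ("ge0/1", 0): True,
    ("ge0/1", 1): True,
    ("ge0/1", 2): False,  # this port is blocked in MSTI 2 only
}

def forwards(port, vlan):
    msti = vlan_to_msti.get(vlan, 0)  # default to the IST (MSTI 0)
    return port_state.get((port, msti), False)

print(forwards("ge0/1", 10))  # True  (VLAN 10 -> MSTI 1, forwarding)
print(forwards("ge0/1", 30))  # False (VLAN 30 -> MSTI 2, blocked)
```

The key point is that the same physical port can be forwarding for one set of VLANs and blocked for another, which is what enables load sharing over redundant links.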
Besides the basic functions of MSTP, the MA5600T provides some special functions, such as:
l BPDU protection
For an access device, the access port is generally connected to a terminal (such as a PC) or
file server. In this case, the access port is set to an edge port for the purpose of fast transition.
When an edge port receives a configuration message (BPDU), it automatically becomes a non-
edge port; the spanning tree is then recalculated and the topology changes accordingly.
In normal conditions, an edge port should not receive STP configuration messages. If the bridge
is maliciously attacked with forged configuration messages, the network topology can be
disrupted. The BPDU protection function prevents such network attacks.
After the BPDU protection function is enabled on the MA5600T, if an edge port receives
a configuration message, the system shuts down the edge port, and notifies the network
management system of the related information. Only network administrators can enable
the port that is shut down.
It is recommended that you enable the BPDU protection function on the MA5600T which
is configured with an edge port.
l Root protection
Because of wrong configurations by the maintenance personnel or malicious network
attacks, a legal root bridge in the network may receive a configuration message with a
higher priority. In this case, this root bridge may become a non-root bridge and the topology
changes accordingly. Such illegal change results in transfer of traffic in high-speed links
to low-speed links, thus causing network congestion.
The root protection function is a solution to this problem.
When the root protection function is enabled on a port, the port can only be a designated port.
If the port receives a configuration message with a higher priority and would thus become a
non-designated port, the port turns into the listening state and stops forwarding packets (that is,
the link connected to the port is effectively disconnected). If the port does not receive a
configuration message with a higher priority within a certain period, the port returns to the
normal state.
l Loop protection
A bridge maintains the states of its root port and other blocked ports by continuously
receiving BPDUs from the upstream bridge.
In the case of link congestion or failure, these ports fail to receive BPDUs from the upstream
bridge. The bridge then re-selects its root port: the previous root port becomes a designated
port, and the blocked ports transition to the forwarding state. As a result, loops are created in
the switching network.
The loop protection function is a solution to this problem.
When the loop protection function is enabled, if the root port switches to a non-root port, it
turns into the discarding state, and the blocked ports remain in the discarding state. Therefore,
no packets are forwarded and no loop is created in the network.
After receiving BPDUs (excluding TCN packets) again, a port under loop protection processes
the packets normally, re-selects its role, and resets its forwarding state. The port is not
permanently blocked.
11.2 RRPP
Rapid Ring Protection Protocol (RRPP) is a link-layer protocol specially used for protecting
Ethernet ring networks.
11.2.1 Introduction
Definition
Most metropolitan area networks (MANs) and enterprise networks adopt a ring topology to
provide high reliability. In a ring topology, the failure of any node on the ring does not affect
services. The following introduces some known ring network technologies.
l SDH/SONET ring
Synchronous digital hierarchy (SDH) and synchronous optical network (SONET) are ring
technologies widely used in current transport networks and support single ring and multiple
rings. SDH/SONET feature high reliability because they provide an automatic protection
switching (APS) self-healing mechanism in case of a fault.
Due to the point-to-point (P2P), circuit-switched design of SDH/SONET rings, bandwidth is
fixedly allocated and reserved on the P2P links between nodes and cannot be adjusted to match
actual traffic conditions. This hampers efficient bandwidth utilization and makes it difficult for
SDH/SONET networks to adapt to bursty IP data services.
Broadcast and multicast packets in SDH/SONET ring networks are fragmented and transmitted
as multiple unicast packets, which wastes bandwidth. In addition, the APS feature requires as
much as 50% of the bandwidth to be reserved for redundancy, and no flexible protection
selection mechanism is available.
l RPR ring
Resilient Packet Ring (RPR) is a MAC layer-based protocol researched and standardized
by the IEEE 802.17 working group and the RPR Alliance. RPR is used on ring topologies. The
RPR design targets a closed-loop, P2P, MAC layer-based logical ring topology.
Viewed from the physical layer, an RPR is a set of P2P links; viewed from the data link layer,
RPR is more like a broadcast-medium network similar to Ethernet.
RPR requires dedicated hardware support and involves complicated fairness algorithms.
l STP ring
Spanning Tree Protocol (STP) is a standard ring protection protocol developed by IEEE and is
in wide application. However, STP rings in actual applications are restricted by network scale,
and the convergence time depends on the network topology. The convergence time is
unsatisfactory when the network diameter is large. In such cases, STP rings may fail to carry
data that has high requirements on transmission quality.
Rapid Ring Protection Protocol (RRPP) is a link-layer protocol dedicated to Ethernet ring
protection. RRPP is free from the problems above: bandwidth waste, dedicated hardware
requirements, and slow convergence. On a complete Ethernet ring, RRPP prevents broadcast
storms caused by data loops. When a link on the Ethernet ring breaks, RRPP rapidly recovers
the communication channels between the nodes on the ring.
Purpose
To enable faster convergence and mitigate the impact of network scale on the convergence speed,
Huawei developed RRPP, a link-layer protocol specially designed for Ethernet ring protection.
Compared with other Ethernet ring network technologies, RRPP has the following advantages:
l Its convergence time is independent of the number of nodes on the ring, so RRPP applies
to networks with a large diameter.
l Prevents broadcast storms caused by data loops when the Ethernet ring is complete.
l Rapidly starts the backup link to recover the communication channel between the nodes
on the Ethernet ring when the ring has a link break.
11.2.2 Specifications
The specifications of the RRPP feature are as follows:
l The system supports only one RRPP domain, and supports only the RRPP single-ring
network topology.
l If one link on the RRPP ring fails, the service interruption duration on the RRPP ring is
within 200 ms.
11.2.4 Availability
License Support
This feature is provided without a license.
Version Support
Table 11-2 lists the versions that support the RRPP feature.
Hardware Support
The optical ports on the GIU/SCU/ETHB board support the RRPP protocol when the ports are
used for upstream transmission.
Limitations
l The RRPP protocol cannot be enabled at the same time with the PS, LACP, or MSTP
protocol.
l Currently, the RRPP feature is supported by upstream Ethernet ports but is not supported
by xPON ports.
l When working in the active/standby mode, the SCU control boards do not support the
Ethernet ports respectively on the active and standby control boards to function as RRPP
ports.
l Only upstream ports support configurable network-side roles for RRPP.
l The control VLAN must not run other L2 or L3 services than the RRPP service.
l Ethernet optical ports support the RRPP protocol. When the upstream Ethernet ports of the
system are electrical ports, RRPP supports protection switching on the basis of seconds.
l When the system goes upstream through the ETHB board and RRPP is enabled in the
system, the system does not support multicast service.
l When the RRPP ring is activated, multicast upstream port and multicast subtending port
do not need to be configured on the upstream port. The configuration sequence of the nodes
on an RRPP ring is as follows: First, configure all transit nodes and activate the transit
nodes; then, configure the master node and activate the master node.
l Suppression of unknown unicast packets affects the recovery time of RRPP ring switching.
11.2.5 Principle
RRPP Domain
An RRPP domain is uniquely identified by an integer ID and consists of a set of interoperating
switches that are configured with the same domain ID and the same control VLANs. One node
supports only one domain.
l RRPP ring
l Control VLAN
l Master node
l Transit node
RRPP Ring
An RRPP ring physically corresponds to a ring-connection Ethernet topology. An RRPP domain
is built on multiple interconnected RRPP rings, among which there is a primary ring and the rest
are secondary rings. The primary ring and secondary rings are identified by levels specified
during configuration. The primary ring is identified by level 0 and secondary rings by level 1.
An RRPP ring is also identified by an integer ID. Currently, the MA5600T supports only one
RRPP ring in an RRPP domain.
Control VLAN
Each RRPP domain has two control VLANs. One is the primary control VLAN and the other
the secondary control VLAN. The protocol packets of the primary ring are transmitted in the
primary control VLAN, and the protocol packets of the secondary ring are transmitted in the
secondary control VLAN.
During configuration, you only need to specify the ID of the primary control VLAN, and a
VLAN whose ID is larger than the primary control VLAN by 1 will serve as the secondary
control VLAN. The ports of the primary control VLAN and the secondary control VLAN must
not be configured with IP addresses.
The RRPP port on the primary ring must belong to the primary control VLAN and the secondary
control VLAN at the same time; the RRPP port of the secondary ring only needs to belong to
the secondary control VLAN.
The primary ring is treated as a logical node of the secondary rings and the packets of the
secondary rings are transparently transmitted by the primary ring. The packets of the primary
ring are transmitted only within the primary ring and are not transmitted to the secondary rings.
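The primary/secondary control VLAN rule above can be sketched as follows. This is a minimal illustration; the range check is an assumption based on the valid 802.1Q VLAN ID space (the secondary ID, primary + 1, must itself still be a valid VLAN ID):

```python
# Sketch of the control-VLAN rule: the user specifies only the primary
# control VLAN; the secondary control VLAN is always primary + 1.
def control_vlans(primary_vid):
    # 802.1Q VIDs run 1-4094; primary + 1 must also stay in range.
    if not 1 <= primary_vid <= 4093:
        raise ValueError("primary control VLAN out of range")
    return primary_vid, primary_vid + 1

print(control_vlans(100))  # (100, 101)
```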
Master Node
The master node is the policy-making and controlling node on an RRPP ring. Each RRPP ring
must have one and only one designated master node. The master node initiates the polling
mechanism (a mechanism for actively checking the ring status), and also determines and
implements the policies after the network topology changes.
A master node has three states:
l Complete state: If the master node can receive its own hello packets on the secondary port,
it indicates that the ring is complete. In this case, the ring is in the complete state.
l Failed state: If the master node does not receive its own hello packets within the specified
time, it considers that a link on the ring network is down. In this case, the master node
opens its secondary port to forward data, and the ring is in the failed state.
l Unknown state: When the RRPP ring is not enabled, the ring is in the unknown state.
Transit Node
All nodes except the master node on a ring can be called transit nodes. Transit nodes monitor
the status of the RRPP links that are directly connected to them, and notify the master node of
the link state change. Then the master node will decide how to handle the changes.
A transit node has three states:
l Link-up state: The primary port and secondary port of the transit node are up.
l Link-down state: The primary port or secondary port of the transit node is down.
l Preforwarding state (temporarily blocked state): The primary port or secondary port of the
transit node is blocked. When the link of the port of a link-down transit node goes up, the
transit node changes to the preforwarding state and blocks the recovered port. When the
transit node in the preforwarding state receives a packet instructing an unblock, or when
the fail timer of the domain where the transit node is located expires, the transit node
unblocks the blocked port.
l The range of a hello timer is 1-10s, and the range of a fail timer is 3-30s. The configured
length of the fail timer must be at least three times the length of the hello timer.
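The timer constraints above can be expressed as a simple validation sketch (the function name is illustrative, not a device command):

```python
# Sketch of the stated timer constraints: hello timer 1-10 s, fail timer
# 3-30 s, and the fail timer must be at least three times the hello timer.
def validate_rrpp_timers(hello_s, fail_s):
    if not 1 <= hello_s <= 10:
        raise ValueError("hello timer must be 1-10 s")
    if not 3 <= fail_s <= 30:
        raise ValueError("fail timer must be 3-30 s")
    if fail_s < 3 * hello_s:
        raise ValueError("fail timer must be >= 3 x hello timer")
    return hello_s, fail_s

print(validate_rrpp_timers(2, 6))  # (2, 6)
```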
RRPP Packet
Packet Type
Table 11-3 lists the types of RRPP packets.
The RRPP packet carries the following fields in order: Destination MAC Address, Source MAC
Address, EtherType, PRI, VLAN ID, Frame Length, DSAP/SSAP, CONTROL, OUI,
RRPP_LENGTH, RRPP_VER, RRPP_TYPE, DOMAIN_ID, RING_ID,
SYSTEM_MAC_ADDR, HELLO_TIMER, FAIL_TIMER, HELLO_SEQ, and RESERVED
padding (0x00 bytes).
l Destination MAC Address: 48 bits. It indicates the destination MAC address of the packet.
l Source MAC Address: 48 bits. It indicates the source MAC address of the packet.
l EtherType: 16 bits. It is the packet encapsulation type field and is always 0x8100 (indicating
a tagged frame).
l PRI: 4 bits. It indicates the class of service (CoS) priority.
l VLAN ID: 12 bits. It indicates the ID of the VLAN to which the packet belongs.
l Frame Length: 16 bits. It indicates the Ethernet frame length.
l DSAP/SSAP: 16 bits. It indicates the destination service access point/source service access
point.
l CONTROL: 8 bits.
l OUI: 24 bits.
l RRPP_LENGTH: 16 bits. It indicates the length of the RRPP protocol data unit.
l RRPP_VERS: 16 bits. It indicates the RRPP version.
l DOMAIN_ID: 16 bits. It indicates the ID of the RRPP domain to which the packet belongs.
l RING_ID: 16 bits. It indicates the ID of the RRPP ring to which the packet belongs.
l SYSTEM_MAC_ADDR: 48 bits. It indicates the bridge MAC address of the node sending
the packet.
l HELLO_TIMER: 16 bits. It indicates the timeout time of the hello timer used by the node
sending the RRPP packet. The timer is in the unit of second.
l FAIL_TIMER: 16 bits. It indicates the timeout time of the fail timer used by the node
sending the RRPP packet. The timer is in the unit of second.
l HELLO_SEQ: 16 bits. It indicates the sequence number of the hello packet.
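As a rough illustration, some of the RRPP-specific fields listed above can be packed with Python's struct module. The selection and order of fields here are illustrative only, based on the stated widths, and do not claim to be the authoritative on-wire layout:

```python
import struct

# Packs a few RRPP PDU fields using their stated widths: 16-bit fields
# big-endian, the bridge MAC as 6 raw bytes. Illustrative, not a full
# or authoritative encoder.
def pack_rrpp_fields(domain_id, ring_id, system_mac,
                     hello_timer, fail_timer, hello_seq):
    return struct.pack("!HH6sHHH", domain_id, ring_id, system_mac,
                       hello_timer, fail_timer, hello_seq)

pdu = pack_rrpp_fields(1, 1, bytes.fromhex("00e0fc123456"), 1, 3, 42)
print(len(pdu))  # 16
```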
Polling Mechanism
In the polling mechanism, the master node periodically transmits a HELLO packet from its
primary port to check the ring network. If the master node receives this packet back on its
secondary port, the ring network is complete. If the master node does not receive the packet
within the specified period, it considers that a link fault has occurred on the ring network,
unblocks its secondary port, and allows it to forward packets. This is the basic mechanism of
RRPP.
The polling mechanism is a mechanism that the master node of the RRPP ring actively checks
the health status of the ring network. Its process is as follows:
1. The master node transmits the HELLO packet periodically from its primary port according
to the value of the HELLO timer.
2. The HELLO packet is transmitted over the ring network by passing every transit node on
the ring network.
l After transmitting the HELLO packet, if the master node can receive this packet on its
secondary port before the Fail timer times out, the master node considers that the ring
network is complete.
l After transmitting the HELLO packet, if the master node cannot receive this packet on
its secondary port after the Fail timer times out, the master node considers that the ring
network is faulty.
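The complete/failed decision described above can be sketched as a small state machine (illustrative class and method names, not MA5600T code):

```python
# Sketch of the master-node polling logic: a hello packet returning on
# the secondary port before the fail timer expires means the ring is
# Complete (secondary port blocked); a fail-timer expiry means the ring
# is Failed (secondary port unblocked to open the backup path).
class MasterNode:
    def __init__(self):
        self.state = "Unknown"          # ring not yet enabled
        self.secondary_blocked = False

    def hello_received_on_secondary(self):
        self.state = "Complete"
        self.secondary_blocked = True   # block to prevent a data loop

    def fail_timer_expired(self):
        self.state = "Failed"
        self.secondary_blocked = False  # open the backup path

m = MasterNode()
m.hello_received_on_secondary()
print(m.state, m.secondary_blocked)  # Complete True
m.fail_timer_expired()
print(m.state, m.secondary_blocked)  # Failed False
```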
When a link fault occurs on the ring network, the nodes on the ring perform the following
operations:
1. If a link fault occurs over the ring network, the state of the port connecting to the link is
changed to Down.
2. The transit node transmits the LINK-DOWN packet actively and immediately to the master
node to notify the master node of the link state change.
3. After receiving the LINK-DOWN packet, the master node considers that the ring network
is faulty and unblocks its secondary port. At the same time, the master node transmits the
packets to other transit nodes to notify them of flushing their FDBs.
4. After other transit nodes flush the FDB, data streams are switched to the normal links.
1. When a link fault is rectified, the port of the transit node goes up again.
2. The transit node temporarily blocks the recovered port; however, the HELLO packets
transmitted from the master node can pass through the temporarily blocked port.
3. When the master node receives, on its secondary port, the HELLO packet that it sent, it
considers that the ring network has recovered to the healthy state.
4. The master node blocks its secondary port and transmits a packet to the other transit nodes
to notify them to unblock the temporarily blocked ports and flush their FDBs.
Ring Polling
1. When all links in the entire ring network are up, the RRPP ring is in a healthy state. The
state of the master node reflects the health condition of the entire ring network.
2. When the ring network is healthy, the master node needs to block its secondary port to
prevent data loops. Data loops will cause a broadcast storm.
3. The master node periodically sends hello packets from its primary port. The hello packets
traverse the transit nodes and finally return to the master node by its secondary port.
Link-down Alert
1. When the RRPP port of a transit node has a link-down, the transit node notifies the master
node by a link-down packet, as shown in Figure 11-8.
2. After receiving the link-down packet, the master node immediately changes from the
complete state to the failed state and unblocks its secondary port.
The polling mechanism also covers the case in which the link-down packet is lost in
transmission. If the master node does not receive hello packets on its secondary port before
the fail timer expires, the master node likewise considers that the ring network has failed.
This condition is handled in the same way as a transit node actively reporting link-down.
3. Because the network topology has changed, to prevent packets from being misdirected, the
master node also flushes its own FDB table and sends the COMMON-FLUSH-FDB packet
from its primary port and secondary port to all transit nodes so that the transit nodes can
flush their FDB tables. Figure 11-9 illustrates the process.
Link Restoration
1. When the RRPP port of the transit node recovers, the transit node changes to the
preforwarding state and blocks the recovered port.
2. The master node periodically sends hello packets from its primary port. After all the faulty
links on the ring network recover, the master node will receive the hello packets on its
secondary port again.
3. After the master node receives the hello packets that it sent, the master node first changes
back to the complete state and blocks its secondary port.
4. The master node sends the COMPLETE_FLUSH_FDB packet from its primary port to
notify all transit nodes to flush their FDB tables. Figure 11-10 illustrates the process.
Normal Links
Figure 11-11 shows an RRPP single-ring topology. In normal conditions, data flows travel the
"Transit 1 -> Transit 2 -> Master" route on the RRPP ring. If the link between Transit 1 and
Transit 2 fails, the data flows will be rerouted on the RRPP ring.
Faulty Links
As shown in Figure 11-12, when the link between Transit 1 and Transit 2 fails, the master node
receives a link-down notification and immediately unblocks its secondary port. The network
topology has now changed, and the original MAC address tables of the nodes can no longer
correctly guide forwarding, so L2 and L3 service streams are interrupted. After unblocking its
secondary port, the master node immediately instructs all the other nodes (transit nodes) on the
ring to re-learn MAC address entries and ARP entries. After the entries are re-learned, the L2
and L3 service streams on the RRPP ring are rerouted to "Transit 1 -> Transit 3 -> Master".
Glossary
Term Explanation
RRPP ring Each RRPP ring physically corresponds to an Ethernet network in a ring
topology. An RRPP ring is also identified by an integer ID.
Master node A master node initiates the polling mechanism (a mechanism for
actively checking the ring status), and also determines and
implements the policies after the network topology changes.
Transit node A transit node monitors the status of the RRPP links that are directly
connected to the node, and notifies the master node of the link state
change. Then the master node decides how to handle the changes.
Primary/Secondary port The master node and transit nodes each connect to the Ethernet ring
through two ports: one primary port and one secondary port. The
port roles are user-configurable. The primary port and secondary
port of the master node function differently. A master node sends
hello packets from its primary port. If the master node receives the
hello packets on its secondary port, the RRPP ring where the master
node is located is complete, and the master node blocks its secondary
port to prevent a data loop. Conversely, if the master node does not
receive the hello packets within a specified time, the ring is faulty,
and the master node unblocks its secondary port to ensure normal
communication between all nodes on the ring. The primary port and
secondary port of a transit node function the same.
11.3.1 Introduction
Definition
Smart link is a solution for networks with dual uplinks; it provides reliable, high-efficiency
backup and fast switching between the two uplinks.
Purpose
A network with dual uplinks is currently a common application. In such a network, the redundant
link can be blocked by the Spanning Tree Protocol (STP) or Rapid Spanning Tree Protocol
(RSTP) to provide a backup function. In this way, when the active link fails, traffic is switched
to the standby link.
The preceding two solutions (STP and RSTP) meet customers' functional requirements for
redundancy backup, but cannot meet many users' performance requirements.
Thus, the smart link solution is applied to the access network. With this solution, redundancy
backup between active and standby links and fast switching are implemented for a dual-homing
network, ensuring high reliability and fast convergence. Meanwhile, as a supplement to the
smart link solution, the monitor link solution is introduced to monitor uplinks, which improves
the backup capability of the smart link solution.
Benefits
Benefits to Operators
Implementation of the smart link solution and the monitor link solution provides high reliability
for carriers' networks.
11.3.2 Specifications
l Active-standby working mode and load sharing working mode for the smart link feature
l Up to 16 monitor link groups
l Up to 16 downlinks in one monitor link group
11.3.3 Availability
Related NEs
The smart link and monitor link features, which apply to networks with dual uplinks (networks
connected to the upstream IP network through two uplinks), are related to the OLT and the
upstream network device.
The upstream network device such as the router must support the smart link and monitor link
features.
NOTE
The smart link and monitor link features are put forth by Huawei. Currently, only Huawei devices support this
technology.
License Support
The smart link and monitor link features are basic features of the MA5600T. Therefore, no
license is required for accessing the corresponding service.
Version Support
Miscellaneous
When the device needs to process the FLUSH packet to update the MAC and ARP entries, the
following conditions should be met:
l The port that receives the FLUSH packet must be configured as the receive port.
l The corresponding VLAN and check password of the local device must be the same as the
VLAN and check password of the upstream network device.
11.3.4 Principle
Smart Link
This topic describes the working principle of the smart link feature.
Basic Concepts
A smart link protect group can work in the following two modes:
l Active-standby working mode
l Load sharing working mode
Figure 11-13 shows the active-standby working mode of a smart link protect group.
The following provides some concepts related to the smart link feature:
l Smart link group
A smart link group is also called an intelligent link group, which contains up to two ports,
namely, one master port and one slave port. In normal conditions, only one port is in the
ACTIVE state, and the other port is blocked and in the STANDBY state. When the port in
the ACTIVE state fails, the smart link group automatically blocks the port, and switches
the previously standby port to the ACTIVE state. As shown in Figure 11-13, ports 1 and
2 form a smart link group.
l Master port
The master port, which is also called the work port, is a port role in the smart link group.
When both ports are in the STANDBY state, the master port is preferentially switched to
the ACTIVE state. The master port, however, is not always in the ACTIVE state: if the
slave port is already in the ACTIVE state after a link switchover, the master port stays in
the STANDBY state even after its link recovers, and remains in this state until the next
switchover. For example, port 1, in the ACTIVE state in Figure 11-13, is the master port.
l Slave port
The slave port, which is also called the protect port, is a port role in the smart link group.
When both ports are in the STANDBY state, the master port is preferentially switched to
the ACTIVE state, and the slave port remains in the STANDBY state. The slave port,
however, is not always in the STANDBY state: it switches to the ACTIVE state after a link
switchover occurs on the master port. Port 2 in Figure 11-13 is the slave port.
l FLUSH packet
After a link switchover occurs in the smart link group, the original forwarding entries no
longer apply to the new topology, and the upstream convergence device needs to update its
MAC and ARP entries. The smart link group therefore notifies the other devices on the
network to update their address tables by sending a notification packet. This notification
packet is the FLUSH packet.
Figure 11-14 shows the load sharing working mode of a smart link protect group.
Figure 11-14 Load sharing working mode of a smart link protect group
In the load sharing working mode, the links of both ports are enabled. If both ports are normal,
some services are transmitted through the master port and the others are transmitted through the
slave port. When either of the ports fails, all the services are transmitted through the port in the
normal state.
Working Principle
Figure 11-15 shows the working principle of the smart link feature.
l Switching
When the link of port 1 fails, port 1 switches to the STANDBY state and port 2 switches
to the ACTIVE state. When the original active link recovers from the fault, it remains in
the blocked state and does not occupy bandwidth. This ensures stability of traffic.
l Update
When link switching occurs in the smart link group, the MAC and ARP entries on the
devices on the network may be incorrect. Therefore, a new mechanism for updating the
MAC and ARP entries is required. Currently, there are the following two mechanisms
available for updating the MAC and ARP entries:
The smart link device automatically updates the MAC and ARP entries through traffic.
The smart link device sends the FLUSH packet through the new link to update the MAC
and ARP entries.
When the device supports the first mechanism, bidirectional traffic is required to trigger
the update. This mechanism is applicable to the scenario where the device interoperates
with devices from other vendors. When the device supports the second mechanism, the
upstream device is required to identify the FLUSH packet of smart link and to update the
MAC and ARP entries.
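The switching and update behavior described above can be modeled as follows (a minimal Python sketch for illustration only; the class and method names are invented, and this is not device code):

```python
class SmartLinkGroup:
    """Minimal model of a two-port smart link group (illustrative only)."""

    def __init__(self):
        # The master port starts ACTIVE; the slave port starts STANDBY.
        self.state = {"master": "ACTIVE", "slave": "STANDBY"}
        self.flush_sent = False

    def on_link_fault(self, port):
        """Block the faulty ACTIVE port and activate the peer port."""
        other = "slave" if port == "master" else "master"
        if self.state[port] == "ACTIVE":
            self.state[port] = "STANDBY"   # block the failed port
            self.state[other] = "ACTIVE"   # switch the standby port over
            self.send_flush()              # ask upstream to refresh MAC/ARP

    def on_link_recover(self, port):
        """A recovered link stays blocked until the next link switching."""
        pass  # non-revertive: the currently ACTIVE port keeps the traffic

    def send_flush(self):
        # Stand-in for emitting a FLUSH packet on the newly active link.
        self.flush_sent = True
```

Note that `on_link_recover` deliberately does nothing: as described above, the recovered link stays blocked and does not occupy bandwidth, which keeps traffic stable.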
Monitor Link
This topic describes the working principle of the monitor link feature.
Basic Concepts
The following describes some basic concepts related to the monitor link feature.
l Monitor link group
A monitor link group is composed of one uplink and several downlinks.
NOTE
The link in a monitor link group may not be a single link, but may be a certain type of link group. The
uplink can be an aggregation group or protect group. The downlink can only be a single link. The status
of the downlink changes according to the status of the uplink.
l Uplink
When the uplink in a monitor link group fails, it indicates that the monitor link group fails.
In this case, the downlinks in the monitor link group will be blocked by force.
l Downlink
When a downlink in a monitor link group fails, it does not affect the uplink or the other
downlinks.
Working Principle
After a monitor link group is configured, its uplink will be monitored in real time. Once the
uplink fails, all the UP downlinks in the monitor link group will be blocked by force. When the
uplink recovers from the fault, the downlinks are resumed.
When the uplink is an aggregation group or protect group, the uplink is considered failed only
when the entire aggregation group or protect group fails.
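The forced blocking rule above can be sketched as follows (illustrative Python; the function and state names are assumptions, not device code):

```python
def downlink_states(uplink_member_links_up, downlink_links_up):
    """Illustrative model of a monitor link group (not device code).

    The uplink may itself be an aggregation or protect group; it is
    considered failed only when every member link is down. In that case
    all UP downlinks are blocked by force; when the uplink recovers,
    the downlinks resume their own link states.
    """
    uplink_ok = any(uplink_member_links_up)
    if uplink_ok:
        # Uplink healthy: each downlink simply reflects its own state.
        return ["UP" if up else "DOWN" for up in downlink_links_up]
    # Uplink failed: force-block every downlink that is still up.
    return ["BLOCKED" if up else "DOWN" for up in downlink_links_up]
```

For example, with one of two uplink members still up, the downlinks are untouched; with all members down, every UP downlink becomes BLOCKED.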
The MA5600T works as the OLT. Ports 3 and 4 on the MA5600T are added to a smart link
group and work in the active/standby mode. Port 1 on device 1 is configured as the uplink
of the monitor link group, and port 2 on device 1 as the downlink.
In normal conditions, traffic is transmitted through the path highlighted in green. If the uplink
of device 1 fails, the downlink in the monitor link group will be blocked by force. In this case,
on the MA5600T, port 4 switches to the ACTIVE state because the link of port 3 fails, and
traffic is transmitted to device 2 and then to the upstream network.
If the monitor link group is not configured on device 1, the channel between device 1 and the
MA5600T is still in the ACTIVE state when the channel between device 1 and device 3 fails.
Thus, the user traffic will be transmitted to device 1 from the MA5600T. As a result, the user
cannot access the network.
Glossary
Table 11-5 Glossary of the terms related to the smart link and monitor link features
Term Description
FLUSH packet: After link switching occurs on the smart link group, the original
forwarding entry is not applicable to the network of the new topology. In this case, the
FLUSH packet is transmitted to the upstream convergence device to notify the device to
update the MAC and ARP entries.
11.4.1 Introduction
Definition
The link aggregation group (LAG) aggregates multiple physical links to form a logical link with
a greater rate to transmit data. Link aggregation works between directly connected devices
and is not relevant to the architecture of the entire network. In an Ethernet network, a link
maps to a port; therefore, link aggregation is also called port aggregation.
The Link Aggregation Control Protocol (LACP) is the control protocol specified in the
IEEE 802.3ad standard for implementing link aggregation. Through LACP, the Ethernet ports
of different devices can be automatically aggregated without intervention from the user, and
link layer failures of the ports can be detected, so as to implement control over link
aggregation.
Purpose
A link aggregation group provides the following function: when a selected link fails,
the system selects a link in the standby state to serve as the selected link, so as to protect
against link failure.
11.4.2 Specifications
The Ethernet link aggregation of the MA5600T supports the following specifications:
11.4.4 Availability
Relevant NE
To enable the link aggregation feature to take effect, the device that is connected to the
MA5600T must support link aggregation. The device that is connected to the MA5600T refers
to the upstream device, namely the network-side device, such as an L2 switch, L3 switch, or
router.
License Support
The Ethernet link aggregation feature is a basic feature of the MA5600T. Therefore, the
corresponding service is provided without a license.
Version Support
Product Version
Feature Dependency
l Only ports of the same type (same port type, working mode, and rate) can
be aggregated into a link aggregation group.
l Boards (ETHB and OPGD) supporting inter-board aggregation must be in neighboring
slots. In addition, before configuring inter-board aggregation, make sure that the two service
boards are bound.
l Supports creation of protect groups between the aggregation group of the active control
board and that of the standby control board. Note that boards other than control boards
do not support creation of protect groups between aggregation groups.
l Supports creation of protect groups within the same aggregation group. The aggregation
group in a protect group, however, cannot contain a port that is not added to the protect group.
Hardware Support
The SCUN, ETHB, OPGD, and GIU boards support the intra-board Ethernet link
aggregation feature.
11.4.5 Principle
Introduction to LACP
The Link Aggregation Control Protocol (LACP) is based on the IEEE 802.3ad standard and
provides the following functions:
l Provides a standard negotiation mode for devices exchanging data. The system generates
an aggregation link automatically based on its configurations and enables the aggregation
link to receive and transmit data.
l After being generated, the aggregation link maintains the link state. In addition, when
aggregation conditions change, the aggregation link is adjusted or dismissed automatically.
LACP aggregates links between device A and device B in the following steps:
1. Device A exchanges LACP packets with device B through port 1, port 2, port 3, and port
4. The LACP packet contains the system priority, MAC address, port priority, port ID, and
operation key. The operation key reflects the aggregation capability of a port, which is
determined by various factors such as physical features (including rate and duplex),
configuration restrictions set by the network administrator, and features and limitations of
ports.
2. After receiving LACP packets sent from device A, device B compares the information
about LACP packets with the information saved by other ports and then selects the ports
that can be aggregated.
3. After receiving LACP packets sent from device B, device A compares the information
about LACP packets with the information saved by other ports and then selects the ports
that can be aggregated.
4. Device A and device B determine the same ports that can be aggregated into an aggregation
group and thus a link aggregation group is generated, as shown in Figure 11-19.
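The selection step in the procedure above can be sketched as follows (an illustrative simplification of the IEEE 802.3ad selection logic, not the full state machine; the field names are invented):

```python
from collections import defaultdict

def select_aggregatable_ports(ports):
    """Illustrative simplification of 802.3ad member selection.

    Ports can aggregate together when they share the same local operation
    key and have learned the same partner system and partner key from the
    peer's LACP packets; the largest such group forms the LAG.
    """
    groups = defaultdict(list)
    for p in ports:
        signature = (p["key"], p["partner_system"], p["partner_key"])
        groups[signature].append(p["port"])
    return max(groups.values(), key=len)

ports = [
    {"port": 1, "key": 10, "partner_system": "B", "partner_key": 7},
    {"port": 2, "key": 10, "partner_system": "B", "partner_key": 7},
    {"port": 3, "key": 10, "partner_system": "B", "partner_key": 7},
    {"port": 4, "key": 11, "partner_system": "B", "partner_key": 8},
]
# Ports 1-3 advertise matching keys and partners, so they aggregate;
# port 4 is left out.
```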
After an aggregation link is generated, member ports in the aggregated link have two states:
selected and standby. The selected and standby states are states of the aggregated ports
maintained at the LACP protocol layer, not the physical states of the ports. If the physical states
of the ports change, the states of the ports at the LACP protocol layer also change. For example,
if an aggregated port is faulty, the state of the port at the LACP protocol layer will change to the
standby state.
Not only a state change of the physical port but also the exchange of LACP data units
(LACPDUs) can result in a change in the state of a port at the LACP protocol layer. For example,
the status of a port may change when it receives an LACPDU from the peer end.
In this way, LACP improves the security of link aggregation. The following lists the aggregation
link states that can be checked.
LACP also supports such mechanisms as system priority, port priority, and short or long timeout.
l System priority
In LACP, the system priority is used for controlling the master/slave relation of the
interconnected devices. The slave device must select the selected port according to the
selection result of the master device. Otherwise, the two devices cannot be
interconnected with each other.
l Port priority
The port priority is used for selecting the master port and the slave port.
l Timeout
To guarantee the LACP check sensitivity, IEEE 802.3ad defines two timeout periods:
short timeout and long timeout. The two timeout values can be adjusted. A device cannot
use the short timeout to exchange information with the peer device unless the peer device
notifies the device of using the short timeout. Otherwise, the device always uses the
long timeout to exchange and transmit information.
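The timeout negotiation rule above can be sketched as follows (illustrative Python; the 3 s short timeout and 90 s long timeout are the usual IEEE 802.3ad values, stated here as an assumption about the MA5600T):

```python
def effective_timeout_s(local_requests_short, peer_signals_short):
    """Illustrative rule: a device may use the short timeout only after
    the peer signals that it uses the short timeout; otherwise the long
    timeout applies. The 3 s / 90 s values are the common IEEE 802.3ad
    defaults, assumed here for illustration."""
    SHORT_TIMEOUT_S, LONG_TIMEOUT_S = 3, 90
    if local_requests_short and peer_signals_short:
        return SHORT_TIMEOUT_S
    return LONG_TIMEOUT_S
```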
The MA5600T supports the following timeout values:
Active/Standby Mode
Here, take aggregation of two ports on the control board as an example. When the active/standby
mode is adopted, only one link in the aggregation group is in the selected state and this link
carries traffic, and the other links in the aggregation group are in the standby state. This constructs
a hot standby mechanism. When a selected link in an aggregation group fails, the system will
select a link in the standby state to serve as the selected link, so as to protect against link failure,
as shown in Figure 11-21.
l Links L1 and L2 are members of LAG1.
l Link L1 is in the selected state and carries traffic.
l Link L2 is in the standby state and does not carry traffic, but constructs a hot standby
mechanism with link L1.
l When link L1 fails, the system uses link L2 as the selected link.
l If the two GIU boards work in the normal state, the traffic streams from the service board
are shared according to the MAC address carried in the packets. That is, the traffic streams
are allocated to all the aggregated ports on the boards.
l If the port on one GIU board fails (Link down), the traffic stream carried on the failed port
will be switched to the normal port on the other board.
l If one GIU board fails, the traffic stream carried on the faulty board will be switched to the
normal board.
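The MAC-based load sharing and failover behavior above can be sketched as follows (an illustrative hash; the actual distribution algorithm of the forwarding chip is not documented here, and the port names are invented):

```python
def select_member_port(dst_mac, members):
    """Pick the aggregated port that carries a frame, by MAC address.

    members is a list of (port_name, link_up) tuples. Only ports whose
    link is up are candidates, so traffic on a failed port is
    automatically redistributed to the surviving ports.
    """
    up_ports = [port for port, link_up in members if link_up]
    mac_sum = sum(int(octet, 16) for octet in dst_mac.split(":"))
    return up_ports[mac_sum % len(up_ports)]
```

With both GIU ports up, frames spread across the ports by MAC address; with one link down, everything moves to the surviving port, as described above.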
If carriers provide different services for users in different periods, these services are connected
to different MANs through different upstream ports on the OLT (here refers to the MA5600T).
The services include the Internet access service for common users, Internet private line service
for enterprise users, NGN service for common users, VPN interconnection service for enterprise
users, and IPTV service for common users. As shown in the figure, the OLT must provide at
least four groups of ports because the OLT is connected to four types of upstream devices. If
each inter-board aggregation group contains two ports, at least eight GE ports are required. In
this way, both high bandwidth and high reliability are ensured.
The ETHB board supports the upstream transmission of inter-board link aggregation, and the
SCUN and GIU boards support inter-board link aggregation. Inter-board aggregation transmits
data upstream to the same L2/L3 device or upstream to different L2/L3 devices in dual homing,
as shown in Figure 11-25.
Figure 11-25 Network topology of the upstream transmission of inter-board link aggregation
11.5.1 Introduction
Definition
The protection group of upstream ports is a group that contains the upstream ports
of the active and standby control boards when the active and standby control boards are
each configured with two upstream ports. In this way, switching is performed
according to the status of the upstream ports to guarantee that the uplinks work in the normal
state.
Purpose
A protection group of upstream ports implements the port backup function of the devices at the
NE end and provides the upstream backup of services provisioned to the users.
11.5.2 Specifications
The MA5600T supports the following specifications of a protection group of upstream ports:
11.5.3 Availability
Availability
l Hardware support
The SCUN board supports a protection group of upstream ports.
l License support
A protection group of upstream ports is the basic feature of the MA5600T. Therefore, no
license is required for accessing the corresponding service.
11.5.4 Principle
The protection group of upstream ports is implemented in the following two modes:
l PortState mode, which applies when the upstream ports are provided by the control boards.
l TimeDelay mode, which applies when the upstream ports are provided by the boards in the
GIU slots.
PortState Mode
The active and standby SCU boards connect to the active and standby PE devices at the upper
layer respectively through the GE ports. Configure the GE ports of the active and standby SCU
boards into a protection group. Figure 11-29 shows the working principle of the PortState mode.
1. When the working ports of the active SCU board fail, the system automatically checks the
upstream ports of the standby SCU board.
2. If the number of upstream ports of the standby SCU board that work in the normal state
is greater than that of the active SCU board, and the data of the active and standby SCU
boards is fully synchronized, the system switches the service to the standby SCU board to
implement protection switching of the upstream ports.
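The switching condition above can be summarized in a short sketch (illustrative only; the function name and inputs are invented):

```python
def should_switch_to_standby(active_ok_ports, standby_ok_ports, data_synced):
    """Illustrative PortState-mode rule: services move to the standby SCU
    board only when it has MORE healthy upstream ports than the active
    board AND the data of the two boards is fully synchronized."""
    return standby_ok_ports > active_ok_ports and data_synced
```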
TimeDelay Mode
The TimeDelay mode is implemented by port check. Figure 11-30 shows the working principle
of the TimeDelay mode.
l When the active uplink works in the normal state, it connects to the active PE, and the
standby uplink is disabled.
l When the uplink connecting to the active PE fails, and the active control board detects the
failure, the active control board quickly enables the standby uplink. In this way, the service
switches to the standby uplink, thus ensuring the backup of uplinks.
11.6 BFD
The following topics provide the definition, purpose, specifications, principle, glossary, and
acronyms and abbreviations of BFD. They also provide the standards that this feature complies
with.
11.6.1 Introduction
Definition
Bidirectional forwarding detection (BFD) is defined in a standard draft of the Internet
Engineering Task Force (IETF). It detects a link or the traffic forwarding capability of a system
by quickly and periodically sending BFD control packets (UDP packets in a specific format)
between two nodes.
BFD detects the link at the receive end. If no BFD packet is received within the detection
period, the link is considered interrupted.
Purpose
In a traditional network, an interruption of services at the routing layer can be detected
only through the hello mechanism of a dynamic routing protocol. The hello mechanism was
designed for early, low-speed networks; hence, the unit of its timer is the second.
In current high-speed networks, the network delay and device delay are much smaller, and a
shorter detection period is preferred. BFD provides superior detection: it can detect an
error in the forwarding path in a short time and then trigger a switchover of the standby route,
the port, or even the entire network.
BFD can be used to monitor the Ethernet, multiprotocol label switching (MPLS) label switching
path (LSP), generic routing encapsulation (GRE), IPSec tunnel, and other transmission types.
BFD improves the stability of IP applications, such as real-time voice traffic, to provide a
stable network for carriers.
BFD adopts a simple fault detection method to quickly detect forwarding faults and thus
supports transmission with high quality of service (QoS), such as voice, video, and other
on-demand services. Carriers can provide the voice over IP (VoIP) service and other real-time
services with superior stability and high applicability.
11.6.2 Specifications
l The control board can be configured with up to 32 BFD sessions.
l Each BFD session can be configured with a packet sending interval of 10-1000 ms.
l Each BFD session can be configured with a packet receiving interval of 10-1000 ms.
l Each BFD session can be configured with a detection multiplier of 3-50.
l Up to 6 static routes can be configured for the same destination.
l Up to 6 static routes can be bound with BFD.
11.6.3 Principle
Figure 11-32 shows the BFD principle.
Configure a BFD session between router A and the control board. The session establishment
process is a three-way handshake, during which the related parameters are negotiated
and the sessions on both sides are enabled. Subsequent status changes are processed according
to the defect detection result.
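The detection timing implied by the specifications above can be sketched as follows (based on the general asynchronous-mode BFD model; the exact negotiation behavior of the MA5600T is an assumption):

```python
def bfd_detection_time_ms(peer_tx_interval_ms, local_rx_interval_ms,
                          detect_multiplier):
    """Illustrative asynchronous-mode detection time.

    The effective receive interval is the slower of what the peer can
    transmit and what the local end can receive; the session is declared
    down after detect_multiplier such intervals pass with no packet.
    """
    negotiated_ms = max(peer_tx_interval_ms, local_rx_interval_ms)
    return detect_multiplier * negotiated_ms
```

With the minimum configurable values listed in the specifications (a 10 ms interval and a multiplier of 3), a fault would be detected within about 30 ms.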
Figure 11-33 shows the session status conversion.
In the automatic switchover mode, when the R1 route is disabled, the route is switched to the
R2 route automatically. When the R1 route is enabled again, the route is switched back to the
R1 route automatically.
Glossary
Term Description
Asynchronous mode: The BFD control packet is sent periodically between the systems. If the
system cannot receive the BFD control packet from the peer end within the
detection time, the system disables the session.
11.7.1 Introduction
Definition
GPON port 1+1 backup is a Type B port protection solution defined in ITU-T Recommendation
G.984.1, providing redundancy protection for ports and optical fibers.
Purpose
GPON Type B protection provides redundancy protection for ports and optical fibers. This
ensures high reliability of the device.
11.7.2 Specifications
The MA5600T supports the following specifications of GPON Type B Protection.
l Backup for two ports on one board or on two boards
l MA5600T: Up to 56 port protect groups for each shelf
l Service interruption time shorter than 50 ms during an automatic switchover
l Causes that trigger an automatic switchover: hardware failure, optical fiber break, quality
deterioration of lines, or hot swapping of the active control board
11.7.4 Availability
Availability
l Hardware Support
All GPON access boards support this feature.
l License Support
GPON Type B protection is a basic feature of the MA5600T. Therefore, no license is
required for accessing the corresponding service.
Other
The boards that provide the ports in one protect group must be of the same type, because different
boards may support different features.
11.7.5 Principle
The ITU-T G.984.1 defines four types of protection switching from Type A to Type D. This
topic describes the principle of Type B protection switching.
Figure 11-34 shows the network of GPON Type B protection. Type B protection protects the
active and standby PON ports on the OLT and the active and standby optical fibers between the
OLT and the optical splitter.
When the active PON port works in the normal state, the standby PON port receives optical
signals going upstream from the ONU. When no signals are transmitted upstream because the
optical fiber connected to the active PON port is cut or the GMAC chip works abnormally, the
standby PON port detects the optical signal interruption and performs associated processing
immediately.
The following describes the protection switching in two different scenarios.
l Scenario 1: The active optical fiber is cut when the active PON port is working, as shown
in Figure 11-35.
1. After entering the standby state, the standby PON port enables detection of upstream
optical signals.
2. When detecting a loss of signal (LOS) alarm (generated due to the active optical fiber
cut), the active PON port disables the transmission of its optical transceiver.
3. After detecting the LOS alarm of the active PON port, the standby PON port enables
the transmission of its optical transceiver and performs the ONU detection.
4. When the optical fiber connected to the standby PON port is in the normal state and
ONUs are discovered, the standby PON port reports an LOS clear alarm.
5. The active PON port switches to the standby state and enables detection of upstream
optical signals. The standby PON port switches to the active state. The protection
switching is now complete.
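The step sequence in Scenario 1 can be modeled as a simple event walk-through (illustrative Python; the port names are invented, and this is not device firmware):

```python
def type_b_switchover(active="PON-A", standby="PON-B"):
    """Walks the Scenario 1 steps above as a simple event sequence."""
    log = [
        f"{standby}: monitoring upstream optical signals",
        f"{active}: LOS alarm detected, optical transmitter disabled",
        f"{standby}: transmitter enabled, ONU detection started",
        f"{standby}: ONUs discovered, LOS clear alarm reported",
    ]
    # Roles swap: the old active becomes standby and starts monitoring
    # upstream optical signals; the old standby now carries the service.
    active, standby = standby, active
    log.append(f"{active}: now ACTIVE; {standby}: now STANDBY")
    return active, log
```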
l Scenario 2: All ONUs connected to the active PON port go offline, as shown in Figure
11-36.
1. After entering the standby state, the standby PON port enables detection of upstream
optical signals.
2. When detecting an LOS alarm (generated because all ONUs go offline), the active PON
port disables the transmission of its optical transceiver.
3. After detecting the LOS alarm of the active PON port, the standby PON port enables
the transmission of its optical transceiver and performs the ONU detection.
4. The OLT keeps checking for ONUs on the active and standby PON ports until it detects
an ONU going online.
5. After the ONU goes online, no switching is performed between the PON ports.
11.8.1 Introduction
Definition
The types of the GPON line protection group are defined in ITU-T Recommendation
G.984.1. For the GPON line, the Recommendation proposes four protection switching types,
among which is type C. Type C protection switching refers to the protection of both the
backbone optical fiber and the tributary optical fiber.
The GPON type C protection switching is implemented through the redundancy configuration
of the PON port on the OLT, PON port on the ONU, backbone optical fiber, optical splitter, and
tributary optical fiber. That is, each item is in a dual configuration.
Purpose
The type C protection switching ensures higher reliability of devices. The PON port on the OLT,
PON port on the ONU, backbone optical fiber, optical splitter, and tributary optical fiber are in
redundancy protection. As such, when any part fails, the system can automatically switch the
service to the other optical path. The type C protection switching has two types, automatic
switching and manual switching.
Benefits to Carriers
The GPON type C protection switching brings remarkable benefits to carriers.
l It ensures higher reliability. When any part on the line fails, the system can automatically
detect the fault and switch the service to the other optical path, thus implementing automatic
service recovery.
l It serves as a basis for implementing load balancing in the future, which realizes better
bandwidth usage of the lines; at the same time, the ONU can provide higher upstream
bandwidth.
11.8.2 Specifications
The maximum number of protection groups is 56.
The two members of a protection group can be in the intra-board protection or the inter-board
protection.
11.8.4 Availability
Involved NE
This feature requires the OLT to work with the ONU.
l The ONU that is connected to the GPON port in the protection group must provide two
PON ports for upstream transmission.
l The OLT and the ONT must comply with ITU-T G.984.
License Support
The GPON type C protection switching is a basic feature of the MA5600T. Therefore, no license
is required to access the corresponding service.
Version Support
Feature Dependency
The dependency of the GPON type C protection switching is as follows: After an ONU connected
to a GPON port is added to a type C protection group, the GPON port is not allowed to join a
type B protection group or a type B dual-homing protection group, and vice versa.
Hardware Support
l The board that supports inter-board type C protection switching is the H802GPBD.
l The ONU involved must also support the type C protection switching.
11.8.5 Principle
The GPON type C protection switching is implemented through the redundancy configuration
of the PON port on the OLT, PON port on the ONU, backbone optical fiber, optical splitter, and
tributary optical fiber. That is, each item is in the dual configuration mode. The protection can
be implemented on the OLT in two modes: between two PON MAC chips of the same PON
board, and between two PON ports on two PON boards.
l OLT: The active and standby PON ports on the OLT are both in the working state. The
OLT should ensure that the service information of the active PON port can be synchronized
and backed up to the standby PON port. Thus, during the protection switching, the standby
PON port can maintain the same service attributes for the ONU.
l Optical splitter: Two 1:N optical splitters are used.
l ONU: The ONU uses different PON MAC chips and different optical transceivers. The
ONU should ensure that the service information of the active PON port can be synchronized
and backed up to the standby PON port. Thus, during the protection switching of the PON
ports, the ONU can maintain the same local service attributes.
l The active and standby PON ports on the OLT are both in the working state (the ONU
registers with both PON ports on the OLT, and the OLT and the ONU can negotiate through
the standard and extended PLOAM messages). During the protection switching of the PON
ports, the initialization parameters and the service attributes of the ONU are not configured
on the standby PON port.
l Both the ONU and the OLT check the link status, and according to the link status determine
whether the switching is performed. If the OLT detects that the uplink of the active PON
port is faulty, the OLT automatically switches to the standby optical link and sends the PST
message through the standby optical link to inform the ONU and request the ONU to switch.
If the ONU detects that the downlink of the active PON port is faulty, the ONU
automatically switches to the standby optical link and sends the PST message to inform the
OLT of the switching and the reason for switching, requesting the OLT to switch.
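The dual-ended switching decision above can be sketched as follows (illustrative only; the return strings are paraphrases of the behavior, not actual device messages):

```python
def type_c_switch(olt_uplink_ok, onu_downlink_ok):
    """Illustrative type C decision: each end watches its own receive
    direction; whichever end detects a fault switches to the standby
    optical link and sends a PST message asking the peer end to switch
    as well."""
    if not olt_uplink_ok:
        return "OLT switches to standby link; PST message sent to ONU"
    if not onu_downlink_ok:
        return "ONU switches to standby link; PST message sent to OLT"
    return "no switching"
```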
G.984.1 specifies two types of conditions for triggering the switching of a protection group:
1. Forced switching
2. Automatic switching
The conditions triggering an automatic switching include the quality degradation alarm on the
upstream/downstream line, hardware fault, or the LOS, LOF, SF, SD, LCDG, or TF alarm on
the ONU. The protection group supports automatic recovery and automatic recovery hold time.
Automatic recovery means that the system automatically switches back to the original working
member line after the original working member line recovers from the fault.
11.9.1 Introduction
Definition
EPON port 1+1 backup is a TYPE B port protect solution defined in China Telecom Technical
Requirements on an EPON Device, providing redundancy protection to ports and optical fibers.
Purpose
EPON Type B protection provides redundancy protection for ports and optical fibers. Hardware
failures, fractures of optical fibers, or quality deterioration of lines trigger an automatic
switchover. This ensures high reliability of the device.
11.9.2 Specifications
l Backup for two ports on one board or on two boards
l MA5600T: Up to 56 port protect groups for each shelf
l Interruption duration shorter than 50 ms during an automatic switchover of the service
(layer 2 service)
l Causes that trigger an automatic switchover: hardware failure, optical fiber break, or quality
deterioration of lines
11.9.4 Availability
Availability
l Hardware Support
All EPON access boards support this feature.
l License Support
EPON Type B protection is a basic feature of the MA5600T. Therefore, no license is
required for the corresponding service.
Other
l The boards that provide the ports in one protect group must be of the same type, because
different boards may support different features.
l For the EPBD board, the two optical fibers used for the type B EPON line protection
must be of the same length to ensure in-service switching.
11.9.5 Principle
The document China Telecom Technical Requirements on an EPON Device defines TYPE A,
TYPE B, and TYPE C protection switching solutions. This feature is the TYPE B protection
switching solution.
Figure 11-38 shows the networking for EPON Type B protection.
12 Application Security
This topic provides general specifications, availability, and sub-features of the application
security.
12.1 Introduction
12.2 Relevant Standards and Protocols
12.3 Availability
12.4 HWTACACS
HWTACACS is a security protocol with enhanced functions based on TACACS (RFC1492).
Similar to the RADIUS protocol, HWTACACS implements AAA functions for multiple
subscribers by communicating with the HWTACACS server in the client/server (C/S) mode.
This topic provides the introduction, principle, and reference of the HWTACACS feature.
12.5 RAIO
This topic provides an introduction to the RAIO protocol and describes the working principle
of this feature.
12.6 PITP
This topic provides an introduction to PITP, including the PITP P mode and PITP V mode, and
describes the working principle of PITP.
12.7 DHCP option82
DHCP option82 is similar to PPPoE+ as a user security mechanism. The information on a user's
access location is added into the DHCP request packets initiated by a user for user authentication.
This topic provides introduction to this feature and describes the principle and reference
documents of this feature.
12.8 802.1X
IEEE 802.1X (hereinafter referred to as 802.1X) is a port-based network access control protocol.
12.9 Anti MAC Spoofing
This topic provides an introduction to the anti MAC spoofing feature and describes the working
principle of this feature.
12.10 Anti IP Spoofing
This topic provides an introduction to the anti IP spoofing feature and describes the working
principle of this feature.
12.11 User Isolation
This topic provides an introduction to the user isolation feature and describes the working
principle of this feature.
12.12 Line Security of the EPON System
12.13 Line Security of the GPON System
12.14 Glossary, Acronyms, and Abbreviations
12.1 Introduction
User security refers to the security mechanism that ensures the security of access users, including
the HWTACACS, RAIO, PITP, DHCP option 82, 802.1x, anti-MAC spoofing, anti-IP spoofing,
and user isolation features.
Feature Description
RAIO: Relay agent information option (RAIO) is the user physical location information provided by the device to the BRAS or DHCP server, such as the shelf ID, slot ID, and port ID on the device, when PITP and DHCP option 82 are enabled.
DHCP option 82: Adds the user physical location information to the option 82 field of the DHCP request packet initiated by the user, so that the upper-layer authentication server can perform user authentication.
Anti-MAC spoofing: The system guards against attacks from users who forge MAC addresses.
Anti-IP spoofing: The system guards against attacks from users who forge IP addresses.
User isolation: Users in different MUX VLANs, or users in one smart VLAN, cannot communicate with each other. Thus, user isolation is implemented at different layers.
Line security: The line security feature in PON access modes.
802.1X: IEEE Std 802.1X-2001: Port-Based Network Access Control
RAIO: TR101
Anti IP Spoofing: None
User Isolation: None
Line Security: Line security of the GPON system (AES128 encryption mechanism): T-REC-G.Imp984.3-200602-E
12.3 Availability
Related NEs
l PITP is used with RAIO, and the cooperation between the MA5600T and the BRAS (or
RADIUS server) is required. Table 12-1 lists the requirements for these NEs.
l DHCP option 82 is used with RAIO and the cooperation between the MA5600T and the
DHCP relay agent (or DHCP server) is required.
l The line security feature of a PON system requires cooperation between the OLT
(configured with PON boards) and the ONU. Table 12-3 lists the requirements for these
NEs.
License Support
l HWTACACS is a basic feature of the MA5600T. Therefore, no license is required to access
the corresponding service.
l RAIO is a basic feature of the MA5600T. Therefore, no license is required to access the
corresponding service.
l PITP is an optional feature of the MA5600T. The corresponding service is controlled by
the license.
l The 802.1x access authentication feature is a basic feature of the MA5600T. Therefore, no
license is required to access the corresponding service.
l DHCP option 82 is an optional feature of the MA5600T. The corresponding service is
controlled by the license.
l Anti-MAC spoofing is a basic feature of the MA5600T. Therefore, no license is required
to access the corresponding service.
l Anti-IP spoofing is a basic feature of the MA5600T. Therefore, no license is required to
access the corresponding service.
l User isolation (MUX VLAN and smart VLAN) is a basic feature of the MA5600T.
Therefore, no license is required to access the corresponding service.
l Line security is a basic feature of the MA5600T. Therefore, no license is required to access
the corresponding service.
Feature Dependency
l Either PITP P mode or PITP V mode can be enabled in the system at a time. That is, PITP
P mode and PITP V mode cannot be enabled simultaneously.
l A known Ethernet protocol type cannot be set as the protocol type of the PITP V mode.
Otherwise, conflict occurs.
l The user physical location information provided to the BRAS is determined by the RAIO
working mode.
l The MUX VLAN and the smart VLAN can co-exist in the system.
l When a port on the OPGD board serves as a subtending port, anti-MAC spoofing and anti-
IP spoofing do not take effect on the port.
Miscellaneous
l The user port must support PITP and DHCP option 82; however, PITP and DHCP option
82 are applicable to any access mode.
l RAIO is used with PITP and DHCP option 82, providing the format of the user physical
location information for PITP and DHCP option 82.
12.4 HWTACACS
HWTACACS is a security protocol with enhanced functions based on TACACS (RFC1492).
Similar to the RADIUS protocol, HWTACACS implements AAA functions for multiple
subscribers by communicating with the HWTACACS server in the client/server (C/S) mode.
This topic provides the introduction, principle, and reference of the HWTACACS feature.
12.4.1 Introduction
Definition
HWTACACS is a security protocol enhanced based on TACACS (RFC1492). Similar to the
RADIUS protocol, HWTACACS implements AAA functions for multiple users by
communicating with the HWTACACS server in the client/server (C/S) mode.
AAA is short for authentication, authorization, and accounting. It provides the following three
functions for users:
l Authentication: To authenticate the access right of the users and determine which users can
access the network.
l Authorization: To authorize the users to access certain services.
l Accounting: To keep a network resource usage record of the users.
Purpose
HWTACACS is used for the authentication, authorization, and accounting of the 802.1x access
users and administrators.
12.4.2 Specifications
The MA5600T supports the following HWTACACS specifications:
12.4.3 Principle
AAA
1. Authentication
The MA5600T supports three authentication modes: non-authentication, local
authentication, and remote authentication.
l Non-authentication: The MA5600T trusts users and does not check the validity of the
users. Generally, this mode is not adopted.
l Local authentication: The user information (including the user name, password, and
various attributes) is configured on the MA5600T, and the MA5600T authenticates the
user. This authentication mode is fast and can reduce carrier's cost; however, the amount
of information that can be stored is limited by the device hardware.
l Remote authentication: The user information (including the user name, password, and
various attributes of the user) is configured on an authentication server. The Remote
Authentication Dial In User Service (RADIUS) protocol or HUAWEI Terminal Access
Controller Access Control System (HWTACACS) protocol is used for remote
authentication. The MA5600T serves as the authentication client and communicates
with the RADIUS or HWTACACS server. When the RADIUS or HWTACACS server
is faulty, the MA5600T can automatically switch to local authentication.
2. Authorization
The MA5600T supports direct authorization, local authorization, HWTACACS
authorization, and if-authenticated authorization.
l Direct authorization: If trustful, a user can directly pass the authorization.
l Local authorization: A user is locally authorized according to relevant attributes of the
user configured on the MA5600T.
l HWTACACS authorization: The HWTACACS server authorizes a user.
l If-authenticated authorization: If a user passes the authentication and the authentication
mode is not non-authentication, the user passes the authorization.
3. Accounting
The MA5600T supports non-accounting and remote accounting.
l Non-accounting: A user is not charged.
l Remote accounting: The MA5600T supports remote accounting through the AAA
server.
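The authentication modes above can be illustrated with a small sketch of the remote-with-local-fallback behavior. The class and function names below are invented for illustration; the real MA5600T logic is internal to the device.

```python
class RemoteAaaServer:
    """Illustrative stand-in for a remote RADIUS/HWTACACS server."""

    def __init__(self, users: dict, reachable: bool = True):
        self.users, self.reachable = users, reachable

    def check(self, user: str, pwd: str) -> bool:
        if not self.reachable:
            raise ConnectionError("authentication server is faulty")
        return self.users.get(user) == pwd

LOCAL_USERS = {"admin": "local-pass"}  # illustrative local user table

def authenticate(user: str, pwd: str, server=None) -> bool:
    """Remote authentication, falling back to local when the server fails."""
    if server is not None:
        try:
            return server.check(user, pwd)  # RADIUS/HWTACACS query
        except ConnectionError:
            pass                            # server faulty: fall back
    return LOCAL_USERS.get(user) == pwd     # local authentication
```

When no server is configured, the function behaves as pure local authentication; when the server is unreachable, it silently falls back, mirroring the automatic switch described above.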
Before a user can obtain the right to access the Internet or access certain network resources,
the NE authenticates the user or the corresponding connection.
The NE sends the authentication, authorization, and accounting information of the user to the
RADIUS server. The RADIUS protocol specifies how the NE and the RADIUS server should
exchange the user information and the accounting information. The RADIUS server receives
the connection request of the user, authenticates the user, and sends the necessary configuration
information of the user to the NE. The exchange of authentication information between the NE
and the RADIUS server is key protected. This protects the user password against any interception
when the password is transmitted over an insecure network. Figure 12-1 shows the message
flow between the RADIUS client and the RADIUS server.
Figure 12-1 Message flow between the RADIUS client and the RADIUS server
NOTE
An NE refers to an access device that can function as a RADIUS client.
1. When a user logs in to the NE, the user name and password are sent to the NE.
2. The RADIUS client on the NE receives the user name and password, and sends an
authentication request to the RADIUS server.
3. The RADIUS server receives the legal request, authenticates the user, and sends the
necessary authorization information of the user to the RADIUS client.
The authentication information exchanged between the RADIUS client and the RADIUS server
must be encrypted before being transmitted over the network. Otherwise, the information may
be intercepted when the network is insecure.
The accounting message flow is similar to the authentication/authorization message flow.
The encryption mechanisms of HWTACACS and RADIUS differ as follows:
l HWTACACS: Encrypts the entire body of the packet except the standard HWTACACS packet header.
l RADIUS: Encrypts only the password field in the authentication packet.
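As a concrete illustration of the RADIUS side of this comparison, the sketch below shows how RFC 2865 (section 5.2) hides only the User-Password attribute; the shared secret and authenticator values used in practice come from the NE configuration and the request packet.

```python
import hashlib

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """Hide a RADIUS User-Password attribute (RFC 2865, section 5.2)."""
    # Pad to a multiple of 16 octets, then XOR each 16-octet block with
    # MD5(secret + previous ciphertext block), seeded by the
    # Request Authenticator.
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(secret + prev).digest()
        block = bytes(a ^ b for a, b in zip(padded[i:i + 16], digest))
        out += block
        prev = block
    return out
```

Because only this one attribute is hidden, the rest of a RADIUS packet travels in the clear, whereas HWTACACS encrypts the whole packet body.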
12.5 RAIO
This topic provides an introduction to the RAIO protocol and describes the working principle
of this feature.
12.5.1 Introduction
Definition
The Relay Agent Information Option (RAIO) is used for the device to provide the physical
information of a user, such as the shelves, slots, and ports on the device, to the BRAS or DHCP
server when the PITP and DHCP Option82 functions are enabled. In addition, the physical
information is contained in the following packets for transmission:
Purpose
Through the RAIO, the device provides the physical location information of a user to the BRAS
or DHCP server. In addition, the RAIO is used with PITP and DHCP Option82 to ensure the
security of the user account.
Benefits
For the carrier: The RAIO provides the carrier with flexible and customized features, which
facilitates proper network planning.
For users: The RAIO authenticates the binding relation between the physical information of a
user and the user account. This prevents theft of the password of the user account.
12.5.2 Specifications
The RAIO mainly includes the PITP tag and DHCP option 82 tag and is not standardized
currently. Therefore, different carriers may put forward different RAIO formats. To meet the
requirements of different carriers, multiple RAIO working modes are supported.
12.5.3 Principle
l ATM port: Device name atm shelf ID/slot ID/subslot ID/port ID:vpi.vci
l VDSL/LAN access mode: Device name eth shelf ID/slot ID/subslot ID/port ID:vlanid
l When the device name field is the default "MA5600T", the field is filled with the MAC address
of the device in the format "00E0FC000001", in uppercase letters.
l When the device name is not "MA5600T", fill the device name field with the actual name
of the device.
The RID format is usually used to identify the access information (local information) of a user.
Generally, the RID format is user-defined. In the case of the MA5600T, the RID format is null.
That is, the RID format contains only the Code and Len fields, but does not contain the Value
field.
An example of a RAIO field in the common mode is as follows:
l CID ----> 00E0FC112233 atm 0/12/0/49:0.35
l RID ----> Null
The syntax is keyword0m, where m is the number of occupied columns. For example,
slot03 indicates that the field width of the slot ID is 3, and 0s are added in front of the value
when it is shorter than 3 columns; slot 2 is carried as 002 in the packet. The value of m
cannot exceed the maximum width. If the number of columns occupied by the actual data
is greater than m, the data is output as is.
l VPI: Applies to the ATM access mode. It refers to the VPI of the service port. (Supported: Yes; maximum width: 4)
l VCI: Applies to the ATM access mode. It refers to the VCI of the service port. (Supported: Yes; maximum width: 5)
l Gemport: Applies to the GPON access mode. GEM port refers to the service port. (Supported: Yes; maximum width: 4)
l If the user defines the RAIO format according to the CID, the format character string must
contain the keyword ANID.
l The keywords of interface types identify the formats of different interface types.
l The keywords mapping to different interface types cannot exist in the same format character
string. For example, VPI and Gemport, or ETH and VCI cannot exist in the same format
character string.
l If no interface type is specified, the CID/RID field mapping to the interface type is null.
l A separator represents the corresponding symbol that the user inputs in the RAIO format
character string. The symbol that the separator represents is ultimately added to the CID/
RID. Table 12-9 lists the RAIO separators defined in the system.
Separator Symbol
. Period "."
: Colon ":"
/ Slash "/"
- Hyphen "-"
% Percent "%"
l Other rules
The length ranges from 1 to 127 characters, all of which are lowercase letters.
The CID character string must contain the keyword ANID.
The keyword ANID must exist before the keyword of the dependent interface type.
All the separators before the keyword ANID in the CID character string, the RAIO
separators (if any) in the system name that corresponds to ANID, and the separator
behind ANID are the basis for the downstream packet to identify and parse the keyword
ANID.
An example of a RAIO field in the user-defined mode is as follows:
Assume that:
l System name: DSLAM01
l Slot ID: 3
l Port ID: 15
l VPI: 0
l VCI: 35
l Priority: 6
The user-defined CID character string is: anid atm slot/port:vpi.vci%priority
The ultimate character string is: dslam01 atm 3/15:0.35%6
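The expansion of a user-defined format string can be sketched as follows. The function name is illustrative, not a CLI command; the keyword set, zero-padding rule, and lowercase rule follow the description above, and interface-type keywords such as "atm" and all separators pass through literally.

```python
import re

# Keywords that are replaced by configured values (illustrative subset).
KEYWORDS = ("anid", "priority", "slot", "port", "vpi", "vci")

def build_cid(fmt: str, values: dict) -> str:
    """Expand a user-defined RAIO CID format string.

    A keyword followed by 0m (e.g. slot03) is zero-padded to m columns;
    wider values are output as is. Everything else passes through.
    """
    pattern = re.compile(r"(%s)(?:0(\d))?" % "|".join(KEYWORDS))

    def expand(m):
        val = str(values[m.group(1)]).lower()
        return val.zfill(int(m.group(2))) if m.group(2) else val

    return pattern.sub(expand, fmt)

values = {"anid": "DSLAM01", "slot": 3, "port": 15,
          "vpi": 0, "vci": 35, "priority": 6}
print(build_cid("anid atm slot/port:vpi.vci%priority", values))
# dslam01 atm 3/15:0.35%6
```

With the padded form, build_cid("slot03", {"slot": 2}) yields "002", matching the padding example above.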
12.6 PITP
This topic provides an introduction to PITP, including the PITP P mode and PITP V mode, and
describes the working principle of PITP.
12.6.1 Introduction
Definition
Policy information transfer protocol (PITP) is a protocol for implementing policy information
transfer between the access device and the BRAS through layer 2 P2P communication. PITP,
including the PITP P mode and PITP V mode, is used to transfer the user physical port
information, namely, relay agent information option (RAIO).
l In the PITP V mode, the BRAS actively queries the user physical location information from
the access device.
l In the PITP P mode, the access device adds the user physical location information in the
PPPoE packets in the PPPoE discovery phase. This facilitates the user authentication by
the BRAS.
Purpose
The purpose of the PITP feature is to provide the user physical location information for the
upper-layer authentication server. After the BRAS obtains the user physical location
information, the BRAS binds the information to the user account for authentication to prevent
the user account from roaming or being forged.
Benefits
Benefits to carriers: With the PITP feature, carriers can provide highly reliable services to build
the brand and increase profit.
Benefits to users: With the PITP feature, the user physical location information is bound to the
user account to prevent the user account from theft.
12.6.2 Specifications
The MA5600T supports the following specifications for the PITP feature:
l Two PITP modes, namely, PITP P mode and PITP V mode, are supported.
l PITP is supported at three levels, namely, system level, port level, and service port level.
The access device provides the user physical location information to the BRAS only when
PITP is enabled at all three levels.
l By default, PITP is disabled at the system level, but is enabled at the port level and the
service port level.
12.6.3 Principle
Working Principle of the PITP P Mode
The PPPoE dialup process with the PITP P mode enabled is shown as Figure 12-2.
Figure 12-2 PPPoE dialup process with the PITP P mode enabled
(The figure shows the user, access node, BRAS, and RADIUS server. In the discovery phase, a tag is appended to the PADI/PADO/PADR/PADS packets exchanged between the access node and the BRAS. LCP negotiation and authentication follow: the BRAS sends a request packet with the user port information to the RADIUS server and receives an authentication pass packet, after which access is accepted and data transmission starts in the session phase.)
With the PITP P mode enabled, in the PPPoE discovery phase, the device adds the user physical
location information to the PPPoE packet sent from the user to cooperate with the upper-layer
server to complete user authentication. The other phases are the same as those of the common
PPPoE process.
Thus, major differences between the PPPoE dialup process with the PITP P mode enabled and
that without the PITP P mode enabled are as follows:
l In the PPPoE discovery phase, all the PPPoE packets exchanged between the MA5600T
and the BRAS contain the user physical location information. The MA5600T adds the user
physical location information to the PPPoE packet after receiving the packet from the user,
and forwards the packet to the BRAS. The MA5600T removes the information from the
PPPoE packet after receiving the packet from the BRAS, and forwards the packet to the
user.
l If the PPPoE user needs to be authenticated on the RADIUS server, the BRAS extracts the
user physical location information from the PPPoE packet that is sent from the
MA5600T and then adds the information to the authentication request packet for
authentication.
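The add-on-upstream, strip-on-downstream handling described above can be sketched with the generic PPPoE tag (type/length/value) layout. The TAG_RAIO value below is an assumption for illustration only, not necessarily the tag type PITP uses on the wire.

```python
import struct

TAG_RAIO = 0x0105  # placeholder tag type (assumption for illustration)

def add_tag(tags: bytes, cid: bytes) -> bytes:
    # Upstream (user -> BRAS): append the RAIO tag in the discovery phase.
    return tags + struct.pack("!HH", TAG_RAIO, len(cid)) + cid

def strip_tag(tags: bytes) -> bytes:
    # Downstream (BRAS -> user): remove the RAIO tag before forwarding,
    # keeping all other PPPoE tags intact.
    out, i = b"", 0
    while i + 4 <= len(tags):
        ttype, tlen = struct.unpack("!HH", tags[i:i + 4])
        if ttype != TAG_RAIO:
            out += tags[i:i + 4 + tlen]
        i += 4 + tlen
    return out
```

Stripping the tag on the downstream path ensures the user never sees the location information that was added for the BRAS.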
Figure 12-3 PPPoE dialup process with the PITP V mode enabled
(The figure shows the standard PPPoE discovery phase (PADI, PADO, PADR, PADS), followed by the PITP request/response exchange, LCP negotiation, authentication, the session phase, and data transmission.)
1. When the PPPoE discovery phase ends, the BRAS sends the PITP request packet to the
access device, requesting the user physical location information.
2. After the device receives the PITP request packet, it queries the user physical location
information such as the shelf ID, slot ID, and port ID according to the user MAC address
and the VLAN information contained in the request packet.
3. If querying the information is successful, the device adds the information to the response
packet and then sends the packet to the BRAS. If querying the information fails, the device
does not send the response packet to the BRAS.
12.7.1 Introduction
Definition
DHCP option82 is similar to PPPoE+ as a user security mechanism. The information on a user's
access location is added into the DHCP request packets initiated by a user for user authentication.
Purpose
DHCP option82 enables the DHCP request packets to carry the information on a user's access
location for user authentication.
12.7.2 Specifications
DHCP option82 takes effect only when it is enabled at all the following levels:
l Global level
l Port level
l Service port level
12.7.3 Principle
Principle
Figure 12-4 shows the DHCP process when DHCP option82 is enabled.
(The figure shows the DHCP message flow: the device adds option 82 to the upstream Request packets, and removes option 82 from the downstream Offer and ACK packets before forwarding them to the user; data transmission and Release follow.)
The principle of DHCP option82 is similar to that of PPPoE+. The difference lies in that when
a user requests configuration, the MA5600T adds the information on the user's access location
to the DHCP request packets from the user for authentication at the upper layer.
The length of this field is variable. This field contains the following initial configurations for
terminals and network configurations:
l IP features
l Domain name
l Specific information for identifying a terminal
l IP address of the default gateway
l IP address of the WINS server
l A user's valid lease term for an IP address
Table 12-10 lists the meanings of each field in a DHCP option82 packet.
Field Meaning
Code: One byte. This field is in the CLV format and uniquely identifies the following information.
Agent Information Field: This field carries the information in bytes. Its length is specified by the length field.
Option82 contains multiple sub-options, which are carried in the value field of option82.
l Circuit ID (CID)
This sub-option identifies the local circuit identifier of the DHCP proxy that receives
DHCP packets from a user. This field might contain the router interface number and the ATM
PVC number. The sub-option identifier is 1.
l Remote ID (RID)
This sub-option identifies the remote host of a circuit. This field might contain the
ATM address of the remote end and the modem ID. The sub-option identifier is 2.
The MA5600T supports option82 in different formats. For details, see the section "12.5
RAIO."
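A minimal sketch of how an option 82 field with these two sub-options could be encoded, following the code-length-value layout described above. The sample CID is the common-mode example from section 12.5, and the RID is null (code and length only, no value).

```python
def sub_option(code: int, value: bytes) -> bytes:
    # Each sub-option is code (1 byte), length (1 byte), then the value.
    return bytes([code, len(value)]) + value

def option82(cid: bytes, rid: bytes) -> bytes:
    # CID is sub-option 1 and RID is sub-option 2, per the text above.
    payload = sub_option(1, cid) + sub_option(2, rid)
    return bytes([82, len(payload)]) + payload

# Common-mode example: CID filled with the RAIO string, RID null.
opt = option82(b"00E0FC112233 atm 0/12/0/49:0.35", b"")
```

The first byte is the option code 82, and the second is the total length of the sub-options, which is how the upper-layer server locates and parses the CID and RID.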
12.8 802.1X
IEEE 802.1X (hereinafter referred to as 802.1X) is a port-based network access control protocol.
12.8.1 Introduction
Definition
IEEE 802.1X (hereinafter referred to as 802.1X) is a port-based network access control protocol.
If a user connected to a port can pass the authentication, the user can access the resources in the
network. In case of a failure to pass the authentication, the user cannot access the resources in
the network. That is, the physical connection is cut off.
Purpose
The MA5600T supports the port-based access authentication mode as specified in the standard.
In addition, it extends and optimizes this authentication mode. As a result, the system security
is improved and the system management function is enhanced.
12.8.2 Specifications
The MA5600T supports the following specifications for the 802.1X feature:
12.8.3 Principle
Protocol System
802.1X defines the port-based network access control from the following aspects:
l The access device provides the authentication control function of the access port (physical
port or logical port).
l Before a port passes the authentication, the port is disabled and the users connected to the
port cannot access the network resources.
l If the port passes the authentication, the port is enabled and the users can access the network.
If the port does not pass the authentication, the port is disabled and the users cannot access
the network.
The 802.1X system defines three functional entities: supplicant system, authenticator system,
and authentication server system. Figure 12-7 shows the 802.1X system architecture.
In general, the digital user terminal provides the functions of the supplicant system entity and
needs to be installed with the 802.1X client software, through which the supplicant system
initiates authentication and quits authentication.
The authenticator system authenticates the request from the supplicant. An authenticator system
is usually an 802.1X-enabled network device, providing a service port for the supplicant. The
service port can be a physical port or a logical port, and implements the 802.1X authentication
of access users.
The authentication server is an entity that provides the authentication service for the authenticator
system. The 802.1X authentication server is usually located in the operator's AAA center.
The ports of the authenticator system can be controlled ports or uncontrolled ports.
l A controlled port is used to transmit the authenticated service packets. If a user passes the
authentication, the controlled port changes to the authenticated state, and then the port can
transmit the service packets. If the user fails to pass the authentication, the controlled port
changes to the unauthenticated state, and the port cannot transmit the service packets.
l An uncontrolled port is always in the bi-directional connection state and can transmit
authentication protocol packets, regardless of the authentication state (authenticated state
or unauthenticated state) of the controlled port.
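The controlled/uncontrolled port pair described above can be modeled as a toy sketch. The class name is invented; 0x888E is the standard EAPoL Ethertype.

```python
EAPOL_ETHERTYPE = 0x888E  # standard EAPoL Ethertype

class DotOneXPort:
    """One access port with its controlled/uncontrolled port pair."""

    def __init__(self) -> None:
        self.authenticated = False  # state of the controlled port

    def admits(self, ethertype: int) -> bool:
        # Uncontrolled port: authentication protocol packets always pass,
        # regardless of the controlled port's state.
        if ethertype == EAPOL_ETHERTYPE:
            return True
        # Controlled port: service packets pass only after authentication.
        return self.authenticated
```

Before authentication, only EAPoL frames get through; once the controlled port reaches the authenticated state, service packets pass as well.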
Feature Implementation
The MA5600T supports control over access users based on the physical port, service virtual
port, or "physical port + MAC address".
In the case of the authentication based on the physical port, 802.1X runs on one service virtual
port of the port. If the port passes authentication, all other service virtual ports of the port are
enabled.
In the case of the authentication based on the "port + MAC address", only the packets with the
MAC address that passes authentication are allowed to pass through the port.
In the case of the authentication based on the service virtual port, a service virtual port is disabled
before authentication. Once the authentication is passed, the service virtual port is enabled and
in such a case, all user terminals of the service virtual port can access the network.
In the case of the authentication by service virtual port, a service virtual port can be any of the
following:
l An xDSL ATM service virtual port which is identified by the PVC or the PVC plus the
user VLAN
l An xDSL PTM service virtual port which is identified by the user VLAN
The MA5600T supports the 802.1X authentication triggered by EAPoL or DHCP packets. You
can set whether EAPoL or DHCP packets trigger the 802.1X authentication according to the
terminal capability.
With the 802.1X protocol running, the MA5600T works as an authenticator and receives the
authentication requests from the users. In the case of a remote authentication, the MA5600T
sends the authentication information to the RADIUS server for authentication. If an access port
passes the authentication of the RADIUS server, it is enabled.
The MA5600T supports the EAP termination and EAP relay modes.
l In the EAP termination mode, the MA5600T extracts the user authentication information
from the EAP packets, encapsulates the information into the corresponding attribute of the
RADIUS protocol, and then sends the information to the RADIUS server for authentication.
l In the EAP relay mode, the MA5600T encapsulates the EAP packets into the corresponding
attribute of the RADIUS protocol, and sends the packets to the RADIUS server for
authentication. In this mode, the RADIUS server needs to process the EAP packets.
12.9.1 Introduction
Definition
MAC spoofing means that the malicious users forge the MAC addresses and attack the network
by transmitting packets. Malicious users can forge the MAC addresses of common users to
damage the services of these users. Malicious users can also transmit a large number of forged
packets that contain different MAC addresses to the system, which affects the normal operation
of the system or even causes the system to be down.
The anti MAC spoofing feature refers to the feature that the system prevents users from attacking
the system by forging MAC addresses.
Purpose
To protect the system and the network of a carrier, the following measures are taken to prevent
malicious users from forging the MAC address of the authorized users to attack the system or
network.
1. MAC address binding
l The system only allows the users (who go online through the normal PPPoE or DHCP
process) with the limited and trustful MAC addresses to access the network through
binding the dynamic MAC addresses. The users with untrusted MAC addresses are
prohibited from entering the network of the carrier.
l For common users that do not access the network of the carrier through the PPPoE or
DHCP online process, the system binds the static MAC addresses of the users but allows
the limited and trustful MAC addresses to enter the network of the carrier.
2. Anti MAC duplicate
After the anti MAC duplicate function is enabled, the system regards the first MAC address
learned by the port as a valid MAC address. Before the MAC address is aged, the system
does not allow duplication of the MAC address.
12.9.2 Specifications
l Static binding: The system supports up to 1K static MAC addresses. The number of the
static MAC addresses that can be bound to a traffic stream is not limited.
l Dynamic binding
The system can be bound with up to 8K traffic streams.
Each traffic stream can be bound with up to 8 MAC addresses.
If each traffic stream is bound with 8 MAC addresses, then the system can be bound
with up to 1024 traffic streams.
l Supports PPPoE and IPoE users. The IPoE users include those with static IP addresses and
those with dynamic addresses allocated by DHCP.
l Supports user-side anti MAC duplicate (supported by EPBD, SCUN, and GPBD).
l Supports network-side anti MAC duplicate (supported by SCUN, ETHB, and GIU).
NOTE
By default, during the initialization of the SCUN control board, MAC addresses learned from
the network side take precedence, to avoid being overwritten by MAC addresses learned from
the user side. This default setting is not affected by anti MAC duplicate.
l Supports the global setting of anti MAC duplicate.
(The figure shows a MAC spoofing attack: PC2 sends packets with the MAC address of the BRAS as the source MAC address; as a result, the direction of the packets is changed and the original access services of PC1 fail.)
1. When a user goes online, the system dynamically sets up the binding relation
between the source MAC address and the service port or between the source MAC address
and the service flow.
2. Only the service packets whose source MAC addresses are bound to the service port or
service flow are allowed to pass through the device.
3. When the user goes offline, the system unbinds the source MAC address of the user from
the service port or service flow.
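These steps can be sketched as a small binding table. The data structures and method names are assumed for illustration; the real system enforces the bindings in the forwarding hardware.

```python
class MacBindingTable:
    """Toy model of dynamic MAC binding for anti MAC spoofing."""

    def __init__(self, max_macs_per_flow: int = 8):  # up to 8 MACs per
        self.max = max_macs_per_flow                 # stream, per the specs
        self.bindings = {}                           # service port -> MACs

    def user_online(self, port: str, mac: str) -> None:
        # Step 1: bind the source MAC learned during PPPoE/DHCP login.
        macs = self.bindings.setdefault(port, set())
        if len(macs) < self.max:
            macs.add(mac)

    def permit(self, port: str, mac: str) -> bool:
        # Step 2: only packets whose source MAC is bound may pass.
        return mac in self.bindings.get(port, set())

    def user_offline(self, port: str, mac: str) -> None:
        # Step 3: unbind the MAC when the user goes offline.
        self.bindings.get(port, set()).discard(mac)
```

Packets from any MAC address not present in the table are dropped, which is what blocks a forged source MAC from reaching the network.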
When the anti MAC duplicate function is enabled on the network side, the MAC address learned
by a network-side port is not duplicated to any other network-side port or any user-side port.
12.10.1 Introduction
Definition
IP spoofing attack means that the malicious users forge the IP addresses and attack the network
by transmitting packets. Malicious users can forge the IP addresses of common users to damage
the services of these users.
The anti IP spoofing feature refers to the feature that the system prevents users from attacking
the system by forging IP addresses.
Purpose
To protect the system and the network of a carrier, the system only allows the users (who go
online through the normal DHCP process) with the trustful IP addresses to access the network
through binding the dynamic IP addresses. The users with untrusted IP addresses are prohibited
from entering the network of the carrier.
For common users that do not access the network of the carrier through the DHCP online process,
the system binds the static IP addresses of users but allows the trustful IP addresses to enter the
network of the carrier.
Benefits
Benefits to carriers: Anti IP spoofing protects the network of the carrier from being attacked by
binding the dynamic or static IP addresses.
Benefits to users: Anti IP spoofing enhances the security of user services by binding the dynamic
or static IP addresses.
12.10.2 Specifications
The specifications of the anti IP spoofing feature are as follows:
l Static binding: MA5600T supports up to 8K static IP addresses. Up to 8 IP addresses can
be bound to one traffic stream.
l Dynamic binding:
Dynamic binding can be enabled or disabled globally or for a service virtual port.
By default, dynamic binding is disabled globally and is enabled for a service virtual
port. The anti IP spoofing function takes effect only when dynamic binding is enabled
at both levels.
The system supports up to 8K dynamic IP addresses.
Dynamic binding for up to 8K traffic streams is supported.
Up to 8 IP addresses can be bound to a traffic stream.
If 8 IP addresses are bound to each traffic stream, the system supports binding of 1K
traffic streams.
12.10.3 Principle
Working Principle
l Anti IP spoofing by binding dynamic IP addresses
The system shuts down the dynamic IP address learning of the user and monitors the
DHCP online and offline processes of the user. During the process of going online, the
system dynamically obtains the source IP address of the user and sets up the binding
relation between the source IP address and the user service flow.
Only the service packets whose source IP addresses are bound to the service flow are
allowed to pass through the device.
During the process of going offline, the source IP address of the user is unbound from
the service flow.
l Anti IP spoofing by binding static IP addresses
The system sets up the binding relation between the source IP address and the user service
flow through the NMS or the CLI.
12.11.1 Introduction
Definition
The MA5600T supports the MUX VLAN and the Smart VLAN. The MUX VLAN divides user
services into different virtual local area networks (VLANs). The services of each VLAN are
isolated, thus restricting visits between users in different VLANs.
Different service ports in the same Smart VLAN are also isolated, thus restricting visits between
users in the same VLAN.
Purpose
Users are restricted from accessing each other when the user service flows or service ports are
divided into different VLANs, or when the service flows or service ports in the same VLAN are
isolated by the Smart VLAN. In this way, the security of user services is ensured.
Benefits
For the carrier: The carrier can improve its brand value by providing high-security services.
For users: Users can enjoy high-security networks.
12.11.2 Specifications
The specifications of the user isolation feature are as follows:
l MUX VLAN
l Smart VLAN
12.11.3 Principle
Working Principle
The MUX VLAN implements user isolation by dividing the user service flows or service ports
into different VLANs.
The Smart VLAN restricts mutual access between users by isolating the service flows or service
ports in the same VLAN.
12.12.1 Introduction
Definition
Downstream traffic in an EPON system is broadcast, so a malicious user can easily intercept
information about other users in the system. The triple churning feature is therefore used on the
line between an OLT and an ONU in the EPON system to prevent information from being
intercepted and to enhance data security. Triple churning extends the single churning feature by
adding time-domain correlation to the churned data, which further enhances the security of user
data.
Purpose
The triple churning feature is used to encrypt user information to prevent the user information
from being intercepted and decrypted maliciously, and to enhance the security of user data.
12.12.2 Specifications
l The system supports the churning function for each logical link identifier (LLID) and each
LLID has an independent key.
l With the churning function, the OLT puts forward the key update requirement, the ONU
provides the 3-byte churning key, and the OLT uses this key to implement the churning
function.
l After the churning function is enabled, the system performs churning on all data frames
and OAM frames.
12.12.3 Principle
Working Principle
The update and synchronization processes of keys are based on the OAM PDU mode of the
organization-specific extension.
The EPON system uses the triple churning mode, which is based on churning. The triple churning
algorithm extends the single churning algorithm by adding time-domain correlation to the
churned data, which enhances the security of user data.
Single churning is implemented as follows: logical operations are performed on X1-X8 and
P1-P16 to generate churning keys K1-K10. At the churning end, K1, K2, and P1-P12 (14 bits in
total) are used to churn data streams with 8-bit width. At the dechurning end, the same 14 bits
are used to dechurn the encrypted data with 8-bit width.
The churning starts from the destination MAC address field of the Ethernet frames, and ends at
the FCS check field. After the MPCP discovery and OAM discovery processes are complete,
the churning key exchange starts. After the key exchange is complete, all downstream frames,
MAC control frames, and OAM frames sent to the ONU must be churned.
The triple churning feature uses three subtended churners. Each churner needs to perform the
single churning operation specified above. Keys used in each churning, however, are different
as follows:
l In the level 1 churning, the original 24-bit key (X1-X8 and P1-P16) is used.
l In the level 2 churning, the key is the original 24-bit key cyclically rotated rightwards by
one byte (P9-P16, X1-X8, and P1-P8).
l In the level 3 churning, the key is the original 24-bit key cyclically rotated rightwards by
two bytes (P1-P16 and X1-X8).
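The per-level keys above can be illustrated as cyclic byte rotations of the 3-byte churning key. This is a minimal sketch; packing the bit labels X1-X8, P1-P8, P9-P16 as three bytes, most significant first, is an assumption made for the example.

```python
def rotate_right_bytes(key24, nbytes):
    """Cyclically rotate a 24-bit key right by whole bytes."""
    bits = 8 * nbytes
    if bits == 0:
        return key24 & 0xFFFFFF
    return ((key24 >> bits) | (key24 << (24 - bits))) & 0xFFFFFF

def triple_churning_keys(key24):
    """Level 1: original key; level 2: rotated by one byte; level 3: by two."""
    return [rotate_right_bytes(key24, n) for n in range(3)]

# With X1-X8 = 0x12, P1-P8 = 0x34, P9-P16 = 0x56, level 2 yields
# P9-P16, X1-X8, P1-P8 (0x561234) and level 3 yields P1-P16, X1-X8 (0x345612).
```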
12.13.1 Introduction
Definition
The downstream data of the GPON is transmitted in the broadcast mode, and thus information
may be intercepted. Therefore, the encryption technology needs to be used on the line between
an OLT and an ONU to enhance the data security and ensure the secure transmission of
information over the line.
The Advanced Encryption Standard, defined in Federal Information Processing Standard 197
(AES-FIPS 197), is an encryption standard issued by the National Institute of Standards and
Technology (NIST) of the USA. The AES algorithm can use 128-bit, 192-bit, or 256-bit
encryption keys to encrypt or decrypt 128-bit data blocks to protect electronic data.
Purpose
The GPON system uses the AES128 encryption mechanism for line security control, which
effectively prevents security problems such as data theft.
12.13.2 Specifications
The system enables or disables the encryption function based on GEM ports. The encryption
function is disabled by default.
12.13.3 Principle
Working Principle
The AES algorithm can use 128-bit, 192-bit, and 256-bit encryption keys to encrypt or decrypt
128-bit data blocks to protect electronic data.
The AES algorithm replaces the less secure DES and 3DES algorithms. The AES128 encryption
feature randomly selects a key from as many as 3.4 x 10^38 unique keys to encrypt bit streams.
Even a cracking program that could test one million keys per second (a very advanced concurrent
capability) would need on the order of 10^25 years to find a key generated by the AES-128
encryption feature.
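The order of magnitude of the keyspace claim can be checked with quick arithmetic; this is the author's own back-of-the-envelope computation, not a figure from the standard.

```python
# AES-128 has 2**128 possible keys, which is about 3.4 x 10^38.
keyspace = 2 ** 128
print(f"{keyspace:.1e}")  # 3.4e+38

# At one million key trials per second, exhausting the whole keyspace
# would take roughly 10^25 years.
seconds_per_year = 365 * 24 * 3600
years = keyspace / 1e6 / seconds_per_year
print(f"{years:.1e} years")  # about 1.1e+25 years
```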
In the AES128 encryption system, the MA5600T supports key change and switchover.
1. When key change is required, an OLT sends a key change request. After receiving the key
change request, an ONU (ONT or MDU) gives a response and generates a key.
2. The length of a PLOAM message is limited. Therefore, the generated key is sent to the
OLT in two parts, and the transmission is repeated three times.
3. If the OLT fails to receive the key in any of the three transmissions, it resends the key
change request. The OLT does not stop sending key change requests until it receives the
same key three times.
4. After receiving the new key, the OLT starts the key change.
5. The OLT notifies the ONU (ONT or MDU) of the switchover by sending a command
containing the frame number and the new key. Generally, this command is sent three times.
As long as the ONU receives the command once, it switches to the new key at the
corresponding data frame.
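The OLT-side acceptance rule in steps 2 and 3 above (the key arrives in two parts, repeated three times, and must match across all repetitions) can be sketched as follows. This is an illustrative model under that reading of the text; the function name and data shapes are invented for the example.

```python
# Illustrative sketch: the OLT accepts a new key only after receiving the
# same key, sent in two parts, three times in a row.
def accept_key(transmissions):
    """transmissions: list of (part1, part2) byte-string tuples from the ONU.

    Returns the accepted key, or None if the OLT must resend the
    key change request.
    """
    if len(transmissions) != 3:
        return None  # incomplete: resend the key change request
    keys = [part1 + part2 for part1, part2 in transmissions]
    if keys[0] == keys[1] == keys[2]:
        return keys[0]  # consistent three times: start the key switchover
    return None  # mismatch: resend the key change request
```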
Smart VLAN A smart VLAN is a VLAN that can contain multiple upstream ports and
multiple service ports. The service ports in one smart VLAN are isolated
from each other.
MUX VLAN A MUX VLAN is a VLAN that can contain multiple upstream ports but
only one service port. The traffic streams of different MUX VLANs are
isolated.
13 Network Security
This topic covers the overview, availability, and sub-features of network security.
13.1 Introduction
13.2 Availability
13.3 Anti-DoS Attack
This topic provides an introduction to the anti-DoS attack feature, and describes the working
principle of this feature.
13.4 Anti-ICMP/IP Attack
This topic provides an introduction to the anti-ICMP/IP attack feature, and describes the working
principle of this feature.
13.5 Source Route Filtering
This topic provides an introduction to the source route filtering feature, and describes the working
principle of this feature.
13.6 MAC Address Filtering
This topic provides an introduction to the MAC address filtering feature, and describes the
working principle of this feature.
13.7 Firewall Blacklist
This topic provides an introduction to the firewall blacklist feature, and describes the working
principle of this feature.
13.8 Configuration of Acceptable or Refused Address Segments
This topic provides an introduction to the feature of configuring acceptable or refused address
segments, and describes the working principle of this feature.
13.9 Service Overload Control
This topic provides the definition, purpose, and principle of service overload control.
13.10 Acronyms and Abbreviations
13.1 Introduction
Sub-feature Description
Anti-DoS attack Indicates the defensive measures taken by the system to control
and limit the number of protocol packets sent from a user.
Anti-ICMP/IP attack Indicates that the system discards malicious ICMP and IP
packets sent from a user.
Source route filtering Indicates that the system filters the IP packets with the route
option sent from a user.
MAC address filtering Indicates that the system filters the user packets according to the
source MAC address or destination MAC address.
Firewall blacklist Indicates that the system filters the service packets whose source
IP addresses are in the blacklist.
Firewall Indicates that the system filters the packets according to the
access control list (ACL).
13.2 Availability
Related NEs
The operation and maintenance security of the device is related mainly to the security
management of the device and does not involve other NEs.
License Support
The operation and maintenance security features of the device are provided without a license.
Feature Dependency
l When a port on the OPGD board (or the ETHB board) serves as a subtending port, anti-
DoS attack does not take effect on the port.
l The ICMP/IP packets are filtered by the host CPU. Therefore, if a large number of ICMP/
IP packets are sent to the CPU, the CPU usage becomes excessively high. In this case, the
anti-DoS attack function can be enabled as a countermeasure.
l After the anti-ICMP/IP attack function is enabled, the user cannot ping the L3 interface of
the device and cannot log in to the device through telnet.
l There is no impact on the system performance because the MAC address is filtered by
hardware.
l The MAC address filtering feature and the anti-MAC spoofing feature can be enabled at
the same time. When they are enabled at the same time, the MAC address filtering feature
takes priority over the anti-MAC spoofing feature.
l The firewall blacklist feature is used to check the source IP addresses of packets or match
the ACL rule. This has no impact on the system performance.
l You can use the ACL rule when enabling the firewall blacklist. When both the ACL rule
and firewall blacklist are used, the priority of the ACL rule is higher than that of the firewall
blacklist.
l When adding an address segment, ensure that the start IP address of the address segment
is different from that of existing address segments.
l If the IP address of a user exists in the refused IP address segments, the user is forbidden
to log in to the system. Therefore, you need to configure acceptable IP address segments
for login in advance.
13.3.1 Introduction
Definition
The denial of service (DoS) attack refers to an attack from a malicious user who sends a large
number of protocol packets, which results in denying service requests of normal users by the
system.
The anti-DoS attack feature refers to the defensive measures taken by the system to control and
limit the number of protocol packets sent from a user.
Purpose
The DoS attack affects the running of the system. That is, the system may fail to process the
service requests of normal users, or the system may even crash.
To protect the system, the number of protocol packets received by the system is restricted to a
specified range. If the number of protocol packets exceeds the specified range, the packets are
discarded as invalid packets and the user who sends these packets is added to the blacklist to
deny the packets from the user. The system administrator can force a user in the blacklist to go
offline.
Benefits
Benefits to carriers: With the anti-DoS feature, the user who initiates the DoS attack is added to
the blacklist. In this way, the carriers' networks are protected.
Benefits to subscribers: Subscribers can enjoy stable and safe services because the security of
services provided for subscribers is enhanced.
13.3.2 Specifications
The specifications of this feature are as follows:
l The anti-DoS attack feature can be enabled or disabled (disabled by default).
l The number of IP addresses supported in the blacklist equals the number of user ports
supported by the system.
l An alarm is generated when a DoS attack occurs and a recovery alarm is generated when
a DoS attack disappears.
l The processing policy of protocol packets can be configured in case of DoS attacks.
l When the anti-DoS attack function is enabled, the system supports configuration of the rate
threshold at which protocol packets are allowed to be sent to the CPU.
13.3.3 Principle
The working principle of the anti-DoS attack is as follows:
1. The system maintains a DoS blacklist. The system administrator can take measures to
manually force the user in the DoS blacklist to go offline (for example, deactivating the
port).
2. After the anti-DoS attack function is enabled, the system checks whether a DoS attack
occurs or stops as follows:
l The system monitors the number of protocol packets that each user port sends to the
CPU. If the number of protocol packets exceeds the average number of the packets for
normal services, the system considers that the DoS attack occurs on this user port.
l When the DoS attack occurs on a user port, the system adds the user port to the blacklist.
In this case, a policy can be configured for allowing or forbidding protocol packets to
be sent to the CPU.
l After no DoS attack from the blacklisted user port is detected within a specified period,
the system deletes the blacklisted user port from the blacklist. In this way, the protocol
packets on the user port are allowed to be sent to the CPU again.
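The detection and recovery steps above can be sketched as a per-port rate check with a blacklist. This is an illustrative model, not the device implementation; the threshold and recovery period are made-up values, since the document only says they are configurable.

```python
# Illustrative sketch of anti-DoS: ports whose protocol-packet rate exceeds
# a threshold are blacklisted; after a quiet period they are restored.
class AntiDos:
    def __init__(self, pps_threshold=100, recovery_period=60):
        self.pps_threshold = pps_threshold      # packets/s considered normal
        self.recovery_period = recovery_period  # quiet seconds before removal
        self.blacklist = {}                     # port -> time of last attack

    def on_rate_sample(self, port, pps, now):
        """Feed one rate measurement for a user port (time in seconds)."""
        if pps > self.pps_threshold:
            self.blacklist[port] = now          # attack detected: blacklist
        elif port in self.blacklist and now - self.blacklist[port] >= self.recovery_period:
            del self.blacklist[port]            # quiet long enough: restore

    def allow_to_cpu(self, port):
        """Whether protocol packets from this port may reach the CPU."""
        return port not in self.blacklist
```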
13.4.1 Introduction
Definition
The ICMP/IP attack means that a malicious user sends ICMP packets or IP packets whose
destination IP address is the system IP address. These packets affect the running of the system.
The anti-ICMP/IP attack feature means that the system discards malicious ICMP and IP packets
sent from a user.
Purpose
The destination IP address of the packets sent by normal users is not the system IP address. The
malicious users, however, may send the ICMP or IP packets whose destination IP address is the
system IP address to attack the system. The ICMP/IP attack can be regarded as one type of DoS
attack.
If a malicious user sends a large number of ICMP messages (such as ping messages) and IP
packets to an access system and keeps requesting responses at short intervals, the access
system becomes overloaded and cannot process legitimate tasks.
The anti-ICMP/IP attack feature can identify and discard the ICMP or IP packets whose
destination IP address is the system IP address, thus protecting the system.
13.4.2 Principle
If an access user sends the access device ICMP/IP packets whose destination IP address is the
system IP address, the ICMP/IP packets are discarded.
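The filtering rule is a single destination-address check, sketched below. The system IP address used is a made-up documentation value.

```python
# Illustrative sketch of anti-ICMP/IP attack filtering: any upstream packet
# whose destination IP address equals the system IP address is discarded.
SYSTEM_IP = "192.0.2.1"  # hypothetical system IP address for the example

def forward_upstream(dst_ip):
    """Return True if the packet may pass, False if it must be discarded."""
    return dst_ip != SYSTEM_IP
```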
13.5.1 Introduction
Definition
The IP packets with the source route option specify the transmission path of the packets. For
example, to configure an IP packet to pass three routers R1, R2, and R3, the interface addresses
of the three routers can be specified in the source route option. In this way, this IP packet passes
R1, R2, and finally R3, regardless of the route tables on the three routers.
During the transmission of IP packets with the source route option, the source and destination
addresses keep changing. Therefore, by properly setting the source route option, an attacker
can forge certain legal IP addresses to access the network.
The source route filtering feature is to filter the IP packets that are sent by the user and contain
the route option field.
Purpose
This feature is used to identify and discard the IP packets with the source route option, and also
to protect the carrier networks from being attacked by the forged IP packets.
13.5.2 Principle
After the source route filtering function is enabled on the MA5600T, the MA5600T discards the
IP packets with the source route option sent from the access user.
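Detecting the source route option amounts to scanning the IPv4 options field for the loose (LSRR, option type 131) and strict (SSRR, option type 137) source routing options defined in RFC 791. The sketch below is illustrative, not the device implementation.

```python
# Illustrative sketch: scan an IPv4 options field for source route options.
LSRR, SSRR = 131, 137  # loose / strict source routing option types (RFC 791)

def has_source_route_option(ip_options):
    """ip_options: raw bytes of the IPv4 options field."""
    i = 0
    while i < len(ip_options):
        opt = ip_options[i]
        if opt in (LSRR, SSRR):
            return True
        if opt == 0:        # End of Options List
            break
        if opt == 1:        # No Operation (single-byte option)
            i += 1
            continue
        i += ip_options[i + 1]  # other options carry a length byte
    return False
```

A packet for which this returns True would be discarded by the source route filter.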
13.6.1 Introduction
Definition
MAC address filtering is to filter the user packets according to the source MAC address or
destination MAC address of the user packets.
Purpose
This feature is mainly used to prevent the carriers' networks from being attacked by a malicious
user who forges a legal MAC address. It supports filtering user packets by the source MAC
address or destination MAC address.
13.6.2 Specifications
The specifications of this feature are as follows:
l Up to four source MAC addresses can be filtered.
l Up to four destination MAC addresses can be filtered.
13.6.3 Principle
The MAC address filtering function filters packets according to the source MAC address and
the destination MAC address. Its working principle is as follows:
1. The MAC address of the network-side device can be set to the source MAC address to be
filtered to prevent the user from forging the MAC address of the network-side device.
2. When the user packets are sent upstream, the system checks the source MAC address of
the packets. If the source MAC address of a packet is the same as the configured MAC
address of the network-side device, the system discards the packet.
3. The MAC address of the network-side device can be set to the destination MAC address
to be filtered to prevent the user from attacking the network-side device.
13.7.1 Introduction
Definition
A firewall blacklist is an IP address set. With the firewall blacklist, the system filters all service
packets, whose source IP addresses are listed in the firewall blacklist, to enhance the system
security and network security.
Purpose
The firewall blacklist feature is to shield IP addresses that are used by malicious users to attack
the system by setting the blacklist.
Benefits
Benefits to carriers: Carriers can set the blacklist to shield IP addresses that are used by malicious
users to attack the system.
13.7.2 Specifications
The specifications of this feature are as follows:
13.7.3 Principle
The working principle of the firewall blacklist feature is described as follows:
1. If the source IP address of a user packet exists in the firewall blacklist, the user packet is
discarded.
2. If packets match the ACL rule, but the IP address of the packets is rejected in the ACL rule,
the packets are discarded. If the IP address of the packets is allowed to pass through in the
ACL rule, regardless of whether the IP address of the packets exists in the blacklist, the
packets can pass the firewall.
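The precedence between the ACL and the blacklist described in the two steps above can be sketched as follows. The ACL is simplified to a list of per-address permit/deny pairs for illustration; a real ACL matches richer rule fields.

```python
# Illustrative sketch: a matching ACL rule overrides the firewall blacklist;
# otherwise, packets from blacklisted source IP addresses are discarded.
def firewall_permit(src_ip, blacklist, acl_rules):
    """acl_rules: list of (ip, "permit" or "deny") pairs (simplified ACL)."""
    for ip, action in acl_rules:
        if ip == src_ip:
            return action == "permit"  # ACL takes priority over the blacklist
    return src_ip not in blacklist
```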
13.8.1 Introduction
Definition
This feature is to configure acceptable or refused IP address segments for login through the
firewall of a specified protocol type.
Purpose
The system supports configuring acceptable or refused IP address segments for login through
the firewall of a specified protocol type. This prevents the users of illegal IP address segments
from logging in to the system, and maintains the system security.
Benefits
Benefits to carriers: This feature prevents the users of illegal IP address segments from logging
into the system and maintains the system security.
13.8.2 Specifications
The specifications of this feature are as follows:
l Login to the system through Telnet, SSH, or SNMP is supported. For each protocol
type, the configuration of acceptable/refused address segments is supported.
l Each type of firewall can be configured with 10 acceptable IP address segments and 10
refused IP address segments.
l The acceptable address segment can be configured for IP packets of the Telnet, SSH, or
SNMP protocol type.
l Up to 10 acceptable IP address segments can be configured, and the packets whose source
IP address is not within the range of the acceptable IP addresses cannot access the system.
l The refused IP address segment has a higher priority. That is, if an IP address is in both the
acceptable IP address segment and the refused IP address segment, the IP address is not
allowed to access the system.
13.8.3 Principle
When a user logs in to the system through Telnet, SSH, or SNMP, the system checks whether
the IP address of the user is within the acceptable IP address segments to determine whether to
allow the user to log in to the system.
1. If the IP address of the user is within the acceptable IP address segment, the user is allowed
to log in to the system.
2. If the IP address of the user is out of the acceptable IP address segment, the user is not
allowed to log in to the system.
NOTE
The refused IP address segment has a higher priority. That is, if an IP address is in both the acceptable IP address
segment and the refused IP address segment, the IP address is not allowed to access the system.
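The login check, including the priority of refused segments stated in the note above, can be sketched with standard CIDR notation. This is an illustrative model; the device's actual segment syntax may differ.

```python
import ipaddress

# Illustrative sketch: a login is allowed only if the source address is in
# an acceptable segment and in no refused segment (refused takes priority).
def login_allowed(src_ip, accept_segments, refuse_segments):
    ip = ipaddress.ip_address(src_ip)
    if any(ip in ipaddress.ip_network(seg) for seg in refuse_segments):
        return False  # refused IP address segments have a higher priority
    return any(ip in ipaddress.ip_network(seg) for seg in accept_segments)
```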
13.9.1 Introduction
Definition
Service overload control is used to prevent the overuse of system resources such as CPU
resources. It ensures that, under heavy traffic, the overuse of system resources does not cause
service interruption or a failure of the NMS to manage the device. When the system is
overloaded, the quality of key high-priority services (such as the 119 emergency call) is
guaranteed first within a specified range.
In addition, for the narrowband service, the traffic of different VAGs or different ISDN
boards/ports can be balanced under heavy traffic, so that a traffic burst on one VAG, ISDN
board, or port does not affect the services of another VAG, ISDN board, or port.
Purpose
On the MA5600T, a large quantity of burst traffic will cause the CPU usage and occupation of
service resources to increase significantly. If the traffic is not differentiated by priority, the
system cannot process services normally, which will cause service interruption. Therefore, the
packets reported to the CPU should be filtered according to specific priority rules. Then, when
the CPU usage reaches a certain threshold, the packets with a low priority are dropped and the
packets with a high priority are processed first.
With the service overload control feature, the packets reported to the CPU can be filtered to
prevent malicious attacks and momentary service overload. This improves the security and
reliability of the device.
13.9.2 Principle
A proper priority must be set for each type of traffic reported to the CPU. The traffic includes
internal management packets, network topology management packets, and service (voice service
or data service) packets. Priority planning lays the foundation for differentiated control.
Specifically, the priority of service packets (including upstream traffic from the user side and
downstream traffic from the network side) on the control plane should be the same as that on
the forwarding plane, and is planned by carriers in a unified manner. On the control plane,
system reliability can then be guaranteed by processing internal management packets
separately from other traffic and by restricting lower-priority traffic when the CPU usage
exceeds a certain threshold.
The possible traffic types in the system are as follows:
l Internal management packets, such as inter-board handshake packets, higher-layer protocol
packets, and loading packets
l Link-layer network management protocol packets such as MSTP and LACP packets
l Protocol packets such as routing protocol packets, BFD packets, and ETH OAM packets
l SNMP, ANCP, telnet, and NTP packets
l VoIP service packets, IPTV service packets, and private line service packets
The priority of service traffic is planned by carriers. By default, internal management packets
are always of the highest priority (priority 7) and other packets are of the priority of the mapping
queue. To meet the requirements of different users, setting the priority by protocol is supported.
By default, the system sets the priority by traffic.
To perform differentiated control over traffic of various priorities, the system must manage
queues of multiple priorities. Queue scheduling is implemented mainly through the weighted
round robin (WRR) algorithm and the leaky bucket algorithm.
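The WRR idea can be sketched as follows: each scheduling cycle serves up to a per-queue weight of packets from every non-empty queue, so higher-weight (higher-priority) traffic gets a larger share of the CPU budget without starving the rest. This is a generic illustration of WRR, not the device's scheduler.

```python
from collections import deque

# Illustrative sketch of weighted round robin over priority queues.
def wrr_schedule(queues, weights, budget):
    """queues: list of deques; weights: packets served per queue per cycle;
    budget: total number of packets the CPU will accept this interval."""
    served = []
    while budget > 0 and any(queues):
        progress = False
        for q, w in zip(queues, weights):
            for _ in range(min(w, len(q))):
                if budget == 0:
                    break
                served.append(q.popleft())
                budget -= 1
                progress = True
        if not progress:
            break
    return served
```

For example, with weights (2, 1) a high-priority queue is served twice as often as a low-priority one, yet the low-priority queue still makes progress each cycle.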
This topic covers the overview, relevant standards and protocols, availability, and sub-features
of device management security, and provides the glossary, acronyms and abbreviations of this
feature. This topic will cover the following contents:
14.1 Introduction
14.2 Relevant Standards and Protocols
14.3 Availability
14.4 SNMP
This topic provides an introduction to the SNMP feature, and describes the working principle
of this feature.
14.5 SSH
This topic provides an introduction to the Secure Shell (SSH) feature, and describes the working
principle of this feature.
14.6 User Management
This topic covers the overview and working principle of user management.
14.7 Remote Connection Security
This topic provides an introduction to the remote connection security, and describes the working
principle of this feature.
14.8 Log Management
This topic covers the overview and working principle of log management.
14.9 Version and Data Management
This topic provides an introduction to the version and data management feature, and describes
the working principle of this feature.
14.10 Alarm and Event Management
This topic covers the overview and working principle of alarm and event management.
14.11 Glossary, Acronyms, and Abbreviations
14.1 Introduction
Device management security includes the following features: SNMP, SSH, user management,
remote connection security, log management, version and data management, and alarm and event
management.
Feature Description
SSH SSH is a protocol based on the application layer and transport layer
and provides security for remote login sessions and other network
services. It is used for remote management connection and file
transfer.
Log management Logs include the security event logs relevant to system security
events and the operation logs of users.
Version and data management This management function includes patch management, the
rollback function, configuration data management, and version upgrade.
Alarm and event management This management function includes recording and setting
alarms and events and collecting alarm and event statistics.
1. SNMPv1
l RFC1157: Simple Network Management Protocol (SNMP)
2. SNMPv2c
l RFC1905: Protocol Operations for Version 2 of the Simple Network Management
Protocol (SNMPv2)
3. SNMPv3
l RFC2570: Introduction to Version 3 of the Internet-standard Network Management
Framework
SSH
Encryption for remote management connection:
None
Log Management
None
14.3 Availability
Related NEs
The operation and maintenance security of the device is related only to the security management
of the device. Therefore, it is related only to the MA5600T and not to any other NE.
License Support
The corresponding service is provided without a license.
Version Support
Feature Dependency
l The password must meet the requirements of the current system.
14.4 SNMP
This topic provides an introduction to the SNMP feature, and describes the working principle
of this feature.
14.4.1 Introduction
Definition
The Simple Network Management Protocol (SNMP) is a network management protocol that is
widely used in the TCP/IP network. It provides a means of managing network resources using
a central computer (network management workstation) that runs the network management
software.
Network management involves four parts:
l Managed node: device that is monitored, namely NE.
l Agent: software used to trace the status of the managed nodes (devices).
l Network management workstation: central device that communicates with the agents of
the managed nodes and displays the status of the agents.
l Network management protocol: protocol (such as SNMP) for exchanging information
between the network management workstation and the agent.
Figure 14-1 shows the typical configuration of an SNMP-managed network. The entire network
must have at least one network management workstation, which acts as the network management
center and runs the manager process. Each managed node must have an agent. The manager and
the agent communicate with each other using UDP-based SNMP messages.
(Figure 14-1: the manager process on the network management workstation and the agent on
the managed node each run SNMP over UDP, IP, and a network interface.)
Purpose
SNMP is used for network management. There are two types of network management.
l One type is management of network applications, user account, and access right
(permission). Such management is related to software and is not described in detail.
l The other type is management of NEs such as the MA5600T. Generally, the managed
devices are far away from the central telecommunications room where network
management engineers work. When such devices are faulty, it is ideal if the network
management engineers are notified of the faults automatically. However, devices such as
the MA5600T cannot notify the network management engineers of its application faults.
To resolve such an issue, equipment vendors provide network management functions for some
devices. With these functions, the network management workstation can query the status of
managed devices remotely; likewise, the managed devices can send alarms to the network
management workstation when a specific type of event occurs.
14.4.2 Specifications
The MA5600T supports the following SNMP specifications:
l The manager in the network management workstation sends an SNMP request PDU to the
agent.
l After obtaining the required information by querying the MIBs of managed devices, the
agent sends an SNMP response PDU to the manager.
l When the managed device is malfunctioning, the agent notifies the manager of the fault
through a trap, which helps network management engineers resolve the issue in time.
(Figure: the SNMP manager on the NMS exchanges GetRequest, GetNextRequest, SetRequest,
GetResponse, and trap messages with the SNMP agent, which accesses the MIB of the managed
object; both sides run SNMP over UDP, IP, and the physical network.)
A trap PDU consists of a trap header followed by the variable-bindings. The trap header
contains the PDU type (4), the enterprise, the IP address of the agent, the trap type (0-6), the
specific-code, and the time stamp. The variable-bindings field carries a list of name-value
pairs.
l Version. The value of this field is the PDU version minus one. For example, the value of
this field for the SNMPv1 PDU is 0.
l Community. It is the password in plain text used between the manager and the agent, in
the format of a character string. A common community name is public, a string of six
characters.
l PDU type. There are five types of PDU, as listed in Table 14-2.
0 Get-request
1 Get-next-request
2 Get-response
3 Set-request
4 Trap
l Get/Set header
Request ID
It is an integer set by the manager. When sending the get-response PDUs, the agent also
needs to return the request ID. The manager can send the get PDUs to multiple agents
using the UDP port. However, the response PDU for the first get PDU does not
necessarily arrive first. Considering such a situation, the request ID is set so that the
manager can correlate incoming response PDUs with corresponding request PDUs.
Error status
It is filled when the agent responds to the manager, as described in Table 14-3.
Error index
When an error such as noSuchName, badValue, or readOnly occurs, the agent sets an
integer as the error index during its response. The error index specifies the position of
the error variable in the variable list.
l Trap header
Enterprise
This field is filled with the object ID of the network device carried in the trap PDU.
Trap type
The formal name of this field is generic-trap. There are seven trap types, as described
in Table 14-4.
In the case of 2, 3, or 5, the first variable in the variable-bindings of a PDU needs to specify
the port that is used for response.
Specific-code
This field specifies the event (for example, trap type 6) defined by the agent. If the event
is not defined by the agent, this field is filled with 0.
Time stamp
This field specifies the time elapsed between the initialization of the agent and the
generation of the trap, in the unit of 10 ms. For example, if the time stamp is 1908, it
indicates that the trap is generated 19080 ms after the initialization of the agent.
l Variable-bindings
This field specifies the name and value of one or more variables. In the get or get-next PDU,
this field is filled with 0.
l SNMP does not provide a bulk access mechanism, which makes access to large amounts
of data inefficient.
l SNMP runs over only TCP/IP. It does not support other network protocols.
l SNMP does not provide the mechanism for communication between managers. It is
applicable to centralized management, but not to distributed management.
l SNMP can be used for monitoring network devices, but not for monitoring the network.
To resolve these problems, the Internet Engineering Task Force (IETF) has continuously
optimized SNMP, eventually producing SNMPv2c, which provides several enhancements over SNMPv1.
SNMPv3 has four major models: message processing and control model, local processing model,
user-based security model (USM), and view-based access control model (VACM).
Different from SNMPv1 and SNMPv2, SNMPv3 implements access control, identity
authentication, and encryption by using its local processing model and USM.
USM
The USM provides identity authentication and data encryption. To implement such functions,
the manager and the agent must share the same key.
l Identity authentication: When receiving a message, the agent (or manager) must determine
whether the message is sent from the authorized manager (or agent) and whether the
message is changed during transmission. This is called identity authentication. RFC2104
defines HMAC, which is an effective tool of generating message authentication codes using
cryptographic hash functions and keys. It is widely applied in the Internet. The HMAC
algorithms used by SNMP are HMAC-MD5-96 and HMAC-SHA-96. HMAC-MD5-96 adopts the MD5 hash
function, using the 128-bit authKey as its input. HMAC-SHA-96 adopts the SHA-1 hash
function, using the 160-bit authKey as its input.
l Data encryption: It adopts CBC-DES, using the 128-bit privKey as its input. The manager
uses the key to encrypt a message before sending it, and the agent uses the same key to
decrypt the message after receiving it. Similar to identity authentication, encryption
requires that the manager and the agent share the same key for message encryption
and decryption.
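The 96-bit truncation mentioned above can be sketched with Python's standard hmac module (an illustrative sketch, not device code): compute a full HMAC over the message with the authKey, then keep only the first 12 bytes (96 bits) as the authentication code.

```python
# Sketch of HMAC-MD5-96 and HMAC-SHA-96: a standard HMAC truncated
# to its first 12 bytes (96 bits).
import hashlib
import hmac

def hmac_md5_96(auth_key: bytes, message: bytes) -> bytes:
    # authKey for MD5 is 128 bits (16 bytes).
    return hmac.new(auth_key, message, hashlib.md5).digest()[:12]

def hmac_sha_96(auth_key: bytes, message: bytes) -> bytes:
    # authKey for SHA-1 is 160 bits (20 bytes).
    return hmac.new(auth_key, message, hashlib.sha1).digest()[:12]

tag = hmac_md5_96(b"k" * 16, b"snmp message")
assert len(tag) == 12                                  # 96 bits
assert len(hmac_sha_96(b"k" * 20, b"snmp message")) == 12
```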
VACM
The VACM implements view-based access control over user groups or community names. A
user must first configure a view with rights specified. Then, the user loads the view when
configuring a user, user group, or community name so that the read operation, write operation,
or traps (v3) can be limited.
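A minimal sketch of the view-based access check described above (illustrative only; the string-prefix matching below is a simplification of real VACM subtree matching): a view is a list of OID subtrees marked included or excluded, and the most specific matching subtree decides whether access is allowed.

```python
# Sketch: view-based access control. The most specific (longest)
# matching subtree determines whether an OID is accessible.
def view_allows(view, oid):
    best_len, included = -1, False
    for subtree, mode in view:
        if oid == subtree or oid.startswith(subtree + "."):
            if len(subtree) > best_len:
                best_len, included = len(subtree), (mode == "included")
    return included

read_view = [("1.3.6.1.2.1", "included"),        # MIB-2 subtree readable
             ("1.3.6.1.2.1.4", "excluded")]      # but not the ip group
assert view_allows(read_view, "1.3.6.1.2.1.1.1.0") is True
assert view_allows(read_view, "1.3.6.1.2.1.4.1.0") is False
```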
SNMPv3 USM
SNMPv1 and SNMPv2c do not provide a security mechanism. SNMPv3 supports the user-based
security model (USM) to protect against information tampering and masquerade.
USM mainly checks whether the SNMP message is modified during the network transmission
and whether the SNMP message is sent by the alleged user. USM also detects outdated SNMP
messages and provides a privacy mechanism for SNMP messages.
USM consists of three modules:
l Authentication module: authenticates the data origin.
l Timeliness module: prevents message delay or replay.
l Privacy module: prevents message disclosure.
SNMPv3 VACM
The access control subsystem of the SNMP engine checks whether an access to a special object
is allowed. View-based access control model (VACM) is a default access control model in
SNMPv3. Compared with SNMPv1 and SNMPv2c, SNMPv3 adopts a stricter and more dynamic
access control model, which facilitates configuration by network management engineers.
VACM consists of the following parts:
l Groups
A group is a set of zero or more mappings. It defines the access rights of all security
names that belong to the group.
l Security level
Different access rights are defined by different security levels.
l Contexts
An SNMP context is a collection of management information accessible by an SNMP
entity.
l MIB views and view families
l Access policy
Read-view
Write-view
Notify-view
14.5 SSH
This topic provides an introduction to the Secure Shell (SSH) feature, and describes the working
principle of this feature.
14.5.1 Introduction
Definition
Secure Shell (SSH) is made by the Internet Engineering Task Force (IETF). Based on the
application layer and transport layer, SSH provides security for remote login session and other
network services.
Purpose
Conventional network service programs such as FTP and Telnet transmit passwords and data in
plain text over the network. Unlike these programs, SSH encrypts the data to be transferred,
which effectively prevents information disclosure during remote management. In addition, SSH
compresses data to a smaller size before transfer, which achieves faster data transfer.
14.5.2 Specifications
The MA5600T supports the following SSH specifications:
l SSH 1.x and SSH 2.0
l RADIUS authentication for user logins through SSH
l User password authentication, user public key authentication, user password+public key
authentication, user password or public key authentication.
l AES, DES, 3DES, and BLOWFISH encryption algorithms for logins through SSH
After authentication, an SSH secure channel is set up. Any data transfer protocol can then
transfer data through the channel. The tool used by the secure maintenance terminal provides
the SSH client function.
Figure 14-5 shows the interaction process between the client and the server using SSH.
In the figure, the client and the server first establish a connection, exchange keys, and
perform user authentication, and finally cut off the connection. For SFTP, the phases are:
SSH initialization, SFTP version authentication, SFTP file open, SFTP file read/write, and
SSH connection cut-off.
14.6.1 Introduction
Definition
User management includes the following:
l A user needs to be authenticated by user name and password when the user attempts to log
in to the device through the command line interface (CLI).
l Users are classified into four levels: super user, administrator, operator, and common user.
Different levels of users are assigned different operation rights.
Purpose
User management is to ensure the security of device management and maintenance by user name
+password authentication and hierarchical right-based management.
14.6.2 Specifications
The MA5600T supports the following user management specifications:
l Supports separation and unification of the security administrator and the system
administrator.
l Binds a user profile to a user. The user profile defines the valid period for the user name
and password, and the time range when the user is permitted to log in.
l Divides operation users into four levels by rights: super user, administrator, operator, and
common user.
l Supports character strings of 6-15 characters for the user name and password.
l Requires the password to contain at least one letter and one digit.
l Automatically locks out the user name if it remains idle for a long period of time.
The idle time is configurable.
l Locks out the user name and password if the valid period expires.
The valid period of the user name and password is configurable.
l Locks out the user name if the number of failed logins with the user name exceeds N.
N is configurable and is six by default.
l Locks out the user IP address if the number of failed logins from this IP address exceeds
N.
N is configurable and is six by default.
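Two of the rules above can be sketched in Python (illustrative only; the class name and the exact lockout boundary are assumptions — the text says the failure count must exceed N, with N defaulting to six):

```python
# Sketch of two rules from the specification list: the 6-15 character
# password that must contain a letter and a digit, and the lockout
# once failed logins exceed N (default 6).
def password_ok(pw: str) -> bool:
    return (6 <= len(pw) <= 15
            and any(c.isalpha() for c in pw)
            and any(c.isdigit() for c in pw))

class LoginGuard:
    def __init__(self, max_failures: int = 6):
        self.max_failures = max_failures
        self.failures = {}           # user name -> failed attempts
        self.locked = set()

    def record_failure(self, user: str):
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] > self.max_failures:   # "exceeds N"
            self.locked.add(user)

assert password_ok("abc123") and not password_ok("abcdefg")

guard = LoginGuard()
for _ in range(6):
    guard.record_failure("operator1")
assert "operator1" not in guard.locked   # exactly N failures: not yet locked
guard.record_failure("operator1")
assert "operator1" in guard.locked       # N+1 failures: locked out
```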
14.6.3 Principle
When a user logs in to the system through the CLI, the user must enter the user name and
password for authentication. User authentication ensures system security.
Users are classified into four levels: super user, administrator, operator, and common user.
Different levels of users are assigned different operation rights.
The super user and the administrator have the right to add a user at a lower level.
l The super user can add an administrator, operator, or common user.
l The administrator can add an operator or common user.
The system also supports management of user profiles. A user profile supports setting of the
following parameters:
l Minimum length of a user name (6-15 characters)
l Minimum length of a password (6-15 characters)
l Valid period of a user name (0-999 days)
l Valid period of a password (0-999 days)
l Start time of user login in the format of hh:mm (for example, 08:30)
l End time of user login in the format of hh:mm (for example, 18:30)
If the valid period of the user name or password is set to 0, it indicates that there is no restriction
on the valid period of the user name or password. This rule also applies to the start time and end
time of user login. If specific values are set, the user login time is restricted by the set values.
The system reminds the user with a message three days before the user name and password
expire.
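The login time window rule above can be sketched as follows (an illustrative sketch, not device code; `None` here models the "0"/unset value meaning no restriction):

```python
# Sketch: a user may log in only within the profile's start/end window;
# an unset window (the "0" setting) means no time restriction.
from datetime import time
from typing import Optional

def login_allowed(now: time, start: Optional[time],
                  end: Optional[time]) -> bool:
    if start is None or end is None:
        return True                  # "0" setting: no restriction
    return start <= now <= end

assert login_allowed(time(9, 0), time(8, 30), time(18, 30))
assert not login_allowed(time(7, 0), time(8, 30), time(18, 30))
assert login_allowed(time(7, 0), None, None)
```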
The preceding settings enhance the security of system management. When a user is created, if
the user is bound to a user profile and the start time of user login in the user profile is set to
08:30, it indicates that the user cannot log in to the system before 08:30. After a user profile is
set, the user profile can be directly bound to a user when the user is added. In addition, the
user profile bound to an existing user can be modified. The system supports a maximum of 12
user profiles.
The system provides four default user profiles named root, admin, operator, and
commonuser, which helps manage and create users in a unified way.
Different names of user profiles indicate differences in the security settings for the user profiles
rather than differences in user levels. The user level is specified when a user is added. In a
root profile, restrictions on users are disabled so that the user bound to the profile can log in to
the system after upgrade. It is not recommended that this profile be bound when a user is added.
14.7.1 Introduction
Definition
With the remote connection security feature, an IP firewall is used or the service port of the
system is disabled to protect the device against attacks from unauthorized users or unauthorized
operations.
Purpose
Using an IP firewall or disabling the service port protects the device against attacks from
malicious users. This feature ensures system security.
14.7.2 Specifications
The MA5600T supports the following remote connection security specifications:
14.7.3 Principle
With the IP firewall function, only the operators from trusted IP address segments are allowed
to log in to the device through valid access protocols, and the operators from blocked IP address
segments or through invalid access protocols are not allowed to log in to the device.
With the function of disabling the system service, the default service monitoring port of the
system can be disabled to protect the port from malicious scanning or attacks.
14.8.1 Introduction
Definition
Logs can be classified into security event logs and operation logs.
l A security event log is generated by the system after a security event occurs. Currently,
three types of security events are supported: online/offline event of maintenance users, user
lockout event, and auto-backup success event.
l An operation log is generated by the system recording user operations. It records user login
and logout information and other operations performed on the system.
Generally, logs are queried through the CLI, syslog, or a backup log file during troubleshooting.
Operation logs and security event logs are reported to the NMS.
Purpose
Logs help users obtain the overall system maintenance information for effective troubleshooting.
14.8.2 Principle
Operation Log
The system records the commands about configurations successfully issued from the CLI or
SNMP interface. Such recordings are operation logs. Operation logs record both successful and
failed operations. The logs of failed operations also record the operation results.
By default, the system stores a maximum of 512 operation logs, which are saved in
chronological order. When the number of operation logs reaches the maximum, new logs
overwrite the oldest ones. The logs are not lost after the system restarts.
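The 512-entry operation log behaves like a bounded ring buffer: once full, each new log overwrites the oldest one. A minimal sketch (not device code) using Python's standard deque:

```python
# Sketch: a 512-entry log buffer that discards the oldest entry
# when a new one arrives after the buffer is full.
from collections import deque

operation_log = deque(maxlen=512)
for i in range(600):
    operation_log.append(f"op-{i}")

assert len(operation_log) == 512
assert operation_log[0] == "op-88"    # oldest surviving entry
assert operation_log[-1] == "op-599"  # newest entry
```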
Log Server
Logs can be reported to the log server using syslog in real time. Also, logs can be transmitted
to the file server through TFTP/FTP/SFTP at a specified time or when the specified capacity is
reached after the automatic uploading conditions are configured. Integrity of logs must be
ensured.
14.9.1 Introduction
Definition
Version and data management includes patch management, rollback function, configuration data
management, and version upgrade.
Purpose
This feature facilitates carriers in version upgrade and maintenance.
Benefits
Benefits to carriers: This feature greatly reduces operating expenditure (OPEX) for carriers and
improves customer satisfaction.
14.9.2 Specifications
The MA5600T supports the following specifications for version and data management.
SN Specification
1 100 patches
2 Hot patches, SPH (Service Package of Hot Patches), cold patches, and SPC (Service
Package of Cold Patches)
4 Rolling back the program, database, and extended BIOS automatically in case of
upgrade failure
7 Setting the time for canceling the rollback function automatically (5 minutes to 30
days)
8 Saving the configuration data automatically after any changes to the configuration
data
14.9.3 Principle
Patch Management
The flash memory (storage medium in the system) has a patch area to store loaded patches.
A patch can be a hot patch or a cold patch. The system must be restarted for a cold patch to
take effect or to stop functioning, whereas a hot patch requires no restart for either purpose.
A hot patch supports the rollback function; therefore, a hot patch can be rolled back so that
the system is restored to the source version.
In addition, a patch can be activated, deactivated, run, or deleted. A loaded patch is
deactivated by default; therefore, to make a loaded patch take effect, activate it. To make
the patch remain effective after a system restart, activate and run the patch before the
system is restarted.
l HP: refers to the host hot patch. It takes effect after being loaded and activated. For a user,
this type of patch is displayed as HPXXX after being loaded.
l SPH: a set of HP patches. It takes effect after being loaded and activated. For a user, this
type of patch is displayed as SPHXXX after being loaded, but the status of HP patches is
not displayed.
l CP: refers to the host cold patch. It takes effect after it is loaded and the system is restarted.
For a user, this type of patch is displayed as CPXXX after being loaded.
l SPC: a set of CP patches. It takes effect after being loaded and activated. For a user, this
type of patches is displayed as SPCXXX after being loaded, but the status of CP patches
is not displayed.
Rollback Function
The flash memory of the control board is divided into two identical storage areas (active storage
area and standby storage area) to store programs, database, and extended BIOS. The storage area
that is currently operating is the active storage area. During an upgrade of programs, database,
and extended BIOS, the new programs, database, and extended BIOS are loaded to the standby
storage area. After the system is restarted, the system automatically loads the new programs,
database, and extended BIOS. The rollback function is implemented based on two sets of
programs, database, and extended BIOS, each stored in the active and standby storage areas.
By default, after an upgrade, the system retains the pre-upgrade host programs and database
for 48 hours, after which it automatically cancels the rollback function. That is, after 48
hours the system duplicates the programs, database, and extended BIOS from the operating area
to the standby storage area so that the versions in both the active and standby storage areas
are the same. You can set the time for canceling the rollback function to any value from 5
minutes to 30 days.
The system supports automatic rollback and manual rollback. After version upgrade, if the
system fails to start up, the system is automatically rolled back to the version before the upgrade
(the source version). After version upgrade, if the system becomes abnormal following
successful startup and cannot recover, you can run the rollback command to manually roll back
the system to the source version.
l Saving configuration data manually: The current configuration data can be saved manually
by using commands. If the configuration data is not saved before the system is reset or
restarted, it will be lost after the system resets or restarts. Therefore, the configuration data
needs to be manually saved once before the system is reset or restarted.
l Saving the configuration data after any changes are made to the configuration data: After
the configuration data is changed, the system saves the changed configuration data
automatically at a preset interval. This interval is user-defined and ranges from 10 minutes
to 10080 minutes (default: 30 minutes).
l Saving the configuration data at a preset time or interval: In the system, the configuration
data can be saved automatically at a preset time or interval. This time or interval is user-
defined. For example, the time can be set to 23:00 or the interval set to two hours. Then,
the configuration data is automatically saved at 23:00 or every two hours.
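The interval-based autosave rule above can be sketched as a simple range check (illustrative only; the function name is an assumption, not a device command):

```python
# Sketch: the autosave interval is user-defined, 10-10080 minutes,
# with a default of 30 minutes.
def autosave_interval(minutes: int = 30) -> int:
    if not 10 <= minutes <= 10080:
        raise ValueError("interval must be 10-10080 minutes")
    return minutes

assert autosave_interval() == 30       # default
assert autosave_interval(120) == 120   # user-defined value
```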
Data erasure can be performed to restore the configuration data of the device to default settings.
The system also supports manual backup for the current configuration data or automatic backup
at a preset time to a specified file server.
Version Upgrade
Software version in the system can be upgraded through the CLI or the NMS by using FTP/
TFTP/XMODEM/SFTP.
14.10.1 Introduction
Definition
Alarm and event management mainly involves recording and setting alarms and events and
collecting their statistics.
Purpose
Alarm and event management facilitates carriers in performing routine maintenance on the
device, locating device faults, and restoring the services for users quickly in case of service
abnormalities.
14.10.2 Specifications
The MA5600T supports the following specifications for alarm and event management.
SN Specification Description
1 Supporting alarms and events of four severity levels: critical, major, minor, and
warning
3 Backing up the historical alarms and historical events automatically to a file server
14.10.3 Principle
Alarm and event management refers to recording and setting alarms and events and collecting
their statistics. Maintenance engineers maintain the device by using the alarm and event
management function to achieve higher efficiency of device management.
After an alarm or event is generated, the system broadcasts the alarm or event to the terminals,
mainly including the network management system (NMS) and command line interface (CLI)
terminals. Currently, the system can store 1000 historical alarms and 800 historical events.
The severity of an alarm or event can be critical, major, minor, or warning. Although an alarm
or event has a default severity level, this severity level can be adjusted according to actual
conditions. The contents of an alarm or event include name, parameters (including subrack, slot,
and port information), description, possible causes, and suggested solutions.
When an alarm/event is generated, the system implements the jitter-proof function for the alarm/
event to prevent alarm/event misreporting. To be specific, an alarm/event is reported only after
a specified period expires after the alarm/event status changes. The specified period ranges
from 1s to 60s and defaults to 10s. If the alarm/event status recovers within the specified
period, the alarm/event is not reported.
The alarm and event statistics collection function collects the statistics of alarms and events
within a specified period. The collected statistics help locate system faults.
With the alarm and event filtering function, the user can configure filtering conditions so that
the system reports only the alarms and events that meet the filter conditions. In this way, the
user can concentrate on the important and specified alarms and events. Alarms and events can
be filtered according to their ID, severity, and type.
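The filtering rule above can be sketched as a predicate over the three filter conditions (illustrative only; field names are assumptions): an alarm is reported only if its ID, severity, and type all pass.

```python
# Sketch: report only alarms/events whose ID, severity, and type
# all satisfy the configured filter conditions (None = no filter).
def passes_filter(alarm, allowed_ids=None, allowed_severities=None,
                  allowed_types=None):
    return ((allowed_ids is None or alarm["id"] in allowed_ids)
            and (allowed_severities is None
                 or alarm["severity"] in allowed_severities)
            and (allowed_types is None or alarm["type"] in allowed_types))

a = {"id": 1001, "severity": "major", "type": "environment"}
assert passes_filter(a, allowed_severities={"critical", "major"})
assert not passes_filter(a, allowed_severities={"critical"})
```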
The operation & maintenance (O&M) feature is intended for the operation, administration and
maintenance (OAM) of the device. It plays an important role in guaranteeing the normal running
of the device on a daily basis, managing the device in the network topology, locating faults, and
upgrading and maintaining the device. This topic describes the sub features of the O&M feature
in detail.
15.1 Introduction
15.2 Reference Standards and Protocols
15.3 Remote Operation
15.4 Ring Check
The ring check feature is mainly used to detect and eliminate the user-side ring network.
15.5 ANCP
The Access Node Control Protocol (ANCP) is used by the broadband network gateway (BNG)
to manage line parameters (including QoS and user parameters) of the access node (AN).
15.6 Environment Monitoring
In general, environment monitoring involves environment parameters monitoring and power
monitoring. Environment parameters monitoring refers to monitoring of the environment
parameters that might cause failure or damage to the system. Power monitoring refers to
monitoring of the power supply system.
15.7 Power Saving and Remote Maintenance
This topic describes the power saving feature of the system from two aspects: stepless speed
adjustment of the fans and power turnoff of the board. It also describes the remote maintenance
feature of the system from two aspects: power turnoff of the board and recording the model and
operational information for the fans and power module.
15.1 Introduction
The operation & maintenance (O&M) feature is intended for the operation, administration and
maintenance (OAM) of the device. It plays an important role in guaranteeing the normal running
of the device on a daily basis, managing the device on the network topology, locating faults, and
upgrading and maintaining the device.
The following uses the MA5600T as an example to describe the definition, purpose,
specifications, principle, and reference standards and protocols for the O&M feature.
The MA5600T O&M feature includes a number of sub features such as user management and
remote operation, program and configuration data management, device anomaly management,
access node control protocol (ANCP), and MA5600T registration and authentication.
15.3.1 Introduction
Definition
Remote operation refers to performing routine maintenance on the device remotely, without any
on-site visits.
Purpose
This sub feature facilitates carriers in maintaining remote devices.
Benefits
Benefits to carriers: The carriers' operating expenditure (OPEX) is considerably reduced, and
the customer satisfaction is improved.
Benefits to users: Interrupted services can return to normal within a shorter period after the
carriers quickly locate and troubleshoot the issue.
15.3.2 Specifications
The MA5600T supports the following specifications of the remote operation and user
management feature, as listed in Table 15-1
Table 15-1 Specifications of the remote operation and user management feature
SN Specification Description
5 Four default user profiles named root, admin, operator, and commonuser
NOTE
RADIUS or TACACS authentication is selected by default for an administrator during login to the
device. Local authentication is adopted only when the RADIUS or TACACS server is unreachable.
When the RADIUS or TACACS server is reachable, no administrator except user root can use local
authentication to log in to the device.
15.3.3 Principle
The MA5600T supports inband telnet and outband telnet.
l Outband telnet
The interface used by outband telnet is the single Ethernet port (RJ-45) on the front panel
of the control board. After an IP address and the related route are configured on this port,
the device can be logged in to through this port in telnet mode for related operations and
maintenance.
l Inband telnet
The interface used by inband telnet is the VLAN L3 interface inside the device. The
system supports up to 32 IP addresses for the VLAN L3 interface, and the subnets of
these IP addresses must be different.
In the case of remote telnet, it is recommended to configure the trusted/blocked IP
address segments to prevent the login of unauthorized users.
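The rule that the inband L3 interface addresses must all sit in different subnets can be sketched with Python's standard ipaddress module (an illustrative check, not device code):

```python
# Sketch: verify that every configured address/prefix pair belongs
# to a distinct subnet, as required for the VLAN L3 interface.
import ipaddress

def subnets_distinct(addresses):
    nets = [ipaddress.ip_interface(a).network for a in addresses]
    return len(nets) == len(set(nets))

assert subnets_distinct(["10.1.1.1/24", "10.1.2.1/24"])
assert not subnets_distinct(["10.1.1.1/24", "10.1.1.2/24"])  # same /24
```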
15.4.1 Introduction
Definition
Ring check is a function of detecting any ring network formed on the user side. The ring check
feature enables the device to transmit the ring check packets to the user port periodically, and
to monitor the ring check packets received on the user side and the network side. In this way,
the device detects whether a loop occurs on the network of the carrier. If a loop occurs, the
MA5600T deactivates the user ports on the loop and reports the corresponding alarm to the
NMS. This ensures that the device runs in the normal state and that the services of other users
are not affected.
Purpose
Ring check is used to quickly locate the user-side ring network, and eliminate the ring network
according to requirements.
Benefit
Benefits to carriers
The Ring check feature enables the system to detect the carrier's network and report an alarm to
the NMS when a loop occurs. The alarm enables the carrier to know the network fault in the
shortest period of time so that the fault can be quickly rectified to resume the normal running of
the network.
Benefits to users
The ring check feature enables the device to deactivate the user port on a loop, ensuring that
authorized users receive good network service and are not affected by the loop.
15.4.2 Specifications
The MA5600T supports the following ring check specifications:
15.4.3 Availability
License Support
The ring check feature is a basic feature of the MA5600T. Therefore, the corresponding service
is provided without a license.
Version Support
Product Version
15.4.4 Principle
The ring check packet format is: DMAC | SMAC | 802.1Q Head (optional) | Type | Payload.
l DMAC indicates the broadcast MAC address (all bytes 0xFF) and SMAC indicates the
bridge MAC address.
l 802.1Q Head is optional according to flow attributes on the user side.
l Type indicates the proprietary Ethernet type, which can be configured.
l Payload of the packet content is proprietary and it does not need to be configured.
Principle
After the ring check function is enabled, the device periodically transmits private ring check
packets to the user port and captures the user-side ring check packets on the network and user
sides simultaneously.
l As for the ring check packets captured on the network side, the system first checks whether
they are transmitted from the local device.
If yes, the system finds out the source port transiting the ring check packets and reports
an alarm to the NMS, but does not deactivate this source port. This is because a user
can forge the ring check packets and the system cannot determine whether the ring check
packets are forged by a user or are transmitted from the device. The check performed
by the system prevents misjudgment of the check point.
If not, the system discards the packets.
l As for the ring check packets captured on the user side, the system reports an alarm to the
NMS and deactivates the port receiving the packets, thus eliminating the loop in the
network.
l Assume that the system supports 8K traffic streams. The ring check function checks 300
traffic streams every second. Therefore, if a ring network occurs, it can be detected within
8000/300 ≈ 26.67s.
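The detection-time figure above is simple arithmetic and can be checked directly:

```python
# 8K (8000) traffic streams scanned at 300 streams per second.
streams, rate = 8000, 300
scan_seconds = streams / rate
assert round(scan_seconds, 2) == 26.67
```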
Figure 15-2 shows the user-side ring network scenarios in FTTH/DSLAM applications.
l In the case of (1), (2), (3), and (4), when receiving ring check packets from the user side,
the system directly deactivates the port that receives the packets, eliminating the loop in
the network.
l In the case of (4), this kind of network topology needs to be avoided because the system
cannot determine whether the ring check packets captured on the network side are forged
by a user or are transmitted from the device. Such a network topology leads to misjudgment
of the check point.
Figure 15-3 shows the user-side ring network scenarios in FTTB/FTTC applications.
l It is recommended that the OLT and the MDU use the same Ethernet Type for ring check
packets.
If the Ethernet Types of the ring check packets of the OLT and the MDU are the same,
the ring check packets are captured and evaluated on both the OLT and the MDU. In
the case of (6), the ring check packets transmitted by the MDU are terminated by the
OLT. Therefore, a ring network cannot be detected in this kind of network topology.
If the Ethernet Types of the ring check packets of the OLT and the MDU are different,
the OLT and the MDU each capture only their own ring check packets. In the case of
(4), the MDU and the ONT connected to the OLT are not interconnected. Therefore, a
ring network cannot be detected in this kind of network topology.
l In the case of (1), (2), (3), and (4), when receiving ring check packets from the user side,
the system directly deactivates the port that receives the packets, eliminating the loop in
the network. In the case of (5), the system can detect the ring network but does not deactivate
the port. Instead, the system reports an alarm.
l In the case of (5) and (6), these two kinds of network topology need to be avoided because
the system cannot determine whether the ring check packets captured on the network side
are forged by a user or are transmitted from the device. Such network topologies lead to
misjudgment of the check point.
15.5 ANCP
The Access Node Control Protocol (ANCP) is used by the broadband network gateway (BNG)
to manage line parameters (including QoS and user parameters) of the access node (AN).
15.5.1 Introduction
Definition
The Access Node Control Protocol (ANCP) is used by the broadband network gateway (BNG)
to manage the line parameters (including QoS and user parameters) of the access node (AN).
NOTE
l The user powers on, disables, or connects the RG to change the line status.
l The BNG and the AN exchange ANCP messages.
l The network administrator manages the AN through the NMS by using SNMP.
In the figure, the RG connects the user to the access node (AN); the BNG manages the AN
through ANCP, and the NMS manages the AN through SNMP.
Purpose
When ANCP is not used, if the BNG needs to manage the line parameters of an AN, the NMS
is required. When the AN and the BNG use different NMSs, the line parameters are difficult to
manage. Through ANCP, however, the BNG can directly manage such parameters without the
NMS.
15.5.2 Specifications
The MA5600T supports the following ANCP specifications:
l The multicast CAC and unicast CAC can be configured on the system.
15.5.4 Principle
The MA5600T supports ANCP for implementing the following functions:
l Multicast CAC
l Unicast CAC
Before the above-listed ANCP functions are implemented, an ANCP session needs to be set up
between the BNG and the access node (AN).
The main ANCP functions are access-line discovery, line configuration, L2C OAM, and
multicast.
l User unicast VoD bandwidth CAC is implemented on the RACS (the policy server in
Figure 15-6).
Video bandwidth waterline mechanism:
The AN introduces a video bandwidth waterline mechanism to dynamically adjust the video
bandwidth between the AN and the RACS.
In this mechanism, the total video bandwidth of a user is likened to a container. Through the
video bandwidth waterline mechanism, the waterline in the container can be shifted up or
down to dynamically adjust the bandwidth resources for multicast programs and the
bandwidth resources for unicast VoD programs. When the bandwidth resources for multicast
programs are insufficient, the bandwidth resources for unicast VoD programs can be requested
for multicast programs. The same is true for unicast VoD programs. This mechanism enhances
user experience with program ordering.
Video bandwidth waterline data update:
When the AN or the RACS applies to the other for bandwidth, the applicant starts a timer (set
to 500 ms). If the applicant does not receive a response before the timer expires, the AN deletes
the IGMP join message of the corresponding user from the buffer and returns a program order
failure to the user. At the same time, the AN or the RACS actively queries each other about the
video bandwidth information and updates their own video bandwidth waterline data according
to the waterline data of the peer. If the query about the video bandwidth information fails, the
AN or the RACS does not update the video bandwidth waterline data.
ANCP feature and BTV feature:
The relationship between the ANCP multicast CAC feature and the BTV multicast CAC feature
is as follows:
l When BTV multicast CAC is not enabled, the AN cannot implement multicast CAC
regardless of whether ANCP multicast CAC is enabled or not.
l The video bandwidth between the AN and the RACS can be dynamically adjusted only
when BTV multicast CAC and ANCP multicast CAC are enabled.
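The two rules above can be condensed into a single admission function. This is an illustrative sketch only; the names and return strings are assumptions, and "no-cac" means the AN simply performs no admission check.

```python
def multicast_cac(btv_cac_on: bool, ancp_cac_on: bool,
                  program_kbps: int, avail_multicast_kbps: int) -> str:
    """Combine the BTV CAC and ANCP CAC switches for one IGMP join."""
    if not btv_cac_on:
        # Without BTV CAC the AN cannot implement multicast CAC at all,
        # regardless of the ANCP CAC switch.
        return "no-cac"
    if program_kbps <= avail_multicast_kbps:
        return "admit"
    if not ancp_cac_on:
        return "fail"  # no dynamic adjustment possible, order fails
    # Both switches on: borrow unicast VoD bandwidth via ANCP.
    return "apply-unicast-bandwidth"
```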
Multicast CAC
Figure 15-6 shows the application scenario of multicast CAC.
In the figure, the multicast stream travels from the head-end video server through the NPE,
ME, BNG, and AN to the RG; ANCP runs between the AN and the BNG, the stream is carried
over a TE tunnel, and the BNG connects to the RACS and the DHCP server over the COPS and
SOAP interfaces. The decision logic annotated in the figure is as follows:
1. The user (RG) sends an IGMP join message to the AN.
2. If BTV CAC is disabled, a failure is returned.
3. If BTV CAC is enabled and the multicast program bandwidth is smaller than the available
multicast bandwidth, the program order is admitted.
4. If BTV CAC is enabled but ANCP CAC is disabled, and the multicast program bandwidth
exceeds the available multicast bandwidth, a failure is returned.
5. If both BTV CAC and ANCP CAC are enabled, and the multicast program bandwidth
exceeds the available multicast bandwidth, the AN applies for unicast VoD bandwidth.
Unicast CAC
Figure 15-7 shows the application scenario of unicast CAC.
In the figure, the VoD stream travels from the head-end video server through the NPE, ME,
BNG, and AN to the TV; ANCP runs between the AN and the BNG, the VoD stream is carried
over a TE tunnel, and the NMS, the DHCP server, and the policy server (RACS) connect to the
network over the SOAP and COPS interfaces. The numbered exchanges in the figure (VoD
request, resource request, and resource ACK) correspond to the message exchange process
described below.
After detecting that a user goes online, the RACS actively configures the user video bandwidth
information for the AN and updates the user video bandwidth waterline on the AN according to
the user video bandwidth information on the RACS.
The user video bandwidth waterline on the RACS equals the user unicast VoD bandwidth. If
the total video bandwidth equals the unicast VoD bandwidth, it indicates that the RACS manages
all the video bandwidth for the user. In this case, the multicast bandwidth of the user is 0.
The message exchange process of unicast CAC is as follows:
1. When the user orders a unicast program through the portal interface, the request is
transmitted to the VoD server through the data channel.
2. The VoD server requests unicast CAC from the RACS. The RACS compares the unicast
VoD bandwidth requested by the user with the available unicast VoD bandwidth of this
user.
l If the unicast VoD bandwidth requested by the user is smaller than the available unicast
VoD bandwidth, the user is allowed to order this unicast VoD program and the available
unicast VoD bandwidth is updated on the RACS. After the update, the available unicast
VoD bandwidth of the user = Pre-update unicast VoD bandwidth of the user - Unicast
VoD bandwidth requested by the user.
l If the unicast VoD bandwidth requested by the user exceeds the available unicast VoD
bandwidth, the RACS needs to apply to the AN for multicast bandwidth resources
through the extended ANCP message.
3. The ANCP module of the AN will receive the bandwidth application message. If the
multicast CAC of the ANCP module is disabled, the AN responds with a request failure
message. If multicast CAC is enabled on the ANCP module, the ANCP module checks
whether the available multicast bandwidth of the user is sufficient for the bandwidth
requested by the unicast user.
l If the remaining available multicast bandwidth is sufficient, the ANCP module grants
the multicast bandwidth to the unicast user and then sends an ANCP message to the
RACS to notify it that the request is successful, and updates the AN video bandwidth
waterline at the same time. The bandwidth requested by the RACS this time needs to
be deducted from the video bandwidth waterline and also needs to be deducted from
the remaining available multicast bandwidth of the user.
l If the remaining available multicast bandwidth is insufficient, applying for the multicast
bandwidth fails. In this case, the ANCP module sends an ANCP message to the RACS
to notify it that the request fails.
4. The RACS processes the bandwidth application results accordingly. If the application is
successful, the RACS updates the unicast bandwidth on the RACS (the original unicast
bandwidth + the successfully requested bandwidth), and returns the unicast CAC result to
the VoD server.
5. The VoD server processes the unicast CAC result. If the VoD server processes the unicast
CAC result successfully, the unicast program order continues. Otherwise, the unicast
program order is stopped.
NOTE
If the user stops ordering VoD programs, the VoD server sends the RACS a message indicating that ordering
VoD programs has stopped. In this case, the RACS updates the unicast bandwidth on the RACS (the unicast
bandwidth + the bandwidth of this VoD program).
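Steps 1 through 5 can be sketched from the RACS's point of view as follows. This is an illustration under assumed names; `an_grant` stands in for the extended ANCP application to the AN.

```python
def racs_unicast_cac(requested_kbps, avail_unicast_kbps, an_grant):
    """Admit one VoD request; return (admitted, new available bandwidth).

    an_grant(kbps) models the extended ANCP message asking the AN for
    multicast bandwidth and returns True when the AN grants it."""
    if requested_kbps <= avail_unicast_kbps:
        # Step 2: enough unicast VoD bandwidth; admit and deduct.
        return True, avail_unicast_kbps - requested_kbps
    # Steps 2-3: apply to the AN for the shortfall.
    shortfall = requested_kbps - avail_unicast_kbps
    if an_grant(shortfall):
        # Step 4: the granted bandwidth is added to the unicast share,
        # then the whole request is deducted.
        return True, (avail_unicast_kbps + shortfall) - requested_kbps
    return False, avail_unicast_kbps
```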
15.6.1 Introduction
Definition
In general, environment monitoring involves environment parameters monitoring and power
monitoring.
To implement environment monitoring of an external device, connect a serial port cable between
the monitoring serial port on the MA5600T and the communication serial port on the monitored
device. On the MA5600T, you can then directly monitor the environmental conditions of that
device based on a proprietary protocol.
l You can directly monitor the status of the power supply parameters, fans, external batteries,
and some built-in environment monitoring parameters.
l If external sensors are connected, you can query the functions provided by the sensors such
as the ambient temperature and humidity, buzzer, and cabinet lamp.
l You can also modify some configuration parameters such as the alarm thresholds of the
environment parameters and the control parameters of the power and batteries. In this way,
the monitored devices can work according to your requirements.
Purpose
The purpose of the environment monitoring is to monitor the running of the MA5600T at all
times. This helps to detect any fault as soon as possible, thereby meeting the requirement for a
stable telecommunication network.
15.6.2 Specifications
The MA5600T supports the following environment monitoring specifications:
l Monitoring of fans
l CITB:
Provides external alarms input and output
Provides test bus input and output
Provides 2 BITS inputs and 1 BITS output
15.6.3 Availability
License Support
The environment monitoring feature is a basic feature of the MA5600T. Therefore, no license
is required for accessing the corresponding service.
Version Support
Product Version
Hardware Support
l Fan tray.
l CITB board.
15.6.4 Principle
This topic describes the implementation principle of the environment monitoring feature.
Upper device
The upper device herein refers to the control board of the MA5600T, and the lower device herein
refers to a monitoring board or shelf that has the monitoring function, namely, the environment
monitoring unit (EMU).
The upper device interacts with the lower devices as follows:
l The upper device manages and maintains the status of the lower devices.
l A lower device detects through its own hardware interface the external data, processes the
data, and reports it to the upper device.
l The upper device translates the user commands, and then forwards the translated commands
to the lower devices. Then, the lower devices take the corresponding actions.
EMU
To implement the environment monitoring function, make sure that the environment devices are
available. The environment monitoring devices include:
l CITB
Provides external alarms input and output
Provides test bus input and output
Provides 2 BITS inputs and 1 BITS output
l FAN
It is the fan tray that has the monitoring function. That is, a monitoring board is built in the
fan tray. The FAN EMU can monitor only simple built-in analog and digital parameters.
It neither provides the interface for extended sensors, nor supports the power monitoring.
Therefore, it cannot monitor the batteries.
Slave Node
Environment monitoring is implemented in master/slave communication mode. In this mode, a
lower device (slave node device) must have a unique ID; otherwise, communication conflicts
occur in point-to-multipoint or multipoint-to-multipoint mode. The unique ID
of a lower device is called a slave node number (or a slave node address), which is determined
by the hardware (similar to the MAC address of the network interface card). In general, a lower
device provides the DIP switches that are used for adjusting the slave node number.
Make sure that the slave node numbers of all the lower devices corresponding to an upper device
are different. Otherwise, the upper device fails to communicate with the lower devices.
Analog Parameters
An analog parameter is a successive parameter, such as temperature, voltage and current. The
analog parameter monitoring interface generally uses the analog sensor, namely, the device that
detects the analog parameters in real time.
The analog sensor has the following attributes: upper alarm threshold, lower alarm threshold,
upper measurement threshold, lower measurement threshold, sensor type and unit, current value,
and current status.
l The upper and lower alarm thresholds are used to determine whether an alarm is generated
for an analog parameter. The analog parameter is in the normal state only when it meets
the following criteria:
Lower alarm threshold − Δ <= Current value <= Upper alarm threshold + Δ
where Δ indicates the hardware tolerance.
l The upper and lower measurement thresholds indicate that each sensor has its measurement
range. The measurement range of some sensors is adjustable. The measurement results vary
with the measurement range. The upper and lower alarm thresholds must be within the
measurement range.
l Sensor type: Generally, sensors consist of current sensors and voltage sensors. This
parameter is mandatory when you configure the analog parameters.
l Unit: You need to define the unit based on the object detected by the sensor and the actual
precision of the sensor.
l Current value and current status: The analog sensors can report the monitored values of
various analog parameters in real time, and generally can report the status of a parameter
(too high, too low, or normal).
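The alarm threshold check on an analog parameter reduces to a range comparison. A minimal sketch follows; the hardware tolerance is folded into the thresholds here, and the function name is an assumption.

```python
def analog_status(value: float, lower_alarm: float, upper_alarm: float) -> str:
    """Report the state an analog sensor would raise for one reading."""
    if value < lower_alarm:
        return "too low"
    if value > upper_alarm:
        return "too high"
    return "normal"
```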
For an EMU, analog parameters are divided into the built-in parameters and the extended
parameters.
l The built-in analog parameters are fixed and unchangeable.
l The extended analog parameters can be changed. That is, you can configure the analog
sensors to meet your requirements.
Digital Parameters
Compared with an analog parameter, a digital parameter is a discrete value or a state value. A
digital sensor has only two values: normal or faulty. A digital sensor tests the state value by
comparing the low level and the high level.
The digital sensor has the following attributes: alarm level, significant level, sensor type, and
current status.
l Alarm level: When the level of a digital parameter equals the alarm level, the digital sensor
generates an alarm. If the alarm level of the digital sensor is set to the high level, when the
monitored digital parameter becomes the high level, the sensor generates an alarm. When
the digital parameter becomes the low level, no alarm is generated.
l Significant level: It is the opposite of the alarm level. That is, when the level of a digital
parameter equals the significant level, the digital sensor does not generate an alarm.
l Sensor type: Generally, sensors consist of current sensors and digital sensors. This
parameter is mandatory when you configure the digital parameters.
l Current status: It is the state value detected by the digital sensor.
For an EMU, the digital parameters can also be divided into the built-in parameters and the
extended parameters.
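A digital parameter check is a single level comparison. Sketched below with assumed constants and function names; the significant level is derived as the opposite of the alarm level.

```python
HIGH, LOW = 1, 0  # the two possible levels of a digital parameter

def digital_sensor_state(level: int, alarm_level: int) -> str:
    """Alarm when the monitored level equals the configured alarm level."""
    return "alarm" if level == alarm_level else "normal"

def significant_level(alarm_level: int) -> int:
    """The significant (non-alarm) level is the opposite of the alarm level."""
    return LOW if alarm_level == HIGH else HIGH
```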
l The fan rotating speed is precisely controlled based on the temperature of the parts (boards),
and the fans do not rotate at a constant speed, thereby reducing the power that is consumed.
l The power of a board is turned off to protect the board when the temperature of the board
reaches the danger threshold.
l A remote board can be manually power cycled for maintenance, which is similar to hot
plugging.
l The model and operational information of the fans and power module can be recorded,
thereby reducing the costs associated with stocking replacement parts for several different
versions.
Introduction
Definition
l The fans for the shelf of the MA5600T do not rotate at a constant speed. They automatically
implement stepless speed adjustment according to the temperature inside the shelf detected
by the temperature sensor, thereby reducing the power that is consumed.
l The board of the MA5600T supports power turnoff, thereby saving the power.
Automatic power turnoff of the board: When detecting that the temperature on a board
exceeds the temperature threshold, the system automatically turns off the power supply
of the board, and then powers it on after 15 minutes.
Manual power turnoff of the board: When the port on the board is not configured with
any service, the power of the board can be manually turned off through the CLI or NMS
to reduce power consumption. The board can also be re-powered on through the CLI or
NMS.
NOTE
When the power of a board that is providing a service is manually turned off, the system displays a
message indicating that a service is running on the board and the service will be interrupted if the
power is turned off. It is then necessary to determine whether to turn off the power of the board.
Purpose
The purpose of the power saving feature is to reduce the power consumption of the system.
l The rotating speed of the fans is adjusted according to the temperature on the key
components inside the shelf instead of the ambient temperature. The speed adjustment is
therefore more precise, which maximizes the power savings.
l Certain boards of the MA5600T are equipped with temperature sensors, which support
querying of the board temperature, automatic power turnoff of the board at high
temperature, and manual power turnoff of the board, thereby reducing the power
consumption.
Specifications
None
Limitations
l The fans on the shelf support stepless speed adjustment.
l The control board, power board, and GIU upstream board do not support power turnoff of
the board.
Availability
l Hardware Support
The following boards support automatic power turnoff of the board at high temperature:
SPUB, GPBD, EPBD, EDTB
l License Support
The power saving feature is a basic feature of the MA5600T. Therefore, the corresponding
service is provided without a license.
Principle
This topic describes the working principle of the power saving feature.
Table 15-4 Mapping between the speed adjustment of the fans and the temperature control
point of the boards
Control Action
Point
Tmin If the temperature on all the boards is less than Tmin, the rotating speed of
the fans on the shelf is decreased by N% (N = 10).
Tmax If the temperature on a board is larger than Tmax, the rotating speed of the
fans on the shelf is increased by N% (N = 10).
Tminor If the temperature on a board is larger than Tminor, the green indicator
blinks on and off at 1 second intervals, and the rotating speed of the fans
on the shelf is increased to the full speed.
Tmajor If the temperature on a board is larger than Tmajor, the orange indicator
blinks on and off at 0.25 second intervals, the rotating speed of the fans on
the shelf is increased to the full speed, and the system generates a high-
temperature alarm. The user needs to take power saving measures, such as
shutting down the port on the board or turning off the power of the board.
Tcritical If the temperature on a board is larger than Tcritical, the system turns off
the power of the board (excluding the control board) and generates a high-
temperature alarm. The turnoff time is 15 minutes. After that, the system
forcibly powers on the board, and adjusts the rotating speed of the fans
according to the temperature on other boards.
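The control points in Table 15-4 can be expressed as one decision routine. This is an illustrative sketch: the step N = 10 and the actions are taken from the table, while the function shape and action strings are assumptions.

```python
N = 10  # speed step in percent, from Table 15-4

def adjust_fan_speed(speed_pct, board_temps,
                     tmin, tmax, tminor, tmajor, tcritical):
    """Return the new fan duty ratio and the side actions for one pass."""
    actions = []
    hottest = max(board_temps)
    if hottest > tcritical:
        # Power off the board (excluding the control board) for 15 minutes.
        actions += ["power-off-board-15min", "high-temperature-alarm"]
        speed_pct = 100
    elif hottest > tmajor:
        actions += ["orange-indicator-blink-0.25s", "high-temperature-alarm"]
        speed_pct = 100
    elif hottest > tminor:
        actions.append("green-indicator-blink-1s")
        speed_pct = 100
    elif hottest > tmax:
        speed_pct = min(100, speed_pct + N)
    elif all(t < tmin for t in board_temps):
        # Only when every board is below Tmin is the speed decreased.
        speed_pct = max(0, speed_pct - N)
    return speed_pct, actions
```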
3. The rotating speed of the fans on the shelf is adjusted to the expected value.
Figure 15-9 shows the power saving principle of the automatic/manual power turnoff of the
board.
Figure 15-9 Power saving principle of the automatic/manual power turnoff of the board
In the figure, the BRAS, the NMS, and a remote maintenance terminal connect to a shelf that
contains the fan tray, GIU upstream boards, SCU control boards, and service boards; each
service board carries a temperature sensor and a controllable power supply.
The power saving principle of the automatic/manual power turnoff of the board is as follows:
1. When detecting that the temperature on a board exceeds the temperature threshold, the
system automatically turns off the power supply of the board, and then powers it on after
15 minutes.
2. When the system detects that the temperature on a board is too high or the port on the board
is not configured with any service, the power of the board can be manually turned off
through the CLI or NMS to reduce power consumption. The board can also be re-powered
on through the CLI or NMS.
l In case of high temperature, when a command is run to manually turn off the power of
a board that is providing a service, the system prompts that a service is running on the
board and the service will be interrupted if the power is turned off. It is then necessary
to determine whether to turn off the power of the board.
l In the case that the port on a board is not configured with any service, the system does
not display any message when a command is run to manually turn off the power of the
board.
l When a command is issued from the NMS to turn off the power of a board, the system
directly turns off the power.
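The manual turnoff rules above can be summarized in a small decision function. This is illustrative only; the source identifiers and return strings are assumptions.

```python
def manual_power_off(board_in_service: bool, source: str,
                     confirmed: bool = False) -> str:
    """Decide how a manual power-off request for a board is handled."""
    if source == "nms":
        # A command issued from the NMS turns the power off directly.
        return "powered-off"
    # CLI: an in-service board requires the operator to confirm first.
    if board_in_service and not confirmed:
        return "prompt-service-will-be-interrupted"
    return "powered-off"
```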
Introduction
Definition
l The MA5600T supports power turnoff of the board: a board can be forcibly powered off/on
even when it is down or faulty. This is similar to remote hot-plugging of the board and meets
the requirement of remote maintenance.
l The MA5600T supports recording the model and operational information for the fans on
the shelf and the power module. When a fan on the shelf or the power module is faulty, the
model and operational information of the faulty part can be queried. In this manner, the
maintenance engineer can bring the correct replacement parts to the field and analyze the
cause of the fault according to the operational information.
Purpose
The purpose of remote maintenance is to reduce the human cost for multiple site visits of the
maintenance engineer and the costs associated with stocking replacement parts for several
different versions.
l The maintenance engineer can turn off the power of the faulty board through the CLI or
NMS in the CO, instead of removing and inserting the board on site, so as to recover
services.
l When a fan on the shelf or the power module is faulty, the maintenance engineer can query
the information about the faulty part and the operational information for the last three events
through the CLI in the CO, thereby allowing the engineer to prepare the correct replacement
parts and analyze the cause of the fault according to the information. This feature also
reduces costs associated with stocking replacement parts for several different versions.
Specifications
None
Limitations
l The control board, power board, and GIU upstream board do not support power turnoff of
the board.
Principle
Recording the Model and Operational Information for the Fans on the Shelf and
the Power Module
The current status analysis for the maintenance of the fans on the shelf and the power module
is as follows:
l When a fan on the shelf or the power module is faulty, the model and operational
information of the faulty module is not recorded on the host.
l The fans on the shelf and the power module have multiple models, and the maintenance
engineer needs to prepare multiple types of modules for the site, which increases the
preparation and process costs.
l The fans on the shelf and the power module at any given site can become faulty several
times, but there is no historical or current information for fault analysis.
The system records the model and operational information for the fans on the shelf and the power
module as follows:
1. When the fans on the shelf and the power module work in the normal state, the system
records the current system time and the operational information for the last three events.
The following points are included:
l System time (Systime)
l EMU type (EMU type)
l EMU name (EMU name)
l Fan type (FAN type)
l Software version (Soft ver)
2. When a fan on the shelf or the power module is faulty, a command can be run to query the
detailed information about the fans or the power module.
l The model of the fans or power module to be replaced can be precisely recognized,
which effectively reduces the preparation costs.
l The operational information can be used to analyze the fault and find the root cause,
thereby reducing possibility of fan and power module faults.
15.7.4 Glossary
Table 15-5 Terms related to the power saving and remote maintenance features
Term Description
Stepless Adjusts the rotating speed of the fans on the shelf according to the duty ratio.
speed The duty ratio is 100% when the fans rotate at full speed, and 0 when the fans
adjustment stop.
Duty ratio Describes the rotating speed of the fans on the shelf. The duty ratio is 100%
when the fans rotate at full speed, that is, the fans rotate at 100% of the
potential rotational speed.
16 Ethernet OAM
16.1 Introduction
16.2 Reference Standards and Protocols
16.3 Ethernet CFM OAM
The Ethernet CFM OAM provides an end-to-end fault detection solution for monitoring,
diagnosing, and troubleshooting the Ethernet network. This topic provides an introduction to
this feature, and describes the specifications, availability, and implementation principle of this
feature.
16.4 Ethernet EFM OAM
Ethernet EFM OAM provides a mechanism for monitoring links. It can serve as a complement
to the higher layer applications. This topic provides an introduction to this feature, and describes
the availability and working principle of this feature.
16.5 Glossary, Acronyms, and Abbreviations
16.1 Introduction
Table 16-1 Differences between Ethernet CFM OAM and Ethernet EFM OAM
ETH OAM Type Standard Purpose
Compliance
16.3.1 Introduction
Definition
OAM is a mechanism for monitoring and diagnosing network faults.
Ethernet CFM OAM is defined as connectivity fault management in IEEE 802.1ag to provide
an end-to-end fault detection and diagnosis solution.
Purpose
Ethernet is a widely used local area network technology because of its high bandwidth, low cost,
plug-and-play convenience, and support of multipoint operations. As Ethernet gradually extends
into carriers' metropolitan area networks (MANs) and wide area networks (WANs), managing
and maintaining the network has become more important. Currently, however, Ethernet has no
carrier-class management capability, and thus fails to detect L2 network faults.
Ethernet OAM provides an end-to-end fault detection solution in monitoring, diagnosing, and
troubleshooting the Ethernet.
Limitations
l If 4k MAs are configured in MD 0, no MA can be configured in MD 1 to MD 7.
l One MA supports one MEP, and one MEP maps one RMEP.
l The MEP can be configured on only the following ports:
The upstream port on the GIU, and ETH boards
The upstream port on the control board
The GEM port on the PON board
The subtend port on the ETH board
The port on the OPGD board
Aggregated port. In addition, it must be the master port in the aggregation group.
l Creation of UP MEPs and DOWN MEPs. In addition, one port can be configured as only
one type of MEP.
l One UP MEP in a specified MA.
l The MA and MEP specifications are determined by the specifications of private line users
and subtending users. The supported specifications are as follows:
Each private line user uses one MA and creates one MEP.
Each subtending user uses one MA and creates one MEP.
l Auto creation of the maintenance association intermediate point (MIP).
16.3.2 Specifications
The following specifications are supported:
16.3.3 Availability
l Hardware Support
The control board, GIU, OPGD, ETH, and PON boards support this feature.
l License Support
The Ethernet CFM OAM feature is an optional feature of the MA5600T, and the
corresponding service is controlled by the license.
16.3.4 Principle
NOTE
In this feature, the MEP refers to the port on the MA5600T unless otherwise specified.
Principle of CC
To ensure the connectivity between two MA5600Ts, add the two MA5600Ts to the same MA
(for example, MA 0) of the same MD (for example, MD 0), and configure MA5600T-1 and
MA5600T-2 as two mutual MEPs.
l MA5600T-1: Assume that the ETH access port is configured as MEP 1, as shown in Figure
16-1. MEP 1 needs to send packets to the hardware and logic. Therefore, MEP 1 is
configured as the UP MEP.
l MA5600T-2: Assume that the GIU upstream port is configured as MEP 2, as shown in
Figure 16-1. MEP 2 needs to send packets to the convergence switch. Therefore, MEP 2
is configured as the DOWN MEP.
CC is monitored through the CCMs multicasted at intervals to the domain. The working principle
is as follows:
1. Each MEP (for example, MEP 1) actively multicasts the hello messages (CCMs) at intervals
to the domain. A CCM contains the configuration information of the MEP.
2. Every MIP and MEP (for example, MEP 2) can receive CCMs but do not need to send the
response messages.
3. The MIP and MEP 2 that receive the CCMs set up an MEP database in the format of [MEP
DA, Port]. When MEP 2 receives the CCM from MEP 1, MEP 2 checks the information
contained in the CCMs and saves the CCMs to learn about different MAs.
4. The source addresses for a group of expected MEPs (MEP 1 in this example) must be
configured on MEP 2. MEP 2 checks each received CCM by comparing it with the source
address of the expected MEP (MEP 1). If MEP 2 fails to receive any CCMs within a certain
period of time, or the information carried in the received CCMs is not what MEP 2 expects,
the network between MA5600T-1 and MA5600T-2 is considered faulty.
5. MA5600T-2 reports a CCM loss alarm.
NOTE
CC can determine whether a fault occurs on the network, but cannot locate the section of the link where
the fault occurs.
The following faults may occur on the network and the corresponding alarms are reported:
l CC timeout alarm: It is a lower-layer switch link fault due to which the MEP cannot receive
the CCM. In this case, the timer of the expected RMEP times out and this alarm is reported.
l Cross connection alarm: When the MA ID in the CCM received by the MEP is different
from the local configuration, this alarm is reported.
l Received errored packet alarm: The MEP ID in the CCM received by the MEP is different
from the MEP ID of the local port.
l RMEP port and interface alarm: The CCM received by the MEP contains Port and Interface
TLV.
l RDI alarm: The RDI in the received CCM is set to 1.
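The CC bookkeeping on the receiving MEP can be sketched as follows. This is illustrative; in IEEE 802.1ag the loss timeout is 3.5 CCM intervals, and the MD/MA cross-check and port/interface TLV details are omitted.

```python
class CcmMonitor:
    """Track CCMs from one expected RMEP and raise the matching alarms."""

    def __init__(self, expected_mep_id: int, interval_s: float = 1.0):
        self.expected = expected_mep_id
        self.timeout = 3.5 * interval_s  # loss timeout per IEEE 802.1ag
        self.last_seen = None

    def on_ccm(self, mep_id: int, now: float) -> str:
        if mep_id != self.expected:
            return "errored-packet-alarm"  # unexpected MEP ID
        self.last_seen = now
        return "ok"

    def check(self, now: float) -> str:
        """Run periodically: expiry means the link is considered faulty."""
        if self.last_seen is None or now - self.last_seen > self.timeout:
            return "ccm-loss-alarm"
        return "ok"
```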
Principle of LB
A loopback detection message (LBM or LBR) is sent from an MEP to a specified MIP or MEP
to help the MEP locate the fault in the MA.
The MIP or MEP ahead of the fault location can respond to the LB message (that is, the MIP or
MEP sends the LBR packet), but the MIP or MEP behind the fault location cannot respond to
the loopback message (that is, the MIP or MEP cannot send the LBR). In this way, the fault is
located. The working principle of LB is as follows:
NOTE
The MEP must know the MAC address of the MIP or MEP to which the LBM is transmitted. Before an
LB,
l Configure the CCM so that it can record the information about the RMEP.
l Obtain the MAC address through the LTM. The LTM can acquire the MAC address of the MIP or
RMEP.
1. As shown in Figure 16-2, MEP 1 sends the LBM to MIP 1.
2. If link 1 is normal, MEP 1 receives the LBR responded from MIP 1.
NOTE
The MIP only sends the LBM to the MEP but does not forward the LBM to the next hop MIP or
MEP.
3. MEP 1 sends the LBM to MIP 2 (the next hop of MIP 1).
4. MEP 1 receives the LBR responded from MIP 2.
5. MEP 1 continues to send the LBM to MIP 3 (the next hop of MIP 2).
6. MEP 1 cannot receive the LBR responded from MIP 3 because link 2 fails.
7. MA5600T-1 can determine that the link between MIP 2 and MIP 3 (link 2) fails.
NOTE
LB is a method of diagnosing faults for any path only if the MAC addresses of all the MIPs (or MEPs)
between MA5600T-1 and MA5600T-2 are known.
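The LB walk in steps 1 through 7 amounts to probing each hop in order until one stops answering. A minimal sketch follows; the names are assumptions, and `send_lbm` returns True when an LBR comes back.

```python
def locate_fault(path, send_lbm):
    """Send an LBM to each MIP/MEP MAC address along `path`; the first
    hop that returns no LBR marks the faulty link segment. Returns
    (last good hop, first bad hop), or None when every hop answered."""
    prev = "source-MEP"
    for mac in path:
        if not send_lbm(mac):
            return (prev, mac)
        prev = mac
    return None
```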
Principles of LT
The LT messages (LTM and LTR) are used for checking the MIP path between two MEPs. All
the MIPs on a link send an LT response message (LTR) to the MEP that initiates an LT message
(LTM), and forward the LTM until it reaches the destination MIP or MEP.
Figure 16-3 shows the working principle of LT.
In the figure, the source MEP sends an LTM that is forwarded hop by hop through the MIPs on
each port, and every MIP returns an LTR to the source MEP.
If the destination is an MEP, each MIP in the MA responds to the source MEP. Through the
LTRs, the source MEP learns the MAC addresses and locations of all the MIPs, and thus the
link section where the fault occurs. The working principle of LT is as follows:
16.4.1 Introduction
Definition of EFM
OAM provides the mechanism for network administrators to monitor the network health
condition and to quickly locate the faulty links and determine the fault condition.
Ethernet in the First Mile (EFM) OAM is defined in IEEE 802.3ah Clause 57 by the IEEE EFM
Workgroup. It is an important part of Ethernet OAM. Ethernet EFM OAM provides a mechanism
for monitoring links, such as remote defect indication (RDI) and remote loopback control. It is
a mechanism at the data link layer, and is a complement to the higher layer applications.
Definition of OAMPDU
In addition to the RDI and remote loopback functions, Ethernet EFM OAM also provides an
OAM discovery mechanism, namely, an extended mechanism for the higher layer applications.
The above-mentioned functions are implemented by the exchange of the following types of
OAM protocol data units (OAMPDUs) between two neighboring entities on an Ethernet link.
l Information OAMPDU: It is used to transmit the OAM status information to the remote
end, including the OAM capability, multiplexer status, and parser status of the local end,
and the matching between the OAM capability of the local end and that of the remote end.
Here, the OAM capability refers to:
Whether unidirectional transmission is supported. This capability directly determines
whether RDI is supported.
Whether the response to the variable query is supported. That is, whether the query
about the local end information is supported.
Whether remote loopback is supported. That is, whether the local end can be set to the
loopback state by the remote end.
Whether the link parsing event is supported. That is, whether the link event transmitted
from the remote end can be processed.
Information PDU also includes the Organizationally Unique Identifier (OUI) field and the
Vendor Specific Information field, through which the vendor information of the remote
end can be learned.
l Event Notification OAMPDU: It is used to notify the remote end of specific events, such
as how many errored frames are received in a certain period and what is the threshold of
errored frames.
l Variable Request OAMPDU: It is used to query one or more MIB variables from the remote
end, such as the number of correctly received or transmitted frames.
l Variable Response OAMPDU: It is used to return one or more MIB variables to the remote
end after the Variable Request OAMPDU is received.
l Loopback Control OAMPDU: It is used to control the loopback state of the remote end.
When the remote end is in the loopback state, the data frames received by the remote end,
except OAMPDUs, are looped back to the local end.
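The five OAMPDU types above correspond to the Code values defined in IEEE 802.3ah Clause 57. Below is a minimal classifier sketch; the frame is assumed to start at the Slow Protocols subtype byte, and the destination-MAC and flags checks are omitted.

```python
# OAMPDU Code values from IEEE 802.3ah Clause 57
OAMPDU_CODES = {
    0x00: "Information",
    0x01: "Event Notification",
    0x02: "Variable Request",
    0x03: "Variable Response",
    0x04: "Loopback Control",
}

def classify_oampdu(frame: bytes):
    """Return the OAMPDU type name, or None for a non-OAM slow-protocol
    frame. Subtype 0x03 identifies OAM among the Slow Protocols."""
    if len(frame) < 2 or frame[0] != 0x03:
        return None
    return OAMPDU_CODES.get(frame[1], "Organization Specific/Reserved")
```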
Purpose
The MA5600T supports EFM OAM to obtain the alarm information (such as RDI) about the
Ethernet terminal, and supports the exchange of OAMPDUs to obtain the information about the
terminal device vendor.
Limitations
l When the EFM of the local DTE and the EFM of the remote DTE are in the passive mode,
the EFM function cannot be enabled. To enable the EFM function, the EFM of one end
must be in the active mode.
l EFM OAM must be supported by the remote DTE. If it is not supported by the remote DTE,
the line between the local DTE and the remote DTE cannot communicate normally.
l The remote loopback function is dependent on the remote DTE. If it is not supported by
the remote DTE, the remote loopback function also fails.
l When the remote loopback function is enabled, the line between the local DTE and the
remote DTE is in the loopback state. In this case, the non-OAMPDUs transmitted by the
remote DTE are directly looped back, and all the packets transmitted by the local DTE are
discarded.
l The system can receive, transmit, and process Information OAMPDUs for performing the
OAM discovery and obtaining the information about the terminal device vendor.
l The system can parse the received Event Notification OAMPDUs.
l The system can transmit and respond to the Loopback Control OAMPDUs. That is, it can
initiate and respond to a remote loopback.
l The system cannot transmit or respond to the Variable Request OAMPDUs. That is, it does
not support the query about the remote MIB variables.
16.4.2 Availability
l Hardware Support
The OPGD and VDSL boards support this feature.
l License Support
The Ethernet EFM OAM feature is an optional feature of the MA5600T, and the
corresponding service is controlled by the license.
16.4.3 Principle
The main functions of the Ethernet EFM OAM feature are as follows:
l Remote defect indication (RDI): If an Ethernet link between an ONU and the OLT supports
unidirectional transmission (that is, when one direction is faulty, data can still be transmitted
in the other direction), the faulty receiving end can transmit a special OAMPDU to notify
the remote end of the local fault.
l Remote loopback: The local end controls the remote end to enter the loopback state by
transmitting a special OAMPDU. After the remote end enters the loopback state, the packets
except OAMPDUs transmitted from the local end to the remote end are directly looped
back.
Ethernet link
ONU
OLT
NOTE
The EFM OAM packets are exchanged only between two neighboring entities on a link and are not
forwarded out of the link.
Remote Loopback
NOTE
l As defined in IEEE 802.3ah, the EFM loopback refers to the loopback at the data link layer controlled
by the remote end. This function is mainly used for locating the specific area of a fault and for testing
the link quality. The quality tests include the tests on the throughput, BER, delay, and jitter of packets.
l The prerequisite for enabling the remote loopback is that the remote DTE supports it; the
remote loopback can be configured only in that case.
CAUTION
l When the remote loopback is enabled, the local DTE and the remote DTE stop transmitting
frames. The local DTE will not receive or transmit any packet, and at the same time transmits
an OAMPDU to the remote DTE, instructing the remote DTE to enable the remote loopback.
l When the remote loopback is enabled, the services of all the users connected to the DTE
involved are interrupted. Therefore, exercise caution when using this function.
Figure 16-5 shows the principle of the remote loopback function of Ethernet EFM OAM.
Figure 16-5 Principles of the remote loopback function of Ethernet EFM OAM
OLT
ONU
Ethernet OAMPDUs
Ethernet link
The principle of the remote loopback function of Ethernet EFM OAM is as follows:
1. The local DTE receives the OAM loopback configuration information and stops
transmitting frames. The multiplexer state machine and the parser state machine are in the
discard state.
2. The local DTE transmits the remote loopback control OAMPDU to the remote DTE.
3. After receiving the remote loopback control OAMPDU, the remote DTE sets its multiplexer
state machine to the discard state and the parser state machine to the loopback state.
4. The remote DTE responds with the Information OAMPDU to the local DTE, informing
the local DTE that the multiplexer state machine on the remote DTE is in the discard state
and the parser state machine is in the loopback state.
5. After receiving the Information OAMPDU, the local DTE sets its multiplexer state machine
to the forward state.
6. The local DTE starts transmitting loopback frames to the remote DTE.
NOTE
When the link and the remote DTE are in the normal state, the remote DTE directly loops back the
loopback frames to the local DTE.
7. The local DTE analyzes the frames it transmitted and the frames looped back by the remote
DTE. By comparing the transmitted frames with the looped-back frames, the local DTE
can determine whether quality problems such as delay and bit errors occur, and thus assess
the health of the link.
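Step 7 above can be illustrated with a small sketch (hypothetical, not device code) that pairs transmitted frames with looped-back frames and counts lost frames and flipped bits:

```python
# Illustrative sketch of the loopback comparison in step 7: pair sent and
# returned frames by position and count frame loss and bit errors. The
# pairing rule is a simplifying assumption.

def loopback_stats(sent: list, returned: list) -> dict:
    """Compare transmitted loopback frames with frames returned by the remote DTE."""
    lost = len(sent) - len(returned)
    bit_errors = 0
    for tx, rx in zip(sent, returned):
        for a, b in zip(tx, rx):
            bit_errors += bin(a ^ b).count("1")  # Hamming distance per byte
    return {"lost": lost, "bit_errors": bit_errors}

sent = [b"\xAA" * 4, b"\x55" * 4, b"\xFF" * 4]
returned = [b"\xAA" * 4, b"\x55\x55\x54\x55"]   # one frame lost, one bit flipped
print(loopback_stats(sent, returned))  # {'lost': 1, 'bit_errors': 1}
```

A real test would additionally timestamp each frame to measure delay and jitter, as the quality tests in the note above describe.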
Table 16-3 Glossary of the terms related to the Ethernet OAM feature
Term Description
RMEP The MEP on any device that runs Ethernet CFM OAM is called the
local MEP. The MEPs on the other devices in the same MA are the
remote maintenance association end points (RMEPs) to the local
MEP.
UP MEP and DOWN MEP An UP MEP indicates that the MEP transmits packets in the
bridge trunk direction. A DOWN MEP indicates that the MEP
transmits packets in the physical medium direction. When a device
port is defined as an MEP, it must be defined as either an UP MEP
or a DOWN MEP, but not both; that is, an MEP port transmits
packets in only one direction. For example, after the GIU upstream
port on the MA5600T is defined as an MEP, it is a DOWN MEP if
it transmits packets only in the upstream direction (to the
convergence layer), and an UP MEP if it transmits packets only in
the downstream direction (to the hardware and logic).
EFM Ethernet in the First Mile (EFM) defines the Ethernet physical layer
specifications in the user access part and the Ethernet OAM in the
access part. EFM is mainly used for the link detection of the last mile.
It is a link-level OAM and can be regarded as a subset of OAM.
OAM discovery The Ethernet OAM discovery function is used for discovering the
remote DTE, including the OAM configuration, OAM mode,
OAMPDU information, and OUI information of the remote DTE.
Remote loopback Remote loopback sets the remote DTE to the loopback state through
commands. The EFM remote loopback function is mainly used for
locating faults and testing link performance. In the remote loopback
mode, the packet statistics of the local device and the remote DTE
can be queried and compared for these purposes.
Multiplexer state machine A state machine defined in IEEE 802.3ah for controlling packet
transmission.
Parser state machine A state machine defined in IEEE 802.3ah for controlling packet
reception.
MD Maintenance Domain
MA Maintenance Association
CC Continuity Check
LB Loopback
LT Linktrace
17 Clock Feature
17.1 NTP
The Network Time Protocol (NTP) is used to synchronize the time between the distributed time
server and the client.
17.2 Clock and Time System
This topic describes the definition and principle of the clock and time system of the
MA5600T, and describes the specific applications of clock and time synchronization.
17.1 NTP
The Network Time Protocol (NTP) is used to synchronize the time between the distributed time
server and the client.
17.1.1 Introduction
Definition
The Network Time Protocol (NTP) is an application layer protocol in the TCP/IP protocol suite.
NTP is used to synchronize the time between the distributed time server and the client. The
implementation of NTP is based on IP and UDP. NTP evolved from the Time Protocol and the
ICMP Timestamp Message, with special design for accuracy and robustness.
Purpose
NTP provides an accurate, common time for an entire network. As network topologies grow
more complicated, clock synchronization among all the devices in the entire network becomes
more critical.
The objective of NTP is to synchronize the clocks of all the devices in a network that maintain
clocks, keeping time consistent across the network. The equipment can then offer various
applications that depend on clock synchronization.
The MA5600T supports the NTP feature to guarantee that the clocks of all the devices in a
network are consistent.
17.1.2 Specifications
The MA5600T supports the following NTP specifications:
l NTP version 3
l NTP client/server mode
l NTP LAN broadcast mode
l NTP multicast mode
l NTP peer mode
l Clock filtering and selection
l Local clock calibration
l Clock source priority selection
l Support of the reference clock
l NTP security features
l Up to 128 peers in a static configuration
l Up to 100 peers in a dynamic configuration
17.1.4 Availability
License Support
The NTP feature is the basic feature of the MA5600T. Therefore, no license is required for
accessing the corresponding service.
Version Support
Hardware Support
No additional hardware is required for supporting the NTP feature.
17.1.5 Principle
As shown in Figure 17-1, the MA5600T serves as the NTP client and the router serves as the
NTP server. The MA5600T uses the time of the router as the reference and synchronizes its time
with the router through NTP.
1. The MA5600T sends an NTP packet to the router. This packet contains the timestamp when
it leaves the MA5600T. Assume that the timestamp is 10:00:00 am (T1).
2. When the NTP packet arrives at the router, the router adds its timestamp to the packet.
Assume that the timestamp is 11:00:01 am (T2).
3. When the NTP packet leaves the router, the router adds another timestamp to the packet.
Assume that the timestamp is 11:00:02 am (T3).
4. When the MA5600T receives the response packet, it adds a new timestamp to the packet.
Assume that the timestamp is 10:00:03 am (T4).
Now, the MA5600T has sufficient information to calculate two important parameters:
l The delay for a round trip of the NTP packet = (T4-T1) - (T3-T2).
l Offset between the MA5600T and the router = ((T2-T1)-(T4-T3))/2
In this way, the MA5600T can set its clock according to the information and thus keeps its clock
synchronized with that of the router.
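The two formulas above can be checked with the example timestamps (a standalone illustration, not the device implementation; the times are converted to seconds for the arithmetic):

```python
# The round-trip delay and offset formulas from the text, applied to the
# example timestamps T1..T4. Times are seconds since midnight.

def ntp_delay_offset(t1, t2, t3, t4):
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
    offset = ((t2 - t1) - (t4 - t3)) / 2     # clock offset client vs. server
    return delay, offset

t1 = 10 * 3600 + 0   # 10:00:00 am, packet leaves the MA5600T
t2 = 11 * 3600 + 1   # 11:00:01 am, packet arrives at the router
t3 = 11 * 3600 + 2   # 11:00:02 am, response leaves the router
t4 = 10 * 3600 + 3   # 10:00:03 am, response arrives at the MA5600T

delay, offset = ntp_delay_offset(t1, t2, t3, t4)
print(delay)   # 2: two seconds of round-trip delay
print(offset)  # 3600.0: the MA5600T clock is one hour behind the router
```

The client then steps or slews its clock by the computed offset, which is exactly the one-hour difference visible in the example.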
17.2.1 Introduction
Definition
Moving to IP ("IP-nization") is the trend of future network and service development, and also
of the bearer network. Difficulties, however, currently exist in the transition from the SDH-based
traditional network to the IP-based Ethernet bearer network. One key technology involved is
how to carry traditional TDM services on the new network. Traditional TDM services have two
major applications: voice service and clock synchronization service.
In a traditional communications network architecture, the TDM service of the fixed network is
mainly voice service. Cumulative inconsistency between the clocks at both ends of the bearer
network over a long time causes bit slip. The ITU-T Recommendation G.823 defines the
requirements on and the test standards of the TDM service of the fixed network. The definition
is called the G.823 traffic interface standard. Apart from the bearer network, a traditional
communications network usually contains an independent clock-issuing network, which adopts
PDH/SDH for issuing clock signals. As specified by the ITU-T, the clock must meet the G.823
TIMING interface requirements.
In a communications network, the wireless application has the most rigorous requirements on
the clock frequency. The frequencies of different BTSs must be synchronized within a specified
precision; otherwise, re-synchronization occurs during BTS switching. Current wireless
technologies fall into different systems, and different systems have different requirements on
clock bearing. Systems represented by GSM/WCDMA adopt asynchronous base station
technologies. In this case, only frequency synchronization is required, at a precision of 0.05 ppm
(50 ppb), and the clock needs to be provided by the bearer network. The traditional solution is
to provide the clock through PDH/SDH; after the move to IP, the clock needs to be provided by
the IP network. Synchronous BTS technologies, represented by CDMA/CDMA2000, require
phase synchronization of the clock (also called time synchronization). Table 17-2 lists the
detailed requirements on clocks.
Purpose
The purpose is to ensure the clock synchronization between communications devices and
communications networks.
17.2.2 Specifications
Scenarios of clock and time synchronization are as follows:
l In the scenario involving only broadband Internet access service, clock synchronization is
not required.
l Clock synchronization is required for high-speed fax service (similar to high-speed modem
service) that needs to stay always online; if always online is not required, clock
synchronization is not required.
l TDM private line services (terminating CESoP for upstream transmission through T1)
require clock synchronization.
l In the mobile bearing scenario, the CKMC stratum-3 clock daughter board needs to be
configured if the base station obtains clock signals from the MA5600T.
The MA5600T supports the following clock processing functions:
l Output wander
l Input jitter and wander tolerance
l ITU-T G.8261, Timing and Synchronization Aspects in Packet Networks
The ITU-T G.8261 defines the wander budget of CES and synchronous Ethernet for packet
networks. The ITU-T G.8261 requirements are similar to the requirements of the ITU-T
G.823 on the TDM network. The CESoP service and synchronous Ethernet clock feature of
the MA5600T need to meet the ITU-T G.8261 requirements.
l IEEE 1588: High precision time synchronization protocol for networked measurement and
control systems, with short form Precision Time Protocol (PTP)
17.2.4 Availability
Version Support
Table 17-3 lists the MA5600T versions that support the clock and time synchronization feature.
Table 17-3 Versions supporting the clock and time synchronization feature
Product Version
Figure 17-2 Solution for the clock and time synchronization system
(Figure labels: system clock output; BITS B port to CITB BITS_IN0/BITS_IN1, 1.544 MHz
and 1.544 Mb/s signals; 1588 packets via GICK; line clocks from the X2CS/GICK, ETHB/
OPGD xPON, and EDTB boards; selectors; CKMC; PLL.)
NOTE
In the case of the CESoP service and native TDM service, the clock quality is not directly related
to the stratum-3 clock unit. The clock quality meets the requirements of G.8261 CESoP or
G.823 traffic.
Clock/Time Output
The MA5600T system outputs clock signals through the clock output interface or the
synchronization service interface. The output clock signals serve as the reference clock of the
device interconnected with the MA5600T. The synchronization service interface can select the
system clock, line receive clock, board oscillator, or CESoP clock as the transmit clock. The
interface is capable of outputting the system clock signals only when the system clock is selected
as the transmit clock of the interface.
The transmit clock of the synchronous Ethernet port is the system clock by default. The
MA5600T supports the change of the transmit clock mode of the synchronous Ethernet port.
The Ethernet port without the synchronization capability adopts the oscillator of the board as
the transmit clock. The following boards support the synchronous Ethernet port: H801X2CS,
H801GICK, H802OPGD, and H801ETHB. The H802OPGD and H801ETHB boards do not
support the recovery of the line receive clock, and support only the system clock output.
The MA5600T functions as an OLT, and the line transmit clock of the GPON port and EPON
port uses the system clock, the signals of which are transmitted to the ONT. The GPON
boards of the OLT include the H801GPBD. The EPON boards of the OLT include the
H801EPBD.
l Locked mode
In this mode, the system clock source is synchronized with the input clock source. The
phases of the system clock source and the input clock source are in a constant relationship.
The MA5600T locks the BITS clock source, the line clock source, or the IEEE1588 V2
message recovered clock source. Locking an ideal clock source can meet the 50 ppb
requirement for mobile bearing.
l Holdover mode
The MA5600T records the clock data of the locked mode. If the locked clock source is lost,
the system builds a system clock by using the recorded clock data, and maintains the clock
properties as consistent as possible with the clock properties of the locked mode. As such,
the system enters the holdover mode. The precision of the holdover mode meets the G.813
or G.8262 requirements.
l Free-run mode
The system supports the free-run mode only when configured with the stratum-3 clock
daughter board (CKMC). The system supports a maximum free-run duration of 24 hours.
In the free-run mode, the MA5600T works based on the inherent frequency of its internal
crystal oscillator.
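The three modes above imply a simple fallback order, which can be sketched as follows (illustrative logic only, not the device implementation; the actual selection also involves clock source priorities and the CKMC daughter board):

```python
# Hedged sketch of the clock mode fallback described above: locked while a
# clock source is available, holdover using recorded clock data after the
# source is lost, and free-run on the internal oscillator as the last resort.

def select_clock_mode(source_available: bool, holdover_data_valid: bool) -> str:
    """Return the clock working mode given the source and holdover state."""
    if source_available:
        return "locked"       # phase-locked to BITS, line, or IEEE 1588 source
    if holdover_data_valid:
        return "holdover"     # rebuild the clock from recorded locked-mode data
    return "free-run"         # inherent frequency of the internal oscillator

print(select_clock_mode(True, True))    # locked
print(select_clock_mode(False, True))   # holdover
print(select_clock_mode(False, False))  # free-run
```

In practice the transition out of holdover is also time-bounded; the text notes a maximum free-run duration of 24 hours with the CKMC daughter board.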
(Figure labels: system default; local oscillator; holdover; clock source unavailable or timed
out; GICK 1588 packet; selector; EDTB; system clock output by service board; CKMC;
X2CS/GICK; PLL; system phase-locked loop; SCU card (active); line clock source.)
When the system clock is selected as the reference output clock, the clock output through the
CITB port is the system clock phase-locked by the MA5600T.
The following clock sources can serve as the system clock:
l BITS clock source: CITB, including 1.544 Mb/s and 1.544 MHz signals.
l T1 line clock: EDTB
l Synchronous Ethernet line clock: H801X2CS, H801GICK.
l IEEE1588 V2 clock source
l Free-running internal clock source
CESoP packets. As shown in the figure, the device on the left side transmits packets to the
destination device according to its source clock. The destination device buffers these packets
in a queue, and then sends them according to its local clock. Even a slight disparity between
the source clock and the local clock of the destination device causes the depth of the buffer
queue of the destination device to change. The depth of the queue can therefore be used to
determine whether the local clock is synchronized with the source clock. If the depth of the
queue continuously increases, the local clock is slower than the source clock and is sped up;
if the depth of the queue continuously decreases, the local clock is faster than the source clock
and is slowed down. The purpose of such adjustment is to keep the local clock synchronized
with the source clock in the long term.
(Figure labels: data packets flow from the source node to the destination node across the
network; LIU at each end; source clock; destination clock.)
The difficulty of the auto-adaptation algorithm is that the IP network has inherent delay jitter,
namely packet delay variation (PDV). PDV also changes the depth of the buffer queue. The
destination IWF, however, cannot distinguish whether a change is caused by a frequency
disparity or by the delay jitter of the IP network, and therefore cannot always react correctly.
The delay jitter of the IP network is not cumulative, and can thus be filtered out by statistical
methods, such as calculating an average value.
This method has low requirements on the system but can be affected by network transmission
conditions. In actual applications, this method meets the G.8261 requirements on the CES port.
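The adjustment loop described above can be sketched as follows (an illustrative model, not the device algorithm; the target depth, averaging window, and gain are arbitrary assumptions):

```python
# Hedged sketch of adaptive clock recovery: watch the jitter-buffer depth,
# filter out packet delay variation with a moving average, and nudge the
# local clock frequency so the average depth stays near a target.

from collections import deque

class AdaptiveClockRecovery:
    def __init__(self, target_depth=50, window=8, gain=0.001):
        self.target = target_depth
        self.samples = deque(maxlen=window)   # moving-average filter for PDV
        self.gain = gain
        self.freq_adjust_ppm = 0.0            # cumulative frequency correction

    def on_depth_sample(self, depth: int) -> float:
        self.samples.append(depth)
        avg = sum(self.samples) / len(self.samples)
        # Queue growing -> local clock too slow -> speed it up (and vice versa).
        self.freq_adjust_ppm += self.gain * (avg - self.target)
        return self.freq_adjust_ppm

rec = AdaptiveClockRecovery()
for depth in [50, 52, 55, 58, 60]:            # buffer steadily filling
    adj = rec.on_depth_sample(depth)
print(adj > 0)  # True: the local clock is sped up
```

The moving average is the statistical filtering mentioned in the text: a transient PDV spike barely moves the average, while a genuine frequency disparity accumulates and steadily shifts it.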
2. Differentiated recovery mode
Figure 17-6 illustrates the principle of the differentiated clock recovery. The system has a
primary reference source (PRS) clock that is shared by the receive end and the transmit end. The
transmit end compares the transmit clock with this PRS clock, and records the difference of the
two clocks in the packet as a time stamp. The receive end retrieves this time stamp from the
packet, and then recovers the clock of the transmit end according to the time stamp and the PRS
clock. The prerequisite of this mode is that a PRS clock must exist in the system.
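The differentiated method can be sketched numerically as follows (a minimal illustration, assuming both ends see the same PRS frequency; the 2.048 MHz value is an arbitrary example, not a product parameter):

```python
# Hedged sketch of differentiated clock recovery: the transmit end stamps
# each packet with the difference between its service clock and the shared
# PRS clock; the receive end adds that difference back to its own view of
# the PRS clock to recover the transmit clock. Frequencies in Hz.

PRS_FREQ = 2_048_000.0          # shared primary reference source, seen by both ends

def encode_timestamp(service_freq: float) -> float:
    """Transmit side: record the service clock relative to the PRS clock."""
    return service_freq - PRS_FREQ

def recover_clock(timestamp: float) -> float:
    """Receive side: rebuild the transmit clock from the PRS clock and the stamp."""
    return PRS_FREQ + timestamp

ts = encode_timestamp(2_048_003.5)   # transmit clock runs 3.5 Hz fast
print(recover_clock(ts))             # 2048003.5: exactly recovered
```

Because only the difference crosses the network, PDV does not corrupt the recovery; this is why the mode works well whenever a PRS clock exists at both ends, as the text states.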
(Figure labels: data packets flow from the source node to the destination node across the
network; LIU at each end; source clock; destination clock; time stamp generation at both
ends against the shared PRS clock.)
(Figure labels: board names, garbled in the source; synchronization relation.)
In the line clock mode, the T1 line receive clock is adopted as the T1 transmit clock of the
EDTB board. In this networking, the port interconnected with the T1 port of the EDTB board
must serve as the master port. The T1 data received by the upstream port is encapsulated as
CESoP data and transmitted to the ONU through the PON port. The ONU recovers the T1
receive clock of the EDTB board through CESoP, and uses the recovered clock as the T1
transmit clock of the ONU.
When the ONU recovers the clock in the CESoP mode, the ONU can choose not to synchronize
with the system clock of the MA5600T. As shown in Figure 17-8, the upstream port adopts the
line clock, and the ONU port adopts the CESoP recovery clock.
(Figure labels: board names, garbled in the source; synchronization relation; CESoP clock;
ONU; service channel; T1.)
The MA5600T supports the 10GE and GE synchronous Ethernet applications, and can issue
the GE and FE system clock signals. In the transmit direction, the system clock is adopted as
the port transmit clock by default, and this clock mode cannot be changed. In the receive
direction, each port recovers the line clock, which serves as an optional clock source of the
system.
The following interface boards of the MA5600T support synchronous transmission and
reception: H801X2CS, H801GICK.
Some interface boards of the MA5600T can issue the system clock signals. Such boards include
the H801OPFA and H801ETHB. The H801ETHB board provides eight 1000BASE-X ports.
In the transmit direction, the system clock is adopted as the transmit clock of each port to serve
as the reference clock of lower-level devices. In the receive direction, the line clock cannot be
recovered, and the transmission of clock signals is unidirectional.
In the following figure, the MA5600T on the left side adopts the BITS clock source as the system
clock, and the system clock is adopted as the transmit clock of the X2CS board by default. The
MA5600T on the right side adopts the line recovered clock of the X2CS board as the system
clock. In this way, synchronization is implemented between the two MA5600Ts. The ETHB
boards of the two MA5600Ts respectively issue the system clock signals to the lower-level
networks, thus synchronizing the entire network, as shown in Figure 17-10.
(Figure labels: synchronous Ethernet; X2CS; ETHB; GICK; CITB; BITS; synchronization
relation; service channel.)
Pull-in and pull-out ranges In the non-locked mode, when the frequency of the input
clock approximates a certain range around the central
frequency, the clock phase-locked loop enters the locked
mode from the free-run mode or the holdover mode. Such a
range is called the pull-in range.
In the locked mode, when the frequency of the input clock
wanders out of a certain range around the central frequency,
the clock phase-locked loop enters the holdover mode. Such
a range is called the pull-out range. The pull-out range does
not consider repeated changes of the input frequency.
Long-term phase transient response (holdover) The device records the holdover data when
locking an external clock source for a long
time. When the external clock source is lost,
the device provides a system clock source
according to the recorded holdover data; a
device in this state is in the holdover mode.
The holdover performance reflects the
stability of the system against frequency
wander within a certain period after the clock
source is lost.
E2E End-to-end
OC Ordinary clock
BC Boundary clock
TC Transparent clock
P2P Peer-to-peer