V600R002C03
Product Description
Issue 01
Date 2010-03-01
Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and
the customer. All or part of the products, services and features described in this document may not be
within the purchase scope or the usage scope. Unless otherwise specified in the contract, all statements,
information, and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute the warranty of any kind, express or implied.
Purpose
This document describes the contents, the related version, the intended audience, the
conventions, and the update history.
Related Versions
The following table lists the product versions related to this document.
Intended Audience
The intended audience of this document includes:
- On-site maintenance engineers
- Commissioning engineers
- System maintenance engineers
Organization
This document consists of nine chapters and is organized as follows.
Chapter Description
3 Hardware Architecture: This chapter describes the chassis, fans, power modules, and board types of the NE40E.
4 Link Features: This chapter describes the link features of the NE40E.
5 Service Features: This chapter describes the service features of the NE40E.
6 Application Scenarios: This chapter describes the networking applications of the NE40E.
7 Operation and Maintenance: This chapter describes the operation and maintenance, and network management of the NE40E.
8 Technical Specifications: This chapter describes the technical specifications of the NE40E.
9 Compliant Standards: This chapter describes the compliant standards of the NE40E.
A Acronyms and Abbreviations: This appendix lists the acronyms and abbreviations mentioned in this manual.
Conventions
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
General Conventions
The general conventions that may be found in this document are defined as follows.
Convention Description
Command Conventions
The command conventions that may be found in this document are defined as follows.
Convention Description
GUI Conventions
The GUI conventions that may be found in this document are defined as follows.
Convention Description
Keyboard Operation
The keyboard operations that may be found in this document are defined as follows.
Format Description
Key: Press the key. For example, press Enter and press Tab.
Key 1+Key 2: Press the keys concurrently. For example, pressing Ctrl+Alt+A means the three keys should be pressed concurrently.
Key 1, Key 2: Press the keys in turn. For example, pressing Alt, A means the two keys should be pressed in turn.
Mouse Operation
The mouse operations that may be found in this document are defined as follows.
Action Description
Update History
Updates between document versions are cumulative. Therefore, the latest document version
contains all updates made to previous versions.
Contents
2 Architecture
2.1 Physical Architecture
2.2 Logical Architecture
2.3 Software Architecture
2.4 Data Forwarding Process
3 Hardware Architecture
3.1 NE40E-X2
3.1.1 Chassis
3.1.2 Heat Dissipation System
3.1.3 Power Supply System
3.1.4 Introduction to the Board Cage
3.1.5 MPU
3.1.6 NPUI-20
3.2 NE40E-X1
3.2.1 Chassis
3.2.2 Heat Dissipation System
3.2.3 Power Supply System
3.2.4 Introduction to the Board Cage
3.2.5 MPU
3.2.6 NPUI-20
3.3 Subcard
4 Link Features
4.1 Ethernet Link Features
4.1.1 Basic Features
4.1.2 Eth-Trunk
4.2 CPOS Link Features
4.2.1 Channelization
4.2.2 PPP/TDM
4.3 TDM Link Feature
4.4 E1 Link Features
4.5 ATM E1 IMA
4.6 E-Trunk
4.7 APS
9 Compliant Standards
9.1 Standards and Telecom Protocols
9.2 Electromagnetic Compatibility Standards
9.3 Safety Standards
9.4 Environmental Standards
9.5 Other Standards
Figures
Figure 4-2 Inverse multiplexing and de-multiplexing of ATM cells in IMA groups
Figure 4-3 E-Trunk
Figure 5-1 Networking diagram of applying interface-based QinQ
Figure 5-5 Network diagram of the VLAN swapping feature based on QinQ
Figure 5-6 Application of tangent RRPP rings in the MAN
Figure 5-7 Structure of the IPv4/IPv6 dual stack
Figure 5-8 Networking diagram of applying LDP over TE
Figure 5-9 Networking diagram of applying MPLS OAM
Figure 5-10 Networking diagram of a VLL
Figure 5-11 VPLS networking
Figure 5-12 H-VPLS model
Tables
Table 3-6 Technical parameters of the fan module on the NE40E-X1
Table 3-7 Technical parameters of the DC power supply module on the NE40E-X1
Table 3-8 Description of the slots on the NE40E-X1
Table 5-1 Attack types and DHCP snooping working modes
Table 8-1 Parameters of the NE40E-X2
Table 8-2 Parameters of the NE40E-X1
1 Introduction
1.1 Positioning
The Huawei NE40E-X1 and NE40E-X2 Metro Services Platforms are high-end network products
used to access, converge, and transmit carrier-class Ethernet services on Fixed-Mobile
Convergence (FMC) Metropolitan Area Networks (MANs).
The NE40E-X1 and NE40E-X2 run the Versatile Routing Platform (VRP) operating
system developed by Huawei and adopt hardware-based forwarding and non-blocking
data switching technology. The NE40E features carrier-class reliability, line-speed forwarding
capability, a comprehensive Quality of Service (QoS) mechanism, strong service processing
capability, and good expandability.
The NE40E-X1 and NE40E-X2 feature strong capabilities in network access, Layer 2
switching, and transmission of Ethernet over MultiProtocol Label Switching (EoMPLS)
services. With the support of diverse high-speed and low-speed interface types, the NE40E
can bear triple play services, 2G services, 3G services, and LTE services. The NE40E can
work in conjunction with the CX, NE, and ME series products developed by Huawei to build
a hierarchical metro Ethernet that provides comprehensive services for customers.
Supports MultiProtocol Label Switching (MPLS), MPLS Traffic Engineering (TE), and IP Telephony
Network (IPTN) solutions. Supports fast convergence of the Interior Gateway Protocol (IGP),
multicast, and the Border Gateway Protocol (BGP).
Provides comprehensive VPN services and strong QoS capabilities. The VPN services include
L2VPN services such as Virtual Private LAN Service (VPLS), Hierarchical VPLS (HVPLS), and Virtual
Leased Line (VLL), as well as L3VPN services, multicast VPN services, Huawei-patented Hierarchy of VPN
(HoVPN) services, and multi-role host services.
Provides Eth PWE3, TDM PWE3, 1588v2 clocks, Ethernet clocks, and adaptive clocks, and
ensures network reliability and offers a complete IP backhaul solution by supporting
enhanced automatic protection switching (E-APS), enhanced trunk (E-Trunk), and PW redundancy.
The NE40E-X1 and NE40E-X2 provide five levels of scheduling to meet the requirements of
different service combinations.
- The NE40E-X1 and NE40E-X2 support PQ and WFQ, realizing fair scheduling while
preferentially guaranteeing services of high priorities.
- The NE40E-X1 and NE40E-X2 support a three-level switching network based on
Combined Input and Output Queuing (CIOQ), preventing head-of-line blocking.
- Flow-based scheduling: The NE40E-X1 and NE40E-X2 support DiffServ and Integrated
Services (IntServ), facilitating the implementation of MPLS TE.
- PQ: The NE40E-X1 and NE40E-X2 support eight priority queues, preventing traffic of
high priorities from being interrupted.
The preceding QoS mechanisms meet the demands of the IPTN and the
multi-service-bearing IP network by providing differentiated delay, jitter, bandwidth, and
packet loss ratios for services, guaranteeing carrier-class services such as Voice
over IP (VoIP) and IPTV.
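The scheduling behaviour described above can be illustrated with a short sketch. The following Python fragment is purely illustrative and is not the NE40E implementation: it combines strict priority queuing (PQ) for the two highest of eight priority queues with a rough credit-based approximation of weighted fair queuing (WFQ) for the rest; the queue split, weights, and packet names are assumptions.

from collections import deque

# Eight priority queues: 7 is highest, 0 is lowest (assumed numbering).
queues = {p: deque() for p in range(8)}
# Assumed split: queues 6-7 are served with strict priority (PQ),
# queues 0-5 are served by a credit-based approximation of WFQ.
PQ_LEVELS = (7, 6)
WFQ_WEIGHTS = {5: 30, 4: 25, 3: 20, 2: 15, 1: 7, 0: 3}
wfq_credit = {p: 0 for p in WFQ_WEIGHTS}

def enqueue(priority, packet):
    queues[priority].append(packet)

def dequeue():
    """Return the next packet to transmit, or None if all queues are empty."""
    # PQ: always serve the highest strict-priority queue that has packets.
    for p in PQ_LEVELS:
        if queues[p]:
            return queues[p].popleft()
    # WFQ: grant credit in proportion to weights, then serve a backlogged queue.
    backlogged = [p for p in WFQ_WEIGHTS if queues[p]]
    if not backlogged:
        return None
    while True:
        for p in sorted(backlogged, reverse=True):
            wfq_credit[p] += WFQ_WEIGHTS[p]
            if wfq_credit[p] >= 100 and queues[p]:   # enough credit for one packet
                wfq_credit[p] -= 100
                return queues[p].popleft()

# Example: VoIP traffic in queue 7 is always sent before best-effort queue 0.
enqueue(0, "best-effort"); enqueue(7, "voip")
print(dequeue())   # -> voip
print(dequeue())   # -> best-effort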
Item Description
Protections against abnormalities:
- The key components such as the clocks and management buses work in backup mode.
- The system can automatically restart and recover when abnormalities occur.
- The system can reset a faulty board and restore the services on the board.
- The system provides protection against over-current and over-voltage for power modules and interfaces.
- The system provides protection against mis-insertion of boards.
Power alarm monitoring: The system provides alarm prompt, alarm indication, running status query, and alarm status query.
Voltage and environment temperature monitoring: The system provides alarm prompt, alarm indication, running status query, and alarm status query.
Reliability design:
- The control channel is separated from the service channel to provide a non-blocking control channel.
- The system provides fault detection for the system and boards, indicators, and the Network Management System (NMS) alarm function.
Reliable upgrade:
- Supports online patching.
- Improves the upgrade methods of the device and supports In-Service Software Upgrade (ISSU), which shortens the duration of service interruption.
- Supports version rollback without interrupting services.
- Supports in-service upgrading of the BootROM.
Fault tolerance design:
- The backplane bus supports 8BIP check.
- The system supports Error Checking and Correction (ECC) Random Access Memory (RAM).
- Data backup: The system supports hot backup of data between the active and standby units. When the active unit fails, the standby unit automatically takes over for data transmission, preventing data loss.
- The system supports automatic upgrade and restoration of the BootROM program.
- The system can back up configuration files to a remote File Transfer Protocol (FTP) server.
- The system can automatically select and run correct configuration files.
- The system monitors the system software for abnormalities, and provides automatic restoration and log records.
2 Architecture
(Figure: physical architecture of the NE40E, comprising the functional host system, the power distribution system, the heat dissipation system, the Monitorbus, and the network management system. RTN: Return.)
Except the network management system (NMS), all the other systems are in the integrated
cabinet. Among these systems, the power distribution system works in 1+1 backup mode. The
following describes only the functional host system.
The functional host system consists of the system backplane, MPU, NPUI-20, and subcard.
The functional host system is mainly responsible for data processing, device monitoring, and
device management, including the control and management of the power distribution system
and heat dissipation system. The functional host system is connected to the NMS through
NMS interfaces. Figure 2-2 illustrates the structure of the functional host system.
(Figure 2-2: structure of the functional host system — the backplane interconnects the MPUs (active and slave) and the NPU through the control bus, monitor bus, and 2x10G data bus; the MPU provides GE, console, BITS, and USB interfaces.)
The NE40E-X1 has one NPU and four PIC subcards.
(Figure: logical architecture of the NE40E, showing the control and management plane and system monitoring units on the two MPUs, and the data plane formed by the management and forwarding units of the NPUI and the PICs, connected through data channels.)
- The data plane is responsible for high-speed processing and non-blocking switching of
data packets. It encapsulates or decapsulates packets, forwards IPv4/IPv6/MPLS packets,
performs QoS and scheduling, completes inner high-speed switching, and collects
statistics.
- The control and management plane is the core of the entire system. It controls and
manages the system. The control and management unit processes protocols and signals,
configures and maintains the system status, and reports and controls the system status.
- The monitoring plane monitors the system environment. It detects the voltage, controls
power-on and power-off of the system, and monitors the temperature and controls the fans.
In this manner, the security and stability of the system are ensured. It can isolate a fault
promptly in the case of a unit failure to guarantee the operation of other parts.
(Figure: software architecture of the NE40E, showing the active and standby RPSs, the power monitoring and fan monitoring modules, IPC between the RPSs, and SNMP-based management.)
In terms of software, the NE40E consists of the Routing Process System (RPS), the power
monitoring module, the fan monitoring module, the Forwarding Support Unit (FSU), and the Express
Forwarding Unit (EFU).
- The RPS is the control and management module that runs on the MPU. The RPSs of the
active MPU and the standby MPU back up each other. They support IPv4/IPv6, MPLS,
LDP, and routing protocols, calculate routes, set up LSPs and multicast distribution trees,
generate unicast, multicast, and MPLS forwarding tables, and deliver routing
information to the LPU. The RPS includes the IPOS software, the VRP software, and the product
adapter software.
- The FSU implements the functions of the link layer and IP protocol stacks on interfaces.
- The EFU performs hardware-based IPv4/IPv6 forwarding, multicast forwarding, MPLS
forwarding, and statistics collection.
(Figure 2-5: data forwarding process — datagrams enter through a PIC, undergo QoS in the upstream direction, queue scheduling, congestion management, and multicast replication in the TM, and QoS in the downstream direction before leaving through a PIC.)
As shown in Figure 2-5, the Packet Forwarding Engine (PFE) adopts the Network Processor
(NP) or Application Specific Integrated Circuit (ASIC) to search the routing table and forward
packets at a high speed. External memories include the Static Random Access Memory
(SRAM), Dynamic Random Access Memory (DRAM), and Net Search Engine (NSE). The
SRAM stores forwarding entries; the DRAM stores packets; and the NSE performs non-linear
searching.
The data forwarding process can be classified as the upstream and downstream processes
according to data flow directions.
Upstream process: Packets are encapsulated in frames on the Physical Interface Card (PIC)
and then sent to the PFE. On the incoming interface, packets are decapsulated and packet
types are identified. Then, traffic classification is performed according to the configurations
on the incoming interface. In addition, information about scheduling priorities is carried in
the packets sent to the Traffic Manager (TM) for traffic scheduling. Then, the Forwarding
Information Base (FIB) is searched to forward packets. For example, to forward an IPv4
unicast packet, the FIB is searched for the outgoing interface and the next hop according to
the destination IP address of the packet. Finally, the search results and the packets are sent
to the TM.
Downstream process: According to the packet types parsed in the upstream process and the
outgoing interface, the packets are encapsulated through the link layer protocol and stored in
corresponding queues. For an IPv4 packet whose outgoing interface is an Ethernet interface,
the MAC address needs to be obtained according to the next hop. Then, the outgoing traffic
can be classified according to the configurations on the outgoing interface. Finally, the
packets are encapsulated with the new Layer 2 header on the outgoing interface and are then
sent to the PIC.
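To make the FIB lookup in the upstream process concrete, the following Python sketch performs a longest-prefix match on a small forwarding table to find the next hop and outgoing interface for an IPv4 destination address. The table entries and interface names are invented for the example and do not reflect NE40E internals.

import ipaddress

# Assumed example FIB: (prefix, next hop, outgoing interface).
FIB = [
    ("10.0.0.0/8",  "192.168.1.1", "GE1/0/0"),
    ("10.1.0.0/16", "192.168.2.1", "GE2/0/0"),
    ("10.1.1.0/24", "192.168.3.1", "GE3/0/0"),
    ("0.0.0.0/0",   "192.168.9.1", "GE9/0/0"),   # default route
]

def fib_lookup(dst_ip):
    """Return (next_hop, out_interface) for the longest matching prefix."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    best_len = -1
    for prefix, next_hop, out_if in FIB:
        net = ipaddress.ip_network(prefix)
        if dst in net and net.prefixlen > best_len:
            best, best_len = (next_hop, out_if), net.prefixlen
    return best

# The /24 entry wins over the /16 and /8 entries for this destination.
print(fib_lookup("10.1.1.25"))   # -> ('192.168.3.1', 'GE3/0/0')
print(fib_lookup("172.16.0.1"))  # -> default route ('192.168.9.1', 'GE9/0/0')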
3 Hardware Architecture
3.1 NE40E-X2
3.1.1 Chassis
3.1.2 Heat Dissipation System
3.1.3 Power Supply System
3.1.4 Introduction to the Board Cage
3.1.5 MPU
3.1.6 NPUI-20
3.1.1 Chassis
The NE40E-X2 measures 442 mm x 220 mm x 222 mm (W x D x H), and can be mounted in an
N63E cabinet, a standard 19-inch cabinet, or a 23-inch North American open rack.
Figure 3-1 illustrates the appearance and components of the NE40E-X2.
Parameter Value
Weight: 1.7 kg
Maximum power consumption: 180 W
Maximum wind pressure: 477.2 Pa
Maximum air volume: 64.4 CFM
Noise: 64.3 dB
Table 3-2 Technical parameters of the DC power supply module on the NE40E-X2
Item Parameter
(Figure: slot layout of the NE40E-X2 — slots 1 and 2: MPU; slots 3 to 6 and 9 to 12: PIC; slots 7 and 8: NPU; slots 13 and 14: PWR; slot 15: FAN.)
Slots 3 to 6 and 9 to 12 (8 slots): Indicates the slots for subcards. Slots 5, 6, 9, and 10 can be equipped with both high-speed and low-speed subcards. Slots 3, 4, 11, and 12 support only low-speed subcards.
Slots 7 and 8 (2 slots): Indicates the slots for the NPUs.
Slots 1 and 2 (2 slots): Indicates the slots for the MPUs. The two MPUs work in 1:1 backup mode.
Slots 13 and 14 (2 slots): Indicates the slots for DC power supply modules. The two DC power supply modules work in 1+1 backup mode.
Slot 15 (1 slot): Indicates the slot for the fan module.
Low-speed subcards refer to the subcards whose single-port rate is lower than 1 Gbit/s; high-speed
subcards refer to the subcards whose single-port rate is higher than or equal to 1 Gbit/s.
3.1.5 MPU
The NE40E-X2 can work with a single MPU or two MPUs in backup mode.
When the NE40E-X2 is equipped with two MPUs, the master MPU works in the active state
and the slave MPU is in the standby state. You cannot access the management interface of the
slave MPU, or configure commands on the console or the AUX interface of the slave MPU.
The slave MPU exchanges information (including Heartbeat messages and backup data) only
with the master MPU. Data consistency between the master and slave MPUs is ensured
through high-reliability mechanisms such as batch backup and real-time backup. After a
master-slave switchover, the slave MPU immediately takes over from the master MPU. The default
master MPU is configurable. During startup, the MPU that you specify wins the
competition and becomes the master MPU.
MPUs support two switchover modes: failover and manual switchover. The failover is
triggered by serious faults or resetting of the master MPU. The manual switchover is triggered
by commands run on the console interface.
The MPU integrates multiple functional units. By integrating the system control and
management unit, system switching unit, clock unit, and management and maintenance unit,
the MPU provides the functions of the control plane, switching plane, and maintenance plane.
The function and hardware implementation of each integrated part are separated from each
other. The following describes the function and hardware implementation of the MPU.
- System control and management unit
The MPU is mainly responsible for processing routing protocols. In addition, the MPU
broadcasts and filters routing packets, downloads routing policies from the policy server,
manages the NPUI-20s, and communicates with the NPUI-20s.
The MPU implements outband communication between boards. The MPU manages and
carries out communication between the NPUI-20s and slave MPU through the outband
management bus.
The MPU is also responsible for data configuration. The system configuration data,
booting files, upgrade software, and system logs are stored on the MPU. The CF card on
the MPU stores system files, configuration files, and logs, and does not support hot swap.
The MPU manages and maintains the device through management interfaces such as the
serial interface and the network interface.
- System clock unit
The system clock unit of the MPU provides LPUs with reliable and synchronous SDH
clock signals.
The MPUs of the NE40E-X2 support the clock that complies with IEEE 1588v2.
- System maintenance unit
The system maintenance unit of the MPU collects monitoring information, remotely or
locally tests system units, or performs in-service upgrade of system units.
Through the Monitorbus, the MPU collects operation data periodically. The MPU
generates control information, for example, to detect board presence and adjust the
fan speed. Through the load bus, the MPU tests or performs in-service upgrades of system units from
the far end or the near end.
The MPU works in 1+1 hot backup mode, improving the system reliability.
3.1.6 NPUI-20
The NPUI-20 has bi-directional 20 Gbit/s forwarding capability. All subcards exchange data
through the NPUI-20s. Each NPUI-20 provides two 10G Ethernet optical interfaces, supports
WAN and LAN modes, and can be installed with XFP optical modules.
The NE40E-X2 can be equipped with two NPUI-20s, working in back-to-back mode. In this
mode, the NPUI-20 in slot 7 is connected to the subcards in slots 3, 4, 5, and 6; the NPUI-20
in slot 8 is connected to the subcards in slots 9, 10, 11, and 12.
The NPUI-20 consists of the following units:
- Control and management unit
Through the GE channel connecting the MPU and the NPUI-20, the MPU manages the
LPUs and subcards and transmits routing protocol data.
- Data forwarding unit
Working as the forwarding core of the system, the NPUI-20 is connected to all subcards
through data channels.
Each NPUI-20 provides two 10G Ethernet optical interfaces, supports WAN and LAN
modes, and can be installed with XFP optical modules.
3.2 NE40E-X1
3.2.1 Chassis
3.2.2 Heat Dissipation System
3.2.3 Power Supply System
3.2.4 Introduction to the Board Cage
3.2.5 MPU
3.2.6 NPUI-20
3.2.1 Chassis
The NE40E-X1 measures 442 mm x 220 mm x 132 mm (W x D x H), and can be mounted in an
N63E cabinet, a standard 19-inch cabinet, or a 23-inch North American open rack.
Figure 3-4 illustrates the appearance and components of the NE40E-X1.
The NE40E-X1 supports six fan modules working in N+1 mode. In this mode, the NE40E-X1
operates properly even if a fan module fails.
Parameter Value
Weight: 1.1 kg
Maximum power consumption: 120 W
Maximum wind pressure: 477.2 Pa
Maximum air volume: 64.4 CFM
Noise: 64.3 dB
Table 3-7 Technical parameters of the DC power supply module on the NE40E-X1
Item Parameter
(Figure: slot layout of the NE40E-X1 — slot 1: NPU; slots 2 to 5: PIC; slots 6 and 7: MPU; slots 8 and 9: PWR; slot 10: FAN.)
Low-speed subcards refer to the subcards whose single-port rate is lower than 1 Gbit/s; high-speed
subcards refer to the subcards whose single-port rate is higher than or equal to 1 Gbit/s.
3.2.5 MPU
The NE40E-X1 can work with a single MPU or two MPUs in backup mode.
When the NE40E-X1 is equipped with two MPUs, the master MPU works in the active state
and the slave MPU is in the standby state. You cannot access the management interface of the
slave MPU, or configure commands on the console or the AUX interface of the slave MPU.
The slave MPU exchanges information (including Heartbeat messages and backup data) only
with the master MPU. Data consistency between the master and slave MPUs is ensured
through high-reliability mechanisms such as batch backup and real-time backup. After a
master-slave switchover, the slave MPU immediately takes over from the master MPU. The default
master MPU is configurable. During startup, the MPU that you specify wins the
competition and becomes the master MPU.
MPUs support two switchover modes: failover and manual switchover. The failover is
triggered by serious faults or resetting of the master MPU. The manual switchover is triggered
by commands run on the console interface.
The MPU integrates multiple functional units. By integrating the system control and
management unit, system switching unit, clock unit, and management and maintenance unit,
the MPU provides the functions of the control plane, switching plane, and maintenance plane.
The function and hardware implementation of each integrated part are separated from each
other. The following describes the function and hardware implementation of the MPU.
- System control and management unit
The MPU is mainly responsible for processing routing protocols. In addition, the MPU
broadcasts and filters routing packets, downloads routing policies from the policy server,
manages the NPUI-20s, and communicates with the NPUI-20s.
The MPU implements outband communication between boards. The MPU manages and
carries out communication between the NPUI-20s and slave MPU through the outband
management bus.
The MPU is also responsible for data configuration. The system configuration data,
booting files, upgrade software, and system logs are stored on the MPU. The CF card on
the MPU stores system files, configuration files, and logs, and does not support hot swap.
The MPU manages and maintains the device through management interfaces such as the
serial interface and the network interface.
- System clock unit
The system clock unit of the MPU provides LPUs with reliable and synchronous SDH
clock signals.
The MPUs of the NE40E-X1 support the clock that complies with IEEE 1588v2.
- System maintenance unit
The system maintenance unit of the MPU collects monitoring information, remotely or
locally tests system units, or performs in-service upgrade of system units.
Through the Monitorbus, the MPU collects operation data periodically. The MPU
generates control information, for example, to detect board presence and adjust the
fan speed. Through the load bus, the MPU tests or performs in-service upgrades of system units from
the far end or the near end.
The MPU works in 1+1 hot backup mode, improving the system reliability.
3.2.6 NPUI-20
The NPUI-20 has bi-directional 20 Gbit/s forwarding capability. All subcards exchange data
through the NPUI-20s. Each NPUI-20 provides two 10G Ethernet optical interfaces, supports
WAN and LAN modes, and can be installed with XFP optical modules.
The NE40E-X2 can be equipped with two NPUI-20s, working in back-to-back mode. In this
mode, the NPUI-20 in slot 7 is connected to the subcards in slots 3, 4, 5, and 6; the NPUI-20
in slot 8 is connected to the subcards in slots 9, 10, 11, and 12.
The NE40E-X1 can be equipped with one NPUI-20.
The NPUI-20 consists of the following units:
- Control and management unit
Through the GE channel connecting the MPU and the NPUI-20, the MPU manages the
LPUs and subcards and transmits routing protocol data.
- Data forwarding unit
Working as the forwarding core of the system, the NPUI-20 is connected to all subcards
through data channels.
Each NPUI-20 provides two 10G Ethernet optical interfaces, supports WAN and LAN
modes, and can be installed with XFP optical modules.
3.3 Subcard
The NE40E-X2 has eight slots for subcards. All these slots can be equipped with high-speed
subcards or low-speed subcards. Subcards are hot swappable and support automatic
configuration recovery.
The NE40E-X1 has four slots for subcards. All these slots can be equipped with high-speed
subcards or low-speed subcards.
4 Link Features
4.1.2 Eth-Trunk
Ethernet bundling is a technology that bundles multiple physical Ethernet interfaces into a
logical interface (Eth-Trunk) to increase bandwidth.
LACP (802.3ad)
The NE40E supports link aggregation in Link Aggregation Control Protocol (LACP) static
mode. Link aggregation in static LACP mode is in contrast with port bundling in manual mode.
Port bundling in manual mode requires neither LACP nor the exchange of protocol packets;
the ISP alone decides the bundling of ports. Link aggregation in static LACP mode relies on
LACP and automatically maintains the port status by exchanging protocol packets. The ISP,
however, still needs to set up the aggregation group and add member links. LACP cannot change
the configuration information.
The NE40E supports LACP that conforms to IEEE 802.3ad. Administrators can create an
Eth-Trunk, add member ports to the Eth-Trunk, and enable LACP on the Eth-Trunk. The
NE40E negotiates with the peer device to determine the interfaces for data forwarding by
exchanging LACP protocol packets. That is, they negotiate to determine whether the
outbound interfaces are in the Selected or Standby state.
LACP maintains the link status based on the port status. LACP adjusts or disables link
aggregation in the case of aggregation changes.
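The following Python sketch is a simplified model, not the VRP algorithm, of how member ports of an Eth-Trunk running LACP might be marked Selected or Standby: ports that are up are ranked by port priority and port number, and only a limited number of them carry traffic. The priorities, port names, and the max-active-link limit are assumptions.

# Simplified model of choosing Selected and Standby member ports in an
# LACP Eth-Trunk. Priorities and the max-active-link limit are assumptions.
def select_ports(members, max_active):
    """members: list of dicts with 'name', 'priority', 'number', 'up'.
    A lower priority value is preferred; the port number breaks ties."""
    candidates = [m for m in members if m["up"]]
    candidates.sort(key=lambda m: (m["priority"], m["number"]))
    selected = {m["name"] for m in candidates[:max_active]}
    return {m["name"]: ("Selected" if m["name"] in selected else "Standby")
            for m in members}

members = [
    {"name": "GE1/0/1", "priority": 100, "number": 1, "up": True},
    {"name": "GE1/0/2", "priority": 100, "number": 2, "up": True},
    {"name": "GE1/0/3", "priority": 200, "number": 3, "up": True},
]
print(select_ports(members, max_active=2))
# -> {'GE1/0/1': 'Selected', 'GE1/0/2': 'Selected', 'GE1/0/3': 'Standby'}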
4.2.1 Channelization
A CPOS interface is a channelized POS interface. In channelization, multiple independent
channels of data are transmitted over one optical fiber as low-speed tributary signals within an
STM-N frame. Each channel has its own bandwidth, its own start and end points,
and its own monitoring policy. Channelization makes full use of bandwidth when
transmitting multiple channels of low-speed signals.
A 155-Mbit/s CPOS interface can be channelized into 63 E1 channels.
After being channelized from the CPOS interface, the E1 interface can transparently transmit
unstructured TDM services over the MPLS PW, which complies with the SAToP protocol.
After being channelized from the CPOS interface, the E1 interface can transparently transmit
structured TDM services over the MPLS PW, which complies with the CESoPSN protocol.
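As a quick sanity check on the figures above (a worked example, not text from this manual), the following lines confirm that 63 E1 channels fit within the line rate of a 155-Mbit/s (STM-1) CPOS interface; the remaining capacity is consumed by SDH overhead.

E1_RATE_MBPS = 2.048          # rate of one E1 channel
STM1_LINE_RATE_MBPS = 155.52  # STM-1 line rate of a CPOS interface
E1_CHANNELS = 63

aggregate = E1_CHANNELS * E1_RATE_MBPS
print(f"{E1_CHANNELS} x {E1_RATE_MBPS} Mbit/s = {aggregate:.3f} Mbit/s")
print(f"fits within the {STM1_LINE_RATE_MBPS} Mbit/s STM-1 frame:",
      aggregate < STM1_LINE_RATE_MBPS)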
4.2.2 PPP/TDM
The NE40E provides CPOS interfaces at a rate of 155 Mbit/s. On the link layer, CPOS
supports the following protocols:
- PPP
- TDM
PPP on CPOS interfaces supports the following:
- LCP
- IPCP
- MPLSCP
- MP
- PAP
- CHAP
Services are transmitted over Plesiochronous Digital Hierarchy (PDH) links or Synchronous Digital
Hierarchy (SDH) links through TDM. Generally, PDH and SDH services are called TDM
services.
Figure 4-2 Inverse multiplexing and de-multiplexing of ATM cells in IMA groups
The IMA interface periodically sends certain special cells. The information contained in these
cells is used by the receiving end of IMA virtual links to recreate ATM cell flows. Before
recreating ATM cell flows, the receiving end should first adjust the link differential delay and
remove the Cell Delay Variation (CDV) introduced by the control cells. These types of cells are
called IMA Control Protocol (ICP) cells, and are used to define IMA frames.
Upon sending, the sending end should keep alignment with IMA frames on all links so that it
can detect the differential delay between links according to the arrival time of IMA frames on
different links and perform adjustment thereafter.
The cells are consecutively sent at the sending end. If no cells on the ATM layer can be sent
between ICPs of an IMA frame, the IMA sending end keeps consecutive cell flows on the
physical layer by adding filler cells, which are later discarded at the IMA receiving end.
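The following Python sketch is a simplified illustration of the inverse multiplexing behaviour described above: ATM-layer cells are distributed round-robin over the member links of an IMA group, each link starts its IMA frame with an ICP cell, and filler cells keep the physical-layer cell stream continuous when no data cell is waiting. The frame length, link names, and cell labels are assumptions.

def ima_transmit(cells, links, frame_len=8):
    """Distribute ATM-layer cells round-robin over the member links of an
    IMA group for one IMA frame per link. The first cell on each link is an
    ICP (IMA Control Protocol) cell used for frame alignment; when no data
    cell is waiting, a filler cell keeps the physical-layer cell stream
    continuous. Filler cells are discarded at the receiving end."""
    pending = list(cells)
    out = {link: ["ICP"] for link in links}       # ICP delimits the IMA frame
    for _ in range(frame_len - 1):                # remaining cell slots per link
        for link in links:
            out[link].append(pending.pop(0) if pending else "FILLER")
    return out

tx = ima_transmit([f"cell{i}" for i in range(9)], links=["E1-1", "E1-2"])
for link, stream in tx.items():
    print(link, stream)
# E1-1 carries cell0, cell2, ... and E1-2 carries cell1, cell3, ...; the tail
# of each stream is padded with filler cells.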
4.6 E-Trunk
An Enhanced Trunk (E-Trunk) is an extension of a trunk. In an E-Trunk, a trunk is divided into
two sub-groups that connect to two routers respectively, rather than connecting to multiple LPUs
on one router. These two routers are PE devices that back up each other. The E-Trunk
provides reliability for Ethernet links, and also provides reliability for network connections by
connecting to two systems.
(Figure 4-3: E-Trunk — a CPE connects through Trunk 1 to two PE devices; the sub-group to PE2 is in the standby state.)
As shown in Figure 4-3, LACP is used to manage trunk links, which ensures that one
sub-group connected to one PE device is in the Active state and the other is in the Standby
state. In this manner, no loop occurs. At the same time, the E-Trunk control protocol is
running between the two PE devices. The E-Trunk control protocol is IP based, and is run
between two devices that back up each other to synchronize the trunk status. When one PE
device fails, the other PE can still access the Customer Premises Equipment (CPE). The CPE,
however, is still configured with a standard trunk, and does not have to support the E-Trunk.
Therefore, the E-Trunk configured on the two PE devices is transparent to the CPE.
4.7 APS
Automatic Protection Switching (APS) has two protection modes, namely, 1+1 and 1:N.
When N is 1, the protection mode is 1:1.
- In 1+1 mode, a protection interface is paired with each working interface. Normally, the
receiver processes only the traffic received on the working link. When the working
link is faulty, the receiver switches the traffic to the protection link, which is called a
unidirectional switchover.
- In 1:1 mode, the working link transmits high-level traffic and the protection link transmits
nothing to the receiver. When the working link is faulty, the sender switches the
high-level traffic to the protection link and the receiver obtains the high-level traffic from
the protection link. This is called a bidirectional switchover (see the sketch after this list).
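The following Python sketch is a toy model of the 1:1 bidirectional behaviour described above: traffic normally uses the working link and the protection link carries nothing; when the working link fails, both ends move the traffic to the protection link. Class and method names are invented, and revertive behaviour after the fault clears is assumed.

class Aps11Group:
    """Toy model of a 1:1 bidirectional APS group (illustrative only)."""
    def __init__(self):
        self.working_ok = True
        self.active = "working"          # link currently carrying traffic

    def report_fault(self, link):
        if link == "working":
            self.working_ok = False
            # Bidirectional switchover: sender and receiver both move to
            # the protection link.
            self.active = "protection"

    def clear_fault(self, link):
        if link == "working":
            self.working_ok = True
            self.active = "working"      # revertive behaviour assumed

    def send(self, traffic):
        return f"{traffic} -> {self.active} link"

group = Aps11Group()
print(group.send("high-level traffic"))   # working link
group.report_fault("working")
print(group.send("high-level traffic"))   # protection link after switchover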
At present, the NE40E supports the following APS features:
- 1+1 unidirectional mode and 1:1 bidirectional mode.
- Manual switching of APS groups.
- Forcible switching of APS groups.
- Locking of APS groups.
- APS implemented on interfaces.
- APS implemented on the same SIC or inter-SIC APS.
- E-APS.
- Adding the working and protection interfaces of an APS group to a trunk and configuring
services on the trunk.
5 Service Features
VLAN Trunk
A trunk is a P2P link between two routers. The interfaces on the connected routers are called
trunk interfaces. One VLAN trunk can transmit data flows from different VLANs, allowing
the VLANs to span the interfaces of multiple routers. The NE40E can dynamically add, delete,
or modify the VLANs of a VLAN trunk to maintain the consistency of VLAN configurations
in the entire network. The NE40E can also interwork with non-Huawei devices.
VLANIF Interfaces
The NE40E supports VLANIF interfaces. You can assign IP addresses to VLANIF interfaces
and bind VLANIF interfaces to VPNs. This implements Layer 3 access of VLANIF interfaces.
You can also bind VSIs to VLANIF interfaces to implement the VPLS access.
VLAN Aggregation
Inter-VLAN routing is involved in the communication between VLANs. If each VLANIF
interface is assigned an IP address, IP address resources are quickly used up.
You can aggregate a group of VLANs into a super-VLAN. The VLANs in the super-VLAN are
called branch VLANs. A super-VLAN is associated with an interface at the IP layer. In
addition, all branch VLANs in the super-VLAN use IP addresses in the same network
segment, which improves the utilization of IP addresses.
Ethernet Sub-interfaces
The NE40E supports the configuration of sub-interfaces for a switched Ethernet interface.
You can configure Layer 3 services on the sub-interfaces and Layer 2 services on the main
interface. In this manner, the switched Ethernet interfaces can support both Layer 2 and Layer
3 services.
Common Ethernet Sub-interfaces
A common Ethernet sub-interface, which can belong to only one VLAN, has the following
functions:
- Terminating enterprise services
- Supporting complete routing protocols
- Supporting MPLS forwarding
Super-VLAN Sub-interfaces
A super-VLAN sub-interface, which can belong to multiple VLANs, is used to terminate
individual users' services. It supports the following features to ensure security:
- DHCP relay
- DHCP binding
- URPF
- ACLs
5.1.3 QinQ
The QinQ protocol is a Layer 2 tunneling protocol based on the IEEE 802.1Q technology. The
QinQ technology expands the VLAN space by adding a new tag to a packet that is already
tagged through IEEE 802.1Q. The private VLAN packets are thus transparently transmitted
across the ISP network, which functions the same as a Layer 2 VPN. The packets transmitted
in the public network carry double 802.1Q tags, one for the public network and the other for
the private network. This is called 802.1Q-in-802.1Q, or QinQ for short.
The ISP network only provides one VLAN ID for different VLANs from the same user
network. This saves VLAN IDs of an ISP. Meanwhile, QinQ provides a Layer 2 VPN solution
that is easy to implement for LANs or small-scale MANs.
The QinQ technology can be applied to multiple services in Metro Ethernet solutions. QinQ
has the following features:
- Packets from different users in the same VLAN are not transmitted transparently.
- Private networks are separated from the public network.
- The ISP's VLAN IDs are conserved to the greatest extent.
Although it is not a formal standard, QinQ is widely applied by carriers because it is easy to
implement. The introduction of selective QinQ (VLAN stacking) makes QinQ even more popular
among carriers. With the development of the Metro Ethernet, device vendors have put
forward their own Metro Ethernet solutions, and the QinQ technology plays an important role in these
solutions because of its simplicity and flexibility.
The NE40E provides rich QinQ features, which satisfy diverse networking requirements.
Interface-based QinQ
Figure 5-1 shows the networking diagram of applying interface-based QinQ. A user
configures interface-based QinQ on the router. When the user's packets, carrying the user's
VLAN tag, arrive at the router, the router treats the packets as untagged packets and
adds an ISP VLAN tag outside the existing VLAN tag. The packets then travel
through the VLAN tunnel of the ISP to the remote end, where the ISP VLAN tag is
stripped from the packets before they reach the remote user.
(Figure 5-1: Interface-based QinQ — packets from user VLANs 100 and 200 are carried across the ISP network with outer VLAN tag 300 and delivered to the remote sites with their original VLAN tags.)
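At the byte level, interface-based QinQ amounts to pushing an ISP outer 802.1Q tag in front of the user's existing tag, as described above. The following Python sketch shows this operation on an example frame; the MAC addresses, VLAN IDs, and the choice of TPID 0x8100 are assumptions for illustration only.

import struct

def push_outer_tag(frame: bytes, isp_vlan: int, tpid: int = 0x8100) -> bytes:
    """Insert an outer 802.1Q tag after the destination and source MAC
    addresses of an already tagged frame (interface-based QinQ).
    TPID 0x8100 is used here; 0x88a8 or 0x9100 may be used instead."""
    outer_tag = struct.pack("!HH", tpid, isp_vlan & 0x0FFF)  # PCP/DEI assumed 0
    return frame[:12] + outer_tag + frame[12:]

# Example frame: dst MAC, src MAC, inner 802.1Q tag for user VLAN 100, payload.
dst = bytes.fromhex("001122334455")
src = bytes.fromhex("66778899aabb")
inner_tag = struct.pack("!HH", 0x8100, 100)
frame = dst + src + inner_tag + b"\x08\x00" + b"payload"

double_tagged = push_outer_tag(frame, isp_vlan=300)
print(double_tagged.hex())
# The frame now carries two tags: outer VLAN 300 (ISP) and inner VLAN 100 (user).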
VLAN-based QinQ
VLAN-based QinQ is also called selective QinQ. Figure 5-2 shows the networking diagram
of applying selective QinQ. With the development of services such as broadband access, VoIP,
and IPTV services, ISPs may want to assign inner VLAN tags to different services. For
example:
- VLANs 1000-1999: broadband access services
- VLANs 2000-2999: VoIP services
- VLANs 3000-3999: IPTV services
(Figure 5-2: Selective QinQ — users access the DSLAM through PVCs such as PVC1001, PVC2001, and PVC3001, and the network is managed by the iManager N2000.)
Users access the DSLAM through multiple PVCs. The DSLAM maps PVC IDs to VLAN
IDs. You can enable selective QinQ on the gateway to apply an outer VLAN tag with
VLAN ID 100 to broadband access services, an outer VLAN tag with VLAN ID 200
to VoIP services, and an outer VLAN tag with VLAN ID 300 to IPTV services. This
breaks the limit of 4094 VLAN IDs for one ISP network. In addition, services are distributed,
which facilitates the ISP's service management.
Services are distributed in one of the following manners (see the sketch after this list):
- Adding different outer VLAN tags based on VLAN ranges. That is, packets with a single
tag are changed to packets with double tags. In this manner, services from different
terminals are distributed.
- Adding different outer VLAN tags based on different protocol numbers. That is, a tag is
added to protocol packets. In this manner, services from different terminals are
distributed.
- Changing outer VLAN tags based on the range of inner VLAN tags. That is, a single tag
is replaced with another tag. In this manner, services of different user types are
distributed. This is also called VLAN mapping.
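The following Python sketch illustrates the first manner listed above: choosing an outer (ISP) VLAN tag from the inner (user) VLAN ID of a singly tagged packet. The VLAN ranges and outer VLAN IDs reuse the broadband/VoIP/IPTV example above and are illustrative only.

# Assumed mapping from inner (user) VLAN ranges to outer (ISP) VLAN IDs,
# matching the broadband/VoIP/IPTV example above.
OUTER_TAG_RULES = [
    (range(1000, 2000), 100),   # broadband access services
    (range(2000, 3000), 200),   # VoIP services
    (range(3000, 4000), 300),   # IPTV services
]

def outer_vlan_for(inner_vlan):
    """Return the outer VLAN ID to stack onto a frame with this inner VLAN,
    or None if no rule matches (selective QinQ)."""
    for vlan_range, outer in OUTER_TAG_RULES:
        if inner_vlan in vlan_range:
            return outer
    return None

for inner in (1500, 2500, 3500, 50):
    print(inner, "->", outer_vlan_for(inner))
# 1500 -> 100, 2500 -> 200, 3500 -> 300, 50 -> None (no outer tag added)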
VLAN-based QinQ may serve as one of the VPLS modes to allow packets of private VLANs
to be transmitted transparently through the backbone network. It may also serve as one of the
VLAN Stacking
The early QinQ technology is used on switches on Layer 2 networks. With VLAN stacking,
packets are forwarded at Layer 2 by means of the outer VLAN tag. The outer VLAN usually
refers to the VLAN to which an ISP network belongs. VLAN stacking is usually applied on
switched interfaces.
The sub-interfaces for VLAN stacking are deployed on PEs. A sub-interface identifies a user
VLAN and then performs VLAN stacking to user's Layer 2 packets. After that, packets are
forwarded at Layer 2 by means of the outer VLAN tag.
With a sub-interface for VLAN stacking, packets from a batch of user VLANs can be
transparently transmitted. Packets enter an L2VPN based on their outer VLAN tag after
VLAN stacking is implemented. The outer VLAN tag is transparent to the ISP. User packets
from different VLANs can thus be transparently transmitted.
VLAN stacking supports the following:
- Access to the VPLS through the sub-interfaces for VLAN stacking
- Access to the VLL/PWE3 through the sub-interfaces for VLAN stacking
QinQ Termination
Sub-interfaces for QinQ VLAN tag termination refer to the sub-interfaces that terminate the
double VLAN tags of users. The difference between the sub-interfaces for QinQ VLAN tag
termination and the sub-interfaces for VLAN stacking is as follows: For the sub-interfaces for
QinQ VLAN tag termination, a PE removes the double VLAN tags of user packets when the
packets enter the ISP network.
Double VLAN tags for users have specific meanings. For example, the outer VLAN tag
specifies a service and the inner VLAN tag specifies a user. Sub-interfaces for QinQ VLAN
tag termination access the user and identify the service by terminating double VLAN tags.
Sub-interfaces for QinQ VLAN tag termination are similar to common VLAN sub-interfaces.
In addition, sub-interfaces for QinQ VLAN tag termination are used to terminate double
VLAN tags and provide the following functions:
- IP forwarding
- L3VPN/PWE3/VLL/VPLS access
- Proxy ARP
- Unicast routing protocols
- VRRP
- DHCP server and DHCP relay
Sub-interfaces for QinQ VLAN tag termination terminate double VLAN tags in the following
manners:
- Exact termination
Double VLAN tags of specified VLAN IDs are terminated.
- Fuzzy termination
Double VLAN tags of VLAN IDs in a specified range are terminated.
In IEEE 802.1ad, the value of the EType field in the TPID is defined as 0x88a8.
Figure 5-3 Compatibility of the EType field in the TPID in the outer tag of QinQ packets
(Figure 5-3: Router A and Switch A connect to the ME60 and Router C across an IP/MPLS core; outer TPID EType values 0x9100 and 0x8100 are used on different links.)
As shown in Figure 5-3, the inbound interface on the router needs to identify the EType value
0x9100 in the outer TPID. The EType value of the outer TPID (for example, 0x9100 or 0x8100)
can be configured to match the value used by another manufacturer's devices, so that the
devices at both ends use the same EType value in the outer TPID. This ensures
communication between different manufacturers' devices.
(Figure: multicast over QinQ — PE1 and PE2 connect user VLANs 2 and 3 to the Internet/intranet.)
Regardless of whether multicast data packets or multicast protocol packets are received, they are not
encapsulated by QinQ. Instead, the packets are transmitted according to the outer P-VLAN
ID. In IGMP snooping, only the mapping between the P-VLAN ID and the user host is maintained. In
forwarding, the system searches for the member hosts of the mapped multicast group
according to the P-VLAN ID and replaces the P-VLAN tag with the C-VLAN tag in the
packet before forwarding.
Figure 5-5 Network diagram of the VLAN swapping feature based on QinQ
(The figure shows residential gateways (RGs) connecting through a UPE and a PE-AGG to the Service-POP across the metro Ethernet network; customer VLANs are swapped to service VLANs along the path.)
RRPP Domain
(Figure: an RRPP domain with an RRPP major ring and RRPP Sub-Ring 1 — ME60B is the master node; SwitchA, SwitchB, and Router A serve as edge and transit nodes.)
An RRPP domain comprises a group of switches that are mutually connected and
configured with the same domain ID and control VLAN. An RRPP domain consists of the following
elements: the RRPP major ring and sub-rings, the control VLAN, the master node, transit
nodes, common ports and edge ports, and primary and secondary ports.
Polling Mechanism
The polling mechanism is used by the master node on an RRPP ring to detect the network
status.
The master node periodically sends Hello packets from its primary port. The packets are then
transmitted through all transit nodes on the ring. If the secondary port on the master node can
receive the Hello packets, the ring network is complete. If the Hello packets are not received
within a specified period, a link fault occurs on the ring network.
When the secondary port on the master node in the Failed state receives Hello packets from
its primary port, the master node immediately changes to the Complete state, blocks the
secondary port, and refreshes the Forwarding Database (FDB).
In addition, the master node sends packets from the primary port to instruct all transit nodes to
unblock temporarily blocked ports and refresh FDBs.
After the transit nodes refresh their FDBs, the data stream is switched back to the normal link.
If the faulty link is recovered, the port of the transit node changes to the Up state. In this case,
the transit node temporarily blocks the recovered port. The Hello packets sent by the master
node can pass through the temporarily blocked port.
When the secondary port on the master node receives the Hello packet from the primary port,
the master node considers that the ring recovers to the healthy status. The master node blocks
the secondary port and sends packets to notify all transit nodes to unblock temporarily
blocked ports and refresh FDBs.
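The polling mechanism can be summarized with a toy model of the master node, sketched below in Python. A Hello packet arriving on the secondary port keeps (or returns) the ring in the Complete state with the secondary port blocked; a Hello timeout moves it to the Failed state and unblocks the secondary port. The timer value, method names, and the notification step are assumptions, not Huawei's implementation.

import time

class RrppMaster:
    """Toy model of the RRPP master node polling mechanism (illustrative)."""
    def __init__(self, hello_timeout=3.0):
        self.state = "Complete"
        self.secondary_blocked = True
        self.hello_timeout = hello_timeout
        self.last_hello_seen = time.monotonic()

    def hello_received_on_secondary(self):
        """A Hello sent from the primary port came back: the ring is intact."""
        self.last_hello_seen = time.monotonic()
        if self.state == "Failed":
            self.state = "Complete"
            self.secondary_blocked = True      # re-block to avoid a loop
            self.flush_fdb_and_notify()

    def tick(self):
        """Periodic check: no Hello within the timeout means a link fault."""
        if time.monotonic() - self.last_hello_seen > self.hello_timeout:
            if self.state == "Complete":
                self.state = "Failed"
                self.secondary_blocked = False  # open the backup path
                self.flush_fdb_and_notify()

    def flush_fdb_and_notify(self):
        print(f"state={self.state}, secondary_blocked={self.secondary_blocked};"
              " FDB flushed, transit nodes notified")

master = RrppMaster(hello_timeout=0.1)
master.tick()                         # ring still Complete
time.sleep(0.2); master.tick()        # Hello missing -> Failed, secondary unblocked
master.hello_received_on_secondary()  # ring restored -> Complete, blocked again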
5.1.5 RSTP/MSTP
The Rapid Spanning Tree Protocol (RSTP) is an enhancement of the Spanning Tree Protocol
(STP). RSTP simplifies the processing of the state machine, blocks some redundant paths
with specific algorithms, and reconstructs a looped network into a loop-free network. In
this manner, packets are prevented from proliferating and looping infinitely. Compared with
STP, RSTP speeds up Layer 2 loop convergence. In a Layer 2 network, only one Shortest Path
Tree (SPT) is generated.
The Multiple Spanning Tree Protocol (MSTP) is the multi-instance RSTP. MSTP supports the
running of STP based on one or more VLANs. In a Layer 2 network, multiple SPTs can be
generated.
- VLAN-based BPDUs
- QinQ-based BPDUs
5.2 IP Features
5.2.1 IPv4/IPv6 Dual Stack
5.2.2 IPv4 Features
5.2.3 IPv6 Features
(Figure 5-7: Structure of the IPv4/IPv6 dual stack — applications run over TCP and UDP, which run over both IPv4 and IPv6 on top of the link layer.)
and the DSLAMs do not need to be upgraded on a large scale. This cuts the cost of initial
investment in IPTV services.
- The BRAS interface on the NE40E provides multicast authorization by defining a
channel list containing the authorized multicast channels for users. The BRAS interface then only
replicates authorized multicast channels. This simplifies the configuration of multicast
authorization and cuts the operating expense (OpEx). In addition, through multicast
authorization, the BRAS interface provides scheduling of high-priority multicast traffic,
which ensures normal transmission of multicast traffic when the network is congested.
- The BRAS interface on the NE40E provides virtual multicast scheduling to ensure the
unified bandwidth scheduling of multicast and unicast traffic in a dual-edge architecture.
Virtual multicast scheduling can effectively prevent packet loss on DSLAMs when
multicast traffic bursts, and improves the user experience of IPTV services. In addition, the
NE40E provides full multicast scheduling, which improves IPTV service quality.
- The BRAS interface on the NE40E supports shaping, priority-based scheduling,
HQoS scheduling, and multicast replication performed by the ASIC chip. In this manner,
the delay and jitter of multicast traffic are reduced and multicast traffic can meet the
QoS requirements of IPTV applications.
IGMP Snooping
The NE40E supports IGMP snooping for Layer 2, Layer 3, and QinQ interfaces, VPLS PW,
STP, and RRPP.
IGMP snooping listens to the IGMP messages between routers and hosts and sets up the Layer
2 forwarding table for multicast data packets. In this manner, IGMP snooping controls and
manages the forwarding of multicast data packets to carry out Layer 2 multicast.
IGMP snooping aims to control the flooding of multicast flows, forward packets only as required,
and save network resources. If an interface has not requested a multicast group by sending
IGMP Report messages, the device does not send the multicast flow to that
interface.
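A minimal sketch of the idea, with invented port and group names, is shown below in Python: IGMP Report and Leave messages heard on ports populate a Layer 2 multicast forwarding table, and a multicast frame is then replicated only to the member ports of its group instead of being flooded.

from collections import defaultdict

# Layer 2 multicast forwarding table built by snooping IGMP messages:
# (VLAN, group address) -> set of member ports.
l2_mcast_table = defaultdict(set)

def snoop_igmp(vlan, group, port, msg_type):
    """Update the table from an IGMP message heard on a port."""
    if msg_type == "report":
        l2_mcast_table[(vlan, group)].add(port)
    elif msg_type == "leave":
        l2_mcast_table[(vlan, group)].discard(port)

def forward_multicast(vlan, group, in_port):
    """Return the ports a multicast frame is replicated to (never the
    incoming port); without snooping it would be flooded to all ports."""
    return sorted(l2_mcast_table[(vlan, group)] - {in_port})

snoop_igmp(10, "225.1.1.1", "GE1/0/1", "report")
snoop_igmp(10, "225.1.1.1", "GE1/0/2", "report")
snoop_igmp(10, "225.1.1.1", "GE1/0/2", "leave")
print(forward_multicast(10, "225.1.1.1", in_port="GE1/0/9"))  # -> ['GE1/0/1']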
Multicast VLAN
A multicast VLAN refers to the VLAN that converges multicast flows. When users need
certain multicast flows, they send a request to the multicast VLAN. Then, the multicast
VLAN replicates the multicast packets to different user VLANs. This implements the function
of multicast across VLANs.
The NE40E forwards multicast packets through the multicast VLAN and replicates the
packets based on the multicast routing entries. Then, the NE40E sends these packets to the
VLANs of different users. Using the multicast VLAN, the NE40E can converge the multicast
flows of different user VLANs to one or more specified VLANs.
Multicast across VLANs enables the NE40E to send unicast and multicast packets across
different VLANs. This facilitates the management and control of multicast flows. This can
also save bandwidth resources and improve network security.
Multicast VPN
With wide applications of Virtual Private Network (VPN), the requirements of users for
operating multicast services over VPNs are increasingly stringent. The NE40E adopts the Multicast
Domain (MD) solution to implement multicast transmission over VPNs.
For details, see Section "5.5 VPN Features."
Multicast CAC
The NE40E supports multicast Call Admission Control (CAC). When multicast CAC rules are
configured, the number of multicast groups and bandwidth are restricted for IGMP snooping
on interfaces or the entire system.
Multicast CAC is part of the IPTV multicast solution. With the development of IPTV, the
number of program channels is increasing rapidly. The bandwidth of the access and convergence
network no longer satisfies the bandwidth demands of all users, so the previous static management
approach is outdated. Instead, the number of users allowed to access each link must be set
on the convergence network.
Multicast CAC restrains the generation of multicast forwarding entries. When the set
threshold is reached, no more forwarding entries are generated. This ensures the processing
capacity of the device and controls link bandwidth.
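The admission decision can be sketched as follows in Python; the per-interface limits and per-channel bandwidth figures are assumptions used only to illustrate how multicast CAC refuses new forwarding entries once a configured group count or bandwidth threshold would be exceeded.

class MulticastCac:
    """Toy multicast CAC check per interface (limits are assumptions)."""
    def __init__(self, max_groups, max_bandwidth_mbps):
        self.max_groups = max_groups
        self.max_bandwidth = max_bandwidth_mbps
        self.groups = {}                     # group -> bandwidth in Mbit/s

    def admit(self, group, bandwidth_mbps):
        """Create a forwarding entry only if both CAC limits are respected."""
        if group in self.groups:
            return True                      # already admitted
        if len(self.groups) + 1 > self.max_groups:
            return False                     # group-count limit reached
        if sum(self.groups.values()) + bandwidth_mbps > self.max_bandwidth:
            return False                     # bandwidth limit reached
        self.groups[group] = bandwidth_mbps
        return True

cac = MulticastCac(max_groups=2, max_bandwidth_mbps=12)
print(cac.admit("225.1.1.1", 5))   # True
print(cac.admit("225.1.1.2", 5))   # True
print(cac.admit("225.1.1.3", 5))   # False: the limits would be exceeded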
5.4 MPLS
5.4.1 Basic Functions
5.4.2 MPLS TE
5.4.3 MPLS OAM
5.4.2 MPLS TE
Network congestion affects the performance of the backbone network. The congestion may be
caused by resource insufficiency or unbalanced load of network resources. Traffic
Engineering (TE) is introduced to address the congestion caused by unbalanced load of
network resources.
The MPLS TE technology integrates the MPLS technology with traffic engineering. It can
reserve resources by setting up LSP tunnels along a specified path in an attempt to prevent
network congestion and balance network traffic.
In the case of resource scarcity, MPLS TE can preempt bandwidth resources of the LSPs with
low priorities. This meets the demands of the LSPs with large bandwidth or for important
services. In addition, when an LSP fails or a node is congested, MPLS TE can protect the
network communications through the backup path and the fast reroute (FRR) function.
MPLS TE provides the following functions:
- Processing of static LSPs
MPLS TE creates and deletes static LSPs, which require bandwidth but are manually
configured.
- Processing of Constraint-based Routed Label Switched Paths (CR-LSPs)
MPLS TE processes various types of CR-LSPs.
The processing of static LSPs is simpler. CR-LSPs are classified into the types described in the
following sections.
RSVP-TE
RSVP is designed for the Integrated Service (IntServ) model and used on each node of a path
for resource reservation.
To put it simply, RSVP has the following characteristics:
- It is unidirectional.
- It is receiver-oriented: the receiver initiates a request for resource reservation and maintains
the resource reservation information.
- It uses a soft state mechanism to maintain the resource reservation information.
RSVP, after being extended, can support MPLS label distribution. It carries resource
reservation information when transmitting label-binding messages. The extended RSVP is
called RSVP-TE, used as a signaling protocol to establish LSPs in MPLS TE.
Auto Route
In auto routes, LSPs participate in IGP route calculation as logical links. The tunnel interface
is taken as the outbound interface of packets. In this manner, LSPs are considered as P2P links.
The following describes two types of auto routes:
z IGP shortcut: The LSP is not advertised to the neighboring routers, so other routers
cannot use this LSP.
z Forwarding adjacency: The LSP is advertised to the neighboring routers, so other routers
can use this LSP.
Fast Reroute
FRR is a technology in MPLS TE to implement partial protection of the network. The
switching speed of FRR can reach 50 milliseconds. This minimizes data loss when the
network fails.
FRR is only a temporary protection method. When the protected LSP becomes normal or a
new LSP is established, the traffic is switched back to the original LSP or the newly
established LSP.
After an LSP is configured with FRR, traffic is switched to its protection link and the ingress
node of the LSP attempts to establish a new LSP when a link or a node on the LSP fails.
Auto FRR
In FRR, to protect a tunnel, you must configure a bypass tunnel and bind it to the tunnel to be
protected. When a link or a node goes Down, the data flow can then be automatically switched
to the bypass tunnel. The bypass LSP, however, must be configured manually; if it is not
configured, the protected LSP cannot be protected. Auto FRR solves this problem.
Auto FRR is an extension of MPLS TE FRR. Bypass LSPs can be automatically set up along
the LSP after you configure the attributes of bypass LSPs, global Auto FRR attributes, and
Auto FRR attributes of the interface. In addition, when the primary LSP changes, the original
bypass LSPs can be automatically deleted and new bypass LSPs are set up.
CR-LSP Backup
The LSP that is used to protect the primary LSP in the same tunnel is called the backup LSP.
When the ingress detects that the primary LSP is unavailable, it switches traffic to the backup
LSP. After the primary LSP recovers, traffic is switched back to the primary LSP. In this
manner, the traffic on the primary LSP is protected.
The NE40E supports the following methods of backup:
z Hot backup: The backup CR-LSP is established immediately after the primary CR-LSP
is established. When the primary CR-LSP fails, MPLS TE switches traffic immediately
to the backup CR-LSP.
z Ordinary backup: The backup CR-LSP is established when the primary CR-LSP fails.
LDP over TE
In existing networks, not all devices support MPLS TE. Only the devices in the core of the
network support TE and the devices at the network edge use LDP. The application of LDP
over TE is then put forward. The TE tunnel is considered as a hop of the entire LDP LSP.
LDP is widely used in MPLS VPNs. To prevent the congestion of VPN traffic on certain
nodes, you can configure LDP over TE.
(Figure 5-8: LDP over TE networking with routers R1 through R6; the IGP costs shown are 10 on the links along R2-R3-R5 and 20 and 10 on the links along R2-R4-R5.)
Figure 5-8 shows the MPLS VPN networking where LDP is used as the signaling protocol.
As PE routers, R1 and R6 discover that the link between R2 and R3 becomes congested after a
large number of users access the network, because the traffic between R1 and R6 must pass
through this link. The link between R2 and R4 is idle, but the LSP cannot use it because the
IGP cost of this link is high.
In this case, you can establish a TE tunnel passing through R4 between R2 and R5, and adjust
the metric of the IGP shortcut or forwarding adjacency. Thus, there are two routes carrying
out load balancing for R2:
z Route between physical interfaces connecting R2 and R3
z Route between TE tunnel interfaces connecting R2 and R5
In this manner, LDP establishes the LSPs for load balancing to allow traffic to go through the
idle link.
(Figure: MPLS OAM detection between the ingress LSR and the egress LSR using CV/FFD and BDI packets.)
VLL
Figure 5-10 shows the networking of a VLL supported by the NE40E.
(Figure 5-10: VLL networking over an MPLS network; PE and PE-ASBR routers connect VPN1 and VPN2 sites, support interworking, and support the inter-AS solutions VRF-to-VRF and multihop MP-EBGP.)
VPLS
Figure 5-11 shows the networking of VPLS. Several virtual switches (VSs) can be created on
a PE router. VSs on different PE routers form an L2VPN. LANs at the user end can access the
L2VPN through VSs. In this manner, users can expand their own LAN over the WAN. VPLS
can be taken as the VS across public networks. Like L3VPN, it establishes LSPs on public
networks for traffic transmission.
(Figure 5-11: VPLS networking; virtual switches VS1 and VS2 on PE routers connect user VLAN1 and VLAN2 over the public network.)
VPLS requires that users access the network through Ethernet links. It forwards packets
according to the VLAN ID. For communication with remote users, a Virtual Channel (VC)
that can traverse the public network is established between PE routers, and the VC is
associated with the VLAN ID. Users communicate with each other over the Layer 2 tunnel
through the VC. The VLAN ID is used to identify the users' VPN.
When establishing a VC, the PE router allocates double labels to the VC. The outer label is
the MPLS LSP label of the public network and is allocated by LDP or RSVP-TE. The inner
label is the VC label and is allocated after the negotiation between the remote LDP sessions
on loopback interfaces.
The NE40E supports the following networking models:
z QinQ VPLS
QinQ is a tunnel protocol based on IEEE 802.1Q. In QinQ, the VLAN tag of private
networks is encapsulated in the VLAN tag of public networks. The packets carry double
tags when being transmitted across the ISP's backbone network. This saves VC resources
and provides users with an L2VPN tunnel that is easy to implement.
z H-VPLS
VPLS requires that PE routers forward Ethernet frames through the full-mesh Ethernet
emulation circuit or Pseudo-Wire (PW). Therefore, all PE routers must be connected to
each other in the same VPLS. If there are N PEs in a VPLS network, the VPLS requires N x
(N - 1)/2 connections; for example, 10 PEs require 45 PW connections. As the number of PEs
increases, the number of VPLS connections grows with the square of N.
Hierarchical Virtual Private LAN Service (H-VPLS) is thus introduced to address the
full-mesh VPLS.
Figure 5-12 shows the H-VPLS model.
(Figure 5-12: H-VPLS model; CEs attach to the UPE over ACs, the UPE connects to an SPE over a PW, and the SPEs are fully meshed with PWs.)
z In a basic H-VPLS model, PEs can be divided into the following types:
− UPE
It is a convergence device that is directly connected to a CE. The UPE needs to be
connected to only one PE in a full-mesh VPLS network. The UPE supports routing
and MPLS encapsulation.
If a UPE is connected to multiple CEs and possesses the basic bridge function, frame
forwarding is performed only on the UPE. This reduces the burden on the SPE.
− SPE
It is connected to a UPE and is located in the core of a full-mesh VPLS network. The
SPE is connected to all the devices in a full-mesh VPLS network.
For an SPE that is connected to a UPE, the UPE acts as a CE. The PW set up between
the UPE and the SPE serves as the AC of the SPE. The SPE must learn the MAC
addresses of all the sites on the UPE side and those of the UPE interfaces that are
connected to the SPE.
z IGMP snooping
VPLS can isolate users. Each VPN needs to support IGMP snooping, namely, the
multi-instance IGMP snooping.
VPLS learns MAC addresses in the following modes:
− Unqualified
In this mode, there can be numerous VLANs in a VSI to share the MAC address
space and a broadcast area. When learning MAC addresses, VPLS also learns the
VLAN IDs.
− Qualified
In this mode, each VSI has only one VLAN that has the independent MAC address
space and broadcast area. When learning MAC addresses, VPLS does not need to
learn the VLAN IDs.
z VPLS/H-VPLS equal-cost load balancing
In VPLS/H-VPLS services, when there are multiple public tunnels of equal cost from the
local PE to a remote PE, the VPLS PW applies a hash algorithm to select one tunnel for
each data flow. Different data flows over the same PW may therefore be forwarded
through different public tunnels (see the sketch after this list).
z Fast switching of multicast traffic
If the VSI in VPLS/H-VPLS transmits multicast traffic and the master TE tunnel in the
public network becomes faulty, the TE hot-standby (HSB) switchover is performed within 500 ms.
z mVPLS
mVPLS refers to a management VPLS. The VSIs associated with the mVPLS are called
management VSIs (mVSIs).
The prerequisite to the Up state of an mVSI differs from that to a common VSI (service
VSI). The details are as follows:
− Common VSI: has two or more Up AC interfaces, or has one Up AC interface and
one Up PW.
− mVSI: has one Up PW or AC interface.
An mVSI can be bound to a common VSI. When an mVSI receives a gratuitous ARP
packet or a BFD Down packet, the mVSI instructs all the common VSIs bound to it
to clear MAC address entries and re-learn MAC addresses.
z STP over PW
STP over VPLS can address the following problems:
− Loops that are formed in inter-AS VPLS networks (Option A)
− Loops that are formed when multiple ring networks are dual-homed to an H-VPLS
network
− Loops that are formed when the DSLAM accesses multiple UPE devices
z Ethernet loop detection
Virtual Private LAN Service (VPLS) is a significant technology for the Metropolitan
Area Network (MAN). To prevent the impact of single point failures on services, user
networks are connected to the VPLS network of a carrier through redundant links. The
redundant links, however, lead to loops, which thus cause broadcast storms.
In networking applications, you can deploy the Spanning Tree Protocol (STP) or
common loopback detection technologies to avoid the preceding problems. In practice,
however, STP should be deployed at the user side, and the common loopback detection
technology requires the devices at the user side to allow special Layer 2 loopback
detection packets to pass through.
When user networks cannot be controlled, you can deploy Ethernet loop detection
supported by the NE40E over the carrier network. Ethernet loop detection need not be
deployed at the user side. This also prevents broadcast storms caused by loops formed in
a VPLS network.
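As referenced under the equal-cost load balancing item above, the per-flow tunnel selection can be sketched as follows (Python, illustrative only; the hashed fields and the hash function are assumptions, the point being that packets of one flow always map to the same tunnel):

import hashlib

def select_tunnel(flow, tunnels):
    """Map a flow (for example, a 5-tuple) to one of the equal-cost public tunnels.

    Packets of the same flow always hash to the same tunnel, while different
    flows over the same PW may use different tunnels.
    """
    key = "|".join(str(f) for f in flow).encode()
    digest = hashlib.md5(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(tunnels)
    return tunnels[index]

tunnels = ["LSP-1", "LSP-2", "TE-tunnel-3"]
flow_a = ("10.1.1.1", "10.2.2.2", 6, 1024, 80)
flow_b = ("10.1.1.1", "10.3.3.3", 6, 2048, 80)
print(select_tunnel(flow_a, tunnels), select_tunnel(flow_b, tunnels))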
PW Redundancy
PW redundancy provides reliability by setting up multiple PWs on a VPN to protect traffic
transmitted along the PW. Those PWs assume one of two roles: master PW or backup PW.
The master and backup PWs are dynamically negotiated and determined. Once one PW fails,
traffic on this PW is switched to another PW. This ensures traffic transmission.
PW traffic is transmitted over public network tunnels. When a tunnel fails, traffic is switched
to another tunnel for transmission. In some scenarios, however, such as a PE failure or an AC
failure, traffic cannot be protected. Thus, PW redundancy is introduced to implement traffic
protection.
VLL FRR protects traffic by switching traffic from the master PW to the backup PW in case
the master PW fails. The master and backup PWs are statically configured.
PW redundancy provides the master and backup PWs that are dynamically negotiated and
determined through E-Trunk or E-APS on AC interfaces. The applications of VLL FRR and
PW redundancy are similar.
(Figure: BGP/MPLS L3VPN networking; hierarchical PEs (UPEs with HoVPN support) extend the VPN, and PE-ASBRs support the inter-AS solutions VRF-to-VRF, MP-EBGP, and multihop MP-EBGP.)
Carrier's Carrier
The customer of the BGP/MPLS L3VPN service provider can serve as a service provider,
which is called the networking mode for the carrier's carrier. In this mode, the BGP/MPLS
L3VPN service provider is called the provider carrier or the first carrier. The customer is
called the customer carrier or the second carrier, which serves as a CE router for the first
carrier.
To ensure good scalability, the second carrier adopts an operating mode similar to that of a stub
VPN. That is, the CE router of the first carrier only advertises the routes (internal routes) of
the VPN where it resides to the PE router of the first carrier. The CE router does not advertise
its customers' routes (external routes). PE routers of the second carrier exchange external
routes through BGP. This greatly reduces the number of routes maintained on the first carrier
network.
Inter-AS VPN
The NE40E supports the following inter-AS VPN solutions explained in RFC 2547bis:
z VPN instance to VPN instance: ASBRs exchange VPN routes between them through
sub-interfaces, which is also called Inter-Provider Backbones Option A.
z EBGP redistribution of labeled VPN-IPv4 routes: ASBRs advertise labeled VPN-IPv4
routes to each other through MP-EBGP, which is also called Inter-Provider Backbones
Option B.
z Multihop EBGP redistribution of labeled VPN-IPv4 routes: PE routers advertise labeled
VPN-IPv4 routes to each other through Multihop MP-EBGP, which is also called
Inter-Provider Backbones Option C.
Multicast VPN
The NE40E supports multicast BGP/MPLS L3VPN.
Multicast services are deployed in the network shown in Figure 5-14. VPN users at various
sites receive multicast traffic from the local VPN. The PE in the public network supports
multi-instance.
As shown in Figure 5-14, the public network instances on each PE and the P implement
public network multicast. VPN multicast data is multicast in the public network.
(Figure 5-14: public network multicast; the public network instances PE1_public-instance, PE2_public-instance, and PE3_public-instance are connected through P1, P2, and P3.)
As shown in Figure 5-15, the VPN A instances on each PE and the sites that belong to VPN A
implement VPN A multicast.
(Figure 5-15: VPN A multicast in MD A; VPN A sites 1, 2, and 3 connect through CE1, CE2, and CE3 to PE1_vpnA-instance, PE2_vpnA-instance, and PE3_vpnA-instance.)
As shown in Figure 5-16, the VPN B instances on PEs and the sites that belong to VPN B
implement VPN B multicast.
(Figure 5-16: VPN B multicast in MD B; VPN B sites 4, 5, and 6 connect through CE4, CE5, and CE6 to PE1_vpnB-instance and PE2_vpnB-instance.)
z Among all possible data receivers, only members of VPN A can receive multicast data
from S1.
z Multicast data is multicast at various sites and on the public network.
To implement multicast VPN, the following network conditions should be met:
z Each site supports multicast based on its VPN instance.
z The public network supports multicast based on public network instances.
z The PE devices support the following multi-instance multicast:
− Connecting sites through VPN instances and supporting multicast based on VPN instances
− Connecting to the public network through public network instances and supporting multicast based on public network instances
− Supporting data switching between public network instances and VPN instances
IPv6 VPN
As an enhancement of IPv4, IPv6 is an Internet protocol of the next generation. IPv6 provides
the enhanced address space, configuration, maintenance, and security functions, and supports
more access users and devices in the Internet than IPv4.
The VPN is a virtual private communication network built over shared links or public networks
such as the Internet. Users located in different areas can exchange data through the public
networks. Thus, the users can enjoy services similar to private P2P links.
An IPv6 VPN refers to a VPN where each site has the IPv6 capability and is connected to the
PE of the SP and then to the SP backbone network through an interface or a sub-interface by
using IPv6 addresses. To put it simply, an IPv6 VPN indicates that a PE router receives IPv6
packets from a CE router, which is different from an IPv4 VPN.
At present, IPv6 VPN services are implemented over the IPv4 backbone network of the SP. In
this case, the PE must support IPv4/IPv6 dual stack because the backbone network is an IPv4
network and the client sites use the IPv6 address family, as shown in Figure 5-17. Any
network protocol that can bear IPv6 traffic can run between the CEs and the PEs. PE
interfaces connected to the client run IPv6; PE interfaces connected to the public network run
IPv4.
Figure 5-17 Networking diagram of the IPv6 VPN over the IPv4 public network
Through Multiprotocol Extensions for Border Gateway Protocol version 4 (MP BGPv4), the
IPv6 VPN advertises IPv6 VPN routing information in the backbone network, triggers MPLS
to allocate labels for IPv6 packets to mark the packets, and uses tunnels such as LDP LSPs and
MPLS TE tunnels to transmit private network data in the backbone network. An IPv6 VPN is
implemented in the same way as that of a BGP/MPLS L3VPN.
The NE40E supports the following IPv6 VPN networking solutions:
z Intranet VPN
z Extranet VPN
z Hub&Spoke
z Inter-AS or multi-AS backbones VPN
z Carriers' carrier
HoVPN
In BGP/MPLS VPN solutions, the key device, PE router, provides the following functions:
z Provides access functions for users. To achieve this, a PE router needs a great number of
interfaces.
z Manages and advertises VPN routes and processes user packets. This requires that a PE
router have large-capacity memory and high forwarding capabilities.
This causes the PE to become a bottleneck. To solve this problem, Huawei launches the
Hierarchy of VPN (HoVPN) solution. In HoVPN, the functions of a PE router are distributed
to multiple PEs. Playing different roles in a hierarchical architecture, the PEs implement
functions of a centralized PE router together.
The basic architecture of HoVPN is shown in Figure 5-18. The device that is directly
connected to users is called the Underlayer PE or User-end PE (hereinafter referred to as the
UPE). The device that is connected to the UPE in the internal network is called the
Superstratum PE or Service Provider-end PE (hereinafter referred to as the SPE). Multiple
UPEs and an SPE form a hierarchical PE, functioning together as a traditional PE router.
(Figure 5-18: basic HoVPN architecture; UPE 1 connects VPN1 sites to SPE 1 and SPE 2 in the MPLS network, together forming a hierarchical PE.)
In the networking of HoVPN, functions of PE routers are implemented hierarchically. Therefore, the
solution is also called the Hierarchy of PE (HoPE).
An HoPE is the same as a traditional PE in appearance. HoPEs and common PEs can coexist
in an MPLS network.
HoVPN supports the embedding of HoPEs:
z An HoPE can act as a UPE, and compose a new HoPE with an SPE.
z An HoPE can act as an SPE, and compose a new HoPE with multiple UPEs.
The embedding of HoPEs can be repeated, so in theory it can extend a VPN network infinitely.
RRVPN
Resource Reserved VPN (RRVPN) is a tunnel-multiplexing technology. It can provide
end-to-end QoS guarantee for VPN users.
To reserve and isolate resources for a VPN, RSVP-TE tunnels must be used. When RRVPN is
implemented, different VPNs use different tunnels. The resources of different tunnels with the
same tunnel interface, however, are isolated and reserved.
Note that the total bandwidth of the tunnels must not exceed the total bandwidth reserved for
the physical links.
Multi-role Host
In a BGP/MPLS L3VPN, the VPN attributes of the packets received by PEs from CEs are
determined by the VPN instance bound to the inbound interface on the PEs. Thus, all the
CEs whose packets are forwarded by the same PE interface belong to the same VPN.
In practical scenarios, some servers or terminals need to access multiple VPNs. These servers
or terminals are called multi-role hosts. For example, a server in a financial system in VPN 1
and a server in an accounting system in VPN 2 need to communicate.
In a multi-role host model, only the multi-role host can access multiple VPNs; the
non-multi-role hosts can access only the VPN to which the hosts belong.
A multi-role host generally fulfils the following functions:
z Ensures that the data stream of the multi-role host reaches the destination VPN network.
z Ensures that the data stream from the destination VPN network reaches the multi-role
host.
As shown in Figure 5-19, the multi-role host (PC) belongs to VPN 1. If VPN 1 and VPN 2 on
PE1 cannot import routes from each other, PC can access VPN 1 only. The data stream sent
from PC to VPN 2 only reaches the routing table of VPN 1 on PE1. If PE1 finds no route to
the destination address of the packet, which belongs to VPN 2, in the routing table of VPN 1,
PE1 discards the packet.
To ensure that the data stream of PC reaches VPN 2, you can configure policy-based routing
(PBR) on PE1 interfaces that connect CE1. After the configuration, if PE1 cannot find the
destination address of a packet from CE1 in the routing table of VPN 1, it searches the routing
table of VPN 2 for the route and then forwards the packet. The PBR is generally based on IP
addresses and can guide data streams to access different VPNs.
(Figure 5-19: multi-role host networking; the PC in VPN 1 attaches through CE1 to PE1, which uses policy-based routing toward VPN 2 (PE3 and CE3) and a static route for the return traffic; CE2 and PE2 belong to VPN 1 across the backbone.)
To ensure that the data stream replied from VPN 2 reaches PC, routes of the replied data
stream must exist in the routing table of VPN 1 on PE1. As a result, you need to add a static
route destined for PC to the routing table of VPN 2 on PE1. The outbound interface of the
static route must be the outbound interface that connects CE1 in VPN 1 to PE1.
The functions of a multi-role host are mainly implemented on the PE that connects the CE to
which the multi-role host is connected.
z Through the PBR on a PE, the PE can search the routing tables of different VPNs for
routes of the data streams from the same VPN.
z Static routes can be added to the routing table of the destination VPN on a PE. The
outbound interfaces of the static routes are the interfaces bound to the instances of the
VPN where the multi-role host resides.
Note that the IP addresses used in the VPN where a multi-role host resides and those used in
the VPNs that the host accesses must not overlap.
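The forwarding logic of the multi-role host example can be summarized with a short sketch (Python, illustrative; the table structures, prefixes, and next-hop names are invented, and the order of lookups follows the description above):

import ipaddress

# Hypothetical routing tables on PE1 (prefixes and next hops invented for illustration).
vpn1_routes = {"10.1.0.0/16": "to-CE1", "192.168.10.0/24": "to-PE2"}
vpn2_routes = {"172.16.0.0/16": "to-PE3"}

def longest_match(table, dst):
    """A tiny stand-in for a real longest-prefix-match lookup."""
    best = None
    for prefix, nexthop in table.items():
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(dst) in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, nexthop)
    return best[1] if best else None

def forward_from_ce1(dst):
    """Normal lookup in the VPN 1 table; the PBR step falls back to the VPN 2 table."""
    nexthop = longest_match(vpn1_routes, dst)
    if nexthop is None:
        nexthop = longest_match(vpn2_routes, dst)   # policy-based routing step
    return nexthop or "drop"

print(forward_from_ce1("172.16.5.9"))    # reaches VPN 2 through PBR
print(forward_from_ce1("192.168.10.7"))  # stays in VPN 1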
MPLS is widely applied on the access network of the ISP because it features high reliability
and security and sound IP-based operation and maintenance capabilities, and supports QoS.
MPLS L2VPN provides MPLS-based VPN services and transparently transmits Layer 2 data
of users on the MPLS network. It thus provides a channelized path for user services and
reduces the LSPs maintained by transit nodes. MPLS L3VPN services are a type of common
services provided by the ISP over the bearer network. MPLS L2VPN tunnels enable users to
access the MPLS L3VPN of the bearer network. Users can access MPLS L3VPNs through
low-end devices such as the S-switches. In this manner, networking cost is reduced and secure
and stable MPLS L3VPN services are provided for users.
To access L3VPNs through MPLS L2VPN tunnels, two devices, a PE-AGG and an NPE, need
to be deployed at the border between the access network and the bearer network. The PE-AGG
is used to terminate the L2VPN and the NPE is used to terminate the
L3VPN. The PE-AGG and the NPE run as the CE router for each other. In this case, if an
NPE combines the capabilities of the PE-AGG, networking cost can be saved and networking
is simplified. The VE interface, which is supported by the NE40E to access multiple services,
can be bound to the L2VPN and L3VPN at the same time. That is, the VE interface can access
and terminate the L2VPN and L3VPN. In this manner, the NE40E can run as the NPE and
PE-AGG at the same time.
(Figure: DSLAMs access the UPE, which terminates the L2VPN tunnel and connects to the L3VPN tunnel.)
Without a dedicated board, the NE40E can associate Layer 2 with Layer 3 VE interfaces by
using a VE group. The NE40E terminates the VLL and the VPLS through Layer 2 VE
interfaces and accesses the L3VPN through Layer 3 VE interfaces. The UNPE function is thus
implemented.
(Figure: comparison of VPN A carrying only one type of service with VPN A carrying three types of services while ensuring QoS for each service in the same VPN; VPN A sites 1, 2, and 3 connect through PE1, PE2, and PE3 over the backbone network.)
(Figure: a classifier maps flows 1 through 8 to queues, and a scheduler services the queues on the port.)
(Figure: interface-based and VPN-based bandwidth reservation for VPN-A across CE-1 through CE-8, PE-1 through PE-3, P-2, P-3, and the VSI-A instances.)
(Figure: users access the SR through a DSLAM on the ISP network; a DHCP server assigns IP addresses.)
An IP packet of the user is encapsulated in a QinQ packet with double VLAN tags through the
DSLAM and then accesses the SR. The outer VLAN ID specifies the DSLAM; the inner
VLAN ID specifies the user.
With the DHCP relay function, the SR forwards a DHCP request packet to the DHCP server
when receiving an access request from the user. After the DHCP server returns an assigned IP
address to the user, the SR reports information about the online user to the COPS server.
The information includes the following:
z Location of the user, namely, CircuitId in the DHCP Option 82 field
z VPN to which the user belongs
z IP address of the user
z MAC address of the user
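For illustration only, the following Python sketch shows how the Circuit ID could be read from the DHCP Option 82 (relay agent information, RFC 3046) payload mentioned above; the sample payload bytes are invented:

def parse_option82(data: bytes) -> dict:
    """Parse the sub-options of DHCP Option 82 (RFC 3046 relay agent information).

    Sub-option 1 is the Circuit ID, which identifies the user's location.
    """
    subopts, i = {}, 0
    while i + 2 <= len(data):
        code, length = data[i], data[i + 1]
        subopts[code] = data[i + 2:i + 2 + length]
        i += 2 + length
    return subopts

# Hypothetical Option 82 payload: sub-option 1 (Circuit ID) = b"dslam1 eth 0/1/2:100"
payload = bytes([1, 20]) + b"dslam1 eth 0/1/2:100"
print(parse_option82(payload)[1].decode())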
In addition, the NE40E provides the following functions:
z Supports the three-level limit to the number of users.
z Provides the detection of online users and the processing of users going offline.
z Checks the validity of IPTV users.
z Displays information about online users and forcibly cuts off online users.
QoS technologies help make the Internet an integrated network that can carry data, voice, and
video services at the same time.
The following describes the QoS features of the NE40E.
5.7.1 DiffServ Model
5.7.2 Traffic Classification
5.7.3 Traffic Policing
5.7.4 Queue Scheduling
5.7.5 Congestion Management
5.7.6 Traffic Shaping
5.7.7 HQoS
5.7.8 QPPB
5.7.9 Ethernet QoS
...
(Figure: token bucket processing; tokens are put into the bucket at a specified rate, arriving packets are classified, and each packet is either passed or dropped depending on the tokens available.)
z The tokens are put into the TB at the rate preset by the user. The capacity of the TB is
also preset by users. If the token bucket is full, no more tokens can be added.
z On arrival, the packets are classified according to the IP precedence, source address, or
destination address of packets. The packets that conform to the preset rule go into the TB
for further processing.
z If there are enough tokens in the bucket, packets are forwarded, and the number of tokens
in the bucket decreases based on the length of the packets. If the TB contains insufficient
tokens or is empty, the packets not assigned enough tokens are discarded, or are re-marked
with new IP precedence, DSCP, or EXP values before being sent. In this case, the number
of tokens in the TB remains unchanged.
The preceding process shows that the CAR technology enables a router to control traffic, and
mark or re-mark packets.
CAR is used to limit the traffic rate. With the CAR technology, a TB is used to measure the
data traffic that flows through the interfaces on a router so that only the packets assigned
tokens go through the router in the specified time period. In this manner, the traffic rate is
limited. CAR specifies the maximum traffic rates of both incoming packets at the ingress and
outgoing packets at the egress. Meanwhile, the rate of certain types of traffic can be controlled
according to such information as the IP address, port number, and priority. The traffic not
conforming to the conditions is not limited in rate; such traffic is forwarded at the original
rate.
CAR is mainly applied at the network edge to ensure that the core device can process data
normally. The NE40E supports CAR for both the incoming and outgoing traffic.
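A minimal token-bucket sketch of the CAR behaviour described above (Python; the parameter names and byte-based accounting are assumptions made for illustration, not the NE40E implementation):

import time

class TokenBucket:
    """Single token bucket: tokens accumulate at rate_bps/8 bytes per second, up to
    capacity_bytes; a packet conforms only if enough tokens remain."""

    def __init__(self, rate_bps, capacity_bytes):
        self.rate = rate_bps / 8.0          # committed rate, in bytes per second
        self.capacity = capacity_bytes      # burst size
        self.tokens = capacity_bytes
        self.last = time.monotonic()

    def conform(self, packet_len):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_len:
            self.tokens -= packet_len       # forward the packet
            return True
        return False                        # drop or re-mark the packet

bucket = TokenBucket(rate_bps=2_000_000, capacity_bytes=32_000)   # 2 Mbit/s CAR
print(bucket.conform(1500))                 # True while tokens last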
(Figure: CAR application example with a 2 Mbit/s serial link and 10 Mbit/s Ethernet links connecting PC1, Server1, Server2, LAN 1, and LAN 2 through the router.)
Congestion management provides means to manage and control traffic when traffic
congestion occurs. The queue scheduling technology is used to handle traffic congestion.
Packets sent from one interface are placed into many queues which are identified with
different priorities. The packets are then sent according to the priorities. A proper queue
scheduling mechanism can provide packets of different types with reasonable QoS features
such as the bandwidth, delay, and jitter. The queue here refers to the outgoing packet queue.
Packets are buffered into queues before the interface is able to send them. Therefore, the
queue scheduling mechanism works only when an outbound interface is congested. Except for
FIFO queuing, the queue scheduling mechanisms can re-arrange the order of packets.
Commonly-used queue scheduling mechanisms are as follows:
z First In First Out (FIFO) queuing
z Priority Queuing (PQ)
z Custom Queuing (CQ)
z Weighted Fair Queuing (WFQ)
z Class-Based WFQ (CBWFQ)
z Low Priority Queuing (LPQ)
The NE40E supports FIFO, PQ and WFQ to implement queue scheduling on interfaces.
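The following sketch illustrates weighted scheduling among outgoing queues in the spirit of WFQ (Python; a simple weighted round-robin stand-in, not the scheduler actually used by the NE40E):

from collections import deque

class WeightedScheduler:
    """Serve several outgoing queues in proportion to their configured weights."""

    def __init__(self, weights):
        self.queues = {name: deque() for name in weights}
        self.weights = weights

    def enqueue(self, queue, packet):
        self.queues[queue].append(packet)

    def round(self):
        """One scheduling round: each queue may send up to `weight` packets."""
        sent = []
        for name, weight in self.weights.items():
            for _ in range(weight):
                if self.queues[name]:
                    sent.append(self.queues[name].popleft())
        return sent

sched = WeightedScheduler({"voice": 3, "video": 2, "data": 1})
for i in range(4):
    sched.enqueue("data", f"d{i}")
    sched.enqueue("voice", f"v{i}")
print(sched.round())   # voice gets three transmission chances per round, data one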
Weighted Random Early Detection (WRED) drops packets of different drop priorities with
different probabilities within the same traffic. This can effectively prevent and control network
congestion.
5.7.7 HQoS
Hierarchical QoS (HQoS) is a QoS technology that can control users' traffic and support
scheduling according to the priorities of user services.
The HQoS of the NE40E has the following functions:
z Provides five levels of scheduling for services.
z Supports the configuration of parameters such as the maximum queue length, WRED, low
delay, SP/WRR, CBS, PBS, and statistics.
z Supports the configuration of parameters such as the CIR, PIR, number of queues, and
scheduling algorithms between queues for each user.
z Provides the traffic statistics function. The user can view the bandwidth usage of services
and properly distribute the bandwidth by analyzing the traffic.
z Supports HQoS for VPLS, L3VPN, VLL, BRAS users, and TE.
5.7.8 QPPB
QPPB propagates the QoS policy through BGP.
The receiver of BGP routes can perform the following functions:
z Sets QoS parameters for BGP routes, such as the IP precedence and traffic behavior,
based on the attributes of the routes.
z Classifies traffic by matching QoS parameters and sets the QoS policy for the classified
traffic.
z Forwards packets in accordance with the locally-set QoS policy to propagate the QoS
policy through BGP.
The receiver of the BGP route can set the IP precedence and the related specific traffic
behavior based on the following attributes:
z ACL
z AS path list of routing information
z Community attribute list of routing information
z Route cost of routing information
(Figure: QPPB networking between AS100 and AS200; a QoS policy is configured on the route receiver, routing information is advertised by the sender, and packets are filtered by the QoS policy.)
In complex networking where routing policies need to be modified dynamically, QPPB can be
applied to simplify the modification of policies on the route receiver: you only need to modify
the routing policy on the BGP route sender to achieve this purpose.
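The mapping performed on the route receiver can be sketched as follows (Python; the community values, precedence numbers, and behaviours are invented for illustration):

# Illustrative QPPB mapping: BGP route attributes -> IP precedence -> traffic behaviour.
COMMUNITY_TO_PRECEDENCE = {
    "100:300": 5,     # e.g. routes tagged by the sender as premium traffic
    "100:200": 3,
    "100:100": 0,
}

PRECEDENCE_TO_BEHAVIOUR = {5: "ef-queue", 3: "af-queue", 0: "best-effort"}

def classify_route(communities):
    """Derive the IP precedence for packets matching a received BGP route."""
    for community in communities:
        if community in COMMUNITY_TO_PRECEDENCE:
            return COMMUNITY_TO_PRECEDENCE[community]
    return 0

def behaviour_for_packet(route_communities):
    precedence = classify_route(route_communities)
    return PRECEDENCE_TO_BEHAVIOUR[precedence]

print(behaviour_for_packet(["100:300", "100:999"]))   # -> "ef-queue"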
(Figure: Ethernet QoS; the CE accesses the PE on the ISP network through QinQ, which supports 802.1p re-marking.)
When the bandwidth of logical interfaces changes, traffic is automatically balanced based on
the new bandwidth proportion.
(Figures: traffic statistics processing; a classifier matches packets against rules, unmatched packets pass by default, the configured action is performed on matched packets, statistics are collected, and packets complying with URPF are allowed to pass.)
In traffic policing, the system supports the collection of statistics on the following traffic:
z Total traffic that matches the CAR rule
z Traffic that is permitted or discarded by the CAR rule
z When the same traffic policy is applied to multiple interfaces, the CAR traffic statistics in
the traffic policy are collected on a per-interface basis.
5.10.2 RPF/URPF
5.10.3 MAC Limit
5.10.4 Unknown Traffic Suppression
5.10.5 DHCP Snooping
5.10.6 Local Attack Defense
5.10.7 GTSM
5.10.8 ARP Attack Defense
5.10.9 Mirroring
5.10.10 Lawful Interception
5.10.2 RPF/URPF
Unicast Reverse Path Forwarding (URPF) functions to prevent network attacks based on the
source address spoofing.
Generally, when receiving a packet, a router obtains the destination address of the packet and
searches the forwarding table for a route to the destination address. If a route to the
destination address is found, the packet is forwarded; otherwise, the packet is discarded.
When a packet arrives on a URPF-enabled interface, URPF obtains the source address and
inbound interface of the packet. URPF then looks up the source address as if it were a
destination address, finds the interface through which that address is reachable, and compares
this interface with the actual inbound interface. If they do not match, URPF considers the
source address to be spoofed and discards the packet. In this manner, URPF can effectively
prevent malicious attacks that are launched by changing the source address.
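A strict URPF check can be pictured with the following sketch (Python; the FIB is modelled as a simple prefix table, which is an assumption made for illustration):

import ipaddress

# Minimal FIB: prefix -> interface used to reach that prefix (invented values).
FIB = {
    "10.1.0.0/16": "GE1/0/0",
    "10.2.0.0/16": "GE2/0/0",
}

def lookup_interface(address):
    best = None
    for prefix, interface in FIB.items():
        net = ipaddress.ip_network(prefix)
        if ipaddress.ip_address(address) in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, interface)
    return best[1] if best else None

def urpf_accept(src_address, inbound_interface):
    """Strict URPF: accept only if the route back to the source uses the
    interface on which the packet arrived."""
    return lookup_interface(src_address) == inbound_interface

print(urpf_accept("10.1.2.3", "GE1/0/0"))   # True
print(urpf_accept("10.1.2.3", "GE2/0/0"))   # False: likely a spoofed source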
Limiting MAC address learning prevents attackers from exhausting the MAC address space of
other customers; the system can also discard attack packets on the ingress and prohibit invalid
packets from consuming bandwidth.
MAC address learning is the basic feature of Layer 2 forwarding. It is automatically carried
out and is easy to use. It, however, needs to be deployed with caution to prevent attacks.
The NE40E supports the following types of limit to MAC address learning:
z Limit to the number of MAC addresses that can be learned
z Limit to the speed of MAC address learning
z Limit to interface-based MAC address learning
z Limit to PW-based MAC address learning
z Limit to MAC address learning based on the combination of the VLAN and port
z Limit to MAC address learning based on the combination of the port and VSI
z Limit to MAC address learning based on QinQ
MAC address learning limit can be applied to the network environment with fixed access
users and lacking in security, such as the community access or the intranet without security
management. When the number of MAC addresses learnt by an interface exceeds the limited
threshold, the MAC address of a new access user is not learnt. The traffic of this user is thus
broadcast at a restricted transmission rate.
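A per-interface limit to the number of learned MAC addresses can be sketched as follows (Python; the limit value and the reaction when the limit is reached are illustrative):

class MacLimit:
    """Stop learning new MAC addresses on an interface once the limit is reached."""

    def __init__(self, max_addresses):
        self.max_addresses = max_addresses
        self.table = {}                      # MAC -> interface

    def learn(self, mac, interface):
        if mac in self.table:
            return True                      # refresh of a known address
        if len(self.table) >= self.max_addresses:
            return False                     # limit reached: the new address is not learned
        self.table[mac] = interface
        return True

limit = MacLimit(max_addresses=2)
for mac in ("00-11-22-33-44-55", "00-11-22-33-44-66", "00-11-22-33-44-77"):
    print(mac, limit.learn(mac, "GE1/0/0"))   # the third address is rejected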
DHCP snooping is mainly used to prevent DHCP Denial of Service (DoS) attacks, bogus
DHCP server attacks, ARP middleman attacks, and IP/MAC spoofing attacks when DHCP is
enabled on the device.
The working mode of DHCP snooping varies with the type of attacks, as shown in Table 5-1.
Whitelist
The whitelist refers to a group of valid users or users with the high priority. By setting the
whitelist, you can enable the system to protect existing services or user services with the high
priority. You can define the whitelist through Access Control List (ACL) rules. Then, the
packets matching the whitelist are sent to the CPU in preference at a high rate.
The valid users that normally access the system as confirmed and the users with the high
priority can be added to the whitelist.
Blacklist
The blacklist refers to a group of invalid users. You can define the blacklist through ACL rules.
Then, the packets matching the blacklist are discarded or, with a low priority, sent to the CPU.
The invalid users that are involved in attacks as confirmed can be added to the blacklist.
User-defined Flows
User-defined flows indicate that the user defines ACLs to describe the traffic. This feature is
applied when unknown attacks emerge on the network. The user can flexibly specify the
characteristics of the attack data flows and limit the data flows that match the specified
characteristics.
The packets of sessions that have been set up are sent to the CPU with a high priority. This
feature is called Active Link Protection (ALP). Through ALP, the running of the existing
services can be ensured in the case of attacks.
When detecting that the session is deleted, the system deletes information about this session
from the whitelist.
Local URPF
URPF normally detects packets at the ingress of a network. In large-scale networks, local
URPF can be enabled on local devices to prevent an impact on forwarding performance. This
allows URPF to check only the validity of the source addresses of packets handled by the local
devices; invalid packets are discarded. This prevents source address spoofing attacks.
To prevent the devices from being controlled by hackers through non-management interfaces
or by flooding management packets, the NE40E provides management plane protection. This
allows the management packets to be received only from management interfaces. The
management packets are thus controllable.
5.10.7 GTSM
Currently, some attackers on the network send forged protocol packets to attack a router. As a
result, the finite resources of the router, such as the CPU on the SRU/MPU, are heavily loaded
and consumed. For example, an attacker continuously sends forged BGP packets to a router.
After the LPU of the router receives the packets destined for the local host, the LPU sends the
packets to the BGP processing module of the CPU on the SRU/MPU instead of checking the
validity of the packets. As a result, the CPU usage rises and the system becomes abnormally
busy when the SRU/MPU of the router processes these forged packets.
To prevent the preceding attacks, the NE40E provides GTSM. GTSM protects services above
the IP layer by checking whether the TTL value in the IP header is within a specified range. In
application, GTSM is used to protect the TCP/IP-based control plane, such as routing
protocols, from CPU-utilization attacks such as CPU overload.
The NE40E supports the following types of GTSM:
z BGP GTSM
z OSPF GTSM
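The TTL check at the heart of GTSM can be illustrated as follows (Python; the hop-count bound is an assumed configuration value):

def gtsm_accept(ttl, valid_hops=1):
    """GTSM check for a protocol peer expected within `valid_hops` hops.

    A genuine peer sends packets with TTL 255, so after crossing at most
    valid_hops - 1 routers the TTL still falls in [255 - valid_hops + 1, 255].
    Remote attackers cannot forge such a high remaining TTL.
    """
    return 255 - valid_hops + 1 <= ttl <= 255

print(gtsm_accept(255, valid_hops=1))   # directly connected BGP peer: accepted
print(gtsm_accept(250, valid_hops=1))   # packet originated far away: dropped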
Timestamp-based Scanning-proof
The timestamp-based scanning-proof function can identify a scanning attack in time and
suppress the processing of the requests generated by the scanning, regardless of whether it is
an ARP scanning attack or an IP scanning attack. In this manner, the CPU is protected from
the attack.
According to the analysis of actual ARP attacks on some networks, the ARP attack traffic
comprises 50% ARP request packets and 50% ARP response packets. Therefore, a solution to
the attacks of numerous ARP packets must be based on the two aspects: ARP request packets
and ARP response packets.
ARP bidirectional isolation enables a device to process ARP request packets and ARP
response packets separately.
z The device performs stateless responses for ARP request packets. That is, the device
generates neither ARP entries nor relevant states after replying to the ARP request
packets. Without sending the ARP request packets to the CPU for processing, the device
defends the ARP table of the gateway against address spoofing attacks by ARP request
packets.
z The device processes only the ARP response packets of the ARP request packets sent by
its CPU. The ARP response packets of the ARP request packets that are not sent by its
CPU are then discarded. The normal ARP request packets can thus be promptly
processed.
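The two directions of ARP bidirectional isolation can be sketched as follows (Python; the state kept for outstanding requests is an assumption made to illustrate the behaviour):

class ArpIsolation:
    """Stateless replies to ARP requests; ARP replies are accepted only when
    they answer a request that the local CPU itself sent."""

    def __init__(self):
        self.pending_requests = set()        # IP addresses the CPU has asked about

    def on_request(self, sender_ip):
        # Reply without creating an ARP entry or any per-sender state.
        return f"ARP reply sent to {sender_ip} (no entry created)"

    def cpu_sends_request(self, target_ip):
        self.pending_requests.add(target_ip)

    def on_reply(self, sender_ip):
        if sender_ip in self.pending_requests:
            self.pending_requests.discard(sender_ip)
            return "reply accepted, ARP entry updated"
        return "unsolicited reply discarded"

arp = ArpIsolation()
print(arp.on_reply("10.0.0.9"))          # discarded: the CPU never asked
arp.cpu_sends_request("10.0.0.9")
print(arp.on_reply("10.0.0.9"))          # accepted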
5.10.9 Mirroring
Mirroring means that the system copies the forwarding packets on a node in the network to a
specified observing port, without interrupting services. Users can specify the number of the
port to be observed and connect the packet analysis equipment to the observing port to
observe the traffic. In local mirroring, the observing port and mirroring port reside on the
same device. In remote mirroring, the observing port and mirroring port reside on different
devices. The NE40E supports both the local mirroring and remote mirroring.
Mirroring is divided into the following types according to the requirements for the packets to
be copied:
z Port mirroring: The packets received and sent by a mirroring port are completely copied
to a specific observing port.
z Flow mirroring: On the basis of traffic classification, only the packets that match specific
rules are copied and other packets are filtered out. Because packets that the system is not
concerned about are filtered out, packets can be monitored with fine granularity, and the
efficiency of the packet analysis equipment is thus improved.
Mirroring is divided into the following types according to the direction in which the packets
are copied:
z Upstream mirroring: All packets or the packets that match specific rules received by a
mirroring port are copied to a specific observing port.
z Downstream mirroring: All packets or the packets that match specific rules to be sent by
a mirroring port are copied to a specific observing port.
Local Mirroring
Figure 5-33 shows the networking diagram of applying local mirroring.
(Figure 5-33: local mirroring; the router connects Network 1 on Port A and Network 2 on Port B, and the incoming and outgoing packets on Port A are mirrored to the observing Port C.)
Network 1 and Network 2 are connected through the router. When the incoming packets from
Network 1 on Port A need to be monitored, you can copy these incoming packets as mirroring
packets. While the incoming packets are forwarded normally, the mirroring packets are
forwarded through Port C to the packet analysis equipment for processing. In certain cases,
both the incoming and outgoing packets to and from Network 1 need to be monitored. In this
case, the router copies both the incoming and outgoing packets on Port A to the observing
port.
In local mirroring, a physical observing port and multiple logical observing ports can be
configured on an LPU. Multiple mirroring ports can be configured on an LPU.
Remote Mirroring
Compared with local mirroring, remote mirroring features the following:
z Network maintenance engineers can analyze mirroring packets from remote devices
rather than being on site.
z A network maintenance engineer can analyze mirroring packets on different sites, which
saves human resources.
Figure 5-34 shows the networking diagram of applying remote mirroring.
(Figure 5-34: remote mirroring; Router A and Router B are edge routers on the IP/MPLS backbone network, Customer 1 and Customer 2 access the backbone through Router C and Router D, and the packet analysis equipment is attached to Router B.)
Router A and Router B are edge routers on the IP/MPLS backbone network. Customer 1 and
Customer 2 access the backbone network through Router C and Router D respectively. To
maintain the network, analyze attacks, and locate faults, you need to check whether the
protocol packets sent from or received by Router A are correct; or you need to check whether
the sub-interfaces of a VPN user bound to Router C are attacked. To do so, you copy certain
protocol packets received by Router A, protocol packets sent from Router A to Router C, or
packets received by sub-interfaces on Router A, and send the copies to Router B. Router B
then forwards these packets to the packet analysis equipment for analysis.
In remote mirroring, data from the mirroring port is copied and then the copy of data is sent
over a specified tunnel to a remote destination router where the remote observing port resides.
The remote observing port then forwards the copy of data to the packet analysis equipment.
Data transmitted from a mirroring port to a remote observing port forms a flow. If there are
two pieces of data transmitted from two mirroring ports to a remote observing port, these two
pieces of data form two flows.
The NE40E provides MPLS LSPs, MPLS TE tunnels for remote mirroring.
In remote mirroring, multiple observing ports and mirroring ports can be configured on an
LPU.
In remote mirroring, the mirroring packets can be intercepted.
In this scenario, the interception-related information (IRI) is provided by the AAA server and
the communication content (CC) is provided by the NE40E.
(Figure: lawful interception networking; interception centers 1 to N connect through HI1, HI2, and HI3 interfaces to the LIG, which connects to the AAA server and the device through L1 and X1/X2/X3 interfaces over the Internet.)
z Interception management center
The interception management center is the agent of the interception centers. It receives the
interception request from the interception center, transforms the information in the request
into the location and service identifier, and then delivers the configuration of interception
to the network devices of the carrier.
z LIG
The lawful interception gateway (LIG) acts as the agent between the interception
management center and the devices of the carrier. The LIG plays an important role in
lawful interception. Its functions are as follows:
− Receives the interception request from the interception management center through
L1 and HI1 interfaces.
− Delivers the configuration of interception to network devices and obtains intercepted
contents through X interfaces.
− Sends the intercepted contents to the interception management center through HI2
and HI3 interfaces.
z LIG management system
The LIG management system receives the interception request from the interception
management center and sends the request to the LIG. A LIG management system can
manage multiple LIGs.
The LIG management system delivers the configuration to the LIG through an L1 interface. The LIG is
located on the network of the carrier. The LIG management system is managed by the interception
management center.
z Carrier
The carrier deploys the lawful interception function on the network devices. The devices
that support lawful interception receive the configuration from the interception
management center, and then send the intercepted traffic to the interception management
center.
5.11.4 VRRP
The Virtual Router Redundancy Protocol (VRRP) is a fault-tolerant protocol. VRRP realizes
route selection among multiple egress gateways by separating the physical devices from
logical devices.
VRRP is applicable to a LAN that supports multicast or broadcast, such as Ethernet. VRRP
uses logical gateways to ensure high availability of transmission links. This prevents service
interruption that results from a gateway device failure, without changing the configuration of
routing protocols.
VRRP groups routers on a LAN into a backup group that functions as a virtual router. Hosts
on the LAN know the IP address of only this virtual router rather than that of a specific router
in the backup group. Hosts set the IP address of the virtual router as their own default
next-hop address. In this manner, hosts on the LAN can access other networks through the
virtual router.
In the backup group, only one router is active and is called the master router; other routers are
in the backup state with different priorities and are called the backup routers.
Figure 5-37 shows the networking diagram of a VRRP backup group consisting of three
routers.
10.100.10.2/24 Master
RouterA
PC
10.100.10.3/24
Backup Internet
RouterB
Server
Internal network Backup
10.100.10.0/24
Backup group RouterC
Virtual IP Address
10.100.10.1/24 10.100.10.4/24
VRRP dynamically associates the virtual router with a physical router that transmits services.
VRRP can select a new router to take over the services when the physical router fails. The
entire process is transparent to users, and implements non-blocking communication between
the internal network and the external network.
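Master selection within a backup group can be sketched as follows (Python; the tie-break on the higher interface IP address follows the general VRRP rule and is included only for illustration):

import ipaddress

def elect_master(routers):
    """Pick the master of a VRRP backup group.

    `routers` maps a router name to (priority, interface IP). The router with
    the highest priority wins; on a tie the higher IP address wins.
    """
    return max(
        routers,
        key=lambda name: (routers[name][0],
                          int(ipaddress.ip_address(routers[name][1]))),
    )

backup_group = {
    "RouterA": (120, "10.100.10.2"),
    "RouterB": (100, "10.100.10.3"),
    "RouterC": (100, "10.100.10.4"),
}
print(elect_master(backup_group))   # RouterA is the master; the others back it up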
mVRRP
The Management Virtual Router Redundancy Protocol (mVRRP) specifies an mVRRP group.
The only difference between an mVRRP group and a common VRRP group is that the
mVRRP group can be bound to service VRRP groups and can determine the status of the
bound service VRRP groups.
An mVRRP group can be bound to multiple service VRRP groups but cannot function as a
service VRRP group to be bound to other mVRRP groups.
An mVRRP backup group can join a VRRP Group Management Protocol (VGMP) group as a
member. After an mVRRP group joins a VGMP group, the mVRRP group can be configured
to monitor the status of both the peer and link BFD sessions. The state machine of the
mVRRP group, however, loses its independence. Except for the Initialize state, the Backup
and Master states are determined by the status of the VGMP group that the mVRRP group
joins.
VGMP
Some applications require that the packets of the same session follow the same forward and
return path, that is, the packets of the same session must pass through the same device. In this
case, VRRP has limitations: if an active/standby switchover is performed, the forward and
return paths of the same session may become inconsistent.
To prevent the preceding problem, Huawei develops the VGMP on the basis of VRRP. The
VRRP management group set up on the basis of VGMP manages the status of joining VRRP
groups. On a router, the interfaces that belong to different VRRP groups are thus kept active
or standby simultaneously. In this manner, the VRRP status of the router is kept consistent.
VGMP is required in the following scenarios:
z The system is configured with a large number of VRRP groups.
The system processes VRRP protocol packets on the SRU/MPU. A large number of VRRP
groups may generate a large number of VRRP protocol packets, which compete with other
protocol packets for CPU resources as well as for the channel and bandwidth of inter-board
communication. In this case, the system is overloaded. To decrease the system resources
occupied by protocol packets, you can configure a VRRP management group to control
these VRRP backup groups. The VRRP backup groups then do not send packets by
themselves and occupy fewer system resources.
z The routers are enabled with the firewall, NAT gateway, or policy server.
These functions require the same forward and return path for the same session. Configuring
a VRRP management group to uniformly manage the VRRP groups ensures that the status
of the VRRP groups is consistent.
E-VRRP
E-VRRP is designed to improve reliability on a network that is not enabled with multi-homed
Stream Control Transmission Protocol (SCTP) or load balancing.
(Figure 5-38: E-VRRP networking; the MsoftX, UMG, and HLR are dual-homed to the master and backup routers over the bearer network.)
As shown in Figure 5-38, the MsoftX, Universal Media Gateway (UMG), and Home Location
Register (HLR) are dual-homed to the master and backup routers on a VRRP network. You
can ensure the reliability on the media plane by connecting UMG to the VRRP network and
the reliability on the signaling plane through dual-homed SCTP. If the devices do not support
SCTP, you can configure E-VRRP to ensure the reliability.
(Figure 5-39: VRRP for IPv6; the virtual IPv6 address is 2002::1, the master router uses 2002::2, the backup routers use 2002::3 and 2002::4, and HostA, HostB, and HostC on the Ethernet use the virtual address as the default gateway.)
As shown in Figure 5-39, IPv6 runs on each host and each router on an IPv6 network. A
VRRP group, consisting of a group of routers on a LAN, functions as a virtual router. The
hosts on the LAN set the IPv6 address of the virtual router as the default gateway. In this
manner, the hosts only need to obtain the IPv6 address of the virtual router rather than that of
a specific router and use the default gateway to communicate with external networks. To
ensure reliability and utilize routers, you can create multiple VRRP groups to balance traffic
on the network.
5.11.5 GR
Graceful Restart (GR) is a key technology in implementing HA. The GR switchover and
subsequent restart can be performed by the administrator or triggered by faults. GR neither
deletes routing information from the routing table or the FIB nor resets the boards during the
switchover when faults occur. This prevents service interruption of the entire system.
GR has the following advantages:
z Simple and easy to implement. You only need to modify some protocols rather than
change the current software.
z The protocol status does not need to be backed up.
z Little data needs to be backed up from the AMB to the SMB. The data includes
configuration modifications, updated messages and events, interface status changes, and
topology and routing information received from neighbors after the restart.
z During the switchover, there is little probability of service interruption.
z The network converges rapidly in normal situations.
The NE40E supports system-based GR and protocol-based GR. The protocol-based GR
includes:
z BGP GR
z OSPF GR
z IS-IS GR
z MPLS LDP GR
z L3VPN GR
z RSVP GR
z PIM GR
5.11.6 BFD
BFD is a detection mechanism used on the entire network. It can quickly detect and monitor
the connection of links and forwarding state of the IP route on the network.
Detection packets are transmitted from both ends of a bidirectional link. The NE40E tests the
link status from both directions to detect failures in milliseconds. The NE40E supports
single-hop BFD and multi-hop BFD.
The following describes the BFD features supported by the NE40E.
z When a routing protocol sets up a neighbor relationship, the routing protocol notifies
BFD through the RM module to establish sessions, so that faults in the neighbor
relationship of the routing protocol can be rapidly detected. The detection parameters of
BFD sessions are negotiated by both ends through the routing protocol.
z When detecting a fault, a BFD session goes Down. BFD then triggers route convergence
through the RM module.
z When the neighbor is unreachable, the routing protocol notifies BFD to delete the
session through the RM module.
Generally, routing protocols implement detection in seconds through the Keepalive mechanism of Hello
messages, whereas BFD carries out detection in milliseconds. When the detection interval is 10 ms and
the detection multiplier is 3, BFD can report protocol failures within 50 ms. This speeds up route
convergence.
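The detection time mentioned in the note can be computed directly (Python; a simple interval-times-multiplier illustration):

def bfd_detection_time_ms(negotiated_interval_ms, detect_multiplier):
    """A BFD session is declared Down after `detect_multiplier` consecutive
    packets are missed, that is, after interval x multiplier milliseconds."""
    return negotiated_interval_ms * detect_multiplier

print(bfd_detection_time_ms(10, 3))    # 30 ms, well within the 50 ms target
print(bfd_detection_time_ms(1000, 3))  # 3000 ms for a second-level Hello mechanism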
IP FRR
FRR can minimize data loss caused by network faults. The switching time can reach 50 ms.
The NE40E provides FRR that enables the system to monitor and store the real-time status of
the boards and ports, and to check the status of the ports when packets are forwarded. When
an abnormality occurs on a port, the system can quickly switch traffic to another preset route.
This shortens the traffic interruption time and reduces the number of lost packets.
LDP FRR
The traditional IP FRR cannot effectively protect traffic on the MPLS network. The NE40E
provides LDP FRR and the solution to port protection.
With Downstream Unsolicited (DU) label advertisement, ordered label distribution control,
and liberal label retention, a Label Switch Router (LSR) saves all the label mapping messages
it receives, but only the label mapping message sent by the next hop of the FEC is used to
generate a label forwarding entry. Using this feature, LDP FRR sets up the backup LSP by also
producing a label forwarding entry for a liberal label mapping.
Normally, a packet is forwarded through the primary LSP. When the outgoing interface of the
primary LSP goes Down, the packet is forwarded through the backup LSP. This ensures the
transmission of traffic before network convergence.
TE FRR
TE FRR is a technology used in MPLS TE to implement local protection for the network.
Only the interfaces at a speed of over 100 Mbit/s support TE FRR. The switching time of TE
FRR can reach 50 ms. It can minimize data loss when network failures occur.
TE FRR is only a temporary protection method. When the protected LSP becomes normal or a
new LSP is established, the traffic is switched back to the original LSP or the newly
established LSP.
After an LSP is configured with TE FRR, the traffic is switched to its protection link and the
ingress node of the LSP attempts to establish a new LSP when a link or a node on the LSP
fails.
With different protected objects, TE FRR is classified into the following types:
z Link protection: There is a direct link between the PLR and MP, and the primary LSP
passes through this link. When this link is invalidated, the traffic can be switched to the
bypass LSP. In Figure 5-40, the primary LSP is R1->R2->R3->R4; the bypass LSP is
R2->R6->R3.
(Figure 5-40: link protection; the primary LSP runs R1->R2->R3->R4, the bypass LSP runs R2->R6->R3, R2 is the PLR, and R3 is the MP.)
z Node protection: In Figure 5-41, the PLR and the MP are connected through R3, and the
primary LSP passes through R3. The primary LSP is R1->R2->R3->R4->R5; the bypass
LSP is R2->R6->R4; R3 is the protected router. When R3 fails, the traffic can be
switched to the bypass LSP.
(Figure 5-41: node protection; the primary LSP runs R1->R2->R3->R4->R5, the bypass LSP runs R2->R6->R4, R2 is the PLR, and R4 is the MP.)
VLL FRR
VLL FRR implements network protection on an L2VPN. It fast switches user traffic to the
backup link after a fault occurs on the network. This improves the reliability of the L2VPN.
VLL FRR is also called VLL redundancy.
VLL FRR on the L2VPN includes fault detection, fault notification, and active/standby
switchover of links.
The NE40E provides various features that can be combined to implement VLL FRR. A
minimal switchover sketch follows this list.
z Fault detection
- BFD for PW quickly detects faults of the PW on the network side of the L2VPN.
- Ethernet OAM quickly detects faults on the attachment circuit (AC) side of the L2VPN.
z Fault notification
- LDP, BGP, or RSVP notifies the remote PE of a fault on the LSP/PW or the AC.
- BFD for LSP/PW notifies the remote PE of a fault on the LSP/PW or the AC.
- Ethernet OAM notifies the local CE of the fault.
z Active/standby link switchover
- On a symmetric network, the CEs perform the active/standby switchover.
- On an asymmetric network, the PEs work with the CEs to perform the active/standby switchover.
- With an end-to-end fault detection mechanism such as BFD, the local PE senses the fault of
the remote active PE within 200 milliseconds and then switches the outer and inner labels of
the remote active and standby PEs at the same time.
- VPN FRR switches only the inner labels, and its switching priority is lower than that of
LDP/MPLS TE FRR. Therefore, its fault-sensing time is longer than the protection switching
time of LDP/MPLS TE FRR.
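The sketch below is a simplified, assumption-laden model of the switchover step only: a PE holds an active PW and a standby PW, and a fault report from BFD for PW or Ethernet OAM on the active PW triggers the active/standby swap. The PW names are illustrative.

```python
# Illustrative active/standby PW switchover driven by a fault report (not NE40E code).
class PwProtectionGroup:
    def __init__(self, active_pw: str, standby_pw: str):
        self.active_pw = active_pw      # PW currently carrying user traffic
        self.standby_pw = standby_pw    # pre-established backup PW

    def on_fault(self, faulty_pw: str) -> None:
        """Called when BFD for PW (network side) or Ethernet OAM (AC side) reports a fault."""
        if faulty_pw == self.active_pw and self.standby_pw is not None:
            # Switch user traffic to the backup link; the faulty PW becomes the standby.
            self.active_pw, self.standby_pw = self.standby_pw, self.active_pw
            print("switchover: traffic is now carried on", self.active_pw)
        else:
            print("fault on", faulty_pw, "- no switchover needed")

group = PwProtectionGroup(active_pw="PW-to-PE1", standby_pw="PW-to-PE2")
group.on_fault("PW-to-PE1")   # fault detected on the active PW -> switch to PW-to-PE2
```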
5.11.8 NSR
Non-Stop Routing (NSR) ensures that, when a fault occurs on the control plane of a router
equipped with a slave control plane, the control planes of its neighbors do not sense the fault.
The neighbor relationships set up through routing protocols, MPLS, and other protocols that
carry services are not interrupted.
As an HA solution, NSR ensures that user services are not affected, or are affected as little as
possible, when a device fails.
IS-IS NSR
IS-IS NSR keeps real-time data strictly synchronized between the master and slave
MPU/SRUs. During a master/slave switchover, the slave MPU/SRU can therefore rapidly take
over the services of the master MPU/SRU without neighbors sensing any router failure.
BGP NSR
During a master/slave switchover, BGP NSR ensures continuous forwarding at the lower
layer and continuous advertisement of BGP routes. The neighbor relationships are not
affected, and neighbors are unaware of the switchover on the local router. This ensures
uninterrupted transmission of BGP services.
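Conceptually, NSR works because the slave control plane already holds a synchronized copy of the protocol state when the switchover happens. The following Python sketch models only that idea with invented structures; it is not how the NE40E implements NSR.

```python
# Conceptual sketch of NSR state backup between master and slave control planes
# (invented structures; not how the NE40E implements NSR).
import copy

class ControlPlane:
    def __init__(self, name: str):
        self.name = name
        self.bgp_sessions = {}   # peer address -> session state, e.g. "Established"
        self.bgp_routes = {}     # prefix -> next hop

class Router:
    def __init__(self):
        self.master = ControlPlane("master MPU")
        self.slave = ControlPlane("slave MPU")

    def update_state(self, peer: str, state: str, routes: dict) -> None:
        # Real-time backup: every change on the master is mirrored to the slave.
        self.master.bgp_sessions[peer] = state
        self.master.bgp_routes.update(routes)
        self.slave.bgp_sessions = copy.deepcopy(self.master.bgp_sessions)
        self.slave.bgp_routes = copy.deepcopy(self.master.bgp_routes)

    def switchover(self) -> ControlPlane:
        # The slave already holds the synchronized state, so sessions stay up and
        # the neighbors do not detect the failure of the old master.
        self.master, self.slave = self.slave, self.master
        return self.master

router = Router()
router.update_state("192.0.2.1", "Established", {"10.0.0.0/8": "192.0.2.1"})
print(router.switchover().bgp_sessions)   # {'192.0.2.1': 'Established'}
```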
6 Application Scenarios
(Figure: metro Ethernet application networking, in which the DSLAM, CMTS, and access switch connect to aggregation nodes, which connect through distribution nodes and P/PE nodes to the BRAS, the centralized PE, the SoftX, the VoD ES/CS, and the Internet)
The aggregation layer device accesses and forwards services through IP/MPLS. Individual
services are converged to the aggregation node through the DSLAM; corporate services are
converged at Layer 2 through a switch or are directly converged to the aggregation node.
z DSLAM: accesses individual services through permanent virtual circuits (PVCs). The
DSLAM adds VLAN or QinQ tags to services based on the types of users and services,
and is generally connected to the aggregation node.
z Switch: refers to the access switch that converges the Layer 2 corporate services to the
aggregation node.
z Aggregation node: refers to the distributed service node (PE). The aggregation node
distinguishes VLAN or QinQ user services, forwards Layer 3 services or VPN services,
or transparently transmits services to the BRAS or the centralized PE through IP/MPLS.
z Distribution node: converges services on the metro Ethernet. The distribution node
terminates IP/MPLS and transparently transmits services to the BRAS or the centralized
PE.
z BRAS: processes PPPoE login services of individual users.
z PE: refers to the centralized service node, which can also serve as the distribution node.
PE accesses the services that should be converged and processed, such as centralized
L3VPN services.
z P/PE: refers to a core forwarding node or an edge node on the backbone network. A P
or a PE rapidly forwards or converges services to the backbone network.
The NE40E can be applied to the aggregation node and the distribution node to guarantee the
access of individual services and corporate services.
Individual Services
z HSI service: The DSLAM adds QinQ tags to distinguish user services, and the outer VLAN
tag indicates the service type (see the classification sketch after this list). The NE40E at the
aggregation node transparently transmits the services through EoMPLS (VLL or VPLS) to
the distribution node, which can also be an NE40E. The distribution node terminates the
tunnel and then transparently transmits the QinQ data to the BRAS.
z VoD/VoIP service: The NE40E at the aggregation node terminates the VLAN or QinQ tags
added by the DSLAM, and forwards the services to the Layer 3 network or converges the
services into an L3VPN for transmission.
z BTV service: The NE40E at the aggregation node serves as the designated router (DR) of
Protocol Independent Multicast (PIM). The aggregation node receives the multicast data
distributed through PIM and sends the data to the DSLAM through the multicast VLAN.
Users join or leave a group through IGMP, and popular channels send data to the DR
through a static route.
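The following sketch illustrates the outer-tag-based classification mentioned for the HSI service: it parses the two VLAN tags of a QinQ frame and maps the outer VLAN ID to a service type. The TPID values follow common practice, and the VLAN-range-to-service mapping is purely an assumed example, not an NE40E default.

```python
# Illustrative QinQ tag parsing and outer-tag-based service classification.
# The TPID values are common ones; the VLAN-range-to-service mapping is an assumed example.
import struct

def classify(outer_vlan: int) -> str:
    """Map the outer (S-VLAN) tag to a service type (example ranges, not NE40E defaults)."""
    if 100 <= outer_vlan <= 199:
        return "HSI"
    if 200 <= outer_vlan <= 299:
        return "VoIP"
    if 300 <= outer_vlan <= 399:
        return "VoD"
    return "unknown"

def parse_qinq(frame: bytes):
    """Return (outer_vlan, inner_vlan) from an Ethernet frame carrying two VLAN tags."""
    outer_tpid, outer_tci, inner_tpid, inner_tci = struct.unpack("!HHHH", frame[12:20])
    assert outer_tpid in (0x88A8, 0x9100, 0x8100) and inner_tpid == 0x8100
    return outer_tci & 0x0FFF, inner_tci & 0x0FFF

# Dummy frame: 12 bytes of MAC addresses, outer tag (S-VLAN 150), inner tag (C-VLAN 1001).
frame = bytes(12) + struct.pack("!HHHH", 0x88A8, 150, 0x8100, 1001) + b"\x08\x00"
outer, inner = parse_qinq(frame)
print(outer, inner, classify(outer))   # 150 1001 HSI
```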
Corporate Services
z Corporate dedicated line: The corporate dedicated line is connected to the Layer 3 network
through the NE40E at the aggregation node.
z E-LINE: A PW, that is, an end-to-end L2VPN tunnel, is set up between the NE40E at the
aggregation node and the peer end. E-LINE services are transmitted to the peer end through
different tunnels according to the VLAN or QinQ tags identified at the aggregation node.
z E-LAN: The NE40E at the aggregation node creates VSIs and, after identifying the VLAN
or QinQ tag, forwards the service data to the corresponding VSI. The service data can also
be converged into E-LAN services through HVPLS, in which case the VSIs are created by
the distribution node.
z L3VPN: Services are converged into a Virtual Routing and Forwarding (VRF) instance at
the aggregation node, or are converged to the centralized service node for VRF forwarding
through HoVPN.
IP RAN Solutions
Services on a 2G RAN, mainly a small volume of voice services, are transmitted over TDM
links; usually one to three E1 interfaces on a BTS are connected to a BSC. Some wireless
carriers do not have a fixed network infrastructure and have to lease E1 lines from fixed-line
carriers, which is costly. Services between the BTSs and BSCs in the same city can instead be
transparently transmitted over TDM links carried on a metro Ethernet network.
For a 2G RAN, a Packet Switched Network (PSN) is constructed with NE40Es between the
BTSs and a BSC. The NE40E is connected to the BTSs in the downstream through E1/T1
links, and to the BSC in the upstream through n x E1/T1 links or 155-Mbit/s links, as shown
in Figure 6-2.
Mobile carriers worldwide are constructing RANs one after another. A 2G RAN is based on
TDM/SDH, so its bandwidth usage is low, it is hard to expand, and it is inflexible to configure.
IP RAN is therefore the trend. UMTS R99/R4 defines ATM as the protocol for transmitting
services between the Node B and the RNC, with E1 IMA interfaces connecting the two ends.
Figure 6-2 shows the networking diagram.
Figure 6-2 RAN services carried over a PSN (BTS-side E1 TDM x N links, access routers at both ends connected across MPLS over SDH/ME, and the BSC at the far end)
Deploying routers on a metro Ethernet MPLS network solves the problem of bandwidth
multiplexing. The Node B is connected to an NE40E that supports E1 IMA interfaces. After
the NE40E terminates IMA, the high-speed ATM cell flows are transparently transmitted
through ATM PWE3 to the NE40E at the RNC side. The NE40E at the RNC side then
distributes the high-speed ATM cell flows onto n x E1 links and sends the multiple channels
of low-speed cells to the RNC. To the Node B and the RNC, the NE40Es and the MPLS
network are transparent; the result is as if multiple E1 interfaces on the Node B and the RNC
were directly connected through TDM links.
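As a very rough illustration of the transport idea only (not the actual ATM PWE3 encapsulation defined in RFC 4717), the sketch below concatenates a number of ATM cells behind a dummy 4-byte control word to form one pseudowire payload, which is how many low-speed cell flows can share one high-speed packet path.

```python
# Highly simplified sketch of packing ATM cells into one pseudowire payload.
# The real ATM PWE3 encapsulation (RFC 4717) differs in detail; this only shows the idea
# of carrying many low-speed cell flows inside single packets on the PSN.
ATM_CELL_SIZE = 53

def pack_cells(cells, max_cells_per_packet=28):
    """Group cells into pseudowire payloads, each prefixed with a dummy 4-byte control word."""
    payloads = []
    for i in range(0, len(cells), max_cells_per_packet):
        group = cells[i:i + max_cells_per_packet]
        control_word = bytes(4)   # placeholder; a real control word carries flags and a sequence number
        payloads.append(control_word + b"".join(group))
    return payloads

cells = [bytes([n % 256]) * ATM_CELL_SIZE for n in range(60)]   # 60 dummy 53-byte cells
packets = pack_cells(cells)
print(len(packets), [len(p) for p in packets])   # 3 [1488, 1488, 216]
```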
(Figure: IEEE 1588v2 clock synchronization networking, with GPS time sources at both ends and boundary clocks (BCs) carrying 1588v2 over POS, GE, FE, and E1 links)
7.1 Benefits
7.1.1 System Configuration Mode
7.1.2 System Management and Maintenance
7.1.3 HGMP
7.1.4 System Service and Status Tracking
7.1.5 System Test and Diagnosis
7.1.6 In-Service Debugging
7.1.7 Upgrade Features
7.1.8 GTL
7.1.9 Miscellaneous Features
7.1.3 HGMP
The NE40E supports Huawei Group Management Protocol (HGMP), which is a cluster
management protocol developed by Huawei.
HGMP groups the Layer 2 devices connected to the NE40E into a unified management
domain, that is, a cluster. HGMP also supports automatic collection of the network topology
and provides integrated maintenance and management channels. In this manner, a cluster uses
only one IP address for external communication, which simplifies device management and
saves IP addresses.
information to locate faults. You can enable or disable the trace function through the tracert
command.
In addition, you can query the CPU usage of the SRU/MPU and the LPU in real time.
The debugging and trace functions of the NE40E classify information, and information of
different classes is directed to different output destinations according to the user
configuration. The output destinations include the console, the Syslog server, and SNMP
traps.
The NE40E also provides the Network Quality Analysis (NQA) function.
NQA measures the performance of the protocols running on the network and helps the
network operator collect network performance indicators, such as the total HTTP delay, the
delay of setting up a TCP connection, the DNS resolution delay, the file transfer rate, the FTP
connection delay, and the DNS resolution failure rate. By monitoring these indicators, the
network operator can provide users with services of different grades and charge them
accordingly.
NQA is also an effective tool for diagnosing and locating network faults.
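The sketch below is not the NQA implementation; it only illustrates, from a generic host, two indicators of the kind NQA collects: the DNS resolution delay and the TCP connection setup delay.

```python
# Not the NQA implementation: a host-side illustration of two indicators of the kind NQA collects.
import socket
import time

def dns_resolution_delay(hostname: str) -> float:
    """Seconds taken to resolve a host name."""
    start = time.monotonic()
    socket.getaddrinfo(hostname, None)
    return time.monotonic() - start

def tcp_connect_delay(host: str, port: int, timeout: float = 3.0) -> float:
    """Seconds taken to complete a TCP three-way handshake."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

if __name__ == "__main__":
    print("DNS resolution delay:", dns_resolution_delay("example.com"))
    print("TCP connection delay:", tcp_connect_delay("example.com", 80))
```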
System Upgrade
The upgrade process is optimized: the entire upgrade can be completed with one command,
which saves time. The upgrade progress is displayed during the process, and the results can
be viewed after the upgrade is complete.
Rollback
If the new system software fails to start the system during an upgrade, the system can be
started with the previous software that started it successfully.
The rollback function protects services against a failed system upgrade.
7.1.8 GTL
The NE40E carries an increasing number of software features, so software accounts for a
growing share of the total cost. The traditional mode of selling all features together, however,
cannot meet the requirements of users and carriers in the following aspects:
z Common users want to reduce the purchase cost.
z Users that need to upgrade their devices want to be able to expand device capacity and
choose service features as required.
To meet these requirements, the NE40E provides flexible authorization of service features.
The NE40E provides a license authorization management platform through the Global
Trotter License (GTL), which implements the authorization of service features. In this mode:
z Common users can purchase only the service features they require, which reduces the
purchase cost.
z Users that need to upgrade their devices can expand device capacity and add new service
features by applying for new licenses.
With GTL, the NE40E manages the L3VPN, MVPN, and 1588v2 features.
LLDP
At present, Ethernet technology is widely used on Local Area Networks (LANs) and
Metropolitan Area Networks (MANs). As networks grow, stronger Ethernet network
management capabilities are required. For example, Ethernet network management should be
able to automatically obtain the topology of interconnected devices and detect conflicting
configurations on different devices.
Most NMS software now provides an automatic discovery function to trace topology changes.
Most NMS software, however, can at best analyze the network-layer topology and group
devices into IP subnets. The NMS reports only the addition or deletion of devices; it cannot
determine which interfaces on one device connect to another device, and therefore cannot
locate a device or determine its operation mode.
A Layer 2 Discovery (L2D) protocol can discover precise information about the interfaces on
the devices and the interfaces that connect to other devices. An L2D protocol can also show
the paths between the client, switch, router, application server, and network server. This
detailed information helps locate network faults.
The Link Layer Discovery Protocol (LLDP) is an L2D protocol defined in IEEE 802.1ab.
With LLDP, a device stores status information about all its interfaces and can send its status
to neighbor stations; the interfaces can also send information about status changes to the
neighbor stations as required. The neighbor stations store the received information in a
standard Management Information Base (MIB) that is accessible through the Simple Network
Management Protocol (SNMP). The NMS can query this Layer 2 information from the MIB
and, as specified in IEEE 802.1ab, can also find unreasonable Layer 2 configurations based
on the information provided by LLDP.
When LLDP runs on the devices, the NMS can obtain the Layer 2 information about all
connected devices and detailed network topology information, which expands the scope of
network management. LLDP also helps find unreasonable configurations on the network and
report them to the NMS, so that incorrect configurations can be corrected in a timely manner.
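A minimal sketch of the LLDP idea follows: a device encodes its chassis and port identity as type-length-value (TLV) fields, and a neighbor parses them and keeps the result in a per-interface neighbor table of the kind an NMS would read from the LLDP MIB. Only three TLV types are modeled, and the identifiers are invented examples.

```python
# Minimal sketch of the LLDP idea (not a full IEEE 802.1ab implementation):
# a device encodes its identity as TLVs; the neighbor parses them into a table
# of the kind an NMS would read from the LLDP MIB.
import struct

CHASSIS_ID, PORT_ID, TTL = 1, 2, 3    # the only LLDP TLV types modeled here

def tlv(tlv_type: int, value: bytes) -> bytes:
    # LLDP TLV header: 7-bit type and 9-bit length, followed by the value.
    header = (tlv_type << 9) | len(value)
    return struct.pack("!H", header) + value

def build_lldpdu(chassis: str, port: str, ttl_seconds: int) -> bytes:
    return (tlv(CHASSIS_ID, b"\x04" + chassis.encode())   # subtype 4: MAC address (as text here)
            + tlv(PORT_ID, b"\x05" + port.encode())       # subtype 5: interface name
            + tlv(TTL, struct.pack("!H", ttl_seconds)))

def parse_lldpdu(data: bytes) -> dict:
    info, offset = {}, 0
    while offset < len(data):
        header, = struct.unpack_from("!H", data, offset)
        tlv_type, length = header >> 9, header & 0x1FF
        info[tlv_type] = data[offset + 2: offset + 2 + length]
        offset += 2 + length
    return info

# Device A advertises; device B stores the result per local interface (its neighbor table).
pdu = build_lldpdu("00e0-fc12-3456", "GigabitEthernet1/0/0", 120)
neighbor_table = {"GigabitEthernet2/0/0": parse_lldpdu(pdu)}
print(neighbor_table["GigabitEthernet2/0/0"][PORT_ID][1:].decode())   # GigabitEthernet1/0/0
```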
8 Technical Specifications
8.1.1 NE40E-X2
Table 8-1 Parameters of the NE40E-X2
Item | Description
Temperature (short-term) | -5°C to +55°C (Short-term means that the continuous working time does not exceed 48 hours and the accumulated time per year does not exceed 15 days; long-term refers to the contrary situation.)
Remarks | Limit of the temperature change rate: 30°C/hour
Storage temperature | -40°C to +70°C
Relative humidity (long-term) | 5% RH to 85% RH, non-condensing
Relative humidity (short-term) | 5% RH to 95% RH, non-condensing
Storage humidity | 0% RH to 95% RH, non-condensing
Long-term altitude | Lower than 3000 m
Storage altitude | Lower than 5000 m
8.1.2 NE40E-X1
Table 8-2 Parameters of the NE40E-X1
Item | Description
Relative humidity (short-term) | 5% RH to 95% RH, non-condensing
Storage humidity | 0% RH to 95% RH, non-condensing
Long-term altitude | Lower than 3000 m
Storage altitude | Lower than 5000 m
8.2.1 NE40E-X2
Table 8-3 Default configurations on the NE40E-X2
Item | Default configuration | Remarks
Processor | Dominant frequency: 1 GHz | -
SDRAM | 2 GB | -
CF card | 1 GB | The CF card within the MPU stores system files and does not support hot swap.
USB interface | USB 2.0 Host | The USB 2.0 interface is hot swappable and is used for software upgrade or temporary data access.
Switching capacity (bi-directional) | 80 Gbit/s | -
User interface capacity | 75.2 Gbit/s | -
Number of subcard slots | 8 | -
Number of MPU slots | 2 | -
Number of NPU slots | 2 | -
8.2.2 NE40E-X1
Table 8-4 Default configurations on the NE40E-X1
Item | Default configuration | Remarks
Processor | Dominant frequency: 1 GHz | -
SDRAM | 2 GB | -
CF card | 1 GB | The CF card within the MPU stores system files and does not support hot swap.
USB interface | USB 2.0 Host | The USB 2.0 interface is hot swappable and is used for software upgrade or temporary data access.
Switching capacity (bi-directional) | 40 Gbit/s | -
User interface capacity | 52 Gbit/s | -
Number of subcard slots | 4 | Slots for the LPUs (optional)
Number of MPU slots | 2 | -
Number of NPU slots | 1 | -
9 Compliant Standards
L2 protocol
RFC1216 Gigabit network economics and paradigm shifts
RFC1619 PPP over SONET/SDH prior to insertion into
SPE
RFC1717 The PPP Multilink Protocol (MP)
RFC2285 Benchmarking Terminology for LAN Switching
Devices
RFC2665 Definitions of Managed Objects for the
Ethernet-like Interface Types
RFC2674 Definitions of Managed Objects for Bridges with Traffic Classes, Multicast Filtering and Virtual LAN Extensions
RFC2863 The Interfaces Group MIB
RFC3020 MIB for FRF.16 UNI/NNI MFR
RFC3201 Circuit to Interface MIB
RFC3635 Definitions of Managed Objects for the
Ethernet-like Interface Types
RFC4087 IP Tunnel MIB
ITU-T G.703 Physical/electrical characteristics of hierarchical
digital interfaces
ITU-T G.704 Synchronous frame structures used at 1544, 6312, 2048, 8448 and 44 736 kbit/s hierarchical levels
ITU-T G.707 Network node interface for the synchronous
digital hierarchy (SDH)
ITU-T G.825 The control of jitter and wander within digital
networks which are based on the synchronous
digital hierarchy (SDH).
ITU-T G.823 The control of jitter and wander within digital
networks which are based on the 2048 kbit/s
hierarchy.
ITU-T G.824 The control of jitter and wander within digital
networks which are based on the 1544 kbit/s
hierarchy.
ANSI T1.105 Synchronous Optical Network (SONET) Basic Description Including Multiplex Structures, Rates, and Formats
ANSI T1.105.02 Synchronous Optical Network (SONET) Payload Mappings
L3 protocol
RFC2544 Benchmarking Methodology for Network
Interconnect Devices
RFC2668 Definitions of Managed Objects for IEEE 802.3
Medium Attachment Units (MAUs).
MPLS
RFC2205 Resource ReSerVation Protocol (RSVP) -- Version 1 Functional Specification
RFC2209 Resource ReSerVation Protocol (RSVP) -- Version 1 Message Processing Rules
RFC2210 The Use of RSVP with IETF Integrated Services
RFC2702 Requirements for Traffic Engineering Over
MPLS
RFC2747 RSVP Cryptographic Authentication
RFC2961 RSVP Refresh Overhead Reduction Extensions
RFC3031 Multiprotocol Label Switching Architecture
RFC3032 MPLS Label Stack Encoding
RFC3035 MPLS using LDP and ATM VC Switching
RFC3036 LDP Specification
RFC3037 LDP Applicability
RFC3063 MPLS Loop Prevention Mechanism
RFC3107 Carrying Label Information in BGP-4
RFC3209 RSVP-TE Extensions to RSVP for LSP Tunnels
RFC3210 Applicability Statement for Extensions to RSVP
for LSP-Tunnels
RFC3212 Constraint-Based LSP setup using LDP
(CR-LDP)
RFC3214 LSP Modification Using CR-LDP
RFC3215 LDP State Machine
RFC3270 Multi-Protocol Label Switching (MPLS) Support
of Differentiated Services
RFC3272 Overview and Principles of Internet Traffic
Engineering
RFC3443 Time To Live (TTL) Processing in Multi-Protocol
Label Switching (MPLS) Networks
draft-ietf-l2vpn-vpls-bgp-05 -
draft-ietf-l2vpn-requirements-04 -
draft-ietf-l2vpn-vpls-ldp-07 -
draft-ietf-pwe3-congestion-frmwk-01 -
draft-ietf-pwe3-dynamic-ms-pw-08 -
draft-ietf-pwe3-ms-pw-arch-04 -
draft-ietf-pwe3-ms-pw-requirements-07 -
draft-ietf-pwe3-oam-msg-map-07 -
draft-ietf-pwe3-redundancy-00 -
draft-ietf-pwe3-redundancy-bit-00 -
draft-ietf-pwe3-segmented-pw -
draft-ietf-pwe3-vccv-bfd-02 -
z AS/NZS 60950.1
z BS EN 60950-1
z ITU-T K.20
z GB4943
z FDA rules, 21 CFR 1040.10 and 1040.11
z IEC60825-1, IEC60825-2, EN60825-1, EN60825-2
z GB7247
z Telcordia GR-1089-CORE
B
BE Best-Effort
BGP Border Gateway Protocol
BGP4 BGP Version 4
C
CAR Committed Access Rate
DC Direct Current
DHCP Dynamic Host Configuration Protocol
DNS Domain Name Server
DS Differentiated Services
E
EACL Enhanced Access Control List
EF Expedited Forwarding
EMC ElectroMagnetic Compatibility
FE Fast Ethernet
FEC Forwarding Equivalence Class
FIB Forwarding Information Base
FIFO First In First Out
FR Frame Relay
FTP File Transfer Protocol
G
GE Gigabit Ethernet
GRE Generic Routing Encapsulation
H
HA High Availability
HDLC High-level Data Link Control
HTTP Hypertext Transfer Protocol
IPv4 IP version 4
IPv6 IP version 6
IPX Internet Packet Exchange
IS-IS Intermediate System-to-Intermediate System
ISP Internet Service Provider
ITU-T International Telecommunication Union - Telecommunication
Standardization Sector
L
L2TP Layer 2 Tunneling Protocol
LAN Local Area Network
N
NAT Network Address Translation
NLS Network Layer Signaling
NP Network Processor
NTP Network Time Protocol
NVRAM Non-Volatile Random Access Memory
O
OSPF Open Shortest Path First
R
RADIUS Remote Authentication Dial-In User Service
RAM Random-Access Memory
RED Random Early Detection
RFC Request for Comments
RH Relative Humidity
RIP Routing Information Protocol
RMON Remote Monitoring
ROM Read Only Memory
RP Rendezvous Point
RPR Resilient Packet Ring
RSVP Resource Reservation Protocol
RSVP-TE RSVP-Traffic Engineering
S
SAP Service Advertising Protocol
SCSR Self-Contained Standing Routing
SDH Synchronous Digital Hierarchy
SDRAM Synchronous Dynamic Random Access Memory
SFU Switch Fabric Unit
SLA Service Level Agreement
SNAP SubNet Attachment Point
SNMP Simple Network Management Protocol
SONET Synchronous Optical Network
SP Strict Priority
SPI4 SDH Physical Interface
SSH Secure Shell
STM-16 Synchronous Transport Module, level 16
SVC Switched Virtual Connection
T
TCP Transmission Control Protocol
TE Traffic Engineering
TFTP Trivial File Transfer Protocol
TM Traffic Manager
ToS Type of Service
TP Topology and Protection packet
U
UBR Unspecified Bit Rate
UDP User Datagram Protocol
UNI User Network Interface
UTP Unshielded Twisted Pair
W
WAN Wide Area Network
WFQ Weighted Fair Queuing