Module contents
Module Objectives
General Architecture
Hardware
Software
GUI
Network Interfaces
Flexi ISN is based on the NSN IPSO operating system, running on FlexiServer Blade
hardware. Flexi ISN inherits the well-proven Voyager user interface for managing the IPSO
operating system and the applications. Management is also possible with NSN NetAct.
Flexi ISN utilises multi-CPU hardware to provide a high-performance, scalable ISN
solution. At the same time it uses built-in high availability mechanisms to provide mobile
users with fault-tolerant access to services. Redundancy is transparent to the other
network elements because Flexi ISN hides its multi-CPU structure, appearing as a single
IP router and application platform.
Figure: structural concept of Flexi ISN (physical and logical views of the interfaces).
The logical structure of Flexi ISN is shown in Figure 2. Each processing unit runs its
own instance of the IPSO operating system. These units, Blades, are connected to each
other via internal networks, called Backplane networks. To switch packets in the internal
networks, the system includes two built-in Layer 2 switches.
Some of the Blades also have external network interfaces, which are used to integrate
Flexi ISN into the GPRS/3G network and to forward packets to and from the mobile
users.
The Blades have their specific roles in a functioning Flexi ISN system. In principle, all
important functions can be handled by more than one Blade, and thus the system is very
resilient to failures.
The performance-intensive tasks are distributed to multiple Blades. Flexi ISN offers
excellent performance and capacity, since one system can have as many as 10
processing units.
Hardware
The Flexi ISN is built into a chassis. The Telecommunication Server Chassis (TSC)
provides a holder for the actual processing units (Blades). A maximum of 14 units can be
installed in it, but some of the units have a dedicated function other than processing: the
so-called Switch Blades and Hard Disk Blades do not run IPSO or any software
applications. The maximum number of Blades that can be used for processing in Flexi
ISN is therefore 10.
The Chassis has its own power entry modules. The backplane in the Chassis provides
the physical connections between the blades.
Chassis cooling is provided by a separate cooling unit installed at the top. This unit is
called the Fan and Display Module (FADM) (Figure 4). It has multiple cooling fans, as well as
a small status display for system messages.
Cabinet
The rack where the chassis is installed is called the Cabinet. The Cabinet body comes with
grounding parts, a side panel set, adjustable feet, as well as the installation accessories.
Other items provided for neat and stable installations are rack dummy panels, chassis
support brackets, a cable storage shelf, and an anti-topple bracket. An optional door set is
also available.
One Cabinet accepts four chassis in total.
Blades
The blades can have different functions, determined by both hardware and
software. The CPU Blades are the processing units. They can be used in different roles;
the roles differ in software configuration and network connectivity only.
There are also other blades which have an embedded function: the Hard Disk Blade and
the Switch Blade. The features and differences of the blades are explained in the
following chapters.
Different numbers of blades are used in different element configurations. Dummy blades
are inserted into the empty slots to ensure proper ventilation.
CPI1 Blade
Figure: CPI1 Blade front panel (PMC expansion slot with GIPMC adapter, serial ports).
The CPI1 Blade is an independent processing unit, capable of running an operating system. In Flexi
ISN, each CPI1 Blade runs its own instance of IPSO. The PMC slot in the CPI1 Blade
accepts a GIPMC (optical) or ETHPMC (copper) network interface adapter for external
connectivity. All other connections to a CPI1 Blade take place via the chassis backplane.
Depending on which application software components the CPI1 Blade is running, and
whether it also has additional network interfaces installed in the PMC expansion slot, it
can have three different purposes in a Flexi ISN system:
Management Blade
Service Blade
Interface Blade
NOTE: There are different revisions of CPI1 blade hardware (CPI1, CPI1-A, -B and C).
Always check the compatibility of various versions before replacing HW.
HDF1-C Blade
Figure: HDF1-C Blade front panel (HD activity, HD fault, Power, State, and Hot Swap Enable indicators).
The Hard Disk Blade includes a hard drive. Access to the Hard Disk Blade takes place via
a Fibre Channel bus (FC-AL) in the Flexi ISN backplane. In theory, any blade installed in
the system has physical access to the Fibre Channel bus and thus to the Hard Disk Blade,
but in Flexi ISN only the Management Blade utilises the disk. The HDF1 Blade has a
configurable FC-AL address; the Management Blade recognises the HDF1 disk from its
FC-AL address.
SWSE-A Blade
Figure: interconnections between communication buses on the SWSE-A Blade (FC-AL hub with FC bypass and FC0/FC1 connectors, Ethernet switch ports towards CPU_1...CPU_N, GbE front ports P2-P4 with PS/LINK indicators, Eth MP maintenance port, IPMB, and the POWER, STATE, HOT SWAP, and ENABLE indicators).
The backplane switch is provided by the Switch Blade; SWSE-A hardware is used in Flexi
ISN. Two Switch Blades are installed, which makes a redundant connection
between the CPU Blades possible. The Switch Blade has a pre-installed Linux
operating system running, but it does not run any Flexi ISN software applications.
Hardware configurations
Hardware configuration   Entry   Medium   Large   Dual chassis
Management Blades          2       2        2         2
Hard Disk Blades           2       2        2         2
Service Blades             2       3        4        13
Interface Blades           2       3        4         7
Switch Blades              2       2        2         4
Flexi ISN supports three single-chassis hardware configurations, plus a dual chassis
configuration (Table 1). Positioning of the Blades into the chassis slots is
incremental: when upgrading to a bigger HW configuration, the already installed Blades
do not have to be moved.
The Entry level configuration supports up to 333,000 simultaneous PDP contexts with a
maximum 333 Mbit/s throughput (with 512 byte nominal packet size).
The Medium configuration supports up to 666,000 simultaneous PDP contexts with a
maximum 666 Mbit/s throughput (with 512 byte nominal packet size).
The Large configuration supports up to 1,000,000 simultaneous PDP contexts with a
maximum 1,000 Mbit/s throughput (with 512 byte nominal packet size).
The Dual chassis configuration supports up to 2,000,000 simultaneous PDP contexts with a
maximum 5,000 Mbit/s throughput (with 512 byte nominal packet size).
Note: when the average packet size is over 512 bytes, the throughputs of the different
Flexi ISN configurations are higher than those mentioned above. For example, the maximum
throughput of the large Flexi ISN configuration is 2.5 Gbit/s, which can be reached when
the packet size is 1460 bytes.
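The packet-size scaling in the note follows if throughput is (approximately) bound by a fixed per-packet processing rate: the packet rate implied by the 512-byte figure predicts the bit rate at larger packet sizes. A small sketch of the arithmetic (my own illustration, not from the source):

```python
# If throughput is packet-rate-bound, the sustainable packet rate implied by
# the 512-byte figure predicts the throughput at larger packet sizes.
def throughput_at(packet_bytes, ref_mbps=1000, ref_bytes=512):
    """Scale a reference throughput (measured at ref_bytes) to packet_bytes."""
    pps = ref_mbps * 1e6 / (ref_bytes * 8)   # packets per second at the reference size
    return pps * packet_bytes * 8 / 1e6      # Mbit/s at the new packet size

# Large configuration: 1000 Mbit/s at 512 B predicts roughly 2.85 Gbit/s at
# 1460 B; the quoted 2.5 Gbit/s shows per-packet cost is not the only limit,
# but the trend matches.
print(round(throughput_at(1460)))  # 2852 (Mbit/s)
```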
Hardware configurations: slot layouts
Figure: slot layouts of the Entry, Medium, Large, and Dual chassis configurations (CPI1 or CPI1-B/C processing blades, HDF1/HDF1-C disk blades, SWSE-A switch blades, and the FADM-A cooling unit; unused slots carry dummy blades). In a fully populated system the blade roles per slot are:

Chassis 1, slots 1-14:   SB IB IB SB MB HD SW SW HD MB SB IB IB SB
Chassis 2, slots 16-29:  SB IB IB SB SB SB SW SW SB SB SB IB SB SB

(SB = Service Blade, IB = Interface Blade, MB = Management Blade, HD = Hard Disk Blade, SW = Switch Blade.)
The two chassis are interconnected via the SWSE-A blades in slots 7 and 8. Both the
Fibre Channel and Ethernet buses are extended to the other chassis with external
cabling between the SWSE-As.
Upgrade to Dual Chassis Configuration
Figure: upgrade paths between CPI1 hardware revisions (CPI1 / CPI1-A to CPI1-B / CPI1-C).
Nokia Siemens Networks FICOM40_02 / 2009-03-04
Flexi ISN Configuration Alternatives in FI 4
Software
Flexi ISN 3.2 uses the IPSO 3.9.2NET operating system. IPSO provides platform
management, IP routing and forwarding functions, and hosts the upper layer applications.
The IPSO operating system runs on each CPI1 blade in Flexi ISN. One of the blades, the
Management Blade, handles the configuration of the complete ISN. It also takes care of
configuring the blades to run the specific software components for full ISN functionality.
Management Blade functions
The Management Blade runs the functions which are centralised for the whole Flexi
ISN system. In the Flexi ISN platform, the centralised functions of the Management Blade
are:
Runs O&M system
Handles disk access
Provides management interface to external control systems, like Voyager, NetAct, and
CNSM
Handles alarms
Delivers OS image and configuration data to other CPI1 Blades
Handles IP routing and distributes forwarding tables to other CPI1 Blades
At the Flexi ISN application level, the centralised functions of the Management Blade are:
CDR sending interface to Charging Gateways (Ga)
Lawful interception interface (X)
IP address pool management for PDP contexts
Loading IPSO from disk
Figure: the Management Blade (CPU_5) loads IPSO directly from the hard disk; the other blades (CPU_11, CPU_12, CPU_14) are diskless.
The Management Blade accesses the hard drive directly via the FC-AL bus and is able to
boot its operating system from disk. The other Management Blade does the same, using
its own HDF1 hard disk.
Loading IPSO from internal network
The diskless blades are able to load their operating systems after the Management Blade
has booted up. The Management Blade acts as a dhcp6 and tftp6 server in the internal
backplane network. The diskless blades load their OS images via the backplane network.
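As a generic illustration of this netboot exchange, the sketch below builds a TFTP read request as defined in RFC 1350, the kind of request a diskless loader sends to fetch an image. The filename is hypothetical; the actual image names and tftp6 details in Flexi ISN are internal:

```python
import struct

# TFTP read request (RRQ) per RFC 1350: 2-byte opcode (1 = RRQ), then the
# NUL-terminated filename and transfer mode. "ipso.img" is a made-up name.
def tftp_rrq(filename, mode="octet"):
    return struct.pack("!H", 1) + filename.encode() + b"\x00" + mode.encode() + b"\x00"

pkt = tftp_rrq("ipso.img")
# pkt starts with b"\x00\x01" (the RRQ opcode), then "ipso.img\0octet\0"
```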
Handling configuration data
Figure: the Management Blade keeps per-slot configuration data (slot11/config, slot12/config, slot14/config) on the hard disk and delivers it over the backplane to the diskless blades (IPSO-DL on CPU_11, CPU_12, CPU_14).
The Management Blade provides all configuration interfaces (Voyager, CNSM, SNMP,
and CLISH). It also keeps the configuration data on the hard drive. The operating systems
on the diskless Blades get their configuration parameters delivered from the Management
Blade whenever necessary.
Management Blade and HDF1 redundancy
Management Blade forms a pair with one HDF1 and runs active
The other MB uses another HDF1 and remains standby
If anything fails inside a pair, there is switchover to the other pair
Figure: the active pair (Management Blade CPU_5 with its HDF1 disk) and the standby pair (CPU_10 with its HDF1 disk), each running IPSO.
The Management Blade in slot 5 forms a pair with the HDF1 in slot 6; this pair then runs
active. The other Management Blade in slot 10 uses the other HDF1 in slot 9 and remains
standby. If anything fails inside a pair, either in the CPI1 Blade or in the HDF1 Blade, the
system switches over to the other pair. That is, a hard drive failure always triggers a CPI1
switchover and vice versa.
The system frequently synchronises the hard drive and CPI1 state information to the
standby pair.
O&M network interface redundancy
The Management Blade has at least one network interface active for providing a
management connection to the system. When the standby pair switches over to active, it
also activates the IP address previously used by the active Management Blade.
Because the Management Blades share their IP address, their interfaces must also be
connected to the same network. Interface configuration parameters are automatically
mirrored by the system.
Only the active Management Blade interface has its link up. In a switchover, the new active
Blade sends a Gratuitous ARP (G-ARP) message to advertise the interface change. The
neighbouring L2 switch must update its MAC table when the switchover occurs, and
other directly connected network elements, like L3 routers, must update their ARP tables
similarly.
O&M network interface in switchover
Figure: the neighbouring switch's MAC table entry changes in switchover:

MAC_addr       switchport
MAC_blade5     0/23
MAC_blade10    0/24
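The G-ARP advertisement can be illustrated by constructing such a frame by hand. This is a generic sketch of a gratuitous ARP reply (the MAC and IP addresses are made up), not Flexi ISN code:

```python
import struct

# Gratuitous ARP as the new active Management Blade would send after
# switchover: sender and target protocol addresses are both the shared O&M
# IP, so neighbouring devices refresh their MAC and ARP tables.
def gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    eth = b"\xff" * 6 + mac + struct.pack("!H", 0x0806)   # broadcast dst, ARP ethertype
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)       # Ethernet/IPv4, opcode 2 (reply)
    arp += mac + ip + b"\xff" * 6 + ip                    # sender IP == target IP
    return eth + arp

frame = gratuitous_arp(b"\x00\x11\x22\x33\x44\x55", bytes([10, 1, 1, 5]))
assert len(frame) == 42  # 14-byte Ethernet header + 28-byte ARP payload
```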
Distributed processing
Application specific tasks
PDP context handling and GTP tunnelling
Traffic analysis
Charging counters
Policy control
No direct external connections, all communication via backplane
The Service Blades are running tasks which are performance and capacity intensive.
With distributed processing, the system becomes easily scalable by increasing the
number of processing units.
The important application specific tasks which are handled by the Service Blades are:
PDP context handling and GTP tunnelling
Traffic analysis
Collecting charging information
Policy control (Gx interface functionality)
The Service Blades do not have external connections, so they rely entirely on the
backplane communications for data exchange.
Interface Blades
The Interface Blades provide the external network connections via GIPMC or ETHPMC
interface adapters.
All traffic in Flexi ISN is IP based. The Interface Blades have their own IP addresses for
example for the purpose of advertising dynamic IP routes, but most importantly they are
acting as forwarders towards the common loopback IP address in the Flexi ISN. The
application traffic like GTP usually takes place with this common loopback IP address.
The high availability options in Interface Blades are:
Dynamic IP routing
L2 RAG (Redundant Aggregation Group)
Dynamic IP routing is best done with OSPF protocol. It provides fast convergence in
failure cases, is multi-vendor compatible, and also supports traffic sharing when multiple
interfaces and Interface Blades are used.
L2 redundancy is targeted primarily for those environments where dynamic IP routing is
not feasible. With L2 RAGs it is possible to combine two physical interfaces into a single
aggregate interface. Behaviour of this function is similar to the interface redundancy
model of the Management Blades.
The Interface Blades automatically support load balancing for application traffic inside the
Flexi ISN system. They play an important role in distributing the application traffic and PDP
contexts to the multiple Service Blades. The number of Interface Blades is as
important as the number of Service Blades in terms of performance, since they also have
to recognise the individual PDP contexts and keep a database of them in their memory.
To guarantee good service to the mobile users, their packets should be switched as fast
as possible; multiple Interface Blades help with this.
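The document does not describe the balancing algorithm itself. As a purely illustrative sketch (not the actual Flexi ISN mechanism), a common way to spread per-context state over blades is to hash a stable per-context key, such as the TEID, over the available Service Blades:

```python
# Illustrative only: map a PDP context key (e.g. its TEID) to one of the
# Service Blades, so contexts spread evenly and any Interface Blade can find
# the owning blade without a central lookup. Not the real Flexi ISN algorithm.
def owning_blade(teid: int, service_blades: list) -> str:
    return service_blades[teid % len(service_blades)]

blades = ["slot1", "slot4", "slot11", "slot14"]  # example Service Blade slots
assignments = {teid: owning_blade(teid, blades) for teid in range(8)}
# consecutive TEIDs land on different blades, giving an even spread
```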
Flexi ISN application
The Flexi ISN 3.2 application package is installed and activated in each CPI Blade in the
Flexi ISN system.
The Flexi ISN system and application functionality is distributed into multiple processes,
which can be running in one or more CPI Blades. The software components will run as
application processes in the IPSO operating system in each blade, or in some cases as
kernel modules in IPSO.
For monitoring the important application functions and their high availability, the status of
the following software components can be checked using the graphical user interface.
Flexi ISN system and application components 1/2
Component   Function
pmd         Run-time database that holds mainly the statistical data. The pmd task is active on the service and management blades.
routemgr    Maintains the static routes that the Flexi ISN creates for access points of the type Normal IPv4, IPIP/GRE (not 'All'), Normal IPv6, and 6in4.
tracer      Collects tracing data.
pmd: sending it a SIGUSR1 signal dumps its contents. First find the PID with the command ps -ax | grep pmd, and
then use kill -USR1 <pid>. By default, the content is printed to the file /var/tmp/pmd_dump.txt.
The blade where this component (ippoold) runs provides the dynamic IP address pool
for the Access Points. The IP pool role runs in active or standby mode on the
Management Blades: the active Management Blade runs the active IP pool role.
The blade where the CDR sender (ggsncs) runs is responsible for sending Charging
Data Records (CDRs). The CDR role runs in active or standby mode. One CDR role is
always enabled on the Management Blade; the CDR role on the active Management
Blade is the active one. If it fails, the process on the standby Management Blade switches
to active.
Flexi ISN system and application components 2/2
Component   Function
conflictd   Monitors the uniqueness of all the SGSN tunnel endpoint identifiers (TEIDs) and NAS accounting session identifiers (ASIDs). In addition, the conflict manager sends GTP echo messages to all SGSNs that have created PDP contexts to the Flexi ISN system, and checks the recovery identifiers in the echo replies. If the same SGSN TEID or NAS ASID appears twice in Flexi ISN, this indicates that signalling messages have not been delivered successfully to Flexi ISN, and that obsolete sessions are still present in Flexi ISN. The conflict manager is responsible for mandating the service blades to remove the old and obsolete sessions. Whenever the conflict manager commands a service blade to terminate a session, it writes a log entry. The conflict manager commands the service blades to remove all PDP contexts originating from an SGSN if the recovery identifier of the SGSN changes, or if the SGSN does not respond to the GTP echo requests.
ggsntunnel  This is the Flexi ISN main process. It performs the following functionalities: tunnelling, signalling, Quality of Service, and charging. This process should always be running. The GGSN server task is active on the service blades.
The conflict manager conflictd monitors the uniqueness of all the SGSN Tunnel Endpoint
Identifiers (TEIDs) and NAS Accounting Session Identifiers (ASIDs) within the system.
The conflict manager is also responsible for sending GTP echo messages to all SGSNs
that have created PDP contexts to the Flexi ISN system. It then checks the recovery
identifiers in the echo replies. If the conflict manager identifies obsolete sessions present
in Flexi ISN, it requests the service blades to remove them. A log entry is written in such a
case. The conflict manager runs on the active management blade in the Flexi ISN.
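The duplicate-detection rule described above (a TEID reappearing marks the earlier session obsolete) can be sketched as follows. This is an illustration of the rule only, not the real conflictd implementation:

```python
# Sketch: track which session currently owns each (SGSN, TEID) pair; a second
# appearance of the same pair marks the earlier session obsolete, which is
# what makes the conflict manager order its removal.
def find_obsolete(events):
    """events: iterable of (sgsn, teid, session_id); returns obsolete session ids."""
    owner, obsolete = {}, []
    for sgsn, teid, session in events:
        key = (sgsn, teid)
        if key in owner:              # same SGSN TEID seen twice
            obsolete.append(owner[key])
        owner[key] = session
    return obsolete

dups = find_obsolete([("sgsn1", 100, "s1"), ("sgsn1", 101, "s2"), ("sgsn1", 100, "s3")])
# "s1" is obsolete: TEID 100 from sgsn1 reappeared as session s3
```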
Components for proxy analysis
Component Function
ita_httpd HTTP/WAP2 Analyser
ita_wapd WAP1.x Analyser
ita_rtspd RTSP Analyser
ita_smtpd SMTP Analyser
ita_ftpd FTP Analyser
The proxy analysis part of the Flexi ISN application functionality is handled by the
application processes listed above. These processes run in the Service Blades only. They
are needed if Flexi ISN is running in service aware mode and if proxy services are used.
Optional P2P modules
All CPU Blades are running Flexi ISN base software application
The Service Blades are running the optional P2P SW modules
Pluggable Protocol Modules (PPMs)
NOTE: Since Flexi ISN 4.0, a 3rd party deep packet inspection suite (Sandvine)
can be integrated with FI for P2P detection
Figure: the OS image, Flexi ISN SW, optional P2P SW modules, and configuration data files are distributed from the Management Blade (IPSO) to the diskless blades (IPSO-DL).
The L7 traffic analysis options include a set of so-called Peer-to-Peer (P2P) protocols. The
implementations of these protocols can potentially change often, and Flexi ISN may have to
be upgraded accordingly to guarantee compatibility. To help with this, the analysis functions are
carried out by Pluggable Protocol Modules (PPMs), which can be upgraded separately whenever
needed.
P2P software components in Service Blades
Component Function
PPM_bittorrent.o BitTorrent
PPM_dc.o Direct Connect
PPM_emule.o eMule/eDonkey
PPM_fasttrack.o FastTrack
PPM_msn.o Microsoft Messenger
PPM_oscar.o AOL Oscar
PPM_sip.o SIP
PPM_skype.o Skype
PPM_xmpp.o XMPP
PPM_ymsg.o Yahoo Messenger
Each PPM module analyses a specific P2P application. The PPM modules are loaded dynamically
into the IPSO kernel. You can verify which modules have been loaded with the modstat command:
# modstat
Type Id Off Loadaddr Size Info Rev Module
DEV 0 97 bd977000 27e1 bda361d0 1 kgtp
MISC 1 0 be3ae000 0008 be3af050 1 PPM_bittorrent
MISC 2 0 be3b1000 0008 be3b2090 1 PPM_dc
MISC 3 0 be3ba000 000c be3bc050 1 PPM_edonkey
MISC 4 0 be43a000 0008 be43b090 1 PPM_fasttrack
MISC 5 0 be43e000 0008 be43f130 1 PPM_msn
MISC 6 0 be441000 0008 be442050 1 PPM_oscar
MISC 7 0 be447000 0015 be44b730 1 PPM_sip
MISC 8 0 be44f000 000d be451090 1 PPM_skype
MISC 9 0 be456000 0008 be457070 1 PPM_xmpp
MISC 10 0 be459000 0008 be45a050 1 PPM_ymsg
#
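When checking a long modstat listing, it can help to filter out just the PPM entries, for example to confirm that all expected P2P analysers are loaded. A small helper, shown here against a shortened copy of the transcript above (not an official tool):

```python
# Pull the loaded PPM module names out of captured modstat output.
# SAMPLE is a shortened copy of the transcript in this module.
SAMPLE = """\
Type Id Off Loadaddr Size Info Rev Module
DEV 0 97 bd977000 27e1 bda361d0 1 kgtp
MISC 1 0 be3ae000 0008 be3af050 1 PPM_bittorrent
MISC 2 0 be3b1000 0008 be3b2090 1 PPM_dc
"""

def loaded_ppms(modstat_output: str):
    # the module name is the last column; keep only PPM_* entries
    return [line.split()[-1] for line in modstat_output.splitlines()
            if line.split() and line.split()[-1].startswith("PPM_")]

print(loaded_ppms(SAMPLE))  # ['PPM_bittorrent', 'PPM_dc']
```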
GUI
Flexi ISN can be managed via Voyager, which is the traditional management interface for
IPSO based systems.
Flexi ISN configuration
The Management Blade is used, via Voyager, to manage the configuration of the whole ISN. The Switch Blade and Hard Disk Blade configurations need not be managed, as they are strictly part of the embedded hardware.
As the Management Blade provides the Voyager interface, the top Voyager level is really
the Management Blade plus the common parameters for the complete Flexi ISN system.
The common parameters are automatically delivered to the other relevant Blades, so the
administrator does not have to worry about them.
Navigating to CPI1 Blade configuration
To configure parameters directly for the other Blades, it is possible to navigate to each
individual CPI1 Blade's configuration, but the options there are restricted to those that can
really be necessary.
Network Interfaces
Figure: SWSE-A front panel (FC0 and FC1 Fibre Channel connectors, Eth MP maintenance port, GbE ports P1-P4 with PS/LINK indicators, and the RESET, STATE, HOT SWAP, and ENABLE controls).
The SWSE-A Switch Blades provide the backplane communication buses for Ethernet
and FC-AL. They also have connectors on the front plane, which are directly connected to
these backplane buses. Although these connectors are physically external, they connect to strictly
internal networks and therefore must not be used. The external connections to the Flexi ISN
system are always provided via the GIPMC and ETHPMC adapters in the CPI1 Blades.
The RJ-45 connector on the SWSE-A front panel is a 10/100 Mbit/s Ethernet port connected
to the SWSE-A host operating system. By default, the host OS port has the IP address
10.0.0.1, and it can be used for maintenance purposes, e.g. firmware upgrades if
necessary. It is best used with a cross-over cable instead of being connected to any O&M
networks directly.
CPI1 Blades
All CPI1 Blades have two physical network interfaces connected to the backplane (eth1 +
eth2). The Flexi ISN system never uses these interfaces directly, since the communication
would not be resilient. OS backplane communication uses an aggregate interface (ae0 or
sfab0), which uses either of the physical backplane interfaces.
Management and Interface Blades have two additional physical interfaces in the GIPMC or
ETHPMC adapter (eth-s1p1 + eth-s1p2).
Backplane interface details (1/2)
The diskless Blades export their external network interfaces (GIPMC and ETHPMC) to
the Management Blade. The exported interfaces get a slot number identifier. This
centralised routing facility makes it possible to handle the Flexi ISN system as a single IP
router. This must be considered in network planning: the Blades are not independent
routers, although they each run their own instance of the IPSO operating system.
View from Management Blade
Figure: as seen from the Management Blade, a GIPMC interface exported from slot 2 appears alongside the blade's own ETHPMC interface.