
Enterasys Educational Services

Switching/NMS/Policy
Student Guide
Version 1.71

Terms & Condition of Use:


Enterasys Networks, Inc. reserves all rights to its materials and the content of the
materials. No material provided by Enterasys Networks, Inc. to a Partner (or Customer, etc.)
may be reproduced or transmitted in any form or by any means, electronic or mechanical,
including photocopying and recording, or by any information storage or retrieval system, or
incorporated into any other published work, except for internal use by the Partner and except
as may be expressly permitted in writing by Enterasys Networks, Inc.
This document and the information contained herein are intended solely for informational use.
Enterasys Networks, Inc. makes no representations or warranties of any kind, whether expressed
or implied, with respect to this information and assumes no responsibility for its accuracy or
completeness. Enterasys Networks, Inc. hereby disclaims all liability and warranty for any
information contained herein, and all the material and information herein exists to be used
only on an "as is" basis. More specific information may be available on request. By your
review and/or use of the information contained herein, you expressly release Enterasys from
any and all liability related in any way to this information. A copy of the text of this section is
an uncontrolled copy, and may lack important information or contain factual errors. All
information herein is Copyright Enterasys Networks, Inc. All rights reserved. All information
contained in this document is subject to change without notice.

For additional information refer to:


http://www.enterasys.com/constants/terms-of-use.aspx


10

A quick positioning comparison of Enterasys switches may be useful.


The A4 is Enterasys Networks' low-cost, entry-level Layer 2 switch. It delivers high-density,
high-availability switching via closed-loop 1 Gb stacking, redundant stack management,
and external RPS support for all family members.
The C series family offers port densities and Layer 2 features comparable to the B series. In
addition, the C series offers advanced software features such as routing and IPv6.


11

The C5 only requires one license, unlike the C3; this single license enables both IPv6 and
advanced routing features.
While the base features of the C/G series may seem similar to the K series, the K series
offers a superset of the advanced software features found on the C/G series (for example,
multi-user authentication and policy on a single port).
The S series goes beyond the K series in the number of users that can be authenticated and
given different policies. In addition, further Layer 3 features are available on the 150/155
modules.

12

13

14

15

16

Port mirroring is an integrated diagnostic tool for tracking network performance and security
that is especially useful for fending off network intrusion and attacks.
It is a low-cost alternative to network taps and other solutions that may require additional
hardware, may disrupt normal network operation, may affect client applications, and may
even introduce a new point of failure into your network. Port mirroring scales better than
some alternatives, is easier to monitor, and is convenient to use in networks where ports
are scarce.
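As a hedged illustration only (the command below follows the SecureStack-style CLI and is not taken from this guide, so verify the exact syntax in your platform's CLI reference), a simple one-to-one mirror could be created as follows, where ge.1.1 is the monitored port and ge.1.2 connects to the analyzer (both port strings are placeholders):

  set port mirroring create ge.1.1 ge.1.2
  show port mirroring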

17

18

Supported in a bonded chassis


19

20

21

Remote Monitoring (RMON) is a standard network management protocol that allows network
information to be gathered at a single workstation.
RMON 1 defines nine MIB groups that provide a much richer set of data about network usage,
including:
Statistics: Overall packet statistics, including destination type breakdown, error breakdown,
and frame size breakdown.
History: Records periodic "snapshots" of the information collected in the Statistics group. The
interval at which these snapshots are captured is normally user-configurable.
Alarms: Monitors user-selected statistics and compares them to user-defined rising and
falling thresholds. An alarm can be generated when a threshold is crossed. Any MIB object
defined as an integer can be compared to a threshold.
Events: Works hand-in-hand with the alarm, filter, and packet capture groups, providing a
means for defining responses to alarm conditions and successful packet captures; events
can also be used to enable and/or disable an action or set of actions that will automatically be
taken in response to an event.
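As an illustrative sketch (these show commands are assumptions based on the SecureStack-style CLI rather than commands taken from this guide), the Statistics and History groups can typically be inspected from the command line, using ge.1.1 as a placeholder port:

  show rmon stats ge.1.1
  show rmon history ge.1.1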

22

23

Packet Flow Sampling


The packet flow sampling mechanism carried out by each S-Flow Instance ensures that any
packet observed at a Data Source has an equal chance of being sampled, irrespective of the
packet flow(s) to which it belongs.
Packet flow sampling is accomplished as follows:
When a packet arrives on an interface, the Network Device makes a filtering decision to
determine whether the packet should be dropped.
If the packet is not filtered (dropped), a destination interface is assigned by the switching/
routing function.
At this point, a decision is made on whether or not to sample the packet. The mechanism
involves a counter that is decremented with each packet. When the counter reaches zero a
sample is taken.
When a sample is taken, the counter indicating how many packets to skip before taking the
next sample is reset. The counter is set to a new random integer, chosen so that the mean of
the sequence of random integers used over time equals the configured Sampling Rate.
Packet flow sampling results in the generation of Packet Flow Records. A Packet Flow
Record contains information about the attributes of a packet flow, including:
Information about the packet itself: the packet header, packet length, and packet encapsulation.
Information about the path the packet took through the device, including information relating
to the selection of the forwarding path.


24

Login Security Password: Used to access the device's CLI to start a Local Management
session via a Telnet connection or local COM port connection.
SNMP Community Names: Allow access to the device via a network SNMP management
application, such as Enterasys NetSight.
Host Access Control Authentication: Authenticates user access to Telnet management,
console local management, and WebView via a central RADIUS client/server application.
802.1X Port Based Network Access Control using EAPOL: Provides a mechanism via a
RADIUS server for administrators to securely authenticate and grant appropriate access to
end user devices directly attached to device ports.
MAC Authentication: Provides a mechanism for administrators to securely authenticate
source MAC addresses and grant appropriate access to end user devices directly attached to
device ports.
MAC Locking: Locks a port to one or more MAC addresses, preventing connection of unauthorised devices via the port.
Secure Shell (SSH): Provides secure, encrypted remote CLI access as a replacement for Telnet.
Access Control Lists (ACLs): Permit or deny access to routing interfaces based on protocol
and source IP address restrictions configured in access lists.
Denial of Service (DoS) Prevention
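As a hedged sketch of one of these features, MAC Locking is typically enabled globally and then tuned per port. The commands below are assumptions modelled on the SecureStack-style CLI (the port string and limit are placeholders) and should be verified against the configuration guide for your platform:

  set maclock enable
  set maclock firstarrival ge.1.1 1
  show maclock

The firstarrival setting limits port ge.1.1 to a single dynamically learned MAC address.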

25

26

27

28

29

30

There are various configuration and management options for Enterasys switches, which vary
by switch product family, including:
Local Management (LM)
NetSight
WebView and SSL
Telnet and SSH
All Enterasys switch products may be managed via their console or COM port for out-of-band
access to either menu-driven management screens or to a command-line interface. This is
commonly referred to as Local Management (LM). The network administrator must be local
to the device in order to manage it.
A device IP address is not required to manage the device through LM. The console port on a
device may be either an RJ45 or a DB9 connector, which may be connected to a VT-type
terminal, a PC running a terminal emulation application (such as HyperTerminal, PuTTY, or
Tera Term Pro), or to a modem.
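Before any of the in-band methods (WebView, Telnet/SSH, SNMP/NetSight) can be used, the switch needs an IP address, which is normally assigned from the LM console session. A minimal sketch using the SecureStack-style CLI (the addresses are placeholders and the exact syntax should be confirmed for your product family):

  set ip address 10.1.1.10 mask 255.255.255.0 gateway 10.1.1.1
  show ip address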


31


32

You must remember the type of port you are configuring.


If you are configuring a Gigabit port that is running at 100 Mbps, that port must still be referred to as ge.x.x.


33

Other examples :
fe.1.1-10:
100 Mbps ports 1 through 10 in chassis slot 1/Unit 1
ge.3.2:
1 Gigabit port 2 in chassis slot 3/Unit 3
tg.3.1:
10 Gigabit port 1 in chassis slot 3/Unit 3
In addition to fe, ge, tg, and fg, other port types include:
com for the COM (console) port
vlan for VLAN interfaces
lag for IEEE 802.3ad link aggregation ports
lbpk for loopback interfaces
vsb for hardware VSB ports
With the S and K series, routed VLANs will be seen as vlan.0.x.
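The same port-string syntax is used throughout the CLI wherever a port or range of ports is referenced. For example (a representative sketch, not taken from this guide):

  show port status ge.1.1-10
  show port status lag.0.1

The first command displays Gigabit ports 1 through 10 in slot/unit 1; the second displays the first link aggregation interface.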


34

The logout timer can be set to a longer value (for example, 60 minutes) or disabled entirely (set logout 0)
when configuring a switch in a lab, but for good practice it should be kept to a minimum.
For setting the system time, using the Simple Network Time Protocol (SNTP) is a better option than configuring the clock manually.
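As a hedged example (the SNTP commands are assumptions based on the SecureStack-style CLI and the server address is a placeholder; verify the syntax for your platform):

  set logout 15
  set sntp client unicast
  set sntp server 10.1.1.5

The first command keeps the idle logout at a modest 15 minutes rather than disabling it; the other two enable SNTP in unicast mode against a specific time server.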


35

A, B, C, D, G, and I series switches do not support a time-delayed reset (NetSight can be used for this); they support only an immediate reset [unit].
Note: clear config does not clear stacking IDs and switch priorities - clear config all does.


36


37

WebView is enabled by default on all products and generally requires Super User/Admin rights
to the managed device.
Secure Sockets Layer (SSL) works by using a private key to encrypt data for the transmission
of private documents over the Internet.
All but the S and K series support SSL.
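Where supported, WebView and SSL can typically be toggled from the CLI. The following is a sketch based on the SecureStack-style command set and should be treated as an assumption to verify:

  show webview
  set ssl enabled
  set webview disable

Enabling SSL serves WebView over HTTPS; alternatively, WebView can be disabled entirely if it is not used.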


38

Telnet is a terminal emulation protocol for TCP/IP networks. Once an Enterasys switch has a
valid IP address, you can establish a Telnet session to the device from any TCP/IP-based
node on the network. Management commands entered during a Telnet session are executed
exactly as if you were entering them via the console or COM port, and the management
screens seen during a Telnet session are identical to those seen via the console or COM
port.
An enhancement to Telnet is SSH. SSH is a protocol for secure remote login over an
insecure network. It provides a secure replacement for Telnet by encrypting
communications between two hosts.
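A common hardening step is to enable SSH and then restrict or disable inbound Telnet. The commands below follow the SecureStack-style CLI and are assumptions to confirm against your firmware's command reference:

  set ssh enabled
  set telnet disable inbound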


39

Enterasys periodically provides firmware upgrades and, less frequently, Boot PROM
upgrades. These are required to:
Address software incompatibilities
Introduce and integrate new features
Address problems and issues with previous firmware versions
Support new and future technologies
Enterasys switches are primarily upgraded from a Trivial File Transfer Protocol (TFTP) or BootP
server. Other methods of firmware upgrade include File Transfer Protocol (FTP) and a
serial connection via ZMODEM.


40

The online TFTP download process for upgrading firmware is as follows:


The operating image remains in LRAM while the new image is downloaded directly to the
flash memory.
Once the TFTP server and settings are initialized, the device will erase the contents of the
flash memory. (Caution should be taken in this state because with no image in flash memory,
the device would require a BootP if the device were reset for any reason.) The compressed
file will download directly to the flash memory.
Once the download is complete, the device will continue to operate using the old image until
the device is reset.
Upon reboot, the new image will be utilized via a normal boot-up.


41

The S and K series allow you to download and store multiple image files. This feature is
useful for reverting to a previous version in the event that a firmware upgrade fails to
boot successfully. When installing a new module in an existing system, the system's
operating firmware image needs to be compatible with the new module. If it is not, we
recommend that the system be upgraded prior to the installation of the new module;
otherwise, the new module may not complete initialisation and become operational, and it
will remain in a halted state until the running chassis is upgraded to a compatible firmware
version.
There are three ways to download firmware to the S and K series devices:
FTP download uses an FTP server connected to the network and downloads the firmware
using the FTP protocol. This is the most robust downloading mechanism.
A TFTP download uses a TFTP server connected to the network and downloads the firmware
using the TFTP protocol.
An out-of-band download is accomplished via the serial (console) port. By typing the
command download, you send the firmware image via the ZMODEM protocol from your
terminal emulation application.
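A hedged sketch of the TFTP method is shown below; the server address, path, and file name are placeholders, the copy and boot-selection syntax differs between product families, and the exact procedure should be taken from the firmware release notes rather than from this sketch:

  copy tftp://10.1.1.100/images/firmware.img system:image
  show boot system
  set boot system firmware.img
  reset

The copy command pulls the image from the TFTP server, set boot system selects which stored image is used at the next boot, and reset reboots onto the new image.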

42

43

44

Once you have configured a device, you can save that configuration to a file as backup or
use it to configure a new, similar switch. Uploading and downloading configurations is useful
for replicating configurations of switches of the same model, and for troubleshooting
purposes. This section of the module describes how each product family handles
configuration uploads and downloads.
First, let's define some terms.
Uploading a configuration from a switch means that the configuration currently on the
device is copied to a local server via the TFTP protocol.
Downloading a configuration means that you are taking a configuration file previously
uploaded from a switch and downloading it to a switch, which then takes on the properties
that were previously uploaded.
For best results, the switch should be physically identical to the switch that the config was
uploaded from. That is, it should be the same switch type, with the same sub-module types
installed, and should be running the same firmware. This last point is not an absolute rule, but
is based on the fact that interpretation of configuration files is somewhat firmware-specific.
The Enterasys-recommended way to back up switch configurations is to use Inventory
Manager's Archive utility. Note that each switch has a limited amount of storage for
configurations (the number of configurations a switch can store depends on the size of the
configuration).
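As a hedged example of a CLI-based backup (the command forms and server path are assumptions modelled on the SecureStack/S-Series-style CLI; NetSight Inventory Manager remains the recommended method):

  show config outfile configs/backup-jan
  copy configs/backup-jan tftp://10.1.1.100/backups/switch1.cfg

Restoring reverses the copy and then executes the file:

  copy tftp://10.1.1.100/backups/switch1.cfg configs/backup-jan
  configure configs/backup-jan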

45

46

47

48

Default VLAN and Number of Supported VLANs


By default, all ports on all Enterasys switches are assigned to VLAN ID 1, with the egress
status defaulting to untagged for all ports. How many VLANs are supported and the range of
VLAN IDs (VIDs) allowed varies depending on the device. IEEE 802.1Q specifies 4096 VLAN
IDs. There is a distinction between the range of VID values (0 through 4095) that a switch
vendor implements, and the maximum number of active VLANs a particular switch can
support. For example, a switch may only support 10 active VLANs, but may support VIDs
from anywhere in the full IEEE specified range.
The allowable user-configurable range for VLAN IDs (VIDs) is from 2 through 4094.
VID 0 is the null VLAN ID, indicating that the tag header in the frame contains priority
information rather than a VLAN identifier. It cannot be configured as a port VLAN ID (PVID).
VID 1 is designated the default PVID value for classifying frames on ingress through a
switched port. It may be changed on a per-port basis.
VID 4095 is reserved by IEEE for implementation use.
Each VLAN ID in a network must be unique. If a duplicate VLAN ID is entered, the Enterasys
switch assumes that the administrator intends to modify the existing VLAN.
Enterasys switches use the VLAN tag information contained in a data packet for all ingress,
forwarding and egress decisions.

49

50

For this example, assume that a unicast untagged frame is received on Port 3. The frame is
classified for VLAN 20. The switch makes its forwarding decision by comparing the
destination MAC address to information previously learned and entered into its filtering
database.
In this case, the MAC address is looked up in the FDB for FID 20.
Let's say the switch recognizes the destination MAC of the frame as being located out Port 4.
Having made the forwarding decision based on entries in the FID, the switch now examines
the Port VLAN egress list of Port 4 to determine if it may transmit frames belonging to VLAN
20. If so, the frame is transmitted out Port 4.
The VLAN egress configuration dictates whether the frame leaves tagged or untagged.
If Port 4 has not been configured to transmit frames belonging to VLAN 20, the frame is either
discarded or forwarded through another port.


51

For most networks, the following is the normal sequence you would follow to configure
VLANs:
Review existing VLANs
Create and name VLANs
Assign port VLAN IDs
Enable ingress filtering
Configure VLAN egress
Create a management VLAN
Enable/disable GVRP
Let's review each of these steps for Enterasys switches.

52

53

54

When creating VLANs, first assign a VLAN ID within the supported range of the device. This
is a numeric ID. You may also assign a VLAN name to each VLAN. This name is for the
administrator's use; it has no effect on the VLAN or its functioning. It is the VLAN ID that
counts.
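For example (a brief sketch; the VID and name are arbitrary, and the command forms should be confirmed in the CLI reference for your platform):

  set vlan create 20
  set vlan name 20 Engineering
  show vlan 20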


55

If you are configuring multiple VLANs, we recommend that you configure a management-only
VLAN. This allows a station connected to the management VLAN to manage the device. It
also makes management secure by preventing configuration via ports assigned to other
VLANs.
The process of assigning a management VLAN must be repeated on every device that is
connected to the network to ensure that each device has a secure management VLAN. When
configuring multiple devices, the VLAN names can be different, but the management VLAN
ID must be the same on each device. It is not necessary to configure a physical port for
management on each switch. Only those switches that will have a management station
attached need a physical port assigned to the management VLAN.
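A hedged sketch of such a configuration (VID 100, the port string, and the set host vlan command form are assumptions to verify for your platform):

  set vlan create 100
  set vlan name 100 Management
  set host vlan 100
  set vlan egress 100 ge.1.48 untagged

This creates VLAN 100, binds the switch's host (management) interface to it, and gives the port connected to the management station untagged egress on that VLAN.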


56

Before enabling VLANs for the switch, you must first assign each port to the VLAN group or
groups in which it will participate. Port VLAN IDs (PVIDs) determine the VLAN to which all
untagged frames received on one or more ports will be classified. This is a classification
mechanism that associates a port with a specific VLAN and is used to make forwarding
decisions for untagged packets received by the port.
For example, if port 2 is assigned a PVID of 3, then all untagged packets received on port 2
will be assigned to VLAN 3. If no VLANs are defined on the switch, all ports are assigned to
the default VLAN with a PVID equal to 1.
You should add a port as a tagged port (that is, a port attached to a VLAN-aware device) if
you want it to carry traffic for one or more VLANs, and the device at the other end of the link
also supports VLANs. If you want a port on a switch to participate in one or more VLANs, but
intermediate devices or the device at the other end of the link do not support VLANs, then
you must add the port as an untagged port (a port attached to a VLAN-unaware device).
On Enterasys switches, ports can be assigned to multiple tagged or untagged VLANs. Each
port on the switch is therefore capable of passing tagged or untagged frames.


57

Switch Portfolio
PVIDs are configured in the same way on all our switches. The PVID is used to classify
untagged frames as they ingress into a given port. When setting a PVID with the set port vlan
command, you can also add the port to the VLAN's untagged egress list (egress is discussed
later).
Example: If you assign ports 1, 5, 8, and 9 to VLAN 3, untagged frames received on those
ports will be assigned to VLAN 3. If the specified VLAN (VLAN 3 in this example) has not
already been created, this command (set port vlan) will create it, add the VLAN to the ports'
egress lists as untagged, and remove the default VLAN from the ports' egress lists.
The port egress type for all ports defaults to tagging transmitted frames. This can be changed
to forbidden or untagged. Setting a port to forbidden prevents it from participating in the
specified VLAN and ensures that any dynamic requests, either through GVRP or Dynamic
Egress, for the port to join the VLAN, will be ignored. (Dynamic Egress is discussed in a later
section of this module.) Setting a port to untagged allows it to transmit frames without a tag
header. This setting is usually used to configure a port connected to an end user or other
VLAN-unaware device.
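For example (a sketch built around the set port vlan command described above; the modify-egress keyword is an assumption to verify on your platform):

  set port vlan ge.1.1,ge.1.5,ge.1.8-9 3 modify-egress
  show port vlan ge.1.1-10

The first command sets the PVID of ports 1, 5, 8, and 9 to VLAN 3 and adds them to VLAN 3's untagged egress list; the second verifies the assignment.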


58

The egress process dictates where the packet is allowed to go within the VLAN. The ingress
process classifies received frames as belonging to one and only one VLAN. The forwarding
process looks up learned information in the filtering database to determine where received
frames should be forwarded.
Egress determines which ports will be eligible to transmit frames for a particular VLAN, or it
may be used to prevent one or more ports from participating in a VLAN. In general, VLANs
have no egress (except VLAN ID 1), until they are configured by static administration or
through dynamic mechanisms (GVRP, policy classification, or Enterasys Dynamic Egress).


59

If the frame format is not specified in the set vlan egress command, the port is automatically
added to the VLAN's egress list as tagged.
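For example (a sketch; the VIDs and port strings are arbitrary, and the clear vlan egress form is an assumption to verify):

  set vlan egress 20 ge.1.48 tagged
  set vlan egress 20 ge.1.1-4 untagged
  clear vlan egress 1 ge.1.1-4

Here the uplink ge.1.48 carries VLAN 20 tagged, the edge ports transmit it untagged, and the last command removes those edge ports from the default VLAN's egress list.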


60

On all platforms, the show vlan command displays the device's VLANs and only those ports on
a VLAN's egress list that are forwarding.
If a port possesses one or more of the following characteristics, it is not displayed by
the show vlan command, regardless of the administrative configuration of the device:
No link
Blocking due to spanning tree
Member of a LAG
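To see the administratively configured membership regardless of link or spanning tree state, a static variant of the command is typically available (an assumption to verify for your platform):

  show vlan static
  show vlan static 20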

61

62

This is quite a common configuration when using IP telephones in a network. It is an
example of using tagged and untagged devices off of a common port.
Most network managers will want to place Voice over IP (VoIP) traffic in a separate VLAN
from that used for end-user PCs.
The reason is that VoIP traffic needs to be treated differently in times of congestion, and
separating the two types of traffic into different VLANs also reduces the broadcast traffic
each sees.
The way this is achieved is that the PCs send untagged packets and the phones send tagged
packets. The Port VLAN Identifier (PVID) configured on the switch port classifies the PCs'
untagged packets into the data VLAN, while the phone's packets arrive already tagged and
the switch keeps them in the voice VLAN. For this to work, however, the switches must still
have all of the VLANs configured on them.
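A hedged configuration sketch for such a port, assuming VLAN 10 for data and VLAN 20 for voice (all values are placeholders):

  set vlan create 10,20
  set port vlan ge.1.5 10 modify-egress
  set vlan egress 20 ge.1.5 tagged

Untagged PC traffic on ge.1.5 is classified to VLAN 10 via the PVID, while the phone's frames arrive already tagged for VLAN 20 and are carried on the same port's tagged egress.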


63

An Enable Ingress Filtering parameter is associated with each port on the Enterasys
switches. Ingress Filtering is disabled by default per the IEEE 802.1Q standard because it is
very limiting as to what packets will be forwarded. It can be useful, however, in limiting
broadcasts. If ingress filtering is disabled and a port receives frames tagged for VLANs of
which it is not a member, these frames will be flooded to all other ports (except for those
VLANs explicitly forbidden on this port).
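Where stricter behaviour is wanted, ingress filtering can be switched on per port. A sketch assuming the SecureStack-style syntax (the exact keyword form, shown here as ingress-filter, should be verified in the CLI reference):

  set port ingress-filter ge.1.5 enable
  show port ingress-filter ge.1.5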


64

In this figure, Workstation A's packet has a VLAN ID tag of 7. It is received on port 1 of a
switch and it is a broadcast packet. The switch logic will check to see if port 1 is on the
egress list of VLAN 7.
If port 1 is on VLAN 7's egress list, the packet from Workstation A will be classified to VLAN
7, checked against the information in the filtering database and egress list, and transmitted
out the appropriate port.
If port 1 is not on the egress list of VLAN 7 (as in this figure), the packet will not be
transmitted. This configuration prevents Workstation A's broadcast packets from flooding
across VLAN 7 and wasting valuable bandwidth.
The process just described is referred to as ingress filtering and it is used to conserve
bandwidth within the switch by dropping packets that are not on the same VLAN as the
ingress port at the point of reception. This eliminates the subsequent processing of packets
that will just be dropped by the destination port. It affects tagged frames only and does not
affect VLAN independent BPDU frames.


65

GVRP employs three GARP timers:


Join Timer: Controls the interval between transmitting requests/queries to participate in a
VLAN group. The default value is 20 centiseconds (0.2 seconds).
Leave Timer: Controls the interval a port waits before leaving a VLAN group. It should be
more than twice the join time to ensure that the applicant can rejoin before the port actually
leaves the group. The default value is 60 centiseconds (0.6 seconds).
Leave All Timer: Controls how often a LeaveAll message is sent to VLAN group participants,
prompting them to re-register their group membership. This interval should be considerably
larger than the Leave Timer setting to minimise the amount of traffic generated by nodes
rejoining the group. The default value is 1000 centiseconds (10 seconds).
Management can prohibit ports from participating in GVRP, as well as change the timer
defaults. The default values for the GARP timers are independent of the media access
method or data rate. These values should not be changed unless you are experiencing
difficulties with GVRP registration/deregistration. If changed, they must be changed to the
same values on all switches in the network.


66

Switches 1 and 4 have VLAN 60 configured, and the edge ports to PC1 and PC2 have a
PVID of 60.
Switch 1's uplink is configured with VLAN 60 as tagged egress; the same applies to Switch 4.


67

Switch Portfolio
On all our switches GVRP is globally enabled by default.
Setting a port to forbidden prevents it from participating in the specified VLAN and ensures
that any dynamic requests (either through GVRP or Dynamic Egress) for the port to join the
VLAN will be ignored. If GVRP is enabled, VLANs will be propagated dynamically through
the network.
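For example (a sketch; the port string and VID are placeholders, and the forbidden setting reuses the egress command described earlier):

  show gvrp
  set gvrp disable ge.1.10
  set vlan egress 60 ge.1.10 forbidden

This checks global GVRP status, stops GVRP participation on ge.1.10, and explicitly forbids that port from being dynamically added to VLAN 60.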


68

Classification is discussed in more detail in the Traffic Management module of this course.


69

CONFIGURING PROTECTED PORTS


The Protected Port feature is used to prevent ports from forwarding traffic to each other, even
when they are on the same VLAN. Ports may be designated as either protected or
unprotected. Ports are unprotected by default. Multiple groups of protected ports are
supported.
Protected Port Operation
Ports that are configured to be protected cannot forward traffic to other protected ports in the
same group, regardless of having the same VLAN membership. However, protected ports
can forward traffic to ports which are unprotected (not listed in any group). Protected ports
can also forward traffic to protected ports in a different group, if they are in the same VLAN.
Unprotected ports can forward traffic to both protected and unprotected ports. A port may
belong to only one set of protected ports.
This feature only applies to ports within a switch. It does not apply across multiple switches in
a network.
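As a hedged sketch (the command form and group numbering are assumptions to verify against your platform's configuration guide):

  set port protected ge.1.1-10 1
  set port protected name 1 Clients
  show port protected

Ports ge.1.1-10 are placed in protected group 1, so they can no longer forward traffic directly to one another even within the same VLAN.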

70

71

72

73

The NetSight clients access the Console database to collect the device tree for their
individual usage in each Plug-in application. The server synchronizes all of the event logs
and makes them available to all the clients connected to the server.
The current applications that require a license are:
Console, which includes:
Inventory Manager
Policy Manager
Automated Security Manager
Network Access Control Manager
OneView
The client of Console will perform localised functions such as FlexView and Compass
searches. Since these functions normally are tactical diagnostic tools, the searches are kept
on the local machines, unless the user chooses to upload them to the server.
Encrypted Java Message Service and Enterprise JavaBean calls are made between the
client and server over SSL v3 (Secure Sockets Layer).


74

This window provides an area where you can paste the license information for each
application. Your license unlocks application functionality and allows clients to connect to the
server. If you have licensed multiple Enterasys NetSight applications, you can paste each
license into this window, or update the license in the Server Information > License tab of
NetSight applications later.
Click Next to continue.


75

The NetSight Server runs on a set of non-standard ports. These TCP ports (4530-4533) must
be accessible through firewalls for clients to connect to the server.
4530/4531 -- JNP (JNDI)
4532 -- JRMP (RMI)
4533 -- UIL (JMS)
Port 8080 (Default HTTP traffic) must be accessible through firewalls for users to install and
launch NetSight client applications.
Port 8443 (Default HTTPS traffic) must be accessible through firewalls for clients to access
the Server Administration web pages.


76

Enterasys NetSight Services are automatically stopped during a NetSight upgrade. The
NetSight Services Manager is provided on Windows platforms to allow easy access to the
associated services.
On Windows operating systems, the arrow in the NetSight Services Manager icon shows if
the server is running (green) or not (red); yellow indicates the server is starting.
To stop the running services, including JBoss and DeskTray, right-click the Services
Manager icon and select Stop running services; then go to the Server option and select Stop
Server and Database. Finally, click Exit to close the Services Manager.
On Linux, /etc/rc2.d contains the Enterasys NetSight background services.

77

78

The Server Information Window allows you to view and configure certain NetSight Server
functions including management of client connections, database backup and restore options,
locks, logs and licenses. It also provides access to the server log and server statistics. To
access this information you would choose Tools > Server Information or click on the icon as
shown above.
The first tab (Client Connections) shows who is currently connected to the server. If desired
you can disconnect users from the server.


79

Database Server Properties


Database server properties are used by the NetSight Server when it connects to the
database. The database is secured with a credential composed of a user name and
password. It is recommended to change this password; the Connection URL almost
never needs editing. You must restart both the NetSight Server and the client after you
change the database password.
Backup Button
Opens the backup database window where you can save the currently active
database to a file. If the NetSight Server is local, you can specify a directory path
where you would like the backup file stored. If the server is remote, the database will
be saved to the default database backup location.
Restore Button
Opens the restore database window where you can restore the initial database or
restore a saved database. Restoring an initial database removes all data elements
from the database and populates the NetSight Administrator Authorisation group with
the name of the user performing the restore. Both functions will cause all current
client connections and operations in progress to be terminated. You must restart both
the NetSight Server and the client following an initialise database operation. When
restoring a database, if the server is remote, you only have access to databases in
the default database backup directory.


80

The Locks tab lets you view a list of currently held operational locks. Operational locks are
used to control the concurrency of certain server operations. They are used to lock certain
functionality so that only one user can access it at a time. For example, you would not want
two users in Authorisation/Device Access both configuring SNMP at the same time.


81

The license tab displays a list of all NetSight applications that require a license, and their
respective license information.
You can also use this tab to change a license. You would change a license in the event that
you want to upgrade from an evaluation copy to a purchased copy or upgrade to a license
that supports more users/devices.


82

You can customize many of Console's features to suit your needs or the needs of your
network. You can set Suite-Wide options that affect all NetSight applications, as well as Console-specific options.
Options are set in the options window and like many of the Console windows, the information
found in the right panel depends on what you have selected in the left-panel. The left panel
lists suite-wide and NetSight Console options.

83

84

Status Polling displays how NetSight polls the network devices to ensure they are up and
running. The Maximum number of devices polled at once (100) may need to be lowered if
there are a large number of devices on the network; otherwise performance may suffer. Poll
Groups allow the administrator to have critical devices polled more frequently than non-critical
devices. From the main Console window, under the Properties tab with the Access
radio button selected, you can configure devices to use Fast, Default, or Slow polling.
System Browser provides the view where you can specify the web browser for NetSight to
use when launching web pages from NetSight applications. The browser selections displayed
depend on the web browsers installed on your system. Select Default to specify the system
default browser. This setting applies to the current logged-in user.
Web Server provides the view where you can specify the port ID for HTTP web server traffic.
This port must be accessible through firewalls for users to install and launch client
applications. By default, NetSight uses port ID 8080. If you change the port ID, you must
restart the NetSight Server for the change to take effect. This setting applies to all users. You
must be assigned the appropriate user capability to change this setting.
Web Updates checks for updates to the NetSight Suite; it does not check for newer
firmware.


85

In the CDP Seed IP tab, enter the IP address for your CDP seed device into the appropriate
column. Discover will use the seed device's CDP Neighbor Table to begin discovering all
CDP-compliant devices.
In the IP Range tab is a table where you specify the IP address ranges. Each row defines a
single range. When you first open the tab, a default range is displayed based on the IP
address of the Console workstation. To add a new range, right-click on an existing row and
select Insert Row. The position of a row determines the range's Precedence, as indicated in
the second column. Precedence determines which parameters will be used if a device is in
more than one range (the lower number yields higher precedence). To edit a range, simply
tab through the parameters and either enter a new value or use the drop-down list to select a
value.


86

You can restrict users from using specific applications like Policy Manager or Console. You
can also create more granular restrictions, like access to TFTP download, FlexViews, or MIB
Tools.
You can add users that you want to be able to use Console from the Authorisation
Configuration window.
It is necessary to have at least one administrative user.
The administrative user is capable of creating additional Console users and assigning their
access levels.
Console access levels are actually defined for groups, and users within a particular group are
granted the access level defined for that group.
The last three tabs of Authorisation/Device Access are used to configure SNMP.


87

The security deficiency of both SNMPv1 and SNMPv2 was finally fixed with the release of the
SNMPv3 standard. Designed to enable better support of the complex networks being
deployed in recent years and additional requirements of applications used in networked
environments, SNMPv3 defined standards for both enhanced security and administration.
The most noteworthy enhancement in SNMPv3 is the strong security protection it provides for
remote management, preventing SNMP itself from being used to automate the exploitation of
cascading vulnerabilities. As defined in RFCs 2571-2575, SNMPv3 added robust user-level
authentication, message integrity checking, message encryption, and role-based
authorisation.
Authentication - Determines that the message is from a valid source
Message integrity - Ensures data is collected without being tampered with or corrupted
Encryption - Scrambles the contents of a frame to prevent it from being seen by an
unauthorized source
Role-based authorisation - Provides access to specific MIB information
To understand how these security enhancements are implemented, we need to take a look at
the architecture of SNMPv3.


88

An SNMP security model is an authentication strategy that is set up for a user and the group
in which the user resides. A security level is the permitted level of security within a security
model. The three levels of SNMP security are: No authentication required (NoAuthNoPriv);
authentication required (AuthNoPriv); and privacy (authPriv). A combination of a security
model and a security level determines which security mechanism is employed when handling
an SNMP frame.
Configuring authentication and privacy for SNMPv3 is optional, but highly recommended.


89

To create a credential:
Click the Authorization/Device Access toolbar button, or choose Authorization/Device Access
from the Tools menu. Select the Profiles/Credentials tab in the Authorization/Device Access window.
In the lower half of the tab, click Add Credential. The Add Credential window opens.
Type a name (up to 32 characters) for your new credential and select an SNMP version. If you
select SNMPv1 or SNMPv2, the window lets you enter a community name as the password
for this credential. If you select SNMPv3, you can specify passwords for Authentication and
Privacy.


90

To create a profile:
Click the Authorization/Device Access toolbar button, or choose Authorization/Device Access
from the Tools menu. Select the Profiles/Credentials tab in the Authorization/Device Access window.
In the upper half of the tab, click Add Profile. The Add Profile window opens.
Type a name (up to 32 characters) for your new profile and select an SNMP version. If you
select SNMPv1 or SNMPv2, you can select credentials for Read, Write, and Max Access. If
you select SNMPv3, you can select credentials and security levels to be used for Read, Write,
and Max Access.
Click Apply. You can add another profile or click Close to dismiss the Add Profile window.
Your new profile(s) appears in the Device Access Profiles table.


91

Use the Profile/Device Mapping tab to specify which profile will be used by each authorization
Group when communicating with a specific device. The Read credential of the NetSight
Administrator profile is used for device Discovery and status polling. All other SNMP
communications will use the profiles specified here.
Devices selected from the left panel appear in the table in the right panel together with the
current profile assignments associated with each authorization Group. The Table Editor
button activates the editing row where specific profile selections can be made. To assign
profiles:
Click the Authorization/Device Access toolbar button, or choose Authorization/Device Access from the Tools menu.
Select the Profile/Device Mapping tab in the authorization/Device Access window.
Select one or more devices or device groups in the left (tree) panel.
Select one or more rows (devices) in the table and click the Table Editor button.
Click in the Table Editor Row for the authorization Group that you are configuring and select
a profile from the drop-down list.
Repeat steps 3 and 4 until you have finished assigning profiles.


92

When a device is created, discovered, or imported, it automatically becomes a member of the
appropriate system-created group:
All Devices - contains all the devices in the NetSight database.
Grouped By - contains five subgroups:
Chassis - contains subgroups for specific chassis in your network.
Contact - contains subgroups based on the system contact.
Device Types - contains subgroups for the specific product families and device types
in your network.
IP - contains subgroups based on the IP subnets in your network.
Location - contains subgroups based on the system location.
Additionally, you can add your own device groups and subgroups under the My Network
folder, however you cannot add groups under the system-created groups. A device group
cannot have the same name as another device group at the same level. You cannot rename
or delete a system-created group. A device can be a member of more than one group.
TIP: System-created groups are displayed with blue folders in the left-panel tree. Any group
you add will display a yellow folder.


93

As with device groups, logically grouping ports allows FlexViews to look only at certain
ports, for instance uplink ports or server ports.
You can add ports to the My Network or to any user-created group by choosing Add Port
Elements to Group from the right-click menu in a FlexView table. You can remove a port from
a specific group, or you can delete the port from the NetSight database, thereby removing it
from all groups where it is a member.
There are several ways to add ports to a group. You can add selected ports from a FlexView
table, drag and drop them in the tree, or copy and paste one or more ports from another
group.
Adding Selected Ports From a FlexView Table
Open a FlexView for the devices containing the ports that you want to add and click the
Retrieve button.
Click the right mouse button on the ports that you want to add to a particular group. The Port
Group Selection window opens.
Expand the tree and select the group where the selected port(s) will be placed.
Click Ok to confirm your choice and close the window. The ports are added to the selected
group and to the All Port Elements folder.
You can now select specific ports and use FlexViews to query information about those
specific ports. You should use the appropriate FlexView to view the type of port being
queried.


94

When a device or device group is selected from the left panel, the Properties tab shows a
table listing information about your selection. Columns included here display IP Address,
Display Name, Device Type, Status, Firmware, BootPROM, Base MAC, Chassis ID,
Location, Contact, System Name, Nickname, and Description.
Note: Port numbers are five digit numbers on the S and K series, the first digit is the slot
number, the second is the technology (1 = fe, 2 = ge, 3 = tg) and the last three numbers are
the port. So 12005 represents slot 1, Gigabit Ethernet, port 5.
The Table Editor row is available when the Show/Hide Table Editor button is toggled to make
the Table Editor visible. Columns that contain a writable MIB object will appear in the Table
Editor as an editable field or drop down list as appropriate for the object type (integer,
boolean, text, etc.). Changing the value in the Table Editor row alters the value for that entry
in the row selected in the table.
Clicking Apply sets the current writable table values on the devices in the currently
selected device group.
Additionally, the User Data 1, User Data 2, and Notes columns can be edited to provide extra
information about the device. The slide shows adding Notes.


95

When a device or device group is selected from the left panel, the Properties tab shows a
table listing date and time information for your selection. The Retrieve button attempts to
contact the selected device or device group to update the table information. The Properties
view uses the Profile for the Read Access Level of the customizations for the current user.
While retrieving information the button changes to a red octagon.
Select one or more table rows for the devices whose date/time you want to change. Clicking
in the Table Editor row for the Date/Time column opens the Change Date/Time window, where
you can set a specific date and time for the selected devices. When the date/time is changed
on a device, a green exclamation mark appears in that row to indicate that the new value
needs to be applied using the Apply icon.


96

IMPORTANT: The Port Properties table is not automatically updated. Instead, the table must
be refreshed using the Retrieve button to update the table information each time you access
this tab. The first time you access the Port Properties tab, the table is blank making it
necessary to click retrieve to display port information. If you leave the Port Properties tab,
then return, the content of the table will not have changed, even though conditions on device
ports may have changed. You must again retrieve the information.
Port Properties shows commonly used port-specific MIBs. The information shown can be
filtered down to:
Statistics - In/Out octets, errors, discards, unicast traffic
Configuration - Show/configure auto negotiation, duplex, speed
Capabilities - Show/configure the port's advertised speed and duplex, and show the
speed/duplex advertised by the remote port
Specific columns can be used to configure auto negotiation for selected ports. If auto
negotiation is disabled, you can manually configure the speed, duplex, and flow control
parameters of the selected ports. These columns are hidden or displayed according to your
selection from the Column Filter toolbar.
To configure parameters on multiple ports, enable the Table Editor and select the ports in the
table by swiping with your mouse or using the Ctrl or Shift keys. The information for the first
port selected will be displayed in the Table Editor row. Any changes that you make will be
applied to all of the selected ports. Use the drop-down lists in the Table Editor row to
manually configure the parameters when auto negotiation is disabled on the selected ports.
NOTE: If you manually configure these parameters, be sure that the remote port supports the
same mode. Otherwise, no link between the local and remote port will be achieved.

97

98

System-created device groups are permanent and cannot be moved or deleted. However,
you can add user-created device groups and populate them with devices as needed to create
as many custom maps as needed to manage your network.
The Map Creation Tool guides you through the decisions to create maps. After devices have
been selected, you choose the map attributes, which include grouping and polling options.
The left panel reflects your selections and indicates whether link discovery will be
accomplished.
Grouped Map creates multiple maps; the main (root) map has a separate icon for each device
group, and double-clicking an icon opens the corresponding sub-map. The sub-maps and the
devices in them mirror the device groups in Console.
Flat Map creates one map, showing all of the selected devices/device groups at the same
level.
Once maps are created (Grouped or Flat), sub-maps can be created or deleted.


99

You can customise your view of the topology by applying overlays and attributes to maps.
Overlays add visual context to the topology display such as link color, link weights, and
endpoint symbols that are meaningful to a particular logical view. For example, selecting the
Spanning Tree overlay for a physical map view identifies root ports, active links, and Root
Bridges in the view using colors for links and shapes superimposed on ports.
Attributes are properties associated with map views that allow you to manage information.
The coordinates of a mapped object, the color and weight of a type of link, the background
image displayed on a view, sub-maps, and the default behavior when nodes are discovered
are all attributes of a particular map view.
Maps can be populated manually or automatically, using the Create Map tool. You can add
images, text for descriptive labeling and a variety of symbols to your map. The Edit menu at
the top of the Map, provides tools for selecting and adding symbols and graphic elements and
a variety of alignment and sizing tools. Right-click menus let you manage objects in a map,
delete objects, or display object properties.


100

Map Creation Summary


This view gives you the opportunity to review your map configuration prior to creating the
map. You can change your settings by clicking in the left panel on the step where you want to
make a change. Once you are satisfied with your map configuration, click Finish to create the
map. Topology Manager performs a discovery of the area of your network that you are
mapping and shows the results in the Discovery Results window.
Discovery Results Window
Topology Manager performs a discovery when a map is created, when an existing map is
opened, or when you refresh your map. Newly discovered devices and links as well as
network elements that no longer exist are listed in this window. You can selectively (by
checking rows) add/remove the discoveries to your map.


101

In this view, you set the poll interval for Topology Manager's poll groups (More Frequent,
Default and Less Frequent). These groups correspond to the poll groups in Console, but the
frequencies set here determine the poll frequency for each group used when retrieving device
status in Topology maps. The interval for individual poll groups can be set according to your
network's needs. Keep in mind that these values affect devices while they populate an open
map.


102

Polling
Update Overlay Data - Refreshes the overlay information for all submaps in the current map.
Update Submap Overlay Data - Refreshes the overlay information in the current submap.
Rediscover Network - This option reruns the discovery process.
Synchronize Map to Console Groups - This option lets you update the groups in your map to
match changes to the groupings that were selected. This option is only available for grouped
maps.
Remove Missing Groups and Devices - Removes any groups/devices that no longer exist in
the Console groups that were used to create the map.
Descend Sub Groups to perform actions - Performs the above actions recursively.
Add Devices to Submap - Opens the Add Devices to Current Submap window showing a
device tree containing all of the devices that have been modeled in the NetSight database.
You can expand the tree to select specific devices/device groups that you want to add to the
current submap.
Cancel Polling - This option is only active while the Topology Server is actually polling. It
aborts the current polling operation.
Polling Statistics - Opens the Polling Statistics window where you can view both a summary
of polling statistics and statistics for individual devices.


103

You can use Compass to search one or more devices or device groups selected in the
Console left panel. If you do a search on a user-created group that contains interfaces, the
whole device on which the interface is located will be searched. The search is based on the
following:
The selection you make in the Console left panel (Search Scope)
The Search Type you select on the Compass tab
The Search Parameters you provide on the Compass tab
The Search Log tab displays a log of the progress of the search and notifies you of
unsupported devices. The Results tab displays the results of the Compass search. You can
customize table settings and find, filter, sort, print, and export the information in the Search
Log and Results tabs. Access these Table Tools through a right-click on a column heading or
anywhere in the table body.
Here a search was done on 172.26.2.200, a PC in the lab network. Compass works by
polling various MIBs on each switch that is selected for the search and then displays the
results. The column Active indicates that the user has been seen recently on the indicated port; this information typically comes from one of the dot1* MIBs. Notice the green check marks indicating that the PC could be located off port fe.1.12 on 172.10.1.101.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

104

If you provide specific search parameters, Compass returns information on those parameters,
if it finds them within the search scope. If you do not provide specific search parameters,
Compass returns information on everything within the search scope.
Search Type: For a Search Type of Auto, Compass determines whether the entry is an IP address (four octets) or a MAC address (six octets); if the entry is neither, Compass assumes it is a User Name. The entry determines which MIBs will be polled.
All: For the Search Type of All, Compass returns all IP, MAC, and user data from the MIBs. This is the equivalent of doing a search with Auto but leaving the Address field blank.
A search on an IP address that returns different MAC addresses indicates DHCP reassignment (the Node/Alias table does not time out), in which case you may want to do a Select All on the results, then right-click and choose Delete Node/Alias Entries.
When the Search Type is set to IP Address, a PING button appears so you can force the IP to generate some traffic before doing the search.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

105

Accessed from Console, Tools > Options > Compass or from the Options button within the
Compass tab.
For larger networks you may want to increase the Number of SNMP Retries to ensure getting
the most information.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

106

When Console is initially installed, the Interface Summary tab is accessible in the right panel.
It is one of many FlexViews available with Console. In Console, you can use the FlexView
Properties window to customize pre-defined views and create your own FlexViews to provide
the kind of information you need to manage your network.
These views provide information and configuration capabilities across the entire system. The
FlexView tables can be filtered, searched, and sorted, making it possible to view specific
network conditions: for example, the top ten instances of an object such as the highest CRC
count on ports or the highest packet transmissions by port.
One or more FlexViews can be "Floated" into a separate window by clicking in a blank area
of the FlexView toolbar and dragging the FlexView out of the Console main window. This
allows viewing information from different FlexViews at the same time.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

107

Predefined FlexViews allow you to view/configure CDP, FST, Link Aggregation, PoE, RMON,
and many other functions. Highlighting any FlexView will show a short description as shown
above with Broadcast Suppression. The folders represent directories that contain related
FlexViews.
The Export Catalog button creates a file that lists all FlexViews and their descriptions for your
reference.
FlexViews you have created are saved to a directory called My FlexViews that exists inside
the Console directory to preserve your FlexViews during upgrades.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

108

Export Type allows you to automatically export FlexView data (HTML or CSV) every time the table is refreshed. Data is exported to the directory specified in the FlexView Options. For example, you can select a FlexView that contains columns of various errors and set a filter to show rows that contain greater than zero errors.
Use MaxAccess/SuperUser - When checked, this FlexView will use the Max Secure or SuperUser passwords for access to retrieve information and set values on devices.
Read Only disables the table editor for this FlexView.
Hide instance column - This is the Interface column, which cannot be deleted but can be hidden.
Enable event notification - When checked, the information in the table can be used with the table filter feature to create an alarm for a specific condition.
Edit or add notes for your FlexView - Use this text field to create a detailed description of this FlexView.
FlexView Editing Instructions - Use this text field to provide detailed instructions for how this FlexView should be edited by the FlexView Guided Editor or Table Editor.
Column Definitions Table - This table shows how the attributes for each of the columns are configured for this FlexView. Every FlexView contains three permanent columns (ReqID, IP Address, and Interface). When creating a new FlexView, this table contains only the three permanent columns. Columns can be repositioned by clicking the heading for a column and dragging it to the left or right.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

109

The Columns tab in the FlexView Properties window lets you define the content and
arrangement of information in your FlexViews. You can define columns that present the
values for particular MIB objects, or create expressions that combine specific MIB objects to
present information that shows the relationship between those objects. With SNMP selected,
the Columns tab lets you configure columns to show the values for specific MIB objects.
When Expression is selected, the Columns tab becomes an expression editor, providing
functions that allow you to combine the values of specific MIB objects. For information on
creating FlexViews with expressions, please refer to the help documentation.
Any MIB object copied to the directory defined above is available to be included in existing or
newly created FlexViews. This allows you to extend NetSight's capability to manage other vendors' hardware.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

110

The Table Editor row is visible when the Show/Hide Table Editor button is toggled to make
the Table Editor visible. Columns that contain a writable MIB object will appear in the Table
Editor as an editable field or drop down list as appropriate for the object type (integer,
Boolean, text, etc.). Changing the value in the Table Editor row alters the value for that entry
in the row(s) selected in the table. The Table Editor feature cannot be used at the same time
as the Guided Editor.
As values are changed for your selected columns, a green exclamation point marks the cells
that have been changed (but not Applied) and the Apply button becomes active. Clicking the Show/Hide Table Editor button at this point will cancel your changes, restore the original
values, and hide the Table Editor. Clicking Apply sets the values that you've changed in the
selected devices and hides the Table Editor row. If the set is not successful, a red X appears
in the rows where the set has failed.
CAUTION: Enforcing certain MIB objects can disable devices and cause interruptions to
network operation. Do Not enforce MIB values unless you are sure of the outcome.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

111

Remember that we discussed setting the Options for the NetSight suite in the Getting Started
module. One of the tasks you must accomplish is setting the TFTP server root directory and
IP address prior to using the TFTP capabilities in Console. If there are multiple NICs on the
server and the wrong address is used, TFTP will fail. The Root Path defines where the TFTP
application has access to the hard drive. This can be configured through Options > Services
for NetSight Server. The Full Image Path points to the location of the image on the Server; the image must exist within the Root Path.
The Firmware Image Download window enables you to download a firmware image file to a
single device. You must have one TFTP Server running to perform the download operation.
To access the Firmware Image Download window from the main Console window, right-click
the device in the left panel and select Firmware Image Download from the menu. From the
Device Manager, you can select Utilities > Firmware Image Download from the Device View
menu bar.
The S- and K-Series, and the A, B, C, D, G, and I switches can support multiple images. If the switch does not have enough memory to hold another image, the download will fail.
Note: Information is populated in this window through MIB queries. Last Server IP and
filename might not be available due to MIB support in the firmware currently on the device.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

112

The Configuration Upload/Download window provides a way to upload configuration files from devices to save them elsewhere as backups, or to download configuration files to devices.
Using these functions, you can copy configuration files from one device to another. Files are
transferred using TFTP; therefore, you must have a TFTP Server running to perform the
upload or download. To access the Configuration Upload/Download window from the main
Console window, right-click the device in the left panel and select Configuration
Upload/Download from the menu. In Device Manager, select Utilities > Configuration
Upload/Download from the Device View menu bar.
Caution: Devices will reset after a configuration file has been downloaded.
Note: Information is populated in this window through MIB queries. Last Server IP and
filename might not be available due to MIB support in the firmware currently on the device.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

113

The menus in the Device Manager provide access to tools to accomplish configuration tasks
and to change the information available in the view.
Device Menu: Device information, VLAN, MIB II, Bridge, Ethernet Port, Configuration,
Broadcast Suppression
View Menu: Link, Admin, Operator, Load, Errors, I/F Mapping, I/F Speed, I/F Type
Utilities Menu: Firmware Image Download, Configuration Upload/Download
FlexViews are preferred over Device Manager because you can view and configure multiple
devices at one time.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

114

Launch the VLAN Elements Editor from the icon shown in the VLAN tab. The left panel
contains a tree hierarchy showing all of the VLANs that have been modeled in the NetSight
database. The right panel lists the currently defined VLAN models and indicates the number
of VLAN Definitions and Port Template Definitions that exist for each model. You are
provided with one VLAN model to start, the Primary VLAN Model, which is pre-populated with
a Default VLAN (VID 1) and a default Port Template. When a Port Template Definition is
selected in the left panel, the Port Template Definitions view appears in the right panel. When
a VLAN Definition is selected in the left panel, the VLAN Definitions view appears in the right
panel.
You can define the Primary VLAN model with VLAN definitions and port templates, and/or
you can create other VLAN models. Multiple VLAN models can be created, but only one
VLAN Model can be used in a VLAN domain.
To create a VLAN model:
Select the VLAN Element Editor from the Tools menu.
In the left panel, right-click the VLAN Elements folder and select Add VLAN Model from the
menu. This adds a "New VLAN Model" under the VLAN Elements folder, with its name
highlighted.
Type a name for the newly created model, or leave the new name as is, and press Enter.
You can now create VLANs and port templates for the VLAN model.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

115

Creating a VLAN adds a VLAN to a model's VLAN Definitions folder. It also automatically
creates a port template in the same model, with the new VLAN's VID set as the PVID.
Console provides you with one Default VLAN (VID 1) for the Primary VLAN Model and for
any other model you create. You can define this VLAN, and/or you can create and define
other VLANs.
To Create VLANs:
Open the VLAN Element Editor from the icon in the VLAN tab
In the left panel, expand the VLAN Elements folder, expand the VLAN model whose VLAN(s)
you want to create, then select the VLAN Definitions folder. The VLAN Definitions window
appears in the right panel.
In the VLAN Name text box in the lower portion of the Properties tab, change the name of the
VLAN to fit your requirements
If required, change the VID for the VLAN in the VLAN ID box
The VLAN retains the properties of the previously displayed VLAN - Edit these as needed
When you create a new VLAN, a new port template is automatically added to the VLAN
model, with the new VLAN's VID set as its PVID. You can also create your own port
templates.
Once a VLAN is defined, you can compare it to the settings on selected devices, update the
model from device VLAN settings, and/or enforce the VLAN on selected devices.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

116

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

117

The Device view of the VLAN tab enables you to do all of the following:
Compare model VLAN definitions with VLAN settings on devices using the verify operation
Update NetSight's model VLAN definitions with VLAN settings from devices
Write model VLAN definitions to devices using the enforce operation
To access the Device view of the VLAN tab, select the device(s) or group(s) of interest in the
left panel. Then select the VLAN tab in the right panel and confirm that the Device radio
button is selected. The Device view of this tab consists of an upper panel and a lower panel.
Use the panel control buttons to control the display of the two panels.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

118

The Advanced Port view of the VLAN tab enables you to do any or all of the following:
Compare port templates with device port settings with a verify operation
Update port templates with port VLAN settings
Write port templates to ports through an enforce operation
To access the Advanced Port view of the VLAN tab, select the devices or groups of interest in
the left panel. Then select the VLAN tab in the right panel and the Advanced Port radio
button. The Advanced Port view of the VLAN tab consists of an upper panel and a lower
panel. The table in the upper panel displays port VLAN information for the devices selected in
the left panel. It also indicates whether there are discrepancies between the VLAN settings on
the ports and those in the port templates in the selected VLAN model. Ports on which
differences are detected are marked in the table by a red not-equal sign.
To compare the egress state as defined in a port template with the current and static egress
states of a port, select the port in the upper table and the port template in the lower left table,
and click the Detail button to open the VLAN Egress Details window.
In addition to pushing a Trunk Port Template to a LAG, it should also be applied to the individual ports in the LAG in case the LAG should fail. Remember, user ports can also be configured using Port Templates and the Advanced Port window.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

119

The Basic Port view of the VLAN tab enables you to view the port VLAN settings on selected
device(s) in table form. You can select a VLAN port template to enforce to some or all of the
ports in the table, or you can edit port data and enforce the individual changes.
Basic Port view on the VLAN tab is like any other FlexView:
To create user ports:
Click the Show/Hide Table Editor icon
Highlight the ports you wish to configure
Make the changes in the Table Editor
Click the Apply button
In this screen, ports fe.1.10 through fe.1.15 are being set to the Green VLAN: frames will egress in Untagged format, Ingress Filtering is disabled, the default priority for all incoming frames is 5, and the ports will accept both Tagged and Untagged frames once the changes are applied with the Apply button.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

120

NetSight Event View lets you view alarm, event, and trap information for Console, network
devices, and other NetSight applications. Each tabbed view in the Event panel lets you scroll
through the most recent 10,000 entries in the logs that are configured for that view. A
Console tab, showing Console events and a Traps tab that captures traps from devices
modeled in the NetSight database are provided when Console is initially installed. The Syslog
tab shows events from devices that are configured to use the NetSight Syslog Server. You
can add your own tabs that capture local logs. Local logs are not automatically polled, but can
be manually refreshed using the Refresh button.
With the Event tables, you can:
Configure your own tables to capture and combine similar information from various sources. For example, you can combine event logs from other NetSight applications or merge trap logs into a single Event View
Find, filter, and sort table information
Print table information or export the information to a file in HTML or delimited text format
Trigger e-mail notification when a particular alarm, event, or trap occurs

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

121

The purpose of Event View Manager is to control what logs are viewed in the Event View.
Usually, this does not need to be changed.
To add a new tab (view) to Console's Event View, click Add, give it a name, then use the green arrow in the middle of the window to add logs (from the lower left window) to your new view.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

122

Configuring your devices to send traps to the NetSight Server is easily accomplished with the
use of the Trap Receiver Configuration window. The window has two tabs. The Configuration
tab lets you create a list of trap receiver addresses. These are the addresses of the systems
that will receive trap information from your network devices. The snmptrapd tab is where you
configure the information that is required to allow NetSight SNMP Trap Service (snmptrapd)
to receive Trap and Inform messages from your network devices that are using SNMPv3.
To access this window, right-click on one or more devices in the Console left-panel tree and
select Trap Receiver Configuration.
Priority: If the switch is configured to send traps to multiple trap servers, the priority determines the order in which the traps will be sent.
Trap Receiver IP: This is the trap server that the switch sends traps to, typically the Console server.
Trap Credential Version: This configures which version of SNMP the switch will use when sending a trap to the server, and also which passwords/community names are to be used.
Update From All Devices: Retrieves the currently configured trap settings from the switches.
Apply to All Devices: Pushes the trap server information in the upper window to all the switches listed in the lower window.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

123

Use this tab to configure the information that is required to allow the SNMP Trap Service
(snmptrapd) to receive Trap and Inform messages from your network devices that are using
SNMPv3.
The engine ID looks like: 0x80003818030001f4917d80
The file entry resembles:
createUser -e 0x80003818030001f4917d80 bob MD5 authpasswd1 DES privpasswd1
Since the snmptrapd file is read upon startup of the SNMPTrap process, it must be restarted
for any changes to take effect.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

124

The Alarms Manager window is where you can configure alarms when certain trap/event
conditions occur on your network. You can also configure certain actions that will be triggered
by the alarms. The table at the top of the window shows a summary of the currently defined
alarms, while the fields below allow you to configure alarm parameters. Access this window
from the Tools > Alarm/Event > Alarms Manager menu option.
Configuring alarms consists of two things: defining the criteria that trigger an alarm (Device Down, Link Down, FST Limit Exceeded) and then defining the action to be taken (send an e-mail or run an application).

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

125

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

126

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

127

NetSight's OneView is a separately licensed application that provides access to web-based reporting, network analysis, troubleshooting, and helpdesk tools.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

128

Data Aging allows you to set how long the collected and archived data is maintained.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

129

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

130

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

131

Reports: Historical and real-time reporting offering high-level network summary information
as well as detailed reports and drill-downs.
Search: Allows you to search for a MAC address, IP address, host name, AP serial number, or a NAC custom field.
Devices: Device details for all managed devices in the network, with sorting and filtering of relevant information for network troubleshooting.
Alarms and Events: Indicates any Alarms or Events that have occurred.
Identity and Access: This is used only for NAC
Flows: The Flows tab provides the ability to view real-time NetFlow data for enhanced
network diagnostics
Wireless: Wireless monitoring providing details, dashboards and Top N information to
monitor the overall status of the wireless network plus the ability to drill in to details as
needed.
Administration: OneView administration tools to monitor and maintain the OneView
application and its components.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

132

The Reports tab offers you historical and real-time reporting, including both high-level
network summary information and detailed reports and drill-downs.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

133

The Reports tab has two sub-tabs: Reports and Dashboard. The Dashboard displays
summary NetSight data including switch and interface statistics as well as important Wireless
data. The Reports sub-tab displays a catalog of reports that includes predefined reports as
well as custom reporting. Many reports are interactive, allowing you to adjust the data and
time the report covers.
OneView also provides custom reports, a powerful tool that allows you to create a historical report with fully selectable parameters, including:
Category
Date range
Targets
Statistics
Field type
Choose the report target such as APs, controllers, or interfaces, as well as the statistics to
report on, timeframes, and field type. For example, you could use an Adhoc report to view
historical utilization data on a specific AP over the past month.
You can display your reports either as a chart or table. Each report you create can be
bookmarked for easy viewing at a later time or to share with others. Report data can also be
exported to a CSV file.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

134

OneView provides you with four built-in PDF reports:


Console Report
Inventory Report
NAC Dashboard Report
Wireless Configuration Report
To generate the Wireless Configuration Report PDF, click on Reports>Reports>PDF
Reports, then click on Wireless Configuration Report. OneView opens your Wireless
Configuration Report PDF in a new browser window. The report contains complete information on your configuration, organized by:
Summary
Controllers By Mobility Zone
Controllers
Radios Summary
Virtual Networks
Access Point Groups
Access Points

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

135

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

136

The Devices tab provides you with device details for all the devices in your network that you
are managing with NetSight. You can sort and filter relevant information for network
troubleshooting.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

137

The Devices tab also gives you access to any of the information you can obtain in a NetSight FlexView. To open a FlexView in OneView:
Select the device(s) on which you wish to obtain information.
Click on the gear icon in the upper left corner.
Click on Choose FlexView.
Choose the FlexView you wish to see, select whether you want it to open in a new window, and click OK.
The FlexView opens and displays the information it provides.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

138

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

139

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

140

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

141

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

142

The Flows tab provides you with information on traffic passing through your flow-capable
switches (the K- and S-Series).

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

143

The Wireless tab gives you wireless monitoring details, dashboards, and top N information to
monitor the overall status of your wireless network. You can use the left panel tree to drill in
to the details you need.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

144

The Administration tab provides you with tools to monitor and maintain the OneView
application itself.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

145

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

146

IEEE 802.1w, Rapid Reconfiguration Spanning Tree (RSTP), is built upon the original IEEE
802.1D Spanning Tree Protocol parameters. When a network fails in a traditional spanning
tree topology, two-way communication may not recover for up to 50 seconds. The same
recovery can happen almost immediately in an RSTP environment. Rapid reconfiguration
ensures that an end-user is insulated from dropped sessions or inaccessible resources.
IEEE 802.1w and IEEE 802.1D Spanning Tree algorithms will interoperate. An RSTP switch
detects when it is connected to an 802.1D STP switch. When the RSTP port is initialized, it
transmits RSTP Bridge Protocol Data Units (BPDUs) for three seconds, then transitions to sending STP BPDUs when it receives STP BPDUs from the STP switch. When an RSTP-capable switch is connected to an STP switch, 802.1D Spanning Tree rules apply for that connection. It is
important to remember when running both 802.1w and 802.1D in the same network that,
depending on where the respan or link failure occurs, either 802.1w or 802.1D rules will
apply. This will affect forward transition times and network recovery times.
802.1w provides all the mechanisms needed for a rapid transition whether it be in a "failover"
condition or a "failback" condition. These accelerated respans are done with an explicit
handshake agreement and a new root port detection process. However, the handshake
process can only be performed on a point-to-point LAN segment (full duplex); shared LAN
segments are prohibited from this handshake.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

147

Distributed algorithm to elect a single Root Bridge.


Root Bridge transmits Bridge Protocol Data Units (BPDUs).
BPDUs are generally passed downstream.
Bridges compare their received BPDUs to calculate their shortest path to the Root.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

148

RSTP assigns roles to individual ports on a bridge as follows:


Whether the port is to be part of the active topology connecting the Bridge to the Root
Bridge (a Root Port)
Whether the port is connecting a LAN through the Bridge to the Root Bridge (a
Designated Port)
Whether the port is an Alternate or Backup Port that may provide connectivity if other
Bridges, Bridge Ports, or LANs fail or are removed.
State machines associated with the Port Roles maintain and change the Port States that
control the processing and forwarding of frames by a MAC Relay Entity.
A Port State of Discarding, Learning, or Forwarding is assigned to support and maintain the
quality of the MAC Service.
Port states are also used to reduce the probability of data loops and the duplication and mis-ordering of frames to a negligible level.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

149

In the example shown above, the port roles of the switch shown in blue are analyzed. It has two designated ports that provide connectivity back to the root bridge for downstream bridges. Furthermore, it has a root port, which provides the shortest path back to the root bridge. Its alternate port, which is not forwarding traffic, provides an alternate path back to the root bridge. Therefore, if the root port fails, the bridge can use the alternate port, shortening the reconvergence time of the spanning tree. A backup port also exists, which is not forwarding traffic, providing redundant downstream connectivity to an adjacent bridge.
Connectivity through a bridge for the Spanning Tree occurs between its Root Port and
Designated Ports. Once Spanning Tree decides the Port Role for a given port, the proper
Port State is selected. In other words, PORT ROLES DICTATE THE PORT STATES
With that said, RSTP ensures that every link connecting a Root Port and a Designated Port transitions to the Forwarding Port State as quickly as possible. This is the goal of RSTP.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

150

In most networks, Spanning Tree version should not be changed from its default setting of
mstp (Multiple Spanning Tree Protocol) mode. MSTP mode is fully compatible and
interoperable with legacy STP 802.1D and Rapid Spanning Tree (RSTP) bridges. Setting the version to stpcompatible mode will cause the bridge to transmit only 802.1D BPDUs, which will prevent non-edge ports from rapidly transitioning to the forwarding state.
set spantree portpri port-string priority [sid sid]
Use this command to set a port's Spanning Tree priority.
port-string - Specifies the port(s) for which to set Spanning Tree port priority.
priority - Specifies a number that represents the priority of a link in a Spanning Tree bridge.
Valid values are from 0 to 240 (in increments of 16), with 0 indicating the highest priority.
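For reference, a minimal configuration sketch using the commands above (the port, priority value, and SID are illustrative only, and available parameters can vary by platform and firmware):
set spantree version mstp
set spantree portpri ge.1.1 112 sid 0
The first command keeps (or returns) the bridge to its default MSTP mode, while the second assigns port ge.1.1 a Spanning Tree priority of 112 (a multiple of 16, as required) on SID 0.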

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

151

If the Designated Root MAC Address matches the Bridge ID MAC Address, the device views
itself as the root bridge. Therefore, no root port is displayed for this bridge.
Note that the port role and port state are both displayed for the bridge when using the port
keyword with the show spantree command.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

152

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

153

The original 802.1D standard treats the overall topology as a single network, while switches
treat VLANs as completely separate networks. Some of the benefits of configuring multiple
VLANs are sacrificed with this compromise. IEEE 802.1s is a supplement to IEEE 802.1Q
that adds the facility for VLAN switches to use multiple spanning trees, providing for traffic
belonging to different VLANs to flow over potentially different paths within the LAN. 802.1s
allows network administrators to assign VLAN traffic to unique paths. Some or all of the
switches in a LAN participate in two or more spanning trees with each VLAN belonging to one
of the spanning tree instances.
An advantage of MST is that it is built on top of 802.1w Rapid Reconfiguration, with its decreased time for re-spans within the network.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

154

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

155

Where only 802.1D or 802.1w is running and there is no failure, there is no bandwidth utilization between switches 2 and 3.
With 802.1s it is possible to make each switch the root bridge for a different spanning tree instance and then associate a different VLAN with each instance. This reduces the likelihood of a link being over-utilized.
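As a hedged illustration of this design (based on the Enterasys CLI; the instance number, VLAN ID, and priority value are examples only, so verify the exact syntax against your platform's CLI reference), the following sequence creates a second MST instance, maps a VLAN to it, and lowers this bridge's priority so it becomes the root for that instance:
set spantree msti sid 2 create
set spantree mstmap 20 sid 2
set spantree priority 4096 2
Repeating this on another switch with a different instance and VLAN mapping spreads the root bridge role, and therefore the forwarding paths, across both uplinks.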

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

156

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

157

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

158

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

159

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

160

Note that the SID column displays the value of 1 in this example.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

161

Standard 802.1D STP takes 30-50 seconds to recover from a failure or a root bridge change. By default, all current Enterasys switches support 802.1w and 802.1s, which provide sub-second recovery. However, repeated topology change notifications or new root bridge announcements can still cause a Denial of Service (DoS) condition.
An attacker taking over as root bridge can allow Man-in-the-Middle (MITM), ARP spoofing
and other attacks.
Sending BPDUs from the attacker can force these changes and cause a Denial of Service
condition in the fabric.
Continued inability to communicate can cause learned entries to be removed from the table,
leading to flooding and providing the attacker with additional frames with which to gain
visibility into the network for further attacks.
Conversely, massive flooding can cause MAC entries to be learned on the wrong port,
resulting in loss of communication. Recovery time is based on length of attack and FDB
timeout period.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

162

Gathering information: the show commands


show spantree spanguard - shows the value of Span Guard (enabled/disabled)
show spantree spanguardlock - shows the value of spanguardlock for a given port
(locked/unlocked)
show spantree spanguardtimeout - shows the value of spanguardtimeout (0-65535
seconds)

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

163

When Span Guard is enabled, reception of a BPDU (except loopback) by a port that has adminEdge set to TRUE will cause the port to be locked and its state set to blocking.
The port will be locked for a globally specified time (spanguardtimeout) expressed in seconds
which may be forever if the timer value is set to 0.
The port will become unlocked when either the timer expires or it is manually unlocked or the
configuration is changed such that either Span Guard is no longer enabled or the port no
longer has adminEdge set to TRUE.
In order to utilize Span Guard the system administrator must know which ports are connected
between switches as ISLs (inter-switch links). AdminEdge must be configured globally before
Spanguard will work. AdminEdge is configured via the set spantree adminedge command
from the CLI. Adminedge must be set to false on all known ISLs. Any remaining ports where
protection is desired should be set to adminedge = True. Setting these remaining ports to
adminedge = True indicates to Spanguard that these ports are not expecting to receive any
BPDUs. If BPDUs are received on these ports the affected ports will become locked.
set spantree spanguardtimeout sets the timeout period that a port will remain in the locked
state.
By default the timeout period is 300 seconds. This can be configured to a range of 0-65535
seconds. Setting the value to 0 will set the timeout to forever.
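A hedged example of the workflow described above (port ranges and the timeout value are illustrative; confirm the exact commands against your platform's CLI reference):
set spantree adminedge ge.1.49-50 false
set spantree adminedge ge.1.1-48 true
set spantree spanguard enable
set spantree spanguardtimeout 600
Here the uplink/ISL ports ge.1.49-50 are marked as non-edge, the user-facing ports are marked as edge ports that should never receive BPDUs, Span Guard is enabled, and a locked port is released automatically after 600 seconds.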

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

164

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

165

In the C:\Program Files\Enterasys Networks\NetSight\mysql\data\netsight directory you can see the parts of the Inventory Manager server that integrate with the Console server SQL database.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

166

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

167

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

168

With Alternate Firmware Servers you can remotely tell a switch in one location to connect to
its local TFTP server for a firmware upgrade.
FTP transfer and TFTP transfer do the exact same things (firmware upgrades and configuration upload/download), but with Inventory Manager you can choose which protocol you prefer to
use, as long as the switch supports it.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

169

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

170

By default, all firmware images need to be in the C:\tftpboot\firmware\images\ directory, as defined in Tools > Options > TFTP Transfer Settings.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

171

Note: the Boot PROM Upgrade Wizard works identically except for the scheduling ability.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

172

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

173

Here the selected devices will do a warm reboot in 60 seconds.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

174

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

175

Capacity Planning data is port and slot information that can be used with Capacity Planning.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

176

Using the Archive Wizard is very easy. Give it a name and description (optional), select
which devices are to be backed up, and then schedule how often the devices are to be
backed up. Devices can be backed up Now (one backup), Once (one scheduled backup),
Daily, Weekly, or on Server Startup.
Here we have the Configuration and Capacity planning data for all devices being backed up
every Sunday at 2:00 AM.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

177

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

178

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

179

Used/Unused Ports Report - Show front-panel port utilization. Quickly locate unused ports, and view port details such as port type, speed, media type, and connector type.
Used/Unused Slots Report - Show chassis slot utilization to quickly locate unused slots. View results organized by chassis type and by individual chassis.
Field Replaceable Units (FRU) Report - Show field replaceable/upgradeable (FRU) components in your network devices. View descriptions and details on a variety of FRU types (such as power supplies, sub-modules, and ports) to help determine FRU usage.
Component Change Report - Show field replaceable/upgradeable (FRU) components in your network devices that have changed over time. Easily monitor changes made to your network and verify network upgrades. (Archives are needed for this report.)
Chassis Capacity Report - Show capacity and usage of certain chassis components such as sub-modules, power supplies, and control modules. View results organized by chassis type or for each individual chassis.
Sub-module Capacity Report - Show the network's sub-module capacity and usage. Determine the number of sub-module slots available on your network devices and the actual number of sub-modules that are installed. View a description of each of the installed sub-modules.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

180

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

181

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

182

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

183

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

184

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

185

Link Aggregation, SmartTrunking, and other port trunking algorithms are all methods of
bonding together two or more data channels into a single channel that appears as a single,
higher-bandwidth, logical link. It is a cost-effective way to implement increased bandwidth.
Aggregated links also provide redundancy and fault tolerance.
In the absence of any type of link aggregation, Spanning Tree Protocol prevents the addition
of bandwidth. Link aggregation makes multiple physical links appear as a single logical link to
the Spanning Tree Protocol, such that those redundant links within the aggregation will not be
blocked. This is accomplished by positioning link aggregation as an optional sub-layer in the
Data Link Layer of the OSI Model (explained in more detail later in this module), presenting
itself as a single MAC address to MAC clients in the Network layer.
Link aggregation should be viewed as a network configuration option that is primarily used in
network connections that require higher data rate limits than can be provided by single links,
such as between switches or between switches and servers. It can also be used to increase
the reliability of critical links.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

186

IEEE 802.3ad Link Aggregation is a standards-based method of dynamically grouping multiple physical ports on a network device into one logical link.
All proprietary methods of trunking multiple links involve manually choosing the links that are
to be a part of a trunk.
The IEEE 802.3ad is a protocol specification that allows the switch to determine which links
are eligible to aggregate and to configure them automatically.
Since LACP is an IEEE standard protocol, any switch from any vendor that supports this
protocol can aggregate links automatically. If links are capable of link aggregation, they will
aggregate.
Link Aggregation is a cost-effective way to implement increased bandwidth. It allows for
interoperability and provides for automatic configuration, with manual overrides when desired.
Link Aggregation can be used on 10Mbps, 100Mbps, or 1000Mbps Ethernet full duplex ports.
For example, a network administrator can combine a group of five 100Mbps ports into a
logical link that will function as a single 500 Mbps port. The capability of linear increments to
the bandwidth allows the administrator to use existing hardware.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

187

Key Benefits
By taking multiple LAN connections and treating them as a unified aggregated logical link,
you can achieve practical benefits in many applications. The key benefits of IEEE 802.3ad
Link Aggregation are:
Dynamic configuration: Determines which links are eligible for aggregation, configures them
automatically, and provides rapid reconfiguration. Automatic configuration is the key objective
of link aggregation. However, manual overrides are available for network administrators who
want to customize or tweak their networks.
Higher link availability: Provides higher link availability, in that the failure of any single link
within the aggregate is limited to that link only. Other links continue to function so there is no
disruption of the communications between the devices.
Increased bandwidth: Serves to increase bandwidth because the capacity of an aggregated
link is higher than an individual link alone.
Support of existing IEEE 802.3 MAC clients: Requires no change to higher-layer protocols or
applications.
Backwards compatibility with aggregation-unaware devices: Links that cannot take part in Link
Aggregation operate as normal, individual IEEE 802.3 links.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

188

Link Aggregation Control Protocol


Link aggregation is accomplished via LACP. This protocol allows communication of
aggregation capabilities between switches, and automatic configuration of links between a
switch and its link partner. It maintains configuration information (reflecting the inherent
properties of the individual links, as well as those manually established by management) to
control aggregation. LACP exchanges configuration information with other devices to
allocate the link to a Link Aggregation Group (LAG). A given link is allocated to, at most, one
LAG at a time. The allocation mechanism attempts to maximize aggregation, subject to
management controls.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

189

IEEE 802.3ad introduces several key new terms.


Link Aggregation Group (LAG): Once underlying physical ports are associated with an
aggregator port, the resulting aggregation is represented as one Link Aggregation Group
(LAG). A LAG is referred to as a trunk in proprietary aggregation methods. A MAC client
treats the LAG as if it were a single link. All links in a LAG connect between the same pair of
aggregation systems. One or more conversations may be associated with each link that is
part of a LAG.
Aggregation system: A uniquely identifiable entity comprising (among other things) an
arbitrary grouping of one or more ports for the purpose of aggregation. An instance of an
aggregated link always occurs between exactly two aggregation systems. A physical device
may comprise a single aggregation system or more than one aggregation system.
Aggregation keys: Parameters associated with each port and with each aggregator of an
aggregation system, identifying those ports that can be aggregated together. The keys
summarize physical characteristics such as data rate, network administrator configured
constraints, and limitations of the port implementation. Two keys are associated with each
port. The operation key is the key currently in active use for the purpose of forming
aggregations. The administrative key allows manipulation of key values by management.
Marker Protocol: Allows the data distribution function a means of determining the point at
which a given set of conversations can safely be reallocated from one link to another, without
the danger of causing frames in those conversations to be misordered at the collector.
Actor: The local device in a Link Aggregation Control Protocol (LACP) exchange.
Partner: The remote device in an LACP exchange.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

190

Link Aggregation Scenarios


There are three scenarios in which link aggregation may be useful in a network, as described
below.
Switch-to-switch connections: This is the most common scenario. Multiple ports on a switch
are joined to form an aggregated link. Aggregation of multiple links achieves higher speed
connections between switches without hardware upgrade. If two switches are connected,
each using four 1000 Mbps links, and one of those links fails between the two switches, data
traffic is maintained through the other links in the link aggregation group.
Note that such a configuration reduces the number of ports available for connection to other
network devices or end stations. Thus, aggregation implies a trade-off between port usage
and additional capacity for a given device pair.
Switch-to-station (server or router) connections: Many server platforms can saturate a single
100 Mbps link. Thus, link capacity limits overall system performance. You can aggregate
switch-to-station connections to improve performance. Better performance can be achieved
without upgrade to server or switch.
Station-to-station connections: Though not a common configuration, you can also aggregate
directly between two pairs of end stations, with no switches involved at all. Again, a higher
performance channel can be created without having to upgrade to higher-speed LAN
hardware.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

191

Link Aggregation Rules and Recommendations


Following are rules and recommendations for link aggregation:
Ports must be running at full duplex to aggregate.
A link aggregation cannot be split among systems. Logically, it is a single pipe and, as such,
is treated as a single point-to-point connection.
Link Aggregation is supported only on links using the IEEE 802.3 MAC.
All links in a LAG must operate at the same data rate.
A given port will bind to, at most, a single Aggregator at any time. A MAC client is also
served by one Aggregator at a time.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

192

This section reviews product-specific aggregation information, referencing commands and menu screens. Not all aggregation commands and screens are included. The lab activities associated with this module will allow you to investigate the aggregation configuration displays and configuration options in more detail. The information on this slide applies to all current Switch platforms.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

193


2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

194

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

195

show port lacp port port-string {[status {detail | summary}] | [counters]} [sort {port | lag}]
At least two ports need to be assigned to a LAG port for a Link Aggregation Group to form
and attach to the specified LAG port. The same usage considerations for dynamic LAGs
previously discussed apply to statically created LAGs.
In normal usage and typical installations, there is no need to modify any of the default
802.3ad parameters on any platforms. The default values will result in the maximum number
of aggregations possible. If the switch is placed in a configuration with devices not running
the protocol, no dynamic link aggregations will be formed and the switch will function normally
(that is, will block redundant paths via Spanning Tree).
Something to keep in mind is that a Link Aggregation Group (LAG) may potentially cause
periodic network instability if the partner system participating in the LAG has its LACP
Timeout parameter set to short (encoded as a 1 in the LAC PDU). This parameter
determines the time interval between periodic LAC PDU transmissions.
A LAG will be maintained until all ports that comprise the group are disconnected. Even if
only one port is still active in a LAG group, configuration changes will still need to be made to
the virtual LAG port (not the physical port) to be effective.
Some proprietary implementations provide for a dedicated physical port within a link
aggregation for transmission of special frames (Bridge Protocol frames, multicast frames,
unknown frames etc.).
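For example, using the syntax above, the following commands (the port strings are illustrative) display detailed LACP status for a single port and LACP counters sorted by LAG:
show port lacp port ge.1.1 status detail
show port lacp port ge.1.1-2 counters sort lag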

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

196

If you plan to connect to a device that does not support link aggregation but you want to
aggregate ports, that device must be configured to run in non-protocol mode.
The switch will need to be configured with a static LAG.
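A hedged sketch of such a static configuration (the LAG port and physical ports are illustrative, and the exact syntax should be confirmed in the platform's CLI reference):
set lacp static lag.0.1 ge.1.1-2
This binds physical ports ge.1.1 and ge.1.2 to the logical port lag.0.1 without relying on LACP negotiation with the non-protocol partner device.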

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

197

Once the underlying physical ports are associated with an aggregator port, the resulting
aggregation will be represented as one LAG with the lag.x.x designation. LACP determines
which underlying physical ports are capable of aggregating by comparing aggregator keys.
The S and K series are able to utilize three different spreading algorithms to determine which
physical ports a packet will be transmitted out of in a LAG port.
DIP-SIP: Specifies that destination and source IP addresses will determine the LACP
physical outport. This is recommended for LAGs providing connectivity anywhere in the
network. It is not recommended to use this spreading algorithm if traffic being transmitted
over this LAG port is sourced and destined to mostly the same set of IP addresses. If this is
the case, the distribution of the traffic across the physical ports in the LAG will be uneven.
da-sa: Specifies that destination and source MAC addresses will determine the LACP
physical outport. This is not recommended for LAGs providing connectivity between two
routers. This is because the DA-SA pairs will mostly be identical in this scenario and
distribution of the traffic across the physical ports in the LAG will be uneven. This is
recommended for LAGs providing connectivity to LAN segments to which end systems are
connected.
round-robin: Specifies that the round-robin algorithm will determine the LACP physical outport. This distributes traffic in an even fashion across the physical ports in the LAG.
However, bidirectional communication will most likely be asymmetrical across different
physical ports in the LAG with this configuration.
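As an illustration only, the spreading algorithm on the S- and K-Series is typically selected with a single global command; the command name below is an assumption based on the algorithm names above, so confirm it in the S-/K-Series CLI reference before use:
set lacp outportalgorithm dip-sip
Choosing dip-sip here follows the guidance above for LAGs that carry traffic between many different IP addresses.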

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

198

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

199

As shown above, although ports ge.1.1-4 were configured on the egress list of VLAN 333, when a LAG port is formed using these physical ports, the LAG port is not on the egress list of VLAN 333. The physical port settings become dormant; you must now use the logical LAG port for any further configuration and investigation.
Single-port LAGs are supported on all current switches.
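A brief hedged example of re-applying the VLAN configuration to the logical port (the VLAN ID and LAG port name follow the scenario above; verify the syntax for your platform):
set vlan egress 333 lag.0.1 tagged
This places the logical LAG port on the tagged egress list for VLAN 333, which is the port you must now configure instead of the underlying physical ports.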

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

200

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

201

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

202

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

203

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

204

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

205

set netflow [cache {enable | disable}] [export-destination ip-address [udp-port]] [export-interval interval] [port port-string {enable | disable}]
cache enable | disable
Enable or disable collection for the NetFlow cache.
export-destination ip-address udp-port
Sets the destination IP address of the NetFlow collector.
ip-address specifies the IP address of the NetFlow collector.
(Optional) udp-port specifies the UDP port number used by the NetFlow collector (default is 2055).
export-interval interval
Set the active flow timer value, between 1 to 60 minutes.
The default value is 30 minutes.
port port-string enable | disable
Enable or disable NetFlow collection on a port(s)
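Putting the parameters above together, a hedged configuration example might look like the following (the collector address, interval, and port range are placeholders):
set netflow export-destination 10.1.1.50 2055
set netflow export-interval 5
set netflow port ge.1.1-48 enable
set netflow cache enable
This points the switch at a NetFlow collector listening on UDP port 2055, exports active flows every 5 minutes, enables collection on ports ge.1.1-48, and turns on the NetFlow cache.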

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

206

show netflow [config] [statistics {export}]


config
(Optional) Show the NetFlow configuration.
statistics
(Optional) Show the NetFlow statistics.
export
export - Show the NetFlow export statistics.
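For example:
show netflow config
show netflow statistics export
The first command displays the current NetFlow configuration, and the second displays the export statistics, which is useful for confirming that flow records are being exported.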

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

207

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

208

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

209

When you configure NetFlow, Enterasys recommends that you configure bi-directional
capture on your trunk ports, and ingress capture on your client-facing ports.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

210

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

211

In congested networks, the purpose of Flex-Edge is to give higher-priority packets, such as L2/L3/discovery protocols, a better chance of being processed by the switch before non-protocol packets as they ingress into the system. Without Flex-Edge, L2/L3/discovery protocols had to fight for switch processing time with regular user traffic. In a highly congested switch at times of peak network use, this could cause the network to break: spanning tree topology changes, VRRP flapping, missing IP routes, ports detaching from LAG ports, and so on.
By default, the redirection/higher priority of protocol packets is always on.
On top of the default behavior, the user can assign specific MAC SAs (source addresses) and/or individual ports a higher priority in order to give that particular traffic precedence so it is processed by the switch faster than regular traffic. This is particularly useful for specific streams such as voice and video, or for critical applications such as SAP. The faster a stream can be processed by the switch, the better the chance that voice, video, and critical application streams will remain of good quality and applications will not freeze during highly congested peaks within the network.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

212

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

213

Network worms, viruses and malware, and hacker attacks rely on the ability to discover
machines on a network and assess their vulnerability.
The process of discovering machines on a network is typically done by attempting to
establish ICMP communication with a randomly generated IP destination address
(address scanning).
If the randomly generated address replies, then a machine must exist at that address
on the network.
The process of assessing the vulnerability of a machine is completed by performing a
Layer 4 port scan. If a port is open, it can be used as a gateway for the attack.
Each attempt to discover a network device or assess its vulnerability requires a new flow to
be created. Since attackers want to discover susceptible machines as quickly as possible,
flow build-up is unavoidable.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

214

Taken from a ~20 second network trace captured on a college campus.
The trace clearly identified two distinct worms and provided considerable insight into worm
propagation flow characteristics.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

215

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

216

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

217

If the maximum flow count equals 10, an end system may communicate with up to 10 end
systems at any one time.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

218

By default, the switch uses only MAC layer information for flow setups (i.e. the source and
destination MAC address), but this may be extended to Layer 3 and Layer 4 information
depending on the configuration of the switch. Flow-based packet processing: for each
conversation, a flow entry is made in the flow lookup table.
A flow entry defines how to switch/route a packet by specifying the exit port(s) as well as
other actions to be performed on the packet, such as reframing. Flows are programmed to
accelerate the switching/routing of successive packets in the same conversation.
For each received packet, the flow table is inspected to determine whether the packet is part
of an existing conversation or a new flow must be created. Since flow entries are a limited
resource, an aging mechanism is implemented in order to remove stale entries. An entry is
considered stale if it has not been used within the time interval specified by the flow age-out
interval.
Flows are only set up for learned traffic. Therefore, if the exit port of the device is not
explicitly known, a flow will not be created.
For L2 switching, this could result from the FDB not containing the DMAC of the packet.
For L3 routing, this could result from the FDB not containing the MAC of the ARP cache entry
being used for Layer 2 destination MAC address formatting.
If a flow entry exists (flow table hit), the frame is forwarded as specified by the corresponding
flow entry and the flow age-out timer is refreshed.
Otherwise, a new flow is created based on the lookup level and stored in the flow table, and
the packet is switched/routed by the host.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

219

More information on particular FST Port classes:


Server Port: A port with a server attached to it. This class may encompass a wide range of
server types, from a small workgroup print server to an enterprise exchange server.
Alternatively, an administrator may choose to place a small print server into the User Port
class, since its flow setup needs may end up being similar.
Aggregated User Port: A port likely to have multiple end stations attached, either through a
wireless access point or an unmanaged low-cost hub or switch. It is expected that this class
may also be used instead of the Inter-Switch Link class when switches are interconnected
using a lower-speed link.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

220

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

221

A Virtual Switch Bonded (VSB) Chassis consists of two like physical chassis joined together to
create a single logical chassis. The bonded chassis has a single IP address; you manage it
as a single object. VSB requires you to connect the two chassis using one or more 10Gb
ports. These ports are designated as Bonding Ports on each chassis and create the virtual
backplane that ties the two physical chassis together.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

222

With Virtual Switch Bonding, the Bonded Chassis appears to the rest of the network as a
single device. This allows you to distribute the ends of your LAGs across your two physical
Chassis. No modifications to LACP are required.
Consider our example network. PC A sends a frame to Server A; Switch A performs the
distribution algorithm for the LAG that connects it to the Bonded Chassis. In this
instance, Switch A forwards the frame across Link 1 in the LAG.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

223

Chassis 1 receives the frame and consults its Filtering Database (FDB) for the appropriate
VLAN - in this case, let us say VLAN 10. Note some important details:
Chassis 1 and Chassis 2 are operating as a single Chassis Bonded switch. The two physical
switches have a single FDB for VLAN 10.
Even though Chassis 1 and Chassis 2 are both S4s, the single Chassis Bonded switch has 8
slots. Slots 1-4 happen to be on Chassis 1, and slots 5-8 happen to be on Chassis 2.
However, the forwarding process on Chassis 1 believes that it owns all 8 slots.
Chassis 1 discovers from its FDB for VLAN 10 that Server A is attached to Slot 5. Slot 5 just
happens to be across the backplane formed by the Bonding Ports connecting it to Chassis 2.
Chassis 1 performs the distribution algorithm to decide which VSB link to send the frame out
of.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

224

The forwarding pattern becomes a bit more complex when two devices connected to edge
switches are communicating. Consider the network again. In this case, PC A sends a frame
to PC B. As in the first case, Switch A forwards the frame over one of the LACP links.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

225

Chassis 1 receives the frame, consults its FDB for VLAN 10, and discovers that PC B is out
the LAG attached to its Slots 1 and 5. Chassis 1 performs the LACP distribution algorithm,
with one of two possible results.
The LACP distribution algorithm may result in sending the frame out Link 1 or Link 2 of the
LAG. If so, Chassis 1 simply forwards the frame out LAG 2 toward PC B.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

226

However, the hash may result in sending the frame out Link 3 or Link 4 of the LAG, both of
which are connected to Slot 5.
If so, Chassis 1 performs the distribution algorithm once more to choose which of the
Bonding links to use.
It then forwards the frame across the virtual backplane formed by the Bonding Ports to Slot 5,
where it forwards the frame out the appropriate LAG link toward PC B.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

227

Thus, in a classic Bonded Chassis scenario where every edge switch or stack is running
LACP to the Bonded Chassis with an equal number of physical ports connected to each
chassis, one would expect that 50% of the traffic traversing the bonded chassis will traverse
the bonding links by default.
In systems where a server is asymmetrically configured, but the user traffic arrives on a LAG
port, it is expected that traffic destined for the server would also travel over the bonding links
50% of the time.
This behavior could create the unsupportable situation where the VSB link would have to be
as large as 50% of the total uplink bandwidth from your edge switches. To avoid this
condition, Enterasys has created a feature called Local Preference, discussed on the next
slide.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

228

The virtual chassis bonding feature uses bonding ports to connect two chassis. These ports
participate in the LAG for traffic leaving the VSB chassis. The LAG's default outport
algorithm does not take port location into account, so that traffic may be evenly distributed
over the bonding links and local uplink ports.
A new feature has been created to manage this behavior. The feature allows the local
chassis egress ports to be preferred over the bonding ports. The local LAG port preference
can be set to one of four types: none (default), weak, strong, or all-local.
For example:
Usage: set lacp outportLocalPreference [none | weak | strong | all-local]
none - Do not prefer LAG ports based on chassis
weak - Use a weak preference toward ports on the local chassis
strong - Use a strong preference toward ports on the local chassis
all-local - Force all packets to be hashed to local chassis ports, if available
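For example, in a VSB system where you want frames to leave on local LAG members whenever one is available, and to cross the virtual backplane only when no local member is up:

set lacp outportLocalPreference all-local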

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

229

The VSB link functions as an external backplane for the Bonded Chassis. Thus, you can
expect traffic on the link to behave just as if it were crossing the internal backplane on either
switch. However, the VSB link is Ethernet at Layer 2, so the frame behavior across the link
combines the attributes of Ethernet and the backplane function. The sending switch
generates a complete Ethernet frame for transmission over the VSB link, including the header
with 802.1Q information (if that is appropriate for the frame being transmitted) and the Frame
Check Sequence. The sending switch also inserts a field in the Ethernet header containing VSB
control/backplane control information specific to that frame, which allows the two physical
switches to coordinate their across-the-backbone treatment of the frame.
The VSB link also functions as the control link for the Bonded Chassis; all VSB control traffic
passes over the VSB link.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

230

The Link Failure Response (LFR) protocol provides for the configuration of one or more
1GbE monitor links. In the unlikely event that all 10GbE interconnect links should go down or
otherwise fail, the LFR monitor link determines whether both chassis are still operational and
places the chassis with the lowest LFR priority in a dormant state until at least one
interconnect link is restored. These links do not carry user traffic. The sole purpose of an
LFR link is to monitor the partner chassis' status.
10GbE VSB configured ports are always set as interconnect ports. 1GbE VSB configured
ports are always set as LFR monitor ports.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

231

The VSB feature supports a combined total of 32 VSB 10GbE interconnect and LFR 1GbE
monitor links on a VSB system (32 VSB ports per chassis).
You must configure at least one of those links as a VSB link, and Enterasys recommends a
minimum of two.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

232

Every Enterasys switch ships with two MAC addresses: the MAC address it uses for all its
communications on the network, and a reserved, unused MAC address that is one higher
than the used MAC address.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

233

When you initiate Chassis Bonding, the process compares the Reserved MACs of both
switches. It chooses the higher of those two Reserved MACs, and establishes that MAC as
the MAC address of the Bonded Chassis. From that moment on, until you disable Chassis
Bonding, both physical switches use the MAC address of the Bonded Chassis for all of their
communications on the network.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

234

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

235

Consider the specific case of Spanning Tree. When the Bonded Chassis sends Spanning
Tree Bridge Protocol Data Units (BPDUs) it formats those BPDUs based upon the chosen
Reserved MAC address (or, if you have configured one, the Locally Administered MAC
address) for the Chassis.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

236

Thus, to the rest of the network, the Bonded Chassis looks like a single bridge for Spanning
Tree purposes. With VSB enabled, our example network looks like a single, loop-free
topology.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

237

You can pair any two S-series switches as long as they have the same form factor. For
example, you can bond two SSAs or two S3s into a VSB pair. Similarly, you can bond a non-PoE S4 with a PoE S4, since the chassis are the same form factor. However, you cannot mix
form factors in a pair. For example, you cannot establish Chassis Bonding between an S4
and an S6, or between an SSA and an S3.
Note that in a multi-slot chassis you can spread the ends of the bonding link across the
various slots in the chassis. Enterasys recommends that you do so for resiliency.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

238

With release 8.11, the S180 fabrics now offer dedicated Virtual Switch Bonding ports. These
ports are hard-wired into the switch fabric, and give you direct fabric-to-fabric connections
between the two switches participating in your VSB chassis. Unlike the 10Gb I/O ports, these
ports are line-speed ports; they are not oversubscribed.
You can connect any kind of hardware VSB ports to each other. You cannot, however,
connect a hardware VSB port to an I/O port running VSB. You also cannot combine
hardware VSB-based links and software VSB links into a single Virtual Switch Bond.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

239

All of the S180 I/O blades support a VSB Expansion Module. The VSB Expansion Module
provides you with four hardware VSB ports. Installing this module in an S180 blade connects
it directly to the backplane, giving you fabric-to-fabric connectivity for your VSB bond.

240

Your third option for hardware VSB ports is to install an option module in the right-hand side
option module slot on any S140 or S180 I/O blade.

241

Set the Bonding Chassis ID for each switch (Switch 1 and Switch 2).
Set identical System IDs for both switches participating in the bond.
Identify 10Gb ports to be used for bonding on each chassis (1 is required, a minimum of 2 is
recommended).
Connect and enable bonding ports to create the virtual backplane.
Validate feature entitlement:
License keys* need to be in place before the bond is enabled. (*apply licenses if required)
Keys are offered as one per chassis to accommodate an S150/130 chassis being bonded to
an S155 chassis.
Enable bonding. The chassis will reboot. The act of configuring or de-configuring bonding
will yield a clean configuration. Conceptually, the act of creating the bonded pair builds a new
virtual chassis into which the cards are virtually inserted, which results in clearing the
configuration. This is the same behavior that occurs when a card is physically inserted into a
new physical chassis.
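A rough CLI sketch of these steps follows. The command keywords shown here are assumptions recalled from the S-Series bonding command set and must be verified against the S-Series Configuration Guide for your firmware; the IDs and the port string are illustrative values.

set bonding chassisid 1
Assigns this physical chassis ID 1 within the bond (use 2 on the partner chassis).
set bonding sysid 100
Sets the shared system ID, which must be identical on both chassis.
set bonding port tg.1.1 enable
Designates a 10Gb port as a bonding port; repeat for each bonding link.
set bonding state enable
Enables bonding; the chassis reboots and comes up as part of the single logical chassis.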

242

A VSB license or feature entitlement to VSB functionality must be present in each of the
physical chassis participating in the bond. You cannot enable the VSB feature until the
license or entitlement is present in each chassis.
Modular chassis consisting of S130 and S150 class S-Series products require the S-EOSVSB license. This license is available from Enterasys. Modular chassis with S155 I/O Fabrics
can use the VSB feature without the need for additional licenses.
SSA 130 or SSA150 chassis require the SSA-EOS-VSB license. This license is available
from Enterasys. An SSA155 class products can use the VSB feature without the need for
additional licenses.

243

The bonded system capacities, such as route capacity, MAC address table size, and user
capacity, will remain the same as for a single chassis.
Mirroring capacities are reduced: the bonded chassis will only support 5 mirrors (down from
15). On a per-flow basis, the chassis will only apply one mirror; it will apply the highest
precedence mirror that applies. For example, if a flow is policy mirrored and ingress port
mirrored, the chassis will apply the policy mirror. (A non-bonded switch will apply both
mirrors.) IDS mirroring is not supported in a bonded system.
Tunneling is not supported in a bonded chassis.
A complete list of Known Restrictions is available in the Release Notes.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

244

If the virtual backplane does not provide the bandwidth required to move user data between
chassis, the VSB chassis drops packets just like any other oversubscribed configuration. As
with all ports in the S-Series, the highest priority queue is reserved for management
communication and bonding protocols.
If the last bonding link fails, and you have not configured an LFR link, each physical chassis
continues to operate independently using the same configuration and the same MAC
address. This can cause enormous problems in the network. Enterasys strongly encourages
you to configure multiple links into the VSB Bond.
If one of the VSB chassis becomes non-functional, assuming a properly designed network
topology with redundant paths between chassis, the active chassis will continue to behave as
if some of the blades went down in the chassis. Non-operational blade traps and SYSLOG
messages will be logged.
If a non-operational bonded chassis returns to the bond, it will appear as if blades in the
chassis were booted. If the firmware or configuration has changed while the bonded pair was
separated, the returning chassis will be synchronized.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

245

The LFR protocol determines which chassis will be brought down should all VSB interconnect
links between the VSB chassis go down while both VSB chassis are determined to be
operational.
Both chassis in an operational VSB system use the same IP address and function as a single
system, with the 10GbE interconnect links acting as a virtual backplane for the system.
Should all VSB interconnect links go down and both chassis remain operational, the two
physical chassis would function as independent network devices with the same IP address.
The LFR protocol allows 1GbE ports to be designated as VSB monitor links that operate in a
standby mode to the primary 10GbE VSB ports. The VSB monitor link provides dedicated
redundant control plane connectivity and is used only as a backup communication path
between two bonded chassis in the unlikely event that all of the primary VSB interconnect
links fail or become unavailable.
When the primary 10GbE VSB ports are down, the VSB monitor links facilitate a
communications path to allow the physical chassis with highest LFR priority in the bonded
pair to remain active while placing the chassis with the lower priority into a dormant state.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

246

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

247

High Availability Firmware Upgrade (HAU) is an Enterasys S/N series feature that provides
for a rolling firmware upgrade for maintenance releases that are HAU compatible with the
current system firmware.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

248

There are two methods for loading a system firmware image:


Standard - The specified image is loaded after a system reset
High Availability - Provides a rolling firmware upgrade
Using the standard upgrade method, the image is loaded automatically after the system has
been reset. The standard method takes the system out of service for the duration of the
firmware upgrade.
Using the HAU method, all populated system slots are assigned to HAU groups. The
firmware upgrade takes place one HAU group at a time with all modules belonging to HAU
groups not currently being upgraded remaining operational.
As each HAU group completes its upgrade, a mix of slots running the original firmware and
slots running the upgraded firmware are simultaneously operating on the device.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

249

To avoid potential feature conflicts between multiple firmware versions, the HAU firmware
upgrade feature is limited to maintenance firmware upgrades and will not be available when
upgrading to major feature releases.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

250

Consider this example of a default HAU configuration. Chassis 1 is being firmware upgraded.
In a default HAU configuration, each slot belongs to a separate HAU group:
Slot 1 - HAU group 1
Slot 2 - HAU group 2
Slot 3 - HAU group 3
We've configured a LAG between Switch 1 and each edge switch. Both LAGs are distributed
between two Chassis 1 HAU groups. LAG 1 is configured on Slots 1 and 2. LAG 2 is
configured on Slots 2 and 3. As each HAU group upgrades, packets for both LAGs continue
to forward over connections to non-upgrading HAU groups.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

251

HAU groups can be administratively configured for multiple slots. All slots belonging to the
updating HAU group are upgraded simultaneously.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

252

The HAU group feature determines which slot or slots will be simultaneously upgraded. All
system slots within the same HAU group are simultaneously upgraded. Each system slot
belongs to an HAU group. HAU occurs one HAU group at a time. By default, there is one slot
per group. Therefore, the default HAU behavior is to upgrade each system slot one at a time.
Because HAU groups are upgraded sequentially, the total upgrade time increases with the
number of HAU groups configured. In a large chassis it could take a significant amount of
time to complete the upgrade and have all physical links back in operation. Upgrade time can
be reduced by assigning multiple slots to the same HAU group. When planning system
connections, the overall upgrade time will be reduced to the degree that multiple slots can be
configured into a single group and still retain sufficient resources in non-upgrading HAU
groups to assure system operation. With this in mind, all essential system capabilities on the
device should be configured across multiple groups. For example, all LAGs configured on the
device should provide sufficient redundancy between HAU groups for packets to continue
forwarding on the LAG using slots belonging to HAU groups that are not upgrading.
Use the set boot high-availability group command in any command mode to configure an
HAU group, specifying the group ID and the system slots that will be members of the HAU
group. This command is an intelligent command: it checks for illogical groupings (for
example, all fabrics, no I/Os, or all bond links in a single group).
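For example, to place slots 1 and 2 into HAU group 1 and slots 3 and 4 into HAU group 2 (the slot-list format shown is an assumption; confirm it against the command help for your firmware):

set boot high-availability group 1 1-2
set boot high-availability group 2 3-4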

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

253

When the firmware upgrade of an HAU group completes, depending upon the applications
that are configured on the module, it is possible for the next HAU group to begin a firmware
upgrade prior to protocols or applications on the just completed HAU module becoming fully
operational. Under normal operation there is an approximately 5 second delay between the
completion of one HAU group upgrade and the start of the next group upgrade. You can
configure a delay of up to 600 seconds between the upgrade completion of one HAU group
and the beginning of a high availability upgrade for the next HAU group.
Use the set boot high-availability delay command in any command mode to set a delay in
seconds between the upgrade completion of any HAU group and the beginning of the next
HAU group upgrade.
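For example, to allow two minutes for protocols and applications to stabilize between HAU group upgrades (the value is illustrative; the text above allows up to 600 seconds):

set boot high-availability delay 120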

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

254

HAU default mode determines HAU behavior if a system boot mode is not set when
configuring the system boot image. There are three HAU default modes:
never - A standard (non-high availability) upgrade is always performed unless overridden by
the system boot mode high-availability setting.
if-possible - A high availability upgrade is always performed unless:
Any HAU precondition is not met, in which case a standard upgrade is performed
It is overridden by the system boot mode standard or high-availability settings
always - A high availability upgrade is always performed unless:
Any HAU precondition is not met, in which case no upgrade occurs
It is overridden by the system boot mode standard setting
Note: HAU default mode should always be set to never unless you intend to perform a high
availability upgrade. An if-possible or always HAU default mode setting in conjunction with no
system boot mode specified results in a high availability firmware upgrade each time you
reboot your system, if all HAU preconditions are met.
If you want an HAU default mode change to affect a firmware upgrade, the change must take
place before configuring a pending upgrade. Changing the HAU default mode after setting the
system boot configuration (using the set system boot command) has no effect on a pending
firmware upgrade.
Use the set boot high-availability default-mode command in any command mode to set the
HAU default mode.
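For example, to have the system attempt a rolling upgrade whenever the preconditions allow, and otherwise fall back to a standard upgrade:

set boot high-availability default-mode if-possible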

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

255

When a system is powered on or reset, the current system boot image is loaded on to all
system modules. To perform a system upgrade, change the current system boot image to the
upgrade image, also referred to as the target image. Image upgrade can occur immediately,
the next time the system boots, or by issuing a reset command. When specifying the new
target image, you can optionally, specify the system boot mode parameter:
Standard All system slots are simultaneously upgraded taking the system out of operation
for the duration of the upgrade. This is a non-high availability upgrade.
High-availability Providing all HAU preconditions are met, HAU groups are upgraded
sequentially. If any HAU precondition is not met, an upgrade does not occur.
If the system boot mode is not specified, the boot mode is determined by the HAU default
mode configuration. By default, the HAU default mode executes a standard system upgrade.
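For illustration only (the placement of the boot-mode keyword relative to the image name is an assumption; verify the exact syntax with the command help on your firmware), scheduling a rolling upgrade might look like:

set system boot <target-image> high-availability
Selects the target image and requests a high availability (rolling) upgrade.
reset
Resets the system to begin the upgrade.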

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

256

The following preconditions must be met for a high availability upgrade to occur:
HAU Compatibility Key - The target image must have the same HAU Compatibility Key as the
active image. To display the HAU key, use the dir command, specifying the image to display,
or use the dir command image option to display all images (see the example after this list).
The HAU key field in the display specifies whether the image displayed is compatible with the
current image. If HAU compatible is appended to the key field, a high availability upgrade can
be performed between the displayed image and the current image.
Configuration restore-points - Configuration restore-points may be set, but must not be
configured. A configured restore-point would cause upgraded slots to boot with different
configuration data, and all slots must be running the same configuration data.
Upgrade Groups - At least two upgrade groups are required, and each group must contain at
least one operational module at the start of a high availability upgrade.
Platform - S-Series S4, S6, and S8 platforms require the presence of at least 2 fabric
modules in the system. VSB can create an exception to this rule; see the next slide.
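For example, before scheduling the upgrade you can list the stored images and compare keys (the images keyword is inferred from the dir command image option mentioned above; verify it on your platform):

dir images

Look for HAU compatible appended to the HAU key field of the target image.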

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

257

Virtual Switch Bonding (VSB) - A high availability upgrade is not allowed if the reset of any
single upgrade group would break all VSB interconnect bond links. An exception to this rule:
a high availability upgrade is allowed in a bonded system even if it would break either the
two-fabric-module restriction or the all-VSB-interconnect-links restriction, if:
A single HAU group is configured per chassis
All chassis slots are members of that upgrade group
In this case, the upgrade is performed per physical chassis.

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

258

Changes to system configuration cannot be performed while a high availability upgrade is in
progress. While a high availability upgrade is running:
All SNMP set operations will be rejected. A noAccess reason will be given for the rejection.
All CLI commands will be unavailable, with the exception of:
reset
loop
show
exit
dir
history
ping
traceroute
telnet
ssh
set boot high-availability force-complete

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

259

2013EnterasysNetworks,Inc.AllrightsreservedEnterasysConfidential

260

Policy is an Enterasys proprietary solution that provides traffic control and manipulation.
The traffic will be forwarded at line rate.
There are several ways to assign policy within the network:
Physical ports
MAC addresses
IP addresses
VLAN tags
Policy can also be assigned using RADIUS authentication, which allows policy to follow the
user wherever they authenticate to the network.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

261

What is Policy?
NetSight Policy Manager is a configuration tool that simplifies the creation and enforcement
of policies on networks, enabling network engineers, information technology administrators,
and business managers to work together to create the appropriate network experience for
each user in their organization. Policy Manager enables you to create policy profiles, called
roles, that are assigned to the ports in your network. These roles are based on the existing
business functions in your company, and consist of services that you create, made up of
traffic classification rules. Roles provide four key policy features: traffic containment, traffic
filtering, traffic security, and traffic prioritization. Policy Manager provides authentication via a
RADIUS server to identify users at the time they log in to the network.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

262

Policy can be used in multiple ways:


Security Solution
QoS Solution
Traffic Redirection
All of the above

The three primary benefits of using Enterasys Secure Networks policy in your network are
provisioning and control of network resources, security, and centralized operational efficiency
using the Enterasys NetSight Policy Manager. Policy provides for the provisioning and control
of network resources by creating policy roles that allow you to determine network provisioning
and control at the appropriate network layer, for a given user or device. With a role defined,
rules can be created based upon up to 23 traffic classification types for traffic drop or
forwarding. A Class of Service (CoS) can be associated with each role for purposes of setting
priority, forwarding queue, rate limiting, and rate shaping.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

263

Switches
RFC 3580 switches can be used with less functionality (VLAN only)
If your edge switches are not Enterasys, another solution is to aggregate users at the
distribution layer: up to 256 users per switch using K Series switches, or 9216 users on an
S Series switch.
Enterasys NetSight
Policy Manager within the Enterasys NetSight is the Policy Configuration Tool
RADIUS
RADIUS is not required for Policy but is required for Authentication

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

264

Here we compare the Enterasys implementation of policy-based networking with
policy-based networking as defined by other networking equipment manufacturers. Because
port level, each user/device may be placed in its own QoS/security container which
provisions network resources directly to the network connection point.
Other equipment manufacturers use ACLs to provision network resources to users contained
within their VLANs.
The Enterasys implementation of Policy to the physical layer supports true end-to-end QoS,
in that QoS guarantees are provisioned from the point of connection to the network, not
upstream at the first routed interface. Furthermore, security can be enforced before traffic
enters the network, conserving valuable bandwidth. Therefore, QoS guarantees are much
more robust, dropping undesirable traffic before it enters the network and reserving the
bandwidth needed for mission critical applications.
This is the power of Enterasys Policy-Enabled Networking.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

265

The Enterasys Secure Networks concept takes authentication and authorization to a new
level, assigning roles on-demand to users based on their authentication credentials (or lack
thereof). These roles are in turn associated with services, i.e. collections of traffic
classification rules. Services can be created to contain or deny traffic.
What this means to a network administrator is a truly scalable solution. In contrast to
competitive offerings which require VLANs (a cumbersome network topology consideration),
the Enterasys Secure Networks solution addresses the issue of guest access by applying
traffic classification rules within the context of policy. Specifically, the administration of policy
is not limited to VLAN assignment. This substantially reduces the network configuration
complexity required while allowing the organization to exercise strict control over the type
(protocol-based rules) and amount (bandwidth allocation, QoS) of guest traffic permitted on
the corporate LAN.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

266

If Internet access for non-employees (e.g. suppliers in a corporate environment, hotel guests,
students) is desired, most competitors' solutions demand that these users be assigned to a
single guest VLAN. This can result in the obvious problem of users from different
organizations being on the same virtual LAN segment and therefore having access to each
other's network traffic and information.
Additionally, there is no provision for providing varying levels of access and service. For
instance, hotel guests may have internet access in guest rooms. If the management wishes
to make a range of Internet services available to guests, each of which furnishes greater
bandwidth and/or access rights, then the above scenario is clearly inadequate. Furthermore,
there is no security mechanism in place to ensure that users within the same VLAN cannot
access one another's traffic.
To avoid this situation, it may be necessary to configure multiple guest VLANs, theoretically
one for each potential guest user type. This adds a very large layer of complexity to the
configuration issues faced by IT and is not a scalable solution.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

267

Although the structure of a policy-enabled network architecture will vary from one
organization to another, most implementations will take the general form shown above.
Enterasys translates the business or network level of policy distribution to what we call
roles. This defines the specific job responsibility and function of individual employees or
groups of employees (e.g. engineering, sales, finance).
At the service provisioning level, network resources are allocated to the defined groups
based on whether or not the role is permitted or denied access to the resource(s). This level
would also include traffic shaping considerations such as how much bandwidth is to be
allotted to the various permitted protocols.
At the device level, classification rules are defined. These rules will be grouped together
logically to form the services which will be distributed to the roles to provide the policy
structure as defined by management and IT.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

268

A higher education policy deployment example:


In this example there are four roles: Network Administrator, Faculty, Student and
Guest. Each role defines a distinct set of network resources that are allocated to end
systems when the role is assigned to a port.
The Faculty role is associated with the Deny Administrative Protocols, Deny
Unacceptable Use, and Deny Legacy Protocols services. Each of these services contains a
set of classification rules, each of which implements multilayer traffic classification logic
implemented at the port level to define a set of network resources. When the Faculty role is
assigned to a port, all traffic received on this port is manipulated by the classification rules
defined in the services. Thus an end system that is assigned the Faculty role will have any
TFTP, Telnet, and SNMP traffic it generates discarded at the port of connection, as defined
by the Deny Administrative Protocols service, as well as DHCP Reply, RIP, OSPF, Apple,
DECNet, and IPX, as defined by the Deny Unacceptable Use and Legacy Protocols
services. All other traffic will be allowed on the network.
To support modularity of network resource provisioning with policy, services can be assigned
to more than one role if the role demands the same set of network resources defined by the
service. For example, in this figure the Student role is associated to the same services as
the Faculty role in addition to the Deny Faculty Server Farm Service. An end system
assigned the Student policy role will be denied access to TFTP, Telnet, SNMP, DHCP
Reply, RIP, OSPF, Apple, DECNet, and IPX from the Administrative Protocols, Acceptable
Use, and Legacy Protocols services. Additionally, any traffic destined to the IP address
range used by the facultys server farm will also be discarded at ingress to the network, as
defined by the Deny Faculty Server Farm service.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

269

The three ways roles are assigned are statically, by mapping, and dynamically.
Statically assigned roles are assigned to a port; this is also called the Default Role.
IP Address Mapping and MAC Address Mapping are supported on the S and K Series; VLAN
mapping is supported on all other Enterasys policy-capable switches.
RADIUS authentication types include:
802.1X
Port Web Authentication
MAC Authentication

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

270

In Enterasys NetSight Policy Manager, three levels of policy configuration deliver role-based
provisioning of network resources.
How does this map to configuration on the switches?
Policy roles in Policy Manager are actually Policy Profile commands within the CLI of
Enterasys switches, while the rules in Policy Manager are the classification rules on the
devices. Thus policy roles, as defined in Policy Manager, may also be referred to as policy
profiles.
Note that services are in fact not configured on devices; services act as a grouping of
classification rules in Policy Manager for organizational purposes and are not used by the
switches.
In Policy Manager, policy roles are associated to Services, which are then associated to
classification rules, thereby relating policy roles to classification rules through services.
On devices this relationship is direct where policy roles are associated directly to
classification rules as previously described.
The underlying logic for this translation is accomplished by Policy Manager.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

271

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

272

Voice over IP traffic is sensitive to latency and jitter. Controlling the environment in which
Voice traffic travels will allow for the necessary voice over IP requirements for a given
vendor.
Latency is the delay a packet experiences crossing the network.
Jitter is the variation in that delay, which can be caused by congestion.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

273

Under conditions of overflowing buffers, interactive response time suffers. Delay and jitter
can devastate multimedia applications.
In the worst case, even the largest buffers overflow and packets must be dropped. Action
should be taken before buffers are filled.
NOTE: Increased bandwidth is not discussed further in this module, because it is a physical
upgrade of a network.
Buffering can solve some issues, but it can be inadequate. As buffers fill, traffic is
increasingly delayed and may be discarded.
Removing unnecessary equipment and traffic can also be helpful, but results vary and the
rewards can be outweighed by the time required to find offenders.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

274

Multi-layer frame classification allows the administrator to control traffic through the use of
classification rules at the point of ingress for the end system. This allows any number of
actions to be implemented dynamically on any combination of Layer 2, 3, or 4 packet
attributes in the packets. Note that Layers 2, 3, and 4 refer to the Data Link, Network and
Transport layers, respectively, of the Open Systems Interconnection (OSI) Model.
Switches classify incoming frames into a particular VLAN and priority level, or discard/forward
the frame. The incoming frames are processed based on the VLAN and priority classification
assignment from Policy. Thus traffic classification with Policy supports a robust level of QoS
in addition to security.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

275

When a frame enters a switch, the Traffic Classification Logic, which will be explained in
detail in later slides, is used to inspect the frame. Traffic Classification is implemented on
Enterasys switches through the configuration of Policy and port settings of the device. The
Traffic Classification Configuration determines whether or not an action is taken on the frame.
While Enterasys switches make traffic classification decisions based on layers 2 through 4
information, their forwarding mechanisms are still that of a layer 2 store-and-forward device
or, for those platforms that support routing, that of a layer 3 router.
In the illustration above, a packet upon ingress to the switch is classified to VLAN 4 because
its SIP field is set to A.B.C.D through the implementation of the Traffic Classification Logic.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

276

The classified traffic can be forwarded or discarded. If it is forwarded, it can be assigned to a
specific VLAN or CoS. This allows the administrator to filter unwanted traffic and allow only
business-critical or other authorized applications to be transmitted over the network.
business-critical or other authorized applications to be transmitted over the network.
In the illustration above, a packet upon ingress to the switch is assigned to VLAN 4 because
of the contents of its SIP field through the implementation of the Traffic Classification Logic.
Furthermore, the CoS is set to 1 as the default behavior defined for traffic classification on
this port. After traffic is manipulated through Policy in the Traffic Classification Logic, the
switching/routing logic will be used to determine the forwarding decision on the packet.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

277

The ingress logic is used for applying CoS and VLAN assignment to packets entering the
switch. Note that the details for applying Policy are deliberately left out here; Policy
application is platform-dependent and will be covered later in this module.
Frames are placed in the proper queues based on the associated priority obtained from the
ingress rules. A higher value indicates a higher importance to send the packet out of a
particular port. This will be discussed in detail in the Forwarding Treatment module of this
course.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

278

Policy Profile:
A data structure used to define a set of specific services allocated to a user/device
for the alignment of business function to network resource usage.
Consists of a set of precedence-ordered Classification Rules that allocate network
resources to user/device.
Defines default packet handling behavior in the case of no classification rule match.
Default packet handling behavior includes default VLAN or default CoS assignment,
discard or allow.
In the example above, the Policy Profile Sales is defined with the default packet handling
behavior of assigning a VLAN of 5 and a CoS of 1.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

279

Classification Rule
A conditional packet handling rule that consists of three parts:
- Policy Profile association
- Packet attribute(s) with specified field value
SMAC, DMAC, Ethertype, VLAN tag, TCI, SIP, DIP, ToS, IP Protocol, IP
Frag bit, TCP port, UDP port, IPX source, IPX destination, etc.
- Corresponding action
VLAN assignment; CoS assignment; packet discarding; permitting; port
disabling, Syslog message generation; SNMP trap generation

A Classification Rule takes the indicated action on a packet when the packet is
assessed against its associated Policy Profile, AND the ingressed packet
attribute(s) match the criteria defined in the rule:
- Packet attributes supported are platform-dependent.
- VLAN assignment, CoS priority assignment, packet discarding, allowing,
port disabling, Syslog message generation, and SNMP trap generation are
possible actions for a classification rule hit.
- Actions supported are platform-dependent.

In the example above, the classification rules for the Policy Profile Sales define actions based
on specific packet attributes of received traffic such as the SIP field and DIP field of IP
packets.
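On an S or K Series switch, the Sales profile and a SIP-based rule could be expressed in the CLI roughly as shown below. This is a hedged sketch: the rule keyword ipsourcesocket and the parameter layout are recalled from the Enterasys policy command set and should be confirmed against the CLI reference for your platform; A.B.C.D is the placeholder address used in the slides.

set policy profile 1 name Sales pvid-status enable pvid 5 cos-status enable cos 1
Creates profile 1 (Sales) with a default VLAN of 5 and a default CoS of 1.
set policy rule 1 ipsourcesocket A.B.C.D mask 32 vlan 4
Adds a SIP-based rule to profile 1 that assigns matching packets to VLAN 4.

In practice, NetSight Policy Manager generates the equivalent commands for you when the policy domain is enforced.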

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

280

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

281

Traffic Classification Logic


For each ingressed packet, it is determined if and which Policy Profile is assigned to
the packet based on the policy profile assigned to the port on which the packet was
received.
In this example, a packet is received with a SIP of A.B.C.D on port ge.x.y. Because Policy
Profile Sales is assigned to port ge.x.y, this packet is manipulated by the policy
configuration of Policy Profile Sales as shown on the preceding slides.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

282

In this example, a classification rule match has been made for the classification rules under
Policy Profile Sales based on the SIP of the received packet, and the packet is assigned to
VLAN 4. However, because the classification rule configuration does not specify the setting of
the CoS for this packet, the CoS value is taken from the default Policy Profile Sales
configuration, which specifies a CoS value of 1.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

283

The precedence ordering of Classification Rules is platform-dependent. Note that the


precedence of a classification rule is determined by the attribute on which the classification
rule is based.
In this example, there are 2 classification rule matches under Policy Profile Sales based on
the SIP and DIP of the received packet. Since a SIP-based classification rule has higher
precedence than a DIP-based classification rule, the SIP-based classification rule is selected
for use in traffic classification of this packet.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

284

Classification Rule Precedence determines which classification rule is used if multiple


classification rules match an ingress packet and the actions between the classification rules
are contradictory; for example one rule might assign the packet to a VLAN, while the other
rule says to discard the packet. Note that the classification rule precedence ordering is
organized based on the attribute on which the classification is based. A lower value denotes
higher precedence.
Note that for any classification rule, a mask may or may not be specified with various lengths
(mask support for classification rules is platform-dependent). Therefore, if multiple
classification rules are configured using the same attribute with varying mask lengths, the
matching classification rule with the best match (greatest mask length) has highest
precedence.
For example, assume two VLAN classification rules were configured: classification rule 1 for
SIP 172.16.3.0/24 assigning packets to VLAN 10, and classification rule 2 for SIP
172.16.0.0/16 assigning packets to VLAN 20. If a packet was received with SIP 172.16.3.1,
then classification rule 1 would be selected because, although this packet matches both
classification rules, classification rule 1 has a longer mask and is therefore a better match.
However, if a packet was received with a SIP of 172.16.2.1, then classification rule 2 would
be used because this packet would not even match classification rule 1.
Furthermore, note that platforms which support the IP source and destination socket
classification rules, such as the S or K Series, do not explicitly support IP source address
and IP destination address classification rules. Classification rules for IP source addresses
and IP destination addresses are configured using a 32 bit mask for IP source socket and IP
destination socket classification rules.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

285

TCI Overwrite should be used very carefully. Enabling it means that any information within
802.1Q (which includes 802.1p) and Layer 3 QoS information will be ignored in favor of the
policy based settings.

Note: B/C Series devices do NOT support full TCI Overwrite. The B/C Series switch will not
overwrite the 802.1Q VLAN bits, but it does support overwriting the 802.1p priority bits when
TCI Overwrite is enabled.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

286

In this example, because the incoming packet is tagged, the setting of the TCI overwrite for
Policy Profile Sales affects the action of the classification rule and Policy Profile. The
rewriting of the 802.1p priority and VID specified in the received VLAN tag is possible
because the TCI Overwrite is enabled for this policy profile.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

287

In this example, Policy Profiles Sales, Engineer, and Staff are all assigned to port ge.x.y.
An attribute of incoming packets, in this case SMAC or SIP, is used to determine which Policy
Profile to apply.
Note: B3/C3 switches support 3 roles per port.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

288

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

289

The information that is configured in Enterasys NetSight will be enforced to the switch, and
CLI commands will be generated based on the Policy Manager GUI.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

290

This example shows a policy profile called MAP_IP with a default permit and a CoS of 2.
The role will be assigned dynamically to a port if an IP address defined by the mapping is
seen by the switch.
Tagged Packet VLAN to Role Mapping - Provides a way to let policy-enabled devices assign
a role to network traffic, based on a VLAN ID. When a device receives network traffic that has
been tagged with a VLAN ID (tagged packet) it uses the Tagged Packet VLAN to Role
mapping list to determine what role to assign the traffic based on the VLAN ID. Tagged
Packet VLAN to Role mapping can be configured at the device level (all devices) and at the
port level (for an individual port on a device). A VLAN can only be mapped to one role at the
device level, but the same VLAN can be mapped to a different role at the port level.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

291

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

292

This example adds VLAN 152 to the egress list, untagged, wherever this policy profile is applied.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

293

Accounting is switch sensitive; you must be aware of which switches support enabling Syslog
and Audit Trap on Rule Hits.
Once we have the packet classified we can assign:
- Class of service
- Permit
- Deny
- Contain to VLAN
- Log the hit
- Disable the port

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

294

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

295

You will get error messages if you select LAGs belonging to the other current switches.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

296

Create a Policy Domain for specific areas of the network in which the policy configuration of
the devices will be consistent. Policy Manager provides the ability to create multiple policy
configurations by allowing you to group your roles and devices into Policy Domains. A Policy
Domain contains any number of roles and a set of devices that are uniquely assigned to that
particular domain. Policy Domains are centrally managed in the database and shared
between Policy Manager clients.
The first time you launch Policy Manager, you are in the Default Policy Domain. You can
manage your entire network in the Default Policy Domain, or you can create multiple domains
each with a different policy configuration, and assign your network devices to the appropriate
domain. By default, the Default Policy Domain is pre-loaded with a Policy Manager Database
file called Demo.pmd. The roles, services, rules, VLAN membership, and class of service in
this initial configuration define a suggested implementation of how network traffic can be
handled. This is a starting point for a new policy deployment and will often need
customization to fully leverage the power of a policy-enabled network.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

297

Following installation, access the roles/services/rules by opening the database file called
demo.pmd.
You can import policy data from a .pmd file into a Policy Domain. Make sure that the domain
you want to import a file into is your current domain. Select File > Import > Import From File.
The Import from File window opens. Enter the name and path for the data file (.pmd) you
want to import, or navigate to the appropriate folder to retrieve the file.
You can import policy configuration data from one policy domain into another. Make sure
that the domain you want to import data into is your current domain. Select File > Import >
Import From Domain. (This menu option is not available if only one domain exists, as there
are no other domains from which to import data.)

NOTE: If you decide that you want to return to the previous configuration (that the import
overwrote), you can perform a File > Read Policy Domain operation to restore the
configuration, as long as you have not saved the data you imported.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

298

Network Elements / Port Groups tab - a database listing the devices on which the defined
policies will be deployed. These devices form the active edge of the network.
The Port Groups tab allows the network administrator to create logical groupings of ports on
the network for organizing policy and authentication deployment in Policy Manager.
A network administrator may use either predefined port groups (such as backplane ports) or
user-defined port groups. Use the Port Configuration Wizard to configure a group of ports at
the same time with one selection.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

299

Under the Roles/Services tab, Policy Roles can be created to define a set of network
resources to be allocated to an end system. In the example shown above, there are four
roles defined: Administrator, Enterprise Access, Enterprise User, and Guest Access, as
configured by default in the demo.pmd file for Policy Manager.
Click on the Roles folder in the left pane under the Roles main tab. The Details View sub-tab
provides an overview of the configuration of the Policy Roles, including the default access
control, default CoS settings and how many classification rules are assigned to the policy
role.
The Role Wizard may be used to configure a new Policy Role.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

300

Service Groups and Services are displayed under the Roles/Services main tab. Grouping
classification rules that serve the same purpose into Services can facilitate network
administration. For example, the Deny Unsupported Protocol Access service is used to drop
traffic for protocols that should not be running on the enterprise network. Individual rules that
deny access to AppleTalk, Novell IPX, and other protocols are grouped under the Deny
Unsupported Protocol Access service.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

301

When a policy role is selected in the left pane, the General sub-tab displays the default
settings for the policy role. This screen shows the setting for the Guest Access Policy Profile
displayed in the previous slide.
The TCI Overwrite can be enabled/disabled per policy role. When TCI overwrite is enabled,
the VLAN and Priority information of a tagged packet may be set to a new value either based
on a classification rule hit or by the default access control or CoS settings for a policy role.
Otherwise, when the TCI overwrite is disabled, the VLAN and priority values formatted in the
802.1Q tag of tagged packets are not affected by policy configuration.
The Default Access Control section sets the default behavior for packets assigned to this
policy profile. For example, if a packet assigned to the Guest Access policy role does not
match a Permit, Deny, or VLAN classification defined by a rule associated with the Guest
Access policy role, the packet is permitted on the network using the PVID of the port the
packet was received on for VLAN assignment.
The Default Class of Service section sets the default behavior for packets assigned to this
policy profile. For example, if a packet assigned to the Guest Access policy profile does not
match any Priority classification rules associated with the Guest Access policy role, the
packet is permitted on the network and is assigned to the Priority 1 CoS as defined by the
Guest Access role.
Note: Not all platforms support TCI overwrite.
Note: For security reasons, it is recommended not to allow end users to send VLAN/priority-
tagged frames.
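
To make the default behavior concrete, the following short Python sketch models how a role's
default access control and default CoS are applied to a packet that matches no classification
rule. This is purely illustrative; the role structure, field names, and values below are
assumptions made for the example and are not Policy Manager's internal data model.

GUEST_ACCESS = {
    "tci_overwrite": False,      # leave the 802.1Q tag of tagged packets untouched
    "default_access": "permit",  # default access control: permit
    "default_vlan": None,        # None means "use the receiving port's PVID"
    "default_cos": 1,            # default Class of Service for unmatched packets
}

def apply_role_defaults(role, port_pvid):
    """Return (vlan, cos) for a packet that matched no classification rule."""
    if role["default_access"] == "deny":
        return None, None                     # packet is dropped
    vlan = role["default_vlan"] or port_pvid  # fall back to the port PVID
    return vlan, role["default_cos"]

print(apply_role_defaults(GUEST_ACCESS, port_pvid=10))  # -> (10, 1)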


When a policy role is selected in the left pane, the associated services are displayed. In the
screen above, the Enterprise Access policy role is associated with two services, the
Acceptable Use Policy service and the Application Provisioning service. Using the Services
main tab, the classification rules composing these and other services can be viewed;
because association is transitive, all of those classification rules are also associated with
this policy role.
After you have finished creating classification rules, grouping them into Services, and
(optionally) grouping the Services into Service Groups under the Services main tab, the
Services and/or Service Groups may be associated with Policy Roles using the Services field
under the role's General sub-tab.


When a Policy role is selected, the VLAN Egress sub-tab displays the VLAN egress list
controls. This specifies which VLANs, if any, have their egress lists updated with the port the
Policy Profile is applied to, and the format (tagged, untagged, forbidden) of this setting. This
should be configured for any Policy Profile that assigns packets to a VLAN either through the
configuration of the default access control or through the setting of VLAN classification rules.
By placing this port on the VLAN egress list of the specified VLANs, bidirectional
communication is enabled.
For example, the Student Class policy role may be configured with a default access control
that assigns packets to VLAN 3, while a VLAN classification rule (not shown above) assigns
VoIP traffic to VLAN 40 (the Voice VLAN). When the Student Class policy role is assigned to
a port, this port must be added to the egress lists of VLANs 3 and 40 so communication on the
return path may be forwarded back out of this port. This is accomplished by using the VLAN
Egress sub-tab and adding these VLANs to the policy profile's VLAN egress list controls.
Because this policy role is assigned directly to end systems at their point of connection, the
VLAN 3 entry is untagged. However, devices using VoIP (such as a VoIP phone) are expected
to send and receive tagged packets, so the VLAN 40 egress list entry is formatted as tagged.


In addition to statically assigning Policy Roles to a port, Roles can be mapped based on specific traffic
attributes (MAC address, IP address, and VLAN tag) at layer 2 and layer 3.
Note: Only the K- and S-Series support the mapping of a Policy Role based on a MAC/IP address.
Mapping configurations can be made under the Mappings sub-tab when a policy role is selected:
In the MAC-to-Role Mapping section, a MAC address with mask can be entered to specify that the
associated Policy Role will be assigned to a frame received on any port of the device for a specific
MAC address. For example, if the K Series device receives a frame with a source MAC (SMAC) address of
00:11:22:33:44:55 on any port of the device, then the packet will be assigned to the Guest Access
policy role.
In the IP-to-Role Mapping section an IP address with mask can be entered to specify that the
associated Policy Role will be assigned to a packet received on any port of a device for a specific IP
address.
In the Tagged Packet VLAN-to-Role Mapping section, a VID can be entered to specify that a Policy
Role be assigned to a frame received on any port of a device with a specific 802.1Q tag. For example,
if the switch receives a frame with an 802.1Q tag formatted with a VID of 30, then the traffic will be
assigned to the Guest Access policy role.
The Authentication Based VLAN to Role Mapping section applies to Enterasys devices that
concurrently support both policy profile assignment through authentication with the RADIUS Filter-ID
attribute (discussed in detail in the next module of this course) and dynamic VLAN assignment through
authentication with the RADIUS Tunnel attribute. A VLAN can be specified in this field so when a
RADIUS Tunnel attribute is returned to the device during the authentication of a user/device that
matches this VID, the corresponding policy profile will be applied to the port of the authenticating user,
instead of just changing the PVID of the port to the indicated VLAN.
Note: In order for VLAN to Role mapping to work on B/C-Series devices, the device-level
Authentication Type must be set to Multi-User (via the Device Configuration Wizard or the device
Authentication tab) and the port-level "Number of Users Allowed" setting must be set to 2 (via the Port
Configuration Wizard or the Port Properties Authentication Configuration tab, Authenticated User
Counts sub-tab).
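
The lookup performed by these mappings can be illustrated with a small Python sketch. The
tables, helper names, and the precedence order shown are assumptions made for the example
only; the actual mapping precedence and capabilities are platform dependent.

import ipaddress

mac_to_role  = {("00:11:22:33:44:55", 48): "Guest Access"}           # (MAC, mask bits)
ip_to_role   = {ipaddress.ip_network("10.20.0.0/16"): "Enterprise User"}
vlan_to_role = {30: "Guest Access"}                                   # tagged frames with VID 30

def mac_matches(mac, entry_mac, mask_bits):
    to_int = lambda m: int(m.replace(":", ""), 16)
    shift = 48 - mask_bits
    return (to_int(mac) >> shift) == (to_int(entry_mac) >> shift)

def resolve_role(src_mac, src_ip, vid):
    for (mac, bits), role in mac_to_role.items():        # MAC-to-role (illustrative order)
        if mac_matches(src_mac, mac, bits):
            return role
    for net, role in ip_to_role.items():                 # then IP-to-role
        if ipaddress.ip_address(src_ip) in net:
            return role
    return vlan_to_role.get(vid)                         # then tagged-packet VLAN-to-role

print(resolve_role("00:11:22:33:44:55", "192.0.2.7", None))  # -> Guest Access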


Traffic Mirroring
Policy Manager provides policy-based traffic mirroring functionality that allows network
administrators to monitor traffic received at a particular port on the network, by defining a
class of traffic that will be duplicated (mirrored) to another port on that same device where the
traffic can then be analyzed. Traffic mirroring can be configured for a rule (based on a traffic
classification) or as a role default action. Only incoming traffic can be mirrored using
policy-based traffic mirroring, and the traffic mirroring configuration takes precedence over
regular port-based mirroring.
Traffic mirroring uses existing Policy Manager port groups (created using the Port Groups
tab) to specify the ports where the mirrored traffic will be sent for monitoring and analysis.
When an end user connects to the device where the specified ports exist, and is assigned the
role that has traffic mirroring configured, then there is a traffic mirror set up for the port the
end user connected to. However, if the end user is assigned a role that does not have traffic
mirroring configured, or if the end user connects to a device that doesn't have any ports in the
specified port groups, then no traffic mirror will exist.
Examples of how traffic mirroring might be used include:
Mirroring the traffic from suspicious users based on their MAC or IP address.
Monitoring VoIP calls by IP address or port range.
Mirroring traffic to optimized IDS systems, for example one system for all HTTP traffic (to look
for suspicious websites) or one system for all emails (to look for spam).


The Guest Access role can also be viewed from the CLI. Note that mirror-destination has been
appended to the policy profile configuration string; this indicates that the role is configured for
traffic mirroring.
# policy
set policy profile 1 name "Guest Access" pvid-status enable pvid 4095 cos-status enable cos
8 mirror-destination 1


When a classification rule is selected, denoted by the green circular icon, a different set of
tabs is shown in the right pane. These tabs display the details of the classification rule.
The General sub-tab controls the enabling and disabling of the classification rule and sets the
classification for a specific platform or all platforms.
From the General sub-tab, the Traffic Description section displays the traffic attribute used for
classification. In the example shown above for the Discard Appletalk classification rule, the
Ethertype field is used to identify packets as being part of the Appletalk protocol. Layer 2
through 4 attributes may be used for classification.
The Action section displays the action to be taken for the classification rule.
Access Control:
Permit Traffic - any packet matching the traffic attribute will be assigned to either the
default VLAN for the policy role if specified, or the PVID of the port the packet was
received on.
Deny Traffic - specifies a Drop Classification rule. Therefore, any packet matching
the traffic attribute and value configured under the Traffic Description tab will be
dropped.
Contain to VLAN - Any packet matching the traffic attribute will be assigned to the
specified VLAN.
When a Class of Service is selected in the Class of Service section, this specifies the
configuration of a Priority classification rule. Any packet matching the traffic attribute will be
assigned to the CoS set in this pull-down. If a CoS is not specified, then either the default CoS
for the policy role or the port priority will be used to assign the packet to a CoS.
The Device Support sub-tab displays which devices do and do not support the currently
selected classification rule.
The Rule Usage sub-tab displays whether or not the classification rules have been hit.
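
As an illustration of how a single rule's traffic description and access control action combine,
the following Python sketch models the Discard Appletalk rule (EtherType 0x809B). The rule
layout and function names are illustrative assumptions, not the Policy Manager data model.

discard_appletalk = {
    "traffic_description": ("ethertype", 0x809B),  # AppleTalk EtherType
    "access_control": "deny",                      # Deny Traffic
    "vlan": None,                                  # only used for "contain"
    "cos": None,                                   # no CoS override for this rule
}

def rule_action(rule, packet, role_default_vlan, port_pvid):
    field, value = rule["traffic_description"]
    if packet.get(field) != value:
        return "no match (role defaults apply)"
    if rule["access_control"] == "deny":
        return "drop"
    if rule["access_control"] == "contain":
        return "forward on VLAN %d" % rule["vlan"]
    # "permit": use the role's default VLAN if one is set, otherwise the port PVID
    return "forward on VLAN %d" % (role_default_vlan or port_pvid)

print(rule_action(discard_appletalk, {"ethertype": 0x809B}, None, 10))  # -> drop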


Determine whether or not the configured services are fully supported on various platforms.
Use the role Device Support tab to view the classification rule information that would be
written to your device(s), should you decide to enforce the selected role. The information is
displayed according to device type, and is particularly useful if you have devices that only
support certain aspects of policy management. To access this tab, select a role in the left
panel Roles tab, then select the Device Support tab in the right panel.
Devices Area
This section displays folders for different device types. Expand the folders to see your
network devices and device groups organized according to device type. If the domain does
not include any devices of a specific device type, that device type folder is displayed in gray.
For those device types that are included in the domain, the device type folders are displayed
in black when the rule(s) are fully supported, and red when they are not fully supported.
Classification Rules Area
Based on the device type selected, this section displays the classification rules that will be
included or excluded when you enforce the selected role.
Excluded:
Lists any unsupported classification rules. These rules will not be included when you enforce
the selected role.
NOTE: Disabled rules are always listed as Excluded.
Included: Lists any supported classification rules. These rules will be included when you
enforce the selected role.


Services and Service Groups are displayed under the Services main tab.
Services are used to group a set of classification rules that serve the same purpose, for
example the Deny Unsupported Protocol Access service. Service Groups are used to group
Services that are used for the same purpose.
As shown above, the Acceptable Use Policy Service Group groups a set of services to define
the acceptable usage policy for a network.
Grouping Services into Service Groups provides another level of organization for ease of
policy administration.
Services and/or Service Groups may be associated with a Policy role to implement the proper
allocation of network resources.


Roles, Service Groups, Services and Classification Rules may be configured manually by
right clicking in the left pane of the respective main tabs and selecting New.
Then, the tabs in the right pane are used to configure these data structures. The Role Wizard,
Service Wizard and Classification Rule Wizard make this process even easier.
Role Wizard:
Right click on the Role folder under the Roles main tab and select Role Wizard. This wizard
will walk you through the configuration of a Policy Role including the default access control
and CoS settings, as well as adding services to the policy role.
Service Wizard:
Right click on the Services folder under the Services main tab and select Service Wizard.
This wizard will walk you through the configuration of a service including the creation of
classification rules for the service.
Classification Rule Wizard:
Under the Services main tab, right-click on the service that the classification rule is to be grouped
into. This wizard will walk you through the configuration of a classification rule including which
attribute to use for classification and what the action is for the classification.


From the Tools tab, use the Port Configuration wizard to configure port settings for individual
ports or groups of ports, including the default policy role and authentication method state on
user ports. We will use this wizard to configure default policy role settings for all ports in a
defined port group representing each port class.
In the first menu of the Port Configuration wizard, select the port level options for
configuration. Because authentication is not deployed on the enterprise in the static policy
scenario, we can select just Default Role and Drop VLAN Tagged Frames from the General
settings section to configure only the default role on ports.


Statically apply previously created roles for specified edge ports using the Port Configuration
wizard.


Finally, the ports must be specified for the execution of these configurations as shown in the
Port Selection window. In this window, we take advantage of the categorization of the ports
into port groups representing a port class. Instead of navigating through each device to find
the ports that are considered User Ports, we have already defined these ports by the
configuration of the User Ports port group, and therefore we simply select this port group to
configure the default policy role and unauthenticated behavior for all of these ports on the
enterprise.


Because Wireshark is launched against a role's current configuration, and not the current
configuration on a network device, you do not need to configure any network device in order
to see how the role will handle traffic. This makes Wireshark very useful when planning your
network roles, by demonstrating the benefits of the role before enforcing the role to your
network devices.
In addition, Policy Manager provides the ability to simultaneously launch multiple instances of
Wireshark, allowing:
Side-by-side comparison of two roles against the same captured data.
Side-by-side comparison of the same role with different rule sets against the same captured
data.


In the left-panel Roles tab, right-click on the role you want to view with Wireshark, and select
the Launch Wireshark with Rule Color Filters option from the menu. The Launch Wireshark
window opens.
Select the Action Type you would like color filters created for. Wireshark will color filter the
traffic data based on how that specific action type is defined in the role by the rules and the
role default actions.
If you want to create color filters only for certain device type specific rules, use the drop-down
list to select the device type. Otherwise, select "All Device" Rules Only.
Select the View Color Filters against PCAP File radio button, and use the Select button to
navigate to the .pcap file you want to use.
Select the Restrict data to traffic originating from endstation checkbox. This option will filter
out return and broadcast traffic, allowing Wireshark to accurately reflect only the traffic that
would be filtered by the role being applied to a user. Enter a valid IP address or hostname for
the end station, or enter "localhost". Click OK.
Wireshark opens, displaying the data.
Note: If no data appears when Wireshark opens, the end station IP address entered in step 4
does not match the source IP of any traffic in the .pcap file.
Note: See Enterasys NetSight Policy Manager User Guide for configuration details on
launching Wireshark against live local or live remote traffic.


The Wireshark color filter scheme that is used varies according to the selected Action Type:
Access Control
Discarded traffic is colored black
Discard Rule = pink text
Role Default Discard = white text
Permitted traffic is colored green
Permit Rule = bright green
Role Default Permit = pale green
Contained to a VLAN traffic is colored yellow
Contain Rule = bright yellow
Role Default Contain = pale yellow
Note: See Enterasys NetSight Policy Manager User Guide for full details on color coding
scheme for all actions


The following example displays Wireshark launched against a data capture file using the role
Application Traffic. The example shows how the Application Traffic role denies certain traffic
(FTP, Telnet, and TFTP), whose packets are displayed in black, and permits SNMP traffic,
whose packets are displayed in green.

Determining Rule Hit:


You can determine the specific rule that each packet hit by selecting the packet in the table,
and then looking in the Coloring Rule Name field in the Frame packet data (identified by the
arrow in the above slide). The example shows that the "Deny TFTP" rule in the Application
Traffic role caused the selected packet to be denied (black). You can also see that permitted
SNMP packets are highlighted in bright green.


Note: If you use any of the configuration wizards in Policy Manager, enforcing is automatic
upon completion of the configuration; however, as a general rule, it's a good idea to always
enforce any configuration changes at the time they are made.


Use the Enforce Preview window to view the information that will be written to your devices,
before you actually enforce. This feature is particularly useful if you have devices that only
support certain aspects of policy management. For example, some devices support only the
policy features of policy management; some devices support the policy features and
classification rules, but do not support VLAN forwarding for certain classification rules; and
some devices fully support all policy management features, including policy, classification
rules, and VLAN forwarding for all classification rules.
The Enforce Preview window appears whenever you click the Enforce button, or select the
File > Enforce Role Set menu option, or double-click the enforce icon on the status bar, so
that you always get a chance to review the effects of enforcing prior to actually performing the
enforce. You can control whether or not this view automatically appears with the Show this
view on Enforce checkbox, or in Optional Views in the Options window.
You can also access this window from the File > Enforce Preview menu option, and from the
Enforce Preview button on the confirmation message that appears when a verify has taken
place.


Port Groups data structures are used to organize the deployment of configuration settings
supported in Policy Manager. Port Groups can be used to logically group ports based on a
certain characteristic to make administration tasks faster and more efficient. By grouping
ports logically, multiple ports on the same device, or on several devices, can be configured
simultaneously while maintaining the desired degree of specificity. Administrators have the
choice of defining port groups based upon their own criteria with User-Defined Port Groups or
using the Pre-Defined Port Groups.
Pre-defined port groups organize ports based on the speed of the port, or the type of port.
Other port groups may be created to define a set of ports where the same type of users
connect to the network, or a group of ports that are running the same authentication method.
This makes the configuration of per port settings in Policy Manager, whether it be policy or
authentication configuration, very easy in that a setting may be made for an entire set of ports
specified by the port group with one click.


Port group explanation:


Guest User Ports
Untrusted guest users connect and are provisioned with a restricted set of
network resources by default, as defined by the Guest Access policy role
If authentication is not deployed, then any device/user that connects to these
ports regardless of its credentials will be allocated the network resources
defined by the Guest Access policy role. Therefore, this may determine the
configuration of the Guest Access policy role deployed on the network.
Trusted users may successfully authenticate with dynamic allocation of
organizational network resources.
Example: Devices providing connectivity to conference rooms where
partners and customers are hosted
Trusted User Ports
A limited set of network resources is provisioned by default, as defined by the
Enterprise Access policy role.
Untrusted guest users must not have access to these ports.
Example: Devices providing connectivity to offices and other highly controlled
locations
Uplink Ports
Ports that are inter-switch links providing connectivity to other infrastructure
devices on the network. As part of the deployment process, uplink ports might
be frozen to avoid misconfiguration with policy.


With the policy configuration now downloaded to the specified infrastructure devices, the
policy roles may now be applied to ports to affect ingressing traffic. To do this, set the Default
Role for a port to a configured policy role. By doing this, all traffic entering the port will be
assigned to the selected policy role and manipulated by the associated classification rules
and policy role settings.
Setting a Default Role can be implemented by highlighting a switch in the Network Elements
tab, then from the Ports tab click the Retrieve button. Then highlight desired ports, right click
and select Set Default Role.
From the Roles window select a Role and click OK, now that Role is applied to the
highlighted ports.
There is a Port Configuration Wizard, accessed through the Tools drop-down menu, which
may also be used to configure common authentication and policy settings for individual
ports, or for ports grouped together into port groups, simplifying the default policy role
configuration.
To minimize the impact on inadvertently disrupting business aligned traffic on the network
with the deployment of policy, a Phased Implementation Approach to policy deployment may
be used.


Freezing a port enables you to "lock" it so that no one can accidentally reconfigure sensitive
attributes such as port authentication or default role settings. For example, if a port is frozen
and the administrator later assigns a default role to the entire device, the frozen port will not
receive the new default role. To reconfigure a frozen port, you must clear its frozen status, do
the configuration, then freeze it again. One application of this feature would be to prevent
inter-switch link ports from being accidentally reconfigured (it is normally good practice to
freeze uplink ports). You can tell if a port is frozen or not by looking at the port icon or by
checking the Frozen Status on the Port Properties General tab.


Policy should be designed based on the capabilities and limitations of the switches.


Policy Manager stores configuration data in a database, with files bearing the file extension
.pmd (Policy Manager database). These .pmd files contain any configurations saved to
that filename, including roles, services, VLANs, etc. Certain preconfigured .pmd files are
available for download and are deployable as is. Users may customize these .pmd files to
suit individual needs or build their own customized .pmd files.
It is possible to import an existing .pmd file that has been previously configured. This allows
you to change from one policy configuration to another simply by opening a different data file.
You can also import policy configuration data from one data file into another data file. When
you import a data file, Policy Manager checks for rule conflicts. It is good to get into the habit
of periodically saving the Policy Manager Database.


When traffic first arrives at the switch, it is all treated the same; there is no differentiation
between the packets. It is all unclassified. With traffic classification, we can test packets to see
if they match criteria you have defined. Once there is a match, you can optionally mark the
packet. Once the packet is marked, you don't have to reclassify it at each switch.
There are two types of markings:
Layer 2 packets can be marked with an 802.1p priority
Layer 3 packets can be marked with a DSCP (Differentiated Services Code
Point) value
During the forwarding treatment component of QoS we can:
Prioritize the traffic based on the markings
Rate limit the traffic
Control the traffic through the use of Queues


In the early days of Internet development, researchers took the attitude that all data traffic is
of equal importance. For this reason, every packet was treated the same way.
While this approach accelerated development of a robust and stable architecture, the
migration of internet technology into the enterprise field, the academic world and, ultimately,
general public use introduced some new concerns. Businesses did not wish to make trade
secrets available to their competitors; many Internet users with networking expertise became
adept at hacking into networks where they didn't belong; and new applications demanded
more network bandwidth and, in addition, were unable to tolerate time delay.


Applications have different requirements for delay, delay variation (jitter), bandwidth, packet
loss, availability, etc. These parameters form the basis of QoS.
An IP network must be designed to support the QoS catering to the types of applications
implemented on the infrastructure.
Applications are characterized by several basic performance criteria:
Availability
Error performance
Connection setup and Response time
Lost transactions due to network congestion
Speed of fault detection and correction
Example: Data traffic can be very tolerant of delays and packet drops, but bandwidth
intensive.
Voice traffic is not bandwidth intensive, yet very sensitive to jitter and packet loss.
Coexistence of multimedia traffic, such as video and voice, with traditional data traffic, such
as FTP and HTTP, has driven the need for QoS in recent years.
For example, VoIP traffic requires very low jitter with one-way delay around 100 milliseconds.
The guaranteed bandwidth needed for VoIP is in the range of 8 kbps to 64 kbps. However, a
file transfer does not suffer from the effects of jitter, although packet loss significantly
decreases the throughput of the application.
All Enterasys switches support Quality of Service (QoS) to an extent. This means that the
QoS mechanisms covered from a general perspective in this module of the course are
implemented in some fashion on the Enterasys infrastructure devices.
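
As a worked example of why the on-the-wire bandwidth of a VoIP call is higher than the codec
rate, the short calculation below assumes a G.711 codec with 20 ms packetization and typical
RTP/UDP/IPv4/Ethernet overhead; actual figures vary with the codec and encapsulation used.

codec_rate_bps  = 64_000               # G.711 payload rate
packet_interval = 0.020                # 20 ms of audio per packet
payload_bytes   = codec_rate_bps * packet_interval / 8    # 160 bytes of voice payload
overhead_bytes  = 12 + 8 + 20 + 18     # RTP + UDP + IPv4 + Ethernet header/FCS
pps             = 1 / packet_interval  # 50 packets per second
wire_bps        = (payload_bytes + overhead_bytes) * 8 * pps
print("%.1f kbps on the wire" % (wire_bps / 1000))         # ~87.2 kbps vs 64 kbps payload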


In order to have priority, some traffic must be higher priority and other traffic must be lower
priority. If all packets are high priority, it is the same as if they all had low priority. For someone to have more,
someone must have less.


The DiffServ QoS model provides a relatively simple and coarse method of categorizing
traffic into different classes, called Classes of Service (CoS), and applying QoS mechanisms
to those classes.
Network infrastructure devices manage traffic by grouping similar types of traffic
together and treating each type as a class with its own level of service priority.
Highly scalable, implementing QoS for aggregated types of traffic and avoiding per-flow QoS management.
Still provisions the various delay, jitter, and bandwidth requirements for different applications
on the network, depending on configuration.
Appropriate for high-throughput traffic, for low-latency traffic, or for best-effort traffic.


The above diagram describes the switch operations on a packet as it is switched or routed
through an S-, K-, G-, C-, or B-Series device. The switch performs the following operations on
the packet:
Classify
When packets are received by the switch, the packets are classified to a specific
VLAN and CoS using either default port settings or using particular traffic attributes
as implemented by policy. A partial listing of these traffic attributes is shown above.
Action
After the packets are classified, the packets can either be forwarded unmodified or
modified in different ways as directed by policy such as dropping the packet or
rewriting its 802.1p priority and/or ToS field for CoS assignment in traffic
prioritization.
Delivery
While the switching and routing logic determines which port or ports a packet is
forwarded out of, the CoS assignment determines which priority transmit queue the
packet will be placed on for the implementation of traffic prioritization. The
forwarding treatment mechanisms implemented during the delivery of the traffic
actually perform the traffic prioritization component of the packet forwarding. These
mechanisms include inbound/outbound rate limiting/shaping, queuing, and congestion
avoidance techniques.
The Classify and Action operations of the switch implement the Traffic Classification and
Traffic Marking components of the DiffServ QoS model while the Delivery operation of the
switch implements the Forwarding Treatment component of the DiffServ QoS model.


The first component of QoS is traffic classification. Traffic classification should be
implemented as far toward the edge of the network as possible. Traffic classification places
traffic to be transmitted onto the network into buckets based on how the traffic is to be
serviced, defining Classes of Service on the network.
All packets within a bucket are part of the same Class of Service. During the Traffic Marking
component of QoS, all packets within the Class of Service will be marked, in some way, so
that packets belonging to a CoS can be uniquely identified by any node in the network based
on the same traffic attribute.


Different features on an infrastructure device may be used for traffic classification.


ACLs may be applied to layer 3 routed interfaces to affect traffic classification before a layer 3
routing decision is made by an infrastructure device.
Enterasys policy is implemented at the physical layer for all packets received on a physical
port, regardless of the forwarding decision of the infrastructure device (layer 2 or layer 3).
Enterasys multilayer switches, such as the S series, K series, G Series and B/C Series
platforms, are capable of implementing both ACLs and policy at layer 2 for traffic
classification. However, layer 2 policy is far more powerful in the implementation of QoS than
ACLs.
Furthermore, ACLs and layer 2 policy are also used to implement network security through
access control enforcement, denying certain types of traffic from entering the network, in
addition to implementing QoS on the enterprise network by assigning identified traffic to a
CoS. Policy is also more effective than ACLs in implementing security through access control,
in that access control with per-device granularity can be provisioned directly at the port
of connection.


Traffic classification uses attributes of received traffic to assign packets to a CoS. The
attributes that may be used for traffic classification are dependent upon the platform
implementing traffic classification. In the example above, OSPF traffic, identified by the IP
Protocol Type field in the IP header being set to 89, is discarded while ICMP traffic, identified
by the IP Protocol Type field in the IP header being set to 1, is assigned to the Low Priority
CoS. Furthermore, HTTP and FTP traffic, identified by layer 4 TCP destination ports, are
assigned to Medium Priority CoS. Finally, traffic used for Voice over IP (VoIP), identified by
layer 4 TCP/UDP destination ports, is assigned to the High Priority CoS.
The set of rules that define traffic classification for CoS assignment may be implemented
identically on all infrastructure devices on the network for any user that connects to the
network. However, this is not the desirable implementation for traffic classification in QoS.
Ideally, a network administrator would like to implement rules that define prioritization in
traffic classification based on the identity of the user that is connected to the network.
Therefore, each user, or group of users, may have the traffic they source classified and
prioritized based on the user's organizational responsibilities on the network.
For example, the traffic classification implementation shown above may be applicable for
faculty in a university who are permitted to use VoIP for communication. Therefore, VoIP traffic
generated by a member of the faculty should be assigned a high priority for transmission
through the network, because applications using VoIP require low latency and jitter. However,
a student may not be allowed to use VoIP, and therefore traffic classification for students
connected to the network should discard VoIP traffic so that it does not even enter the network.
Locking down the network with this type of access control not only improves QoS for users that
are permitted to use high priority applications, but also secures the network against exploitation
of prioritized applications by users not permitted to use these applications.
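
The slide's classification example can be expressed as an ordered rule table, sketched below
in Python. The field names, port numbers (for example, UDP 5060 for VoIP signalling), and
first-match evaluation are illustrative assumptions only.

rules = [
    {"match": ("ip_proto", 89),  "action": "drop"},                  # OSPF
    {"match": ("ip_proto", 1),   "action": "cos", "cos": "Low"},     # ICMP
    {"match": ("tcp_dst", 80),   "action": "cos", "cos": "Medium"},  # HTTP
    {"match": ("tcp_dst", 21),   "action": "cos", "cos": "Medium"},  # FTP
    {"match": ("udp_dst", 5060), "action": "cos", "cos": "High"},    # VoIP signalling (example)
]

def classify(packet, default_cos="Low"):
    for rule in rules:
        field, value = rule["match"]
        if packet.get(field) == value:
            return None if rule["action"] == "drop" else rule["cos"]
    return default_cos            # no rule hit: fall back to the role/port default

print(classify({"ip_proto": 6, "tcp_dst": 80}))  # -> Medium
print(classify({"ip_proto": 89}))                # -> None (discarded)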


The second component of QoS is traffic marking. Traffic marking should also be
implemented as far to the edge of the network as possible, and in some cases the end
systems themselves implement traffic marking. An example of this is an application that runs
on a PC and formats traffic with a specific ToS field setting. Another example is IP phones
which generate traffic with the 802.1p priority and/or ToS field already set. Although this
scenario is the easiest for the infrastructure to handle, this implementation does not represent
how most end systems format traffic. Furthermore, using the traffic marking implemented by
an end system is not a secure QoS deployment in that these traffic marking characteristics
may be exploited by users connecting to the network as a vector to launch attacks against the
infrastructure.
During the Traffic Marking component of QoS, all packets within a Class of Service will be
marked in the same way so that packets belonging to a CoS can be uniquely identified by any
node in the network. This marking may be at layer 2 using the 802.1p priority field for
Ethernet packets, or at layer 3 using the ToS field for IP packets, or both. This enables
infrastructure devices to implement the defined forwarding treatment for a CoS at every hop
throughout the network in the packet's transmission path. Traffic marking is explored in
depth in the following slides.


Traffic marking at Layer 2 is achieved with 802.1p priorities.


Traffic marking at Layer 3 is accomplished via the DSCP field.


Traffic marking uses the CoS assignment from traffic classification to mark all packets within
a CoS with the same traffic marking attribute. In the example above, ICMP traffic was
assigned to the Low Priority CoS. The Low Priority CoS is marked on layer 2 with an 802.1p
priority of 0 and marked at layer 3 with a DSCP value of 0 for the Default PHB class.
Furthermore, HTTP and FTP traffic was assigned to the Medium Priority CoS. The Medium
Priority CoS is marked at layer 2 with an 802.1p priority of 3 and marked at layer 3 with an
Assured Forwarding class DSCP (AF12, which corresponds to DSCP 12). Finally, traffic used for VoIP was
assigned to the High Priority CoS. The High Priority CoS is marked on layer 2 with an 802.1p
priority of 5 and marked at layer 3 with a DSCP value of 46 for the Expedited Forwarding
class.
However, it is important to note that support for traffic marking is platform dependent. Some
devices may only support traffic marking at layer 2 with the 802.1p priority while other devices
support traffic marking at both layer 2 and layer 3 with 802.1p priority and ToS rewrite. These
considerations must be taken into account when deploying QoS on the network.


The IEEE 802.1D standard for traffic prioritization provides for the ability to manage the
priority of received and transmitted frames. The configuration of user priorities and traffic
classes facilitate the management of:
Latency, to support business and voice applications.
Throughput, to meet service level agreements focused on bandwidth for specified
types of traffic.
Throughput, to meet the needs of bandwidth-sensitive applications such as RTP-type
traffic, specifically for voice services
With traffic prioritization, latency and bandwidth guarantees can be supported at
higher levels of network loading.
Previously referred to as IEEE 802.1p, Traffic Class Expediting and Dynamic Multicast
Filtering, the standard was ratified and published in IEEE 802.1D-1998.
Every packet is associated to a priority upon receipt by the switch. To implement traffic
prioritization, four bytes of information are inserted into each packet after the source address
field. The first two bytes are referred to as the Tag Protocol Identifier (TPID). This field always
has a value of 0x8100 (hex). If a switch sees an EtherType of 0x8100, it knows that the
packet is a Q-tagged packet.
The next two bytes are the Tag Control Information field, which carries information about user
priority, canonical frame indicator (CFI), and VLAN identification.
The Priority field contains three bits and can carry an ordered sequence of eight user priority
values from 0 to 7. Zero is the lowest priority value and the default on most Enterasys
switches. Seven is the highest priority. The value of this field is referred to as the 802.1p
priority.
For Ethernet devices, the CFI is always internally set to 0.
The remaining 12 bits indicate the VLAN ID. Twelve bits can encode values from 0 to 4095; VID 0 indicates a priority-tagged frame and 4095 is reserved, so usable VLAN IDs range from 1 to 4094.
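
The tag fields can be demonstrated with a short bit-packing example. The Python sketch below
builds and parses the 16-bit TCI that follows the 0x8100 TPID; the function names are illustrative.

def build_tci(priority, cfi, vid):
    assert 0 <= priority <= 7 and cfi in (0, 1) and 0 <= vid <= 4095
    return (priority << 13) | (cfi << 12) | vid    # 3 + 1 + 12 bits

def parse_tci(tci):
    return {"priority": tci >> 13, "cfi": (tci >> 12) & 0x1, "vid": tci & 0x0FFF}

tci = build_tci(priority=5, cfi=0, vid=40)   # e.g. voice traffic on VLAN 40
print(hex(tci), parse_tci(tci))              # 0xa028 {'priority': 5, 'cfi': 0, 'vid': 40}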


In the Differentiated Services architecture, the ToS field of the IPv4 header is redefined as the
DS field, where the six most significant bits are used for traffic marking, indicating the
Differentiated Services Code Point (DSCP) value of a packet.
This replaces the Precedence field and the DTR bits previously defined in the ToS field of
the IP header. Therefore, with the new definition of the DS field, up to 64 (2^6)
classes of traffic may be supported with this marking.
Furthermore, the IETF also defined the forwarding treatment, or PHB, for each of these
DSCP values to standardize QoS implementation on a network. The following slides explore
these PHBs.
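
Because the DSCP occupies the six most significant bits of the DS (former ToS) byte,
converting between the two is simple bit arithmetic, as the worked example below shows.

def dscp_to_tos(dscp):
    return dscp << 2      # the two low-order bits (now used for ECN) stay 0

def tos_to_dscp(tos_byte):
    return tos_byte >> 2

print(dscp_to_tos(46))    # EF   -> ToS byte 184 (0xB8)
print(dscp_to_tos(12))    # AF12 -> ToS byte 48  (0x30)
print(tos_to_dscp(0xB8))  # -> 46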


The third component of QoS is forwarding treatment. The forwarding treatment actually
implements the prioritization of packets in one CoS over packets in another CoS during
transmission, using congestion management techniques. The CoS to which a packet belongs is
determined by the traffic marking characteristics of the received packets. The following list
represents several congestion management techniques:
Transmit Queue Mapping
Based on the CoS assignment of a packet and the port(s) the packet is being
transmitted out of, the packet is placed on a physical transmit queue for transmission.
Queuing
Transmitting packets assigned to one CoS before packets assigned to another CoS
are transmitted on a port, by placing packets assigned different CoS on different
transmit queues. Different queuing algorithms exist and will be explained in the
following slides.
Rate limiting
Controlling the maximum rate at which packets belonging to a particular CoS may be
received/transmitted by discarding/flagging all packets that exceed this limit. This
limits the maximum amount of bandwidth that may be utilized by a CoS without
affecting port Rx/Tx buffers.
Rate shaping
Controlling the maximum rate at which packets belonging to a particular CoS are transmitted
by buffering all packets that exceed this limit, delaying the transmission of these
packets. This smooths the burstiness of traffic while increasing the delay of
packets.


Unlike Rate Limiting, Rate Shaping is UDP friendly because it buffers packets that are above
the rate, rather than dropping them. The above mechanisms are fully or partially supported
on Enterasys platforms.
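
The difference between the two mechanisms can be sketched with a simple token bucket, as
shown below: a limiter drops non-conforming packets, while a shaper queues them for later
transmission. The class, rates, and packet sizes are illustrative assumptions only.

from collections import deque

class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate, self.burst, self.tokens = rate_bps, burst_bits, burst_bits

    def refill(self, seconds):
        self.tokens = min(self.burst, self.tokens + self.rate * seconds)

    def conforms(self, packet_bits):
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False

def rate_limit(bucket, packet_bits):
    return "forward" if bucket.conforms(packet_bits) else "drop"

shaper_queue = deque()                       # shaping delays instead of dropping
def rate_shape(bucket, packet_bits):
    if bucket.conforms(packet_bits):
        return "forward"
    shaper_queue.append(packet_bits)         # held for a later scheduling interval
    return "queued"

bucket = TokenBucket(rate_bps=1_000_000, burst_bits=12_000)
print(rate_limit(bucket, 12_000), rate_shape(bucket, 12_000))  # forward queued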


It is important to note that for ports that support only 4 transmit queues, the 802.1p
priority/CoS of 1 or 2 is actually mapped to a lower queue than the transmit queue used by
the default value of 0. Likewise, for ports that support 8 transmit queues, the 802.1p
priority/CoS of 1 or 2 is actually mapped to lower queues than the transmit queue used by the
default value of 0.
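
A common default mapping that produces this behavior follows the IEEE 802.1D
recommendation, sketched below; actual queue mappings vary by platform and should be
verified against the product documentation.

PRIO_TO_QUEUE_8 = {1: 0, 2: 1, 0: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7}  # 8 transmit queues
PRIO_TO_QUEUE_4 = {1: 0, 2: 0, 0: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}  # 4 transmit queues

print(PRIO_TO_QUEUE_8[1], PRIO_TO_QUEUE_8[0])  # 0 2 -> priority 1 sits below priority 0
print(PRIO_TO_QUEUE_4[2], PRIO_TO_QUEUE_4[0])  # 0 1 -> priority 2 sits below priority 0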


This depicts how a packet is handled without priority queuing. The packets are forwarded first
in, first out (FIFO), regardless of their importance. Critical traffic, such as SAP/R3 or Voice
over IP traffic, can get stuck behind non-strategic traffic, such as HTTP, if the non-critical
traffic happens to reach the queue first.


With Strict Priority Queuing (SPQ) enabled, each packet is assigned to a queue based on its
priority. A higher priority queue must be empty before a lower priority queue can transmit.
Within the queue, however, an FIFO algorithm is used. If high priority queues are constantly
serviced (rarely emptied), lower priority queues may rarely get to transmit.
This shows an example of 4 queues. In this example, Q0 (with the lowest priority) is not
getting a chance to transmit packets until the higher priority queues empty.
SPQ ensures that higher priority traffic gets the fastest handling at each hop and will be
transmitted before low priority traffic minimizing delay in the packet transmission through an
infrastructure device. This type of queuing algorithm is used on links carrying mission critical
traffic that is extremely delay sensitive, such as VoIP.
However, SPQ does cause starvation of lower priority traffic. This occurs because the
queuing delays that would have otherwise been encountered by higher priority traffic are
randomly transferred to lower priority traffic. With a large number of packets being placed on
higher priority queues, lower priority queues may not be emptied causing an inordinate
amount of queuing delay for lower priority traffic. To avoid this condition, rate limiting may be
used to limit the bandwidth utilization of high priority traffic.
Traffic classification is used to identify the priority of ingress traffic for the mapping to a
transmit queue on transmission from the infrastructure device.


Strict Priority Queuing means that during periods of congestion and while there are packets in
higher queues, the higher priority packets are to be transmitted at the expense of delays to
lower priority traffic.
For example, if queue 3 should empty out its queue, then packets in queue 2 are transmitted.
If packets return to queue 3, then queue 2 stops transmitting its queued packets and queue 3
will start transmitting its packets again. Queues 1 and 0 may transmit very few times or never
get a chance to transmit their packets on a link that is consistently congested, which would
result in dropped packets, application timeouts, and packet retransmissions for TCP
connections. This is known as starvation.
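
The strict priority scheduling decision itself is simple, as the short Python sketch below shows;
it also makes clear why the lowest queues can starve. Queue contents and names are illustrative.

from collections import deque

def spq_transmit(queues):
    """queues: {queue_number: deque of packets}; the highest non-empty queue wins."""
    for q in sorted(queues, reverse=True):
        if queues[q]:
            return queues[q].popleft()
    return None                                  # nothing waiting

queues = {3: deque(["voip1", "voip2"]), 2: deque(["web1"]), 0: deque(["bulk1"])}
for _ in range(4):
    print(spq_transmit(queues))                  # voip1, voip2, web1, bulk1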


In contrast to Strict Priority Queuing, with Weighted Fair Queuing (WFQ), a percentage
(weighted value) is assigned to establish the amount of transmission capacity assigned to the
frames associated with each queue. Thus, unused bandwidth is shared amongst the queues.
Within a queue, a FIFO algorithm is implemented. For example, transmit queues
could be set as follows:
Q3 to 50%, so at least 50% of the Q3 frames are transmitted.
Q2 to 25%, so at least 25% of the Q2 frames are transmitted.
Q1 to 25%, so at least 25% of the Q1 frames are transmitted.
Q0 to 0%, so that no Q0 frames are transmitted until Q3, Q2, and Q1 frames are transmitted
In the example shown in this figure, for 100% of the packets serviced from the outbound
queue:
50% will be Queue 3
25% will be Queue 2
25% will be Queue 1
If there is bandwidth remaining when these frames are transmitted, Q0 frames will then be
transmitted.
WFQ allocates a percentage of bandwidth to traffic assigned to a specific priority, and this
bandwidth is re-allocated to traffic of a different priority if it is not used. Moreover, WFQ
protects against complete starvation of lower priority traffic, in that the lower priority queues
are guaranteed to be serviced at least a certain percentage of the time and traffic from these
queues will be transmitted.
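
A simplified weighted round robin sketch in Python, shown below, illustrates the idea: in each
service cycle a queue may send a number of packets proportional to its configured weight. The
weights and queue contents are illustrative, and real implementations are considerably more
sophisticated.

from collections import deque

weights = {3: 50, 2: 25, 1: 25, 0: 0}            # percentages; total must be 100
queues  = {q: deque("q%d-pkt%d" % (q, i) for i in range(4)) for q in weights}

def wfq_cycle(queues, weights, slots_per_cycle=4):
    sent = []
    for q in sorted(weights, reverse=True):
        quota = round(slots_per_cycle * weights[q] / 100)   # this queue's share of the cycle
        while quota and queues[q]:
            sent.append(queues[q].popleft())
            quota -= 1
    return sent

print(wfq_cycle(queues, weights))   # 2 packets from Q3, 1 from Q2, 1 from Q1, none from Q0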


The queue weights are user configurable and can be based on the network's traffic loads,
characteristics, priority assignments, etc. The total of the queue weights must equal 100%.
Weighted Fair Queuing allows each of the queues to transmit some of its packets, based on
the weighted percentage that is configured.
With weighted fair queuing, queue access to bandwidth is divided up by percentages of the
time slices available. Should a queue empty before using its current share of time slices, the
remaining time slices are shared with all remaining queues. The above slide depicts how
weighted fair queuing works. Inbound packets enter on the upper left of the box and proceed
to the appropriate priority queue. Outbound packets exit the queues on the lower right. Queue
3 has access to its percentage of time slices so long as there are packets in the queue. Then
queue 2 has access to its percentage of time slices, and so on round robin. Weighted fair
queuing assures that each queue will get at least the configured percentage of bandwidth
time slices.
The value of weighted fair queuing is in its assurance that no queue is starved for bandwidth.
The downside of weighted fair queuing is that packets in a high priority queue, with low
tolerance for delay, will wait until all other queues have used the time slices available to them
before forwarding. So weighted fair queuing would not be appropriate for applications with
high sensitivity to delay or jitter, such as VoIP.


Hybrid queuing allows for mission-critical traffic that is high priority to be serviced before all
other types of traffic, while the remaining lower priority traffic may be allocated a percentage
of the link bandwidth utilization. Transmit queues (TxQ) represent the hardware resources for
each port that are used in scheduling packets for egressing the device. The S-Series and
K-Series scheduler runs in a Low-Latency mode, which allows the customer to configure a
hybrid of strict priority and weighted fair queuing.
The S and K Series support 11 transmit queues. Queues 0, 9, and 10 are low-latency queues
(LLQs). You cannot configure an LLQ. Queues 1-8 are non-LLQs and can be configured.
The hardware scheduler will service all packets on queue 10 and then queue 9. Once there
are no more packets, the available bandwidth will be used to service queues 1-8 based on
the configured (strict or weighted fair queue) or default mode (strict). If there is any available
bandwidth after servicing these queues, then the remainder of the bandwidth will be used to
process queue 0.
This type of queuing is ideal when only one CoS of traffic is highly delay sensitive on the
network and all other Classes of Service, although need to be prioritized relative to each
other, are not composed of highly delay sensitive applications. Therefore, while the delay
sensitive CoS is always serviced before all other Classes of Service, the other Classes of
Service are allocated a percentage of the time on the link to transmit information relative to
each other, not interfering with the mission critical, delay sensitive traffic. An example of this
may be in the implementation of QoS for VoIP on the enterprise network.
Note: LLQs are hardware dependent. Not all hardware devices support low latency queuing.
The show cos port-config txq command can be used from the CLI to display LLQs for a given
module.
Note: TxQ Scheduling is not supported on fixed switches.


The queues implementing SPQ are serviced before the queues implementing WFQ, and the
queues implementing SPQ are serviced in sequential order based on the relative priority of
the queue.
The total of the Queue weights implementing WFQ must equal 100%, although this does not
translate directly into the bandwidth allocated to the queue for transmission in that the queues
implementing SPQ may consume some of this bandwidth.
In the example above, Queue 3 is implementing SPQ while the remaining queues are
implementing WFQ. Therefore, Queue 3 will be serviced before Queues 2-0, and Queues 2-0
are each guaranteed only a percentage of the bandwidth for transmission. After Queue 3 is
completely empty, Queue 2 then starts to transmit its packets, using its own 60% time
slice as long as the queue has packets to be transmitted. When Queue 2's time slice is up, Queue 1
then transmits its packets and empties its queue, thus moving on to Queue 0, which
uses what is left of Queue 1's time slice and its own time slice to transmit its packets. The
whole process then starts over with Queue 2 transmitting until a packet comes in on
Queue 3, at which point Queue 3 is emptied first.
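
The hybrid behavior can be sketched as "drain the low-latency queues strictly first, then share
what is left among the weighted queues." The Python sketch below is a simplification (the
weighted stage is reduced to highest-weight-first; see the WFQ sketch earlier for proportional
servicing), and the queue numbers and weights are illustrative assumptions.

from collections import deque

llq      = {10: deque(["control"]), 9: deque()}             # strict, highest number first
weighted = {2: deque(["web1", "web2"]), 1: deque(["mail1"])}
weights  = {2: 60, 1: 40}

def hybrid_transmit():
    for q in sorted(llq, reverse=True):                     # drain LLQs strictly first
        if llq[q]:
            return llq[q].popleft()
    for q in sorted(weighted, key=lambda k: -weights[k]):   # simplified weighted stage
        if weighted[q]:
            return weighted[q].popleft()
    return None

print(hybrid_transmit())   # -> "control": an LLQ always wins while it has packets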


Rate limiting may be configured for a CoS on a port, for an 802.1p priority on a port, or for all
Classes of Service on a port. This is dependent on platform limitations. Inbound Rate
Limiting drops or clips traffic for a certain priority if a configured rate is exceeded on inbound.
Note that traffic classification is implemented before inbound rate limiting. This is so traffic
assigned to a certain CoS can first be identified as being assigned to that CoS, and then rate
limited. Outbound Rate Limiting drops or clips aggregated traffic scheduled to be transmitted
out a certain port.
Rate limiting alleviates bursts in the traffic, limiting the size of the bursts to a defined
threshold as shown in the diagram above.
Note: Rate limiting functionality may also be used to identify packets received or
transmitted above a certain rate and specially mark these packets, such as changing the QoS
traffic marking by rewriting the ToS field or 802.1p priority of the packet.
Note: Fixed switches (A, B, C, D, G, and I Series) support inbound rate limiting only.


Rate shaping may be configured for a CoS on a port, for an 802.1p priority on a port, or for all
Classes of Service on a port. This depends on the platform.
Rate shaping retains excess packets in a queue and then schedules these packets for later
transmission over time. Therefore, the packet output rate is smoothed and bursts in
transmission are not propagated as seen with rate limiting.
Rate shaping implies that a queue exists with sufficient memory to buffer packets to be
transmitted, while rate limiting does not require the existence of a queue at all. Since queues
are an outbound concept, rate shaping may only be implemented on outbound queues.
Rate shaping requires the configuration of a scheduling function for the queue so that delayed
packets can be transmitted later. Because rate limiting discards packets above a specified
threshold, the scheduling algorithm configured for a queue is not related to the rate limiting
configuration.
Rate shaping can be implemented for multiple reasons, such as controlling bandwidth to offer
different levels of service. Furthermore, it may be used to avoid traffic congestion on other
links in the network by removing the bursts of traffic that can lead to discarded packets. Rate
shaping is of importance for real-time traffic where packet loss is extremely detrimental to
these applications. Instead of discarding traffic imposed by rate limiting, delays are induced
into its transmission by retaining the data for future transmission. However, the delays must
also be bounded in that real-time traffic is also sensitive to delays, in addition to packet loss.
Note that rate limiting and rate shaping have the ability to restrict the maximum output rate to a
certain value. However, neither mechanism provides a minimum bandwidth guarantee during
periods of congestion. These mechanisms only guarantee that bandwidth will be available
to other types of traffic in periods of link congestion. Therefore, in order to set a minimum
bandwidth guarantee during periods of congestion, rate limit or rate shape all Classes of
Service on a particular port so that each CoS has a minimum bandwidth allocation when all
other Classes of Service are congested and utilizing the maximum bandwidth they are allocated.


Traffic marking directs the forwarding treatment of packets within a CoS based on the PHB of
the CoS. In the example above, ICMP traffic was assigned to the Low Priority CoS, denoted
as the Default PHB. HTTP and FTP traffic was assigned to Medium Priority CoS, denoted as
the Assured Forwarding class AF12 PHB. VoIP traffic was assigned to the High Priority CoS,
denoted as the Expedited Forwarding class PHB.
The queuing aspect of the forwarding treatment is implemented with hybrid queuing. The
Low Priority and Medium Priority Classes of Service use WFQ, where the Medium Priority CoS
is serviced 75% of the time and the Low Priority CoS is serviced 25% of the time.
Furthermore, the High Priority CoS is implemented with SPQ queuing which is serviced
before the queues used for the Low Priority and Medium Priority Classes of Service are
serviced. As shown above, VoIP traffic is currently consuming the line because traffic exists
in this queue.
Moreover, RED is implemented for the Medium Priority CoS for congestion avoidance reasons. It is important to note that all applications classified within this queue are TCP-based, and therefore RED will signal the sender of FTP or HTTP traffic to slow the transmission of its traffic if congestion levels are increasing on the network. RED is not implemented on the High Priority CoS transmit queue because VoIP payload traffic is UDP-based.
This forwarding treatment configuration is set on all ports on the infrastructure that are forwarding traffic for these Classes of Service. In particular, this queuing configuration, along with any rate limiting/shaping and RED configuration settings, must be configured on ISLs and user ports for the proper deployment of end-to-end QoS across the infrastructure at every hop in the packet's path.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

358

*The C5/B5 uses the highest transmit queue for the transmission of stacking control protocols; therefore, it is strongly recommended that the highest priority queue not be used for the transmission of data packets.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

359

*RED is only effective for TCP-based applications. Therefore, enabling RED for queues that handle both TCP and UDP traffic may fail to avoid congestion, or may even increase congestion on the network to some extent.
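The generic RED behavior referenced here can be sketched as follows (a hypothetical Python illustration of the classic algorithm, not the switch implementation): below a minimum average queue depth nothing is dropped, above a maximum threshold everything is dropped, and between the two the drop probability ramps up linearly, which is what signals TCP senders to back off gradually.

import random

def red_drop(avg_queue_depth, min_th=20, max_th=60, max_p=0.1):
    """Return True if the arriving packet should be dropped (classic RED)."""
    if avg_queue_depth < min_th:
        return False                 # little or no congestion: never drop
    if avg_queue_depth >= max_th:
        return True                  # severe congestion: always drop
    # Linear ramp of drop probability between the two thresholds.
    p = max_p * (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < p

for depth in (10, 30, 50, 70):
    print(depth, red_drop(depth))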

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

360

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

361

Packets assigned to a CoS can be marked by assigning an 802.1p priority or ToS/DSCP value.
The above slide provides details on how to configure:
An 802.1p value
A ToS/DSCP value
The Drop Precedence to use (Low, Medium, High)
Once configured, the CoS can then be assigned to individual Roles and Rules within PM.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

362

As stated previously, additional Classes of Service may also be created in Policy Manager by
right clicking on the Classes of Service folder in the left pane under the Classes of Service
main-tab. For each CoS service defined, an 802.1p priority and ToS rewrite value can be
configured so that when packets are assigned to this CoS through policy configurations,
these traffic markings are implemented on the packet. Furthermore, transmit queue and
inbound rate limiting parameters may also be configured for the CoS, although this will be
discussed in detail later in the course.
In the example above, a Class of Service is created called VoIP that assigns a packet to the
802.1p priority of 5 and a ToS value matching the DSCP of the EF PHB. This defines a class
for all VoIP traffic on a network. The next step would be to configure policy to assign VoIP
traffic to this CoS.
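As a concrete illustration of the marking values used in this VoIP example, the following hypothetical Python sketch (assuming the standard field encodings, not any Enterasys API) shows how a DSCP of EF (46) maps into the IP ToS byte and where an 802.1p priority of 5 sits inside a VLAN tag's TCI field.

DSCP_EF = 46                          # Expedited Forwarding PHB code point

def tos_byte_from_dscp(dscp):
    """The DSCP occupies the upper six bits of the IP ToS/Traffic Class byte."""
    return dscp << 2

def tci_from_priority(priority, vlan_id):
    """802.1Q TCI layout: 3-bit PCP (802.1p priority), 1-bit DEI, 12-bit VLAN ID."""
    return (priority << 13) | (vlan_id & 0x0FFF)

print(hex(tos_byte_from_dscp(DSCP_EF)))        # 0xb8: the ToS byte for EF
print(hex(tci_from_priority(5, vlan_id=100)))  # 0xa064: PCP=5 in the top three bits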

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

363

In addition to classifying traffic at the role level, classification rules (K & S-Series switches
only) can also be assigned their own unique CoS. This allows Enterasys Switches to be
configured to assign a CoS to very specific types of traffic. In the example above, the VoIP Service has been configured with classification rules that identify VoIP traffic and assign traffic destined to UDP port 5004 to the VoIP CoS. Therefore, traffic that matches this profile will be assigned an 802.1p priority of 5 and a DSCP value of EF upon being identified.
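Conceptually, the rule behaves like a classifier keyed on the traffic descriptor. A minimal sketch (hypothetical Python, not the switch data path) of assigning a packet destined to UDP port 5004 to the VoIP CoS:

VOIP_COS = {"name": "VoIP", "dot1p": 5, "dscp": 46}      # DSCP 46 = EF
DEFAULT_COS = {"name": "Default", "dot1p": 0, "dscp": 0}

def classify(packet):
    """Classification rule: UDP destination port 5004 -> VoIP CoS, else default."""
    if packet.get("proto") == "udp" and packet.get("dst_port") == 5004:
        return VOIP_COS
    return DEFAULT_COS

print(classify({"proto": "udp", "dst_port": 5004}))   # VoIP marking is applied
print(classify({"proto": "tcp", "dst_port": 80}))     # falls back to the default CoS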

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

364

When a CoS is assigned in Policy Manager the configuration of the CoS is defined under the
Roles / Services tab.
When you install Policy Manager, the Class of Service Configuration window (available from
the Policy Manager Edit menu) is pre-populated with eight static classes of service, each
associated with one of the 802.1p priorities (0-7). You can use these classes of service as is,
or configure them to include ToS/DSCP, rate limit, and/or transmit queue values. In addition,
you can also create your own classes.
After you have created and defined your classes of service, they are then available when you
make a class of service selection for a rule action (General tab), a role default (General tab),
or an automated service (General tab).

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

365

Policy Manager allows you to create and define rate limits as components of a class of
service. Rate limits are used to control the transmit rate at which traffic enters and exits ports
in your network. Policy Manager uses role-based rate limits that are tied directly to roles and
rules, and are written to a device when the role/rule is enforced.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

366

Rate Limit
Specify the highest transmission rate at which traffic can enter or exit a port before packets
will be rate limited:
% - A percentage of the total bandwidth available (not available for priority-based rate limits)
PPS - Packets per second (not available for priority-based rate limits)
Kb/s - Kilobits per second
Mb/s - Megabits per second
Gb/s - Gigabits per second
Actions
Select the action(s) you would like this rate limit to use:
Generate System Log on Rate Violation - a syslog message is generated when the rate limit
is first exceeded.
Generate Audit Trap on Rate Violation - an audit trap is generated when the rate limit is first
exceeded.
Disable Port on Rate Violation - the port is disabled when the rate limit is first exceeded.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

367

Priority-based rate limits are supported in Policy Manager for use with legacy devices such
as the E7 and E1. Priority-based rate limits are associated with one or more of the eight
802.1p priorities (0-7). When the associated priority is selected for a class of service, the rate
limit becomes part of that class of service. When priority-based rate limiting is implemented,
the combined rate of all traffic on the port that matches the priorities associated with the rate
limit cannot exceed the configured limit. If the rate exceeds the configured limit, frames are
dropped until the rate falls below the limit. In order to control traffic inbound and outbound on
the same port, two rate limits must be configured (one inbound and one outbound).
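The key point, that the combined rate of all priorities tied to one rate limiter is measured against a single limit, can be sketched like this (hypothetical Python, simplified to a per-second byte counter rather than the actual hardware meter):

class PriorityRateLimiter:
    """One limit shared by every 802.1p priority associated with the limiter."""
    def __init__(self, priorities, limit_bytes_per_sec):
        self.priorities = set(priorities)
        self.limit = limit_bytes_per_sec
        self.used = 0

    def new_second(self):
        self.used = 0                 # the meter restarts each measurement interval

    def admit(self, priority, size):
        if priority not in self.priorities:
            return True               # traffic not covered by this rate limiter
        if self.used + size > self.limit:
            return False              # combined rate exceeded: frame is dropped
        self.used += size
        return True

rl = PriorityRateLimiter(priorities=[3, 4], limit_bytes_per_sec=1500)
print(rl.admit(3, 1000), rl.admit(4, 1000))   # True False: the second frame pushes past the limit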

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

368

An inbound/outbound priority-based rate limiter is associated to an 802.1p priority when it is configured, and as a result, every CoS defined in the Policy Manager configuration utilizing the specified 802.1p priority is automatically associated to this priority-based rate limiting configuration.
An inbound priority-based rate limiter will be configured on devices implementing Priority-based CoS and will rate limit all traffic entering a port that is associated to a specified 802.1p
priority. When a priority-based rate limiter is configured and enforced on devices running in
priority-based CoS mode, any policy role or classification rule assigning traffic to this 802.1p
priority will be rate limited to this value. Furthermore, traffic received by the switch on any
port with a VLAN tag set with this 802.1p priority will also be rate limited to this value. Note
that Policy Manager by default configures this rate limiter to the specified threshold for the
selected 802.1p priority on every port on the device. Therefore, it is important to disable the
configured inbound rate limiters on uplink ports to prevent undesirable consequences of rate
limiting.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

369

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

370

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

371

After configuring the transmit queue mapping for a Class of Service, the outbound rate
shaping threshold can be configured per transmit queue. The above slide covers TXQ Rate Shaper configuration from the Class of Service Configuration Window. Note that in Policy Manager
version 4.3, it is no longer necessary to access Advanced Class of Service Mode to perform
TXQ Rate Shaper configuration. An outbound rate shaper can be applied directly to a TXQ
CoS by selecting the CoS, double clicking on the TXQ Shaper column and selecting the rate
that is desired or creating a new rate for the highlighted CoS and its corresponding queue.

Rate shaping paces the rate at which traffic is transmitted out of the selected transmit queue. Rate shaping is disabled by default. Specify the rate at which traffic will be transmitted out of the queue. The ranges for each rate are listed below; a conversion sketch follows the list:
% - A percentage of the total bandwidth available
Kb/s - Kilobits per second
Mb/s - Megabits per second
Gb/s - Gigabits per second
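Because a shaping rate can be entered either as a percentage of link bandwidth or in absolute units, it can help to see the conversion spelled out. A small hypothetical Python helper (the link speed is an assumption supplied by the caller):

def shaper_rate_kbps(link_speed_mbps, percent=None, kbps=None):
    """Resolve a shaper setting given either a percentage of the link or an absolute rate."""
    if percent is not None:
        return link_speed_mbps * 1000 * percent / 100.0
    return float(kbps)

print(shaper_rate_kbps(1000, percent=10))   # 10% of a 1 Gb/s port -> 100000.0 Kb/s
print(shaper_rate_kbps(1000, kbps=50000))   # an explicit 50 Mb/s expressed in Kb/s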

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

372

As with prior releases of PM, TX queue rate shaping configuration can still be performed from Advanced CoS Mode. To access Advanced CoS Mode, click on Domain Managed CoS Components and select Show All CoS Components in Tree (Advanced Mode); the full CoS tree will then be available in the left-hand pane of the Class of Service Configuration Window.
By expanding a Port Group in the left pane under the Classes of Service main tab, the
physical transmit queues for the Port Group become available for configuration as shown
above. In this example, rate shaping will be set for the Trunk Port Group, which has 11 transmit queues as shown in the slide. When a transmit queue is selected, Policy Manager
allows the enabling of outbound rate shaping for the physical transmit queues under the
General sub-tab. Note that a previously configured rate may be selected, or a new rate can
be created. When creating a new rate, the rate can be specified in percentage of maximum
link bandwidth or in rate units such as Kbps or Mbps.
Shaping Algorithm
The shaping algorithm determines what will happen to traffic when the maximum amount of
traffic the transmit queue can hold is exceeded. The algorithm is set by default to Tail Drop.
In rate shaping, packets received on a transmit queue for transmission out of the port are
queued up for delayed transmission. With the tail-drop congestion avoidance algorithm for
outbound traffic shaping, packets are buffered up and discarded when the number of packets
exceeds the queue length.
Note: Not all Enterasys Switches support Outbound Rate Shaping.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

373

By expanding the Transmit Queue Port Groups folder in the left pane under the Classes of
Service main tab, the queuing configuration may also be configured for any ports belonging to
the specified Port Groups. In the example above, the Default Port Group for the 4 Transmit
Queue Port Group Port Type is configured with Strict Priority Queuing as shown by the Slice
Distribution pie chart indicating Queue 3 will transmit any packets in this queue before queue
2 will transmit, before queue 1 will transmit, and so on. However, it is also possible to
configure Weighted Fair Queuing on all ports within this Port Group by using the drop down
menu for the Transmit Queue Arbiter Mode.
Note that because this configuration is applied through the Transmit Queue Port Groups
folder, Enterasys platforms that do not support Full CoS mode cannot be configured by Policy
Manager for the queuing algorithm.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

374

For the Weighted Fair Queuing configuration, the slice configuration for each physical queue
can be configured with a specified percentage of the overall bandwidth available on that port.
In the example shown above, physical transmit queue 3 is serviced 70% of the time, and
transmit queue 2, 1, and 0 are serviced 15%, 10%, and 5%, respectively. This WFQ
configuration will only be implemented for the ports in the Default Port Group having 4
transmit queues.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

375

Hybrid queuing allows for mission-critical traffic that is high priority to be serviced before all
other types of traffic, while the remaining lower priority traffic may be allocated a percentage
of the link bandwidth utilization. Transmit queues (TxQ) represent the hardware resources for
each port that are used in scheduling packets for egressing the device. The S-Series and K-Series scheduler runs in a Low-Latency mode, which allows the customer to configure a
hybrid of strict priority and weighted fair queuing.
The S & K series support 11 transmit queues. Queues 0, 9 and 10 are low latency queues
(LLQ). You cannot configure an LLQ. Queues 1 - 8 are non-LLQs and can be configured.
The hardware scheduler will service all packets on queue 10 and then queue 9. Once there
are no more packets, the available bandwidth will be used to service queues 1-8 based on
the configured (strict or weighted fair queue) or default mode (strict). If there is any available
bandwidth after servicing these queues, then the remainder of the bandwidth will be used to
process queue 0.
This type of queuing is ideal when only one CoS of traffic is highly delay sensitive on the network and all other Classes of Service, although they need to be prioritized relative to each other, are not composed of highly delay-sensitive applications. Therefore, while the delay-sensitive CoS is always serviced before all other Classes of Service, the other Classes of Service are allocated a percentage of the time on the link to transmit information relative to each other, without interfering with the mission-critical, delay-sensitive traffic. An example of this is the implementation of QoS for VoIP on the enterprise network.
Note: LLQs are hardware dependent. Not all hardware devices support low latency queuing.
The show cos port-config txq command can be used from the CLI to display LLQs for a given
module.
Note: TxQ Scheduling is not supported on fixed switches.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

376

The above slide shows an example of a Hybrid Queuing configuration for the Trunk Port Group. The Trunk Port Group is an 11 Transmit Queue Port Group that supports Low Latency Queues (LLQ).
With the above configuration, queues 0, 9 and 10 will operate as low latency queues (LLQ), i.e., these queues will operate in Strict Priority Mode. Queues 1 - 8 will function as non-LLQs, and these queues will be serviced based on the Weighted Fair Queuing time slices shown above. The hardware scheduler will service all packets on queue 10 and then queue 9. Once there are no more packets, the available bandwidth will be used to service queues 1 - 8 based on the configured weighted fair queue slices. If there is any available bandwidth after servicing these queues, then the remainder of the bandwidth will be used to process queue 0.
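The servicing order described above can be captured in a small selection function. The sketch below is hypothetical Python (the WFQ share is approximated with a weighted random pick rather than the real hardware arbiter): queues 10 and 9 are always drained first, queues 1 - 8 share the remaining bandwidth by weight, and queue 0 only receives whatever is left.

import random

def pick_next_queue(queue_depths, wfq_weights):
    """queue_depths: {queue: packets waiting}. wfq_weights: weights for queues 1-8."""
    for llq in (10, 9):                                   # strict low latency queues first
        if queue_depths.get(llq, 0) > 0:
            return llq
    backlogged = {q: w for q, w in wfq_weights.items() if queue_depths.get(q, 0) > 0}
    if backlogged:                                        # queues 1-8 by WFQ weight
        r = random.uniform(0, sum(backlogged.values()))
        for q, w in backlogged.items():
            r -= w
            if r <= 0:
                return q
    if queue_depths.get(0, 0) > 0:                        # queue 0 gets the leftovers
        return 0
    return None

depths = {10: 0, 9: 2, 5: 4, 3: 1, 0: 7}
print(pick_next_queue(depths, wfq_weights={5: 75, 3: 25}))   # 9, as long as it is backlogged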

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

377

Data Center Bridging (DCB) is an Enterasys switch feature targeted specifically at the demands of the data center. Within the Enterasys switch line, the 7100-Series switch implements the full suite of Data Center Bridging protocols required for converged data center network applications. The 7100-Series is a family of high-density, high-performance 10 Gigabit Ethernet switches designed to meet the demands of the data center.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

378

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

379

Data Center Bridging (DCB) enhances Ethernet technology by enabling the convergence of various applications in data centers (such as Local Area Networks (LAN), Storage Area Networks (SAN), and High Performance Computing (HPC) applications) onto a single interconnect technology, by providing enhancements to existing 802.1 bridge specifications.
Existing high-performance data centers typically comprise multiple application-specific
networks that run on different link layer technologies, such as Fibre Channel for storage,
InfiniBand for high performance computing, and Ethernet for network management and LAN
connectivity. Data Center Bridging enables 802.1 bridges to be used for the deployment of a
converged network where all applications can be run over a single physical infrastructure.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

380

A Data Center Bridging implementation consists of:


Enhanced Transmission Selection (ETS): Provides a common management framework for
assignment of bandwidth to 802.1p CoS-based traffic classes (IEEE 802.1Qaz).
Congestion Notification (CN): Allows a device to detect congestion on an egress transmit
queue and send a message back to the source to back off the traffic rate to alleviate the
congestion (IEEE 802.1Q-2011).
Priority-based Flow Control (PFC): Provides a link level flow control mechanism that can
be controlled independently for each Class of Service (CoS), as defined by 802.1p. The goal
of this mechanism is to ensure zero loss under congestion in Data Center Bridging networks
(IEEE 802.1Qbb).
Application Priority (AP): Provides for the advertisement to the link peer of a preferred
priority to be applied to frames carrying application-specific traffic.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

381

Data Center Bridging Exchange (DCBX) Protocol:


The base control protocol utilized in Data Center Bridging is the Data Center Bridging
Exchange (DCBX) protocol. DCBX can be used by a device: to detect peer device
capabilities, to detect mis-configuration of a feature between the peers on a link, to perform
configuration of DCB features on the link peer.

Enhanced transmission selection, priority-based flow control, application priority, and congestion notification protocols utilize DCBX. DCBX uses LLDP to exchange attributes
between two linked peers. LLDP is unidirectional and advertises connectivity and
management information about the local station to adjacent stations on the same IEEE 802
LAN. DCBX state machines are invoked when the remote MIB changes and a DCBX TLV is
present.
Note: For Data Center Bridging to work correctly, the Peer Device (end station attached to the switch)
MUST be capable of processing the Enhanced Transmission Selection, Priority-based Flow
Control, Application Priority, and Congestion Notification messages sent by the switch. Under
most circumstances, end stations do not support Data Center Bridging functionality by
default.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

382

Enhanced Transmission Selection (ETS) queuing provides for configuring two or more traffic class queues (transmit queues, TxQs) with allocated bandwidth that will not be serviced until all non-ETS queues are empty. The switch services non-ETS queues first
using strict priority, based upon the priority assigned to the queue.
Enhanced transmission selection queue contents are forwarded to a fair queue scheduler on
a strict priority basis. The fair queue scheduler distributes the remaining bandwidth, after all
non-enhanced transmission selection queues are empty, based upon the bandwidth
allocation configured for the enhanced transmission selection queues.
Note: Enhanced transmission selection queuing is restricted to configurable queues. 7100-Series modules support both configurable and non-configurable queues. Non-configurable
queues are Low Latency Queues (LLQ). LLQs are labeled LLQ in the show cos port-config
command display.
Note: Enhanced Transmission Selection (ETS) is also an S-Series feature.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

383

In the above slide, 802.1p priorities 0 - 4 are configured for enhanced transmission selection
queuing. Priorities 4 and 3 are assigned to traffic class 4. Priorities 0, 1, and 2 are assigned to
traffic class 2. If all non-ETS queues are empty and there is remaining bandwidth, traffic
classes 4 and 2 will be serviced using weighted fair queue scheduling. Based upon enhanced
transmission selection bandwidth allocation, the weighted fair queue (WFQ) scheduler will
service traffic class 4 at 70 percent and traffic class 2 at 30 percent of remaining bandwidth.
Within each traffic class group (4 and 2 in this example), each priority is serviced based on a
strict priority (SPQ) scheduler.
Note: All Non-ETS queues are serviced using Strict Priority Queuing (SPQ)
Note: The 7100 and S-Series switches support up to two ETS groups (traffic classes).
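The configuration in the slide can be represented as a mapping of 802.1p priorities into two ETS traffic classes, with the leftover bandwidth split 70/30 between them. A hypothetical Python sketch using the slide's example values:

# Slide example: priorities 3-4 -> traffic class 4 (70%), priorities 0-2 -> traffic class 2 (30%).
# Any priority not listed here stays in a non-ETS (strict priority) queue.
priority_to_traffic_class = {4: 4, 3: 4, 2: 2, 1: 2, 0: 2}
ets_bandwidth_percent = {4: 70, 2: 30}

def ets_share_mbps(remaining_bandwidth_mbps, traffic_class):
    """Bandwidth a traffic class receives once all non-ETS (strict) queues are empty."""
    return remaining_bandwidth_mbps * ets_bandwidth_percent[traffic_class] / 100.0

# If 4 Gb/s is left over after the strict-priority queues have been serviced:
print(ets_share_mbps(4000, 4))   # 2800.0 Mb/s for traffic class 4
print(ets_share_mbps(4000, 2))   # 1200.0 Mb/s for traffic class 2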

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

384

Priority-based Flow Control (PFC), as defined by 802.1Qbb, functions similarly to 802.3 PAUSE,
but allows for pausing of flows per hardware egress queue that the PFC priority is mapped to
instead of per port.
Traffic congestion is determined on a per-queue basis. When the ingress queue that maps to a
particular 802.1p priority reaches a certain (non-configurable) threshold, a priority-based flow control message is sent to the peer to pause transmission of traffic tagged with that
priority. Once queue buffer levels return to normal, the ingress queue stops sending priority-based
flow control messages, and the peer no longer pauses traffic tagged with that priority.
Priority-based flow control is defined only for a pair of full-duplex MAC devices connected by one
point-to-point link. An egress queue is paused by a switch when it receives a message from its peer
on the other end of the link that priority-tagged frames mapped to the queue should be paused.
Flows are paused by egress queue, not by priority. If non-PFC priorities are mapped to the same
egress queue as PFC priorities, the non-PFC priority data will be paused along with the PFC
priority data. For example, if priorities 3 and 6 are set to egress on queue 5 and only priority 3 is
enabled for priority-based flow control, when priority 3 is paused, queue 5 will stop transmission
and priority 6 will be paused as well.
Note: Priority-based flow control and 802.3 PAUSE are mutually exclusive. Enabling one feature
automatically disables the other.
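The side effect described in the example, where a pause for priority 3 also stalls priority 6 because both are mapped to egress queue 5, can be sketched as follows (hypothetical Python, not switch firmware):

egress_queue_of_priority = {3: 5, 6: 5}     # both priorities mapped to egress queue 5
pfc_enabled = {3}                           # only priority 3 has PFC enabled

def priorities_actually_paused(pause_request):
    """A PFC pause for one priority stalls every priority sharing its egress queue."""
    paused_queues = {egress_queue_of_priority[p] for p in pause_request if p in pfc_enabled}
    return {p for p, q in egress_queue_of_priority.items() if q in paused_queues}

print(priorities_actually_paused({3}))      # {3, 6}: priority 6 is paused as a side effect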

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

385

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

386

Application priority advertises to the peer a preferred priority for frames carrying application-specific
traffic. Applications are defined by protocol (Ethertype, TCP, UDP, or Layer 4 port) and protocol ID.
Priority tagging is performed by the peer, not by the device advertising the application priority. The
peer receiving the Application Priority TLV tags its traffic to the advertised priority. Application
priority works with enhanced transmission selection and priority-based flow control in that tagged
protocol-specific traffic for the specified priority enforces enhanced transmission selection and
priority-based flow control behaviors on the traffic.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

387

As the use of Ethernet technologies in the data center expands, prevention of packet loss by some
applications becomes more critical. Congestion notification was created to:
Allow the monitoring of Congestion Controlled Flows (CCFs)
Detect congestion
Notify the source to lower the transmit rate for the offending congestion controlled
flow
Congestion Notification (CN), as defined in IEEE 802.1Q-2011 allows a device to detect congestion
at a switch congestion point (egress transmit queue) and transmit a Congestion Notification
Message (CNM) PDU back to the reaction point (flow source). The reaction point backs off the
traffic rate to alleviate the congestion. Congestion notification supports long lived data flows in a
network with delay due to limited bandwidth. It allows applications that are latency- or loss-sensitive to run over Ethernet technologies experiencing egress transmit queue congestion.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

388

For the congestion notification to work, it must be supported at all egress queues for each switch
that is in the path from the source to the destination. Congestion notification is applied to an egress
queue by configuring an 802.1p value as a Congestion Notification Priority Value (CNPV) that is
mapped to the transmit queue using CoS. This collection of egress queues configured for
congestion notification makes up the Congestion Notification Domain (CND).
Each transmit queue that has been configured for congestion notification is monitored to detect
congestion. When congestion is detected, a Congestion Notification Message (CNM PDU) is
generated at the congestion point and sent back to the source with the details of the queue and flow
that triggered the message. The source can then use this information to back off the transmission
rate for the application that triggered the CNM PDU.
A congestion notification priority value (CNPV) is an 802.1p value configured for congestion
notification and mapped to the same queue on all ports that make up the Congestion Notification
Domain for that CNPV. There are eight 802.1p values from 0 - 7. The maximum number of CNPVs
configurable on a port is seven. There must always be at least one alternate (non-CNPV) priority
value per port.
The congestion notification alternate priority is a non-CNPV used to protect the Congestion
Notification Domain from a non-congestion controlled flow packet with the same priority as a
configured CNPV on the port from triggering congestion notification. At least one 802.1p priority on
a port must be a non-CNPV. Any non-CNPV can be used as a congestion notification alternate
priority.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

389

Note: For Congestion Notification to work, the Reaction Point MUST be capable of processing the Congestion
Notification Message. Under most circumstances, end stations do not support Data Center Bridging
functionality by default.
The above figure highlights the congestion notification process. Note, the Reaction Point (1) is the source of
a congestion controlled flow. A congestion controlled flow consists of frames, all with the same CNPV, and all
assigned to a single transmit flow queue in the originating end station. A CNPV is an 802.1p priority mapped
to a congestion notification egress queue of each device in the flow. The reaction point must support the
processing of CNM PDUs and must be able to back off the transmission rate of the congestion controlled flow
based on information contained in the CNM PDU.
The reaction point is connected to a destination device (5) by traversing switches A and B (2 and 3). All
egress ports on switches in the path between the reaction point and the destination are configured as
congestion points (4). All congestion points are configured for a CNPV. In the above example, CNPV 6 is
mapped to transmit queue 6 for all congestion points.
Up to seven 802.1p priorities mapped to a port's transmit queues can be configured as CNPVs. At least one
802.1p priority on a port must be a non-congestion aware priority. Any non-congestion aware priority can be
used as a congestion notification alternate priority. When a packet that does not belong to a congestion
controlled flow has the same priority as a CNPV configured on a congestion notification domain edge ingress
port, it must be remapped to an alternate priority to defend against a false triggering of a congestion
notification by a non-congestion controlled flow.
The reaction point tags the traffic with the CNPV 6 mapped to queue 6 and transmits it to the destination. The
congestion point at switch B identifies congestion and creates a CNM PDU packet that is sent back to the
reaction point. When the reaction point receives the CNM PDU, it uses the information contained in the CNM
PDU to back off the reaction point transmission rate for the queue associated with the congestion controlled
flow that triggered the CNM PDU.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

390

A Congestion Notification Domain Defense provides a means of defending a congestion notification domain against incoming frames from outside of the domain. Domain defense assumes that every bridge along a path between two congestion-aware end stations, using a particular CNPV, is properly configured for congestion notification and therefore belongs to the congestion notification domain. Each such bridge ensures that frames not configured for a CNPV use different queues than the CNPV-configured queues on those devices.
Domain defense protects the boundaries of a congestion notification domain by preventing frames
not in a congestion controlled flow from entering congestion point controlled queues. Domain
defense takes advantage of the ability to change the priority value based upon whether or not the
port's neighbor is also configured with the same CNPV. If a frame with the same priority as the
CNPV is not in the congestion controlled flow, the frame priority is changed to the configured
alternate priority for that CNPV. A default domain defense mode is configured at each congestion
point port.
Alternate Priority
The congestion notification alternate priority is a non-CNPV, used to protect the congestion
notification domain from a non-congestion controlled flow packet with the same priority as a
configured CNPV, on the port from triggering congestion notification. At least one 802.1p priority on
a port must be a non-CNPV. Any non-CNPV can be used as a congestion notification alternate
priority. When a packet ingresses a port at the edge of a congestion notification domain and has the
same priority as a CNPV configured on the ingress port, the packet's priority must be remapped to
an alternate priority.
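The alternate-priority remapping at a domain edge port can be sketched as a small function (hypothetical Python, using the values from the example that follows, where CNPV 6 is remapped to alternate priority 4):

CNPVS = {6}                  # priorities configured for congestion notification
ALTERNATE_PRIORITY = {6: 4}  # non-CNPV chosen to protect the domain

def ingress_priority(port_defense_mode, frame_priority):
    """At a domain edge port, non-CN traffic matching a CNPV is remapped."""
    if port_defense_mode == "edge" and frame_priority in CNPVS:
        return ALTERNATE_PRIORITY[frame_priority]
    return frame_priority    # interior / interior-ready ports leave the priority alone

print(ingress_priority("edge", 6))       # 4: cannot falsely trigger congestion notification
print(ingress_priority("interior", 6))   # 6: stays in the congestion controlled flow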

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

391

Defense Modes:
Disabled - The domain defense mode state on a port for which congestion notification is disabled.
The priority is not a CNPV. Congestion notification does not control priority remapping of input
frames on this port. CN-TAGs are neither added by an end station nor removed by a bridge.
Disabled mode is only set administratively.
Edge - The domain defense mode configured on a congestion point port that resides at the edge of
the congestion notification domain. All frames ingressing the edge of a congestion notification
domain by definition do not belong to a congestion controlled flow for this domain. On this port for
the given CNPV, congestion notification controls priority remapping. The input frame priority
parameters are remapped to an alternate (non-CNPV) value. CN-TAGs are not added by an end
station, and are removed from frames before being output by a bridge. This mode is optional for an
end station.
Interior - The domain defense mode configured on a congestion point port that resides within the congestion notification domain between the flow's source reaction point and the destination end-station. This port does not yet know whether its neighbor is able to receive a CN-TAG in frames
sent to it. On this port for the given CNPV, the input frame priority parameters are not remapped.
CN-TAGs are not added by an end station, and are removed from frames before being output by a
bridge.
Interior-Ready - The domain defense mode configured on an interior congestion port that knows
its neighbor is able to receive a CN-TAG in frames sent to it. On this port for the given CNPV, the
input frame priority parameters are not remapped. CN-TAGs can be added by an end station, and
are not removed from frames by a bridge.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

392

In the above diagram there are two packet flow sources. One of the flow sources is a reaction point
configured for CNPV 6 and mapped to queue 6 (Server 1). The second flow source is not
configured for congestion notification (Server 2). Additionally, there are two paths between the two
packet flow sources and the destination. The first path is from the two flow sources to the
destination through switches A and B. The second path is from the flow sources to the destination
through switches A and C, an IP network cloud, and switch B.
Three flows can be derived from the above diagram; we will examine two of them:
Server 2 non-congestion notification flow - The 802.1p priority 6 frame sourced at Server 2 is a non-CN frame because its source is not a reaction point within a congestion notification domain. As the frame enters port 2 of Switch A, because port 2 is a domain edge port and the frame priority matches the CNPV for this domain, the frame priority (6) is changed to the alternate priority value (priority 4). The frame transits the remainder of the path to the destination incapable of triggering congestion notification.
Server 1 congestion notification flow (Switch A/Switch B path) - The CNPV 6 frame sourced
at Server 1 (reaction point) is a congestion notification frame. As the frame transits to the
destination, both ingress ports are configured for the interior-ready defense mode because they
have successfully negotiated CNPV 6 with their peers. The CNPV value is not changed to an
alternate priority when ingressing interior-ready ports. The frame exits the congestion notification
domain for CNPV 6 at port 2, Switch B, and arrives at the destination with its priority unchanged.
Should congestion occur at port 4 of Switch A or port 2 of Switch B, a CNM PDU will be sent back
to the reaction point which will back off the flow transmit rate so long as it receives CNM PDUs from
the congestion point.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

393

Admin - Domain defense is administratively configured. When defense choice is set to admin,
defense mode defaults to disabled on all ports. Admin can be configured both globally and on a
port basis.
Auto - Domain defense is dynamically configured using LLDP. When defense choice is set to
auto, defense mode defaults to edge on all ports. Auto can be configured both globally and on a
port basis.
Default - Domain defense is based upon the creation setting (enable or disable) used when the
CNPV is created. If creation enable is set, domain defense defaults to auto. If creation disable is
set, domain defense defaults to admin. Default is a port level setting.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

394

Using the above chart, if the global priority choice is set to auto and the port-priority choice is set to
default (row 3 of the table), both the defense mode and alternate priority are chosen automatically. In this case,
the defense mode would default to edge and the alternate priority would default to the next lowest
non-CNPV value, or if no lower one exists, the next highest non-CNPV value. Priority choice can be
configured globally using the set dcb cn priority choice command and at the port level using the
set dcb cn port-priority choice command.
set dcb cn priority priority choice {admin | auto}
set dcb cn port-priority port-string choice {admin |auto | default}

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

395

For dynamic configuration of domain defense to take place, you must ensure that the global priority
choice is set to auto (default setting when creating a CNPV in creation enable mode). Enable
congestion notification LLDP on the device using the set dcb cn priority lldp command (defaults
to enabled). Enable the sending of congestion notification TLVs on each congestion point port using
the set lldp port tx-tlv congestion-notif command (defaults to disabled).

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

396

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

397

This slide presents a high level view of how authentication and Enterasys policy are tied together in a
network deployment. The main components are as follows: Policy Profiles, Authentication Method, and
RADIUS. Authentication Methods (802.1X, MAC-based, PWA, CEP) and RADIUS will both be covered
in depth in this module. Authentication controls the access of connecting end systems to the network
based on supplied credentials. For Enterasys switches, the controlling of access to the network is
more than just opening or closing the physical port the authenticating user/device is connected to
based on the passing or failing of authentication by an end system. Upon passing authentication,
Enterasys switches have the capability to properly allocate network resources to authenticated
users/devices aligned with their business role. Therefore, authentication is used in conjunction with the
granular control of network resources supported through Enterasys policy implementation, to
automatically allocate network resources to an authenticated user/device directly to the physical port of
connection with location independence on the infrastructure. A high level overview of how Enterasys
switches accomplish this goal is explained as follows:
An authentication method is implemented between the user device connecting to the network and the
NAS in order to acquire credentials from the user/device for validation on the network. Authentication
methods vary in their implementation in order to cater to the types of devices that may connect to the
network. The RADIUS Server, containing a database of valid users and corresponding credentials, can
either accept or reject the authenticating user/device based on the credentials it received in comparison
to the credentials it has stored. If the credentials are correct, a RADIUS Access-Accept is returned to
the NAS, and if the credentials are invalid, a RADIUS Access-Reject is returned to the NAS. However,
in the database on the RADIUS server, each user can be associated to a Policy Profile by the
configuration of the RADIUS filter-ID attribute. This RADIUS attribute is simply a string and is
formatted in the RADIUS Access-Accept packet sent back from the RADIUS server to the NAS during
the authentication process. Therefore, each user on the RADIUS server can be configured with a
RADIUS filter-ID attribute that matches the name of the Policy Profile the user should be assigned for
the proper allocation of network resources.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

398

Finally, we see the entire system. Here is a simple system walkthrough.


1) Policies are created in the Policy Manager and are pushed to the network system.
2) The user authenticates to the network by connecting to the network access layer device.
3) The switch validates the authenticating end system's credentials by using its RADIUS client to communicate with the backend RADIUS server.
4) The Directory Services can be leveraged by the RADIUS server to validate the associated credentials of the authenticating end system.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

399

5) When the NOS/Directory sends the authorization packet back through the RADIUS server,
a matching of the NT Domain, or other grouping, to the appropriate policy profile occurs.
(Note: The administrator must configure the RADIUS server to add a Filter-ID to the authentication packet that contains the appropriate role, based on some other information (such as NT Domain) in that packet.)
6) The switch receives the authentication frame, and does two things; it checks to see if the
user has successfully authenticated based on the type of RADIUS message (i.e. RADIUS
Access-Accept or RADIUS Access-Reject) returned from the RADIUS server, and it checks
the Filter-ID RADIUS attribute to see if there is a policy role to assign to the authenticated
user.
7) If the user has successfully authenticated, the port is changed to the OPEN state, and if
there was a policy role defined in the RADIUS Access Accept message, the appropriate
policy role and associated classification rules are applied to that port.
Note - If the port state, or the authentication state changes, the port is set back to the default
port setting.
The goal of policy-based networks has been realized.
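Steps 6 and 7 can be summarized in a short sketch (hypothetical Python, not the switch implementation; the role names are examples): the NAS checks the RADIUS result and, if the Filter-ID names a locally configured role, applies that role to the authenticated session.

LOCAL_ROLES = {"Administrator", "Sales", "IP Phone"}   # roles already enforced on the switch

def apply_authentication_result(radius_code, attributes, port_default_role=None):
    """Return (port_state, role) based on the RADIUS reply."""
    if radius_code != "Access-Accept":
        return ("CLOSED", None)                    # failed authentication
    role = attributes.get("Filter-Id")
    if role in LOCAL_ROLES:
        return ("OPEN", role)                      # dynamic, user-specific policy assignment
    return ("OPEN", port_default_role)             # accepted, but no matching role returned

print(apply_authentication_result("Access-Accept", {"Filter-Id": "Sales"}))
print(apply_authentication_result("Access-Reject", {}))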

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

400

Policy Manager has been designed to work with a RADIUS server for authentication.
RADIUS (Remote Authentication Dial In User Service) is an authentication solution that has
been tested at Enterasys. It exchanges information between a RADIUS client (a device that
provides network access to users) and a RADIUS authentication server (a device that
contains authentication information for these users).

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

401

Note, the above example shows a Remote Access Policy on Windows 2003 Server IAS.
Policy_Class_Admin maps to the Security Group Employees in Active Directory (AD). Users in
AD, who are members of the Employees Security Group, will be dynamically assigned the
Administrator Policy Profile upon successful authentication by Windows 2003.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

402

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

403

Port Web Authentication (PWA) is an authentication process that uses a web browser user-login process to gain access to ports. It employs either CHAP (Challenge Handshake Authentication Protocol) or PAP (Password Authentication Protocol). It locks down the port to
which a user is attached, until after the user successfully logs in using a web browser to
access the switch. The switch passes all login information from the end station to a RADIUS
server for authentication before permitting access on the port.
If you use PAP, the password is unencrypted. With CHAP, the password is used to generate a digest by using a one-way hash function, and this digest is transmitted over the line. If the digest received by the RADIUS server matches the digest the RADIUS server generates with its locally configured password, the determination is made that the same keys produced the same digest, and access is granted.
Depending upon the authenticated state of the user, a login page or a logout page will be
displayed. When a user submits username and password, the switch then authenticates the
user via a preconfigured RADIUS server.
If authentication is accepted by the authentication server, a string representing a locally
configured policy role may be returned using the Filter-ID attribute configuration on the
RADIUS server. If this policy role is configured on the switch, the switch applies the
classification rules associated to this policy role in the facilitation of user-specific dynamic
policy assignment; similar to the 802.1X and MAC authentication processes.
PWA can be configured concurrently with 802.1X and MAC authentication with Multi-User Authentication.
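In the standard CHAP scheme (RFC 1994), the digest is an MD5 hash over the CHAP identifier, the shared password, and the challenge. A hypothetical sketch of the comparison the RADIUS server performs:

import hashlib
import os

def chap_response(identifier, password, challenge):
    """Standard CHAP: MD5(identifier || secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + password + challenge).digest()

# Server side: recompute the digest with the locally stored password and compare.
challenge = os.urandom(16)
received = chap_response(1, b"user-password", challenge)      # computed by the client
expected = chap_response(1, b"user-password", challenge)      # computed by the server
print(received == expected)   # True only when both sides used the same password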

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

404

MAC authentication enables administrators to allow access to the network based on a device's MAC address. Note that MAC authentication authenticates a device, not a user, because a MAC address is a representation of a device and not actually a user's identity.
MAC authentication is also termed MAC address authentication or MAC-based
authentication.
MAC authentication provides a mechanism for administrators to authenticate source MAC
addresses and grant appropriate access to end user devices directly attached to switch ports.
It is important to note that MAC authentication is vulnerable to MAC address spoofing attacks
and is not considered a secure authentication method.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

405

Convergence End Point (CEP) detection is a method to detect a remote IP telephony or video device and apply a policy
to the connection port based on the type of CEP device found. When a convergence end
point (CEP) is found, the global policy for CEP detection is applied to the user on that port.
The following phone detection types are available on S and K series.
Cisco Phone Detection - Uses the Cisco Discovery Protocol (CiscoDP) to detect IP phones.
When using Cisco phone detection, CiscoDP must be enabled and configured properly.
Siemens or Hipath Phone Detection - Uses either an IP address or a UDP / TCP port
number for detection. By default UDP port 4060 will be used and there is no IP address
configured. The commands in this section can be used to configure Siemens detection using
new parameters.
H.323 Phone Detection - Uses either a UDP / TCP port number with multicast group IP
address or a UDP / TCP port number for detection. Default UDP ports are 1718,1719,1720.
Default group address is 224.0.1.41. The commands in this section can be used to configure
H.323 detection using new parameters. A second default H.323 detection excludes the
default group address.
SIP Phone Detection - Uses either a UDP / TCP port number with multicast group IP address
or a UDP / TCP port number for detection. Default UDP / TCP port is 5060 and a multicast IP
of 224.0.1.75. A second default SIP detection excludes the default group address.
There is no way to detect if a Siemens, SIP or H.323 phone goes away other than a link
down. Therefore, if these types of phones are not directly connected to the switch's port and
the phone goes away, the switch will still think there is a phone connection and any
configured policy will remain on the port. Detected CEPs will be removed from the connection
table if they do not send traffic for a period of time equal to the etsysMultiAuthIdleTimeout
value. Additionally, CEPs will be removed if the total duration of their sessions exceeds the
time specified by etsysMultiAuthSessionTimeout.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

406

In this slide, the motivation for implementing multi-user authentication is described. In this
example shown above, a shared line is used to carry both voice and data traffic to the access
layer port on the infrastructure device. Let's assume the policy profile that would be assigned to the PC would be Sales, where the corresponding services preclude VoIP, and the policy profile that would be assigned to the IP Phone would be IP Phone, where the corresponding services are exclusively VoIP. Therefore, if either policy profile was assigned to all traffic entering this access layer port, then the proper network resources would not be provisioned to the connecting user/device, and the company's security policy could be easily
violated.
The primary problem that administrators are presented with is that, in many cases, users
attach to access-layer devices that do not support authentication and policy capabilities;
rather, these devices are simply basic workgroup switches (which may support 802.1Q, for
example) or even shared-access hubs. The challenge presented is to be able to authenticate
users individually who may be utilizing a single physical uplink port into the distribution layer
of the network.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

407

Enterasys addresses this issue by introducing Multi-User authentication.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

408

Additional Consideration
When deploying PWA or MAC authentication for one user per port or multiple users per port, always set the authentication mode of the S and K Series device to Multi-mode.
PWA and MAC authentication are operational only when the device is set for Multi-mode, irrespective of the number of authenticating users per port.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

409

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

410

802.1X or MAC authentication must be used for the IP phone


802.1X, PWA, or MAC authentication can be used for the PC
The authenticated VLAN is either:
- The VID specified in the policy profile's PVID Override, if it is enabled; OR
- The VID specified in the port's PVID

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

411

Using the RADIUS sub-tab at the device level, the RADIUS client can be configured for the
device. By clicking the Add button, a RADIUS server can be specified, for both authentication and accounting, for authenticating either users/devices attempting network access or users/devices attempting management access to the device itself.
The RADIUS Client Settings section contains global control for enabling/disabling the
RADIUS client for authentication and accounting.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

412

Three RADIUS Tunnel attributes (a sketch of their encoding follows this list):


Tunnel-Type attribute
Indicates the tunneling protocol to be used when this attribute is formatted in RADIUS
Access-Request messages, or the tunnel protocol in use when this attribute is formatted in
RADIUS Access-Accept messages
Type: Set to 64 for Tunnel-Type RADIUS attribute
Length: Set to 6 for six-byte length of this RADIUS attribute
Tag: Provides a means of grouping attributes in the same packet which refer to the
same tunnel.
Tunnel-Medium-Type attribute
Indicates the transport medium to use when creating a tunnel for the tunneling protocol
determined from Tunnel-Type attribute
Type: Set to 65 for Tunnel-Medium-Type RADIUS attribute
Length: Set to 6 for six-byte length of this RADIUS attribute
Tag: Provides a means of grouping attributes in the same packet which refer to the
same tunnel.
Tunnel-Private-Group-ID attribute
Indicates the group ID for a particular tunneled session.
Type: Set to 81 for Tunnel-Private-Group-ID RADIUS attribute
Length: Set to a value greater than or equal to 3.
Tag: Provides a means of grouping attributes in the same packet which refer to the
same tunnel.
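Per RFC 3580, dynamic VLAN assignment uses Tunnel-Type = VLAN (value 13), Tunnel-Medium-Type = IEEE-802 (value 6), and a Tunnel-Private-Group-ID carrying the VLAN ID. A hypothetical Python sketch of building and reading that attribute triplet (attribute values simplified to tuples):

def vlan_tunnel_attributes(vlan_id, tag=0):
    """RFC 3580 attribute triplet for assigning an authenticated user to a VLAN."""
    return {
        64: (tag, 13),              # Tunnel-Type: VLAN
        65: (tag, 6),               # Tunnel-Medium-Type: IEEE-802
        81: (tag, str(vlan_id)),    # Tunnel-Private-Group-ID: the VLAN ID as a string
    }

def vlan_from_attributes(attrs):
    """Extract the VLAN ID if the reply carries a consistent RFC 3580 triplet."""
    if attrs.get(64, (0, None))[1] == 13 and attrs.get(65, (0, None))[1] == 6:
        return int(attrs[81][1])
    return None

print(vlan_from_attributes(vlan_tunnel_attributes(100)))   # 100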

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

413

By default, all Enterasys policy capable devices will use the RADIUS Filter-ID attribute
returned from the RADIUS server to dynamically apply a policy profile to an authenticated
user/device upon successful authentication. However, by default, if RFC 3580 Tunnel attributes for dynamic VLAN assignment are returned to an Enterasys policy capable device
from a RADIUS server for a successfully authenticated end system, the VLAN will not be
dynamically updated to the value indicated in these attributes. For this to happen, the VLAN
authorization state of the device must be enabled globally and enabled per port. By default,
Enterasys policy capable devices are configured with their global VLAN authorization state as
disabled and their per port VLAN authorization state as enabled.
Using the Authentication tab in the right pane for a selected device in the left pane, the RFC
3580 VLAN Authorization section can be used to change the device level VLAN authorization
setting so that a switch receiving RFC 3580 VLAN Tunnel attributes during the authentication
of a end system dynamically alters the VLAN configuration of the port upon successful
authentication.
The per port VLAN authorization state can be configured by selecting the appropriate device
in the left pane, and double clicking on a specified port under the Details View tab in the right
pane. In the Port Properties window, the RFC3580 VLAN authorization tab under the
Authentication Configuration tab can be used to configure the per port VLAN authorization
state as well as the egress VLAN authorization setting for the port, defaulting to untagged.
Note that the Port Properties window will be used to configure all per port authentication
settings described in this section. The Port Properties window can be accessed by double
clicking on ports under the Details View tab in the right pane.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

414

In Policy Manager, the authentication method must first be selected at the device level. It is
recommended that the authentication method not be enabled at the device level, until all port
level authentication settings are properly set to avoid possible loss of connectivity to the
device. On Enterasys platforms, 802.1X authentication is by default disabled globally and enabled per port.
To enable 802.1X authentication globally, select 802.1X under Single User or Multi-User from
the Authentication sub-tab, and change the Authentication Status to Enabled, as shown
above. Use caution with enabling 802.1X authentication globally, because by default 802.1X
authentication is enabled per port. Therefore, all 802.1X port level configuration must be
completed prior to globally enabling 802.1X authentication to avoid lost communication with
the device.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

415

To enable 802.1X authentication per port, navigate to the port where 802.1X is to be
configured by selecting the device in the left pane, and double click the port in the right pane
under the Details View tab to bring up the Port Properties window as shown above. Then,
select Active for the Authentication Behavior on the port. This changes the port level
configuration of all authentication methods enabled at the device level, which is only 802.1X
in this case. Therefore, this port will be set to 802.1X Auto mode of operation if it is not
already by default. However, if the Disable 802.1X authentication for the port checkbox is
checked, the corresponding port will be set to Forced-authorized.
NOTE: Before configuring an authentication method at the port level, the authentication
method must be chosen at the device level.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

416

If more than one user is connected to a port where authentication is implemented, the infrastructure will not properly provision per-user/device network resources if one policy profile is applied to all traffic entering the port, sourced from various users/devices, based on
which device authenticated last. Multi-user authentication supports the authentication of
multiple users/devices, possibly implementing multiple authentication methods, per port, with
the assignment of multiple policy profiles per user/device on the same port.
To enable Multi-User Authentication globally for a set of authentication methods, select the
appropriate authentication methods under Multi-User from the Authentication subtab and
change the Authentication Status to Enabled. Also, set the relative authentication method
precedence. Furthermore, previously explained global parameters for all authentication
methods need to be configured such as the MAC authentication password and PWA
parameters.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

417

To configure multi-user authentication per port, navigate to the port where multi-user authentication is to be configured, open the Port Properties window, and select Active for the Authentication Behavior on the port in the General tab under the Authentication Configuration tab.
This changes the port level configuration of all authentication methods enabled at the device
level, which is 802.1X, PWA, and MAC authentication in this case.
To allow a subset of the authentication methods on the port, the Disable <Method>
Authentication for the port checkbox can be used.
Furthermore, the number of users to be authenticated per port can be limited by using the
Authenticated User Counts tab.
Additionally, the default role for the port can be configured in the Role Status tab under the
General tab of the Port Properties window.

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

418

2013EnterasysNetworks,Inc.Allrightsreserved EnterasysConfidential

419

Setting MultiAuth Authentication Port Mode: MultiAuth authentication supports the configuration of the MultiAuth port mode and maximum number of users per port properties. The MultiAuth port mode property can be configured as follows:
Authentication Optional - Authentication methods are active on the port based upon the
global and port authentication method. Before authentication succeeds, the current policy role
applied to the port is assigned to the ingress traffic. This is the default role if no authenticated
user or device exists on the port. After authentication succeeds, the user or device is allowed
to access the network according to the policy information returned from the authentication
server, in the form of the RADIUS Filter-ID attribute, or the static configuration on the switch.
This is the default setting.
Authentication Required - Authentication methods are active on the port, based on the global
and per port authentication method configured. Before authentication succeeds, no traffic is
forwarded onto the network. After authentication succeeds, the user or device gains access
to the network based upon the policy information returned by the authentication server in the
form of the RADIUS Filter-ID attribute, or the static configuration on the switch.
Force Authenticated The port is completely accessible by all users and devices connected
to the port, all authentication methods are inactive on the port, and all frames are forwarded
onto the network.
Force Unauthenticated The port is completely closed for access by all users and devices
connected to the port. All authentication methods are inactive and all frames are discarded.
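These four settings correspond to the MultiAuth port mode keywords on the switch CLI. The sketch below assumes the S-/K-Series set multiauth port mode command and uses ge.1.10 as an example port; only one mode is active on a port at a time, and the four commands are listed together only to show the mapping:
SSA Chassis(su)->set multiauth port mode auth-opt ge.1.10
SSA Chassis(su)->set multiauth port mode auth-reqd ge.1.10
SSA Chassis(su)->set multiauth port mode force-auth ge.1.10
SSA Chassis(su)->set multiauth port mode force-unauth ge.1.10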

To configure User + IP Phone authentication on the A4 & D2, the multi-user mode for the
device must be enabled under the Authentication tab and the appropriate authentication
method must then be selected.
To configure User + IP Phone authentication per port, navigate to the port where User + IP
Phone authentication is to be configured and select Active for the Authentication Behavior
on the port.
This changes the port-level configuration of all authentication methods enabled at the device level, which are 802.1X, PWA, and MAC authentication in this case. To allow a subset of the
authentication methods on the port, the Disable <Method> Authentication for the port
checkbox can be used. Additionally, the number of users to be authenticated per port must be
set to 2 under the Authenticated User Counts tab.

Furthermore, for User + IP Phone authentication, the administrator must statically configure the VLAN-to-policy mapping that assigns the IP phone's VLAN-tagged traffic to the policy role associated with the IP phone. Click on the Roles tab and select the policy role to be associated with the IP phone in the left pane. Then click on the Mappings tab in the right pane to alter the VLAN-to-policy mapping for the selected policy role. In the Mappings tab, use the Add button in the Tagged Packet VLAN to Role Mapping field to map the selected policy role to the particular VLAN to which the IP phone will be tagging its packets.
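On the switch itself this mapping lives in the policy maptable. The following is an illustrative sketch only; VLAN 20, policy profile index 3, and port ge.1.5 are made-up values, and the command syntax should be verified against the command reference:
SSA Chassis(su)->set multiauth port numusers 2 ge.1.5
SSA Chassis(su)->set policy maptable 20 3
The first command allows the PC and the IP phone to authenticate on the same port, and the second assigns traffic tagged with VLAN 20 by the phone to policy profile 3.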

Instead of configuring device-by-device or port-by-port, a group of devices or a group of ports can be configured with the same device-level or port-level settings at one time, avoiding the need to repeat identical configuration steps in Policy Manager. This is implemented through the Device Configuration Wizard and the Port Configuration Wizard, respectively. These wizards can be accessed from the Tools drop-down menu.

The concept of a flow is critical to understanding NetFlow. A flow is a stream of IP packets in which the values of a fixed set of IP packet fields are the same for each packet in the stream. A flow is identified by a set of key IP packet fields found in the flow. Each packet containing the same value for all key fields is considered part of the same flow, until flow expiration occurs. If a packet is seen whose key field values do not match any current flow, a new flow is started based upon the key field values of that packet. The NetFlow protocol tracks a flow until an expiration criterion has been met, up to a configured number of concurrent flows.

NetFlow is a flow-based data collection protocol that provides information about the packet
flows being sent over a network. NetFlow collects data by identifying unidirectional IP packet
flows between a single source IP address/port and a single destination IP address/port, using
the same Layer 3 protocol and values found in a fixed set of IP packet fields for each flow.
NetFlow collects identified flows and exports them to a NetFlow collector. Up to four NetFlow
collectors can be configured on an S-Series device. A NetFlow management application
retrieves the data from the collector for analysis and report generation.
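Adding collectors is simply a matter of repeating the export-destination command once per collector, up to the four-collector limit. The second address below is made up, and the command syntax is an assumption to be checked against the S-Series command reference:
SSA Chassis(su)->set netflow export-destination 10.160.110.35 2055
SSA Chassis(su)->set netflow export-destination 10.160.110.36 2055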

Standard system feedback is simply not granular enough to provide for such network
requirements as planning, user or application monitoring, security analysis, and data mining.
For example, because of its ability to identify and capture network flows, NetFlow:
Provides a means to profile all flows on your network over a period of time. A network profile
provides the granularity of insight into your network necessary for such secure network
functionality as establishing roles with policy and applying QoS to policy.
Provides a means of isolating the source of DoS attacks allowing you to quickly respond with
a policy, ACL, QoS change, or all of these to defeat the attack.
Can identify the cause of an intermittently sluggish network. Knowing the cause allows you to determine whether it is unexpected but legitimate network usage that might be rescheduled to low-usage time blocks, or an illegitimate usage of the network that can be addressed by speaking to the user.
Can look into the flows that transit the network links, providing a means of verifying whether
QoS and policy configurations are appropriately configured for your network.

To use NetFlow, enable NetFlow on all ports where packet flows aggregate. In the figure above, you will find an abbreviated sample of the independent flow records that are captured at each NetFlow-enabled port. These flow records are retained locally in a cache until a flow expiration criterion has been met. As shown, when one of the flow expiration criteria is
met, NetFlow export packets are then sent to the NetFlow collector (NetSight). The
management application will process the records and generate useful reports. These reports
provide you with a clear picture of the flows that traverse your network, based upon such data
points as source and destination address, start and end time, application, and packet priority.

The data captured for each flow is different, based on the NetFlow export version format
supported by the network device. This data can include such items as packet count, byte
count, destination interface index, start and end time, and next hop router.

Enable NetFlow on the K, N, or S-Series Switch with NMS Console: In NMS Console, go to "Tools" and then "NetFlow Sensor Configuration"; the NetFlow Sensor Configuration window will open.

In "Flow Sensor Configuration" mark your Switch in the right window pane, then click on
"Enable NetFlow. In the Set Port Range window, enable the desired ports for NetFlow
functionality. Then Launch NetFlow.
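The same NetFlow settings can also be applied directly from the switch CLI. The sketch below reflects the values seen in the verification output that follows (collector 10.160.110.35 on UDP port 2055, export version 9, a one-minute export interval, and port ge.1.13); the exact command syntax is quoted from memory and should be confirmed against the S-Series command reference:
SSA Chassis(su)->set netflow cache enable
SSA Chassis(su)->set netflow export-destination 10.160.110.35 2055
SSA Chassis(su)->set netflow export-version 9
SSA Chassis(su)->set netflow export-interval 1
SSA Chassis(su)->set netflow port ge.1.13 enable both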
From the switch CLI, issue the show netflow command to verify the NetFlow configuration:
SSA Chassis(su)->show netflow
Cache Status:                  enabled
Export Version:                9
Export Interval:               1 (min)
Number of Entries:             3145727
Inactive Timer:                40 (sec)
Template Refresh-rate:         30 (packets)
Template Timeout:              1 (min)

Enabled Optional Export Data:
-----------------------------
None

Destination IP       UDP Port
10.160.110.35        2055

Enabled Ports Both Ingress and Egress:
ge.1.13

The Dashboard is a historical collection of NetFlow data presented as bar charts that let you
quickly see top-level flow data for your network. The charts provide a summary view of the
top applications, clients, and servers, by bandwidth usage and flow count. The Dashboard
charts collect streams of NetFlow records, reduce the data to basic measures, and record the most significant results. By reducing the volume of data to the most significant results, the Dashboard can efficiently display months of NetFlow information with hourly granularity.

Aggregate Flows Tab


This table displays bidirectional flow data that is stored in memory. It provides aggregated
flow data for a given client, server, server port, and protocol. All matching flows are
aggregated to show the flow count, total duration, amount of data transmitted, and additional
information. The Aggregate Flows tab presents flow data for real-time troubleshooting
purposes, and is not designed for historical long-term flow collection.
By default, the table displays the latest flows collected. Use the Current Selection drop-down
menu to select different display options:
flows by flow count
applications by flow counts, packets, or bytes
clients by flow counts, packets, or bytes
servers by flow counts, packets, or bytes
most connected clients
most connected servers
Click on a link in the Flows column to open a Flow Details tab that displays the individual
flows that contributed to the aggregate flow.

Latest Flows Tab


This table displays unidirectional flow data stored in memory. It provides the raw, non-aggregated flow data received from the flow sensors on the network. It presents flow data for real-time troubleshooting purposes, and is not designed for historical long-term flow collection. The C/S column uses icons to identify Client Flows and Server Flows.

Create Policy Rule


Right-click on a flow in the Aggregate Flows table or Latest Flows table and select Create
Policy Rule to create a UDP or TCP rule using the IP port. In the Policy Manager domain that
you select, two services are created, each with its own rule: one that is server-based and one that is client-based.

The above example shows two services, NetFlow Collection Service Client Traffic and NetFlow Collection Service Server, and their corresponding rules for traffic to and from UDP port 54020.
These are simplified rules that have no associated action and are not added to any roles. You
must use Policy Manager to configure actions for the rules and assign them to the
appropriate role.
