• Management Access
− Gen-1 modules: 10/100BASE-T Ethernet port + RS-232 serial port
− Gen-2 modules: 10/100/1000BASE-T Ethernet port + RS-232 serial port
− Additional USB port on Gen-2 modules
Cisco MDS 9200
• The Cisco MDS 9200 Series includes the following
multilayer switches with multiprotocol capabilities:
• Cisco Fabric Switch for HP c-Class BladeSystem (24 ports; 16 internal 2/4 Gbps, and 8 full-
rate ports)
• Cisco Fabric Switch for IBM BladeCenter (20 ports; 14 internal 2/4 Gbps, and 6 external full-
rate ports)
These fixed-configuration switches are packaged in 1-RU enclosures and provide 1-Gbps, 2-Gbps,
4-Gbps, or 10-Gbps autosensing Fibre Channel ports. Besides Telnet access, a
10/100BASE-T Ethernet port provides switch access.
Cisco MDS Modules
Modules                                    9500 Series   9216i   9222i
48-port 8-Gbps                                  X
24-port 8-Gbps                                  X
4/44-port 8-Gbps Host Optimized                 X                  X
48-port 4-Gbps                                  X          X       X
24-port 4-Gbps                                  X          X       X
12-port 4-Gbps                                  X          X       X
4-port 10-Gbps                                  X          X       X
32-port 2-Gbps *                                X          X
18/4-port Multiservice Module (MSM-18/4)        X                  X
18/4-port Multiservice Module FIPS              X                  X
18-port 4-Gbps                                                     X
16-port 2-Gbps *                                X
* Gen-1 cards
9500 Supervisor Modules
Supervisor-1
− Switching bandwidth: 1.4 Tbps in 9509/9506 chassis
− 700 Gbps per Supervisor-1 module
Supervisor-2 / Supervisor-2A
− Switching bandwidth: 1.4 Tbps in 9509/9506 chassis (700 Gbps per Supervisor-2 module)
− 2.2 Tbps in 9513 chassis (1.1 Tbps per crossbar module)
Note: Supervisor-1 modules must be replaced with Supervisor-2 modules before upgrading to 4.1(1a) or
later.
Gen-1 line cards must be replaced with Gen-2 line cards before upgrading to 4.1(1a) or later.
4.1(1a) also does not support the MDS 9120, 9140, 9216, and 9216A switches.
9500 Series Architecture
Cisco MDS 9000 Fibre Channel
Switching Modules
Oversubscription (Bandwidth Allocation)
• Bandwidth allocation is a significant enhancement in the second-generation modules:
− It allows any port to perform like a line-rate interface.
− Used in conjunction with round-robin fairness and data-bursting capabilities,
bandwidth allocation provides the ability to fully manage end-device
performance.
− Bandwidth is allocated at the port level within a port group.
− A port group is a series of ports that share back-end bandwidth.
− Within a port group, a port's rate mode can be set to dedicated bandwidth or shared
bandwidth.
− Bandwidth allocation is independent of the configured speed of the interface.
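The allocation rules above can be sketched as a small sanity check: dedicated reservations within a port group may never exceed the group's back-end bandwidth, and whatever remains forms the shared pool. This is an illustrative sketch only; the group size and bandwidth assume a DS-X9124-style module (6 ports sharing 12 Gbps), and the port numbering mirrors the Gen-2 example later in this section.

```python
# Sketch: validate a port-group bandwidth plan for a Gen-2 module.
# Assumes a DS-X9124-style group: 6 ports sharing 12 Gbps of back-end
# bandwidth. The port dict below is illustrative, not read from a switch.

GROUP_BANDWIDTH_GBPS = 12.0

def plan_group(ports):
    """ports: dict of port number -> (speed_gbps, rate_mode).
    Returns the bandwidth left for shared-rate ports, or raises if the
    dedicated reservations exceed the group's back-end bandwidth."""
    dedicated = sum(speed for speed, mode in ports.values()
                    if mode == "dedicated")
    if dedicated > GROUP_BANDWIDTH_GBPS:
        raise ValueError("dedicated reservations exceed group bandwidth")
    return GROUP_BANDWIDTH_GBPS - dedicated

# Ports 1, 2, 5 and 6 get dedicated 2-Gbps reservations; ports 3 and 4
# share whatever is left (and may burst up to their configured speed).
ports = {
    1: (2.0, "dedicated"), 2: (2.0, "dedicated"),
    3: (4.0, "shared"),    4: (4.0, "shared"),
    5: (2.0, "dedicated"), 6: (2.0, "dedicated"),
}
shared_pool = plan_group(ports)   # 12 - 8 = 4 Gbps left for ports 3 and 4
```
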
Gen-1
− The 32-port module has a four-port port group. That is to say, four ports share the
back-end bandwidth to the backplane of the chassis.
Port Group Size per Module
Part Number Port Group Size Group Bandwidth
DS-X9032 4 2.5 Gbps
DS-X9124 6 12 Gbps
DS-X9148 12 12 Gbps
Oversubscription
• One characteristic that makes oversubscribed modules ideal for most data center
servers is their ability to respond to line-rate bursts of data
• Over subscribed Module
− First-generation 32-port module (DS-X9032)
− Second-generation DS-X9124 (24-port) and the DS-X9148 (48-port)
Part Number   1-Gbps FC   2-Gbps FC   4-Gbps FC
DS-X9032      1.6:1       3.2:1       -
DS-X9124      1:1         1:1         2:1
DS-X9148      1:1         2:1         4:1
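These ratios follow directly from the port-group figures given earlier: ratio = (ports per group × port speed) ÷ group back-end bandwidth. A minimal sketch, using the group sizes and bandwidths from the "Port Group Size per Module" table:

```python
# Sketch: derive the oversubscription ratios from the port-group data.

MODULES = {                  # part number: (group size, group bandwidth Gbps)
    "DS-X9032": (4, 2.5),
    "DS-X9124": (6, 12.0),
    "DS-X9148": (12, 12.0),
}

def oversubscription(part, port_speed_gbps):
    """Offered load of a full port group divided by its back-end bandwidth."""
    group_size, group_bw = MODULES[part]
    return (group_size * port_speed_gbps) / group_bw

# oversubscription("DS-X9032", 2) -> 3.2   (the 3.2:1 entry)
# oversubscription("DS-X9148", 4) -> 4.0   (the 4:1 entry)
```
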
Oversubscription (Gen-2 Module Example)
• In the figure, the Fibre Channel interface speeds can be configured to 1 Gbps, 2 Gbps, or
4 Gbps; the bandwidth dedicated to each port might be above or below the configured speed.
• With 12 Gbps of bandwidth available to the port group, the example explicitly reserves
the amount of bandwidth required for ports 1, 2, 5, and 6.
• This leaves the remaining 4 Gbps of bandwidth to be shared by ports 3 and 4 and
allows either port to burst to 4 Gbps.
Port-Group 2
Total bandwidth is 12.8 Gbps
Total shared bandwidth is 10.8 Gbps
Allocated dedicated bandwidth is 2.0 Gbps
--------------------------------------------------------------------
Interfaces in the Port-Group   B2B Credit   Bandwidth   Rate Mode
                               Buffers      (Gbps)
--------------------------------------------------------------------
fc2/19                         16           4.0         shared
fc2/20                         16           4.0         shared
fc2/21                         16           4.0         shared
fc2/22                         16           4.0         shared
fc2/23                         16           4.0         shared
fc2/24                         250          2.0         dedicated

Port-Group 2
Total bandwidth is 12.8 Gbps
Total shared bandwidth is 12.8 Gbps
Allocated dedicated bandwidth is 0.0 Gbps
--------------------------------------------------------------------
Interfaces in the Port-Group   B2B Credit   Bandwidth   Rate Mode
                               Buffers      (Gbps)
--------------------------------------------------------------------
fc2/22                         32           8.0         shared
fc2/23                         32           8.0         shared
fc2/24                         32           8.0         shared
SAN Technology Overview
• Fibre Channel Protocol
− FC Communications
− Port types, ISL
− Addressing, Framing, Timers
− Virtual SAN (VSAN), Zoning
− Port Channels, IOD
− Virtual Output Queuing (VOQ)
Buffer to Buffer Credit Flow Control
• BB_Credits are used to ensure enough FC frames are in flight.
• A full (2112-byte) FC frame is approximately 2 km long at 1 Gbps,
1 km long at 2 Gbps, and 0.5 km long at 4 Gbps.
• As distance increases, the number of available BB_Credits
needs to increase as well.
• Insufficient BB_Credits will throttle performance: no data will be
transmitted until R_RDY is returned.
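The rule of thumb above can be sketched numerically: a link stays full when the credits outstanding cover the round-trip time (frame out, R_RDY back) divided by the frame serialization time. This is a back-of-the-envelope sketch; it assumes full 2112-byte-payload frames (~2148 bytes with headers), ~2×10⁸ m/s propagation in fibre, and the nominal 100/200/400 MB/s data rates of 1/2/4-Gbps FC. Real traffic carries a mix of frame sizes, so treat the result as a lower bound, not an exact figure.

```python
import math

# Sketch: estimate BB_Credits needed to keep a long-distance FC link full.

DATA_RATE = {1: 100e6, 2: 200e6, 4: 400e6}   # bytes/sec per FC speed (Gbps)
PROPAGATION = 2e8                             # metres/sec in fibre (approx.)
FRAME_BYTES = 2148                            # 2112-byte payload + headers

def bb_credits_needed(distance_km, speed_gbps):
    frame_time = FRAME_BYTES / DATA_RATE[speed_gbps]    # serialization time
    round_trip = 2 * distance_km * 1000 / PROPAGATION   # frame out, R_RDY back
    return math.ceil(round_trip / frame_time)

# At 2 Gbps the answer comes out near one credit per km, and at 1 Gbps
# near one credit per 2 km, matching the frame-length rule above.
```
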
− Traffic management
• Preferential routing or resource allocation
− Fault isolation
• Consolidation while maintaining isolation
• Soft zoning
− Implemented in switch software and enforced by name
server
− Name server responds to discovery queries with only devices
found in requestor’s zone or zones
− “Soft zoning” used to be synonymous with “WWN zoning”
• Hard zoning
− Enforced by ACLs in port ASIC
− Applied to all data path traffic
− “Hard zoning” used to be synonymous with “port zoning”
• Domain ID Limitation
− The Fibre Channel standard allows for a total of 239 domain
IDs in a fabric; however, no fabric of that size has been
qualified
30 26 October 2018
N-Port Virtualization
N port virtualization (NPV) reduces the number of Fibre Channel domain IDs in
SANs.
Switches operating in NPV mode do not join a fabric. They pass traffic
between the NPV core switch links and the end devices, which eliminates the domain
IDs for these edge switches.
N-Port ID Virtualization
• NPIV allows a Fibre Channel host connection, or N-Port, to be assigned multiple N-Port IDs or Fibre
Channel IDs (FCIDs) over a single link.
• All FCIDs assigned can now be managed on a Fibre Channel fabric as unique entities on the same
physical host.
• A host bus adapter (HBA) that supports the NPIV feature follows the standard login process.
• The initial connection and login to the fabric is performed through the standard F-Port login (FLOGI)
process.
• All subsequent logins, for either virtual machines or logical partitions on a mainframe, are transformed
into FDISC login commands.
VSAN – Routed Connectivity - IVR
Data traffic is transported between specific initiators and targets on different VSANs without merging
the VSANs into a single logical fabric. Fibre Channel control traffic does not flow between VSANs, nor
can initiators access any resource across VSANs other than the designated ones. Valuable
resources such as tape libraries are easily shared across VSANs without compromise.
IVR is in compliance with Fibre Channel standards and can incorporate third-party switches; however,
IVR-enabled VSANs may have to be configured in one of the interop modes.
IVR Terminology
• Native VSAN
− The VSAN to which an end device logs on is the native VSAN for that end device.
• Inter-VSAN zone (IVR zone)
− A set of end devices that are allowed to communicate across VSANs within their interconnected SAN fabric. This definition is
based on their port world wide names (pWWNs) and their native VSAN associations. You can configure up to 2,000 IVR zones
and 10,000 IVR zone members in the fabric from any switch in the Cisco MDS 9000 Family.
• Inter-VSAN zone sets (IVR zone sets)
− One or more IVR zones make up an IVR zone set. You can configure up to 32 IVR zone sets on any switch in the Cisco MDS
9000 Family. Only one IVR zone set can be active at any time.
• IVR path
− An IVR path is a set of switches and Inter-Switch Links (ISLs) through which a frame from one end-device in one VSAN can
reach another end-device in some other VSAN. Multiple paths can exist between two such end-devices.
• IVR-enabled switch
− A switch in which the IVR feature is enabled.
• Edge VSAN
− A VSAN that initiates (source edge-VSAN) or terminates (destination edge-VSAN) an IVR path. Edge VSANs may be adjacent
to each other or they may be connected by one or more transit VSANs
• Border switch
− An IVR-enabled switch that is a member of two or more VSANs. In the figure, border switches span two or more different
color-coded VSANs.
• Transit VSAN
− A VSAN that exists along an IVR path from the source edge VSAN of that path to the destination edge VSAN of that path.
• Autonomous fabric identifier (AFID)
− Allows you to configure more than one VSAN in the network with the same VSAN ID and avoid downtime when enabling IVR
between fabrics that contain VSANs with the same ID.
• IVR VSAN Topology
• IVR uses a configured IVR VSAN topology to determine how to route
traffic between the initiator and the target across the fabric. You can
configure this IVR VSAN topology manually on an IVR-enabled switch
and distribute the configuration using CFS in Cisco MDS SAN-OS
Release 2.0(1b) or later.
• In Cisco MDS SAN-OS Release 2.1(1a) or later, you can configure the IVR
topology in auto mode. Prior to Cisco MDS SAN-OS Release 2.0(1b), you needed
to manually copy the IVR VSAN topology to each switch in the fabric.
• Autonomous Fabric ID
• The autonomous fabric ID (AFID) distinguishes segmented VSANs
(that is, two VSANs that are logically and physically separate but have
the same VSAN number). Cisco MDS SAN-OS supports AFIDs from
1 through 64. AFIDs are used in conjunction with auto mode to allow
segmented VSANs in the IVR VSAN topology database. You can
configure up to 64 AFIDs.
• Transit VSAN Guidelines
− Consider the following guidelines for transit VSANs:
− Besides defining the IVR zone membership, you can choose to specify a set of transit VSANs to
provide connectivity between two edge VSANs:
− If two edge VSANs in an IVR zone overlap, then a transit VSAN is not required (though, not prohibited)
to provide connectivity.
− If two edge VSANs in an IVR zone do not overlap, you may need one or more transit VSANs to
provide connectivity. Two edge VSANs in an IVR zone will not overlap if IVR is not enabled on a switch
that is a member of both the source and destination edge VSANs.
− Traffic between the edge VSANs traverses only the shortest IVR path.
− Transit VSAN information is common to all IVR zone sets. Sometimes, a transit VSAN can also act as
an edge VSAN in another IVR zone.
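The shortest-path guideline can be sketched as a breadth-first search over VSAN adjacency, where two VSANs count as adjacent when some IVR-enabled border switch is a member of both. The topology below is made up purely for illustration; it is not a description of how IVR computes paths internally.

```python
from collections import deque

# Sketch: find the shortest chain of VSANs (edge -> transit -> edge)
# for an IVR path, modelling VSANs as nodes in a graph.

def shortest_ivr_path(adjacency, src_vsan, dst_vsan):
    """Breadth-first search; returns the list of VSANs on the shortest
    path from src_vsan to dst_vsan, or None if they are not connected."""
    queue = deque([[src_vsan]])
    seen = {src_vsan}
    while queue:
        path = queue.popleft()
        if path[-1] == dst_vsan:
            return path
        for nxt in adjacency.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Hypothetical fabric: edge VSANs 10 and 30 joined via transit VSAN 20.
adjacency = {10: [20], 20: [10, 30], 30: [20]}
# shortest_ivr_path(adjacency, 10, 30) -> [10, 20, 30]
```
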
feature ivr
ivr distribute
ivr commit
FCIP vs. iFCP
• FCIP is a tunneling protocol that moves Fibre Channel
traffic over an IP network. It is mostly used for remote
connections between two Fibre Channel SANs over a
TCP/IP network.
• iFCP is a gateway-to-gateway protocol that maps Fibre
Channel addresses to IP addresses, so FC devices can
communicate across a routed IP network rather than
through a point-to-point tunnel.
fcdomain
About fcdomain Phases
• Principal switch selection—This phase guarantees the selection of a unique principal switch across
the fabric.
• Domain ID distribution—This phase guarantees each switch in the fabric obtains a unique domain
ID.
• FC ID allocation—This phase guarantees a unique FC ID assignment to each device attached to
the corresponding switch in the fabric.
• Fabric reconfiguration—This phase guarantees a resynchronization of all switches in the fabric to
ensure they simultaneously restart a new principal switch selection phase.
• The behavior of a subordinate switch depends on the allowed domain ID lists, the configured
domain ID, and the domain ID assigned by the principal switch.
• When the received domain ID is not within the allowed list, the requested domain ID becomes the
runtime domain ID and all interfaces are isolated.
• When the assigned and requested domain IDs are the same, the preferred and static options are
not relevant, and the assigned domain ID becomes the runtime domain ID.
• When the assigned and requested domain IDs are different, the following cases apply:
− If the configured type is static, the assigned domain ID is discarded, all local interfaces are
isolated, and the local switch assigns itself the configured domain ID, which becomes the runtime
domain ID.
− If the configured type is preferred, the local switch accepts the domain ID assigned by the principal
switch, and the assigned domain ID becomes the runtime domain ID.
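The decision rules above can be condensed into a short function. This is a sketch of the documented behaviour, not switch code; the return value pairs the resulting runtime domain ID with whether local interfaces get isolated.

```python
# Sketch: subordinate-switch domain ID resolution, per the rules above.

def resolve_domain_id(requested, assigned, cfg_type, allowed_ids):
    """Return (runtime_domain_id, interfaces_isolated)."""
    if assigned not in allowed_ids:
        # Assigned ID outside the allowed list: requested ID becomes
        # runtime, and all interfaces are isolated.
        return requested, True
    if assigned == requested:
        # Same ID: static vs. preferred is irrelevant.
        return assigned, False
    if cfg_type == "static":
        # Discard the assignment, keep the configured ID, isolate.
        return requested, True
    # "preferred": accept the principal switch's assignment.
    return assigned, False

# resolve_domain_id(0x63, 0x21, "preferred", range(1, 240)) -> (0x21, False)
# resolve_domain_id(0x63, 0x21, "static",    range(1, 240)) -> (0x63, True)
```
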
Output of “show fcdomain”
• VSAN 101
The local switch is a Subordinated Switch.
Local switch run time information:
State: Stable
Local switch WWN: 20:65:00:05:73:ac:22:c1
Running fabric name: 20:65:00:05:73:ac:22:81
Running priority: 99 ( Default priority of 128 )
Current domain ID: 0x63(99)
• ( all fcid assignments on this switch will start with 0x63)
Local switch configuration information:
State: Enabled
FCID persistence: Enabled
Auto-reconfiguration: Disabled
• ( the domain will not automatically start a fabric reconfiguration in case of principal switch failure )
Contiguous-allocation: Disabled
Configured fabric name: 20:01:00:05:30:00:28:df
Optimize Mode: Disabled
Configured priority: 99
• ( manually assigned priority, 1 – 254 range; 1 is the highest )
Configured domain ID: 0x63(99) (static)
• ( preferred / static; preferred will lead to a new runtime domain ID being assigned when an overlap is noted.
For a static domain ID, the switch will be segmented in case of overlap )
Principal switch run time information:
Running priority: 2
Interface Role RCF-reject
---------------- ------------- ------------
port-channel 1 Upstream Disabled
---------------- ------------- ------------
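Since the runtime domain ID above is 0x63, every FCID this switch assigns begins with 0x63. A minimal sketch of how the 24-bit FCID decomposes into its domain, area, and port bytes (the example FCID is hypothetical):

```python
# Sketch: split a 24-bit Fibre Channel ID into domain, area and port bytes.

def parse_fcid(fcid):
    return (fcid >> 16) & 0xFF, (fcid >> 8) & 0xFF, fcid & 0xFF

domain, area, port = parse_fcid(0x635981)   # hypothetical FCID
# domain == 0x63: assigned by the switch whose runtime domain ID is 0x63
```
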
Output of “sh fcdomain domain-list”
• VSAN 101
Number of domains: 6
Domain ID WWN
--------- -----------------------
0x03(3) 20:65:00:05:73:ac:22:81 [Principal]
0x21(33) 20:65:00:05:73:ac:b7:81
0x1f(31) 20:65:00:0d:ec:3b:cc:41
0x63(99) 20:65:00:05:73:ac:22:c1 [Local]
0x5d(93) 20:65:00:0d:ec:3c:1c:81
0x5f(95) 20:65:00:0d:ec:b6:c4:41
Depending on the VSANs active on the switches, the principal switch ID can differ. In this example,
VSAN 101 is local while VSAN 1001 spans sites, so the principal switch for the two VSANs is different.
• VSAN 1001
Number of domains: 11
Domain ID WWN
--------- -----------------------
0x01(1) 23:e9:00:0d:ec:3b:ca:41 [Principal]
0x1b(27) 23:e9:00:0d:ec:3b:cc:81
0x1d(29) 23:e9:00:0d:ec:3b:c1:c1
0x1f(31) 23:e9:00:0d:ec:3b:cc:41
0x03(3) 23:e9:00:05:73:ac:22:81
0x5d(93) 23:e9:00:0d:ec:3c:1c:81
0x5f(95) 23:e9:00:0d:ec:b6:c4:41
0x21(33) 23:e9:00:05:73:ac:b7:81
0x5b(91) 23:e9:00:0d:ec:3b:ca:c1
0x61(97) 23:e9:00:0d:ec:b7:43:01
0x63(99) 23:e9:00:05:73:ac:22:c1 [Local]
General issues on MDS switches
• Slow-drain device - a device that cannot cope with the incoming traffic in a timely manner.
Slow-drain devices cannot free up their internal frame buffers and therefore do not allow the
connected port to regain its buffer credits quickly enough.
In NX-OS 4.2(7a) the slow-drain policy for the port monitor application is enabled by default.
SNMP traps can be used to alert admins/vendors about slow-drain devices.
On older versions, slow-drain devices can be detected manually using the commands below:
show hardware internal packet-flow dropped ( points to the affected module )
show logging onboard module x timeout-drops ( isolates the device down to the
specific port )
• Congestion - a situation where the workload for a link exceeds its actual usable bandwidth.
Congestion happens due to overutilization or oversubscription. ( Check ISL
utilization, using Cisco Performance Manager or the web client, to verify whether this is an issue. )
• Bottleneck - a link or component that is not able to transport all frames directed to or
through it in a timely manner ( e.g. because of buffer-credit starvation or congestion ).
• Link issues due to faulty hardware. A link can automatically be brought down if the error
count on that port is deemed too high.
Understanding the details of "show port" output
General commands used to check port properties:
sh interface fc x/y
sh interface fc x/y transceiver
sh run interface fc x/y

fc2/45 is up
Port description is REUXEUUS507_6
Hardware is Fibre Channel, SFP is short wave laser w/o OFC (SN)
Port WWN is 20:6d:00:0d:ec:3c:3a:80
Admin port mode is FX, trunk mode is off
snmp link state traps are enabled
Port mode is F, FCID is 0x5e5981
Port vsan is 102
Speed is 4 Gbps
Rate mode is shared
Transmit B2B Credit is 16
Receive B2B Credit is 16
Receive data field Size is 2112
Beacon is turned off
5 minutes input rate 30273160 bits/sec, 3784145 bytes/sec, 4450 frames/sec
5 minutes output rate 321758744 bits/sec, 40219843 bytes/sec, 22157 frames/sec
34631516316 frames input, 34268810545088 bytes
0 discards, 0 errors
0 CRC, 0 unknown class
0 too long, 0 too short
149240682528 frames output, 269415777626372 bytes
0 discards, 0 errors
2 input OLS, 2 LRR, 0 NOS, 2 loop inits
3 output OLS, 0 LRR, 1 NOS, 2 loop inits
16 receive B2B credit remaining
14 transmit B2B credit remaining
12 low priority transmit B2B credit remaining
Interface last changed at Thu Dec 29 20:36:02 2011
Port channels
• Provide a point-to-point connection over an ISL (E ports) or EISL (TE ports). Multiple
links can be combined into a PortChannel.
• Increase the aggregate bandwidth on an ISL by distributing traffic among all functional
links in the channel.
• Load balance across multiple links and maintain optimum bandwidth utilization. Load
balancing is based on the source ID, destination ID, and exchange ID (OX_ID).
• PortChannels may contain up to 16 physical links and may span multiple modules for
added high availability.
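Exchange-based load balancing can be sketched as hashing the (source ID, destination ID, OX_ID) triple to pick one member link: every frame of an exchange then takes the same link (preserving in-order delivery within the exchange), while different exchanges spread across all members. The hash choice here (CRC32) is purely illustrative; the switch ASIC uses its own internal function.

```python
import zlib

# Sketch: pick a PortChannel member link per FC exchange.

def pick_link(sid, did, oxid, num_links):
    """Hash (source ID, destination ID, OX_ID) onto one of num_links."""
    key = f"{sid:06x}:{did:06x}:{oxid:04x}".encode()
    return zlib.crc32(key) % num_links

# All frames of one exchange map to the same link; IDs are hypothetical.
link = pick_link(0x635981, 0x030200, 0x1234, num_links=4)
```
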
• Performance Manager
− Provides detailed traffic analysis by capturing data with SNMP. This data is
compiled into various graphs and charts that can be viewed with any web browser
using Fabric Manager Web Services.
Features
• Logging level
• User security: roles, users
• Switch management IP security: SNMP, users