then all VLANs need to converge. Both Catalyst switches have two Supervisors, so each Root is more stable. Both Catalysts will act as gateways for the routed VLANs (management and production). The DMZ VLAN is routed by a firewall (not shown in the diagram). The vMotion and NFS VLANs are isolated. A first-hop redundancy protocol (such as HSRP) can be configured between both Catalysts: the primary gateway should follow the STP Root.
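
As a minimal sketch of this design (the VLAN number and IP addresses below are invented for illustration), the HSRP priorities can be aligned with the STP Root so that the active gateway and the Root bridge sit on the same Catalyst:

```
! CatalystA: STP Root and active HSRP gateway for VLAN 10 (hypothetical)
spanning-tree mode rapid-pvst
spanning-tree vlan 10 root primary
!
interface Vlan10
 ip address 10.0.10.2 255.255.255.0
 standby 10 ip 10.0.10.1
 standby 10 priority 110
 standby 10 preempt
!
! CatalystB: secondary Root and standby HSRP gateway
spanning-tree mode rapid-pvst
spanning-tree vlan 10 root secondary
!
interface Vlan10
 ip address 10.0.10.3 255.255.255.0
 standby 10 ip 10.0.10.1
 standby 10 priority 90
 standby 10 preempt
```

With `preempt` configured, the gateway role returns to CatalystA after it recovers, keeping the forwarding path and the STP Root aligned.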
### Switching Path

Each server has a teaming (Windows) or bonding (Linux) mechanism to overcome a single path failure: one path becomes active while the other remains in standby. Depending on the driver, an active path can be configured as preferred. In this scenario no preferred path is assumed, so the worst case scenario will be analyzed.
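
On Linux, for example, such a setup can be sketched with an active-backup bond (the interface names are hypothetical, and no primary slave is set, matching the no-preferred-path assumption above):

```
# Create an active-backup bond: one NIC active, the other standby
ip link add bond0 type bond mode active-backup miimon 100
# Slaves must be down before being enslaved
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
ip link set eth0 up
ip link set eth1 up
```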
Each NetApp storage can be configured with an active-standby path mechanism or with an LACP port-channel. The path between each pair of hosts should be analyzed (see the NX-OS sketch after this list):

- From ServerA to ServerC: the active path of both servers is attached to the same Fabric Extender (Nexus 2000), which is not a switching device. All frames must be forwarded to the EdgeA switch and then back to the destination. The uplink path of the Fabric Extender devices can be a bottleneck.
- From ServerA to ServerB: the active paths of the two servers are attached to different Fabric Extenders, which forward frames to the upper Edge switches. Analyzing the STP topology, the link between the Edge switches should be in Alternate state (Rapid PVST+ is used): frames should flow to the Root Catalyst switch, then back to the destination Nexus 5K. However, because the vPC peer link is always a non-blocking link (a vPC domain uses other mechanisms to ensure a loop-free topology), frames flow through the vPC peer link, using the shortest path. The vPC peer link can also be a bottleneck.
- Between NetApp storages: traffic between the NetApps depends on the interface configuration (active-standby, or active-active using LACP). As discussed in the previous point, the shortest path will be used. Because storage traffic can heavily impact the network, the NetApps should be directly attached to the Nexus 5K switches rather than to Fabric Extenders; otherwise the N2K-N5K links will be overused (N2Ks are non-switching devices).
- From a server to a NetApp storage: an active-standby NetApp configuration follows the ServerA-ServerB example. An LACP NetApp configuration behaves in a different way: the storage presents its MAC address to both N5K switches, so ServerA, using its active path, finds the destination storage on the nearest N5K switch. The NetApp LACP configuration allows servers to always reach it using the shortest path. On the other side, an active-standby configuration allows moving cables to other switches without a Takeover/Giveback action.
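
The following NX-OS sketch summarizes the relevant Edge switch pieces discussed above: the vPC domain with its non-blocking peer link, a Fabric Extender attachment, and a vPC port-channel towards an LACP-configured NetApp. All interface numbers, addresses and IDs are invented for illustration:

```
! EdgeA (Nexus 5K) - hypothetical numbering
feature vpc
feature lacp
feature fex

vpc domain 1
  peer-keepalive destination 192.168.1.2 source 192.168.1.1

! vPC peer link towards EdgeB: never blocked by STP
interface port-channel1
  switchport mode trunk
  vpc peer-link
interface Ethernet1/17-18
  switchport mode trunk
  channel-group 1 mode active

! Fabric Extender attachment: N2K ports are not switched locally
fex 100
  pinning max-links 1
interface Ethernet1/1
  switchport mode fex-fabric
  fex associate 100

! NetApp attached via LACP: a vPC port-channel spanning both N5Ks,
! so the storage MAC address is seen on both Edge switches
interface port-channel20
  switchport mode trunk
  vpc 20
interface Ethernet1/10
  switchport mode trunk
  channel-group 20 mode active
```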
N2K switches are not switches; they are just extensions (Fabric Extenders) of the N5K/N7K switches, with important features/limitations:

- they are non-switching devices;
- they provide end-host connectivity only.

No switch can be attached to an N2K device: each N2K has an implicit BPDU Guard configured. If a BPDU is received, the port is immediately placed in an error-disabled state, which keeps the link down.
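
If automatic recovery from that error-disabled state is desired, something along these lines can be configured on the parent Nexus (a sketch; verify the exact syntax on your NX-OS release):

```
! Recover ports err-disabled by BPDU Guard after 5 minutes
errdisable recovery cause bpduguard
errdisable recovery interval 300
```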
## The additional access layer: HP Virtual Connects
HP Virtual Connects (VCs) are hybrid devices and can be compared to Cisco UCS 61xx devices: VCs virtualize server information just as UCS does. This is not a VC versus UCS 61xx discussion; the comparison is made only to point out what kind of devices VCs are. Each HP blade is attached through internal hardware connections (VC downlinks) to both VCs located in the same chassis: each VC defines WWNs and MAC addresses for servers using profiles, and each profile can be moved/cloned to other servers. A bunch of trunk links from the upper Nexus layer brings all VLANs to the VCs (VC uplinks). Each chassis must include a couple of VCs, which communicate through internal 10GbE cross-connect links; VCs on different chassis are connected using external fibers and form one VC domain. Within the same VC domain, traffic is forwarded internally. In other words, traffic between blade servers is switched:

- within a VC, if both blades are using links to the same VC;
- within the VC domain, using the loop connection, if the blades are using links to different VCs.

A "Shared uplink set" is a group of VC uplinks; in "Auto" mode the Shared uplink set negotiates an LACP port-channel. Each blade can use one or more Shared uplink sets to transmit and receive data. From the OS perspective, a single Shared uplink set is a network interface. An LACP port-channel can be configured within the same VC only.
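
Since LACP can only be negotiated within a single VC module, the uplinks of each VC module terminate in their own port-channel on the Nexus side. A hedged sketch of the N5K ports facing one VC module in "Auto" mode (interface numbers and VLAN IDs are invented):

```
! N5K ports facing the uplinks of one VC module
interface port-channel30
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
interface Ethernet1/20-21
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  channel-group 30 mode active
```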
### Smart Link

With the Smart Link feature, each Shared uplink set brings the blade network connections down if there are no more active VC uplinks in the same Shared uplink set. See the following example:
