Task
Provision vPC Host Mode (vPC-HM) on the N1Kv so that it returns adapters to the same groups
and redundant states they were in when they were running on the standard vSwitches (vmnic0
and vmnic1 together for VM-Sys-Uplink, vmnic3 and vmnic4 together for VM-Guests for each
ESXi VEM).
Migrate the remaining physical adapters off of ESXi standard vSwitches and onto the N1Kv vDS.
Configuration
On N1Kv, let's look at the existing hashing algorithm for load balancing traffic. We can see that
there are 17 options for hashing traffic up over the uplinks, ranging from very broad to very
granular.
dest-ip-port
dest-ip-port-vlan
destination-ip-vlan
destination-mac
destination-port
source-dest-ip-port
source-dest-ip-port-vlan
source-dest-ip-vlan
source-dest-mac
source-dest-port
source-ip-port
source-ip-port-vlan
source-ip-vlan
source-mac
source-port
source-virtual-port-id
vlan-only
But if we look at the default in the config, we see that it is a very basic method: all traffic
coming from one source MAC address is pinned to one of the uplinks.
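As a sketch, we can confirm this default from the VSM (the exact output formatting may differ slightly by N1Kv release; this was not captured from the lab):

```
N1Kv-01# show port-channel load-balance
Port Channel Load-Balancing Configuration:
System: source-mac
```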
Does this mean that any one VM can only send traffic up one link to the upstream switch? In
fact, it does. Let's recall our topology and take some time to consider this very important topic in
N1Kv.
Remember that we are running Nexus 1000v on top of UCS, and that each of our uplinks (in this
scenario, at least) travels north from each blade to a different IOM and therefore to a different
FI. These FIs have completely separate forwarding tables, MAC address learning, etc., and each
appears to the upstream switches as an independent host. It follows that there is no possible
way to form port channels from the two separate upstream switches down to the two separate
FIs.
If that is the case, how can we possibly do a port channel here? This is the magic behind vPC
Host Mode, or vPC-HM. (By the way, this really has nothing to do with the vPC that runs on the
N5K/N7Ks at all - it's more of a marketing term.) The magic is in the load-balancing hash.
Because we only ever pin traffic from one VM (based on source MAC) to one of the two uplinks,
and that MAC therefore only ever appears behind one of the two switches upstream from the
corresponding FI, the upstream switches need no knowledge that what is happening to the
south is some (strange) form of a port channel. There is no LACP; channel mode "on" simply
tells the N1Kv that there is a port channel, that it should be considered up, and that traffic can
be forwarded over it - but in this case, only based on source MAC. Also note that if one link
goes down, traffic fails over to the other link in the port channel: the MAC address simply
disappears from one FI (and therefore its corresponding upstream switch) and appears on the
other FI (where it is learned by the other upstream switch).
In that case, you might ask: why are all of the other load-balancing hashes even here? We
happen to be running N1Kv on a UCS B-Series architecture in this particular scenario, but
nothing prevents us from also running it on standard pizza-box rack-mount servers, where two
NICs really are in the same chassis and connect to the same upstream switch (or even to a
proper pair of VSS or vPC switches).
Let's go ahead and turn on the vPC-HM type of port channel. Note that all port channels in N1Kv
use the auto command to automatically assign PC numbers.
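In the Ethernet uplink port-profiles, that amounts to something like the following sketch. The profile name comes from this lab, but verify the exact channel-group options against your N1Kv release; the mac-pinning keyword shown here is the usual way to get source-MAC-based pinning with one sub-group per vmnic, which matches the SGID-per-vmnic behavior we see later:

```
N1Kv-01(config)# port-profile type ethernet VM-Sys-Uplink
N1Kv-01(config-port-prof)# channel-group auto mode on mac-pinning
```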
Verification
Watch as the port channel comes up.
So why did it create two port channels? Because we have two ESXi hosts, and therefore two
VEMs, and there could never be a port channel that spanned multiple ESXi hosts - because if
you think about it, there is only one physical/electrical way out of each ESXi server, and that is
with the physical NIC(s)/physical KR traces that go from each blade to each IOM/FI.
Let's do the same for the VM-Guests port profile (note that we do not need this for vMotion,
because it has only one physical NIC per ESXi host).
Verification
2013 Feb 20 21:44:25 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_CHANNEL_ID_ASSIGNED: Assigning port channel number 3 for member ports Ethernet3/4
2013 Feb 20 21:44:25 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet3/4 is added to port-channel3
2013 Feb 20 21:44:25 N1Kv-01 %ETH_PORT_CHANNEL-5-CREATED: port-channel3 created
2013 Feb 20 21:44:25 N1Kv-01 %ETHPORT-5-IF_DOWN_CHANNEL_MEMBERSHIP_UPDATE_IN_PROGRESS: Interface Ethernet3/4 is down (Channel membership update in progress)
2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet3/4 is added to port-channel3
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet3/4, operational speed changed to 20 Gbps
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet3/4, operational duplex mode changed to Full
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet3/4, operational Receive Flow Control state changed to on
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet3/4, operational Transmit Flow Control state changed to on
2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_CHANNEL_ID_ASSIGNED: Assigning port channel number 4 for member ports Ethernet4/4
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-SPEED: Interface port-channel3, operational speed changed to 20 Gbps
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface port-channel3, operational duplex mode changed to Full
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface port-channel3, operational Receive Flow Control state changed to on
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface port-channel3, operational Transmit Flow Control state changed to on
2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet4/4 is added to port-channel4
2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-CREATED: port-channel4 created
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_DOWN_CHANNEL_MEMBERSHIP_UPDATE_IN_PROGRESS: Interface Ethernet4/4 is down (Channel membership update in progress)
2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel3: Ethernet3/4 is up
2013 Feb 20 21:44:26 N1Kv-01 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel3: first operational port changed from none to Ethernet3/4
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet3/4 is up in mode trunk
2013 Feb 20 21:44:26 N1Kv-01 %ETHPORT-5-IF_UP: Interface port-channel3 is up in mode trunk
2013 Feb 20 21:44:27 N1Kv-01 %ETH_PORT_CHANNEL-5-PCM_MEMBERSHIP_CHANGE_ADD: Interface Ethernet4/4 is added to port-channel4
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-SPEED: Interface Ethernet4/4, operational speed changed to 20 Gbps
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface Ethernet4/4, operational duplex mode changed to Full
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet4/4, operational Receive Flow Control state changed to on
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet4/4, operational Transmit Flow Control state changed to on
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-SPEED: Interface port-channel4, operational speed changed to 20 Gbps
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_DUPLEX: Interface port-channel4, operational duplex mode changed to Full
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface port-channel4, operational Receive Flow Control state changed to on
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface port-channel4, operational Transmit Flow Control state changed to on
2013 Feb 20 21:44:27 N1Kv-01 %ETH_PORT_CHANNEL-5-PORT_UP: port-channel4: Ethernet4/4 is up
2013 Feb 20 21:44:27 N1Kv-01 %ETH_PORT_CHANNEL-5-FOP_CHANGED: port-channel4: first operational port changed from none to Ethernet4/4
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_UP: Interface Ethernet4/4 is up in mode trunk
2013 Feb 20 21:44:27 N1Kv-01 %ETHPORT-5-IF_UP: Interface port-channel4 is up in mode trunk
N1Kv-01(config-port-prof)#
Now that we have our port channels created, let's go back to vCenter and add a second physical
NIC to each Ethernet port profile in N1Kv.
Configuration
Navigate to ESXi1, click the Configuration tab, click Networking, select vSphere
Distributed Switch, and click Manage Physical Adapters.
Click Yes.
Click Yes.
Click OK.
Verification
Note that in ESXi1's vDS view, we now see two uplink adapters for each port profile, and all ports
show green icons indicating they are connected.
Back in N1Kv, note the second Ethernet interface being added to each port channel.
Perform the same tasks back in vCenter for host ESXi2, and notice the output from the switch.
We can also see the result of these added port channels in a simple show run:
interface port-channel1
inherit port-profile VM-Sys-Uplink
vem 3
interface port-channel2
inherit port-profile VM-Sys-Uplink
vem 4
interface port-channel3
inherit port-profile VM-Guests
vem 3
interface port-channel4
inherit port-profile VM-Guests
vem 4
At this point, both local ESXi vSwitches are completely barren of both guests and physical NIC
adapters, and all networking has been transferred over to the N1Kv switch.
Now it's time to see how the N1Kv has chosen to pin the VMs up to uplink Ethernet interfaces.
We'll do this by looking at the output of two main commands in N1Kv, taking note of a few key
fields in each output, and then bringing them together to give us the whole picture. The fields to
look for are called LTL and SGID. LTL means Local Target Logic, and SGID is the Sub-Group
ID, because each virtual port channel is divided further into sub-groups. The two commands are
really just show port and show pinning; however, because this is a virtual switch and we
are on the virtual supervisor, we need to tell it specifically on what remote linecard (VEM) we
want to run the command, and to do that, we must prefix the show commands a bit, as we see
here.
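From the VSM, that prefix is module vem <module> execute, so the two commands look roughly like this (module 3 being ESXi1's VEM in this lab):

```
N1Kv-01# module vem 3 execute vemcmd show port
N1Kv-01# module vem 3 execute vemcmd show pinning
```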
Note that we can run this directly from the CLI of an ESXi host after we SSH into it. If we wanted
to do that, we would simply omit the prefix, and run the commands as such.
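For example, from an SSH session on the ESXi host itself (the prompt shown is illustrative):

```
~ # vemcmd show port
~ # vemcmd show pinning
```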
There are many other show commands that we can run from vemcmd within an ESXi host or
from the VSM if we prefix the module X execute command to it. There are also some set
commands that can be quite useful, but they should be applied with care because they can
quickly take a VEM offline from its VSM (but that's what a lab is for, right?).
Before we look at their output, let's review the terminology:
LTL = Local Target Logic
PC-LTL = Port Channel Local Target Logic
SGID = Sub-Group ID
Eff_SGID = Effective Sub-Group ID
We are going to look at the PC-LTL in the show port, then look at the Eff_SGIDs in the
show pinning, and then look at them with respect to their PC-LTL values.
To clarify this, let's isolate two VM guests that share the same uplink for comparison. We will
contrast "Win2k8-www-2" with "N1Kv-01-VSM-1" (specifically, its first adapter). We must also
point out the physical NICs or vmnics, to see what port channels and sub-groups they belong to.
Now let's look at their output.
First, we can see that Eth 3/4 and 3/5 (vmnic3 and vmnic4) are in a port channel, that their
PC-LTL value is 306, and that their Sub-Group ID (SGID) matches their vmnic number (3 and 4),
which makes things quite easy.
  LTL  VSM Port  Admin  Link  State  PC-LTL  SGID  Vem Port            Type
   17  Eth3/1    UP     UP    F/B*   305     0     vmnic0
   18  Eth3/2    UP     UP    F/B*   305     1     vmnic1
   19  Eth3/3    UP     UP    F/B*                 vmnic2
   20  Eth3/4    UP     UP    FWD    306     3     vmnic3
   21  Eth3/5    UP     UP    FWD    306     4     vmnic4
   49  Veth7     UP     UP    FWD                  vmk0
   50  Veth8     UP     UP    FWD                  vmk1
   51  Veth9     UP     UP    FWD            4     N1Kv-01-VSM-1.eth0
   52  Veth10    UP     UP    FWD            4     N1Kv-01-VSM-1.eth1
   53  Veth11    UP     UP    FWD            3     N1Kv-01-VSM-1.eth2
   54  Veth12    UP     UP    FWD            3     Win2k8-www-2.eth0
   55  Veth13    UP     UP    FWD            3     Win2k8-www-3.eth0
   56  Veth14    UP     UP    FWD            3     vCenter.eth0
  305  Po1       UP     UP    F/B*
  306  Po3       UP     UP    FWD
Again, as we look at the show pinning output, we can see that PC-LTL 306 is broken down into
SGIDs 3 and 4, and that Win2k8-www-2.eth0 is pinned to SGID 3 (vmnic3), whereas
N1Kv-01-VSM-1.eth0 is pinned to SGID 4 (vmnic4).
   10            306  32
   12            306  32
   49  1c000060  305  32  vmk0
   51  1c000080  306  32  N1Kv-01-VSM-1.eth0
   52  1c000090  306  32  N1Kv-01-VSM-1.eth1
   53  1c0000a0  306  32  N1Kv-01-VSM-1.eth2
   54  1c0000b0  306  32  Win2k8-www-2.eth0
   55  1c0000c0  306  32  Win2k8-www-3.eth0
   56  1c0000d0  306  32  vCenter.eth0
Now let's look at the same commands, but on VEM module 4 - or ESXi 2.
  LTL  VSM Port  Admin  Link  State  PC-LTL  SGID  Vem Port            Type
   17  Eth4/1    UP     UP    F/B*   305     0     vmnic0
   18  Eth4/2    UP     UP    F/B*   305     1     vmnic1
   19  Eth4/3    UP     UP    F/B*                 vmnic2
   20  Eth4/4    UP     UP    FWD    306     3     vmnic3
   21  Eth4/5    UP     UP    FWD    306     4     vmnic4
   49  Veth1     UP     UP    FWD                  vmk0
   50  Veth2     UP     UP    FWD                  vmk1
   51  Veth3     UP     UP    FWD            4     N1Kv-01-VSM-2.eth0
   52  Veth4     UP     UP    FWD            3     N1Kv-01-VSM-2.eth1
   53  Veth5     UP     UP    FWD            3     N1Kv-01-VSM-2.eth2
   54  Veth6     UP     UP    FWD            3     Win2k8-www-1.eth0
  305  Po2       UP     UP    F/B*
  306  Po4       UP     UP    FWD
   10            306  32
   12            306  32
   49  1c000000  305  32  vmk0
   51  1c000020  306  32  N1Kv-01-VSM-2.eth0
   52  1c000030  306  32  N1Kv-01-VSM-2.eth1
   53  1c000040  306  32  N1Kv-01-VSM-2.eth2
   54  1c000050  306  32  Win2k8-www-1.eth0
iSCSI_LTL* : iSCSI pinning overrides VPC-HM pinning
N1Kv-01#
We see the exact same LTL and PC-LTL values! Remember, these are Local Target Logic values.
The globally significant values are the port channel numbers on the VSM, but the local target
values are just that - local to each VEM. Therefore, all things being equal on both ESXi hosts
(number of vNICs, VLANs, purpose, assignment), it makes sense that they would calculate the
same local target logic values.
2013 INE