
Configuration 3: Dynamic LACP - Source/Destination IP, TCP/UDP port & VLAN

https://storagehub.vmware.com/t/vmware-vsan/vmware-r-vsan-tm-network-design/configuration-3-7/

In this example, we are going to configure a 2-port LACP port-channel on a switch and a 2-uplink LAG on a distributed vSwitch. We chose a
LAG with 2 uplinks to remain consistent with the rest of this document (and because most vSAN Ready Nodes ship with this configuration). We will use
10GbE networking with two physical uplinks per server.

The process is the same in most cases:

Configuration 3: Switch Side Setup Overview


• Identify the ports where the vSAN node will connect.
• Create a port-channel.
• If using VLANs, trunk the correct VLAN(s) to the port-channel.
• Configure the desired distribution or load-balancing option (hash).
• Set the LACP mode to active/dynamic.
• Verify that the MTU is configured properly.
Configuration 3: vSphere Side Setup Overview
• Configure the vDS with the correct MTU.
• Add hosts to the vDS.
• Create a LAG with the correct number of uplinks and attributes matching the port-channel.
• Assign physical uplinks to the LAG group.
• Create a distributed port group for vSAN traffic and assign the correct VLAN.
• Configure VMkernel ports for vSAN with the correct MTU.
Configuration 3: Physical Switch Setup Detailed
This setup follows Dell’s guidance at http://www.dell.com/Support/Article/us/en/19/HOW10364.
In our example, we are going to configure a 2 uplink LAG as follows:

• Switch ports 36 and 18.
• We are using VLAN trunking, so the port-channel will be in VLAN trunk mode, with the appropriate VLAN trunked (VLAN 40).
• We have chosen “Source and destination IP addresses, TCP/UDP port and VLAN” as the load-balancing (load distribution) method.
• We have verified that the LACP mode will be “active” (also known as dynamic).
On our Dell switch, these were the actual steps taken to configure an individual port-channel:

• Step 1: Create a port-channel, “1” in this case:

#interface port-channel 1

• Step 2: Set the port-channel to VLAN trunk mode:

#switchport mode trunk

• Step 3: Allow the appropriate VLANs:

#switchport trunk allowed vlan 40

• Step 4: Configure the load-balancing option:

#hashing-mode 6

• Step 5: Assign the correct ports to the port-channel and set the mode to active:

#interface range Te1/0/36, Te1/0/18

#channel-group 1 mode active

Full set of steps:

#interface port-channel 1
#switchport mode trunk

#switchport trunk allowed vlan 40

#hashing-mode 6

#exit

#interface range Te1/0/36,Te1/0/18

#channel-group 1 mode active

Verify that the port-channel is configured correctly:

#show interfaces port-channel 1


Channel  Ports                        Ch-Type  Hash Type  Min-links  Local Prf
-------  ---------------------------  -------  ---------  ---------  ---------
Po1      Active: Te1/0/36, Te1/0/18   Dynamic  6          1          Disabled

Hash Algorithm Type
1 - Source MAC, VLAN, EtherType, source module and port Id
2 - Destination MAC, VLAN, EtherType, source module and port Id
3 - Source IP and source TCP/UDP port
4 - Destination IP and destination TCP/UDP port
5 - Source/Destination MAC, VLAN, EtherType, source MODID/port
6 - Source/Destination IP and source/destination TCP/UDP port
7 - Enhanced hashing mode
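
To make the hashing behavior concrete, here is a minimal Python sketch of how a “source/destination IP, TCP/UDP port and VLAN” style hash pins each flow to one member link of the LAG. The CRC-based hash, the IP addresses, and the port number are purely illustrative assumptions; real switch and vDS implementations are vendor-specific. The key property holds regardless: the same flow always selects the same uplink.

```python
import zlib

def select_uplink(src_ip, dst_ip, src_port, dst_port, vlan, num_links=2):
    """Toy model of a source/destination IP + TCP/UDP port + VLAN hash.

    Real switch/vDS hashing is vendor-specific; CRC-32 here is only a
    stand-in. It demonstrates that every packet of a given flow maps
    deterministically to exactly one member link of the LAG.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{vlan}".encode()
    return zlib.crc32(key) % num_links

# A given flow (hypothetical node-to-node session on VLAN 40) always
# lands on the same uplink, so per-flow packet ordering is preserved.
flow = ("172.16.40.11", "172.16.40.12", 54321, 2233, 40)
first = select_uplink(*flow)
assert all(select_uplink(*flow) == first for _ in range(10))
assert first in (0, 1)
```

Because the selection is deterministic per flow, LACP never reorders packets within a TCP session; balancing only happens across distinct flows.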

Note: This procedure must be repeated on all participating switch ports that are connected to your vSAN nodes.

Configuration 3: vSphere Distributed Switch Setup


Before you begin, make sure that the vDS is upgraded to a version that supports LACP. To verify, right-click the vDS and check whether an upgrade option
is available. Depending on the original version of the Distributed Switch, you may have to upgrade the vDS before you can take advantage
of LACP.
Figure 49. Verify that the vDS supports LACP

Configuration 3: Create LAG Group on VDS


To create a LAG on a distributed switch, select the vDS, go to the Configure tab, and select LACP. Click the green + symbol to create a new
LAG:

Figure 50. Add a New Link Aggregation Group


The following properties need to be set on the New Link Aggregation Group Wizard:

• LAG name; in this case we will use the name lag1.
• Number of ports should be set to 2, to match the port-channel on the switch.
• Mode should be set to Active, as this is what we configured on the physical switch.
• Load-balancing mode should match the physical switch hashing algorithm, so we set this to “Source and destination IP addresses, TCP/UDP port and VLAN”.
Figure 51. LAG Settings
Configuration 3: Add physical Uplinks to LAG group
Since we have already added our vSAN nodes to the vDS, the next step is to assign the individual vmnics to the appropriate LAG ports.
• Right-click the appropriate vDS and select Add and Manage Hosts…
• Select Manage Host Networking, and add the attached hosts you wish to configure.
• On the select network adapter tasks page, select Manage Physical Adapters.
At this point, we select the appropriate adapters and assign them to the LAG ports.

Figure 52. Assigning uplinks to the LAG (a)


In this scenario, we are re-assigning vmnic0 from Uplink 1 position to port 0 on lag1:
Figure 53. Assigning uplinks to the LAG (b)
Now we must repeat the procedure for vmnic1 to the second lag port position, i.e. lag1-1. The configuration should now look like this:
Figure 54. Assigning uplinks to the LAG (c)
This procedure must be repeated on all participating vSAN nodes. The LAG configuration can also be interrogated from the command line using esxcli:

esxcli network vswitch dvs vmware lacp config get

Figure 55. Querying LAG configuration from ESXCLI (a)

esxcli network vswitch dvs vmware lacp status get


Figure 56. Querying LAG configuration from ESXCLI (b)
Note: The most important flag is the "SYN" flag (Port state). Without it, the LAG won’t form.
Configuration 3: Distributed port group Teaming and Failover policy
We now need to assign the LAG group as an “Active uplink” in the distributed port group’s Teaming and Failover policy. Select or create the designated
distributed port group for vSAN traffic. In our case, we already have a vSAN port group called “vSAN” with VLAN ID 40 tagged. Edit the port group
and configure the Teaming and Failover policy to reflect the new LAG configuration.
Ensure the LAG group “lag1” is in the Active uplinks position, and ensure the remaining uplinks are in the Unused position.
Note: When a link aggregation group (LAG) is selected as the only active uplink, the load balancing mode of the LAG overrides the load balancing
mode of the port group. Therefore, the “Route based on originating virtual port” load balancing policy plays no role here.
Figure 57. Choosing a LAG for the active “uplink”

Configuration 3: Create the VMkernel interfaces


The final step is to create the VMkernel interfaces to use the new distributed port group, ensuring that they are tagged for vSAN traffic. This is an
example topology of a 4-node vSAN cluster using LACP. We can observe that each vSAN vmknic can communicate over vmnic0 and vmnic1 on a LAG
group to provide load balancing and failover:
Figure 58. Reviewing LACP/LAG configuration

Configuration 3: Load balancing considerations


From a load-balancing perspective, while we do not see a perfectly even balance of traffic across all hosts on all vmnics in this LAG setup, we do see more
consistency than with “Route based on physical NIC load” in configuration 1 or the air-gapped/multiple-vmknic approach in configuration 2.
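
This is expected, because a hash of this type balances flows, not bytes. The following sketch (again using an illustrative CRC-based hash and made-up addresses, not any vendor’s real algorithm) contrasts many short-lived flows, which tend to spread well across two links, with a handful of long-lived node-pair sessions, which may split unevenly:

```python
import random
import zlib
from collections import Counter

def link_for(src_ip, dst_ip, src_port, dst_port, vlan, num_links=2):
    # Illustrative CRC-based stand-in for the switch/vDS hash.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{vlan}".encode()
    return zlib.crc32(key) % num_links

random.seed(1)
# Many flows with ephemeral source ports: tends toward an even spread.
many = Counter(
    link_for("172.16.40.11", "172.16.40.12",
             random.randint(1024, 65535), 2233, 40)
    for _ in range(10000)
)
# A few long-lived sessions between a handful of node pairs:
# with so few samples, the split can be lopsided.
few = Counter(
    link_for(f"172.16.40.{s}", f"172.16.40.{d}", 50000 + s, 2233, 40)
    for s in range(1, 5) for d in range(1, 5) if s != d
)
print(dict(many), dict(few))
```

With only a few active vSAN sessions per host, per-vmnic utilization therefore reflects which flows happened to hash to each link, not an even byte-level split.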

As before, if we look at the individual hosts’ vSphere performance graphs, we see better load balancing:
Figure 60. LACP Load Balancing, ESXi host view

Configuration 3: Network uplink redundancy lost


Now let’s take a look at network failures. In this example, vmnic1 is disabled on a given vSAN node. From an alarm perspective, we see that a
Network redundancy alarm has triggered:

Figure 61. Uplink failure with LACP configured


From the performance charts, we see the workload moved from vmnic1 to vmnic0:
Figure 62. Workload moving between uplinks with LACP configured
We do not observe any vSAN-related health alarms, and the impact on guest I/O is minimal compared to the air-gapped/multi-vmknic configuration.
This is because, with LACP configured, we do not have to abort any TCP sessions, unlike in the previous examples.

Configuration 3: Recovery/Failback considerations


In a failback scenario, we see distinct behavioral differences between load-based teaming, multiple vmknics, and LACP in a vSAN environment. After
vmnic1 is recovered, traffic is automatically (re)balanced across both active uplinks.
Note: This behavior can be quite advantageous for vSAN traffic.

Configuration 3: Failback set to Yes or No?


We have already discussed the fact that a LAG load-balancing policy overrides a vSphere distributed port group’s Teaming and Failover policy. What
we also need to consider is the guidance on the Failback value. In our lab tests, we verified that there is no discernible behavioral difference between
Failback set to Yes or No with LACP. LAG/LACP takes priority over the port group settings, as is the case with port group load-balancing policies.
Figure 64. Failback setting for LAG policies
Note: Network failure detection remains set to “link status only”, as beacon probing is not supported with LACP. See
the VMware KB article Understanding IP Hash load balancing (2006129).
