
Configuring VXLAN Transport

In this video, we're going to go over how to configure the VXLAN transport, or VXLAN
tunnel endpoint, on each individual hypervisor in your NSX environment. So from the Home
screen of your vSphere Web Client, go ahead and click on Networking & Security and then
Installation. Once we've got the NSX Manager up and accessible, and we have our controller
node deployed, we're going to go ahead and click on Host Preparation. Now in an earlier
video, we went ahead and installed the NSX software on the individual clusters. What I have
done for this video is I've actually split out a hypervisor into its own cluster, so we have two
clusters here to take a look at. And you'll see here that we have the NSX software installed on
both clusters and the distributed firewall is enabled and running on them as well. Now the
existing cluster already has VXLAN enabled on it, but the purpose of this video is to show
you how to go about doing this, and it's very simple. You simply click on the Configure
button and then pick the distributed virtual switch that you want the VXLAN tunnels to run
on. Now it's very important that if you are in an environment that has multiple distributed
virtual switches, all of them are going to show in this drop-down menu. So you need to make
very sure that you've selected the proper distributed virtual switch that you want the VXLAN
tunnels to run over.

Now you'll see here, we have a field to enter the VLAN. Typically you're going to want to
dedicate either a network or a VMkernel port to the VXLAN tunnels. But if you want to tag
the VLAN and have a trunk down to a switch port and a NIC interface on the server that
carries multiple VLANs, you can absolutely do that. To tell NSX which VLAN it needs to use
when it's creating the kernel port, go ahead and enter the VLAN here. Now one thing to note
about the MTU field: you'll see that it automatically defaults to 1600. You can leave it at
1600, which is what VMware recommends for setting up the VXLAN tunnel endpoint, and
everything will install and work correctly. But if you want to change it to 9000, which is the
jumbo frames MTU, I would go ahead and do it now. Changing this after the fact is fairly
involved: it's going to require API calls to modify the existing tunnels, as well as modifying
the VXLAN configuration for these hypervisors. It's not difficult, it just takes a bit of work to
do. So I would go ahead and change this to "9000" during your deployment if you already
have jumbo frames enabled. But if you're doing a proof-of-concept and you want to keep
things fairly standard, or stick with what you're used to so you don't introduce additional
problems, go ahead and leave it at "1600."
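
If you do end up needing to audit or change the MTU later, that work goes through the NSX Manager REST API. As a rough illustration only, a read-back of the currently configured VTEP MTU might look like the sketch below; the endpoint path, XML element names, hostname, and credentials are assumptions, so confirm them against the NSX API guide for your version.

```python
# Hedged sketch: read back the VXLAN (VTEP) MTU configured for each prepared
# distributed switch via the NSX Manager REST API. The endpoint path and XML
# element names are assumptions for illustration; hostname and credentials are
# placeholders. Check the NSX API guide for your version before relying on it.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "https://nsxmgr-lab.justavmwblog.com"   # placeholder hostname
AUTH = ("admin", "password")                          # placeholder credentials

resp = requests.get(f"{NSX_MANAGER}/api/2.0/vdn/switches",
                    auth=AUTH, verify=False)          # lab only: no cert check
resp.raise_for_status()

# Walk the returned XML and print each prepared switch's configured MTU.
root = ET.fromstring(resp.text)
for ctx in root.iter("vdsContext"):                   # element names assumed
    name = ctx.findtext("switch/name")
    mtu = ctx.findtext("mtu")
    print(f"DVS {name}: VXLAN MTU {mtu}")
```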

Now the VMKNic IP Addressing is very important here. We have two options: we can Use IP
Pool or we can Use DHCP. My recommendation is going to be Use IP Pool. VMware may say
go ahead and Use DHCP, but personally I prefer to be able to own and control the IP
addresses that have been assigned to me. We created a test_IP_Pool in earlier videos, where
we already went over how to create IP pools. In this case, I have a VTEP IP pool that I went
ahead and created that is going to be handing out IPs to the VXLAN networking, so that way I
can know and control all the IPs that are being assigned and used for my VXLAN tunnel
endpoints. So I'm going to go ahead and select the VTEP Pool. Now the VMKNic Teaming
Policy, this is where things can get very interesting. We went over the five different teaming
policies in an earlier set of slides in this course. Fail Over is going to be the default one, and
that's the one most people are going to end up using. If they are going to do any sort of load
balancing across multiple network interfaces, they are either going to be using Load Balance -
SRCID or Load Balance - SRCMAC, or they are going to use some sort of Static
EtherChannel or maybe even an Enhanced LACP bundle.
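
Before committing to a pool, it can be handy to confirm from the API side which IP pools NSX Manager already knows about. This is a hedged sketch only: the IPAM endpoint, scope ID, and element names are illustrative assumptions, and the hostname and credentials are placeholders.

```python
# Hedged sketch: ask NSX Manager which IP pools exist so you can confirm the
# VTEP pool before selecting it in the wizard.
import requests
import xml.etree.ElementTree as ET

NSX_MANAGER = "https://nsxmgr-lab.justavmwblog.com"   # placeholder hostname
AUTH = ("admin", "password")                          # placeholder credentials

# Assumed NSX-v IPAM endpoint for pools in the global scope.
url = f"{NSX_MANAGER}/api/2.0/services/ipam/pools/scope/globalroot-0"
resp = requests.get(url, auth=AUTH, verify=False)     # lab only: no cert check
resp.raise_for_status()

root = ET.fromstring(resp.text)
for pool in root.iter("ipamAddressPool"):             # element name assumed
    print(pool.findtext("name"))
```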

My recommendation, though, is going to be to stay away from Static EtherChannel and
Enhanced LACP and instead use Load Balance - SRCID or Load Balance - SRCMAC,
because that's going to let you go ahead and select more than one VXLAN tunnel endpoint.
The way it does that is the number of network uplinks configured on the distributed virtual
switch that you select in the drop-down here will automatically dictate the number of
VXLAN tunnel endpoint kernel ports that get generated. So in my lab here, I only have a
single uplink; that's why we only have 1 listed in the VTEP field, and you'll see it's grayed
out, I can't change it. If I had two uplinks, or four uplinks, or even eight uplinks, you would
see that number correspond to the same number of uplinks. And if you go back and look at
the previous set of slides, you'll know that there is a certain set of features and capabilities
that are only available to Load Balance - SRCID and Load Balance - SRCMAC in the NSX
environment when you're creating your VXLAN tunnel endpoints. But for the lab
environment here, since I just have a single interface and I am not doing any sort of load
balancing, I am going to go ahead and just select Fail Over.
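
To make that relationship concrete, here is a tiny illustrative helper (plain Python, not an NSX call) that mirrors the rule just described: the two multi-VTEP load-balancing policies give you one VTEP per DVS uplink, while the other policies leave you with a single VTEP.

```python
# Illustrative helper: how the VTEP count relates to the teaming policy and
# the number of DVS uplinks, as described above. The policy names are the UI
# labels used in this course, not API identifiers.
MULTI_VTEP_POLICIES = {"Load Balance - SRCID", "Load Balance - SRCMAC"}

def expected_vtep_count(teaming_policy: str, dvs_uplinks: int) -> int:
    """Return how many VTEP VMkernel ports we'd expect NSX to create."""
    if teaming_policy in MULTI_VTEP_POLICIES:
        # Multi-VTEP policies create one tunnel endpoint per DVS uplink.
        return dvs_uplinks
    # Fail Over, Static EtherChannel, and Enhanced LACP use a single VTEP.
    return 1

# The single-uplink lab host with the default Fail Over policy -> 1 VTEP.
print(expected_vtep_count("Fail Over", 1))              # 1
# A four-uplink host using SRCID load balancing -> 4 VTEPs.
print(expected_vtep_count("Load Balance - SRCID", 4))   # 4
```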

So we'll go ahead and hit OK. And if we actually want to see what is occurring on the
hypervisor, we can click on Home - Hosts and Clusters. In my case, we're going to look at
esxi03-lab.justavmwblog.com, so let's go ahead and take a look at Monitor and Tasks. And
you'll see here that we have added a virtual NIC and we've updated the network
configuration. So if we actually want to take a look at what was done on that hypervisor, we
can click on the Manage tab, then Networking and VMkernel adapters, and you'll see here
we have a new VMkernel adapter that's been added on the hypervisor, vmk1. Now the
Network Label is going to be a very long, very hard to pronounce, almost impossible to read
Network Label. You'll see this vxw-vmknicPg-dvs-53- port group name, and then it goes on
and on and on. Really, you don't need to pay much attention to what that Network Label is;
all you need to know is that when you see "vmknicPg", that is going to be your VXLAN
tunnel endpoint. That is where all of your virtual networks are going to begin, end, and
terminate in your environment; they go in and out of this tunnel.
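
If you prefer to confirm the new VTEP from the command line rather than the Web Client, the same information is visible through esxcli on the host. The sketch below assumes SSH is enabled on the ESXi host and uses placeholder credentials.

```python
# Hedged sketch: confirm the new VTEP VMkernel port over SSH with esxcli.
# Assumes SSH is enabled on the host; hostname and credentials are placeholders.
import paramiko

HOST = "esxi03-lab.justavmwblog.com"
USER, PASSWORD = "root", "password"   # replace with real credentials

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

# esxcli lists every VMkernel interface; the VTEP shows up as vmk1 attached
# to the auto-generated "vmknicPg" port group.
stdin, stdout, stderr = client.exec_command("esxcli network ip interface list")
print(stdout.read().decode())

# The IPv4 details (address, netmask, gateway) pulled from the IP pool.
stdin, stdout, stderr = client.exec_command(
    "esxcli network ip interface ipv4 get -i vmk1")
print(stdout.read().decode())

client.close()
```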

Now if we go ahead and take a look at the configuration of it, we'll see that it pulled an IP
address of 192.168.1.60, along with my default gateway and netmask. So that is the IP
address of the VXLAN tunnel endpoint that has been configured on this hypervisor. If you go
back, click on the Home button, click on Networking & Security again, then Installation, and
go back to our Host Preparation, we should see that this new cluster with the new hypervisor
has the NSX software installed, a distributed firewall installed and configured, and now the
VXLAN tunnel configured as well. If you want to take a look at what is actually being
configured, you can click on Logical Network Preparation and then VXLAN Transport. Let
me go ahead and minimize this window here, so you'll get a little better view of what we're
looking at. You'll see here that we have our New Cluster with the single hypervisor, and it's
currently in a configured state. It's running off the HomeLab-DVSwitch, VLAN 0, so that's
the native VLAN, with an MTU of 1600. It was configured using that IP pool, and here is the
IP address that was assigned to it. And way over on the far side here, let's see if I can play
with the columns, we have our Teaming Policy, which is Fail Over, and the number of
VTEPs, or VXLAN tunnel endpoints, which is 1.

Configuring VXLAN on the ESXi Hosts


Once cluster preparation is completed, it's time to configure VXLAN. Virtual Extensible LAN
(VXLAN) enables you to create a logical network for your virtual machines across different networks;
you can create a layer 2 network on top of your layer 3 networks. VXLAN transport networks deploy a
VMkernel interface for VXLAN on each host. This is the interface that encapsulates a network segment's
packets when they need to reach a guest on another host. Because the encapsulation happens in a
VMkernel interface, the workload is totally unaware of this process occurring. As far as the workload is
concerned, the two guests are adjacent on the same segment, when in fact they could be separated by many
L3 boundaries.
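
As a quick aside on where the 1600-byte figure comes from: VXLAN encapsulation wraps each guest frame in an outer Ethernet, IP, UDP, and VXLAN header, roughly 50 bytes of overhead, so a standard 1500-byte guest frame needs a transport MTU of at least about 1550, and 1600 gives comfortable headroom. A back-of-the-envelope check:

```python
# Back-of-the-envelope VXLAN encapsulation overhead (IPv4 transport, no
# optional outer VLAN tag). Values are the standard header sizes in bytes.
OUTER_ETHERNET = 14   # outer Ethernet header
OUTER_IPV4     = 20   # outer IPv4 header
OUTER_UDP      = 8    # outer UDP header
VXLAN_HEADER   = 8    # VXLAN header (includes the 24-bit VNI)

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
guest_mtu = 1500      # standard guest/segment MTU

print(f"Encapsulation overhead: {overhead} bytes")        # 50 bytes
print(f"Minimum transport MTU:  {guest_mtu + overhead}")  # 1550
```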

To configure VXLAN, log in to the Web Client and go to Networking & Security > Installation > Host
Preparation > Configure. A wizard will ask for the VXLAN networking configuration details. This will
create a new VMkernel port on each host in the cluster to act as the VXLAN Tunnel Endpoint (VTEP).

Provide the options below to configure the VTEP VMkernel port:

• Switch – Select the DVS from the drop-down for attaching the new VXLAN VMkernel interface.
• VLAN – Enter the VLAN ID to use for the VXLAN VMkernel interface. Enter "0" if you're not
using a VLAN, which will pass along untagged traffic.
• MTU – The recommended minimum MTU value is 1600, which allows for the overhead incurred
by VXLAN encapsulation. It must be greater than 1550, and the underlying network must support
the increased value. Ensure your distributed switch's MTU is set to 1600 or higher.
• VMKNic IP Addressing – You can specify either IP Pool or DHCP for IP addressing. I don't have
DHCP in my environment. Select "New IP Pool" to create a new one, the same way we created one
during NSX Controller deployment. I have used an IP pool called "VXLAN Pool". Enter the IP Pool
name, gateway, prefix length, primary DNS, DNS suffix, and static IP pool range for the new IP pool
and click OK to create it (see the API sketch after this list for a scripted alternative).
• VMKNic Teaming Policy – This option defines the teaming policy used for bonding the vmnics
(physical NICs) for use with the VTEP port group. I have left it at the default teaming policy,
"Static EtherChannel".
• VTEP – I left the default value; it is not even configurable if you choose "Static EtherChannel" as
your teaming policy.

Click OK to create the new VXLAN VMkernel interface on the ESXi hosts.
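
If you would rather script the IP pool creation than click through the wizard, a request along the lines of the sketch below can be sent to NSX Manager. Treat the endpoint path, XML layout, addresses, hostname, and credentials as illustrative assumptions and verify them against the NSX API guide for your version.

```python
# Hedged sketch: create a VTEP IP pool via the NSX Manager REST API instead of
# the wizard. Endpoint path, XML schema, addresses, and credentials are
# illustrative assumptions, not a verified recipe.
import requests

NSX_MANAGER = "https://nsxmgr-lab.justavmwblog.com"   # placeholder hostname
AUTH = ("admin", "password")                          # placeholder credentials

# Pool definition mirroring the wizard fields: name, gateway, prefix length,
# DNS, and a static IP range (all example values).
pool_xml = """
<ipamAddressPool>
  <name>VXLAN Pool</name>
  <prefixLength>24</prefixLength>
  <gateway>192.168.1.1</gateway>
  <dnsSuffix>lab.local</dnsSuffix>
  <dnsServer1>192.168.1.10</dnsServer1>
  <ipRanges>
    <ipRangeDto>
      <startAddress>192.168.1.60</startAddress>
      <endAddress>192.168.1.69</endAddress>
    </ipRangeDto>
  </ipRanges>
</ipamAddressPool>
"""

# Assumed NSX-v IPAM endpoint for creating a pool in the global scope.
resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/services/ipam/pools/scope/globalroot-0",
    data=pool_xml,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,                                     # lab only: no cert check
)
resp.raise_for_status()
print("Created IP pool, object ID:", resp.text)
```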

Once VXLAN is configured, you will be able to see that the VXLAN status has changed to "Enabled"
for that particular cluster.

As discussed in the previous steps, configure VXLAN for the other clusters in your vCenter.
Both of my compute clusters are configured with VXLAN, and the VXLAN status has turned to "Enabled".

You will notice that a VXLAN VMkernel interface has been created on the ESXi hosts in the compute
clusters, with its IP address assigned from the IP pool we created earlier.
You can verify the same from Networking & Security > Installation > Logical Network
Preparation > VXLAN Transport.
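
For a vCenter-wide check that each host in the compute clusters really received its VTEP VMkernel interface and pool-assigned IP, a short pyVmomi sketch like the one below can list every vmk adapter with its IP and MTU; the vCenter hostname and credentials are placeholders.

```python
# Hedged sketch using pyVmomi: list every VMkernel adapter on every host so
# you can spot the new VTEP vmknics with their pool-assigned IPs and MTU.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only: skip cert checks
si = SmartConnect(host="vcenter-lab.justavmwblog.com",   # placeholder vCenter
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)

# Print host name, vmk device, IP address, and MTU for every VMkernel adapter.
for host in view.view:
    for vnic in host.config.network.vnic:
        ip = vnic.spec.ip.ipAddress
        mtu = vnic.spec.mtu
        print(f"{host.name}  {vnic.device}  {ip}  MTU {mtu}")

Disconnect(si)
```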
