Discussion:
Traditional VMware Networking and the DVS
Nexus 1000v is a Distributed Virtual Switch software component for ESX and ESXi 4.0 and
above.
Traditional "old style" ESX networking defines a separately configured virtual switch (vSwitch)
in each ESX server. These switches have no coordinated management whatsoever, lead to a
great confusion about who is configuring virtual switch networking (the server administrator or
the network administrator?), and are generally just frustrating and unworkable across a large
number of ESX servers. Even supporting virtual machine migration (vMotion) across two ESX
servers using the traditional vSwitch is a "pain".
The Distributed Virtual Switch (DVS) model presents an abstraction of having a single virtual
switch across multiple ESX servers. While each pod in this lab has access to only two instances
of ESX, the DVS supports up to 64 ESX servers.
The conceptual picture of the DVS abstraction looks like the diagram below. Note that the "DVS
concept" implies that the DVS is managed from some "central location". VMware provides
their own DVS where this point of management is the vCenter server.
VSM-VEM Communication
The VSM communicates with each VEM in one of two fashions:
Layer 2 Communication
VSM and VEM communicate via two special VLANs (actual VLAN numbers are assigned at
Nexus 1000v installation time):
1. Control VLAN: all Nexus 1000v-specific configuration and status pass between VSM
and VEM on this VLAN. This is a low-level protocol that does not use IP.
2. Packet VLAN: certain Cisco switching protocol information, such as CDP (Cisco
Discovery Protocol), passes on this VLAN. This also does not use IP.
Layer 3 Communication
VSM and VEM communicate using TCP/IP (there can be routers between VSM and VEM).
There are no control or packet VLANs in this scenario.
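This lab uses the Layer 2 mode (you will select it during VSM setup later). For reference, the choice lives under the svs-domain configuration on the VSM; a Layer 3 setup would look roughly like this (a sketch only; the domain id shown is a placeholder):
N1K# conf t
N1K(config)# svs-domain
N1K(config-svs-domain)# domain id 10
N1K(config-svs-domain)# no control vlan
N1K(config-svs-domain)# no packet vlan
N1K(config-svs-domain)# svs mode L3 interface mgmt0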
The Final Entire Picture: What You Will See in the Lab (UCS and N1K)
The following screenshot shows the integrated configuration corresponding to "choice number
2" in the section above (so that we can show more than one pair, but not go too crazy in the lab):
Lab Topology
You will have the ability to view the entire configuration of the UCS, including shared fabric
interconnects, chassis, and blade servers that are both dedicated to your pod as well as all others.
You will not have direct access to any of the shared northbound infrastructure (N7Ks, MDS
switches, and EMC Clariion storage arrays). These are preconfigured so that they will work
correctly with your service profiles.
Your own pod consists of two server blades on which you will be running ESXi 5.0. You will be
performing all Nexus 1000v installation and configuration as part of the lab.
You will be installing your own VSM(s) as virtual machine(s) in the same ESXi servers.
Connect to the remote student desktop. You will be doing all of your work on this remote
desktop. In order to connect, left click on the Student Desktop icon on your lab web page and
select RDP Client, as shown:
An RDP configuration file is downloaded. Open it using your native RDP client (on Windows
clients you can just select Open with on the download dialogue, and you should not even have
to Browse for an application.)
Remote Desktop User: administrator
Remote Desktop Password: C!sc0123
Do all your work on the remote desktop. The following are the UCS Manager addresses that you
will be accessing. Almost all work is usually done with the first (primary) address listed, which
connects you to whichever UCS fabric-interconnect happens to have the primary management
role.
Device    Access                       Username     Password
UCS1      10.2.8.4 (primary address)   USER-UCS-X   C1sco1234
UCS1-A    10.2.8.2                     USER-UCS-X   C1sco1234
UCS1-B    10.2.8.3                     USER-UCS-X   C1sco1234
There is an organization created for all of your logical configurations, ORG-UCS-X, where X is
your pod number. A template for an ESXi server with the correct storage and network
connections (as discussed earlier in this document) is preconfigured for you in your organization.
You will be creating service profiles from this template, watching an automated install of ESXi
(or doing the install by hand, if you really prefer), and proceeding with the Nexus 1000v
configuration.
2. Note the VLANs that are used in this lab --- you might want to just write them down
because there are parts of the configuration where you will have to refer to them by
number. Since we are all using the same VLANs, later tasks in the lab will remind you of
the numbers when you need them:
Mgmt-Kickstart (198) will be used for management connection to ESXi itself (for
yourself and for VCenter to access ESXi) and for management connection (via ssh) to the
VSM. It will not be used for anything else.
The other VLANs are used as discussed in the introductory section to this lab. The
default (1) VLAN is not used at all in this lab, but there is no way to remove it from UCS
(following the operational theory of all Cisco switching products). Many enterprises
specifically do not ever use VLAN 1 for anything.
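For quick reference, the VLANs used in this lab (you will configure them on the VSM later) are:
VLAN 1     default           not used in this lab
VLAN 198   Mgmt-Kickstart    ESXi and VSM management (mgmt)
VLAN 200   vmdata            VM data
VLAN 930   control           Nexus 1000v control
VLAN 931   packet            Nexus 1000v packet
VLAN 932   vmotion           vMotion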
3. In the Content Pane, click on the Storage tab. You should be looking at something
like:
Pay no attention to the "desired order" column for the vHBAs. The important part is that
you have two vHBAs for SAN access on the separate A and B VSANs. Note that
your node WWN and port WWNs will be chosen from a pool when you create service
profiles from the template in the next task.
4. Now click on the Network tab. You should see 5 vNICs configured just as discussed in
the discussion introduction to this lab. To review, the strategy is:
a. First vNIC is for "bootstrapping" --- it will be attached to the original vSwitch so
that N1K can be installed and all networking can then be migrated to the N1K DVS.
b. Next pair of vNICs, on fabrics A and B (vmnic1 and vmnic2) for N1K Uplink
(VM data VLAN only)
c. Last pair of vNICs, on fabrics A and B (vmnic3 and vmnic4), for N1K Uplink (all
other VLANs besides VM data).
5. Now click on the Boot Order tab. Note that service profiles derived from this
template will try to boot off the hard disk first. When you first create and associate your
Service Profiles the hard disks will be scrubbed, so the boot will fall through to the first
network adapter and your blade server will PXE-boot into a kickstart-install of ESXi.
3. You will be submitting the form a total of 4 times, twice for each service profile.
4. For each service profile, choose the correct WWNN and WWPN combinations. Make
sure you submit the correct WWPN for fabric A and then the correct WWPN for fabric
B.
From the checkboxes at the bottom, we need only prefab VM LUN 2 (the others are
either for other labs or don't really exist at all; we are not booting from a SAN LUN).
18. On the popup, uncheck Scan for New VMFS Volumes (leave only Scan for
New Storage Devices checked). Click OK.
19. Click the Configuration tab.
20. Click Storage (inside the Hardware box on the left)
21. You should see the existing Storage1 datastore, which is the boot disk
22. If you do not yet see the shared storage (snap-xxx), click Refresh near the word
Datastores near the upper right.
23. Your new datastore (snap-xxx) should appear in the list
5. Click on Properties.. to the right of the vSwitch0 (the lower one). If you see the
popup that mentions IPv6, that is the wrong one --- go back and look for the other
Properties..
6. Highlight the VM Network (Virtual Machine Port Group) and click
Edit.. at the bottom. At this point you should be here:
7. Modify the VLAN ID (Optional) field and make it 200 (which as you recall is the
VLAN over which we will run VM data).
8. Click OK (you should see the VLAN ID change on the Port Group Properties on the right
side).
9. Click Add.. on the lower left
10. Choose Virtual Machine radio button and click Next
11. Enter under "Network Label" OLDSW-CONTROL and "VLAN ID" 930. Click "Next" and
then "Finish"
12. Repeat steps 9-11 (choosing "Virtual Machine") each time with the following two new
network labels (port groups):
a. "Network Label" OLDSW-MGMT with "VLAN ID" None (0)
b. "Network Label" OLDSW-PACKET with "VLAN ID" 931
13. Close the "vSwitch0 Properties window"
14. Click Add.. on the lower left one more time
15. This time choose the VMKernel radio button and click Next..
16. Use "Network Label" OLDSW-VMOTION and "VLAN ID" 932, and check ONLY the
Use this port group for vMotion checkbox. Click Next.
17. Choose the Use the following settings radio button, with:
a. IP address 192.168.X.1 (where X is your pod number)
[ when we repeat this whole task for second ESX, use 192.168.X.2 ]
b. Netmask 255.255.255.0
18. Click "Next" and then "Finish" When you get the popup about the default gateway, Click
"No"
19. Your full list of port groups should now include VM Network (VLAN 200), OLDSW-CONTROL (930), OLDSW-MGMT (none), OLDSW-PACKET (931), and OLDSW-VMOTION (932), and should look like:
20. Repeat this entire task for the second ESX server. Add all networks with the same
labels and VLAN IDs (capitalization counts!).
a. From the file chooser, pick My Computer on the left and then navigate to File
Repository (V:)\N1K\Nexus1000v.4.2.1.SV1.4a\VSM Install and choose the
nexus-1000v.4.2.1.SV1.4a.iso file.
7. Click back inside your VSM console and do CTRL-ALT-INSERT to reset the VM. This
time it should boot off the virtual media.
8. Choose the first menu item (Install Nexus1000V and bring up the new
image) or just let the timer expire. The VSM will install automatically. There will be a
delay of only a minute or two (amazingly fast install) after the line that contains "Linuxinitrd.."
9. The installer will fall into the configuration dialogue. Answer thus:
Enter the password for "admin": Cangetin1
Confirm the password for "admin": Cangetin1
Enter HA role [standalone/primary/secondary]:primary
Enter the domain id<1-4095>: use_your_pod_number!!
Would you like to enter the basic configuration dialog? yes
Create another login account: n
Configure read-only SNMP community string: n
Configure read-write SNMP community string: n
Enter the switch name : N1K
Continue with Out-of-band (mgmt0) management configuration? yes
Mgmt0 IPv4 address : 192.168.198.(100 + podnumber)
Mgmt0 IPv4 netmask: 255.255.255.0
Configure the default gateway? n
Configure advanced IP options? n
Enable the telnet service? n
Enable the ssh service? y
Type of ssh key you would like to generate: rsa
Number of rsa key bits: 1024
Enable the http-server: y
Configure the ntp server? n
Vem feature level will be set to 4.2(1) SV1(4), Do you want to
reconfigure? n
Configure svs domain parameters? y
Enter SVS Control mode (L2/L3): L2
Enter control VLAN: 930
Enter packet VLAN: 931
Would you like to edit the configuration? n
Use this configuration and save it? y
10. You should get the login prompt on the console. Quit out of the virtual console (you will
be unhappy with it, since the cursor gets stuck in there; there are no VMware Tools for
the VSM).
11. Access your VSM using putty(ssh) and the IP address you entered for the VSM in step 9.
12. Log in using admin and the password (Cangetin1) you entered in step 9. This is
where you will do all your VSM driving from.
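A few show commands are handy for a quick sanity check once you are logged in (optional, not a required lab step):
N1K# sh svs domain       //confirm the domain id, control/packet VLANs, and L2 mode
N1K# sh module           //only the VSM for now; the VEMs will appear here later
N1K# sh svs connections  //will show the vCenter connection once you create it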
4. At the white space near the bottom of the Plug-in Manager window, right-click to pop up
the New Plug-in button, and click on it.
5. In the Register Plug-in window, click on Browse and navigate to and select the
extension file that you saved. You should be here:
6. After you double-click or Open the extension file, its contents will appear in the "View
Xml" (read-only) area. Click Register Plug-in at the bottom.
7. Click Ignore on the certificate warning popup.
8. Your extension will be registered in vCenter and you will see it in the Plug-in Manager:
9. Click "Close" at the bottom of the Plug-in Manager window to close it.
2. Stay on this View for now. As we go on, the instructions will say "switch to the Hosts
and Clusters view" and "switch to the Networking view" and you will know what to do.
Task 16: Create VLANs and Port Profiles for Uplinks on the
VSM
We will now start N1K configuration on the VSM. The first step is creating VLANs and port
profiles for the uplinks. Remember we will have two types of uplinks --- one carrying only VM
data, the other for everything else.
1. Configure VLANs on the VSM:
N1K# conf t
N1K(config)# vlan 930
N1K(config-vlan)# name control
N1K(config-vlan)# vlan 931
N1K(config-vlan)# name packet
N1K(config-vlan)# vlan 198
N1K(config-vlan)# name mgmt
N1K(config-vlan)# vlan 200
N1K(config-vlan)# name vmdata
N1K(config-vlan)# vlan 932
N1K(config-vlan)# name vmotion
N1K(config-vlan)# copy run start
N1K(config-vlan)# sh vlan
VLAN  Name      Status   Ports
----  --------  -------  -----
1     default   active
198   mgmt      active
200   vmdata    active
930   control   active
931   packet    active
932   vmotion   active
.
.
.
Note if you are looking at vCenter (Networking view), you can see the new port-group
other-uplink get created.
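For orientation, an uplink (type ethernet) port profile on the VSM has roughly the following shape; the name other-uplink matches the note above, but treat the VLAN lists here as an illustration rather than the exact lab values:
N1K# conf t
N1K(config)# port-profile type ethernet other-uplink
N1K(config-port-prof)# switchport mode trunk
N1K(config-port-prof)# switchport trunk allowed vlan 198,930-932
N1K(config-port-prof)# system vlan 198,930,931
N1K(config-port-prof)# vmware port-group
N1K(config-port-prof)# no shut
N1K(config-port-prof)# state enabled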
5. Click Next
6. Leave the page alone (everything says "Do not migrate"). We will migrate vmk interfaces
(management and vMotion) to the DVS later. Click Next.
7. Leave the page alone. We will migrate VM networking to the DVS later. Click Next.
8. Click Finish.
6. On the VSM, create a "veth" profile for packet (this has to be marked as system VLAN
as discussed earlier).
Note: we need this only because the VSM is going to run as a VM inside this same
DVS.
Following the pattern of the other veth port profiles in this lab, the configuration looks roughly like this (the profile name packet is just a reasonable choice; 931 is the packet VLAN defined earlier):
N1K# conf t
N1K(config)# port-profile type veth packet
N1K(config-port-prof)# switch mode access
N1K(config-port-prof)# switch access vlan 931
N1K(config-port-prof)# system vlan 931
N1K(config-port-prof)# vmware port-group
N1K(config-port-prof)# no shut
N1K(config-port-prof)# state enabled
N1K(config-port-prof)# copy run start
6. Click OK
7. Run ping and/or putty (ssh) from your remote desktop to the IP for RH1 (remember you
can get it from the Summary tab for the VM; not the one in blue, which is the ESX IP).
8. Repeat steps 2-6 for the other two VMs (WinXP and Win2K08). You can test with ping or
an RDP connection from the remote desktop.
In the VSM, see how virtual ports are represented:
N1K# sh port-profile usage name vmdata
N1K# sh int virtual
N1K# sh int veth1 //nice to have "real insight" into VM networking
10. Click Next. Click Finish to confirm and close the popup
11. Repeat steps 2-10 precisely for your other ESX server
2. Create an IP filter (access-list) on the VSM (NoRDP is just a name you are inventing for
the filter).
N1K# conf t
N1K(config)# ip access-list NoRDP
N1K(config-acl)# deny tcp any any eq 3389
N1K(config-acl)# permit ip any any
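You can double-check the filter before attaching it (optional):
N1K(config-acl)# sh ip access-lists NoRDP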
3. Create a vmdata-nordp port profile and specify the IP filter within it:
Following the pattern of the other veth port profiles, the configuration looks roughly like this (the access-group is shown applied out, toward the VM, on the assumption that the deny on destination port 3389 should block incoming RDP sessions):
N1K# conf t
N1K(config)# port-profile type veth vmdata-nordp
N1K(config-port-prof)# switch mode access
N1K(config-port-prof)# switch access vlan 200
N1K(config-port-prof)# ip port access-group NoRDP out
N1K(config-port-prof)# vmware port-group
N1K(config-port-prof)# no shut
N1K(config-port-prof)# state enabled
N1K(config-port-prof)# copy run start
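To confirm the profile picked up the ACL, you can display it (optional):
N1K# sh port-profile name vmdata-nordp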
3. Make sure traffic on the vmdata VLAN (200) is flowing as usual (make sure you can
ping, RDP, ssh as appropriate)
4. Create port-profiles to attach VM's to the new isolated and community VLANs.
N1K# conf t
N1K(config)# port-profile type veth vm-isolated
N1K(config-port-prof)# switch mode private-vlan host
N1K(config-port-prof)# switch priv host-assoc 200 2009
N1K(config-port-prof)# vmware port-group
N1K(config-port-prof)# no shut
N1K(config-port-prof)# state enabled
N1K(config-port-prof)# port-profile type veth vm-community
N1K(config-port-prof)# switch mode private-vlan host
N1K(config-port-prof)# switch priv host-assoc 200 2999
N1K(config-port-prof)# vmware port-group
N1K(config-port-prof)# no shut
N1K(config-port-prof)# state enabled
N1K(config-port-prof)# copy run start
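The host associations in step 4 rely on the private VLANs themselves (primary 200 with isolated 2009 and community 2999) already being defined on the VSM; for reference, that definition looks roughly like this:
N1K# conf t
N1K(config)# vlan 2009
N1K(config-vlan)# private-vlan isolated
N1K(config-vlan)# vlan 2999
N1K(config-vlan)# private-vlan community
N1K(config-vlan)# vlan 200
N1K(config-vlan)# private-vlan primary
N1K(config-vlan)# private-vlan association 2009,2999
N1K(config-vlan)# exit
N1K(config)# sh vlan private-vlan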
5. In the vSphere client, attach (Network adapter 1 of) RH1 to the vm-isolated(N1K)
port-group (you should know how to do this by now )
6. In the vSphere client, attach (Network adapter 1 of ) both WinXP and Win2K08 to the
vm-community(N1K) port group
7. Verify that you can still access everything "from the outside" as if all the VMs were still
on VLAN 200 (i.e., just ping, RDP, ssh as appropriate from the remote desktop).
8. Log into the RH1 (you know how to get its IP) via putty(ssh) (root/cangetin)
9. Verify that you cannot ping the two Windows VMs (or access any other port: you can
try telnet IP_of_Win2K08 3389, for example).
10. From the remote desktop, log into the Win-XP VM via RDP
(studentdude/cangetin). This should work fine
11. Invoke a cmd window on the XP and verify that you can ping the Win2K08 VM, but that
you cannot ping the RH1.
12. From vSphere client, restore all the VMs to the regular vmdata(N1K) port group.
Verify that everyone can talk to each other now.
It is fine to leave the private VLAN configuration in place and have everyone openly
communicating on the primary VLAN, saving any port-profiles referring to other isolated
or community VLANs for later use.
4. Configure a port profile for a vmk interface to transport the ERSPAN session.
Nexus 1000v uses a VMkernel interface to implement the Layer 3 transport when using
ERSPAN. l3control marks the interface as being used for internal L3
functionality by the N1K. As usual, doing port configuration via new port profiles is best.
N1K(config-erspanpsrc)# port-profile ERSPAN
N1K(config-port-prof)# capability l3control
N1K(config-port-prof)# switch mode access
N1K(config-port-prof)# switch access vlan 200
N1K(config-port-prof)# vmware port-group
N1K(config-port-prof)# no shut
N1K(config-port-prof)# system vlan 200
N1K(config-port-prof)# state enabled
N1K(config-port-prof)# copy run start
5. Configure the new vmk in vSphere client to use the profile:
a. Go to Hosts and Clusters view if not already there
b. Highlight either of your ESX servers (not a VM)
c. Go to Configuration tab and click Networking (in Hardware box)
d. Change view to vSphere Distributed Switch if not already there
e. Click Manage Virtual Adapters
f. In the popup, click the blue Add (you should be here now:)
l. Back on the picture of the switch, verify there is a new vmk under the ERSPAN port
group. Open it up (with the +) and verify it has an IP from DHCP on our data net.
6. Show that the monitor session is active from the N1K point of view (on the VSM):
N1K# sh monitor session 1
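The erspan-source session shown here (and already entered at the erspan-source prompt back in step 4) has roughly the following shape; the source veth number and the destination IP (the Win2K08's address on our data network) are placeholders:
N1K# conf t
N1K(config)# monitor session 1 type erspan-source
N1K(config-erspan-src)# source vethernet 1 both
N1K(config-erspan-src)# destination ip <Win2K08-data-IP>
N1K(config-erspan-src)# erspan-id 1
N1K(config-erspan-src)# no shut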
7. Establish an RDP connection to the WinXP and log in (studentdude/cangetin)
and launch the web browser of your choice. This will be the traffic that we are
monitoring.
8. Establish an RDP connection to the Win2K08 and log in
(administrator/cangetin). Launch Wireshark from the desktop or start menu.
This will be our view of the packets being snooped.
9. In Wireshark, click on the VMware vmxnet3 virtual network device (blue
link):
10. Choose the interface with the address on our data network and click Start:
11. Fill in the filter at the top of the Wireshark capture window as shown and apply:
(Feature comparison table: VM-FEX vs. Nexus 1000v.)
3. Click OK
4. You will get a popup warning you that your ESX servers with service profiles derived
from this template will reboot. Click Yes and then OK
5. Your ESX servers will reboot. In the vSphere client Hosts and Clusters view, wait until both
ESX servers are rebooted and reconnected to vCenter (they will go back to solid text, no
longer italicized and/or fuzzy). You can watch the reboots by looking at the server KVM
Consoles as well, if you like.
1. Highlight the word All in the nav. pane and go to the Lifecycle Policy tab
2. Both policies should already be set to 1 Min (radio button). You will not have sufficient
privileges to change this policy.
4. In the Save Location pop-up, navigate to the folder where you want
to store the extension file (My Documents is fine). Click Select
5. Click OK to save the file (it will be called
cisco_nexus_1000v_extension.xml), yes, even though this is an extension for
UCS Manager rather than Nexus 1000v. If this conflicts with another file previously
downloaded for the N1K lab, feel free to overwrite the old one.
6. In the vSphere client, choose Plug-ins > Manage Plug-ins:
7. At the white space near the bottom of the Plug-in Manager window, right-click to pop up
the New Plug-in button, and click on it.
8. In the Register Plug-in window, click on Browse and navigate to and select the
extension file that you saved.
9. After you double-click or Open the extension file, its contents will appear in the "View
Xml" (read-only) area. Click Register Plug-in at the bottom.
10. Click Ignore on the certificate warning popup.
11. Your extension will be registered in vCenter and you will see it (Cisco-UCSM-xxxxxx)
in the Plug-in Manager under Available Plug-ins (there is nothing else you need to do).
12. Click "Close" at the bottom of the Plug-in Manager window to close it.
Task V5: Create the vCenter Connection and the DVS from
UCS Manager
Now that the extension key is successfully installed in vCenter, we can connect our UCS
Manager to vCenter and create the VM-FEX DVS.
4. On your remote desktop (which is the same as the vCenter server), figure out which IP
address you have on the 10.0.8. network (call this the vCenter IP)
[ run ipconfig /all in a cmd window. Make sure you get the one that begins with
10.0.8.]
5. In UCS Manager, go to the VM tab in the nav. pane if not already there.
6. Highlight the word VMware on the left and go to the vCenters tab (there should not be
anything listed here yet).
7. Click the green "+" on the right
8. The name for vCenter is arbitrary, but it cannot conflict with other groups using the
same UCS. Use MyVCX, where X is your pod number, together with the vCenter IP you
discovered above. Make sure to use the proper pod-number suffix to prevent our cleanup
scripts from wiping out your configuration when cleaning up another pod.
9. Click Next
10. Under Folders, leave blank and click Next (we do not have a folder outside the
DataCenter object)
11. Under Datacenters enter the name of your Datacenter MyDCX (this must match the
datacenter name in vCenter) and click Next.
12. Under Folders (this is now folders inside the datacenter, and is required), click the
green "+ Add" at the bottom and enter RedSox (this is a new name that will be pushed to
vCenter). Click Next.
13. Under DVSs, click the green "+ Add" at the bottom and enter a name for the DVS (this is
arbitrary). Use MyVMFex. Click the Enable radio button and click OK
14. Click Finish for the DVSs.
15. Click Finish for the Folders (you can see this sort of rolls backwards)
16. Click Finish for the DataCenters and OK to confirm creation. This pushes the folder and
DVS information to vCenter.
At this point look at the vSphere client (Networking view). You will see your DVS
named MyVMFex created inside a folder named RedSox. The tasks at the bottom of
vSphere client will indicate this configuration is done by your UCS Manager. When you
are done, vSphere client should look like:
7. On the nav. pane on the left, open up Port Profiles and highlight Port Profile
vmdata
8. Click the Profile Clients tab
9. Click the green "+" on the far right to bring up the profile clients form (defines which
vCenters / folders / DataCenters to which to push the profile)
10. Create a client name (this is arbitrary): vmdataX, where X is your pod number. Make
sure you change the datacenter to your datacenter name (do not pick All, and do not pick your
friend's datacenter name). You can leave everything else as is:
11. Click OK
12. Back in vSphere client on the Networking view (where you should be already), you should
see a port group named vmdataX get added to your DVS.
13. Repeat steps 1-9 (both port profile and profile client) twice with these values. Make sure
the native VLAN radio button is checked for the single VLAN for each profile.
Port profile and profile client name: mgmtX vlan(native): Mgmt-Kickstart
Port profile and profile client name: vmotionX vlan(native): VMotion
14. You should now see the three port groups in vSphere client Networking view
14. Click OK
15. Run ping and/or putty (ssh) from your remote desktop to the IP for RH1 (remember you
can get it from the Summary tab for the VM; not the one in blue, which is the ESX IP).
16. Examine how virtual machines appear in UCS Manager (they are visible, although there
are no administrative actions you can take on them). On the VM tab, open up the tree
underneath VMware Virtual Machines. Your ESX servers may be mixed in among other servers
from other groups of students, so look carefully at the servers (chassis/slot) to find yours.
You can go all the way down to the vNIC level, and see that a vNIC has been associated
with the VM and has inherited the MAC address of the VM.
17. Repeat steps 2-6 for the other two VMs (WinXP and Win2K08). You can test with ping or
an RDP connection from the remote desktop. You should see these VMs "pop into" UCS
Manager as well.
21. Click Next. Click Finish to confirm and close the popup
22. Repeat steps 2-10 precisely for your other ESX server
13. Click OK
14. In UCS Manager, go to the VM tab and highlight the name of your data port profile as
shown (please use yours and not anyone else's). Make sure you are on the General tab
on the right.
15. Click the High Performance radio button. You should be here: