Implementing the Cisco
Nexus 1000V (DCNX1K)
v2.0 Lab Guide
L5557C-001
December 2012
For individual use only; may not be reprinted, reused, or distributed without the express written consent of Global Knowledge.
Copyright Information
Copyright 2012 by Global Knowledge Training LLC
The following publication, Implementing the Cisco Nexus 1000V (DCNX1K) v2.0 Lab Guide, is a Cisco Systems, Inc.
derivative work developed by Global Knowledge Training LLC. All rights reserved. No part of this publication may be
reproduced or distributed in any form or by any means without the prior written permission of the copyright holder.
Products and company names are the trademarks, registered trademarks, and service marks of their respective owners.
Course Director
Product Director, Cisco Product Management
WW Product Manager, Cisco Products & Services
Printed in Canada
Table of Contents
Lab 0: Global Knowledge Remote Labs .................................................................. L0-1
Lab 1: Set Up the VMware vSphere Environment................................................... L1-1
Lab 2: Install and Configure the Cisco Nexus 1000V VSMs .................................. L2-1
Lab 3: Install and Configure the Cisco Nexus 1000V VEMs .................................. L3-1
Lab 4: Upgrading the Cisco Nexus 1000V VSM and VEM .................................... L4-1
Lab 5: Optimize the Cisco Nexus 1000V Implementation ...................................... L5-1
L0
Global Knowledge Remote Labs
The purpose of this lab is to introduce you to the Global Knowledge Remote Labs
Environment used for this course.
Activity Objectives
In this activity, you will be introduced to the Global Knowledge Remote Labs environment
and the labs contained in this course. You will familiarize yourself with the interface and
devices.
After completing this activity, you will be able to meet these objectives:
Become familiar with the lab topology and access all devices
Outline
Visual Objective
The figure portrays the Global Knowledge DCNX1KV v2.0 lab topology you will be
accessing. Each pod (team of two students) will have three dedicated servers: a server
dedicated for VMware vCenter Server 5.0 and two VMware ESXi 5.0 hosts. Each pod will
leverage shared networking and storage.
Device           Interface        IP address    Mask  VLAN
vCenter Server   Management       10.0.1.50     /24   -
vCenter Server   Production       10.0.14.50    /24   14
ESXi 1 Host      Management       10.0.1.1      /24   -
ESXi 1 Host      vMotion/Storage  10.0.11.1     /24   11
ESXi 2 Host      Management       10.0.1.2      /24   -
ESXi 2 Host      vMotion/Storage  10.0.11.2     /24   11
iSCSI Array      vMotion/Storage  10.0.11.99    /24   11
N1000V-VSM       Management       10.0.1.200    /24   -
N1000V-VSM       Control          -             -     12
N1000V-VSM       Packet           -             -     13
WinServer-1      Production       10.0.14.1     /24   14
WinServer-2      Production       10.0.14.2     /24   14
WinServer-3      Production       10.0.14.3     /24   14
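The addressing plan above can be sanity-checked programmatically. The sketch below is illustrative only (the rows are transcribed from the table above, and the helper name check_plan is ours, not part of any lab tooling); it uses Python's ipaddress module to confirm each address falls inside its VLAN's /24 and that no address is duplicated within a VLAN.

```python
import ipaddress

# Per-VLAN /24 subnets from the lab addressing table
vlan_subnets = {
    11: ipaddress.ip_network("10.0.11.0/24"),  # vMotion/Storage
    14: ipaddress.ip_network("10.0.14.0/24"),  # Production
}

# (device, interface, IP, VLAN) rows transcribed from the table
rows = [
    ("ESXi 1 Host", "vMotion/Storage", "10.0.11.1", 11),
    ("ESXi 2 Host", "vMotion/Storage", "10.0.11.2", 11),
    ("iSCSI Array", "vMotion/Storage", "10.0.11.99", 11),
    ("WinServer-1", "Production", "10.0.14.1", 14),
    ("WinServer-2", "Production", "10.0.14.2", 14),
    ("WinServer-3", "Production", "10.0.14.3", 14),
]

def check_plan(rows, vlan_subnets):
    """Return a list of problems: off-subnet or duplicate addresses."""
    problems, seen = [], {}
    for device, iface, ip, vlan in rows:
        addr = ipaddress.ip_address(ip)
        if addr not in vlan_subnets[vlan]:
            problems.append(f"{device}: {ip} not in VLAN {vlan} subnet")
        if (ip, vlan) in seen:
            problems.append(f"{device}: {ip} duplicates {seen[(ip, vlan)]}")
        seen[(ip, vlan)] = device
    return problems

print(check_plan(rows, vlan_subnets))  # prints [] (no problems found)
```

An empty result means every device's address sits inside its VLAN's subnet and no two devices in the same VLAN share an address.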
Required Resources
These are the resources and equipment that are required to complete this activity:
A computer with an Internet connection, a web browser, and Remote Desktop.
Lab logins assigned by your instructor.
Activity Procedure
Complete these steps:
Note
The Global Knowledge Remote Labs environment is accessed via a web browser.
Each pod (team of two) will have a unique login, which will grant access to the
equipment assigned to your pod.
1. Examine the lab topology diagram in the Visual Objective of this lab to familiarize yourself with the environment before you log in.
2. Your instructor will provide the credentials necessary to log into Global Knowledge Remote Labs. Write them down here for your reference:
There is also a tear-out topology diagram as the last page of the lab guide where you can
note your username and password. This page also contains logins and IP addresses for all
lab devices you may need to reference throughout the labs.
Username: ______________ Password: ______________

Note
When troubleshooting with your instructor you will need to provide them with your pod number, and possibly your credentials.

3. From the classroom computer (or your own computer), launch a web browser. Navigate to the following URL:
http://www.remotelabs.com/
Note
You can access Remote Labs from the classroom, and also from home or a hotel, using the same steps outlined in this lab. For the duration of this class you will have 24-hour access to your equipment.
You should see the Global Knowledge Live Labs login screen.
4. While at a Global Knowledge training center, you will need a wired Internet connection to access www.remotelabs.com. You cannot connect to the site using Global Knowledge's wireless network.
5. Log in using the credentials provided to you by your instructor. Click Log In.
6. Accept any terms and conditions and close any dialog boxes that appear.
7. You should see the Live Labs start page when you have successfully logged in.
8. In the upper left-hand side of the Live Labs page there is a countdown timer. This timer indicates the amount of time remaining in your lab reservation and will provide ample time to complete the labs. Review the time you have left in your pod for the week.
9. Expand + Pod P (where P is your pod number) so you can view information about your pod and its initial setup. DO NOT use the Reset To link.
10. The Topology link is how you connect to your Lab Topology. This is the only link you
should click in this menu.
11. Click the Topology link. This will open an RDP session to the Remote Labs equipment.
Click Open to launch the RDP session, trust connections to the server, and dismiss all other
dialog boxes.
Note
Both students in a single team can log in to the Topology at the same time. One
student can type the commands for a given lab, while the other student shadows on
their own computer.
12. If prompted, click your username and again enter the password provided by your instructor in Step 2, and then click the arrow or press Enter to log in.
13. Once the Remote Desktop window opens, you will see the Remote Lab Panel, with the
Lab Topology tab open. You should see a picture of the Remote Labs topology.
14. There are several clickable icons in the Lab Topology. This is how you will access your
lab devices. Clicking an icon will open a new tab.
15. First, you will connect to the vCenter Server host. Click on the icon labeled vCenter
Server.
16. If you are not automatically logged into the server, click the Ctrl Alt Del icon in the right-hand bar to log in to the server.
17. Click the Administrator user, and enter the password cisco123.
This will open a session to a Windows Server system with a number of applications on the
desktop. This is where you will later install vCenter Server and perform most of the lab
configuration.
18. Go back to the Lab Topology tab at the top of the RDP window.
19. Next, click on the ESXi 1 host 10.0.1.1. Verify you see the following screen.
20. Last, click on the ESXi 2 host 10.0.1.2. Verify you see the same screen.
Activity Verification
You have completed this task when you attain these results:
Understand all the devices and the lab IP addressing scheme
Logged into Global Knowledge Remote Labs using the credentials supplied by your
instructor.
L1
Set Up the VMware vSphere
Environment
Complete this lab activity to practice what you learned in the related lesson.
Activity Objective
In this activity, you will install VMware vCenter Server on your server and configure it to
manage your ESXi hosts. After performing this lab, you should be able to do the following:
Install vCenter Server and the vSphere Client to manage your VMware environment.
Login to vCenter Server via the vSphere Client, create a data center, and add your ESXi
hosts to the data center.
View the default VMware vNetwork standard switches on your ESXi hosts.
Configure access to an iSCSI datastore.
Add a pre-configured Windows virtual machine, and connect the VM's vNICs to vSwitch0 on the ESXi host.
Clone the first virtual machine and place the new VM on the second ESXi host.
Required Resources
These are the resources and equipment required for each pod to complete this activity:
Job Aids
These job aids are available to help you complete the lab activity:
Appendix A: Answer Key
Lab Topology diagram
Activity Procedure
1. From your vCenter Server machine, open the folder named VMware VIM 5.0.0 on your desktop and double-click on the autorun.exe application.
2. The VMware vCenter Installer will open. Click vCenter Server, and then click Install.
3. Wait for the VMware vCenter installation window to start. You will have to wait for the Microsoft Visual C++ and .NET Framework components to install.
6. Click I agree to the terms in the license agreement and then click Next.
8. Enter customer information and leave the License Key field blank, and then click Next.
9. Accept the default Install a Microsoft SQL Server 2008 Express instance selection and click Next. Click Yes on the pop-up window.
Note
Microsoft SQL Server 2008 Express is included with vCenter Server and is intended for small deployments, including labs. Production-scale VMware deployments should create a separate database first, and then install vCenter Server and point it to the database DSN (Data Source Name).
10. Accept the default SYSTEM account, and ensure the fully qualified domain name is listed
as LAB-VCENTER, and then click Next.
11. Click OK to acknowledge if the fully qualified domain name cannot be resolved.
12. Accept the default installation folders and click Next.
13. Accept the default to create a standalone instance of vCenter Server and click Next.
Note
VMware vCenter Linked Mode allows you to view the inventory of multiple instances of vCenter Server from a single vSphere Client session. You will only use a single instance of vCenter Server.
16. Accept the default JVM (Java Virtual Machine) memory size and click Next.
The installation will take 20-30 minutes to complete. Kick off the installation, and then
take a break!
19. Return to the vCenter Installer wizard. Click vSphere Client, and then click Install to
install the client application used to access vCenter Server.
20. Accept the default language of English by clicking OK. The install wizard will start.
23. Click I agree to the terms in the license agreement to accept the EULA, and then click
Next.
24. Enter user Cisco and organization Cisco Systems, and then click Next.
25. Leave the default Destination Folder. Click Next.
26. Click Install. The install process will take approximately 5-10 minutes.
27. Click Finish when the installation has completed.
Activity Verification
You have completed this task when you attain these results:
Installed vCenter Server 5.0
Activity Procedure
Note
Since you are using the vSphere Client on the same physical server that vCenter Server is installed on, and your account has the same credentials as your Windows session, you can connect to localhost using your Windows session credentials.
30. If you receive a warning, click the Install this certificate checkbox and then click
Ignore.
31. Click OK. You are using the VMware evaluation license, which is valid for 60 days.
32. Right-click your vCenter Server instance in the left-hand inventory pane and click New
Datacenter.
35. Enter the first ESXi host's IP address 10.0.1.1. Enter the username root and the password cisco123.
42. Repeat Steps 34 to 41 to add a second ESXi host with IP address 10.0.1.2. Enter the same
username root and password cisco123.
43. Note the progress on the Recent Tasks pane on the bottom. Confirm both ESXi hosts
appear under Lab-Datacenter.
Activity Verification
You have completed this task when you attain these results:
Created a datacenter and added two ESXi hosts using the vSphere Client, connected to
vCenter Server.
Activity Procedure
Complete these steps:
44. Using the navigation bar, navigate to the Hosts and Clusters inventory view Home >
Inventory > Hosts and Clusters, or use the shortcut Ctrl-Shift-H.
Note
There are shortcuts for each inventory view in vCenter Server. Each inventory view controls what is visible in the left-hand inventory pane.
Note
Use Ctrl-Shift-H for Hosts and Clusters, Ctrl-Shift-V for VMs and Templates,
Ctrl-Shift-N for Networking, or Ctrl-Shift-D for Datastores and Datastore
Clusters.
45. If necessary, maximize the inventory view and select your first ESXi host 10.0.1.1, then
click the Configuration tab.
46. In the Hardware section, click the Networking link. A vNetwork standard switch vSwitch0 was created by default when the Global Knowledge labs team installed ESXi during the course setup procedure. On vSwitch0 you should see a VMkernel port, a Virtual Machine port group, and a physical NIC (Network Interface Card) uplink adapter, labeled as vmnic0.
Note
VMkernel ports are used to provide an IP stack to the VMware Hypervisor. They are
used for management, vMotion, Fault Tolerance, and IP-based storage like iSCSI
and NAS.
Note
The VMkernel port created by default is assigned the management IP address of the
ESXi server. This IP address was assigned upon initial configuration of the server by
the labs team. This port is named Management Network by default.
Note
Virtual Machine port groups connect Virtual Machine vNICs to the vSwitch, just like a
regular NIC would connect to a switchport. The VM port group named VM Network
is created by default. Currently, there are no virtual machines (VM) connected to it.
47. Now select your second ESXi host 10.0.1.2 in the left-hand inventory pane. Double-check that there is also a vSwitch0 that contains a similar configuration.
Activity Verification
You have completed this task when you attain these results:
First, you will create a new VMkernel port on vSwitch0 and assign it an IP address on the same subnet as the iSCSI storage target. As a best practice, iSCSI traffic should not be routed, and it should run over a dedicated storage network (preferably physically separate, otherwise logically separated using VLANs).
Second, you will enable the iSCSI software initiator process in the VMkernel Hypervisor. You must use the software initiator since you don't have a dedicated iSCSI HBA; you are using a standard Ethernet NIC, so the VMkernel Hypervisor must handle the iSCSI/TCP encapsulation and processing.
Third, you will verify storage visibility. You will also rename your local storage datastores.
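The "iSCSI traffic should not be routed" guideline can be illustrated with a small sketch. The helper below is hypothetical (not part of the lab tooling) and assumes the /24 prefix used throughout this lab; it shows that the VMkernel storage address and the array sit on one subnet, while the management address would need a router to reach the array.

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefixlen: int = 24) -> bool:
    """True if both addresses fall in the same network of the given prefix length."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefixlen}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefixlen}").network
    return net_a == net_b

# VMkernel storage port on ESXi 1 vs. the iSCSI array: same VLAN 11 subnet, no routing needed
print(same_subnet("10.0.11.1", "10.0.11.99"))   # True
# The management address is on a different subnet, so it would need routing (undesirable for iSCSI)
print(same_subnet("10.0.1.1", "10.0.11.99"))    # False
```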
Activity Procedure
Complete these steps:
48. Ensure you are in the Hosts and Clusters inventory view. Click on your first ESXi host.
49. Click the Configuration tab, and then click Networking under the Hardware pane.
52. The Add Network Wizard will appear. Select VMkernel as the new connection type, and
then click Next. You will use this port to connect to iSCSI storage.
53. In the Network Label field, name the VMkernel port vMotion/Storage. Be sure to enter this Network Label exactly the same on both ESXi hosts.
Enter VLAN 11 under the VLAN ID (Optional) field for both ESXi hosts. Note that the pull-down menu won't show your VLAN, but you can still manually type in 11 as the VLAN ID.
Check the box to Use this port group for vMotion. You will use the port for both IP
storage (iSCSI) and later, vMotion traffic.
Click Next to continue.
Use subnet mask 255.255.255.0 (/24) for both servers. Do not modify the VMkernel
Default Gateway. Click Next.
55. Verify the VMkernel port configuration, and then click Finish to complete.
57. Now that you have a VMkernel port that can talk to the iSCSI storage target in VLAN 11,
you will enable the iSCSI software adapter to speak iSCSI over this network.
58. Under the Configuration tab, select Storage Adapters under the Hardware pane.
Click the Add link to add a new software storage adapter.
59. Leave Add Software iSCSI Adapter selected, and then click OK. Click OK to dismiss
the notification.
60. Click the newly created iSCSI software adapter on top of the Storage Adapters pane, and
then click the Properties link on the lower Details pane.
61. Click the General tab, and then click the Configure button.
62. Ensure the Enabled checkbox is clicked. Enter an (optional) iSCSI Alias of SW-Init-1 for
your first server as shown below, and then SW-Init-2 for your second server. Click OK.
Leave the iSCSI Name as it appears.
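For context, the iSCSI Name shown in this dialog is an IQN (iSCSI Qualified Name) as defined in RFC 3720: the literal prefix iqn., a year-month date, a reversed domain name, and an optional colon-delimited suffix. The sketch below is a simplified structural check, not a full RFC validator, and the sample name is made up for illustration.

```python
import re

# Simplified IQN pattern: "iqn.", a year-month date, a reversed domain,
# and an optional colon-delimited identifier (not a complete RFC 3720 validator)
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:[\x21-\x7e]+)?$")

def looks_like_iqn(name: str) -> bool:
    return bool(IQN_RE.match(name))

# A software-initiator name follows this shape; the suffix here is hypothetical
print(looks_like_iqn("iqn.1998-01.com.vmware:lab-esxi1-12345678"))  # True
# An iSCSI alias like the one set above is not an IQN
print(looks_like_iqn("sw-init-1"))                                   # False
```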
63. Now that the iSCSI software initiator has been enabled, you can connect to iSCSI storage.
Click the Dynamic Discovery tab then click Add to connect to a storage target.
64. Enter IP address 10.0.11.99 as the iSCSI server to connect to. Leave the default iSCSI
Port, click OK, and then click Close to return to the main vCenter screen.
65. You may be prompted to rescan the host bus adapter. If so, select Yes. The rescan will
connect to the iSCSI array over the storage network and discover any available LUNs
(Logical Unit Numbers) and/or datastores. If you are not prompted to rescan, click the
Rescan All... link in the top right-hand side of the Storage Adapters pane, and press OK.
Note the event in the lower Recent Tasks pane.
66. Verify a new datastore called ISCSIVMFS appears in the list of available datastores. You may have to click Rescan All... again if the datastore doesn't appear.
There should also be a datastore called datastore1; this is the local hard drive inside the ESXi host.
67. Right-click on datastore1 and select Rename. On your first host, name this datastore
local1. On your second host, name this datastore local2.
68. Repeat all of Task 4 for your second ESXi server, 10.0.1.2. When you create the VMkernel
port for IP storage for this server, use IP address 10.0.11.2/24. Connect to the same iSCSI
storage target.
Activity Verification
You have completed this task when you attain these results:
You have connected your two ESXi hosts to iSCSI-based storage, and the ISCSIVMFS
datastore is visible to both hosts.
Activity Procedure
Complete these steps:
69. Using the navigation bar, navigate to the Hosts and Clusters inventory view Home >
Inventory > Hosts and Clusters, or use the shortcut Ctrl-Shift-H.
70. Select your first ESXi host 10.0.1.1, and then click the Configuration tab.
71. In the Hardware pane, click the Networking link.
73. Click the network called VM Network, and then click Edit.
Note
VM Network is the default virtual machine port group created when ESXi is installed.
As a best practice, this port group should be renamed.
74. Under the General tab, enter the Network Label Production, overwriting the existing
name. Enter VLAN ID 14, click OK, and then click Close.
75. Click Finish and Close. Repeat all of Task 5 on your second ESXi host. Verify the virtual machine port group was successfully modified. Your vSwitch configuration should match the provided screenshots, one shown for each host.
Activity Verification
You have completed this task when you attain these results:
Modified a virtual machine port group's name and VLAN assignment on vSwitch0 of both ESXi hosts.
Activity Procedure
Complete these steps:
76. Navigate to the Datastore inventory view by clicking Inventory > Datastores, or using
the shortcut Ctrl-Shift-D.
Expand your vCenter and Datacenter icons until you see your datastores listed in the
inventory pane.
77. Right-click on the ISCSIVMFS datastore in the inventory pane, and select Browse
Datastore...
78. In the Datastore Browser window, double-click the WinServer-1 folder. Locate the
WinServer-1.vmx file, right-click it, and then choose Add to Inventory.
80. Select your first ESXi host 10.0.1.1 as the destination for the VM and click Next.
82. Navigate to Inventory > Hosts and Clusters, or use the shortcut Ctrl-Shift-H.
83. Expand host 10.0.1.1 and right-click the newly imported virtual machine WinServer-1,
and then select Edit Settings.
84. Click Network Adapter 1 in the hardware list. On the right-hand side, click the Network
label dropdown and select the newly named Production virtual machine port group (if it is
not already selected). Click OK.
85. Right-click the WinServer-1 VM in the inventory pane and click Open Console.
If VMware asks whether the virtual machine was moved or copied when you power it on, it asks this since you imported an existing VM, instead of creating a new one.
88. After Windows boots, in the menu bar of the console, click VM > Guest > Send
Ctrl+Alt+Del (if you do not see the login screen).
92. Select Internet Protocol (TCP/IP) and then click the Properties button.
93. Ensure the IP address is 10.0.14.1 with a mask of 255.255.255.0, and no default gateway.
Correct the IP address/mask if necessary, and then click OK.
Select Show icon in notification area when connected. Click OK. Then close the
Network Connections window.
94. Next, confirm the Windows Firewall service is off so you can use ping to test VM-to-VM
connectivity once you create more VMs.
From within the VM console, click Start > Settings > Control Panel.
97. Close the Control Panel window and return to the vSphere Client.
Activity Verification
You have completed this task when you attain these results:
Connected the virtual machine's vNIC to the Production virtual machine port group on vSwitch0
Configured the IP address and subnet mask on the vNIC inside the virtual machine
Activity Procedure
Complete these steps:
98. Ensure you are in the Hosts and Clusters inventory view or use the shortcut Ctrl-Shift-H.
101. Choose your second ESXi host 10.0.1.2 and click Next.
102. Choose the iSCSI datastore ISCSIVMFS as the destination for the VM's files. Click Next.
Note
VM disks (represented as .vmdk files) can be thick provisioned, which means the space allocated to a VM's hard drive is shown as used on the datastore, whether there is anything written to it or not. Alternately, a VM disk can be thin provisioned, which leaves the unused space free for other VMs to use until the VM requests more space to write to.
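The space accounting described in the note can be sketched with simple arithmetic. The numbers below are illustrative only, not your actual datastore sizes, and the helper function is ours, not part of vSphere:

```python
def free_after_provision(capacity_gb, disks_gb, written_gb, thin):
    """Datastore free space after provisioning VM disks.

    Thick: each disk's full size is reserved up front.
    Thin: only the data actually written consumes space.
    """
    used = sum(written_gb) if thin else sum(disks_gb)
    return capacity_gb - used

# A 100 GB datastore with two 30 GB disks, 5 GB written to each:
print(free_after_provision(100, [30, 30], [5, 5], thin=False))  # 40 (thick reserves the full 60)
print(free_after_provision(100, [30, 30], [5, 5], thin=True))   # 90 (thin consumes only the 10 written)
```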
103. On the Guest Customization page click Power on this virtual machine after creation,
and then click Next.
104. Select Power on this virtual machine after creation, verify the clone settings, and then
click Finish.
105. Monitor the progress of the cloning task by viewing the Recent Tasks pane at the bottom
of the vSphere Client window. This task may take several minutes to complete.
106. The cloned virtual machine, WinServer-2 should now appear in the Hosts and Clusters
inventory view under ESXi host 10.0.1.2 (you may need to expand the host to see the VM
once the cloning process is complete).
Note
You can also view virtual machines in the VMs and Templates inventory view by navigating to Home > Inventory > VMs and Templates, or using the shortcut Ctrl-Shift-V. The Hosts and Clusters view shows inventory in a physical hierarchy, i.e. which VMs belong to which physical hosts. The VMs and Templates view shows a logical view, without the physical hosts. Folders created in one view will not appear in a different view, allowing a VMware administrator to organize differently based on inventory object type.
107. Right-click the VM WinServer-2 and select Open Console. Log in with username
Administrator and password cisco123.
Note
You may get an IP address and computer name conflict error message because WinServer-2 initially has the same IP address and computer name as WinServer-1, since a clone is a completely identical copy of a VM. Next, you will change the IP address and name of the WinServer-2 VM.
108. From within the Windows VM console window, click Start > Settings > Network
Connections.
110. Select Internet Protocol (TCP/IP) and then click the Properties button.
111. Enter the IP address 10.0.14.2 with a mask of 255.255.255.0, and no default gateway.
Click OK. Select Show icon in notification area when connected. Click OK. Close the
Network Connections window.
112. Still from within the WinServer-2 VM console window, click Start > Settings > Control
Panel, then double-click System.
113. Click the Computer Name tab in the System Properties window.
Click Change, so you can change the computer name to avoid any duplicate names on the
network. Name the computer WinServer-2. Click OK.
114. Click OK to acknowledge that you will have to restart the computer for the changes to take effect.
115. Click OK to close the System Properties window and click Yes to restart the VM.
116. After the VM has rebooted, log in to the virtual machine using Administrator and password cisco123. Ignore the popup that says "Your computer might be at risk."
118. To verify connectivity between your two new VMs, ping the other VM's IP address at 10.0.14.1.
Activity Verification
You have completed this task when you attain these results:
Cloned the virtual machine WinServer-1 located on the first ESXi host to create a
second virtual machine, WinServer-2, located on your second ESXi host.
L2
Install and Configure the Cisco
Nexus 1000V VSMs
Complete this lab activity to practice what you learned in the related lesson.
Activity Objective
In this activity, you will install and perform initial configuration of a primary and
secondary Cisco Nexus 1000V VSM (Virtual Supervisor Module) using VMware vCenter
Server v5.0. After performing this lab, you should be able to perform the following:
Install a primary Cisco Nexus 1000V VSM using the Open Virtualization Format
(OVF) template wizard-based method
Perform the initial configuration of the primary VSM
Establish the SVS connection to vCenter Server
Install a secondary Cisco Nexus 1000V VSM
Required Resources
These are the resources and equipment required for each pod to complete this activity:
One server running VMware vCenter Server v5.0 and VMware vSphere Client v5.0
Two Cisco Nexus 1000V VSM VM appliances
All pods share the following lab core devices (you will not be able to see other pods):
One switch for server networking
One iSCSI-based storage device
Command List
Command                               Description
svs-domain                            Enter SVS domain configuration mode
domain id <number>                    Assign the domain ID shared by the VSM and its VEMs
copy running-config startup-config    Save the running configuration to the startup configuration
show module                           Display the status of the VSM and VEM modules
attach module 2                       Attach to the console of module 2
system switchover                     Switch over from the active VSM to the standby VSM
Job Aids
These job aids are available to help you complete the lab activity.
Lab topology diagram in visual objective section in the beginning of this lab
Lab connections table in the general lab topology information section in the beginning
of the lab guide
Lab IP address and VLAN plan in the general lab topology information section in the
beginning of the lab guide
The Nexus 1000V can also be deployed in routed mode (Layer 3), wherein the hosts
(VEMs) and Nexus 1000V VSM are on different subnets. Note the Nexus 1000V is not a
router. Routed mode simply means the VSM and VEMs are in different VLANs. Another
network device must actually perform the routing between the VLANs.
In this lab, you will deploy the Nexus 1000V in Layer 2 mode, where the VSM and VEMs
will have IP addresses in the same VLAN (subnet).
The Management VLAN is used for system login, configuration, and corresponds to the
mgmt0 interface. The management interface appears as the mgmt0 port on a Cisco Nexus
switch, and is assigned an IP address. Although the management interface is not used to
exchange data between the VSM and VEM, it is used to establish and maintain the
connection between the VSM and VMware vCenter Server.
The Control VLAN and the Packet VLAN are used for communication between the VSM
and the VEMs within a switch domain. The Packet VLAN is used by protocols such as
CDP, LACP, and IGMP. The Control VLAN is used for the following:
VSM configuration commands to each VEM, and their responses
VEM notifications to the VSM, for example a VEM notifies the VSM of the attachment
or detachment of ports to the DVS
VEM NetFlow exports are sent to the VSM, where they are then forwarded to a
NetFlow Collector.
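For comparison, a Layer 3 deployment replaces the Control and Packet VLANs with IP connectivity over a VSM interface. A minimal sketch of such an SVS domain configuration follows; it is not used in this lab, and the choice of mgmt0 (rather than control0) is just one option:

```
svs-domain
  domain id 1
  no control vlan
  no packet vlan
  svs mode L3 interface mgmt0
```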
Activity Procedure
Complete these steps:
1.
2.
Log in to vCenter Server via the vSphere Client using localhost and the Windows session
credentials if you are not already logged in.
3.
Go to the Hosts and Clusters inventory view, or use the shortcut Ctrl-Shift-H.
4.
Select your first ESXi host 10.0.1.1, and then click the Configuration tab.
5.
Click the Networking link, and then click the Properties link next to vSwitch0.
6.
Choose Virtual Machine as the connection type, and then click Next.
7.
8.
Change the name of the virtual machine port group to Management. Do not enter a VLAN
ID. Click Next.
9.
10. Repeat Steps 5 through 8 to create the following two additional virtual machine port
groups and assign the VLAN numbers specified below. Add all virtual machine port groups
to vSwitch0.
Network Label    VLAN
Control          12
Packet           13
11. Repeat Steps 3 to 10 for your second ESXi host, 10.0.1.2. Ensure you enter the Label and
VLAN exactly the same on both hosts.
12. Verify your final vSwitch configurations on both your ESXi hosts match the following
screenshots:
ESXi Host 1 (10.0.1.1):
Activity Verification
You have completed this task when you attain these results:
Added the virtual machines port groups Management, Control, and Packet to vSwitch0
on both ESXi hosts.
The Management port group should not be a member of any VLAN.
The Control port group should be a member of VLAN 12.
The Packet port group should be a member of VLAN 13.
Activity Procedure
Complete these steps:
13. Ensure you are in the Hosts and Clusters inventory view, or use the shortcut Ctrl-Shift-H.
Select your first ESXi host, 10.0.1.1.
14. Click the File menu, and then click Deploy OVF Template. The Deploy OVF Template
wizard opens.
Note
The OVF import wizard is an easy way to deploy the Nexus 1000V as a pre-configured
virtual appliance. Alternatively, you could configure the VSM manually without the use of
a wizard, by creating a VM from scratch and installing the Nexus 1000V NX-OS.
15. Click Browse and navigate to the OVA file in the following location:
N:\Nexus1000v.4.2.1.SV1.4a\Nexus1000v.4.2.1.SV1.4a\VSM\Install\nexus1000v.4.2.1.SV1.4a.ova.
19. Click Accept to accept the End User License Agreement, and then click Next to proceed.
20. Enter the name N1000V-VSM1 and click Next.
21. Ensure Nexus 1000V Installer is selected from the Configuration drop-down menu and
click Next.
24. Make sure the VSM source and destination networks are properly mapped: Control to
Control, Management to Management, Packet to Packet. Click Next.
Note how you can click below the Destination Networks column; a pull-down option exists
to change the selection if needed.
G
Co lo
b
py a
rig l K
n
ht ow
ed l
e
M dg
at e
er
ia
l
27. Click the Power on after deployment checkbox, verify your configuration, and then click
Finish to complete the wizard and begin importing the VSM.
28. Wait for the deployment to complete, and then click Close.
29. Click the N1000V-VSM1 VSM virtual machine in the left-hand inventory pane and click
the Summary tab. You should see the VSM deployed on the ISCSIVMFS datastore, and
connected to the three new networks.
Note
If you accidentally deployed the VSM on the wrong host, simply drag and drop the
VM to the correct host to initiate a vMotion.
30. Right-click the VSM and select Open Console (or click the Open Console icon in the
menu bar). Wait for the VSM to finish booting up, at which point the switch login prompt
will appear. This process can take several minutes.
31. Do not log in to the VSM at this time. Close the console window when the VSM has
finished booting.
Note
If you clicked within the console window, you will need to press CTRL+ALT to
release the cursor out of the focus of the console.
32. From your vCenter Server host, open Internet Explorer and navigate to your VSM at the
URL http://10.0.1.200.
The Nexus 1000V has a web interface where you can access an installer application, the
extension (plug-in) required for vCenter Server, and VEM software.
33. If the browser prompts you to add a security exception, do so. Close the window when you
are done.
34. Right-click on cisco_nexus_1000v_extension.xml. Click Save Target As and save the file
to the desktop.
Close the download dialog when the download is complete. Close Internet Explorer.
35. Go back to the vSphere Client connected to vCenter Server. Click Plug-ins > Manage
Plug-ins from the menu bar.
37. Click Browse and navigate to your desktop. Double-click the XML file you just
downloaded.
38. Click Register Plug-in to bind your VSM to vCenter Server using its unique extension
key.
41. Once your Nexus 1000V plug-in appears as pictured, click Close in the Plug-in Manager
window.
Activity Verification
You have completed this task when you attain these results:
Installed the Cisco Nexus 1000V VSM on your first ESXi host
Activity Procedure
44. Log in to the switch by using username admin and password cisco123.
Note
SSH is the recommended method to access the VSM after you have installed the
Cisco Nexus 1000V.
switch# configure
switch(config)# hostname N1000V-VSM
46. Configure the SVS domain, including the Control and Packet VLANs. SVS domain stands
for Server Virtualization Switch, and represents the 1000V domain configuration.
N1000V-VSM(config)# svs-domain
N1000V-VSM(config-svs-domain)# domain id 1
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
N1000V-VSM(config-svs-domain)# control vlan 12
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
N1000V-VSM(config-svs-domain)# packet vlan 13
Warning: Config saved but not pushed to vCenter Server due to inactive connection!
N1000V-VSM(config-svs-domain)# exit
Note
You get warnings when configuring your SVS domain: Warning: Config saved but
not pushed to vCenter Server due to inactive connection. This is normal because we
have not yet made the connection between the VSM and vCenter Server.
The first line in the configuration specifies the name of the connection. This name
does not have to match the name of your vCenter instance. Multiple connections can
be stored in a single configuration. The second line specifies the protocol to use to
speak to vCenter Server, which is VIM (VMware). By default, VIM runs on SSL over
HTTP (HTTPS). The third and fourth lines specify the IP address of vCenter Server,
and the VMware Datacenter the Nexus 1000V should be a part of. Lastly, the
connect command uses the connection information entered to initiate a connection
to vCenter.
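Putting that description together with this lab's values (connection name LAB-VCENTER, vCenter Server at 10.0.1.50, datacenter Lab-Datacenter), the connection configuration follows this general shape; treat it as a sketch rather than the verbatim lab step:

```
N1000V-VSM(config)# svs connection LAB-VCENTER
N1000V-VSM(config-svs-conn)# protocol vmware-vim
N1000V-VSM(config-svs-conn)# remote ip address 10.0.1.50
N1000V-VSM(config-svs-conn)# vmware dvs datacenter-name Lab-Datacenter
N1000V-VSM(config-svs-conn)# connect
```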
Note
If you are monitoring the Recent Tasks pane in vCenter Server, you can see the
VSM being added to the inventory.
connection LAB-VCENTER:
ip address: 10.0.1.50
remote port: 80
protocol: vmware-vim https
certificate: default
datacenter name: Lab-Datacenter
admin:
max-ports: 8192
DVS uuid: 32 fe 2d 50 62 2c db 59-4e d7 c2 52 c9 aa f5 34
config status: Enabled
operational status: Connected
sync status: Complete
version: VMware vCenter Server 5.0.0 build-455964
Note
Your DVS (Distributed Virtual Switch) universally unique identifier (UUID) will vary.
UUIDs uniquely identify servers. The UUID shown is for this DVS. Each Nexus
1000V DVS will have a different UUID.
---------------------------------------------------------------------
Port     VRF          Status  IP Address       Speed    MTU
---------------------------------------------------------------------
mgmt0    --           up      10.0.1.200       1000     1500

---------------------------------------------------------------------
Port     VRF          Status  IP Address       Speed    MTU
---------------------------------------------------------------------
control0 --           up      --               1000     1500

Mod  Ports  Module-Type                       Model         Status
---  -----  --------------------------------  ------------  -----------
1    0      Virtual Supervisor Module         Nexus1000V    active *

Mod  Sw               Hw
---  ---------------  ----
1    4.2(1)SV1(4a)    0.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  -----------
1    10.0.1.200       NA                                    NA
52. Confirm this VSM is the only Supervisor in the virtual Nexus 1000V chassis.
N1000V-VSM# show system redundancy status
Redundancy role
---------------
      administrative:   standalone
         operational:   standalone

Redundancy mode
---------------
      administrative:   HA
         operational:   None
Note
You have not yet installed a secondary VSM (Supervisor) or any VEMs, but note how
this looks like the output of a physical chassis-based switch's modules, although our
switch is completely virtual.
55. You should see in the Recent Tasks pane at the bottom of the window that a new
Distributed Virtual Switch (dvS) has been added to vCenter.
56. Navigate to the Networking inventory view, or use the shortcut Ctrl-Shift-N.
57. Expand the networking tree in the left pane to view the new vSwitch.
Note
Any ports not specifically placed in a port group will be placed in the Quarantine
port groups. Also notice that a VMware administrator cannot edit the settings of the
1000V dvS or its port groups; all networking configuration is now the responsibility
of the network administrator.
Activity Verification
You have completed this task when you attain these results:
Performed initial configuration of the primary VSM
Registered and connected the Cisco Nexus 1000V VSM to VMware vCenter Server
Activity Procedure
Complete these steps:
58. Return to the Putty SSH session to your VSM at IP address 10.0.1.200.
59. Change the VSM HA role from standalone to primary.
N1000V-VSM# configure
N1000V-VSM(config)# system redundancy role primary
60. Verify the VSM's role is now listed as primary, instead of standalone.
N1000V-VSM(config)# show system redundancy status
Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   None
Note
You MUST change the redundancy role before installing the secondary VSM.
Otherwise, both VSMs will become active and independent control planes as they
are not expecting to see a secondary VSM.
62. Now, you will add another VSM to fill the secondary role. Return to the vCenter Server
screen and go to the Hosts and Clusters view, or use the Ctrl-Shift-H shortcut.
64. Click File, and then click Deploy OVF Template. The Deploy OVF Template wizard
opens.
65. Verify that the OVA nexus-1000v.4.2.1.SV1.4.ova is selected.
66. Click Next to confirm the OVF Template Details.
67. Click Accept to accept the EULA and click Next to proceed.
68. Enter the name N1000V-VSM2 and click Next.
Note
Ensure you have selected the Secondary installer before moving on.
72. Accept the default Disk Format (Thick Provision Lazy Zeroed) and click Next.
73. Make sure the networks are properly mapped and click Next.
74. Configure domain ID 1 and password cisco123.
For the secondary VSM, do not enter an IP address, subnet mask, or gateway, since this
information will be shared between the primary and secondary. Click Next.
75. Click the Power on after deployment checkbox, verify your configuration, and then click
Finish to complete the wizard and begin importing the secondary VSM.
78. Wait for boot up to complete and the switch login message to appear.
Note
The VSM power on process can take several minutes. The primary VSM may cause
the secondary VSM to reboot for HA synchronization.
79. Log in with username admin and password cisco123. If you cannot log in via the VMware
console, move on to the next step; there is an alternate method to connect. Close the
console to the secondary when you have verified you can log in.
Note
The switch prompt should contain (standby) after the hostname, which indicates
this is the standby, or secondary, VSM.
80. Return to the Putty window to your primary VSM. Verify that the secondary VSM now
appears in the output of the show module command.
You must wait for the secondary VSM to completely finish booting before it will show the
ha-standby state. While booting, it will appear as powered-up.
Mod  Ports  Module-Type                       Model         Status
---  -----  --------------------------------  ------------  -----------
1    0      Virtual Supervisor Module         Nexus1000V    active *
2    0      Virtual Supervisor Module         Nexus1000V    ha-standby

Mod  Sw               Hw
---  ---------------  ----
1    4.2(1)SV1(4a)    0.0
2    4.2(1)SV1(4a)    0.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  -----------
1    10.0.1.200       NA                                    NA
2    10.0.1.200       NA                                    NA
Note
You may see a console message on the primary VSM regarding dropped frames
while the secondary VSM boots. This is normal.
Note
You should see a console message indicating the secondary VSM is now online:
switch %PLATFORM-2-MOD_DETECT: Module 2 detected (Serial number
:unavailable) Module-Type Virtual Supervisor Module Model : unavailable
Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   HA

Other supervisor (sup-2)
------------------------
    Supervisor state:   HA standby
      Internal state:   HA standby
82. Use the attach command to connect directly to the secondary VSM.
N1000V-VSM1(config)# attach module 2
Load average:   1 minute: 0.02   5 minutes: 0.25   15 minutes: 0.15
Processes   :   197 total, 1 running
CPU states  :   0.0% user,   1.0% kernel,   99.0% idle
Memory usage:   2075740K total,   865364K used,   1210376K free
                62632K buffers,   469044K cache
85. On the active VSM, initiate a manual switchover to the standby VSM.
N1000V-VSM(config)# system switchover
Note
Once you enter this command, the standby VSM becomes active. The formerly
active VSM reboots, which causes your SSH session to terminate; after the reboot,
that VSM becomes the standby.
87. Examine the connected modules. Eventually, you should see the reloaded VSM reappear,
this time in the standby status, and the VSM in module 2 is now the active VSM.
N1000V-VSM# show module
Mod  Ports  Module-Type                       Model         Status
---  -----  --------------------------------  ------------  -----------
1    0      Virtual Supervisor Module         Nexus1000V    ha-standby
2    0      Virtual Supervisor Module         Nexus1000V    active *

Mod  Sw               Hw
---  ---------------  ----
1    4.2(1)SV1(4a)    0.0
2    4.2(1)SV1(4a)    0.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  -----------
1    10.0.1.200       NA                                    NA
2    10.0.1.200       NA                                    NA

Note
It takes a little while for the primary VSM to reboot and change its status from
powered-up to ha-standby. Wait a minute, and then issue the command again
until the VSM in module 1 shows state ha-standby.
88. Do not proceed until the VSM in module 1 shows a status of ha-standby.
89. Open a Command Prompt window from your vCenter Server desktop.
90. Start a continuous ping to the VSM IP address 10.0.1.200, using the command ping
10.0.1.200 -t.
91. Switchover again to make the VSM in module 1 the active primary VSM again.
N1000V-VSM# system switchover
92. Verify connectivity to the VSM is maintained from the continuous ping. Although a couple
of pings may be lost, there is no interruption to the data plane or to the end application user
on the failure of a VSM in a highly available configuration.
93. Open a Putty session to 10.0.1.200, log in, and verify the active VSM is in module 1 and
that the VSM in module 2 changes from powered-up to ha-standby as follows.
Mod  Ports  Module-Type                       Model         Status
---  -----  --------------------------------  ------------  -----------
1    0      Virtual Supervisor Module         Nexus1000V    active *
2    0      Virtual Supervisor Module         Nexus1000V    ha-standby

Mod  Sw               Hw
---  ---------------  ----
1    4.2(1)SV1(4a)    0.0
2    4.2(1)SV1(4a)    0.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  -----------
1    10.0.1.200       NA                                    NA
2    10.0.1.200       NA                                    NA
Activity Verification
You have completed this task when you attain these results:
Installed a secondary VSM on your second ESXi host
L3
Install and Configure the Cisco
Nexus 1000V VEMs
Complete this lab activity to practice what you learned in the related lesson.
Activity Objective
In this activity, you will install the Cisco Nexus 1000V VEM on each ESXi host, add hosts
to the distributed virtual switch, and configure Cisco Nexus 1000V port profiles. After
performing this lab, you should be able to perform the following:
Create a port profile for the Cisco Nexus 1000V uplinks
Required Resources
These are the resources and equipment required for each pod to complete this activity:
One server running VMware vCenter Server 5.0 and VMware vSphere Client 5.0
Two Cisco Nexus 1000V VSM VM appliances
All pods share the following lab core devices:
One switch for server networking
One iSCSI-based storage device
Command List
Command                               Description
hostname <name>                       Sets the switch hostname.
port-profile [type {ethernet |
vethernet}] <profile_name>            Creates a port profile and enters port-profile configuration mode.
no shutdown                           Activates an interface.
state enabled                         Enables the port profile and pushes it to VMware vCenter Server.
vmware -v                             Displays the ESXi version and build number.
vem status                            Displays the status of the VEM on an ESXi host.
show module                           Displays information about the modules in the virtual chassis.
copy running-config startup-config    Saves the running configuration to the startup configuration.
Job Aids
These job aids are available to help you complete the lab activity.
Lab topology diagram in the visual objectives section in the beginning of this lab
Lab connections table in the general lab topology information section in the beginning
of the lab guide
Lab IP address and VLAN plan in the general lab topology information section in the
beginning of the lab guide
Activity Procedure
Complete these steps:
1.
From the desktop of vCenter Server open a Putty SSH session to your VSM at IP address
10.0.1.200. Log in to the switch with username admin and password cisco123.
2.
N1000V-VSM# configure
N1000V-VSM(config)# hostname N1000V
N1000V(config)#
3.
Return to vCenter Server and note that the VSM has already pushed this configuration change
to vCenter Server. Navigate to the Networking inventory view, or use the shortcut Ctrl-Shift-N.
4.
Return to your Putty session to the VSM and create all the VLANs required for the labs.
N1000V(config)# vlan 11
N1000V(config-vlan)# name vMotion/Storage
N1000V(config-vlan)# vlan 12
N1000V(config-vlan)# name Control
N1000V(config-vlan)# vlan 13
N1000V(config-vlan)# name Packet
N1000V(config-vlan)# vlan 14
N1000V(config-vlan)# name Production
N1000V(config-vlan)# exit
Note
In NX-OS you must explicitly create VLANs. Simply putting a port into a VLAN that
does not exist will not create the VLAN for you. VLAN naming is optional.
5.
VLAN  Name                              Status    Ports
----  --------------------------------  --------  ------------------------
1     default                           active
11    vMotion/Storage                   active
12    Control                           active
13    Packet                            active
14    Production                        active

VLAN  Type
----  -----
1     enet
11    enet
12    enet
13    enet
14    enet

Primary  Secondary  Type             Ports
-------  ---------  ---------------  -----------------------------------------
Activity Verification
You have completed this task when you attain these results:
Changed the hostname of the VSM and noted the change pushed to vCenter
Created and named the required VLANs on the VSM
In vCenter, physical NICs are referred to as vmnics. For example, the first NIC on the
server will be labeled vmnic0.
Activity Procedure
Complete these steps:
6.
7.
Note
The vmware port-group command allows you to present a different port group
name to vCenter. If you type the command vmware port-group with no name, the
name of the port profile will be pushed to vCenter instead (in this case, Host-Uplinks).
The vmware port-group command is required, but an alternate name is not.
Note
The system vlan command is crucial to understand and configure. System VLANs
behave differently than other VLANs in that they will always remain in a forwarding
state. System VLANs will forward traffic even before a VEM connects to the VSM.
System VLANs need to forward traffic in order for the VEM and VSM to
communicate, and must therefore be added to any port profile that will be applied to
uplinks that carry system VLAN traffic.
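Taken together, the two notes above describe an uplink port profile along these lines; this is a sketch consistent with the Host-Uplinks profile verified later in this task, not necessarily the verbatim lab commands:

```
N1000V(config)# port-profile type ethernet Host-Uplinks
N1000V(config-port-prof)# description Uplinks from ESXi hosts to switch
N1000V(config-port-prof)# vmware port-group VMNIC-Uplinks
N1000V(config-port-prof)# switchport mode trunk
N1000V(config-port-prof)# switchport trunk allowed vlan 1,11-14
N1000V(config-port-prof)# system vlan 1,12-13
N1000V(config-port-prof)# no shutdown
N1000V(config-port-prof)# state enabled
```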
8.
9.
N1000V(config)# wr
[########################################] 100%
10. Note the port profile configuration was pushed to vCenter when you entered the state
enabled command.
port-profile Host-Uplinks
type: Ethernet
description: Uplinks from ESXi hosts to switch
status: enabled
max-ports: 32
min-ports: 1
inherit:
config attributes:
switchport mode trunk
switchport trunk allowed vlan 1,11-14
no shutdown
evaluated config attributes:
switchport mode trunk
switchport trunk allowed vlan 1,11-14
no shutdown
assigned interfaces:
port-group: VMNIC-Uplinks
system vlans: 1,12-13
capability l3control: no
capability iscsi-multipath: no
port-profile role: none
port-binding: static
Note
No assigned interfaces are shown because there are no vmnics (physical NICs)
connected to the port profile. So far you have created the port profile and pushed it to
vCenter. From vCenter you will associate a VMs vNIC to a port group, which
attaches a VM to a port profile. This draws a clear line where the network
responsibility ends (create port profiles and push to vCenter), and where the server
teams responsibility begins (associate vNICs or physical vmnics to a port group
the VMware name for a Nexus 1000V port profile).
Activity Verification
You have completed this task when you attain these results:
Configured an uplink port profile on the VSM and verified its presence in the vCenter
Server Networking inventory view.
In this task, you will use the command line method to install the VEM software. The VEM
software is packaged as a VIB: vSphere Installation Bundle.
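As a point of reference, once a VIB is installed (later in this task) you can confirm its presence from the ESXi shell; the grep filter shown here is just one way to narrow the output:

```
~ # esxcli software vib list | grep cisco
```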
Activity Procedure
Complete these steps:
12. From vCenter, go to the Datastores and Datastore Clusters view (Home > Inventory >
Datastores and Datastore Clusters or use the shortcut Ctrl-Shift-D).
13. Right-click the datastore ISCSIVMFS, and select Browse Datastore.
If the file is not present, click the Upload files icon and navigate to the file at the following
location:
N:\Nexus1000v.4.2.1.SV1.4a\Nexus1000v.4.2.1.SV1.4a\Nexus1000v.4.2.1.SV1.4a\VEM\cross_cisco-vem-v131-4.2.1.1.4.1.0-3.0.4.vib.
Note
The VIB will be used to install the VEM software on each of the ESXi hosts. You
must have the VEM VIB version that matches both the VMware vSphere version and
the VSM version. For more information, refer to the Cisco Nexus 1000V and VMware
Compatibility Information documentation for the host software version compatibility
table. The compatibility table lists VIB version cross_cisco-vem-v131-4.2.1.1.4.1.0-3.0.4.vib
for ESX/ESXi version 5.0.0 build 469512 used in the lab environment.
15. In vCenter server in the Hosts and Clusters view, highlight your first ESXi host. Select
Configuration, and select Security Profile in the Software pane.
18. Select Start and stop with host and click Start. Click Yes on the firewall popup message.
Click OK and OK again to close the Firewall Properties window.
19. Open a separate Putty SSH session to your ESXi host at IP address 10.0.1.1.
20. Choose Yes when prompted to confirm the SSH key.
21. Log in to the server using username root and password cisco123. You are now in ESXi
Tech Support Mode.
22. Type the following command to obtain the ESXi version and build number. Compare the
output to the software version compatibility table in the previous steps for ESXi 5.0.0.
~ # vmware -v
VMware ESX 5.0.0 build-469512
Note
The output shows which ESXi version and build number you are running. You can
also view the build number in vCenter by clicking on a host and looking at the top of
the screen to view the IP address, ESXi version, build number, and license level.
23. Navigate to the directory where the Cisco Nexus 1000V VEM VIB file is stored.
Note
You can use the tab key to assist with typing the names. Also note that the
ISCSIVMFS directory name will change to a long set of alphanumeric characters
after you change to its directory - this is expected.
~ # cd /vmfs/volumes/ISCSIVMFS
24. List the contents of the directory (the ISCSIVMFS datastore) and verify the VEM VIB is
visible to the host. This is the same as browsing the datastore from vCenter.
/vmfs/volumes/4bab21a5-e7608223-4c78-003048bdc94f # ls
AddOns.iso
N1000V-VSM1
N1000V-VSM2
WinServer-1
WinServer-2
WindowsXP.iso
cross_cisco-vem-v131-4.2.1.1.4.1.0-3.0.4.vib
cross_cisco-vem-v144-4.2.1.1.5.2.0-3.0.1.vib
25. Install the Cisco Nexus 1000V VEM image into the ESXi host.
Note
Use tab completion on the file name so you do not have to type the entire string.
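On ESXi 5.0, VIB installation is performed with esxcli; a command along these lines (the path reflects this lab's ISCSIVMFS datastore) produces the Installation Result shown below:

```
~ # esxcli software vib install -v /vmfs/volumes/ISCSIVMFS/cross_cisco-vem-v131-4.2.1.1.4.1.0-3.0.4.vib
```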
Installation Result
Message: Operation finished successfully
Reboot Required: false
VIBs Installed: Cisco_bootbank_cisco-vem-v131-esx_4.2.1.1.4.1.0-3.0.4
VIBs Removed:
VIBs Skipped:
Note
This command loads the software onto the ESXi host, loads the kernel modules, and
starts the VEM Agent on the running system.
/vmfs/volumes/4bab21a5-e7608223-4c78-003048bdc94f # cd
~ #
27. Verify the VEM was installed successfully. This command can also display the version
installed by adding the -v option.
~ # vem status -v
Package vssnet-esxmn-ga-release
Version 4.2.1.1.4.1.0-3.0.4
Build 4
Date Wed Jul 27 20:31:30 PDT 2011

Number of PassThru NICs are 0
VEM modules are loaded

Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch0       128        8           128               1500  vmnic0
28. Go back to vCenter and go to the Networking inventory view (Home > Inventory >
Networking), or use the shortcut Ctrl-Shift-N.
29. Right-click on the N1000V switch icon and click Add Host.
30. Select ONLY vmnic1 of ESXi host 10.0.1.1 and choose the VMNIC-Uplinks port group
from the Uplink port group drop-down menu. Click Next.
Warning
DO NOT select any vmnics that are currently in use by any vSwitches. DO NOT
select vmnic0.
It is possible to migrate port groups to the Cisco Nexus 1000V using this wizard,
instead of manually reconfiguring each VM vNIC. Since you have not created VM
port profiles for this purpose yet, you will migrate later.
31. Do not select anything on the Network connectivity page. Click Next.
32. Leave Migrate virtual machine networking UNCHECKED and click Next.
33. You are presented with an overview of the uplink ports that are created for the uplink port
profile. You can have a maximum of 32 physical uplink ports per v5.0 ESXi host. Click Finish.
34. Click the VMNIC-Uplinks port profile icon and then click the Hosts tab to ensure that
your ESXi host is listed as a member of the Nexus 1000V distributed virtual switch. Be
patient, this may take a few seconds to show up.
It is normal to see a warning on your host. When you started an SSH session to the
host, Remote Tech Support mode (SSH access) was enabled. VMware recommends
you only leave this enabled when you need it (initial installation), and then disable it
since direct SSH access to ESXi hosts poses a security threat.
35. Return to the Putty SSH window to your ESXi host and look at the VEM status again. You
should now see the VEM connected to the DVS (Distributed vSwitch) via the vmnic1
uplink adapter.
~ # vem status

VEM modules are loaded

Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch0       128        8           128               1500  vmnic0

DVS Name       Num Ports  Used Ports  Configured Ports  MTU   Uplinks
N1000V         256        12          256               1500  vmnic1

  VSM Port  Admin  Link  State  PC-LTL  SGID  Vem Port
  Eth3/2    UP     UP    FWD    0             vmnic1
37. Inspect the VLANs allowed on vmnic1 (displayed as Eth3/2 on the Nexus 1000V).
~ # vemcmd show port vlans
                            Native  VLAN   Allowed
  LTL   VSM Port    Mode    VLAN    State  Vlans
   18   Eth3/2      T       1       FWD    1,11-14
Note
VEM commands can be run remotely from the VSM NX-OS CLI, for example,
module vem 3 execute vemcmd show port on the VSM would give the same
output.
38. Validate that the VEM's Control VLAN, Packet VLAN, and domain ID match the
VSM configuration.
~ # vemcmd show card
39. Return to the VSM SSH session at 10.0.1.200. Verify the Nexus 1000V sees the ESXi
VEM as a module in the virtual chassis.
N1000V(config)# show module
Mod  Ports  Module-Type                       Model         Status
---  -----  --------------------------------  ------------  -----------
1    0      Virtual Supervisor Module         Nexus1000V    active *
2    0      Virtual Supervisor Module         Nexus1000V    ha-standby
3    248    Virtual Ethernet Module           NA            ok

Mod  Sw               Hw
---  ---------------  ------------------------------------------------
1    4.2(1)SV1(4a)    0.0
2    4.2(1)SV1(4a)    0.0
3    4.2(1)SV1(4a)    VMware ESX 5.0.0 Releasebuild-469512 (3.0)

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  -----------
1    10.0.1.200       NA                                    NA
2    10.0.1.200       NA                                    NA
3    10.0.1.1         44454c4c-5400-104a-8036-c7c04f43344a  10.0.1.1
40. Verify the VSM has learned the MAC address of the VEM via Control VLAN 12.
N1000V(config)# show mac address-table vlan 12
VLAN  MAC Address     Type     Age  Port                Mod
----+---------------+--------+----+-------------------+---
12    0002.3d40.0102  static   0    N1KV Internal Port  3
12    0002.3d80.0102  static   0    N1KV Internal Port  3
12    0050.5687.3524  dynamic  0    Eth3/2              3
12    0050.5687.3527  dynamic  0    Eth3/2              3
Total MAC Addresses: 2
Note
The MAC addresses in the table should match the VEM Control Agent (DPA) MAC
addresses from the previous vemcmd show card output on the ESXi host.
44. Verify that the VEM agent on both of your ESXi hosts is properly communicating with the
VSM.
N1000V(config)# show module
Mod  Ports  Module-Type                       Model       Status
---  -----  --------------------------------  ----------  -----------
1    0      Virtual Supervisor Module         Nexus1000V  active *
2    0      Virtual Supervisor Module         Nexus1000V  ha-standby
3    248    Virtual Ethernet Module           NA          ok
4    248    Virtual Ethernet Module           NA          ok

Mod  Sw             Hw
---  -------------  ------------------------------------------------
1    4.2(1)SV1(4a)  0.0
2    4.2(1)SV1(4a)  0.0
3    4.2(1)SV1(4a)  VMware ESX 5.0.0 Releasebuild-469512 (3.0)
4    4.2(1)SV1(4a)  VMware ESX 5.0.0 Releasebuild-469512 (3.0)

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP   Server-UUID                           Server-Name
---  ----------  ------------------------------------  -----------
1    10.0.1.200  NA                                    NA
2    10.0.1.200  NA                                    NA
3    10.0.1.1    44454c4c-5400-104a-8036-c7c04f43344a  10.0.1.1
4    10.0.1.2    44454c4c-5400-104a-8036-c4c04f43344a  10.0.1.2

Note
Modules 1 and 2 are reserved for VSMs, one active and one standby (like reserved
SUP slots on a Nexus 7000 chassis). Modules 3 and 4 represent each VEM. As
shown at the bottom of the output, each VEM corresponds to a physical ESXi host,
identified by the server IP address.
45. Display the VEM-to-host mapping and license status.
N1000V(config)# show module vem mapping
Mod  Status      UUID                                  License Status
---  ----------  ------------------------------------  --------------
3    powered-up  44454c4c-5400-104a-8036-c7c04f43344a  licensed
4    powered-up  44454c4c-5400-104a-8036-c4c04f43344a  licensed
46. Inspect the Host-Uplinks Ethernet port profile on the VSM.
N1000V(config)# show port-profile name Host-Uplinks
port-profile Host-Uplinks
type: Ethernet
description: Uplinks from ESXi hosts to switch
status: enabled
max-ports: 32
min-ports: 1
inherit:
config attributes:
switchport mode trunk
switchport trunk allowed vlan 1,11-14
no shutdown
evaluated config attributes:
switchport mode trunk
switchport trunk allowed vlan 1,11-14
no shutdown
assigned interfaces:
Ethernet3/2
Ethernet4/2
port-group: VMNIC-Uplinks
system vlans: 1,11-12
capability l3control: no
capability iscsi-multipath: no
port-profile role: none
port-binding: static
Note
You should now see interfaces assigned to the port profile, as you connected vmnic1
on each host to the port profile in vCenter.
Activity Verification
You have completed this task when you attain these results on both ESXi hosts:
Installed and verified the Cisco Nexus 1000V VEM
Assigned vmnic1 to the uplink port group on both hosts to connect the hosts to the
Cisco Nexus 1000V DVS
Activity Procedure
Complete these steps:
47. Create a vEthernet port profile for your production Virtual Machines to use.
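The configuration commands for this step are not reproduced here; a minimal sketch consistent with the verification output in the next step would be:

N1000V(config)# port-profile type vethernet Production-VMs
N1000V(config-port-prof)# vmware port-group
N1000V(config-port-prof)# switchport mode access
N1000V(config-port-prof)# switchport access vlan 14
N1000V(config-port-prof)# no shutdown
N1000V(config-port-prof)# description Production VM network
N1000V(config-port-prof)# state enabled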
48. Verify the configuration of the virtual machine data port profile.
N1000V(config)# show port-profile name Production-VMs
port-profile Production-VMs
type: Vethernet
description: Production VM network
status: enabled
max-ports: 32
min-ports: 1
inherit:
config attributes:
switchport mode access
switchport access vlan 14
no shutdown
evaluated config attributes:
switchport mode access
switchport access vlan 14
no shutdown
assigned interfaces:
port-group: Production-VMs
system vlans: none
capability l3control: no
capability iscsi-multipath: no
port-profile role: none
port-binding: static
Note
No interfaces are shown because none have been assigned yet. Virtual interfaces
are assigned automatically when you add virtual machines to this port group in
vCenter.
49. From the VSM, save your configuration using your CLI alias.
N1000V(config)# wr
[########################################] 100%
50. Return to the Networking inventory view in vCenter. The Production-VMs port profile
should now be visible as a port group on the Cisco Nexus 1000V.
Note
Ethernet port profiles have different icons than vEthernet port profiles. Ethernet
shows a green card to indicate physical NICs connect to it, and vEthernet shows a
blue icon to represent a VM network that vNICs should connect to.
Activity Verification
You have completed this task when you attain these results:
Created a vEthernet port profile for your production VMs to use on the VSM and
verified propagation to vCenter.
Activity Procedure
Complete these steps:
51. Using the navigation bar in vCenter, go to the Networking inventory view (Home >
Inventory > Networking), or use the shortcut Ctrl-Shift-N.
52. Right-click on the N1000V switch icon and select Migrate Virtual Machine Networking.
53. Select the source network Production and destination network Production-VMs
(N1000V). One network is a port group on a standard vSwitch, and the other is a port
group on the Nexus 1000V DVS. Click Next.
54. Click All Virtual Machines to select both of your Windows VMs. Click Next.
55. Verify you have selected the correct source and destination networks, and that both your
Windows VMs will be migrated. Click Finish.
56. Wait for the task to complete in vCenter, and then click on the N1000V switch icon and select
the Configuration tab.
57. Expand the port groups by clicking the plus icon. Verify the Windows VMs are connected
to the Production-VMs port group.
Note
You can click the information icon next to vmnic1, each ESXi host's uplink NIC, for
Cisco Discovery Protocol (CDP) information. You may need to minimize the Pan and
Zoom box to view the icon.
58. Using the navigation bar in vCenter, go to the Hosts and Clusters view (Home > Inventory
> Hosts and Clusters), or use the shortcut Ctrl-Shift-H.
60. Select Network adapter 1 and verify that it is now connected to the Cisco Nexus 1000V
port group Production-VMs (N1000V). Click Cancel.
Note
The port number is also shown underneath the network label. This is the port the VM
is connected to on the Nexus 1000V.
Note
Clicking Switch to advanced settings allows you to connect to other Nexus 1000V
switches, as well as manually specify the port number. Do not modify this now.
Note
Instead of using the Migrate Virtual Machine Networking wizard as you did
earlier in this task, you could also move your virtual machines to a Cisco Nexus
1000V port group by changing the network adapter connection in this window,
although this would have to be done one VM at a time.
61. Display a brief summary of the interfaces on the Nexus 1000V.
N1000V# show interface brief

--------------------------------------------------------------------------------
Port      VRF       Status  IP Address      Speed  MTU
--------------------------------------------------------------------------------
mgmt0     --        up      10.0.1.200      1000   1500

--------------------------------------------------------------------------------
Ethernet    VLAN  Type  Mode   Status  Reason  Speed       Port Ch #
Interface
--------------------------------------------------------------------------------
Eth3/2      1     eth   trunk  up      none    1000        --
Eth4/2      1     eth   trunk  up      none    1000        --

--------------------------------------------------------------------------------
Vethernet   VLAN  Type  Mode    Status  Reason  Speed
--------------------------------------------------------------------------------
Veth1       14    virt  access  up      none    auto
Veth2       14    virt  access  up      none    auto

--------------------------------------------------------------------------------
Port      VRF       Status  IP Address      Speed  MTU
--------------------------------------------------------------------------------
control0  --        up      --              1000   1500
Note
Ports Veth1 and Veth2 connect to your Windows VMs' vNICs, which were just
connected to the Nexus 1000V DVS. The vEth ports were automatically created
when you migrated your VMs to the Production-VMs port group. Whenever a VM
moves, its vEth port moves with it, so the VM will always appear to be
connected to the same vEth port.
62. View the interfaces corresponding to each port profile. This command also lists the
configuration on each port, which is inherited from the port profile configuration.
N1000V# show port-profile expand-interface
port-profile Host-Uplinks
Ethernet3/2
switchport mode trunk
switchport trunk allowed vlan 1,11-14
no shutdown
Ethernet4/2
switchport mode trunk
switchport trunk allowed vlan 1,11-14
no shutdown
port-profile Production-VMs
Vethernet1
port-profile Unused_Or_Quarantine_Veth
63. Explore some interface commands on your vEth interfaces, which are connected to your
Windows VMs.
N1000V(config)# show interface vethernet 1 status
--------------------------------------------------------------------------------
Port   Name                Status  Vlan  Duplex  Speed  Type
--------------------------------------------------------------------------------
Veth1  WinServer-1, Netwo  up      14    auto    auto   --
N1000V(config)# show interface vethernet 1
Vethernet1 is up
    Port description is WinServer-1, Network Adapter 1
    Hardware is Virtual, address is 0050.569c.3db7 (bia 0050.569c.3db7)
    Owner is VM "WinServer-1", adapter is Network Adapter 1
    Active on module 3
    VMware DVS port 100
    Port-Profile is Production-VMs
    Port mode is access
    5 minute input rate 0 bytes/second, 0 packets/second
    5 minute output rate 0 bytes/second, 0 packets/second
    Rx
    20 Input Packets  0 Unicast Packets
    0 Multicast Packets  20 Broadcast Packets
    2135 Bytes
    Tx
    13 Output Packets  0 Unicast Packets
    0 Multicast Packets  13 Broadcast Packets  13 Flood Packets
    780 Bytes
    0 Input Packet Drops  0 Output Packet Drops
N1000V(config)# show interface vethernet 2 status
--------------------------------------------------------------------------------
Port   Name                Status  Vlan  Duplex  Speed  Type
--------------------------------------------------------------------------------
Veth2  WinServer-2, Netwo  up      14    auto    auto   --
N1000V(config)# show interface vethernet 2
Vethernet2 is up
    Port description is WinServer-2, Network Adapter 1
    Hardware is Virtual, address is 0050.56a9.0000 (bia 0050.56a9.0000)
    Owner is VM "WinServer-2", adapter is Network Adapter 1
    Active on module 4
    VMware DVS port 161
    Port-Profile is Production-VMs
    Port mode is access
    5 minute input rate 0 bits/second, 0 packets/second
    5 minute output rate 0 bits/second, 0 packets/second
    Rx
    15 Input Packets  0 Unicast Packets
    0 Multicast Packets  15 Broadcast Packets
    900 Bytes
    Tx
    16 Output Packets  0 Unicast Packets
    0 Multicast Packets  16 Broadcast Packets  16 Flood Packets
    1708 Bytes
    0 Input Packet Drops  0 Output Packet Drops
Note
Note that the owner is listed as the name of the virtual machine, and that the module
the VM is connected to corresponds to the host that is currently running the VM (i.e.
module 3 for ESXi 1 or module 4 for ESXi 2 VEMs).
64. Open a command prompt in the WinServer-1 VM console and confirm the vNIC MAC address
using the command ipconfig /all. Repeat this step on your second virtual machine, WinServer-2.
65. In the WinServer-1 command prompt, start a continuous ping to WinServer-2 at 10.0.14.2.
66. Display the MAC address table for VLAN 14. This should now contain your
Windows VMs' MAC addresses.
N1000V(config)# show mac address-table vlan 14
VLAN  MAC Address     Type     Age  Port    Mod
----+---------------+--------+----+-------+---
14    0050.5687.5a40  static   0    Veth1   3
14    000c.29ca.c69e  dynamic  4    Eth3/2  3
14    0050.5687.5a3f  dynamic  110  Eth3/2  3
14    0050.5687.5a3f  static   0    Veth2   4
14    000c.29ca.c69e  dynamic  4    Eth4/2  4
14    0050.5687.5a40  dynamic  110  Eth4/2  4
Total MAC Addresses: 6
Note
The MAC address of each VM appears twice. This is because each VEM learns the
MAC address of the VM connected to the other VEM on its uplink interface.
67. Shut down the virtual Ethernet port connected to the WinServer-1 VM.
N1000V(config-port-prof)# interface vethernet 1
N1000V(config-if)# shutdown
68. Return to the WinServer-1 VM console and observe that since the vEth port is down, the
pings cannot reach the virtual network and therefore time out.
Note
Because you shut down the vEth port this VM is attached to, you essentially
disconnected the VM from the network. The VM receives a link-down status.
Essentially, it is as if the virtual cable from the vNIC were unplugged from the vEth
port on the Nexus 1000V.
69. Re-enable the virtual Ethernet port that the WinServer-1 VM is connected to.
N1000V(config-if)# no shutdown
N1000V(config-if)# exit
70. Inspect the port statistics on vEth1. Just like a physical Ethernet port, you can view traffic
metrics on a vEth interface for an individual VM.
N1000V(config)# show interface vethernet 1
Vethernet1 is up
Port description is WinServer-1, Network Adapter 1
Hardware is Virtual, address is 0050.569c.3db7 (bia 0050.569c.3db7)
Owner is VM "WinServer-1", adapter is Network Adapter 1
Active on module 3
VMware DVS port 100
Port-Profile is Production-VMs
Port mode is access
5 minute input rate 96 bytes/second, 0 packets/second
5 minute output rate 808 bytes/second, 0 packets/second
Rx
1751332 Input Packets 1743367 Unicast Packets
71. From the 1000V VSM, save your configuration using your CLI alias.
N1000V# wr
[########################################] 100%
72. Verify that you are receiving ping replies again in the console of the WinServer-1 VM.
73. Stop the continuous ping session by closing the command prompt window, or by typing
Ctrl-C.
Activity Verification
You have completed this task when you attain these results:
Migrated VMs WinServer-1 and WinServer-2 to the Cisco Nexus 1000V Production-VMs port group.
Verified connectivity between both virtual machines in the new Production-VMs port
group.
L4
Upgrading the Cisco Nexus
1000V VSM and VEM
Complete this lab activity to practice what you learned in the related lesson.
Activity Objective
In this activity, you will upgrade the Cisco Nexus 1000V VSMs and VEMs. After
performing this lab, you should be able to perform the following:
Upload the VSM upgrade software to the VSM
Upgrade your VSM VMs to NX-OS release 4.2(1)SV1(5.2)
Required Resources
These are the resources and equipment required for each pod to complete this activity:
Two VMware ESXi 5.0 hosts with the Cisco Nexus 1000V VEM installed
One server running VMware vCenter Server 5 and VMware vSphere Client 5.0
Two Cisco Nexus 1000V VSM VM appliances
All pods share the following lab core devices:
One switch for server networking
One iSCSI-based storage device
Command List
Command                                                  Description
show version                                             Display the NX-OS software version currently running
dir bootflash:                                           List the contents of the bootflash: directory
copy tftp://<ipaddress>/<filename> bootflash:<filename>  Copy a file from a TFTP server to bootflash:
show module                                              Display the modules in the virtual chassis
Job Aids
These job aids are available to help you complete the lab activity.
Lab topology diagram in the visual objectives section in the beginning of this lab
Lab connections table in the general lab topology information section in the beginning
of the lab guide
Lab IP address and VLAN plan in the general lab topology information section in the
beginning of the lab guide
In this task, you will upload the new image to the Cisco Nexus 1000V VSM from your
vCenter desktop via a TFTP server.
Activity Procedure
Complete these steps:
1. Verify that the current version running on the Cisco Nexus 1000V is 4.2(1)SV1(4a).
N1000V# show version
Software
loader: version unavailable [last: loader version not available]
kickstart: version 4.2(1)SV1(4a)
system: version 4.2(1)SV1(4a)
kickstart image file is: bootflash:/nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin
kickstart compile time: 7/27/2012 3:00:00 [07/27/2012 12:49:49]
system image file is: bootflash:/nexus-1000v-mz.4.2.1.SV1.4a.bin
system compile time: 7/27/2011 3:00:00 [07/27/2011 13:42:57]
Hardware
cisco Nexus 1000V Chassis ("Virtual Supervisor Module")
Intel(R) Xeon(R) CPU with 2075740 kB of memory.
Processor Board ID T5056B1802D
2. Go to the vCenter Server desktop. You will need to disable the firewall on vCenter so you
can use a TFTP server to get the new NX-OS files to the VSM. Click Start, type
firewall.cpl into the search box, and press Enter.
3. Choose Turn off Windows Firewall for both Home and Public network locations and click OK.
Steps 4 through 9 consist of screenshots that walk through launching the 3CDaemon
application on the vCenter Server.
You will set up 3CDaemon as a TFTP server so the VSM can connect and copy the NX-OS
kickstart and system files to bootflash.
10. Click the TFTP Server tab in the left-hand pane. Ensure the TFTP Server is started. Click
Configure TFTP Server, and then click the icon to change the default
Upload/Download directory.
12. Click OK, and then click Yes to save changes on the page.
13. Leave 3CDaemon open. Go back to the C:\ drive.
14. Right-click the kickstart file, and select Rename. Do not rename the file; simply copy the
filename to the clipboard so you do not have to type it in the NX-OS CLI in a later step.
15. Return to the PuTTY SSH session to your VSM at 10.0.1.200. Examine the contents of
bootflash on the Cisco Nexus 1000V VSM.
N1000V(config)# dir
       19    May 04 05:50:14 2012  .ovfconfigured
    77824    May 07 11:58:11 2012  accounting.log
     4096    May 07 11:14:39 2012  core/
     4096    May 07 11:14:36 2012  log/
    16384    Jan 27 17:00:51 2011  lost+found/
     2521    May 07 11:58:07 2012  mts.log
 19642880    Jan 27 17:01:09 2011  nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin
103922265    Jan 27 17:01:13 2011  nexus-1000v-mz.4.2.1.SV1.4a.bin
    14441    May 07 11:59:06 2012  stp.log.1
             May 07 11:57:46 2012  system.cfg.new
             Jan 27 17:01:48 2011  vdc_2/
             Jan 27 17:01:48 2011  vdc_3/
             Jan 27 17:01:48 2011  vdc_4/
             Jan 27 17:01:20 2011  vnmc-vsmpa.1.2.1a.bin
Note
You should see two NX-OS files with the same version. NX-OS always comes as a
pair of files for any version: system and kickstart. The kickstart image is the kernel
image, and the system file is the NX-OS operating system.
17. Go back to the C:\ drive and copy the name of the new NX-OS system image. The system
file is the one that doesn't have kickstart in the name.
18. Upload the new NX-OS system file to bootflash.
Note
Copying the system file to bootflash may take ~20 minutes. Feel free to take a break
while the file copies.
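The exact transfer command follows the form given in the command list at the start of this lab; with the vCenter TFTP server's address filled in (shown here as a placeholder), it would look along these lines:

N1000V(config)# copy tftp://<tftp-server-ip>/nexus-1000v-mz.4.2.1.SV1.5.2.bin bootflash:nexus-1000v-mz.4.2.1.SV1.5.2.bin

The kickstart image is copied the same way, substituting the kickstart filename.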
19. Verify both the new kickstart and system NX-OS files are available on the VSM bootflash.
N1000V(config)# dir
       19    May 04 05:50:14 2012  .ovfconfigured
    77824    May 07 11:58:11 2012  accounting.log
     4096    May 07 11:14:39 2012  core/
     4096    May 07 11:14:36 2012  log/
    16384    Jan 27 17:00:51 2011  lost+found/
     2521    May 07 11:58:07 2012  mts.log
 19642880    Jan 27 17:01:09 2011  nexus-1000v-kickstart-mz.4.2.1.SV1.4a.bin
 19540480    Sep 28 02:40:37 2012  nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin
103922265    Jan 27 17:01:13 2011  nexus-1000v-mz.4.2.1.SV1.4a.bin
 80806200    Sep 28 02:44:54 2012  nexus-1000v-mz.4.2.1.SV1.5.2.bin
    14441    May 07 11:59:06 2012  stp.log.1
     2569    May 07 11:57:46 2012  system.cfg.new
     4096    Jan 27 17:01:48 2011  vdc_2/
     4096    Jan 27 17:01:48 2011  vdc_3/
     4096    Jan 27 17:01:48 2011  vdc_4/
 20827098    Jan 27 17:01:20 2011  vnmc-vsmpa.1.0.1j.bin
Activity Verification
You have completed this task when you attain these results:
Uploaded Cisco Nexus 1000V system and kickstart files for NX-OS release
4.2(1)SV1(5.2) to the bootflash directory of your VSM.
Activity Procedure
Complete these steps:
Note
Before upgrading a VSM in production, Cisco recommends you close any 1000V
configuration sessions, commit all changes to startup-config, save a backup copy of
the running-config on external storage, and perform a backup of the VSM.
20. Return to the vSphere Client. Open a console session to your WinServer-1 virtual
machine. Start a continuous ping to the WinServer-2 VM at address 10.0.14.2.
21. From the Nexus 1000V VSM, save your configuration before testing and proceeding with
the upgrade. Save your configuration using your CLI alias.
N1000V(config)# wr
[########################################] 100%
22. Examine the impact of upgrading NX-OS to the release 4.2(1)SV1(5.2) kickstart and system
software. It is a best practice to see the impact of the install before actually performing the
install. Be patient as this impact check goes through all of its steps.
N1000V(config)# show install all impact kickstart bootflash:nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin system bootflash:nexus-1000v-mz.4.2.1.SV1.5.2.bin
[####################] 100% --

Module  Install-type  Reason
------  ------------  ------
3       reset
4       reset

Module  Running-Version  ESX Version                                  VSM Compatibility  ESX Compatibility
------  ---------------  -------------------------------------------  -----------------  -----------------
3       4.2(1)SV1(4a)    VMware ESXi 5.0.0 Releasebuild-469512 (3.0)  COMPATIBLE         COMPATIBLE
4       4.2(1)SV1(4a)    VMware ESXi 5.0.0 Releasebuild-469512 (3.0)  COMPATIBLE         COMPATIBLE
Note
The install all command performs an In-Service Software Upgrade (ISSU) on dual
VSMs in a highly available environment. By including the show and impact
keywords, you can determine the potential impact of an upgrade before actually
performing one.
23. Install the new Nexus 1000V image on the VSM. Note that you will have to confirm by typing
y and then Enter once the images have been verified. It is expected that your PuTTY session
will close (fail) at the end of this install. Again, be patient as the install goes through each of
its steps.
N1000V(config)# install all kickstart bootflash:nexus-1000v-kickstart-mz.4.2.1.SV1.5.2.bin system bootflash:nexus-1000v-mz.4.2.1.SV1.5.2.bin
[####################] 100% --

Module  Install-type  Reason
------  ------------  ------
3       reset
4       reset

Module  Running-Version  ESX Version                                  VSM Compatibility  ESX Compatibility
------  ---------------  -------------------------------------------  -----------------  -----------------
3       4.2(1)SV1(4a)    VMware ESXi 5.0.0 Releasebuild-469512 (3.0)  COMPATIBLE         COMPATIBLE
4       4.2(1)SV1(4a)    VMware ESXi 5.0.0 Releasebuild-469512 (3.0)  COMPATIBLE         COMPATIBLE

Do you want to continue with the installation (y/n)? [n] y
24. Go back to the WinServer-1 VM console. Ensure the ping is still successful, without
interruption during the upgrade process.
25. Open a new PuTTY SSH session to your VSM at 10.0.1.200. Log in with username admin
and password cisco123.
It is necessary to open a new session because of the VSM switchover during the upgrade.
Verify the version of the NX-OS software that is now running.
N1000V# show version
26. Verify the VSMs are still in a highly available redundant configuration.
N1000V-VSM(config)# show system redundancy status
Redundancy role
---------------
      administrative:   primary
         operational:   primary

Redundancy mode
---------------
      administrative:   HA
         operational:   HA

Other supervisor (sup-2)
------------------------
    Supervisor state:   HA standby
      Internal state:   HA standby
Activity Verification
You have completed this task when you attain these results:
Upgraded the VSM primary and secondary modules in your Cisco Nexus 1000V Series
Switch to Release 4.2(1)SV1(5.2) software.
Activity Procedure
Complete these steps:
Note
Before performing a VEM upgrade in production, Cisco recommends you are logged
into the VSM CLI, have VMware documentation handy, have not placed the VEM
image in the root host directory (use /tmp instead), and have the following configured
on your upstream (physical) switches: PortFast (STP edge port), BPDU Filtering, and
BPDU Guard.
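Those spanning-tree protections live on the upstream physical switch, not on the Nexus 1000V. On a Cisco IOS switch they would be applied per uplink interface along these lines (the interface name is only an example):

interface GigabitEthernet1/0/1
 description Uplink to ESXi vmnic1
 switchport mode trunk
 spanning-tree portfast trunk
 spanning-tree bpdufilter enable
 spanning-tree bpduguard enable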
27. The VEM software component of the Nexus 1000V in your ESXi hosts is upgraded
separately from the VSM. Up to this point, the only components that have been upgraded are the
VSMs on the Cisco Nexus 1000V (both active and standby).
28. Start by verifying the current version running on the VEM components of the Cisco Nexus
1000V. Note how the VEMs are still on the prior version (4a), unlike the upgraded VSMs
(5.2).
N1000V# show module
Mod  Ports  Module-Type                       Model       Status
---  -----  --------------------------------  ----------  -----------
1    0      Virtual Supervisor Module         Nexus1000V  active *
2    0      Virtual Supervisor Module         Nexus1000V  ha-standby
3    248    Virtual Ethernet Module           NA          ok
4    248    Virtual Ethernet Module           NA          ok

Mod  Sw              Hw
---  --------------  ------------------------------------------------
1    4.2(1)SV1(5.2)  0.0
2    4.2(1)SV1(5.2)  0.0
3    4.2(1)SV1(4a)   VMware ESX 5.0.0 Releasebuild-469512 (3.0)
4    4.2(1)SV1(4a)   VMware ESX 5.0.0 Releasebuild-469512 (3.0)

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP   Server-UUID                           Server-Name
---  ----------  ------------------------------------  -----------
1    10.0.1.200  NA                                    NA
2    10.0.1.200  NA                                    NA
3    10.0.1.1    564d26cf-7fdf-86a8-020d-def2e60ef1f9  10.0.1.1
4    10.0.1.2    564d525c-eb5d-d3a9-1edb-544c059d43af  10.0.1.2
29. To upgrade the VEM, you will need to make the 4.2(1)SV1(5.2) VEM bundle available to
your ESXi hosts.
Since you are not using VMware Update Manager (VUM), a manual upgrade is
necessary. Determine the VEM version required.
Note
Referring to the document entitled Cisco Nexus 1000V and VMware Compatibility
Information, Release 4.2(1)SV1(5.2), this VEM bundle corresponds to the
cross_cisco-vem_v144-4.2.1.1.5.2.0-3.0.1.vib VIB version. You will use the
cross_cisco-vem_v144-4.2.1.1.5.2.0-3.0.1.vib VIB to manually upgrade the VEM
modules.
30. Return to the vSphere Client. Use the navigation bar in vCenter to go to the Hosts and
Clusters view (Home > Inventory > Hosts and Clusters).
31. Choose your first ESXi host 10.0.1.1. In the Summary tab, right-click the ISCSIVMFS
datastore and choose Browse Datastore.
32. Ensure the following file is present in the root directory of the ISCSIVMFS datastore:
cross_cisco-vem_v144-4.2.1.1.5.2.0-3.0.1.vib.
If the file is not present, upload it from the DVD in the N: drive of your vCenter Server
host.
34. Log into the ESXi host with username root and password cisco123.
35. Using the cd command, navigate to the directory /vmfs/volumes/ISCSIVMFS and list the
contents of this directory with ls. Verify that the VEM VIB file is located in this directory.
/vmfs/volumes/4bab21a5-e7608223-4c78-003048bdc94f # ls
AddOns.iso
N1000V-VSM1
N1000V-VSM2
WinServer-1
WinServer-2
WindowsXP.iso
cross_cisco-vem-v131-4.2.1.1.4.1.0-3.0.4.vib
cross_cisco-vem_v144-4.2.1.1.5.2.0-3.0.1.vib
/vmfs/volumes/4bab21a5-e7608223-4c78-003048bdc94f # cd
~ #
37. Return to the PuTTY SSH session to your VSM. Send notification of the VEM upgrade to
vSphere as you would in a standard production environment.
After notification, the administrator has the capability to accept or deny the upgrade, or to
defer to a time when it is more suitable.
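The guide does not reproduce the command itself here; on this NX-OS release the notification is typically issued from the VSM as follows (confirm the exact syntax against your release documentation). The warning that follows is the expected response.

N1000V# vmware vem upgrade notify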
Warning:
Please ensure the hosts are running compatible ESX versions for the upgrade. Refer
to corresponding "Cisco Nexus 1000V and VMware Compatibility Information" guide.
38. Return to your vSphere Client. Use the navigation bar in vCenter to go to the Networking
view (Home > Inventory > Networking).
39. Choose your N1000V switch and click the Summary tab.
40. You should see a configuration issue notifying you that an upgrade for the vDS (Nexus
1000V) is available. This is the result of the upgrade notify command you issued on the
VSM. Click Apply upgrade.
41. The Summary panel displays a new Configuration Issue alerting you that the upgrade is in
progress. Return to the console of the WinServer-1 VM and ensure that the ping is still
running to 10.0.14.2.
Note
The upgrade will show in progress until the network administrator has completed the
Nexus 1000V component upgrade and signaled its completion.
42. You must put your ESXi host into maintenance mode in order to update the VEM software.
Using the navigation bar in vCenter, go to the Hosts and Clusters view (Home >
Inventory > Hosts and Clusters).
You will upgrade the ESXi 1 host (10.0.1.1) first. You must migrate powered-on VMs
running on ESXi 1 onto ESXi 2 before you can place the host in maintenance mode.
Note
If you had enabled VMware DRS (Distributed Resource Scheduler) on a cluster of
servers, DRS would automatically evacuate hosts when DRS detected a host
attempting to enter maintenance mode.
44. Keep the default Change host setting (a vMotion) and click Next.
45. Choose the second ESXi host 10.0.1.2, and then click Next.
48. Repeat the vMotion steps to move the last VM, N1000V-VSM1, off of host 10.0.1.1 onto
host 10.0.1.2. Your first host 10.0.1.1 should not have any running VMs left.
Note
You can also drag and drop VMs to initiate a vMotion from the Hosts and Clusters
inventory view.
49. Right-click ESXi host 10.0.1.1 and choose Enter Maintenance Mode.
51. Verify that the host has successfully entered maintenance mode before continuing.
52. Return to your SSH session to your ESXi host, and perform a manual upgrade of the VEM
module. Again, use Tab to autocomplete the file names.
~ # esxcli software vib install -v /vmfs/volumes/ISCSIVMFS/cross_cisco-vem_v144-4.2.1.1.5.2.0-3.0.1.vib
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Cisco_bootbank_cisco-vem-v144-esx_4.2.1.1.5.2.0-3.0.1
VIBs Removed: Cisco_bootbank_cisco-vem-v131-esx_4.2.1.1.4.1.0-3.0.4
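Optionally, you can also confirm the installed VIB from the host shell; cisco-vem is the name pattern to look for:

~ # esxcli software vib list | grep cisco-vem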
53. On the upgraded host, verify the VEM and VSM versions now match.
~ # vemcmd show version
55. Return to the console of your WinServer-1 VM and verify the continuous ping is
running uninterrupted.
56. From the vSphere Client, right-click the ESXi 1 host you just upgraded and choose Exit
Maintenance Mode.
57. Migrate all the VMs to ESXi 1 at 10.0.1.1 to prepare to upgrade the second host, 10.0.1.2.
59. Once both ESXi hosts' VEMs have been upgraded, vMotion the VMs back to their original
hosts (either drag and drop, or right-click and choose Migrate to change hosts).
Place N1000V-VSM1 and WinServer-1 on 10.0.1.1.
60. Return to the SSH session to your N1000V switch. You should see console messages
alerting you that the VEMs went down and came back up during the upgrade process.
Note
Since the host was in maintenance mode (no running VMs), there was no disruption
to your Virtual Machine traffic. You can verify by checking the continuous ping from
VM WinServer-1.
61. Verify that the upgrade is complete by confirming the versions of the VSMs and VEMs
now all match (5.2).
N1000V# show module
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                Hw
---  ----------------  ------------------------------------------------
1    4.2(1)SV1(5.2)    0.0
2    4.2(1)SV1(5.2)    0.0
3    4.2(1)SV1(5.2)    VMware ESXi 5.0.0 Releasebuild-469512 (3.0)
4    4.2(1)SV1(5.2)    VMware ESXi 5.0.0 Releasebuild-469512 (3.0)

Mod  MAC-Address(es)                          Serial-Num
---  ---------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80   NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80   NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  ------------
1    10.0.1.200       NA                                    NA
2    10.0.1.200       NA                                    NA
3    10.0.1.1         564d26cf-7fdf-86a8-020d-def2e60ef1f9  10.0.1.1
4    10.0.1.2         564d525c-eb5d-d3a9-1edb-544c059d43af  10.0.1.2
63. Using the navigation bar in vCenter, go to the Networking view in vSphere (Home >
Inventory > Networking). Verify that the configuration issue alerting the administrator
that the upgrade is in progress is now gone. The upgrade process is complete.
64. You should see yellow warnings on your hosts. These warnings were triggered when you
connected to your hosts via Putty SSH. VMware recommends disabling SSH access to
hosts for security reasons. You can clear the warnings and re-secure your hosts by
disabling SSH access in the security settings.
65. Click your first host 10.0.1.1, click the Configuration tab, and then click Security Profile
in the Software pane.
66. Click the SSH service, and then click Options. Stop the service, and then click OK to close
both windows.
67. Repeat the process of disabling SSH access on your second ESXi host, 10.0.1.2. You
should not see any warnings on either of the hosts now.
68. From the Nexus 1000V VSM, save your configuration using your CLI alias.
N1000V# wr
[########################################] 100%
Activity Verification
You have completed this task when you attain these results:
Updated both ESXi hosts to NX-OS release 4.2(1)SV1(5.2) software using the VEM
VIB bundle.
L5
Optimize the Cisco Nexus 1000V
Implementation
Complete this lab activity to practice what you learned in the related lesson.
Activity Objective
In this activity, you will add additional uplinks, configure MAC pinning, migrate port
groups from standard vSwitch0 to the distributed Cisco Nexus 1000V, and configure
virtual Port Channels (vPCs). After performing this lab, you should be able to perform the
following:
Required Resources
These are the resources and equipment required for each pod to complete this activity:
Two VMware ESXi 5.0 hosts with the Cisco Nexus 1000V VEM installed
One server running VMware vCenter Server 5.0 and VMware vSphere Client 5.0
Two Cisco Nexus 1000V VSM VM appliances
All pods share the following lab core devices:
One switch for server networking
One iSCSI-based storage device
Command List
Command
Description
state enabled
show module
pinning id <sub-group-id>
port-profile [type
{ethernet | vethernet}]
<profile_name>
state enabled
reload
Job Aids
These job aids are available to help you complete the lab activity.
Lab topology diagram in visual objective section in the beginning of this lab
Lab connections table in the general lab topology information section in the beginning
of the lab guide
Lab IP address and VLAN plan in the general lab topology information section in the
beginning of the lab guide
For redundancy and throughput purposes, the Cisco Nexus 1000V VEM should be
connected to upstream switches using multiple uplinks rather than an individual uplink. If
the upstream switches that you connect your ESXi hosts to can be clustered (vPC, VSS,
VBS stacking), configure Multichassis EtherChannels (MECs) that terminate on both
upstream switches using LACP. For example, configure vPCs from the hosts to the Nexus
7000, 5000, or 2000 platforms. This will provide physical redundancy as well as greater
throughput than an Active/Standby NIC team.
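When the upstream switches do support MEC, the uplink port profile would use an LACP channel-group instead of MAC pinning. A hedged sketch follows; this alternative is not configured in this lab, the profile name is illustrative, and the trunk VLAN list simply mirrors the Host-Uplinks profile used later:

```
! Sketch only: LACP-based uplink profile for clustered upstream switches.
! The matching vPC/port-channel configuration on the upstream switches
! is not shown and would also be required.
N1000V(config)# port-profile type ethernet Host-Uplinks-LACP
N1000V(config-port-prof)# switchport mode trunk
N1000V(config-port-prof)# switchport trunk allowed vlan 1,11-14
N1000V(config-port-prof)# channel-group auto mode active
N1000V(config-port-prof)# no shutdown
N1000V(config-port-prof)# state enabled
```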
If the upstream switches cannot support MEC, use MAC pinning. MAC pinning is a
special port channel configuration on Cisco Nexus 1000V and other Nexus devices that
does not require configuration of a port channel on the upstream switches, and as the name
implies, statically pins VM source MAC addresses to a particular uplink.
In this task, you will modify the Cisco Nexus 1000V uplink port profile to implement
MAC pinning.
Activity Procedure
Complete these steps:
1.
2.
N1000V# configure
Enter configuration commands, one per line. End with CNTL/Z.
N1000V(config)# port-profile type ethernet Host-Uplinks
N1000V(config-port-prof)# channel-group auto mode on mac-pinning
N1000V(config-port-prof)# exit
Activity Verification
You have completed this task when you attain these results:
Activity Procedure
Complete these steps:
3.
In the vCenter Networking inventory view (Ctrl-Shift-N), right-click the N1000V switch
icon and select Manage Hosts.
4.
5.
Leave vmnic1 selected and additionally select vmnic3 under both ESXi hosts.
6.
Choose the port group VMNIC-Uplinks from the drop-down menu by vmnic3 on both
hosts, and then click Next.
Note
VMNIC-Uplinks is the same port profile as Host-Uplinks in the Nexus 1000V.
Recall that you can configure the VMware port group name to display differently than
the port profile name in the Nexus 1000V CLI.
7.
8. Review the new uplink ports that will be added to the VMNIC-Uplinks ethernet port
profile. Click Finish.
9.
10. Wait for the task to complete and return to the Cisco Nexus 1000V VSM Putty session to
verify the uplinks have been added to the dVS. Look at the port channel configuration.
N1000V(config)# show port-channel summary
Flags:  D - Down        P - Up in port-channel (members)
        I - Individual  H - Hot-standby (LACP only)
        s - Suspended   r - Module-removed
        S - Switched    R - Routed
        U - Up (port-channel)
--------------------------------------------------------------------------------
Group Port-       Type     Protocol  Member Ports
      Channel
--------------------------------------------------------------------------------
1     Po1(SU)     Eth      NONE      Eth3/2(P)    Eth3/4(P)
2     Po2(SU)     Eth      NONE      Eth4/2(P)    Eth4/4(P)
11. Look at the port profile and verify you see the new port channel interfaces assigned to the
port profile.
N1000V(config)# show port-profile name Host-Uplinks
port-profile Host-Uplinks
type: Ethernet
description: "Uplink from ESXi hosts to switch"
status: enabled
max-ports: 32
min-ports: 1
inherit:
config attributes:
switchport mode trunk
switchport trunk allowed vlan 1,11-14
channel-group auto mode on mac-pinning
no shutdown
evaluated config attributes:
switchport mode trunk
switchport trunk allowed vlan 1,11-14
channel-group auto mode on mac-pinning
no shutdown
assigned interfaces:
port-channel1
port-channel2
Ethernet3/2
Ethernet3/4
Ethernet4/2
Ethernet4/4
port-group: VMNIC-Uplinks
system vlans: 1,12-13
capability l3control: no
capability iscsi-multipath: no
port-profile role: none
port-binding: static
12. Look at the brief interface output to verify the new ESXi uplink interfaces are visible to the
VSM.
N1000V(config)# show interface brief

--------------------------------------------------------------------------------
Port     VRF          Status IP Address                              Speed   MTU
--------------------------------------------------------------------------------
mgmt0    --           up     10.0.1.200                              1000    1500

--------------------------------------------------------------------------------
Ethernet      VLAN   Type Mode   Status  Reason                Speed     Port
Interface                                                                Ch #
--------------------------------------------------------------------------------
Eth3/2        1      eth  trunk  up      none                  1000      1
Eth3/4        1      eth  trunk  up      none                  1000      1
Eth4/2        1      eth  trunk  up      none                  1000      2
Eth4/4        1      eth  trunk  up      none                  1000      2

--------------------------------------------------------------------------------
Vethernet     VLAN   Type Mode   Status  Reason                Speed
--------------------------------------------------------------------------------
Veth1         14     virt access up      none                  auto
Veth2         14     virt access up      none                  auto

--------------------------------------------------------------------------------
Port     VRF          Status IP Address                              Speed   MTU
--------------------------------------------------------------------------------
control0 --           up     --                                      1000    1500
Note
MAC pinning treats all uplinks coming out of the ESXi host as standalone links and
pins different MAC addresses to each link in a round-robin fashion. This approach
helps ensure that the MAC address of a virtual machine will never be seen on
multiple interfaces on the upstream switches. Therefore, no additional configuration
is required on the upstream switches. Notice that this configuration created two port
channels, one for each ESXi host.
13. Find out to which physical vmnic uplink the MAC address of the WinServer-1 VM is
currently pinned.
N1000V(config)# module vem 3 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port
   18   Eth3/2    UP    UP    FWD       305     1  vmnic1
   20   Eth3/4    UP    UP    FWD       305     3  vmnic3
   49   Veth1     UP    UP    FWD         0     1  WinServer-1.eth0
  305   Po1       UP    UP    FWD         0
Note
14. Find out to which physical vmnic uplink the MAC address of the WinServer-2 VM is
currently pinned.
N1000V(config)# module vem 4 execute vemcmd show port
  LTL   VSM Port  Admin Link  State  PC-LTL  SGID  Vem Port
   18   Eth4/2    UP    UP    FWD       305     1  vmnic1
   20   Eth4/4    UP    UP    FWD       305     3  vmnic3
   49   Veth2     UP    UP    FWD         0     3  WinServer-2.eth0
  305   Po2       UP    UP    FWD         0
Note
N1000V(config)# wr
[########################################] 100%
Activity Verification
You have completed this task when you attain these results:
Assigned additional VMNIC on each ESXi host to the MAC pinning uplink port profile
named Host-Uplinks.
It is recommended to move all interfaces from the vSwitch to the Cisco Nexus 1000V and
thereby completely replace any existing standard vSwitches. One of the key advantages of
the Cisco Nexus 1000V is the segmentation of responsibilities as well as the improved
monitoring and troubleshooting capabilities.
Only by moving all interfaces to the Cisco Nexus 1000V can you ensure that the server
team can fully rely on the network team for network configuration. The network team is
then able to completely manage and troubleshoot both the physical and virtual networks.
In this task, you will migrate the control, packet, and management virtual machine port
groups to the Cisco Nexus 1000V. Because we cannot physically access the hosts, we will
leave the VMkernel ports on the standard vSwitch0 to ensure continuous lab connectivity.
Activity Procedure
Complete these steps:
17. Create a port profile for the Cisco Nexus 1000V Control connection.
18. Create a port profile for the Cisco Nexus 1000V Packet connection.
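Steps 17 and 18 follow the same pattern as the Management profile shown in Step 19. As a sketch of the Control profile, assuming VLAN 12 carries Control traffic (the Host-Uplinks output earlier lists system VLANs 1,12-13, but verify the VLAN numbers against your lab VLAN plan):

```
! Hedged sketch: the VLAN ID (12) is an assumption, not the lab answer.
N1000V(config)# port-profile type vethernet VSM-Control
N1000V(config-port-prof)# description "VSM Control"
N1000V(config-port-prof)# vmware port-group
N1000V(config-port-prof)# switchport mode access
N1000V(config-port-prof)# switchport access vlan 12
N1000V(config-port-prof)# no shutdown
N1000V(config-port-prof)# system vlan 12
N1000V(config-port-prof)# state enabled
N1000V(config-port-prof)# exit
```

The Packet profile in Step 18 would be identical apart from its name, description, and VLAN.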
19. Create a port profile for the Cisco Nexus 1000V Management connection.
N1000V(config)# port-profile type vethernet VSM-Management
N1000V(config-port-prof)# description "VSM Management"
N1000V(config-port-prof)# vmware port-group
N1000V(config-port-prof)# switchport mode access
N1000V(config-port-prof)# switchport access vlan 1
N1000V(config-port-prof)# no shutdown
N1000V(config-port-prof)# system vlan 1
N1000V(config-port-prof)# state enabled
N1000V(config-port-prof)# exit
20. From the vSphere Client in vCenter Server, go to the Networking inventory view (Home
> Inventory > Networking), or use the shortcut Ctrl-Shift-N.
The new port profiles should be visible under the N1000V switch.
21. Right-click the N1000V switch icon and select Migrate Virtual Machine Networking.
22. Select Source Network Control and Destination Network VSM-Control (N1000V).
Click the N1000V switch icon and select the Configuration tab.
26. Expand the port group VSM-Control by clicking the plus icon and validate that the
Control connections of the primary and secondary Cisco Nexus 1000V VSMs appear in the
VSM-Control port group.
27. Right-click N1000V and select Migrate Virtual Machine Networking again.
28. Select Source Network Packet and Destination Network VSM-Packet (N1000V). Click
Next, and then click All Virtual Machines. Click Next, and then click Finish.
29. Wait for the task to complete in vCenter and click N1000V and select the Configuration
tab.
30. Expand the port group VSM-Packet by clicking the plus icon and validate that the Packet
connections of the primary and secondary Cisco Nexus 1000V VSMs appear in the
VSM-Packet port group.
31. Right-click N1000V and click Migrate Virtual Machine Networking again.
33. Wait for the task to complete in vCenter and click N1000V and select the Configuration
tab.
34. Expand the port group VSM-Management by clicking the plus icon and validate that the
Management connections of the primary and secondary Cisco Nexus 1000V VSMs appear
in the VSM-Management port group.
35. Return to the Putty SSH session to your VSM at 10.0.1.200 and verify the successful
migration of your virtual machine port groups to the distributed switch.
N1000V(config)# show interface virtual

--------------------------------------------------------------------------------
Port      Adapter        Owner                        Mod  Host
--------------------------------------------------------------------------------
Veth1     Net Adapter 1  WinServer-1                  3    10.0.1.1
Veth2     Net Adapter 1  WinServer-2                  4    10.0.1.2
Veth3     Net Adapter 1  N1000V-VSM1                  3    10.0.1.1
Veth4     Net Adapter 1  N1000V-VSM2                  4    10.0.1.2
Veth5     Net Adapter 3  N1000V-VSM1                  3    10.0.1.1
Veth6     Net Adapter 3  N1000V-VSM2                  4    10.0.1.2
Veth7     Net Adapter 2  N1000V-VSM1                  3    10.0.1.1
Veth8     Net Adapter 2  N1000V-VSM2                  4    10.0.1.2

--------------------------------------------------------------------------------
Port Profile     Port      Adapter        Owner
--------------------------------------------------------------------------------
Host-Uplinks     Po1
                 Po2
                 Eth3/2    vmnic1         10.0.1.1
                 Eth3/4    vmnic3         10.0.1.1
                 Eth4/2    vmnic1         10.0.1.2
                 Eth4/4    vmnic3         10.0.1.2
Production       Veth1     Net Adapter 1  WinServer-1
                 Veth2     Net Adapter 1  WinServer-2
Control          Veth3     Net Adapter 1  N1000V-VSM1
                 Veth4     Net Adapter 1  N1000V-VSM2
Packet           Veth5     Net Adapter 3  N1000V-VSM1
                 Veth6     Net Adapter 3  N1000V-VSM2
Management       Veth7     Net Adapter 2  N1000V-VSM1
                 Veth8     Net Adapter 2  N1000V-VSM2
N1000V(config)# wr
[########################################] 100%
Activity Verification
You have completed this task when you attain these results:
Migrated the Control, Packet, and Management VM port groups to the Cisco Nexus
1000V
Activity Procedure
37. Go to your virtual machine WinServer-1 console and ensure a continuous ping to
WinServer-2 at IP 10.0.14.2 is running. If not, start one from the Command Prompt.
38. Using the navigation bar in vCenter, go to the Hosts and Clusters view (Home > Inventory
> Hosts and Clusters) or use the shortcut Ctrl-Shift-H.
39. Drag and drop the virtual machine WinServer-1 from your first ESXi host to your second
ESXi host.
40. Step through the vMotion wizard by leaving the default settings. Click Next.
42. While the vMotion task is completing, return to your virtual machine WinServer-1 console
and ensure that the ping session is still successful.
43. Perform another vMotion to move WinServer-1 back to the first ESXi host 10.0.1.1.
44. Leave the ping session active and return to the VSM and reload both VSMs to demonstrate
that VEMs continue forwarding packets while the control plane is reloading.
N1000V(config)# reload
This command will reboot the system. (y/n)?  [n] y
45. Return to your virtual machine WinServer-1 console and observe that the continuous ping
session continues.
Activity Verification
You have completed this task when you attain these results:
Performed successful vMotion of your virtual machines
L6
Configuring Security Features
Complete this lab activity to practice what you learned in the related lesson.
Activity Objective
In this activity, you will configure security features on the Cisco Nexus 1000V Distributed
Virtual Switch. After performing this lab, you should be able to perform the following:
Configure access control lists (ACLs)
Configure port security
Required Resources
These are the resources and equipment required for each pod to complete this activity:
Two VMware ESXi 5.0 hosts with the Cisco Nexus 1000V VEM installed
One server running VMware vCenter Server 5.0 and VMware vSphere Client 5.0
Two Cisco Nexus 1000V VSM VM appliances
All pods share the following lab core devices:
One switch for server networking
One iSCSI-based storage device
Command List
Command
Description
ip access-list <name>
statistics per-entry
ip port access-group
<name> {in | out}
copy running-config
startup-config
switchport port-security
switchport port-security
mac-address <address>
switchport port-security
mac-address <address>
show port-security
interface
feature dhcp
Enables DHCP.
ip dhcp snooping
ipconfig /release
ipconfig /renew
copy running-config
startup-config
Job Aids
These job aids are available to help you complete the lab activity.
Lab topology diagram in visual objective section in the beginning of this lab
Lab connections table in the general lab topology information section in the beginning
of the lab guide
Lab IP address and VLAN plan in the general lab topology information section in the
beginning of the lab guide
Activity Procedure
Complete these steps:
1.
Using the navigation bar in vCenter, navigate to the Hosts and Clusters inventory view
(Home > Inventory > Hosts and Clusters), or use the shortcut Ctrl-Shift-H.
2.
3.
4.
5.
6.
Leave the disk format Same format as source and click Next.
7.
8.
9.
Wait for the task to complete. Right-click the new virtual machine WinServer-3 and select
Edit Settings.
10. Click Network Adapter 1, and change the Network label from Production (N1000V) to
Production. Click OK.
Note
We will use this setup to simulate a remote PC running network scans to our internal
virtual machines in the next task.
11. Open a console to virtual machine WinServer-3 and click Power On.
12. Log in by using username Administrator and password cisco123.
13. Since Windows will report an error about a duplicate IP address, change the IP address of
the Local Area Connection to 10.0.14.3 and assign a subnet mask of 255.255.255.0. Leave
the other values empty.
14. Making sure that you are within the WinServer-3 VM console window, go to the desktop,
right-click Computer > Properties.
15. Click the Computer Name tab in the System Properties window and click the Change
button. Change the computer name to WinServer-3. Click OK.
16. Click OK to close the System Properties window and click Yes to reboot the virtual
machine.
17. After the reboot, log in to the virtual machine and open the Command Prompt.
18. Verify successful connectivity to the other virtual machines by pinging the IP addresses
10.0.14.1 and 10.0.14.2.
Activity Verification
You have completed this task when you attain these results:
Cloned the virtual machine WinServer-1 to create another virtual machine WinServer-3
Activity Procedure
Complete these steps:
19. From vCenter Server, select Inventory, then Hosts and Clusters, or use the shortcut Ctrl-Shift-H.
21. Under the Hardware tab, select the CD/DVD component and select the Browse button in
the Datastore ISO file section.
22. Browse the ISCSIVMFS datastore, select the AddOns.iso file, and then click OK.
23. Make sure to select the Connected checkbox at the top of the page and click OK. This will
mount the CD for your VM.
24. Repeat mounting this ISO file to the CD drive on both the WinServer-2 and WinServer-3
VMs.
Activity Procedure
Complete these steps:
25. On the virtual machine WinServer-3, open the superscan4.exe utility on the newly-mounted D:\ drive image, under the superscan4 folder.
26. In the Start IP field, enter 10.0.14.1 (virtual machine WinServer-1) and in the End IP
field, enter 10.0.14.2 (virtual machine WinServer-2).
27. Click the arrow button next to the Start IP and End IP fields to populate the range in the
box.
28. Click the play button in the lower left-hand corner to start the scan.
29. Scroll up in the results pane to view the open ports discovered on both virtual machines.
Activity Verification
You have completed this task when you attain these results:
You have scanned WinServer-1 and WinServer-2 and discovered open ports on these
virtual machines
Activity Procedure
32. Create an IP-based access list named ProtectVM that blocks access to the open ports
discovered in the previous task and permits all other IP traffic.
Note
For each ACL that you configure, you can specify whether the device maintains
statistics for the ACL by using the command statistics per-entry. If an ACL is
applied to multiple interfaces, the maintained rule statistics are the sum of packet
matches (hits) on all the interfaces on which the ACL is applied.
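The exact deny entries depend on the open ports your SuperScan run reported, so the following is only a sketch; the TCP port numbers shown are illustrative placeholders, not the lab answer:

```
! Hedged sketch: replace ports 135/139/445 with the ports you discovered.
N1000V(config)# ip access-list ProtectVM
N1000V(config-acl)# statistics per-entry
N1000V(config-acl)# deny tcp any any eq 135
N1000V(config-acl)# deny tcp any any eq 139
N1000V(config-acl)# deny tcp any any eq 445
N1000V(config-acl)# permit ip any any
```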
33. Apply the access list to the port profile Production as an outbound rule.
N1000V(config-acl)# port-profile Production-VMs
N1000V(config-port-prof)# ip port access-group ProtectVM out
N1000V(config-port-prof)# exit
Note
As the vEth interfaces of WinServer-1 and WinServer-2 leverage the port profile
Production, adding the access list to this port profile will automatically update all
associated vEth interfaces and assign the access list to them. Here the concept of
port profiles comes in very handy in simplifying the work. Alternatively, you can also
apply an access list directly to vEth interfaces.
Note
The directions in and out of an ACL have to be viewed from the perspective of the
VEM, not the virtual machine. Thus in specifies traffic flowing into the VEM from the
VM, while out specifies traffic flowing out from the VEM to the VM.
Configured on interfaces:
Vethernet1 Vethernet2 -
Active on interfaces:
Vethernet1 Vethernet2 -
35. Return to the virtual machine WinServer-3 and click the play button in the lower left-hand
corner of the SuperScan window to repeat the scan process. You should not see any open
ports after this scan now that the access-list is in place.
36. Return to your VSM and display the access list configuration.
As the result of your access list rules, access to open ports on your virtual machines
has been blocked. You should see your hit counters (line matches) have increased.
The actual match counts may vary.
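The display command itself is not printed in this excerpt; on the Nexus 1000V it is presumably along the lines of:

```
! Shows the ProtectVM ACL with its per-entry match (hit) counters,
! which are maintained because statistics per-entry was configured.
N1000V(config)# show ip access-lists ProtectVM
```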
N1000V(config)# wr
[########################################] 100%
Activity Verification
You have completed this task when you attain these results:
You have configured, applied, and verified an access list to block access to open ports
on your virtual machines in the Cisco Nexus 1000V vEth port profile Production-VMs.
Activity Procedure
Complete these steps:
39. Through the VSM, determine the MAC address of the virtual machine connected to
vEthernet 1. Also note which VM is connected to vEthernet 1. Your output may vary.
N1000V(config)# show interface vethernet 1
Vethernet1 is up
Port description is WinServer-1, Network Adapter 1
Hardware: Virtual, address: 0050.569c.3db7 (bia 0050.569c.3db7)
Owner is VM "WinServer-1", adapter is Network Adapter 1
Active on module 3
VMware DVS port 160
Port-Profile is Production
Port mode is access
5 minute input rate 0 bits/second, 0 packets/second
5 minute output rate 0 bits/second, 0 packets/second
Rx
3239 Input Packets 1077 Unicast Packets
0 Multicast Packets 2162 Broadcast Packets
530187 Bytes
Tx
4595 Output Packets 723 Unicast Packets
0 Multicast Packets 3872 Broadcast Packets 3872 Flood Packets
598194 Bytes
28 Input Packet Drops 0 Output Packet Drops
Note
With the interface connected to the VM shut down, the continuous ping should fail to
this VM.
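The intermediate steps are not shown in this excerpt. Judging from the note above and the command list for this lab, they presumably shut down the vEthernet interface and enable port security on it, roughly:

```
! Sketch of the elided steps: shut the interface, then enable port security.
N1000V(config)# interface vethernet 1
N1000V(config-if)# shutdown
N1000V(config-if)# switchport port-security
```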
42. Configure a static entry for the MAC address of the virtual machine using the address you
recorded in Step 39.
N1000V(config-if)# switchport port-security mac-address xxxx.xxxx.xxxx
(where xxxx.xxxx.xxxx is the MAC address of the virtual machine connected to
interface Vethernet1)
43. Bring up the interface. The continuous ping should begin responding again.
N1000V(config-if)# no shutdown
Note
44. Return to vCenter and open a console to virtual machine WinServer-1 and log in by using
username Administrator and password cisco123
45. Click Start > Settings > Network Connections > Local Area Connection.
46. Right-click the NIC and select Properties. Click the Configure button.
48. Click NetworkAddress, select Value, change the MAC address to 123456123456 and
click OK.
Note
The expected behavior is that changing the MAC address should trigger a security
violation to occur and the virtual Ethernet interface to be placed in error-disabled
mode, but this does not actually happen. We will investigate the reason in the next
step.
50. Ensure you inspect the Vethernet interface your WinServer-1 VM is connected to. In the
sample output, WinServer-1 is connected to Vethernet 1.
N1000V(config-if)# show running-config interface vethernet 1
interface Vethernet1
inherit port-profile Production-VMs
description WinServer-1,Network Adapter 1
vmware dvport 160 dvswitch uuid "fa ca 0e 50 c2 0a 21 91-df 49 4a f8 4d d6 c0f6"
vmware vm mac 1234.5612.3456
Here you can see that the MAC address change appears in the configuration but the
port security commands have disappeared due to the default behavior of the VSM.
The VSM removes all manual configurations on a Vethernet interface when the
corresponding port profile of that interface is changed or reassigned to the port.
51. Prevent the manual configuration of virtual Ethernet interfaces from being deleted.
N1000V(config-if)# no svs veth auto-config-purge
53. After enabling the vEthernet interface, you should get an error message and the interface
should become error-disabled.
N1000V %ETHPORT-2-IF_DOWN_ERROR_DISABLED: Interface Vethernet1 is down (Error
disabled. Reason:error)
N1000V %ETH-PORT-SEC-2-ETH_PORT_SEC_SECURITY_VIOLATION_MAX_MAC_VLAN: Port
Vethernet1 moved to SHUTDOWN state as host 1234.5612.3456 is trying to access the
port in vlan 14
Port Security                  : Enabled
Port Status                    : Secure Down
Violation Mode                 : Shutdown
Aging Time                     : 0 mins
Aging Type                     : Absolute
Maximum MAC Addresses          : 1
Total MAC Addresses            : 1
Configured MAC Addresses       : 1
Sticky MAC Addresses           : 0
Security violation count       : 1
Total Secured Mac Addresses in System (excluding one mac per port) : 0
Max Addresses limit in System (excluding one mac per port) : 8192
55. Remove the MAC address from the network adapter of the virtual machine. Set
NetworkAddress back to Not Present and click OK.
56. The vEthernet interface should come up again. The continuous ping should be successful.
N1000V(config-if)# show port-security interface vethernet 1
Port Security                  : Enabled
Port Status                    : Secure UP
Violation Mode                 : Shutdown
Aging Time                     : 0 mins
Aging Type                     : Absolute
Maximum MAC Addresses          : 1
Total MAC Addresses            : 1
Configured MAC Addresses       : 1
Sticky MAC Addresses           : 0
Security violation count       : 0
57. Remove the port security commands and save your configuration.
N1000V(config-if)# no switchport port-security
N1000V(config-if)# no switchport port-security mac-address xxxx.xxxx.xxxx
(where xxxx.xxxx.xxxx is the actual MAC address of the virtual machine connected to
interface veth1)
N1000V(config-if)# exit
N1000V(config)# wr
[########################################] 100%
Activity Verification
You have completed this task when you attain these results:
You have configured port security on a vEth interface
Activity Procedure
59. Move your WinServer-3 VM to the Cisco Nexus 1000V. Right-click the WinServer-3
VM and click Edit Settings.
60. Click Network Adapter 1 and select Production-VMs (N1000V) from the Network
Label dropdown box. Click OK.
61. On the VSM, first enable DHCP Snooping globally on the Nexus 1000V.
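The global commands are not printed in this excerpt; based on the command list for this lab, they are presumably:

```
! Enable the DHCP feature, then turn snooping on globally.
! The per-VLAN line is an assumption (VLAN 14 is the Production VLAN here).
N1000V(config)# feature dhcp
N1000V(config)# ip dhcp snooping
N1000V(config)# ip dhcp snooping vlan 14
```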
62. Enable IP Source Guard for the port profile Production-VMs, which is used by your
Windows virtual machines.
N1000V(config)# port-profile Production-VMs
N1000V(config-port-prof)# ip verify source dhcp-snooping-vlan
Note
IP Source Guard is a traffic filter that permits IP traffic only when the IP address and
MAC address of each packet matches the IP and MAC address bindings in the
DHCP snooping table or a configured static entry.
63. Verify the DHCP snooping configuration, including the IP Source Guard configuration.
N1000V(config-port-prof)# show running-config dhcp
version 4.2(1)SV1(5.2)
feature dhcp
interface Vethernet1
ip verify source dhcp-snooping-vlan
interface Vethernet2
ip verify source dhcp-snooping-vlan
interface Vethernet9
ip verify source dhcp-snooping-vlan
64. A DHCP server has been set up on the vCenter Server system. You will configure your
Windows VMs to obtain IP addresses from the DHCP server. Go to the console of the
WinServer-1 VM.
65. Click Start > Settings > Network Connections, right-click Local Area Connection, and
click Properties.
66. Click Internet Protocol (TCP/IP), and then click the Properties button.
67. Select Obtain an IP address automatically to request an IP address from the DHCP
server. Click OK, then click Close.
68. Open a command prompt and type ipconfig. Verify the Windows machine has been
assigned an IP address from the DHCP server from the pool 10.0.14.51-10.0.14.53 /24.
69. Repeat Steps 64-68 on your WinServer-2 VM. DO NOT modify your WinServer-3
VM.
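A verification step appears to be elided before the binding display that follows; on the Nexus 1000V it is presumably the IP Source Guard binding command:

```
! Displays the IP/MAC bindings that IP Source Guard enforces per interface.
N1000V# show ip verify source
```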
IP-address   Mac-address        Vlan
----------   -----------------  ----
10.0.14.51   00:50:56:9c:3d:b7  14
10.0.14.52   00:50:56:a9:00:00  14

Note
IP source guard was enabled in the Production-VMs port profile, so IP source guard
has also been enabled on the WinServer-3 VM (bound to Vethernet9 in the sample
show command output). However, this VM is still using a static IP address, so it does
not have an entry in the switch's DHCP snooping table for IP source guard to verify.
71. Verify successful ping connectivity between WinServer-1 and WinServer-2 (these VMs
should now have IPs 10.0.14.51 and 10.0.14.52).
This should succeed because both VMs have valid entries in the DHCP snooping table,
which allows IP source guard to permit the traffic.
72. Try to ping WinServer-3 at 10.0.14.3 from one of your other VMs. This should fail because
the VM has a static IP address, thus never sent a DHCP request that could be snooped by
the Nexus 1000V.
73. Examine the DHCP snooping binding table.
IpAddress     LeaseSec  Type        VLAN  Interface
---------     --------  ----------  ----  ----------
10.0.14.51    690320    dhcp-snoop  14    Vethernet1
10.0.14.52    690410    dhcp-snoop  14    Vethernet2

Note
IP Source Guard is a per-interface traffic filter that permits IP traffic only when the IP
address and MAC address of each packet matches the IP and MAC address
bindings of dynamic or static IP source entries in the DHCP snooping binding table.
IP packets to or from WinServer-3 are dropped because there is no entry in the
binding table for WinServer-3.
74. Return to WinServer-3 and configure it to obtain an IP address from the DHCP server by
repeating Steps 7 through 11 of this task.
75. Examine again the DHCP snooping binding table.
IpAddress     LeaseSec  Type        VLAN  Interface
---------     --------  ----------  ----  ----------
10.0.14.51    690320    dhcp-snoop  14    Vethernet1
10.0.14.52    690410    dhcp-snoop  14    Vethernet2
10.0.14.53    691194    dhcp-snoop  14    Vethernet9
76. Now try to ping WinServer-3 again from one of your other VMs. This should now succeed
since there is an entry for this VM in the DHCP snooping table.
77. Disable the DHCP feature on the Nexus 1000V. This will remove all related configuration,
including the IP Source Guard configuration from the Production-VMs port profile.
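The command for this step is not printed here; disabling the feature enabled earlier in this task would be:

N1000V(config)# no feature dhcp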
78. Verify the IP Source Guard configuration (ip verify source dhcp-snooping-vlan) is gone
from your running-configuration. The grep should not return any results.
N1000V(config)# show running-config | grep dhcp-snooping-vlan
N1000V(config)#
79. Save your configuration.
N1000V(config)# wr
[########################################] 100%
80. Return to your virtual machine consoles and configure static IP addresses again. Click
Start > Settings > Network Connections, right-click Local Area Connection and select
Properties. Click Internet Protocol (TCP/IP) and click the Properties button.
81. Select Use the following IP address and assign IPs to your Windows VMs according to
the provided table. Do not include a default gateway or any DNS servers.
VM Name       IP Address   Mask
WinServer-1   10.0.14.1    255.255.255.0
WinServer-2   10.0.14.2    255.255.255.0
WinServer-3   10.0.14.3    255.255.255.0
82. Verify you can ping between all of your Windows VMs.
Activity Verification
You have completed this task when you attain these results:
Configured DHCP snooping and IP Source Guard
Configured the Windows VMs to obtain their IP addresses from a DHCP server
Verified IP Source Guard operation
Removed DHCP snooping and IP Source Guard configuration and returned the
Windows VMs to their normal static IP configuration
Lab 7
Configuring Quality of Service
Complete this lab activity to practice what you learned in the related lesson.
Activity Objective
In this activity, you will configure Quality of Service (QoS) features on the Cisco Nexus
1000V. After performing this lab, you should be able to perform the following:
Use the network testing tool Iperf to generate network traffic
Configure classification policies
Configure policing
Required Resources
These are the resources and equipment required for each pod to complete this activity:
Two VMware ESXi 5.0 hosts with the Cisco Nexus 1000V VEM installed
One server running VMware vCenter Server 5 and VMware vSphere Client 5.0
Two Cisco Nexus 1000V VSM VM appliances
All pods share the following lab core devices:
One switch for server networking
One iSCSI-based storage device
Command List
Command                                 Description
ip access-list <name>                   Creates an IPv4 access list and enters ACL configuration mode
statistics per-entry                    Enables statistics collection for each entry in the ACL
police <options>                        Configures policing for a class in a policy map
bandwidth percent <percentage>          Allocates a minimum guaranteed bandwidth percentage to a queuing class
copy running-config startup-config      Saves the running configuration to the startup configuration
Job Aids
These job aids are available to help you complete the lab activity.
Lab topology diagram in the visual objectives section in the beginning of this lab
Lab connections table in the general lab topology information section in the beginning
of the lab guide
Lab IP address and VLAN plan in the general lab topology information section in the
beginning of the lab guide
Activity Procedure
2.
Open a command prompt and verify that the Iperf utility is present on the server by typing
iperf -h. This should present you with the help text for the Iperf utility.
Client/Server:
  -f, --format        [kmKM]      format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval      #           seconds between periodic bandwidth reports
  -l, --len           #[KM]       length of buffer to read or write (default 8 KB)
  -m, --print_mss                 print TCP maximum segment size (MTU - TCP/IP header)
  -o, --output        <filename>  output the report or error message to this specified file
  -p, --port          #           server port to listen on/connect to
  -u, --udp                       use UDP rather than TCP
  -w, --window        #[KM]       TCP window size (socket buffer size)
  -B, --bind          <host>      bind to <host>, an interface or multicast address
  -C, --compatibility             for use with older versions; does not send extra msgs
  -M, --mss           #           set TCP maximum segment size (MTU - 40 bytes)
  -N, --nodelay                   set TCP no delay, disabling Nagle's Algorithm
  -V, --IPv6Version               Set the domain to IPv6

Server specific:
  -s, --server
  -D, --daemon
  -R, --remove

Client specific:
  -b, --bandwidth     #[KM]
  -c, --client        <host>
  -d, --dualtest
  -n, --num           #[KM]
  -r, --tradeoff
  -t, --time          #
  -F, --fileinput     <name>
  -I, --stdin
  -L, --listenport    #
  -P, --parallel      #

Miscellaneous:
  -h, --help
  -v, --version
The TCP window size option can be set by the environment variable
TCP_WINDOW_SIZE. Most other options can be set by an environment variable
IPERF_<long option name>, such as IPERF_BANDWIDTH.
Report bugs to <dast@nlanr.net>
Note
Iperf is a commonly used network-testing tool that can create TCP and UDP data
streams and measure the throughput of the network that is carrying them. If the Iperf
utility is not present on the server, ask the instructor for assistance.

3. Start Iperf on WinServer-2. Use the Real Time Transport Protocol (RTP) UDP port 16384
as the destination port.
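The command for this step is not printed; following the Iperf command style used later in this lab (the D:\Iperf path is taken from those examples), a UDP server on port 16384 would be started with:

D:\Iperf> iperf -s -u -p 16384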
4.
5.
Open a command prompt and use the Iperf client to connect to the Iperf service on
WinServer-2 using the IP address 10.0.14.2. Use the UDP port 16384 as destination port.
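Based on the client command shown later in this lab, this connection would be made with:

D:\Iperf> iperf -c 10.0.14.2 -u -p 16384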
Activity Verification
You have completed this task when you attain these results:
Activity Procedure
7.
Create a class map of type QoS named RTP and configure this class map to match the RTP
traffic with a port range 16384 to 32767.
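The commands for this step are not printed; the class map as it appears later in this lab's running configuration is:

N1000V(config)# class-map type qos match-all RTP
N1000V(config-cmap-qos)# match ip rtp 16384-32767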
8.
9.
Create a policy map of type QoS named VMQoS and associate the class map RTP with it.
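The commands for this step are not printed; based on the policy map shown later in the running configuration, it would be created with:

N1000V(config-cmap-qos)# policy-map type qos VMQoS
N1000V(config-pmap-qos)# class type qos RTP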
10. Set high priority markings for RTP traffic. Use CoS value 5 and DSCP value EF (which
corresponds to the decimal value 46).
N1000V(config-pmap-c-qos)# set cos 5
N1000V(config-pmap-c-qos)# set dscp ef
Note
A common QoS principle is to classify and mark packets as close to the edge of the
network as possible. The objective of this task is to set the layer 2 CoS and layer 3
DSCP marking in the packets for the traffic generated by your virtual machines. This
allows other QoS policies for the uplink connections either on the VSM or for
upstream switches to act on the markings without a need to reclassify the packets
using access lists.
11. All other traffic will have low priority markings. Set the CoS value to 0 and DSCP value to
Default (which corresponds to the decimal value 0).
N1000V(config-pmap-c-qos)# class class-default
N1000V(config-pmap-c-qos)# set cos 0
N1000V(config-pmap-c-qos)# set dscp default
13. Apply the policy map VMQoS of type QoS to the port profile Production-VMs in the input
direction.
N1000V(config-pmap-c-qos)# port-profile Production-VMs
N1000V(config-port-prof)# service-policy type qos input VMQoS
14. Verify that the policy map is assigned and evaluated in the port profile configuration.
N1000V(config-port-prof)# show port-profile name Production-VMs
port-profile Production-VMs
type: Vethernet
description: "Production VM Network"
status: enabled
max-ports: 32
min-ports: 1
inherit:
config attributes:
switchport mode access
switchport access vlan 14
ip port access-group ProtectVM out
ip verify source dhcp-snooping-vlan
service-policy type qos input VMQoS
no shutdown
evaluated config attributes:
switchport mode access
switchport access vlan 14
ip port access-group ProtectVM out
ip verify source dhcp-snooping-vlan
interface Vethernet1
service-policy type qos input VMQoS
interface Vethernet2
service-policy type qos input VMQoS
interface Vethernet9
service-policy type qos input VMQoS
18. Repeat the connection tests performed in Task 1. Use the Iperf client to connect to the Iperf
service on WinServer-2 using the IP address 10.0.14.2. Use the UDP port 16384 as
destination port.
D:\Iperf> iperf -c 10.0.14.2 -u -p 16384
19. Examine the policy map on the interface that connects to WinServer-1.
N1000V(config-port-prof)# show policy-map interface vethernet 1
Global statistics status :
enabled
Vethernet1
VMQoS
enabled
Class-map (qos):
RTP (match-all)
894 packets
Match: ip rtp 16384-32767
set cos 5
set dscp ef
Class-map (qos):
class-default (match-any)
29 packets
set cos 0
set dscp default
N1000V(config-port-prof)# exit
N1000V(config)# copy run start
[########################################] 100%
Activity Verification
You have completed this task when you attain these results:
Defined a class map of type QoS for RTP traffic
Defined a policy map of type QoS to mark traffic generated by your virtual machines
Verified the operation of the classification and marking policy through testing using
Iperf
Activity Procedure
Complete these steps:
22. Break out of the current Iperf session by issuing Ctrl-C. Start an Iperf server using the FTP
control port 21 as destination port using the default window size of 64 KB.
Ctrl-C
D:\Iperf> iperf -s -p 21
24. Use the Iperf client to connect to the Iperf service on WinServer-2 using the TCP port 21
as destination port using the default window size.
D:\Iperf> iperf -c 10.0.14.2 -p 21
26. Configure an access list named FTP that matches TCP ports 21 and 20 for either the source
or the destination port. Enable statistics gathering for the access list.
N1000V(config)# ip access-list FTP
N1000V(config-acl)# permit tcp any any eq 21
N1000V(config-acl)# permit tcp any eq 21 any
N1000V(config-acl)# permit tcp any any eq 20
N1000V(config-acl)# permit tcp any eq 20 any
N1000V(config-acl)# statistics per-entry
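The command that creates the FTP class map (around step 28) is not shown in this excerpt; based on the running configuration shown later in this lab, it would be:

N1000V(config)# class-map type qos match-all FTP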
29. Configure this class map to match the traffic permitted by access list FTP.
N1000V(config-cmap-qos)# match access-group name FTP
31. Add the class map FTP to your existing policy map VMQoS.
N1000V(config-cmap-qos)# policy-map type qos VMQoS
N1000V(config-pmap-qos)# class type qos FTP
32. Configure a 1-rate, 2-color policer that allows 1Mbps traffic and drops packets exceeding
this bandwidth limit.
N1000V(config-pmap-c-qos)# police 1 Mbps conform transmit violate drop
Note
The default Bc (committed burst) value is 200 milliseconds of traffic at the configured
rate.
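As a quick check, the default burst at this rate works out to:

Bc = 1 Mbps x 200 ms = 200,000 bits = 25,000 bytes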
Note
TCP has automatic recovery from dropped packets, which it interprets as congestion
on the network. The sender reduces its sending rate for a certain amount of time,
and then tries to find out if the network is no longer congested by increasing the rate
again subject to a ramp-up. This is known as the slow-start algorithm and is the
reason that the transmission rate is below the configured policing rate of 1 Mbps.
37. Examine the policy map on the interface that connects to WinServer-1.
N1000V(config-pmap-c-qos)# show policy-map interface vethernet 1
Global statistics status :
enabled
Vethernet1
VMQoS
enabled
RTP (match-all)
Class-map (qos):
FTP (match-all)
491 packets
Match: access-group FTP
police cir 1 mbps bc 200 ms
conformed 601188 bytes, 0 bps action: transmit
violated 134746 bytes, 0 bps action: drop
Class-map (qos):
class-default (match-any)
7 packets
set cos 0
set dscp default
38. Mark FTP packets with the DSCP value AF11 (which corresponds to the decimal value
10).
N1000V(config-pmap-c-qos)# set dscp af11
39. Modify the policer to transmit 1 Mbps of traffic with the original marking of AF11 and,
instead of dropping packets above this limit, mark them down to the DSCP value AF13
(which corresponds to the decimal value 14) using the system-defined default table map
pir-markdown-map.
N1000V(config-pmap-c-qos)# police 1 Mbps conform transmit violate set dscp dscp
table pir-markdown-map
version 4.2(1)SV1(5.2)
qos statistics
class-map type qos match-all FTP
match access-group name FTP
class-map type qos match-all RTP
match ip rtp 16384-32767
table-map cir-markdown-map
default copy
from 10,12 to 12
from 18,20 to 20
from 26,28 to 28
from 34,36 to 36
table-map pir-markdown-map
default copy
from 10,12 to 14
from 18,20 to 22
from 26,28 to 30
from 34,36 to 38
policy-map type qos VMQoS
class RTP
set cos 5
set dscp 46
class FTP
police cir 1 mbps bc 200 ms conform transmit violate set dscp dscp table pir-markdown-map
set dscp 10
class class-default
set cos 0
set dscp 0
interface Vethernet1
service-policy type qos input VMQoS
interface Vethernet2
service-policy type qos input VMQoS
interface Vethernet9
service-policy type qos input VMQoS
interface port-channel1
priority-flow-control mode auto
interface port-channel2
priority-flow-control mode auto
Note
The transmission rate is back to approximately the original value, with the difference
that now all packets above the CIR rate of 1 Mbps are marked down to AF13.
N1000V(config-pmap-c-qos)# end
N1000V# copy run start
[########################################] 100%
Activity Verification
You have completed this task when you attain these results:
Defined a class map of type QoS for FTP traffic based on an access list
Added the class map to your existing policy map to mark and police FTP traffic
Verified the operation of the classification, marking, and policing through testing using
Iperf
Activity Procedure
Complete these steps:
45. Create a class map of type queuing named Control.
N1000V# configure
N1000V(config)# class-map type queuing match-all Control
46. Configure this class map to match the predefined protocol type n1k_control, which
automatically matches VSM control traffic. There are similar traffic classifications for the
other required 1000V networks.
N1000V(config-cmap-qos)# match protocol n1k_control
47. Create a class map of type queuing named Packet to match VSM packet traffic.
N1000V(config-cmap-qos)# class-map type queuing match-all Packet
N1000V(config-cmap-qos)# match protocol n1k_packet
48. Create a class map of type queuing named Management to match Cisco VSM or VMware
management traffic.
N1000V(config-cmap-qos)# class-map type queuing match-any Management
N1000V(config-cmap-qos)# match protocol n1k_mgmt
N1000V(config-cmap-qos)# match protocol vmw_mgmt
Note
Type match protocol v? to view other NetFlow traffic classifications for VMware
traffic.
49. Create a class map of type queuing named vMotion_FT to match VMware vMotion or
VMware fault tolerance traffic.
N1000V(config-cmap-qos)# class-map type queuing match-any vMotion_FT
N1000V(config-cmap-qos)# match protocol vmw_vmotion
N1000V(config-cmap-qos)# match protocol vmw_ft
50. Create a class map of type queuing named Storage to match VMware NFS or VMware
iSCSI traffic.
N1000V(config-cmap-qos)# class-map type queuing match-any Storage
N1000V(config-cmap-qos)# match protocol vmw_nfs
N1000V(config-cmap-qos)# match protocol vmw_iscsi
52. Create a policy map of type queuing named CBWFQ and associate the class map Control
with it.
N1000V(config-cmap-qos)# policy-map type queuing CBWFQ
N1000V(config-pmap-qos)# class type queuing Control
53. Set the minimum guaranteed bandwidth for this traffic class to 5 percent of the total
available bandwidth.
N1000V(config-pmap-c-qos)# bandwidth percent 5
Note
Control traffic should be considered the most important traffic in a Cisco Nexus
1000V Series network. The configured value of 5 percent is an example and does
not reflect a fixed value for every Cisco Nexus 1000V installation.
54. Add the class map Packet to the policy map and set the minimum guaranteed bandwidth
for this traffic class to 5 percent of the total available bandwidth.
N1000V(config-pmap-c-qos)# class type queuing Packet
N1000V(config-pmap-c-qos)# bandwidth percent 5
Note
Packet traffic transports selected packets to the VSM for processing. The bandwidth
required for packet interface is extremely low, and its use is intermittent. The
configured value of 5 percent is an example and does not reflect a fixed value for
every Cisco Nexus 1000V installation.
55. Add the class map Management to the policy map and set the minimum guaranteed
bandwidth for this traffic class to 5 percent of the total available bandwidth.
N1000V(config-pmap-c-qos)# class type queuing Management
N1000V(config-pmap-c-qos)# bandwidth percent 5
Note
Management traffic usually has low bandwidth requirements, but should be treated
as high-priority traffic. The configured value of 5 percent is an example and does not
reflect a fixed value for every Cisco Nexus 1000V installation.
56. Add the class map vMotion_FT to the policy map and set the minimum guaranteed
bandwidth for this traffic class to 10 percent of the total available bandwidth.
N1000V(config-pmap-c-qos)# class type queuing vMotion_FT
N1000V(config-pmap-c-qos)# bandwidth percent 10
Note
When VMware vMotion is initiated, it usually generates a burst of data over a period
of 10 to 60 seconds. VMware vMotion is not bandwidth sensitive. When this type of
traffic is faced with bandwidth that is lower than line rate, the duration of the virtual
machine move event is extended based on the amount of bandwidth available.
Despite the popularity of VMware vMotion as a feature, VMware vMotion traffic can
usually be considered of medium priority relative to other traffic types. The
configured value of 10 percent is an example and does not reflect a fixed value for
every Cisco Nexus 1000V installation.
57. Add the class map Storage to the policy map and set the minimum guaranteed bandwidth
for this traffic class to 15 percent of the total available bandwidth.
N1000V(config-pmap-c-qos)# class type queuing Storage
N1000V(config-pmap-c-qos)# bandwidth percent 15
Note
IP storage traffic must be lossless and receive priority over other traffic. The
configured value of 15 percent is an example and does not reflect a fixed value for
every Cisco Nexus 1000V installation.
59. Apply the policy map CBWFQ of type queuing to the uplink port profile Host-Uplinks in
the output direction.
N1000V(config-pmap-c-qos)# port-profile type ethernet Host-Uplinks
N1000V(config-port-prof)# service-policy type queuing output CBWFQ
60. Verify that the policy map is assigned and evaluated in the port profile configuration.
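The verification command is not printed here; by analogy with the earlier verification of the Production-VMs profile, it would be:

N1000V(config-port-prof)# show port-profile name Host-Uplinks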
Note
Weighted fair queuing only works with ESX 4.1 and later because it makes use of the
new Network I/O Control feature VMware introduced in version 4.1. It provides
support for 64 queues/resource pools per host and is only supported on egress
uplink ports.
enabled
port-channel1
Class-map (queuing):
Control (match-all)
Match: protocol n1k_control
bandwidth percent 5
queue dropped pkts : 0
Class-map (queuing):
Packet (match-all)
Match: protocol n1k_packet
bandwidth percent 5
queue dropped pkts : 0
Class-map (queuing):
Management (match-any)
Match: protocol n1k_mgmt
Match: protocol vmw_mgmt
bandwidth percent 5
queue dropped pkts : 0
Class-map (queuing):
vMotion_FT (match-any)
Match: protocol vmw_vmotion
Match: protocol vmw_ft
bandwidth percent 10
queue dropped pkts : 0
Class-map (queuing):
Storage (match-any)
Match: protocol vmw_nfs
Match: protocol vmw_iscsi
bandwidth percent 15
queue dropped pkts : 0
port-channel2
Class-map (queuing):
Packet (match-all)
Match: protocol n1k_packet
bandwidth percent 5
queue dropped pkts : 0
Class-map (queuing):
Management (match-any)
Match: protocol n1k_mgmt
Match: protocol vmw_mgmt
bandwidth percent 5
queue dropped pkts : 0
Class-map (queuing):
vMotion_FT (match-any)
Match: protocol vmw_vmotion
Match: protocol vmw_ft
bandwidth percent 10
queue dropped pkts : 0
Class-map (queuing):
Storage (match-any)
Match: protocol vmw_nfs
Match: protocol vmw_iscsi
bandwidth percent 15
queue dropped pkts : 0
N1000V(config-port-prof)# exit
N1000V(config)# copy run start
[########################################] 100%
Activity Verification
You have completed this task when you attain these results:
Configured class maps of type queuing to match critical traffic in your Cisco Nexus
1000V environment using predefined protocols
Configured a policy map of type queuing that allocates minimum guaranteed bandwidth
for the traffic classes
Assigned the policy map to the uplink port profile to implement class-based weighted
fair queuing
Lab 8
Configuring Management Features
Complete this lab activity to practice what you learned in the related lesson.
Activity Objective
In this activity, you will configure management features on the Cisco Nexus 1000V. After
performing this lab, you should be able to perform the following:
Configure and verify AAA
Required Resources
These are the resources and equipment required for each pod to complete this activity:
Two VMware ESXi 5.0 hosts with the Cisco Nexus 1000V VEM installed
One server running VMware vCenter Server 5 and VMware vSphere Client 5.0
Two Cisco Nexus 1000V VSM VM appliances
All pods share the following lab core devices:
One switch for server networking
One iSCSI-based storage device
Command List
The table describes the commands that are used in this activity.
Command                                                     Description
show radius-server                                          Displays the RADIUS server configuration
server {<ipv4-address> | <server-name>}                     Adds a server to a RADIUS server group
source-interface <interface-type> <interface-id>            Specifies the source interface for RADIUS packets
use-vrf <vrf-name>                                          Specifies the VRF used to reach the RADIUS servers
radius-server deadtime <minutes>                            Sets the number of minutes to wait before retesting a server that was declared dead
show role                                                   Displays the user roles configured on the switch
username <user-name> password <password> role <role-name>   Creates a local user account and assigns it a role
where [detail]                                              Displays the current CLI context and username
Job Aids
These job aids are available to help you complete the lab activity.
Lab topology diagram in the visual objectives section in the beginning of this lab
Lab connections table in the general lab topology information section in the beginning
of the lab guide
Lab IP address and VLAN plan in the general lab topology information section in the
beginning of the lab guide
Activity Procedure
2.
Note
Your vCenter Server with IP address 10.0.1.50 is set up as a RADIUS server using
Microsoft Internet Authentication Service (IAS). Cisco Nexus 1000V also supports
TACACS+.
3.
4. Configure a RADIUS server group named RadiusSG and add your server to it.
5. Specify the management interface as source interface and management VRF to be used to
reach the RADIUS server.
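The CLI for steps 4 and 5 is not printed here; a sketch built from the commands in this lab's command list (the shared secret is a placeholder):

N1000V(config)# radius-server host 10.0.1.50 key <shared-secret>
N1000V(config)# aaa group server radius RadiusSG
N1000V(config-radius)# server 10.0.1.50
N1000V(config-radius)# source-interface mgmt0
N1000V(config-radius)# use-vrf management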
6.
7. Manually send a test message to the RADIUS server group to confirm the availability of
the server. Use the username radius and password cisco123 for authentication.
Note
The RADIUS server is set up with these user credentials. You should get a
successful authentication message "user has been authenticated" before you
proceed to the next step.
8. Configure periodic RADIUS server monitoring. Configure the test username radius and
password cisco123 for authentication and set the idle timer to three minutes.
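The CLI for steps 7 and 8 is not printed here; a sketch, assuming standard NX-OS test and monitoring syntax:

N1000V# test aaa group RadiusSG radius cisco123
N1000V(config)# radius-server host 10.0.1.50 test username radius password cisco123 idle-time 3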
Note
The default idle timer value is 0 minutes. When the idle time interval is 0 minutes, the
Cisco Nexus 1000V does not perform periodic RADIUS server monitoring.
9. Display the RADIUS server configuration.
10.0.1.50:
available for authentication on port:1812
available for accounting on port:1813
RADIUS shared secret:********
idle time:3
test user:administrator
test password:********
Note
Specifies the number of minutes to wait before sending a test packet to a RADIUS
server that was declared dead.
Activity Verification
You have completed this task when you attain these results:
Configured a RADIUS server group
Confirmed availability of the RADIUS server group
Activity Procedure
Note
The local user database will be used when the RADIUS server is down and fails to
respond.
17. Open a new Putty SSH session to your VSM and log in with the RADIUS server user
credentials radius and password cisco123. This should succeed.
18. Close the Putty session.
19. On your remote lab server click Start > Administrative Tools > Services.
20. Locate the Network Policy Server service. Right-click the service and select Stop.
21. Open once again a Putty SSH session to your VSM and try to log in with the RADIUS
server user credentials radius and password cisco123. This should fail since the RADIUS
service is no longer running and the radius user does not exist locally on the VSM.
login as: radius
Nexus 1000v Switch
Using keyboard-interactive authentication.
Password: cisco123
Access denied
22. You should be able to log in using the local user database with the username admin and
password cisco123. You should also see a message informing you the AAA server was
unreachable, so local authentication is performed. This works for user admin since they
exist locally on the VSM, whereas user radius is defined on the vCenter server machine.
login as: admin
Nexus 1000v Switch
Using keyboard-interactive authentication.
Password: cisco123
Monitoring Statistics
Time in previous state: 0 hrs, 56 min, 9 sec
Number of times dead: 1
Total time in dead state: 0 hrs, 6 min, 9 sec
Authentication Statistics
failed transactions: 1
successful transactions: 2
requests sent: 4
requests timed out: 2
responses with no matching requests: 0
responses not processed: 0
responses containing errors: 0
Accounting Statistics
failed transactions: 0
successful transactions: 0
requests sent: 0
requests timed out: 0
responses with no matching requests: 0
responses not processed: 0
responses containing errors: 0
Activity Verification
You have completed this task when you attain these results:
Configured and verified RADIUS-based authentication for administrative access to
your VSM
Configured and verified local authentication for administrative access to your VSM
when the RADIUS server is not available
Activity Procedure
Complete these steps:
27. Examine the predefined user roles of the Cisco Nexus 1000V.
Role: network-admin
  Description: Predefined network admin role has access to all commands
  on the switch
  -------------------------------------------------------------------
  Rule    Perm    Type        Scope    Entity
  -------------------------------------------------------------------
  1       permit  read-write

Role: network-operator
  Description: Predefined network operator role has access to all read
  commands on the switch
  -------------------------------------------------------------------
  Rule    Perm    Type        Scope    Entity
  -------------------------------------------------------------------
  1       permit  read
Note
The role network-admin allows full access to all commands on the Cisco Nexus
1000V. The role network-operator allows read-only access to all commands. These
two predefined roles can be assigned to user accounts but cannot be modified.
28. Create a new user account with the username readonly and try to assign the password
readonly to it. This should fail.
N1000V(config)# username readonly password readonly
password is weak
Password should contain characters from at least three of the following classes:
lower case letters, upper case letters, digits and special characters.
30. Create the user account with the username readonly and the password 1234QWerRO and
assign the role network-operator to it.
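The command for this step is not printed; following the pattern of the support account created later in this task, it would be:

N1000V(config)# username readonly password 1234QWerRO role network-operator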
user:readonly
this user account has no expiry date
roles:network-operator
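The commands that create the network-support role are not printed in this excerpt; reconstructed from the role output shown after step 35, they would be:

N1000V(config)# role name network-support
N1000V(config-role)# description First Level Support
N1000V(config-role)# rule 1 permit read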
35. Add three additional rules to allow read-write rights for the features ping, vlan, and syslog.
N1000V(config-role)# rule 2 permit read-write feature ping
N1000V(config-role)# rule 3 permit read-write feature vlan
N1000V(config-role)# rule 4 permit read-write feature syslog
Role: network-support
  Description: First Level Support
  -------------------------------------------------------------
  Rule    Perm    Type        Scope    Entity
  -------------------------------------------------------------
  4       permit  read-write  feature  syslog
  3       permit  read-write  feature  vlan
  2       permit  read-write  feature  ping
  1       permit  read
37. Create a new user account with the username support and the password 1234QWerSU
and assign the role network-support to it.
N1000V(config-role)# username support password 1234QWerSU role network-support
user:support
this user account has no expiry date
roles:network-support
39. Open a new Putty SSH session to your VSM and log in with the user credentials readonly
and password 1234QWerRO.
40. Change into the global configuration mode and display your current username and location
in the CLI.
N1000V(config)# where
conf
readonly@N1000V
Name              Status    Ports
----------------  --------  ------------------------------
default           active    Po1, Po2, Veth7, Veth8
vMotion/Storage   active    Po1, Po2
Control           active    Po1, Po2, Veth3, Veth4
Packet            active    Po1, Po2, Veth5, Veth6
Production        active    Po1, Po2, Veth1, Veth2, Veth9
43. Try to ping your vCenter Server. This should also fail.
N1000V(config)# ping 10.0.1.50
% Permission denied
45. Open a new Putty session to your VSM and log in with the user credentials support and
password 1234QWerSU.
46. Change into the global configuration mode and display your current username and location
in the CLI.
N1000V# configure
N1000V(config)# where
conf
support@N1000V
47. Try to add a VLAN, for example VLAN 100. This should work.
N1000V(config)# vlan 100
48. Delete the VLAN and try to ping your vCenter Server. This should also work.
N1000V(config)# no vlan 100
N1000V(config)# ping 10.0.1.50
PING 10.0.1.50 (10.0.1.50): 56 data bytes
64 bytes from 10.0.1.50: icmp_seq=0 ttl=127 time=1.268 ms
64 bytes from 10.0.1.50: icmp_seq=1 ttl=127 time=0.828 ms
64 bytes from 10.0.1.50: icmp_seq=2 ttl=127 time=0.846 ms
64 bytes from 10.0.1.50: icmp_seq=3 ttl=127 time=0.789 ms
64 bytes from 10.0.1.50: icmp_seq=4 ttl=127 time=0.79 ms

--- 10.0.1.50 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 0.789/0.904/1.268 ms
Note    The role applied to the user permits only specific commands to be performed.
51. Return to your original Putty session with network-admin rights and save your
configuration.
N1000V(config)# copy run start
[########################################] 100%
Activity Verification
You have completed this task when you attain these results:
Configured a new role with read-only access and some additional privileges and
assigned a user account to it
Logged in as a new user and verified that the role applied to the user permits only
specific commands to be performed
Lab 9: Configuring SPAN and ERSPAN
Complete this lab activity to practice what you learned in the related lesson.
Activity Objective
In this activity, you will configure SPAN and ERSPAN sessions on the Cisco Nexus
1000V to inspect network traffic. After performing this lab, you should be able to perform
the following:
Configure and verify a local SPAN session
Configure and verify an ERSPAN session
Required Resources
These are the resources and equipment required for each pod to complete this activity:
Two VMware ESXi 5.0 hosts with the Cisco Nexus 1000V VEM installed
One server running VMware vCenter Server 5 and VMware vSphere Client 5.0
Two Cisco Nexus 1000V VSM VM appliances
All pods share the following lab core devices:
One switch for server networking
One iSCSI-based storage device
Command List
The table describes the commands that are used in this activity.
Command                                          Description
-----------------------------------------------  ------------------------------------------------------
description <description>                        Adds a description to the SPAN or ERSPAN session
destination interface <type> <id>                Configures the local SPAN destination interface
no shutdown                                      Enables the monitor session
destination ip <ip-address>                      Configures the ERSPAN destination IP address
erspan-id <flow-id>                              Configures the ERSPAN flow ID of the session
vmkping <ip-address>                             Pings an IP address from the ESXi VMkernel interface
mtu <mtu_value>                                  Sets the maximum size of the spanned packets
username <user-name> password <password>         Creates a user account and assigns a role to it
  role <role-name>
capability l3control                             Allows a vEthernet port profile to carry VEM Layer 3
                                                 control traffic such as ERSPAN
Job Aids
These job aids are available to help you complete the lab activity.
Lab topology diagram in the visual objectives section in the beginning of this lab
Lab connections table in the general lab topology information section in the beginning
of the lab guide
Lab IP address and VLAN plan in the general lab topology information section in the
beginning of the lab guide
Activity Procedure
Complete these steps:
1.  Connect to your VSM and identify the vEthernet ports of the virtual machines WinServer-1
and WinServer-3.
-------------------------------------------------------------------
Port     Adapter        Owner          Mod  Host
-------------------------------------------------------------------
Veth1    Net Adapter 1  WinServer-1    3    10.0.1.1
Veth2    Net Adapter 1  WinServer-2    4    10.0.1.2
Veth3    Net Adapter 1  N1000V-VSM2    4    10.0.1.2
Veth4    Net Adapter 2  N1000V-VSM2    4    10.0.1.2
Veth5    Net Adapter 3  N1000V-VSM2    4    10.0.1.2
Veth6    Net Adapter 1  N1000V-VSM1    3    10.0.1.1
Veth7    Net Adapter 2  N1000V-VSM1    3    10.0.1.1
Veth8    Net Adapter 3  N1000V-VSM1    3    10.0.1.1
Veth9    Net Adapter 1  WinServer-3    3    10.0.1.1
Note    Make sure that WinServer-1 and WinServer-3 are located on the same ESXi host,
        which should be host 10.0.1.1.

2.  Create a local SPAN session to monitor the traffic of the virtual machine WinServer-1.
Use 1 as the session number and add a description.
3.  Configure the SPAN source as the vEthernet interface of WinServer-1 in both (transmit and
receive) traffic directions.

Note    Ensure that you use the correct vEthernet interface for your pod, which may or may not
        be vEthernet 1.
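Steps 2 and 3 correspond to CLI along these lines (a sketch; Veth1 is assumed to be WinServer-1's interface and may differ in your pod):

N1000V(config)# monitor session 1
N1000V(config-monitor)# description "SPAN of WinServer-1"
N1000V(config-monitor)# source interface vethernet 1 both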
4.  Configure the SPAN destination as the vEthernet interface of WinServer-3. Ensure that you
use the vEthernet port that your WinServer-3 VM is connected to in your pod.
5.  Enable the SPAN session.
N1000V(config-monitor)# no shutdown
6.  Display your configured monitor session.
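The command for step 4 was lost in this copy; a sketch, assuming Veth9 is the vEthernet interface of WinServer-3 in your pod:

N1000V(config-monitor)# destination interface vethernet 9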
7.  Connect to WinServer-1 and issue a continuous ping from the command prompt to
WinServer-2 with the command ping 10.0.14.2 -t.
8.  From WinServer-3, open the Wireshark program using the desktop shortcut.
9.  In Wireshark, start a live capture by clicking Capture > Start.
11. Click the icon named Stop the running live capture to stop the capture once you have
recorded some packets.
12. As a result of your local SPAN session you should see ICMP echo requests and replies
exchanged between WinServer-1 and WinServer-2.
13. Initiate a vMotion of WinServer-3 by dragging the virtual machine from your first ESXi
host to your second ESXi host.
14. Wait for vMotion to complete. Start a new capture session in Wireshark by clicking
Capture > Start, while the ping session on WinServer-1 is still active.
15. Do not save the previous capture by clicking Continue without Saving.
16. You should not see any ICMP packets captured now that the VM has moved.
Note
A characteristic of local SPAN on Cisco Nexus 1000V is that a destination port can
only monitor sources on the same VEM. But WinServer-1 and WinServer-3 are on
different VEMs after vMotion.
Note
This loss of local SPAN visibility after a VM moves to a different host applies only to
SPAN; other features and configurations applied to the port profiles move with the
VM across the data center. Local SPAN, as its name implies, occurs between a local
source and destination on the same host.
17. Perform another vMotion to move WinServer-3 back to your first ESXi host where
WinServer-1 is located.
18. After vMotion is complete, you should again see ICMP packets being captured in
Wireshark.
19. Stop the packet capture and close Wireshark. Quit without saving.
Activity Verification
You have completed this task when you attain these results:
Configured a local SPAN session to send traffic from a virtual machine to Wireshark
running on another virtual machine located on the same VEM
Activity Procedure
Complete these steps:
21. Create an ERSPAN session to monitor the traffic of the virtual machine WinServer-1.
Use 2 as the session number and add a description.
N1000V(config)# monitor session 2 type erspan-source
N1000V(config-erspan-src)# description "ERSPAN of WinServer-1"
22. As the ERSPAN source, use the vEthernet interface of WinServer-1 in both (transmit and
receive) traffic directions.
N1000V(config-erspan-src)# source interface vethernet 1 both
23. Configure 1 as the ERSPAN flow ID.
N1000V(config-erspan-src)# erspan-id 1
25. Enable the ERSPAN session and display your configured monitor session.
N1000V(config-erspan-src)# no shutdown
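Step 24 is missing from this copy; it presumably sets the ERSPAN destination IP address, which the verification steps later in this lab show to be 10.0.14.3 (WinServer-3). A sketch:

N1000V(config-erspan-src)# destination ip 10.0.14.3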
N1000V(config-erspan-src)# show monitor session 2
   session 2
---------------
description         : ERSPAN of WinServer-1
type                : erspan-source
state               : up
source intf         :
    rx              : Veth1
    tx              : Veth1
    both            : Veth1
source VLANs        :
    rx              :
    tx              :
    both            :
source port-profile :
    rx              :
    tx              :
    both            :
filter VLANs        :
destination IP      : 10.0.14.3
ERSPAN ID           : 1
ERSPAN TTL          :
ERSPAN IP Prec.     :
ERSPAN DSCP         :
ERSPAN MTU          :
ERSPAN Header Type  : 2
Activity Procedure
Complete these steps:
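Steps 26 through 28 were lost in this copy. Based on the capability l3control entry in the command list and the ERSPAN port group used below, they presumably create a Layer 3 control port profile for the VEM ERSPAN traffic. A sketch, assuming VLAN 14 as the ERSPAN VLAN:

N1000V(config)# port-profile type vethernet ERSPAN
N1000V(config-port-prof)# capability l3control
N1000V(config-port-prof)# vmware port-group
N1000V(config-port-prof)# switchport mode access
N1000V(config-port-prof)# switchport access vlan 14
N1000V(config-port-prof)# no shutdown
N1000V(config-port-prof)# state enabled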
29. Using the navigation bar in vCenter, go to the Host and Clusters view.
30. Select your first ESXi host and click the Configuration tab.
31. In the Hardware pane, select Networking and click the vNetwork Distributed Switch
button.
36. Select the port group ERSPAN from the drop-down menu and click Next. Do not select
any checkboxes.
37. Configure the IP address 10.0.14.111 and subnet mask 255.255.255.0, and click Next.
38. Click Finish and select No if you are asked to configure a default gateway. Click Close.
39. Repeat Steps 29 to 38 for your second ESXi host using the following IP address and subnet
mask:
Fields/Settings    Values
IP Address         10.0.14.112
Subnet Mask        255.255.255.0

ERSPAN ID  HDR VER  DST LTL/IP
1          2        10.0.14.3
42. Log in to the server using username root and password cisco123.
43. Run the following command to verify connectivity between the VEM ERSPAN source IP
address and the ERSPAN destination IP address, which belongs to WinServer-3.
~ # vmkping 10.0.14.3
PING 10.0.14.3 (10.0.14.3): 56 data bytes

--- 10.0.14.3 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss
Note
The same issue exists on your second ESXi host. This is because we have
configured IP Source Guard for the port profile of WinServer-3 in the security lab. IP
Source Guard is a per-interface traffic filter that permits IP traffic only when the IP
address and MAC address of each packet matches the IP and MAC address
bindings of dynamic or static IP source entries in the DHCP snooping binding table.
You need to add a static IP source entry for the ERSPAN IP address of each VEM.
44. Using the navigation bar in vCenter, go to the Host and Clusters view and select your first
ESXi host and click the Configuration tab.
45. In the Hardware pane, select Networking and click the vNetwork Distributed Switch
view.
46. Click Manage Virtual Adapters.
47. Select vmk2 (or the VMkernel port you created in the ERSPAN port group) and write
down the MAC address and click Close.
48. Repeat Steps 44 to 47 to record the MAC address of the VMkernel port on your second
ESXi host.
49. On your VSM identify the vEthernet ports used by the ERSPAN VMkernel interfaces of
your ESXi hosts.
N1000V(config)# show port-profile virtual usage
-------------------------------------------------------------------------------
Port Profile        Port     Adapter        Owner
-------------------------------------------------------------------------------
Host-Uplinks        Po1
                    Po2
                    Eth3/2   vmnic1         10.0.1.1
                    Eth3/4   vmnic3         10.0.1.1
                    Eth4/2   vmnic1         10.0.1.2
                    Eth4/4   vmnic3         10.0.1.2
Production-VMs      Veth1    Net Adapter 1  WinServer-1
                    Veth2    Net Adapter 1  WinServer-2
                    Veth9    Net Adapter 1  WinServer-3
Control             Veth3    Net Adapter 1  N1000V-VSM2
                    Veth4    Net Adapter 1  N1000V-VSM1
Packet              Veth5    Net Adapter 3  N1000V-VSM2
                    Veth6    Net Adapter 3  N1000V-VSM1
Management          Veth7    Net Adapter 2  N1000V-VSM1
                    Veth8    Net Adapter 2  N1000V-VSM2
ERSPAN              Veth10   vmk2           Module 3
                    Veth11   vmk2           Module 4
50. Add a static IP source entry for the ERSPAN IP address of your first ESXi host, using its
VMkernel MAC address and the vEthernet interface.
N1000V(config)# ip source binding 10.0.14.111 <MAC ADDRESS> vlan 14 interface
vethernet <VETH INTERFACE>
51. Add another static IP source entry for the ERSPAN IP address of your second ESXi host,
using its VMkernel MAC address and vEthernet interface.
N1000V(config)# ip source binding 10.0.14.112 <MAC ADDRESS> vlan 14 interface
vethernet <VETH INTERFACE>
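Step 52, missing from this copy, presumably displays the configured bindings. A sketch of the likely command, whose output follows:

N1000V(config)# show ip source binding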
IpAddress        LeaseSec  Type    VLAN  Interface
---------------  --------  ------  ----  -------------
10.0.14.112      infinite  static  14    Vethernet11
10.0.14.111      infinite  static  14    Vethernet10
53. On your first or second ESXi host, verify connectivity between the VEM ERSPAN source
IP address and the ERSPAN destination IP address that belongs to WinServer-3. This
should work now.
~ # vmkping 10.0.14.3
PING 10.0.14.3 (10.0.14.3): 56 data bytes
64 bytes from 10.0.14.3: icmp_seq=0 ttl=128 time=0.390 ms
64 bytes from 10.0.14.3: icmp_seq=1 ttl=128 time=0.231 ms
64 bytes from 10.0.14.3: icmp_seq=2 ttl=128 time=0.239 ms

--- 10.0.14.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.231/0.287/0.390 ms
Activity Procedure
Complete these steps:
56. Wait a few seconds, stop the capture, and fine-tune the selection of the traffic by entering
the following in the Filter field: erspan.spanid==1 && (icmp.type==0 || icmp.type==8).
Click Apply.

Note    As a result of the filter, you will see ICMP requests and replies received via
        ERSPAN.

57. Start a new capture session, and while the session is active, initiate a vMotion of
WinServer-3 from your first ESXi host to your second ESXi host. Observe that Wireshark
receives the spanned traffic even during vMotion, since this is now an Encapsulated
Remote SPAN (ERSPAN) session.
58. On WinServer-1, increase the size of the ping packets to 1500 bytes using the command
ping 10.0.14.2 -t -l 1500.
59. Observe that as a result of the previous step, on WinServer-3 the packet size of the
captured ICMP packets is increased in Wireshark. To see this, click on one of the recently
captured packets, then look at the frame size in the middle window.
60. On the VSM decrease the size of the spanned packets to 128 bytes using the MTU
command.
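The MTU change in step 60 is applied under the ERSPAN session configured earlier; a sketch:

N1000V(config)# monitor session 2 type erspan-source
N1000V(config-erspan-src)# mtu 128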
Note
One of the powerful features of the Cisco Nexus 1000V is the ability to use truncated
ERSPAN. It can change the size of the ERSPAN packets so that only the useful
information desired by the network administrator is sent. By changing the MTU to
128, it sends only the GRE header plus part of the original packet header, so it does
not saturate the link with unneeded payload.
61. Return to WinServer-3 and observe that, as a result of the previous step, the packet size of
the captured ICMP packets has decreased in Wireshark.
Activity Verification
You have completed this task when you attain these results:
Performed vMotion to move the virtual machine capturing spanned traffic to another
ESXi host
Captured and displayed the monitored traffic using Wireshark
Configured and verified truncated ERSPAN by decreasing the MTU size of the spanned
traffic