
VMware NIC Trunking Design

Posted on: August 27, 2010
Comments: 19 comments (http://www.simongreaves.co.uk/vmware-nic-trunking/#comments)
Categories: ESX (http://www.simongreaves.co.uk/category/esx/), ESXi (http://www.simongreaves.co.uk/category/esxi/), Switches (http://www.simongreaves.co.uk/category/switches/), vCenter Server (http://www.simongreaves.co.uk/category/vcenter-server-virtualisation/), Virtualisation (http://www.simongreaves.co.uk/category/virtualisation/)
Having read various books, articles, white papers and best practice guides, I have found it difficult to find consistently good advice on vNetwork and physical switch teaming design, so I thought I would write my own based on what I have tested and configured myself.

To begin with I must say I am no networking expert and may not cover some of the advanced features of switches, but I will provide links for further reference where appropriate.

The basics

Each physical ESX(i) host has at least one physical NIC (pNIC), which is called an uplink.

Each uplink is known to the ESX(i) host as a vmnic.

Each vmnic is connected to a virtual switch (vSwitch).

Each virtual machine on the ESX(i) host has at least one virtual NIC (vNIC), which is connected to the vSwitch.

The virtual machine is only aware of the vNIC; only the vSwitch is aware of the uplink-to-vNIC relationship.

This setup offers a one-to-one relationship between the virtual machine (VM) connected to the vNIC and the pNIC connected to the physical switch port, as illustrated below.

(Image: http://simongreaves.co.uk/blog/wp-content/uploads/2010/08/NIC-Teaming-basics.jpg)

When another virtual machine is added, a second vNIC is added; this in turn is connected to the vSwitch, and the two virtual machines share the same pNIC and the physical port the pNIC is connected to on the physical switch (pSwitch).

Adding more physical NICs gives us additional options for network teaming.
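A quick way to see these relationships on a host is from the command line. This is a minimal sketch assuming the ESXi 5.x esxcli syntax; on the ESX(i) 4.x releases this article was written against, the older esxcfg-nics -l and esxcfg-vswitch -l commands give similar output.

# List the physical NICs (vmnics) this host sees, with their link state and speed
esxcli network nic list

# List the standard vSwitches, their uplinks and the port groups attached to them
esxcli network vswitch standard list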

NIC Teaming

NIC teaming offers us the option to use connection-based load balancing, which is balanced on the number of connections and not on the amount of traffic flowing over the network.

This load balancing can provide resilience on our connections by monitoring the links: if a link goes down, whether it is the physical NIC or the physical port on the switch, the traffic is resent over the remaining uplinks so that no traffic is lost. It is also possible to use multiple physical switches, provided they are all in the same broadcast domain. What it will not do is allow you to send traffic over multiple uplinks at once, unless you configure the physical switches accordingly.


There are four options with NIC teaming, although the fourth is not really a teaming option:

1. Port-based NIC teaming
2. MAC address-based NIC teaming
3. IP hash-based NIC teaming
4. Explicit failover
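Before changing any of these settings it is worth checking what a vSwitch is currently set to. As a minimal sketch, assuming the ESXi 5.x esxcli syntax and a standard vSwitch named vSwitch0 (a placeholder name, substitute your own):

# Show the current teaming policy for the vSwitch: load balancing mode,
# failure detection method, notify switches, failback and active/standby uplinks
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0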
Port-based NIC teaming

Route based on the originating virtual port ID, or port-based NIC teaming as it is commonly known, does what it says and routes the network traffic based on the virtual port on the vSwitch that it came from. This type of teaming doesn't allow the traffic from a single vNIC to be spread across multiple uplinks; it keeps a one-to-one relationship between the virtual machine and the uplink port when sending and receiving to all network devices. This can lead to a problem where the number of physical ports exceeds the number of virtual ports, as you would then end up with uplinks that don't do anything. As such, the only time I would recommend using this type of teaming is when the number of virtual NICs exceeds the number of physical uplinks.

(Image: http://simongreaves.co.uk/blog/wp-content/uploads/2010/08/Port-Based-NIC-Teaming.jpg)
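If you prefer to set the policy from the command line rather than through the vSphere Client, something like the following should do it. A minimal sketch, again assuming the ESXi 5.x esxcli syntax and the placeholder vSwitch0:

# Route based on the originating virtual port ID (portid is the esxcli value for this policy)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid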
MAC address-based NIC teaming

Route based on source MAC hash, or MAC address-based NIC teaming, chooses the uplink by hashing the originating vNIC's MAC address. This works in a similar way to port-based NIC teaming in that each vNIC will send its network traffic over only one uplink. Again, the only time I would recommend using this type of teaming is when the number of virtual NICs exceeds the number of physical uplinks.
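The equivalent esxcli setting, with the same assumptions as above, would be:

# Route based on source MAC hash (mac is the esxcli value for this policy)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=mac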
IP hash-based NIC teaming

Route based on IP hash, or IP hash-based NIC teaming, works differently from the other types of teaming. It takes the source and destination IP addresses and creates a hash. It can work over multiple uplinks per VM and spread a VM's traffic across multiple uplinks when sending data to multiple network destinations.

(Image: http://simongreaves.co.uk/blog/wp-content/uploads/2010/08/IP-Based-NIC-Teaming.jpg)

Although IP hash-based teaming can utilise multiple uplinks, it will only use one uplink per session. This means that if you are sending a lot of data between one virtual machine and another server, that traffic will only travel over one uplink. With IP hash-based teaming we can then use the teaming or trunking options on the physical switches (depending on the switch type). IP hash requires EtherChannel (again, the exact name depends on the switch type), which should be left disabled for all the other teaming policies.
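On the host side the policy is set in the same way as the others; the matching EtherChannel/static trunk configuration on the physical switch is covered further down. A minimal sketch, assuming the ESXi 5.x esxcli syntax and the placeholder vSwitch0:

# Route based on IP hash; every uplink in this team must connect to the same
# static EtherChannel/trunk on the physical switch (or switch stack)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash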

Explicit failover

This allows you to override the default failover order of the uplinks. The only time I can see this being useful is if the uplinks are connected to multiple physical switches and you want to use them in a particular order, or if you think a pNIC in the ESX(i) host is not working correctly. If you use this setting it is best to configure those vmnics (adapters) as standby adapters, as active adapters are used from the highest in the order downwards.
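The failover order can also be set from the command line. A minimal sketch, assuming the ESXi 5.x esxcli syntax, the placeholder vSwitch0, and vmnic0/vmnic1 as example uplink names:

# Explicit failover order: vmnic0 active, vmnic1 standby
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
  --load-balancing=explicit --active-uplinks=vmnic0 --standby-uplinks=vmnic1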

The other options

Network failover detection

There are two options for failover detection: link status only and beacon probing. Link status only monitors the status of the link, to ensure that a connection is available at both ends of the network cable. If the link becomes disconnected it is marked as unusable and the traffic is sent over the remaining NICs. Beacon probing sends a beacon out across all uplinks in the team, which also checks that the port on the pSwitch is available and is not being blocked by configuration or switch issues. Further information is available on page 44 of the ESXi configuration guide (http://www.vmware.com/pdf/vsphere4/r41/vsp_41_esxi_server_config.pdf). Do not set this to beacon probing if using route based on IP hash.

(Image: http://simongreaves.co.uk/blog/wp-content/uploads/2010/08/Failover1.jpg)
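The detection method can be changed in the same policy. A minimal sketch, assuming the ESXi 5.x esxcli syntax and the placeholder vSwitch0; check the built-in help on your build for the exact parameter values:

# Switch from link status only to beacon probing (values are link and beacon);
# remember not to combine beacon probing with IP hash teaming
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --failure-detection=beacon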

Notify switches

This should be left set to yes (the default) to minimise the time it takes the pSwitches to update their lookup tables after a failover. Do not use this when configuring Microsoft NLB in unicast mode.

Failback

Failback will re-enable a failed uplink once it is working correctly again and move the traffic that was being sent over the standby uplink back onto it. Best practice is to leave this set to yes, unless you are using IP-based storage: if the link were to go up and down quickly it could have a negative impact on iSCSI traffic performance.
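Both of these settings live in the same failover policy, so they can be adjusted together. A minimal sketch, assuming the ESXi 5.x esxcli syntax and the placeholder vSwitch0, for uplinks carrying iSCSI traffic:

# Keep notify switches on (the default) but turn failback off for IP storage uplinks
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --notify-switches=true --failback=false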

Incoming traffic is controlled by the pSwitch routing the traffic to the ESX(i) host, so the ESX(i) host has no control over which physical NIC the traffic arrives on. As multiple NICs will be accepting traffic, the pSwitch will use whichever one it wants.

Load balancing of incoming traffic can be achieved by using and configuring a suitable pSwitch.

pSwitch configuration

The topics covered so far describe egress NIC teaming; with physical switches we have the added benefit of using ingress NIC teaming.

Various vendors support teaming on their physical switches; however, quite a few call trunking teaming and vice versa.

From the switches I have configured I would recommend the following.

All Switches

A lot of people recommend disabling Spanning Tree Protocol (STP), as vSwitches don't require it: a vSwitch knows the MAC address of every vNIC connected to it. I have found that the best practice (http://kb.vmware.com/kb/1003804) is to leave STP enabled and set the ESX-facing ports to Portfast. Without Portfast enabled there can be a delay during convergence, while the pSwitch relearns the MAC addresses, which can take 30-50 seconds. Without STP enabled there is a chance of loops on the pSwitch going undetected.
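As an illustration of what that looks like on a Cisco access port, a sketch only, with a hypothetical interface number (the per-vendor one-liners are listed below):

! ESX-facing access port: STP stays enabled on the switch, Portfast set on the port
interface GigabitEthernet0/1
 description ESX host vmnic0
 switchport mode access
 spanning-tree portfast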

802.3ad & LACP

Link Aggregation Control Protocol (LACP) is a protocol for dynamically negotiating a link aggregation group (LAG): it makes the switches at both ends aware of the multiple links and combines them into one single logical unit. It also monitors those links, and if a failure is detected it will remove that link from the logical unit.

At the time of writing VMware doesn't support LACP (see the update in the comments below: LACP is supported from vSphere 5.1 onwards when using distributed switches). However, VMware does support static IEEE 802.3ad link aggregation, which is achieved by configuring a static trunk group on the switch. The disadvantage of this is that if one of those links fails in a way that is not detected as a link-down event, a static 802.3ad trunk will continue to send traffic down that link.

Dell switches

Set Portfast using:

spanning-tree portfast

To configure link aggregation, follow my Dell switch aggregation guide (../../drupal/dell_switch_aggregation).

Further information on Dell switches is available through the product manuals.

Cisco switches

Set Portfast using:

spanning-tree portfast (for an access port)

spanning-tree portfast trunk (for a trunk port)

Set up a static EtherChannel on the ports used for IP hash teaming.

Further information is available through the VMware KB article Sample configuration of EtherChannel / Link aggregation with ESX and Cisco/HP switches (http://kb.vmware.com/kb/1004048).
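As an illustration of the EtherChannel piece described in that KB article, a static channel for two ESX uplinks might look something like this on IOS. It is a sketch only, with hypothetical interface and channel numbers; note that mode on builds the static (non-LACP) channel that IP hash teaming expects, whereas mode active would negotiate LACP:

! Hypothetical pair of uplinks from one ESX host, teamed with IP hash on the vSwitch
interface range GigabitEthernet0/1 - 2
 description ESX host uplinks
 switchport mode trunk
 spanning-tree portfast trunk
 channel-group 1 mode on
!
! Hash on source and destination IP so the switch distributes traffic the same way as the vSwitch
port-channel load-balance src-dst-ip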

HP switches

Set Portfast using:

spanning-tree portfast (for an access port)

spanning-tree portfast trunk (for a trunk port)

Set a static LACP trunk using:

trunk < port-list > < trk1 ... trk60 > < trunk | lacp >

Further information is available through the VMware KB article Sample configuration of EtherChannel / Link aggregation with ESX and Cisco/HP switches (http://kb.vmware.com/kb/1004048).
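To give a concrete instance of that trunk syntax, with hypothetical port numbers, a static (non-LACP) trunk, which is what IP hash teaming expects, would be:

trunk 21-22 trk1 trunk (ports 21 and 22 grouped into static trunk trk1; use lacp in place of trunk for an LACP trunk)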


19 comments

Prit patel (http://priteshhi.wordpress.com), January 28, 2011 at 1:01 pm

Thanks for putting it in simple terms, very helpful.

-Prit patel

Kevin, January 11, 2012 at 2:42 pm

Awesome stuff, clear, concise and correct. Thanks a million.

Kevin


Paul, February 21, 2012 at 9:11 pm

Hello,

Great post, thanks for taking the time. A couple of questions if I may:

1) You mentioned that VMware does not support LACP. Do you have a reference for this?
2) At the end of the article you show how to configure LACP for various brands of physical switches, but earlier you said this wasn't a supported option. Can you please clarify that point for me?

admin, February 22, 2012 at 9:30 pm

Hi Paul,

Thanks for your comments.

Link aggregation can be configured as either dynamic or static. Dynamic configuration is supported using the IEEE 802.3ad standard, which is known as Link Aggregation Control Protocol (LACP). So you have two types, LACP dynamic and LACP static. VMware does support IEEE 802.3ad static, as shown in this vSphere 4.0 article on the VMware site (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010270).

I hope this answers your questions.

Arghya Chatterjee, October 3, 2012 at 4:14 pm

Splendid work

Bruce He, November 19, 2012 at 7:06 am

Hi Simon,

Thanks for your great work!

I have a question about teaming on VMware. Since there are several physical NICs connected to the physical switch, is there a way to observe the detailed traffic on each physical NIC? For example, traffic from VM A to B is on NIC-A, while traffic from VM C to D is on NIC-B.

Simon Greaves (http://simongreaves.co.uk/blog), February 27, 2013 at 6:05 pm

Hi Bruce,

You can view virtual machine traffic stats on the performance tab of the vSwitch or the virtual machines. For detailed information you would need to use a packet capture tool of some sort, such as Wireshark.

Regards,

Simon

Dhakshinamoorthy Balasubramanian, November 29, 2012 at 4:40 pm

Nice article.
Thanks


30ma, January 16, 2013 at 9:28 am

I have 25 VLANs on a 3750 switch and I want to present them to an ESXi 5 server:

interface vlan 3001
ip address 10.128.21.254
!
interface vlan 3002
ip address 10.128.22.254
!
.
.
.
interface vlan 3025
ip address 10.128.45.254

Simon Greaves (http://simongreaves.co.uk/blog), February 27, 2013 at 6:02 pm

Hi 30ma,

If these 25 VLANs are for virtual machine traffic you can just create a portgroup for each VLAN on the Standard or Distributed Switch within vCenter, give each portgroup the appropriate VLAN tag, and then add the network interface of each VM to the appropriate portgroup.

Hope this helps!

Simon

30ma, January 16, 2013 at 9:29 am

How can I do this? Please help me.

30ma, January 16, 2013 at 9:31 am

How can I do it? Please help me.

Joe, February 27, 2013 at 5:34 pm

30ma, I am hoping someone can answer your question as well.

David, April 30, 2013 at 10:19 pm

What if I am working with two physical Cisco switches that are not the same type, nor are they stacked: a 1Gb switch and a 10Gb switch?

I want to configure the vmnics on the ESX host to fail over (not load balance traffic) from the 10Gb to the 1Gb switch in the event of a catastrophic failure of the 10Gb switch.

Basically, two vmnics are connected from the ESX host, one to each of the switches.

Can that be accomplished, or do I have to rely on Cisco port trunking on a single switch (or switch stack) to accomplish this?


Simon Greaves (http://simongreaves.co.uk/blog), April 30, 2013 at 10:32 pm

Hi David,

You can easily configure the 1Gb NIC to act as the failover vmnic by editing the portgroup settings: select the NIC teaming tab, tick the override vSwitch failover order box, and make sure the 10Gb NIC is in the active adapters section and the 1Gb NIC is in the standby adapters section.

This is a good way to minimise single-point-of-failure risk without having to purchase expensive 10Gb NICs for the failover port. Obviously note that the 1Gb NIC will perform much slower than the 10Gb port, so make sure this won't cause any issues for the traffic that is flowing on the failed-over NIC.

Also note that active/standby is not supported for iSCSI traffic. This will work for NFS or management/virtual machine traffic as long as the physical switch ports are configured to allow any relevant VLANs in case of failover.

Thanks for reading!

Simon

David, May 1, 2013 at 7:21 pm

Ooooh, iSCSI not supported? Then it won't work for us, because the 10Gb switch is handling the iSCSI traffic.


RCMTech (http://rcmtech.wordpress.com/), October 18, 2013 at 11:00 am

I know this article was written a while ago now, but just to update: LACP is supported as of v5.1 and improved in v5.5 if you use vSphere Distributed Switches. See http://www.vmware.com/files/pdf/vsphere/VMware-vSphere-Platform-Whats-New.pdf

