
VMware ESX 4.0 and PowerVault MD3000i - The Dell TechCenter http://www.delltechcenter.com/page/VMware+ESX+4.0+and+PowerVa...

VMware ESX 4.0 and PowerVault MD3000i


This sample configuration for using ESX/ESXi 4.0 with Dell™ PowerVault™ MD3000i storage describes best practices for providing multipathing and load balancing of Internet SCSI (iSCSI) storage traffic. These goals are achieved by using two NIC ports on the ESX host for iSCSI traffic and by setting the Round Robin path-selection policy on iSCSI volumes.

Please provide feedback at the bottom of this page if you have any corrections or suggestions.

Sample Configuration
Figure 1 shows a sample configuration to use ESX/ESXi 4.0 host(s) with Dell PowerVault MD3000i storage.

An ESX/ESXi 4.0 host is connected using two NIC ports (vmnic1 and vmnic4) dedicated for iSCSI traffic to the
PowerVault MD3000i. The vmnic1 and vmnic4 ports are connected to separate Gigabit Ethernet (GbE) switches
1 and 2. Ethernet switch 1 is connected to controller 0, port 0 and controller 1, port 0. Ethernet switch 2 is
connected to controller 0, port 1 and controller 1, port 1.

Dell PowerVault MD3000i Network Configuration

The Dell PowerVault MD3000i array comprises two active RAID controllers. However, at any given time only one
RAID controller owns a virtual disk or logical unit (LUN). Each RAID controller has two GbE ports for iSCSI data
traffic. By configuring controller 0, port 0 and controller 1, port 0 in one IP subnet and controller 0, port 1 and
controller 1, port 1 in another IP subnet, both traffic isolation across physical Ethernet segments (using redundant
Ethernet switches) and path redundancy can be achieved. For the purpose of this configuration, the PowerVault
MD3000i iSCSI host ports are configured with the following IP configuration:

1 de 6 21/7/2010 15:23
Controller 0, port 0: 192.168.130.101/255.255.255.0
Controller 0, port 1: 192.168.131.101/255.255.255.0
Controller 1, port 0: 192.168.130.102/255.255.255.0
Controller 1, port 1: 192.168.131.102/255.255.255.0
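Each VMkernel interface created later in this setup must share a /24 with the controller ports it is meant to reach. A minimal shell sketch of that rule (the `same_subnet_24` helper is our illustration, not part of any Dell or VMware tool, and it assumes the 255.255.255.0 mask used throughout this page):

```shell
# Two dotted-quad addresses fall in the same /24 when their first three
# octets match, which is all a 255.255.255.0 mask compares.
same_subnet_24() {
    [ "${1%.*}" = "${2%.*}" ]
}

# A VMkernel interface at 192.168.130.11 reaches controller 0, port 0
# and controller 1, port 0 (both in 192.168.130.0/24)...
same_subnet_24 192.168.130.11 192.168.130.101 && echo "reachable"
# ...but not the 192.168.131.0/24 ports, which need a second interface.
same_subnet_24 192.168.130.11 192.168.131.101 || echo "second interface needed"
```

Running the check against both subnets makes it clear why one VMkernel interface per subnet is required.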

Note: If using jumbo frames, enable them on the array's iSCSI data ports. In the Modular Disk Storage Manager (MDSM) interface, go to iSCSI > Configure iSCSI Host Ports. For each port, click Advanced Host Port Settings, check the Enable jumbo frames check box, set the MTU size to 9000, and click OK.

iSCSI Switch Configuration

If using jumbo frames, enable them on the Ethernet switch ports connected to the ESX host(s) and to the PowerVault MD3000i.
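Switch CLIs differ by vendor, so the following is a hypothetical illustration only. On some Cisco Catalyst models, for example, jumbo support is a global setting that takes effect only after a reload; consult your own switch documentation for the equivalent:

```
! Hypothetical Cisco Catalyst (IOS) example -- the commands and the
! reload requirement vary by model and vendor.
configure terminal
system mtu jumbo 9000
end
copy running-config startup-config
reload
```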

iSCSI Storage Network Configuration on an ESXi Host

Multipathing: To provide path redundancy and traffic load balancing for iSCSI storage traffic, two network adapters (vmnic1 and vmnic4) are used. As shown in Figure 2, vmnic1 is uplinked to vSwitch1 and vmnic4 to vSwitch2. Using the esxcli command, VMkernel interfaces vmk1 and vmk2 are attached to the software iSCSI initiator.

The VMkernel-iSCSI1 interface is configured to be in the same IP subnet as PowerVault MD3000i controller 0,
port 0 and controller 1, port 0. The VMkernel-iSCSI2 interface is configured to be in the same IP subnet as
PowerVault MD3000i controller 0, port 1 and controller 1, port 1. This configuration enables two active paths
(one through VMkernel-iSCSI1 and the other through VMkernel-iSCSI2) to a LUN owned by any PowerVault
MD3000i controller. Setting the path selection policy for a LUN to Round Robin (VMware) enables load balancing
of iSCSI traffic across both active paths. The other two path-selection policies, namely Most Recently Used
(MRU) and Fixed, do not offer load balancing.

The following steps configure an ESX host for the setup shown in Figure 1:

1. Using the VI client, enable the software iSCSI initiator on the ESX host and assign an appropriate IQN.
2. In the software iSCSI initiator properties, add the IP address of any PowerVault MD3000i host data port for dynamic
discovery. Do not rescan the software iSCSI adapter at this time.
3. Manually add the ESX/ESXi host to the PowerVault MD3000i, and create the host-to-virtual-disk mappings.

If you are not using jumbo frames:

4. Using the VI client GUI, create virtual switch vSwitch1 and add uplink vmnic1.
5. Using the VI client GUI, create virtual switch vSwitch2 and add uplink vmnic4.
6. Create a VMkernel port group on vSwitch1 named VMkernel-iSCSI1 with IP configuration
192.168.130.11/255.255.255.0. Leave the gateway as the default management network gateway.
7. Create a VMkernel port group on vSwitch2 named VMkernel-iSCSI2 with IP configuration
192.168.131.11/255.255.255.0. Leave the gateway as the default management network gateway.
8. Using the VI client GUI (Host > Configuration > Networking), note down the VMkernel port numbers
(vmkX). Attach the VMkernel interfaces to the software iSCSI initiator:


a. $ esxcli swiscsi nic add -n vmk1 -d vmhbaXX
b. $ esxcli swiscsi nic add -n vmk2 -d vmhbaXX
where vmhbaXX is the vmhba number of the software iSCSI initiator.
c. Rescan the SW iSCSI initiator.

If you want to enable jumbo frames, follow these steps to set up VMkernel interfaces. On the ESX host, or using
RCLI, issue the following CLI commands:

4. Create two virtual switches, vSwitch1 and vSwitch2, and add uplinks vmnic1 and vmnic4, respectively:

a. $ esxcfg-vswitch -a vSwitch1
b. $ esxcfg-vswitch -a vSwitch2
c. $ esxcfg-vswitch vSwitch1 -L vmnic1
d. $ esxcfg-vswitch vSwitch2 -L vmnic4

5. Enable jumbo frames on the vSwitches:

a. $ esxcfg-vswitch vSwitch1 -m 9000
b. $ esxcfg-vswitch vSwitch2 -m 9000

6. Create the VMkernel port groups:

a. $ esxcfg-vswitch vSwitch1 -A VMkernel-iSCSI1
b. $ esxcfg-vswitch vSwitch2 -A VMkernel-iSCSI2

7. Create VMkernel interfaces for iSCSI traffic, and enable jumbo frames on each:

a. $ esxcfg-vmknic -a -i 192.168.130.11 -n 255.255.255.0 -m 9000 VMkernel-iSCSI1
b. $ esxcfg-vmknic -a -i 192.168.131.11 -n 255.255.255.0 -m 9000 VMkernel-iSCSI2

8. Observe the output of the esxcfg-vmknic command and note the VMkernel ports named vmkX. Make sure
that the MTU size for the newly created vmkX ports is set to 9000:

a. $ esxcfg-vmknic -l
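At this point, end-to-end jumbo delivery can be spot-checked from the ESX console with vmkping. An ICMP payload of 8972 bytes is the largest that fits a 9000-byte MTU (9000 minus 20 bytes of IP header and 8 bytes of ICMP header); treat this as a sketch, since the available flags (such as -d for do-not-fragment) vary by ESX release:

```shell
# Ping each MD3000i data port from the ESX console with a jumbo payload.
# 8972 = 9000 (MTU) - 20 (IP header) - 8 (ICMP header).
vmkping -s 8972 -d 192.168.130.101   # controller 0, port 0
vmkping -s 8972 -d 192.168.130.102   # controller 1, port 0
vmkping -s 8972 -d 192.168.131.101   # controller 0, port 1
vmkping -s 8972 -d 192.168.131.102   # controller 1, port 1
```

If a ping fails at this size but succeeds without -s, jumbo frames are not enabled end to end.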

9. Attach the VMkernel interfaces to the software iSCSI initiator:

a. $ esxcli swiscsi nic add -n vmk1 -d vmhbaXX
b. $ esxcli swiscsi nic add -n vmk2 -d vmhbaXX

where vmhbaXX is the vmhba number of the software iSCSI initiator.

10. Rescan the SW iSCSI initiator:

a. $ esxcfg-rescan vmhbaXX

where vmhbaXX is the vmhba device for SW iSCSI initiator.
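Taken together, steps 4 through 10 can be condensed into one console script. This is a sketch under the assumptions of this sample configuration: vmnic1/vmnic4 as the iSCSI uplinks, the IP addresses used above, and vmhba33 as a stand-in for the software iSCSI initiator's real vmhba number (check the Storage Adapters view first):

```shell
#!/bin/sh
# Jumbo-frame iSCSI setup (steps 4-10 condensed). Run on the ESX console
# or adapt for the vSphere CLI. Values below match this sample config.
VMHBA=vmhba33            # software iSCSI initiator -- verify yours first

esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch vSwitch1 -L vmnic1
esxcfg-vswitch vSwitch2 -L vmnic4
esxcfg-vswitch vSwitch1 -m 9000
esxcfg-vswitch vSwitch2 -m 9000
esxcfg-vswitch vSwitch1 -A VMkernel-iSCSI1
esxcfg-vswitch vSwitch2 -A VMkernel-iSCSI2
esxcfg-vmknic -a -i 192.168.130.11 -n 255.255.255.0 -m 9000 VMkernel-iSCSI1
esxcfg-vmknic -a -i 192.168.131.11 -n 255.255.255.0 -m 9000 VMkernel-iSCSI2
esxcfg-vmknic -l                     # confirm MTU 9000 on the new vmkX ports
esxcli swiscsi nic add -n vmk1 -d "$VMHBA"
esxcli swiscsi nic add -n vmk2 -d "$VMHBA"
esxcfg-rescan "$VMHBA"
```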

Follow these steps regardless of jumbo frame configuration:

11. Format the LUNs exposed to ESX as VMFS.

12. To configure the Round Robin multipathing policy using the VI client GUI, change the default path-selection
policy to Round Robin (VMware) for each LUN exposed to the ESX server. This enables load balancing over
the two active paths to the LUN (the two paths through the controller that owns the LUN; the other two paths
remain in standby).

a. Right-click on the device and choose Manage Paths.


b. In the Path Selection drop-down, select Round Robin.

Before this selection is made, one path has a status of Active, another is Active (I/O), and the other
two paths are in standby.

c. After selecting Round Robin, two paths have a status of Active (I/O) and the other two paths are in
standby.


d. Repeat the process for each iSCSI LUN presented to the ESX 4 server from the Dell PowerVault
MD3000i.
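The same policy change can also be scripted from the console using the esxcli nmp namespace present in ESX/ESXi 4.0. This is a hedged sketch: VMW_SATP_LSI is the storage array type plugin normally used for the MD3000i, and the naa identifier below is a placeholder (list the real device IDs with esxcli nmp device list):

```shell
# Make Round Robin the default path policy for devices claimed by
# VMW_SATP_LSI, so newly discovered MD3000i LUNs pick it up automatically.
esxcli nmp satp setdefaultpsp --satp VMW_SATP_LSI --psp VMW_PSP_RR

# Set the policy explicitly on an existing LUN (placeholder device ID).
esxcli nmp device setpolicy --device naa.60022190000000000000000000000000 --psp VMW_PSP_RR
```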

Note: For more details on using iSCSI storage with ESX/ESXi 4.0, refer to the vSphere iSCSI configuration guide
at http://vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf.

Latest page update: made by TDA-James, Dec 9 2009, 8:23 PM EST
Keyword tags: ESX 4.0, vSphere, MD3000i


Threads for this page

Thread: Slow Performance with vSphere and MD3000i
Started by cpt86 on May 17 2010, 1:09 PM EDT; 18 replies; last post Jul 12 2010, 3:05 PM EDT by JOHNADCO

Hi,

we got a new MD3000i last week and I had real fun playing with it. I like it when hardware
is simple to set up, and with this tutorial it was a piece of cake getting our two ESXi servers up
and running against the storage.
But after some initial testing of failover and other things, I started benchmarking the whole
system, and I found out I only get about 30 MB/s read-only. So I added jumbo frames, tried
connecting the servers directly via crossover, and enabled and disabled Round Robin, but the best I
could get with HDTune (and verified via SMcli and esxtop) was about 50 MB/s. That is fine for
most servers, but we want to add a virtualized file server, and then 50 MB/s is not enough.
Out of curiosity I then ran two instances of HDTune, and the throughput I received nearly doubled.
With four instances I had the full 110 MB/s, and the switch was running at 98% load.
Now comes the part where I need help:
how do I get 100 MB/s for just one instance of the benchmark (or a file copy, etc.)?


Would be great if someone here could help me out ;)

Thanks in advance!

Chris

Thread: Exsi 4u1 and Dell MD3000i IP Addressing
Started by alamosa; 1 reply; last post Jun 1 2010, 2:51 PM EDT by KongY@Dell

Thread: multipath configuration lost after reboot
Started by anguyen69; 3 replies; last post May 26 2010, 11:31 AM EDT by JOHNADCO

Showing 3 of 10 threads for this page.

Related Content

Slow Performance with vSphere and MD3000i
multipath configuration lost after reboot
All Controller Ports on the same Subnet setup question
ISCSI Speeds
Seeing 8 paths versus 4 after step 11.... what are we doing wrong?

Copyright 1999-2007 Dell Inc.


Battery Recall | About Dell | Conditions of Sale & Site Terms | Unresolved Issues | NEW Privacy Policy | Contact Us | Site Map |

6 de 6 21/7/2010 15:23
