Version 1.0
Jeff Purcell
h8229
Contents
Chapter 1   Configuring VMware vSphere on VNX Storage .................................... 15
Chapter 2   Cloning Virtual Machines ........................................................ 113
Chapter 3   Establishing a Backup and Recovery Plan for VMware vSphere on VNX Storage ... 127
Chapter 4   Using VMware vSphere in Data Restart Solutions .................................. 167
Chapter 5
Preface
Audience
This TechBook describes how VMware vSphere works with the EMC
VNX series. The content in this TechBook is intended for storage
administrators, system administrators, and VMware vSphere
administrators.
Note: Although this document focuses on VNX storage, most of the content
also applies when using vSphere with EMC Celerra or EMC CLARiiON
storage.
Note: In this document, ESXi refers to VMware ESX Server versions 4.0 and 4.1. Unless explicitly stated, ESXi 4.x, ESX 4.x, and ESXi are synonymous.
Related documentation
Related documentation includes the EMC Unisphere documentation and the VMware vSphere documentation available at:
http://www.vmware.com/products/
http://www.vmware.com/support/pubs/vs_pubs.html

Conventions used in this document
IMPORTANT
An important notice contains information essential to software or
hardware operation.
Typographical conventions
EMC uses the following type style conventions in this document:
Normal          Used for running text
Bold            Used for names of interface elements
Italic          Used for full titles of publications and for emphasis
Courier         Used for system output (such as an error message or script) and for URLs, complete paths, filenames, prompts, and syntax when shown outside of running text
Courier bold    Used for specific user input (such as commands)
Courier italic  Used for variables on the command line
< >             Angle brackets enclose parameter or variable values supplied by the user
[ ]             Square brackets enclose optional values
{ }             Braces enclose content that the user must specify
...             Ellipses indicate nonessential information omitted from the example
Chapter 1  Configuring VMware vSphere on VNX Storage
Introduction ........................................................................................ 16
Management options......................................................................... 19
VMware vSphere on EMC VNX configuration road map ........... 24
VMware vSphere installation........................................................... 26
VMware vSphere boot from storage ............................................... 27
Unified storage considerations ........................................................ 33
Network considerations.................................................................... 48
Storage multipathing considerations .............................................. 50
VMware vSphere configuration....................................................... 64
Provisioning file storage for NFS datastores.................................. 71
Provisioning block storage for VMFS datastores and RDM
volumes (FC, iSCSI, FCoE) ............................................................... 76
Virtual machine considerations ....................................................... 80
Monitor and manage storage ........................................................... 92
Storage efficiency ............................................................................. 100
Introduction
EMC VNX series delivers uncompromising scalability and
flexibility for the midtier while providing market-leading simplicity
and efficiency to minimize total cost of ownership. Customers can
benefit from the following new VNX features:
Fully Automated Storage Tiering for Virtual Pools (FAST VP) that
can be optimized for the highest system performance and lowest
storage cost on block and file.
Multiprotocol support for file, block, and object with object access
through Atmos Virtual Edition (Atmos VE).
The VNX series includes five new software suites and three new software packs, making it easier to attain the maximum overall benefit.
Storage alternatives
The VNX system supports one active SCSI transport type at a time. An ESXi host can connect to a VNX block system with any type of adapter; however, the adapters must all be of the same type, for example, FC, FCoE, or iSCSI. Connecting to a single VNX with different types of SCSI adapters is not supported.
The previous statement does not apply to NFS, which can be used in combination with any SCSI protocol.
VMware ESXi uses VNX SCSI devices to create VMFS datastores or
raw device mapping (RDM) volumes. LUNs and NFS file systems are
provisioned from VNX with Unisphere or through the VMware
vSphere Client using the EMC VSI for VMware vSphere: Unified
Storage Management (USM) feature. VNX platforms deliver a
complete multiprotocol foundation for a VMware vSphere virtual
data center, as shown in Figure 1.
Figure 1
The VNX series is ideal for VMware vSphere in the midrange for the
following reasons:
Management options
VMware administrators can use Unisphere or the Virtual Storage
Integrator (VSI) for VMware vSphere to manage VNX storage in
virtual environments.
EMC Unisphere
Figure 2  EMC Unisphere
Figure 3
Reduce the copy creation time of virtual machines using the Full
Clone technology.
For VMFS datastores and RDM volumes on block storage, use this
feature to do the following:
Figure 4
Figure 5
The VSI framework and its features are freely available from EMC.
Some features are specific to storage platforms such as Symmetrix
DMX and VNX. The framework, features, and supporting
documents can be obtained from the EMC Powerlink website
located at: http://Powerlink.EMC.com/.
Figure 6
After you install and configure VMware ESXi, complete the following
steps:
1. Ensure that network multipathing and failover are configured
between ESXi and the VNX platform. Storage multipathing
considerations on page 50 provides more details.
2. Complete the NFS, VMFS, and RDM configuration steps using
EMC VSI for USM:
a. NFS - Create and export the VNX file system to the ESXi host.
Add NFS datastores to ESXi hosts from NFS file systems
created on VNX. Provisioning file storage for NFS
datastores on page 71 provides details to complete this
procedure using USM.
b. VMFS - Configure a VNX FC/FCoE/iSCSI LUN and present
it to the ESXi server.
Configure a VMFS datastore from the LUN that was
provisioned from VNX. Provisioning block storage for VMFS
datastores and RDM volumes (FC, iSCSI, FCoE) on page 76
provides details to complete this procedure using USM.
c. RDM - Configure a VNX FC/FCoE/iSCSI LUN and present it
to the ESXi server.
Create and surface the LUN provisioned from VNX to a
virtual machine for RDM use. Provisioning block storage for
VMFS datastores and RDM volumes (FC, iSCSI, FCoE) on
page 76 provides details to complete this procedure using
USM.
3. Provision newly created virtual machines on NFS or VMFS
datastores and optionally assign newly created RDM volumes.
VMware vSphere boot from SAN: FC/FCoE LUNs
8. Create a LUN on which to install the boot image. The LUN need
not be any larger than 20 GB. Do not store virtual machines
within this LUN.
9. Create a storage group and add the host record and the new LUN
to it.
10. Rescan the Host Adapter to discover whether the new device is
accessible. If the LUN does not appear or appears as LUNZ,
recheck the configuration and rescan the HBA.
11. Reserve a specific Host LUN ID to identify the boot devices. For
example, assign a Host LUN number of 0 to LUNs that contain
the boot volume. Using this approach makes it easy to
Figure 7
VMware vSphere boot from SAN: iSCSI LUNs
Figure 8
storage interface supports jumbo frames, and that the MTU size of the interface card on the ESXi host, the network switch port, and the VNX port are consistent.
3. Configure the first iSCSI target by specifying the IP address and
the IQN name of the VNX iSCSI port configured in the previous
step. Optionally, specify the CHAP properties for additional
security of the iSCSI session.
Figure 9
Table 1 shows how the VNX platform enables users to mix drive types and sizes on the storage array and in storage pools to adequately support the applications.

Table 1  VNX drive types
Type of drive    Available size     Benefit
Flash drives     100 GB, 200 GB     Extreme performance; lowest latency
SAS drives       -                  Cost effective; better performance
NL-SAS drives    -                  -
RAID configuration options

Table 2  RAID algorithms
Algorithm   Description                               Pool support
RAID 0      Striping without parity protection        No
RAID 1      Mirrored pair                             No
RAID 1/0    Mirroring combined with striping          Yes
RAID 3      Striping with a dedicated parity drive    No
RAID 5      Striping with distributed parity          Yes
RAID 6      Striping with dual distributed parity     Yes
FAST VP
Figure 10
Figure 11
Thick LUNs
Thin LUNs
Thin LUNs (TLUs) are also created within storage pools. However, a
TLU does not reserve or allocate any user space from the pool.
Internal allocation reserves a few storage pool 1 GB slices when the
LUN is created. No additional storage allocation occurs until the host
or guest writes to the LUN. Select the Thin LUN checkbox in the LUN
creation page of Unisphere to create a TLU.
Note: After a device is written to at the guest level, the blocks remain allocated until the device is deleted or migrated to another thin device. To free deleted blocks, you must compress the LUN.
The primary difference between Thick and Thin LUN types is the way storage is allocated within the pool. Thin LUNs reserve a 1 GB slice and then allocate 8 KB blocks from that slice on demand, when the host issues a new write to the LUN. Thick LUNs allocate space in 1 GB increments as new writes to the VMFS datastore are initiated. Another difference is in the pool reservation: although both storage types perform on-demand allocation, Thick LUN capacity is guaranteed within the pool and deducted from free space, whereas Thin LUN capacity is not reserved or guaranteed within the storage pool. This is why monitoring the free space of pools that contain Thin LUNs is important; monitoring and alerting are covered in Monitor and manage storage on page 92 of this document. Since the goal of thin provisioning is economical use of storage resources, TLUs allocate space at a much more granular level than Thick LUNs.
Comparison between pool LUNs and VNX OE for block LUNs
VNX OE for block (VNX OE) LUNs, or RAID group LUNs, are the traditional storage devices that were used before the introduction of storage pools. VNX OE LUNs allocate all the disk space in a RAID group at the time of creation. VNX OE LUNs are the only available option when creating a LUN from a RAID group, and there is no thin option with VNX OE or RAID group LUNs.
Figure 12
Enable the write cache and disable the read cache for Flash drive
LUNs.
The only AVM Pools supported with Flash drives are RAID 5 (4+1
or 8+1) or R1/0 (1+1).
Create four LUNs per Flash drive RAID and balance the
ownership of the LUNs between the VNX storage processors.
This recommendation is unique to Flash drives. Traditional AVM configurations provided better spatial locality and performance when configured with two LUNs per RAID group.
Set the stripe element size for the volume to 256 KB.
LUN considerations with VNX and vSphere
Since SIOC requires an Enterprise Plus license, and not all systems will have Flash drives, those environments need to be considered as well.
Using a single LUN can lead to resource contention, because it forces the VMkernel to serially queue I/Os from all the virtual machines using the LUN. The VMware parameter Disk.SchedNumReqOutstanding prevents one virtual machine from monopolizing the FC queue. Nevertheless, response times elongate unpredictably when a long queue builds against the LUN.
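As a point of reference, this parameter can be inspected and adjusted from the ESXi service console. A minimal sketch, keeping the value aligned with the HBA queue depth:
Display the current value:
# esxcfg-advcfg -g /Disk/SchedNumReqOutstanding
Set the value (32 is the default):
# esxcfg-advcfg -s 32 /Disk/SchedNumReqOutstanding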
The LUN sizes within these environments should be based upon the performance requirements. The key criteria for deciding the LUN size are an understanding of the workload, the required IOPS for the applications and virtual machines, the response times of the applications, and the sizing for the peak periods of I/O activity. Balance the number of virtual machines running within a datastore against the I/O profile of the virtual machines and the capabilities of the storage devices.
Larger single-LUN implementations
MetaLUN benefits
Single LUN benefits:
Easier management.
One VMFS to manage unused storage.
Number of VMFS volumes in an ESXi host or cluster
Figure 13
Network considerations
The VNX platform supports many network configuration options for
VMware vSphere including basic network topologies. This section
lists items to consider before configuring the storage network for
vSphere servers.
Note: Storage multipathing is an important network configuration topic.
Review the information in Storage multipathing considerations on page 50
before configuring the storage network between vSphere and VNX.
Network equipment considerations
IP-based network configuration considerations
Use CAT 6 cables rather than CAT 5/5e cables. Although GbE works on CAT 5 cables, they are less reliable and robust. Retransmissions recover from errors, but they have a more significant impact on IP storage than on general networking use cases.
With NFS datastores, use network switches that support a
Multi-Chassis Link Aggregation technology such as cross-stack
Etherchannel or Virtual Port Channeling. Multipathing
considerations - NFS on page 56 provides more details.
With NFS datastores, use 10 GbE network equipment.
Alternatively, use network equipment that includes a simple
upgrade path from 1 GbE to 10 GbE.
With VMFS datastores over FC, consider using FCoE converged
network switches and CNAs over 10 GbE links. These have
similar fabric functionality and administration requirements as
standard FC switches and HBAs but at a lower cost. FCoE
network considerations on page 49 provides more details.
FCoE network considerations
Set jumbo frames on ESXi, the physical network switch, and VNX to enable them end-to-end in the I/O path, as shown in the sketch after this list.
Ensure that the Ethernet switches have the proper number of port
buffers and other internals to properly support NFS and iSCSI
traffic.
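As an illustration, jumbo frames can be enabled from the ESXi service console. The following is a minimal sketch, assuming a hypothetical vSwitch named vSwitch1 and a VMkernel port group named IPStorage, with the switch and VNX ports also set to an MTU of 9000:
Set the vSwitch MTU to 9000:
# esxcfg-vswitch -m 9000 vSwitch1
Create a jumbo-frame VMkernel port (in ESXi 4.x the MTU must be specified when the port is created):
# esxcfg-vmknic -a -i 10.6.121.183 -n 255.255.255.0 -m 9000 IPStorage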
Multipathing considerations - VMFS/RDM
Figure 14
Figure 15
With port binding enabled, configure a single vSwitch with two NICs
so that each NIC is bound to one VMkernel port. These NICs can be
connected to the same SP port on the same subnet as shown in
Figure 16.
Figure 16
Run the following command to verify that the ports are added to the
software iSCSI initiator:
# esxcli swiscsi nic list -d <vmhba>
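For completeness, each VMkernel port is bound to the software iSCSI initiator with the corresponding add command before verification. A minimal sketch, assuming hypothetical VMkernel ports vmk1 and vmk2 and software iSCSI adapter vmhba33:
# esxcli swiscsi nic add -n vmk1 -d vmhba33
# esxcli swiscsi nic add -n vmk2 -d vmhba33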
Multipathing and failover options
MRU: Uses the first path it detects when the host boots, and uses it as long as it remains available.
Fixed: Uses a single active path for all I/O to a LUN. vSphere 4.1 introduced a new policy called VMW_SATP_FIXED_AP, which selects the array's preferred path for the LUN when VNX is set to ALUA mode. This policy offers automated failback but does not include load balancing.
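For reference, the path selection policy of a device can be viewed and changed from the command line. A minimal sketch, using a hypothetical device identifier and switching the device to Round Robin:
List devices and their current path selection policies:
# esxcli nmp device list
Set the Round Robin policy on a specific device:
# esxcli nmp device setpolicy --device naa.6006016012345678 --psp VMW_PSP_RR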
Figure 17
Does not have any single point of failure (NIC ports, switch ports,
physical network switches, and VNX Data Mover network ports)
Figure 18
ESXi NIC ports - NIC teaming on the ESXi hosts provides fault tolerance against NIC port failure. Set the load balancing policy on the virtual switch to "Route based on IP hash" for EtherChannel.
Figure 19
Unisphere interface
2. Click Devices, and then click Create. The Create Network Device
dialog box appears.
3. In the Device Name field, type a name for the LACP device.
4. In the Type field, select Link Aggregation.
5. In the 10/100/1000 ports field, select the two Data Mover ports
that are used.
6. Enable Link Aggregation on the switches, the corresponding
VNX Data Mover interfaces, and ESXi host network ports.
7. Click OK to create the LACP device.
8. In the Settings for files page, click Interfaces.
9. Click Create. The Create Network Interface page appears.
Figure 20
14. Access vSphere Client and complete steps 15 through 19 for each
ESXi host.
15. Create a vSwitch for all the new NFS datastores in this
configuration.
16. Create a single VMkernel port connection in the new vSwitch.
Add two physical NICs to it and assign an IP address for the
VMkernel in the same subnet as the two Network Interfaces of
the VNX Data Mover. (In Figure 16, the VMkernel IP address is
set to 10.6.121.183 with physical NIC vmnic0 and vmnic1
connected to it.)
17. Click Properties. The vSwitch1 Properties dialog box appears.
Figure 21
18. Select vSwitch, and then click Edit. The vSwitch1 Properties
page appears.
Figure 22
19. Click NIC Teaming, and select Route based on ip hash from the
Load Balancing list box.
Note: The two vmnics are listed under the Active Adapters for the NIC Team.
If both corresponding ports on the switch are enabled for Ether channel, data
traffic to the ports is statically balanced using a hash function of the source
and destination IP address.
IMPORTANT
This means that a single TCP session from a virtual machine to a specific NFS datastore always uses the same vmnic. However, two TCP sessions from two virtual machines accessing different datastores will use different vmnics (network paths), resulting in higher throughput.
Figure 23
20. Provision an NFS datastore using USM. For the first NFS
datastore, select the primary Data Mover in the Data Mover field,
and for Data Mover Interface, assign the IP address of the first
network interface that was created.
21. Provision the second NFS datastore using USM. For the second
NFS datastore, select the primary Data Mover in the Data Mover
field and assign the IP address of the second network interface
that was created.
Note: Provision storage for NFS datastore to a new file system using EMC VSI on page 71 provides details on how to provision an NFS datastore with USM.
VMware provides drivers for supported iSCSI HBA, FCoE CNA, and
NIC cards as part of the VMware ESXi distribution. The VMware
compatibility guide provides additional details on qualified adapters.
The EMC E-Lab Interoperability Navigator utility available on
EMC Powerlink provides information about supported adapters for
connectivity of VMware vSphere to VNX.
VMkernel port configuration in ESXi
The ESXi VMkernel port group enables the use of iSCSI and NFS
storage. When ESXi is configured for IP storage, the VMkernel
network interfaces are configured to access one or more iSCSI
Network Portals on the VNX storage processors, or NFS servers on
VNX Data Movers.
To configure the VMkernel interface, complete the following steps:
1. Select an unused network interface that is physically cabled to or
logically part of the same subnet (VLAN) as the VNX iSCSI
Network Portal.
2. To set the network access, complete the following steps:
a. Select the vSwitch to handle the network traffic for the connection and click Next.
b. In the Network Label field, type a name for the VMkernel port group.
3. Click Next. The VMkernel - IP Connection Settings dialog box
appears.
4. To specify the VMkernel IP settings, do one of the following:
Select Obtain IP settings automatically to use DHCP to
obtain IP settings.
Select Use the following IP settings to specify IP settings
manually.
5. If Use the following IP settings is selected, provide the following
details:
Type the IP Address and Subnet Mask for the VMkernel
interface.
Click Edit to set the VMkernel Default Gateway for VMkernel
services, such as vMotion, NAS, and iSCSI.
Figure 24
Figure 25
Figure 26
Figure 27
VMDirectPath
Provision storage for NFS datastore to a new file system using EMC VSI
Use this procedure when the vSphere Client is authorized to create a
new VNX file system. Complete the following steps to provision an
NFS datastore on a new VNX file system:
1. Access the vSphere Client.
2. Right-click an object (a host, cluster, folder, or data center).
Note: If you choose a cluster, folder, or data center, all ESXi hosts within
the object are attached to the newly provisioned storage.
Figure 28
11. Select a storage pool from the Storage Pool list box.
Note: The user sees all available storage within the storage pool. Ensure
that the storage pool selected is designated by the storage administrator
for use by VMware vSphere.
12. Type an initial capacity for the NFS export in the Initial Capacity
field, and select the unit of measure from the list box to the right.
13. If required, select Virtual Provisioning to indicate that the new
file systems are thinly provisioned.
Note: When a new NFS datastore is created with EMC VSI, Thin
Provisioning and Automatic File system extension are automatically
enabled. On the New NFS Export page, set the Initial Capacity and the
Max Capacity.
Figure 29
Provisioning an NFS datastore from an existing file system using EMC VSI
Use this feature if the VMware administrator does not have storage
privileges or needs to use an existing VNX file system for the new
NFS datastore.
The storage administrator completes the following steps in advance:
1. Create the file system according to the needs of the VMware
administrator.
2. Mount and export the file system on an active Data Mover.
5. Click Finish.
USM creates the NFS datastore and updates the selected NFS options
on the authorized ESXi hosts.
Figure 30
7. Select the storage pool or RAID group from which you want to
provision the new LUN. Click Next.
8. Select VMFS Datastore or RDM Volume.
Note: Unlike VMFS datastores, RDM LUNs are bound to a single virtual machine and cannot be shared across multiple virtual machines, unless the virtual machines are clustered at the virtual machine level. Use VMFS datastores unless a one-to-one mapping between physical and virtual storage is required.
10. Select a LUN number from the LUN Number list box.
11. Type an initial capacity for the LUN in the Capacity field, and
select the unit of measure from the list box to the right.
Figure 31
12. Click the Advanced button to configure the VNX FAST VP policy
settings for the LUN. There are three tiering policy options:
Auto-Tier: Distributes the initial data placement across all
drive types in the pool to maximize spindle usage for the
LUN. Subsequent data relocation is based on LUN
performance statistics such that data is relocated among tiers
according to I/O activity.
Highest Available Tier: Sets the preferred tier for initial data
placement and subsequent data relocation (if applicable) to
the highest performing disk drives with available space.
Lowest Available Tier: Sets the preferred tier for initial data
placement and subsequent data relocation (if applicable) to
the most cost-effective disk drives with available space.
13. Click Finish.
14. At this point, USM does the following:
a. Creates a LUN in the selected Storage Pool.
For NFS, use the Direct Writes option on VNX file systems. It is
helpful with random write workloads and virtual machine disks
formatted with a 4 KB allocation unit size.
Figure 32
Figure 33
Figure 34
Partition allocation unit size
Figure 35
Figure 36
Virtual machine swap file location
Each new virtual machine is configured with a swap file that stores
memory pages under certain conditions, such as when the balloon
driver is inflated within the guest OS. By default, the swap file is
created and stored in the same folder as the virtual machine.
In some cases, the virtual machine performance can be improved by
relocating the swap file to a separate high-performance device such
as a Flash drive LUN. Additionally, the swap file contains dynamic
data that is reconstructed each time the VM is booted. Backing up the
swap file is of little value, unless your interest is in forensics or trying to re-create a particular system state.
It is also possible to use a local datastore to offload up to 10 percent of
the network traffic that results from the page file I/O.
The tradeoff for moving the swap file to the local disk is that it may
result in additional I/O when a virtual machine is migrated through
vMotion or DRS. In such cases, the swap file must be copied from the
local device of the current host to the local device of the destination
host. It also requires dedicated local storage to support the files.
A better solution is to leverage a high-speed, low latency device such
as Flash drives to support the swap files.
If there is sufficient memory where page reclamation is not expected
(that is, each virtual machine has 100 percent of its memory reserved
from host physical memory), it is possible to use SATA drives to
support page files.
In the absence of this configuration option, use Flash drives for page
files where performance is a concern.
Paravirtual SCSI adapters
N Port ID Virtualization for RDM LUNs
LUNs must be masked to both the ESXi host and the virtual machine where NPIV is enabled.
Figure 37
Figure 38
b. LUNs have the same host LUN number (HLU) as the ESXi
hosts.
c. LUNs must be assigned as RDMs to each virtual machine.
Virtual machine resiliency over NFS
Set the disk timeout value to at least 60 seconds within the guest OS. For the Windows OS, modify HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk and set the TimeoutValue to 120. The following snippet performs the same task and can be used for automation on multiple virtual machines:
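A minimal sketch of such a snippet, run from an elevated command prompt inside the Windows guest:
rem Set the disk I/O timeout to 120 seconds
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeoutValue /t REG_DWORD /d 120 /f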
Monitor datastores using vSphere Client and EMC VSI
Figure 39  Actions tab
Figure 40
When LUN allocations begin to approach the capacity of the pool, the
administrator is alerted. Two non-dismissible pool alerts are
provided:
Figure 41
Figure 42
Figure 43
2. From the top menu bar, select System > Monitoring and Alerts >
Notifications for Files.
3. Click Storage Usage and click Create.
4. Complete the following steps:
a. In the Storage Type field, select File System.
b. In the Storage Resource list box, select the name of the file
system.
Note: Notifications can be added for all file systems.
c. In the Warn Before field, type the number of days to send the
warning notification before the file system is projected to be
full.
Note: Select Notify Only If Over-Provisioned to trigger this notification only
if the file system is over provisioned.
Figure 44
Storage efficiency
Thin Provisioning and compression are practices that administrators
can use to efficiently store data. This section describes how these
technologies are used in a vSphere and VNX environment.
Thinly provisioned storage
VMware offers three options for provisioning a virtual disk. They are
Thin, ZeroedThick (or Thick), and Eagerzeroedthick. A description of
each along with a summary of their impact on VNX Storage Pools is
provided in Table 4 on page 101. Any of the formats listed in the table
can be provisioned from any supported VNX storage device (Thin,
Thick, VNX OE, or NFS).
Table 4  Virtual disk provisioning options
Thin (NFS default): Space is allocated and zeroed only as the guest writes to the disk.
Zeroedthick (VMFS default): Space is reserved in the datastore at creation; each block is zeroed on the first write from the guest.
Eagerzeroedthick: Space is reserved and every block is zeroed at creation time.
RDM: Raw device mapping in virtual compatibility mode.
RDMp: Raw device mapping in physical compatibility mode.
Thinly provisioned block-based storage
With respect to the type of VNX storage, Thin LUNs are the only
devices that support oversubscription. Thin LUNs are created from
storage pools that preserve space by delaying block allocation until it
is required by an application or guest operating system. Although
Thick LUNs are created from storage pools, their space is always
reserved and thus they have no thin provisioning benefits. Similarly,
the blocks assigned for VNX OE LUNs are always allocated within
RAID Groups with no thin-provisioned option.
When referring to block-based thin provisioning within this section,
the focus is exclusively on VNX Thin LUNs for VMFS or RDM
volumes.
Figure 45
Thin virtual disks can be used to preserve space within the VMFS
datastore. The thin VMDK only allocates VMFS blocks needed by the
virtual machine for guest OS or application use. Thin VMDKs can be
created on a Thick LUN to preserve space within the file system or on
a Thin LUN to extend that benefit to the storage pool. In the example
in Figure 46, the same 500 GB virtual disk is created within a VMFS.
This time the disk is created in a thin-provisioned format. With this
option, the VMFS only uses 100 GB within the file system and 100 GB
within the VNX storage pool. Additional space is allocated when it is
required by the virtual machine. Additionally, the allocation unit is the equivalent of the block size used to format the VMFS. So rather than allocating in the 4 KB or 8 KB blocks that the virtual machine uses, the minimum allocation size for ESXi is 1 MB, which is the default block size for a VMFS volume, and up to 8 MB, which is the maximum block size used by VMFS. This is beneficial when using Thin on Thin.
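To make these units concrete, both settings are visible from the ESXi command line. A minimal sketch, with a hypothetical device and datastore path, formatting a VMFS3 volume with a 1 MB block size and creating a thin virtual disk on it:
# vmkfstools -C vmfs3 -b 1m -S Datastore01 /vmfs/devices/disks/naa.6006016012345678:1
# vmkfstools -c 500G -d thin /vmfs/volumes/Datastore01/vm1/vm1.vmdk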
Figure 46
Figure 47
Zeroedthick virtual disks are the default option, created when neither the "Allocate and commit space on demand" option nor the "Support clustering" option is selected. In this example, since neither option has been selected, a zeroedthick VMDK is created.
Selecting the zeroedthick option for virtual disks on VMFS volumes affects the space allocated to the guest file system (or the writing pattern of the guest OS device). If the guest file system initializes all blocks, the virtual disk needs all of its space to be allocated up front. When the first write is triggered on a zeroedthick virtual disk, zeroes are written to the entire region defined by the VMFS block size, not just the block that was written to by the application. This behavior affects the performance of array-based replication software because more data,
Figure 48
After additional space has been added or reclaimed from the storage
pool, the virtual machine can resume execution without any adverse
effects by selecting the Retry option. If an application times out while
waiting for storage capacity to become available, the application
must then be restarted. The Stop option causes the virtual machine to
be powered off.
Thinly provisioned file-based storage
Figure 49
VMware vSphere virtual disks created from NFS are always thin
provisioned. The virtual disk provisioning policy setting for NFS is
shown in Figure 50.
Figure 50
LUN Compression
VNX LUN Compression offers capacity savings to the users for data
types with lower performance requirements. LUNs presented to the
VMware ESXi host are compressed or decompressed as needed. As
shown in Figure 51 on page 109, compression is a LUN attribute that
can be enabled and disabled on a per-LUN basis. When enabled, data
on disk is compressed in the background. If the source is a VNX OE
or Thick Pool LUN, it undergoes an online migration to a thin LUN
when compression is enabled. Additional data written by the host is
initially stored uncompressed, and system-defined thresholds are
used to automatically trigger the compression of new data
asynchronously. Host reads of compressed data are decompressed in
memory but left compressed on disk. These operations are largely
Figure 51
The inline read and write operations on compressed data affect the performance of individual I/O threads; therefore, compression is not recommended in the following cases:
File Deduplication and Compression
Efficient deployment and cloning of virtual machines stored on VNX file systems over NFS
VNX File Deduplication and Compression can target active virtual
disk files (VMDK files) for data compression and cloning purposes.
This feature is available for VMware vSphere virtual machines that
are deployed on VNX-based NFS datastores.
Virtual machine compression with VNX File Deduplication and
Compression
With this feature, the VMware administrator can compress a virtual
machine disk at the VNX level and thus reduce the file system storage
consumption by up to 50 percent. There is some CPU overhead
associated with the compression process, but VNX includes several
optimization techniques to minimize this performance impact.
Virtual machine cloning with VNX File Deduplication and
Compression
VNX File Deduplication and Compression provides the ability to
perform efficient, array-level cloning of virtual machines. Two
cloning alternatives are available:
Chapter 2  Cloning Virtual Machines
Introduction
To help meet the ever-increasing demands put on resources, IT administrators create exact replicas, or clones, of existing fully configured virtual machines to quickly deploy groups of virtual machines. Cloning works by creating a copy of the virtual machine's VMDKs and configuration files.
VMware vSphere provides two native methods to clone virtual machines: the Clone Virtual Machine wizard in vCenter Server and the VMware vCenter Converter.
VNX SnapView for block storage using the FC, iSCSI, or FCoE
protocol
VNX SnapSure for file systems when using the NFS protocol
Cloning VMFS datastores
Figure 52
Figure 53
Clone virtual machines for RDM volumes
differs from the one stored on it. This enables the VMkernel to identify the copy correctly. vSphere provides selective resignaturing at an individual LUN level rather than at the ESXi host level.
Note: There are command line options such as LVM.EnableResignature = 1; however, they are not the recommended approach to resignaturing with vSphere 4.
After a rescan, the user can either keep the existing signature of the
replica (LUN) or resignature the replica (LUN) if needed:
Keep the existing signature: Presents the copy of the data with the same label name and signature as the source device. However, on VMware ESXi hosts with access to both the source and target devices, the parameter has no effect because VMware ESXi does not present a copy of the data if there are signature conflicts.
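For reference, the same choice is exposed through the vSphere 4 service console with the esxcfg-volume utility. A minimal sketch, using a hypothetical datastore label:
List VMFS copies (snapshot or replica volumes) detected by the host:
# esxcfg-volume -l
Mount a copy while keeping its existing signature (non-persistent mount):
# esxcfg-volume -m Datastore01
Alternatively, resignature the copy so that it can coexist with the source:
# esxcfg-volume -r Datastore01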
Note: The name present in the VMFS Label column indicates that the
LUN is a copy of an existing vStorage VMFS datastore.
Figure 54
Figure 55
vStorage API supports both VMFS datastores and RDM volumes, and works with the VNX platform FC, iSCSI, and FCoE protocols. The storage systems must use VNX OE code, release 31 or later, for the host to use the new vStorage APIs. The Full Copy feature of the VAAI suite offloads virtual machine cloning operations to the storage system.
Note: VAAI support is provided with VNX storage systems running VNX OE for block version 5.31.
The host issues the Extended Copy SCSI command to the array and
directs the array to copy the data from a source LUN to a destination
LUN or to the same source LUN. The array uses its efficient internal
mechanism to copy the data and return Done to the host. Note that
Full Clone: A full clone operation can be performed across file systems on the same Data Mover. Full clone operations occur on the Data Mover rather than on the ESXi host, which frees ESXi CPU cycles and network resources by eliminating the need to pass data through the ESXi host. By removing the ESXi host from the process, a virtual machine clone operation can complete two to three times faster than a native vSphere virtual machine clone operation.
Summary
The VNX platform-based technologies provide an alternative to
conventional VMware-based cloning. VNX-based technologies create
virtual machine clones at the storage layer in a single operation.
Offloading these tasks to the storage systems provides faster
operations with limited vSphere CPU, memory, and network
resource consumption.
VNX-based technologies provide options for administrators to:
Table 5  VNX cloning technologies
Block storage (VMFS datastores or RDM): VNX SnapView
Network-attached storage (NFS datastores): VNX SnapSure
Chapter 3  Establishing a Backup and Recovery Plan for VMware vSphere on VNX Storage
Introduction ...................................................................................... 128
Virtual machine data consistency .................................................. 129
VNX native backup and recovery options ................................... 131
Backup and recovery of a VMFS datastore .................................. 134
Backup and recovery of RDM volumes ........................................ 138
Replication Manager ........................................................................ 139
vStorage APIs for Data Protection ................................................. 143
Backup and recovery using VMware Data Recovery ................. 145
Backup and recovery using Avamar ............................................. 148
Backup and recovery using NetWorker ........................................ 157
Summary ........................................................................................... 164
Introduction
The combination of EMC Data Protection technologies and VMware
vSphere offers many backup and recovery options for virtual
machines provisioned from VNX storage. It is important to determine the recovery point objective (RPO) and the recovery time objective (RTO) so that an appropriate method is used to meet the service level requirements and minimize downtime.
At the storage layer, two types of backup are discussed in this
chapter: logical backup and physical backup. A logical backup
(snapshot) provides a view of the VNX file system or LUN at a
specific point in time. Logical backups are created rapidly and require
very little storage space so they can be created frequently. Restoring
from a logical backup can also be done quickly, dramatically reducing
the mean time to recover. The logical backup protects against logical
corruption of the file system or LUN, accidental deletion of files, or
other similar human errors.
A logical backup cannot replace a physical backup. A physical
backup creates a full copy of the file system or LUN on different
physical media. Although backup and recovery time may be longer, a
physical backup provides a higher level of protection because it can
withstand a hardware failure on the source device. Physical backups
guard against data unavailability caused by hardware failure.
Figure 56
Figure 57
3. Select the appropriate number of copies for each source LUN and
optionally assign the snapshot to other ESXi Hosts as shown in
Figure 58 on page 136.
Figure 58
Failure to set the parameters properly may result in the host viewing
the LUN as a new device, which can only be added to the
environment by formatting it as a new datastore. When the snapped
VMFS LUN is accessible from the ESXi host, the virtual machine files
can be copied from the snapped datastore to the original VMFS
datastore to recover the virtual machine.
Note: Replication Manager functions only with RDM volumes created with
the physical compatibility mode option and formatted as NTFS volumes.
Replication Manager
EMC Replication Manager is a software solution that integrates with
EMC data protection technologies to simplify and automate
replication tasks. Replication Manager uses EMC SnapSure or EMC
SnapView to create local replicas of VNX datastores.
Replication Manager provides additional protection by creating
VMware snapshots of all online virtual machines before creating local
replicas. This step ensures that the operating system of the virtual
machine is in a crash-consistent state when the replica is created.
Replication Manager uses a physical or virtual machine to act as a
proxy host to manage all tasks within the VMware and VNX storage
environment. The proxy host must be configured to communicate
with the vCenter Server and the storage systems. It performs
enumeration of storage devices from the virtualization and storage
environment and performs the necessary management tasks to
establish consistent copies of the datastores and virtual machine
disks. Use the Replication Manager Job Wizard, as shown in
Figure 59 on page 140, to select the replica type and expiration
options. Replication Manager 5.2.2 must be installed for datastore
support.
Figure 59
Figure 60
Figure 61
Figure 62
Figure 63
Figure 64
Figure 65
Avamar backups
Avamar provides the following backup options for vSphere
environments:
Figure 66
After you install and configure the proxy to protect either Windows or Linux virtual machines and configure Avamar to protect the VM, you can schedule backups or run them on demand. Avamar integrates with vCenter and offers a similar management interface to import and configure VM protection. Figure 66 shows a sample proxy configuration.
Avamar Manager can also enable Change Block Tracking (CBT) for
virtual machines to further accelerate backup processing. With CBT
enabled, Avamar can easily identify and deduplicate the blocks that
Figure 67
When a backup job starts, Avamar signals the vCenter server to create a new snapshot image of each VMDK specified in the backup policy, and uses the VADP SCSI hot-add capability to mount the snapshot to the image proxy. If change block tracking is enabled, Avamar uses it to filter the data that is targeted for backup. After Avamar establishes a list of blocks, it applies deduplication algorithms to determine whether the segments are unique. If they are, it copies them to the AVE server; otherwise, it creates a new pointer referencing the existing segment on disk. The image proxy then copies those blocks to the Avamar Virtual Appliance, which is backed by virtual disks created from VNX storage.
Figure 68
Figure 69
Restore requests pass from the Avamar system through the Windows FLR proxy and on to the machine being protected. The recovery speed of this operation is limited by the FLR proxy's ability to read the data and send it to the machine that the administrator is recovering to. Therefore, large data recoveries through the FLR proxy are not advisable; in such cases an image-level out-of-place recovery is more efficient.
Note: In order for file-level recovery to work, the target virtual machine must be powered on and running VMware Tools.
Figure 70
Figure 71  VADP snapshot
Figure 72
VNX NAS file system NDMP backup and restore using NetWorker
NetWorker provides two methods of storage integration with VNX NFS datastores. VNX provides file systems for use as Advanced File Type Devices (AFTDs) or for configuration as a virtual tape library unit (VTLU).
After configuring a VTLU on the VNX file system, configure
NetWorker as an NDMP target for backing up NFS datastores that
reside on the VNX platform. Configure NetWorker to use VNX File
System Integrated Checkpoints to create NDMP backups in the
following manner:
1. Create a Virtual Tape Library Unit (VTLU) on VNX NAS.
2. Create a library in EMC NetWorker.
3. Configure NetWorker to create bootstrap configuration, backup
group, backup client, and so on.
4. Run NetWorker backup.
5. Execute NetWorker Recover.
The entire datastore or individual virtual machines are available for
backup or recovery. Figure 73 shows NetWorker during the process.
Figure 73
Figure 74
Summary
This section has provided several backup options and examples of virtual machine protection. There are native options and tools within the VNX storage system that provide the ability to create replicas or snapshots of the device backing the datastore. SnapSure, for example, can be used to create a point-in-time copy of an NFS datastore. Similar capabilities exist using LUN clones or snapshots for VNX block environments.
The VMware Data Recovery appliance can be deployed and configured fairly easily and populated with VNX block storage to support up to 100 virtual machines for each appliance.
In larger environments, EMC Avamar scales significantly better and introduces considerable benefits through global data deduplication and reduced resource requirements in all areas of backup. EMC Avamar Virtual Edition for VMware and the Avamar Image Proxy, provided as virtual appliances, can be installed and configured quickly, with tight vCenter integration for vSphere environments. These products can be backed by VNX storage, providing an efficient and scalable data protection solution.
EMC NetWorker offers an image protection option for vSphere with tight vCenter integration to create and manage individual VM backup and restore options. NetWorker provides NDMP support for VNX as well as integration with VNX OE for file virtual tape libraries. Table 6 on page 165 summarizes some of the backup technologies and products that can be used to establish image and file backup approaches. The VNX storage system and vSphere are integrated with many data protection solutions. The information in this section and in the table is not meant to be a comprehensive list of qualified products, but rather an example of the data protection options and technologies that exist for EMC VNX and VMware vSphere.
Table 6 on page 165 summarizes the backup and recovery options for
VNX with vSphere 4.
Table 6  Backup and recovery options
The table covers Replication Manager, VDR, Avamar (with the Avamar proxy), and NetWorker across file-level restore, VMFS/NFS datastore, RDM (physical), and RDM (virtual) protection.
Chapter 4  Using VMware vSphere in Data Restart Solutions
Introduction ...................................................................................... 168
Definitions ......................................................................................... 169
EMC remote replication technology overview ............................ 172
RDM volume replication ................................................................. 187
Replication Manager ........................................................................ 191
Automating Site Failover with SRM and VNX ............................ 193
Summary ........................................................................................... 203
Introduction
With the growing number of servers being virtualized, it is critical to have a Business Continuity (BC) plan for the virtualized datacenter.
Administrators can use native VNX and EMC replication
technologies to create stand-alone Disaster Recovery (DR) point
solutions or combine them with VMware Site Recovery Manager to
provide an integrated disaster recovery solution.
This section focuses on using EMC replication technologies to
provide DR solutions. It covers remote replication technologies to
create full copy LUN replicas at the secondary site. Replicas can
satisfy business processes or be integrated into disaster recovery
solutions. These solutions normally involve a combination of virtual
infrastructure at multiple geographically-separated data centers with
EMC technologies replicating data between them.
Topics covered in this section include:
Definitions
The following terms are used in this chapter.
Zero data loss is the ideal goal, but it carries added financial and application considerations. For regulated business services such as those in the financial sector, zero data loss may be a requirement, resulting in synchronous replication of each transaction. That added protection can impact application performance and infrastructure costs.
Design considerations for disaster recovery and data restart
Geographically distributed virtual infrastructure
VMware does not provide native tools to replicate data from one
ESXi host to another ESXi host at a geographically separated location.
Software-based replication technology can be used inside virtual
machines or the service console, but may add complexity and
consume significant network and CPU resources. Integrating VNX
storage system replication products with VMware technologies
enables customers to provide cost-effective disaster recovery and
business continuity solutions. Some of these solutions are discussed
in the following sections.
Note: Similar solutions are possible using host-based replication software such as RepliStor. However, storage-array replication offers a disaster restart solution with business-consistent views of data across multiple hosts, operating systems, and applications.
Table 7  Remote replication options (MirrorView) for VMFS and RDM volumes
EMC Replicator
Figure 75  Replication Wizard
Note: The destination can be the same Data Mover (loopback replication), another Data Mover in the same VNX cabinet, or a Data Mover in a different VNX cabinet.
EMC MirrorView
MirrorView LUN replication
MirrorView consistency group

                                        VNX5100   VNX5300   VNX5500   VNX5700   VNX7500
Maximum number of mirrors                  128       128       256       512      1024
Maximum number of consistency groups        64        64        64        64        64
Maximum number of mirrors per
consistency group                           32        32        32        64        64
Figure 77
Synchronous MirrorView (MV/S)
2. Create a mirror on the production array and add the source LUN, as shown in the following example:
naviseccli -h <source-spa> -scope 0 -user <User> -password <password> mirrorview -create -name LU4 -lun 4
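To complete the pair, the secondary image on the remote array is then added to the mirror. The following is a minimal sketch patterned after the -create form above, with hypothetical addresses; the exact option set varies by VNX OE release, so verify it against the naviseccli reference:
naviseccli -h <source-spa> mirrorview -addimage -name LU4 -arrayhost <remote-spa-ip> -lun 4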
Figure 78
Figure 79
You can fail over MirrorView LUNs or consistency groups to start the virtual environment at the remote site. In a planned failover, disable or shut down the production VNX site before initiating these tasks. To ensure no loss of data, synchronize the secondary MirrorView/S LUNs before starting the failover process. In a MirrorView/A pair, the secondary image lags behind the primary image, so perform a manual update of the secondary image after the applications at the production site are shut down.
Use the following commands to promote a LUN or consistency group in a MirrorView/S relationship:
naviseccli -h <source-spa> mirror -sync -promoteimage -name <name> -type normal
naviseccli -h <source-spa> mirror -sync -promotegroup -name <name> -type normal
EMC RecoverPoint
Figure 80
Use write splitters that reside on the VNX arrays or in the SAN
fabric. The write splitter intercepts the write operations destined
to the ESXi datastore volumes and sends them to the
RecoverPoint appliance that transmits them to the remote
location over IP networks as depicted in Figure 80 on page 183.
RecoverPoint VAAI support
Note: The RecoverPoint SAN splitter does not include support for any VAAI SCSI commands. Disable VAAI if the SAN splitter is used.
Using the Advanced Settings option of the ESXi host, the VAAI features can be disabled by setting the values of HardwareAcceleratedMove, HardwareAcceleratedInit, and HardwareAcceleratedLocking to zero, as shown in Figure 81 on page 185.
Figure 81
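The same settings can also be applied from the command line. A minimal sketch, using the ESXi 4.1 advanced configuration paths for these options:
# esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit
# esxcfg-advcfg -s 0 /VMFS3/HardwareAcceleratedLocking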
Note: All virtual disk devices (VMFS and RDM) that constitute a virtual
machine must be a part of the same consistency group. If application
consistency is required when using RDMs, install the RecoverPoint driver in
the Windows guest OS. Table 9 on page 186 summarizes the support options
available when using RecoverPoint for replication with VNX.
Table 9  RecoverPoint splitter support
The table compares three write splitters (the Windows host write splitter, the Brocade/Cisco Intelligent Fabric write splitter, and the array-based write splitter) across features including VMFS support, VMotion support, HA/DRS support, and RDM support (physical compatibility mode, RDM/P, only).
Configure remote sites for vSphere virtual machines with RDM
Windows disk            Virtual device node
\\.\PHYSICALDRIVE2      SCSI (0:1)
\\.\PHYSICALDRIVE3      SCSI (0:2)
\\.\PHYSICALDRIVE4      SCSI (0:3)
These three VNX LUNs are replicated to a remote VNX using LUNs 2, 3, and 4, respectively. The running virtual machine at the remote site, which already has a boot image disk configured as SCSI target 0:0, should be presented three RDM disks as SCSI disks 0:1, 0:2, and 0:3, respectively.
Therefore, EMC recommends using a copy of the source virtual
machine's configuration file instead of replicating the VMware file
system. Complete the following steps to create copies of the
production virtual machine by using RDMs at the remote site:
1. Create a directory within a cluster datastore at the remote location
to store the replicated virtual machine files.
Note: Select a datastore that is not part of the current replication to
perform this one-time operation.
2. Copy the configuration file for the source virtual machine to the
directory. This task does not need to be repeated unless the
configuration of the source virtual machine changes.
3. Register the cloned virtual machine using the Virtual
Infrastructure client or the service console.
4. Generate RDMs on the target VMware ESXi hosts. The RDMs
should be configured to use the secondary MirrorView images.
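For illustration, an RDM mapping file that points at a replicated device can be generated from the service console with vmkfstools. A minimal sketch, assuming a hypothetical device identifier and datastore path, using physical compatibility mode:
# vmkfstools -z /vmfs/devices/disks/naa.6006016012345678 /vmfs/volumes/Datastore01/vm1/vm1_rdm.vmdk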
The virtual machine at the remote site can be powered on using either
the Virtual Infrastructure client or the service console.
Start virtual machines at a remote site after a disaster
Configure remote sites for virtual machines using VMFS
2. Use the vSphere Client to initiate a SCSI bus rescan after surfacing
the target devices to the VMware ESXi hosts.
3. Use the vCenter client Add storage wizard to select the replicated
devices holding the copy of the VMware file systems. Select the
Keep existing signature option for each LUN copy. After all devices are processed, the VMware file systems are displayed under the Storage tab of the vSphere Client interface.
4. Browse the datastores with the vSphere Client and perform
selective registration of the virtual machines.
Note: When using replication from multiple sources it is possible to
duplicate virtual machine names. If this occurs, select a different variant
of the machine name during virtual machine registration.
5. The virtual machines on the VMware ESXi hosts at the remote site
will start without any modification if the following requirements
are met:
The target VMware ESXi host has the same virtual network
switch configuration. For example, the name and number of
virtual switches are duplicated from the source VMware ESXi
cluster group.
All VMware file systems that are used by the source virtual
machines are replicated.
The minimum memory and processor resource requirements
of all cloned virtual machines can be supported on the target
VMware ESXi hosts.
Devices such as CD-ROM and floppy drives are attached to
physical hardware or placed in a disconnected state on the
virtual machines.
6. Power on the cloned virtual machines using the vSphere Client or command line utilities when required.
When the virtual machines are powered on for the first time, a message regarding msg.uuid.altered appears. Select I moved it to complete the power-on procedure.
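A command-line sketch of steps 2 and 3: from the service console, rescan an adapter and persistently mount a replicated LUN copy while keeping its existing signature. The adapter name and volume label are hypothetical placeholders:

    # Rescan the adapter so the host discovers the surfaced target devices
    esxcfg-rescan vmhba1

    # List detected VMFS snapshot/replica volumes, then mount one
    # persistently while keeping the existing signature
    esxcfg-volume -l
    esxcfg-volume -M replicated_datastore01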
Replication Manager
Replication Manager supports all the replication technologies introduced in Definitions on page 169. It simplifies creating and mounting replicas by defining application sets that execute against VNX and virtual machine storage devices through the corresponding element manager.
Application sets also provide the option to create workflows to
prepare an application by setting it to hot standby mode or otherwise
ensuring that it is in a consistent state prior to the creation of the
replica.
In a VMware environment, Replication Manager uses a proxy host (physical or virtual) to initiate management tasks with vCenter and the VNX storage systems. The proxy host can be the same physical or virtual host that serves as the Replication Manager server. A vCenter server with a Replication Manager agent must be present in the environment, and the proxy host must also be configured with a Replication Manager agent, EMC Solutions Enabler, and administrative access to the VNX storage systems.
Unless you require application consistency within the guest virtual
machine, there is no need to install Replication Manager on the
virtual machines or the ESXi hosts where the VNX storage resides.
Operations are sent from a proxy to create VMware snapshots of all
online virtual machines that reside on the VNX datastore. This step
ensures the operating system consistency of the resulting replica.
Figure 82 shows the NAS datastore replica in the Replication
Manager.
Figure 82	Replication Manager
Figure 83
Figure 84

Create SRM protection groups at the protected site
However, if your VNX model does not support the number of devices
being protected within a protection group, create multiple VNX
consistency groups for each protection group.
Note: The maximum number of consistency groups allowed per storage
system is 64. Both MirrorView/S and MirrorView/A count toward the total.
Figure 85
To test the plan, click the Test button on the menu bar. During the test, the following events occur:

All the resources created within the SRM protection group are re-created at the recovery site.
When all the defined tasks in the recovery plan are completed, SRM pauses until you verify that the test ran correctly. After you verify the virtual machines and applications at the recovery site, click Continue to revert the environment to its original production state.
Execute an SRM recovery plan at the recovery site

Figure 86
The target LUN is not attached to any ESXi host or other servers.
Before installing the VNX Failback plug-in for VMware vCenter SRM,
install the VMware vCenter SRM on a supported Windows host (the
SRM server) at both the protected and the recovery sites.
Note: Install the EMC Replicator Adapter for VMware SRM on a supported
Windows host (preferably the SRM server) at both the protected and recovery
sites.
Configure enough disk space for both the virtual machines and the swap file at the secondary site so that the recovery plan test runs successfully and without errors.
If SRM is used for failover, use either SRM or MVIV for failback, because manual failback is cumbersome and requires selecting each LUN individually and configuring the Keep the existing signature option for each one.
Summary
Table 11 on page 203 summarizes the data replication solutions for VNX storage presented to an ESXi host.
Table 11	Data replication solutions for VNX storage presented to an ESXi host

Storage type              Replication technology
NAS datastore             EMC Replicator, Replication Manager, VMware vCenter SRM
VMFS/iSCSI                EMC RecoverPoint, Replication Manager, VMware vCenter SRM
RDM/iSCSI (physical)      EMC RecoverPoint, VMware vCenter SRM
RDM/iSCSI (virtual)       EMC RecoverPoint, VMware vCenter SRM
5
Using VMware vSphere for Data Vaulting and Migration
Introduction ...................................................................................... 206
EMC SAN Copy interoperability with VMware file systems .... 207
SAN Copy interoperability with virtual machines using RDM . 208
Using SAN Copy for data vaulting ............................................... 209
Transitional disk copies to cloned virtual machines ................... 217
SAN Copy for data migration from CLARiiON arrays ............. 220
SAN Copy for data migration to VNX arrays .............................. 222
Summary ........................................................................................... 224
Introduction
For businesses, information is critical for finding the right customers, building the right products, and offering the best services. This requires the ability to create copies of the information and make them available, in the most cost-effective way possible, to users involved in different business processes. Businesses may also need to migrate information between storage arrays as requirements change. Additionally, compliance regulations can impose data vaulting requirements that oblige users to create additional copies of the data.
The criticality of the information also imposes strict availability requirements. Few businesses can afford protracted downtime to copy and distribute data to different user groups. Copying and migrating data requires extensive planning and manual work, and because of this complexity the processes are susceptible to errors that can lead to data loss.
VMware ESXi hosts and related products consolidate computing resources to reduce the total cost of ownership. However, the consolidation process can result in contention for compute and storage resources between applications with different service-level agreements.
VMware provides technologies such as Storage vMotion and VAAI to help redistribute virtual machines between available datastores. However, there is still no native solution for a full-scale migration of datastores from one storage location to another. In some cases, using native tools to copy data in a vSphere environment can require extended downtime of virtual machines or the queuing of numerous Storage vMotion tasks. EMC offers technologies to migrate and copy data from one storage array to another with minimal impact on the operating environment. This chapter discusses one such technology, EMC SAN Copy, and its interoperability with vSphere environments using VNX block storage.
Figure 87	Data vaulting of Windows and Linux OS/application data: SAN Copy copies production LUNs from the production VNX to a secondary VNX, where SnapView creates replicas
Figure 88	Data vaulting of VMware file system using SAN Copy
Figure 89
Figure 90 on page 212 shows how to use the Unisphere CLI and agent
to determine the WWNs of VNX devices that need to be replicated. In
addition, the VMware-aware Unisphere feature introduced in VNX
OE 29 provides mapping between VNX LUNs and VMware file
systems to help determine the LUNs that need to be replicated.
Figure 90
Identify the WWN of the remote devices for the data vaulting solution. The WWN is a 128-bit number that uniquely identifies any SCSI device, and it can be determined by using different techniques; for example, the management software for the storage array can report it.
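As one hedged illustration, the Unisphere/Navisphere CLI can report the unique ID (WWN) of a VNX LUN. The SP address and LUN number below are hypothetical placeholders:

    # Report the 128-bit unique ID (WWN) of LUN 20
    naviseccli -h 10.244.156.10 getlun 20 -uid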
Figure 91

Figure 92
Data vaulting of virtual machines configured with RDMs using SAN Copy
Configure remote sites for virtual machines using VMFS
The target VMware ESXi hosts have the same virtual network
switch configuration. For example, the name and number of
virtual switches are duplicated from the source VMware ESXi
cluster group.
All VMware file systems that are used by the source virtual
machines are replicated.
The VMFS labels are unique on the target VMware ESXi hosts.
The target VMware ESXi hosts support the minimum memory and processor resource reservation requirements of all cloned virtual machines. For example, if 10 source virtual machines, each with a memory resource reservation of 256 MB, need to be cloned, the target VMware ESXi cluster should have at least 2.5 GB of physical RAM allocated to the VMkernel.
Devices such as CD-ROM and floppy drives are attached to
physical hardware or are started in a disconnected state when
the virtual machines are powered on.
7. The cloned virtual machines can be powered on by using the vCenter client or command line utilities when required. The process for starting the virtual machines at the remote site is described in Start virtual machines at a remote site after a disaster on page 189.
Configure remote sites for vSphere 4 virtual machines with RDM
2. Copy the configuration file for the source virtual machine to the
directory created in step 1. The command line utility scp can be
used for this purpose.
Note: This step has to be repeated only if the configuration of the source
virtual machine changes.
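For example, from the service console (the hostname and datastore paths are hypothetical placeholders):

    # Copy the source virtual machine's configuration file to the
    # directory created at the remote site in step 1
    scp /vmfs/volumes/prod_ds/vm1/vm1.vmx root@remote-esx:/vmfs/volumes/recovery_ds/vm1/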
The process to start the virtual machines at the remote site is the same as described in Start virtual machines at a remote site after a disaster on page 189.
Migrate a VMware file system
Migrate devices used as RDM
Figure 93
3. Shut down the virtual machines that use the devices being migrated, and then start the SAN Copy session created in the previous step to initiate the data migration from the source devices to the VNX devices.
4. Modify the LUN masking information on both the remote storage array and the VNX array to ensure that the VMware ESXi hosts have access to only the devices on the VNX. Note that the zoning information may also need to be updated to ensure that the VMware ESXi hosts have access to the appropriate front-end Fibre Channel ports on the VNX storage system.
5. After the full SAN Copy session completes, rescan the fabric on the VMware ESXi hosts so that the servers discover the new devices on the VNX; a command-line sketch follows this list. The VMware ESXi hosts also update the /vmfs structures automatically.
6. After the new devices have been discovered, the virtual machines can be restarted. Note that the discussion about virtual machines using unlabeled VMFS or raw devices also applies to the migrations discussed in this section.
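A minimal sketch of the rescan in step 5 from the service console (the adapter name is a hypothetical placeholder):

    # Rescan the adapter to discover the migrated LUNs on the VNX
    esxcfg-rescan vmhba1

    # Confirm the device-to-VMFS volume mapping
    esxcfg-scsidevs -m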
When a significant amount of data must be migrated from a remote storage array to a VNX array, SAN Copy provides a convenient mechanism that leverages storage array capabilities to accelerate the migration. By leveraging SAN Copy, you can significantly reduce downtime while migrating data to VNX arrays.
Summary
This chapter described the use of SAN Copy as a data migration tool for vSphere. SAN Copy provides an interface between storage systems for one-time migrations or periodic updates between storage systems.

One of the unique capabilities of SAN Copy is that it interoperates between different storage system types. As a result, it is also useful for migrating data during storage system upgrades and can be a valuable tool for migrating from an existing storage platform to VNX.

For additional details, refer to the Migrating Data From the EMC CLARiiON Array to a VNX Platform using SAN Copy white paper on Powerlink.EMC.com.