
Using EMC VNX Storage with VMware vSphere

Version 1.0

• Configuring VMware vSphere on VNX Storage
• Cloning Virtual Machines
• Establishing a Backup and Recovery Plan for VMware vSphere on VNX Storage
• Using VMware vSphere in Data Restart Solutions
• Using VMware vSphere for Data Vaulting and Migration

Jeff Purcell

Copyright 2011 EMC Corporation. All rights reserved.


EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO
REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS
PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR
FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
For the most up-to-date regulatory document for your product line, go to the Technical Documentation and
Advisories section on EMC Powerlink.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.

h8229

Contents

Chapter 1    Configuring VMware vSphere on VNX Storage
    Introduction ...................................................................................... 16
    Management options ....................................................................... 19
    VMware vSphere on EMC VNX configuration road map .......... 24
    VMware vSphere installation ......................................................... 26
    VMware vSphere boot from storage .............................................. 27
    Unified storage considerations ...................................................... 33
    Network considerations .................................................................. 48
    Storage multipathing considerations ............................................ 50
    VMware vSphere configuration ..................................................... 64
    Provisioning file storage for NFS datastores ................................ 71
    Provisioning block storage for VMFS datastores and RDM volumes (FC, iSCSI, FCoE) ... 76
    Virtual machine considerations ..................................................... 80
    Monitor and manage storage ......................................................... 92
    Storage efficiency ........................................................................... 100

Chapter 2    Cloning Virtual Machines
    Introduction .................................................................................... 114
    Using EMC VNX cloning technologies ....................................... 115
    Summary ......................................................................................... 126

Chapter 3    Establishing a Backup and Recovery Plan for VMware vSphere on VNX Storage
    Introduction .................................................................................... 128
    Virtual machine data consistency ................................................ 129
    VNX native backup and recovery options ................................. 131
    Backup and recovery of a VMFS datastore ................................ 134
    Backup and recovery of RDM volumes ...................................... 138
    Replication Manager ..................................................................... 139
    vStorage APIs for Data Protection .............................................. 143
    Backup and recovery using VMware Data Recovery ............... 145
    Backup and recovery using Avamar ........................................... 148
    Backup and recovery using NetWorker ..................................... 157
    Summary ......................................................................................... 164

Chapter 4    Using VMware vSphere in Data Restart Solutions
    Introduction .................................................................................... 168
    Definitions ...................................................................................... 169
    EMC remote replication technology overview ......................... 172
    RDM volume replication .............................................................. 187
    Replication Manager ..................................................................... 191
    Automating Site Failover with SRM and VNX ......................... 193
    Summary ......................................................................................... 203

Chapter 5    Using VMware vSphere for Data Vaulting and Migration
    Introduction .................................................................................... 206
    EMC SAN Copy interoperability with VMware file systems .. 207
    SAN Copy interoperability with virtual machines using RDM 208
    Using SAN Copy for data vaulting ............................................. 209
    Transitional disk copies to cloned virtual machines ................. 217
    SAN Copy for data migration from CLARiiON arrays ............ 220
    SAN Copy for data migration to VNX arrays ............................ 222
    Summary ......................................................................................... 224

Figures

1   VNX storage with VMware vSphere ...................................................... 17
2   EMC Unisphere .......................................................................................... 19
3   VSI Feature Manager ................................................................................. 20
4   Storage Viewer presentation of VNX NFS datastore details ............... 22
5   Storage Viewer presentation of VNX block storage details ................. 22
6   Configuration road map ............................................................................ 24
7   Manual assignment of host logical unit for ESXi boot device ............. 29
8   iSCSI port management ............................................................................ 31
9   iBFT interface for VNX target configuration ......................................... 32
10  VNX FAST VP reporting and management interface .......................... 37
11  Disk Provisioning Wizard for file storage ............................................. 39
12  Creation of a striped volume through Unisphere ................................ 42
13  Spanned VMFS-3 tolerance to missing physical extent ....................... 47
14  FC/FCoE topology when connecting VNX storage to an ESXi host .. 50
15  iSCSI topology when connecting VNX storage to ESXi host .............. 51
16  Single virtual switch iSCSI configuration .............................................. 52
17  VSI Path Management multipath configuration feature ..................... 54
18  Multipathing configuration with NFS .................................................... 57
19  Unisphere interface ................................................................................... 59
20  Data Mover link aggregation for NFS server ........................................ 60
21  vSphere networking configuration ......................................................... 61
22  vSwitch1 Properties screen ...................................................................... 62
23  VMkernel Properties screen ..................................................................... 63
24  VMkernel port configuration ................................................................... 66
25  Virtual disk shares configuration ............................................................ 67
26  SIOC latency window ............................................................................... 68
27  Network Resource Allocation interface ................................................. 70
28  File storage provisioning with USM ....................................................... 72
29  Creating a new NFS datastore with USM .............................................. 73
30  Block storage provisioning with USM ................................................... 77
31  Creating a new VMFS datastore with USM ........................................... 78
32  Select the disk ............................................................................................. 81
33  Guest disk alignment validation ............................................................. 83
34  NTFS data partition alignment (wmic command) ................................ 84
35  Output of Linux partition aligned to a 1 MB disk boundary (starting sector 2048) ... 84
36  Output for an unaligned Linux partition (starting sector 63) ............. 85
37  Enable NPIV for a virtual machine after adding an RDM volume .... 88
38  Manually register virtual machine (virtual WWN) initiator records . 89
39  Actions tab .................................................................................................. 93
40  Storage Viewer: Datastores view - VMFS datastore ............................. 94
41  Adjustable percent full threshold for the storage pool ........................ 96
42  Create Storage Usage Notification interface ......................................... 97
43  User-defined storage usage notifications .............................................. 98
44  User-defined storage projection notifications ....................................... 99
45  Thick or zeroedthick virtual disk allocation ........................................ 102
46  Thin virtual disk allocation .................................................................... 103
47  Virtual machine disk creation wizard .................................................. 104
48  Virtual machine out-of-space error message ....................................... 105
49  File system Thin Provisioning with EMC VSI: USM feature ............ 106
50  Provisioning policy for an NFS virtual machine's virtual disk ......... 108
51  LUN Compression property configuration ......................................... 109
52  Performing a consistent clone fracture operation .............................. 117
53  Create a SnapView session to create a copy of a VMware file system ... 118
54  Assign a new signature ........................................................................... 121
55  Create a writeable checkpoint for a NAS datastore ............................ 122
56  ShowChildFsRoot parameter properties in Unisphere ...................... 132
57  Snapshot Configuration Wizard ............................................................ 135
58  Snapshot Configuration Wizard (continued) ...................................... 136
59  Replication Manager Job Wizard .......................................................... 140
60  Replica Properties in Replication Manager ......................................... 141
61  Read-only copy of the datastore view in the vSphere client ............. 142
62  VADP flow diagram ................................................................................ 144
63  VMware data recovery ........................................................................... 145
64  VDR backup process ............................................................................... 146
65  Sample Avamar environment ................................................................ 149
66  Sample proxy configuration .................................................................. 151
67  Avamar backup management configuration options ........................ 152
68  Avamar virtual machine image restore ............................................... 154
69  Avamar browse tree ................................................................................ 155
70  NetWorker virtualization topology view ............................................ 158
71  VADP snapshot ........................................................................................ 159
72  NetWorker configuration settings for VADP ...................................... 160
73  NDMP recovery using NetWorker ....................................................... 162
74  Backup with integrated checkpoint ...................................................... 163
75  Replication Wizard .................................................................................. 175
76  Replication Wizard (continued) ............................................................ 176
77  Preserving dependent-write consistency with MirrorView consistency group technology ... 179
78  EMC VMware Unisphere interface ...................................................... 181
79  Business continuity solution using MirrorView/S in a virtual infrastructure with VMFS ... 182
80  EMC RecoverPoint architecture overview .......................................... 183
81  Disabling VAAI support on an ESXi host ............................................ 185
82  NFS replication using Replication Manager ....................................... 191
83  Registering a virtual machine with ESXi ............................................. 192
84  VMware vCenter SRM configuration ................................................... 195
85  SRM discovery plan ................................................................................ 197
86  MVIV reporting for SRM environments .............................................. 199
87  Data vaulting solution using incremental SAN Copy in a virtual infrastructure ... 210
88  Minimum performance penalty data vaulting solution using incremental SAN Copy ... 211
89  Identifying the canonical name associated with VMware file systems ... 212
90  Using Unisphere CLI/Agent to map the canonical name to EMC VNX devices ... 212
91  Creating an incremental SAN Copy session ....................................... 214
92  Creating an incremental SAN Copy session (continued) .................. 215
93  Creating a SAN Copy session to migrate data to a VNX storage array ... 222

Tables

1   VNX disk types ............................................................................................ 34
2   RAID comparison table ............................................................................... 35
3   Single-LUN and Multi-LUN datastore comparison ............................... 44
4   Allocation policies when creating new virtual disks on a VMware datastore ... 101
5   VNX-based technologies for virtual machine cloning options ........... 126
6   Backup and recovery options ................................................................... 165
7   EMC VMware replication options ........................................................... 172
8   VNX MirrorView limits ............................................................................. 178
9   EMC RecoverPoint feature support ......................................................... 186
10  VNX to virtual machine RDM .................................................................. 188
11  Data replication solutions ......................................................................... 203

Preface

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.
If a product does not function properly or does not function as described in
this document, please contact your EMC representative.
Note: This document was accurate as of the time of publication. However, as
information is added, new versions of this document may be released to the
EMC Powerlink website. Check the Powerlink website to ensure that you are
using the latest version of this document.

Audience

This TechBook describes how VMware vSphere works with the EMC
VNX series. The content in this TechBook is intended for storage
administrators, system administrators, and VMware vSphere
administrators.
Note: Although this document focuses on VNX storage, most of the content
also applies when using vSphere with EMC Celerra or EMC CLARiiON
storage.

Note: In this document, ESXi refers to VMware ESX Server version 4.0 and
4.1. Unless explicitly stated, ESXi 4.x, ESX 4.X, and ESXi are synonymous.


Individuals involved in acquiring, managing, or operating EMC VNX storage arrays and host devices can also benefit from this TechBook. Readers with knowledge of the following topics will benefit:

• EMC VNX series
• EMC Unisphere
• EMC Virtual Storage Integrator (VSI) for VMware vSphere
• VMware vSphere 4.0 and 4.1

Related documentation

The following EMC publications provide additional information:

• EMC CLARiiON Asymmetric Active/Active Feature (ALUA)
• EMC VSI for VMware vSphere: Path Management - Product Guide
• EMC VSI for VMware vSphere: Path Management - Release Notes
• EMC VSI for VMware vSphere: Unified Storage Management - Product Guide
• EMC VSI for VMware vSphere: Unified Storage Management - Release Notes
• EMC VSI for VMware vSphere: Storage Viewer - Product Guide
• EMC VSI for VMware vSphere: Storage Viewer - Release Notes
• Migrating Data From an EMC CLARiiON Array to a VNX Platform using SAN Copy - white paper

The following links to the VMware website provide more information about VMware products:

• http://www.vmware.com/products/
• http://www.vmware.com/support/pubs/vs_pubs.html

The following document is available on the VMware website:

• vSphere iSCSI SAN Configuration Guide

Conventions used in this document

EMC uses the following conventions for special notices:

DANGER indicates a hazardous situation which, if not avoided, will result in death or serious injury.

WARNING indicates a hazardous situation which, if not avoided, could result in death or serious injury.

CAUTION, used with the safety alert symbol, indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

NOTICE is used to address practices not related to personal injury.

Note: A note presents information that is important, but not hazard-related.

IMPORTANT
An important notice contains information essential to software or
hardware operation.
Typographical conventions
EMC uses the following type style conventions in this document.

Normal          Used in running (nonprocedural) text for:
                • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
                • Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
                • URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold            Used in running (nonprocedural) text for:
                • Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
                Used in procedures for:
                • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
                • What the user specifically selects, clicks, presses, or types

Italic          Used in all text (including procedures) for:
                • Full titles of publications referenced in text
                • Emphasis (for example, a new term)
                • Variables

Courier         Used for:
                • System output, such as an error message or script
                • URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold    Used for:
                • Specific user input (such as commands)

Courier italic  Used in procedures for:
                • Variables on the command line
                • User input variables

< >             Angle brackets enclose parameter or variable values supplied by the user

[ ]             Square brackets enclose optional values

|               Vertical bar indicates alternate selections - the bar means "or"

{ }             Braces indicate content that you must specify (that is, x or y or z)

...             Ellipses indicate nonessential information omitted from the example

We'd like to hear from you!


Your feedback on our TechBooks is important to us! We want our
books to be as helpful and relevant as possible, so please feel free to
send us your comments, opinions and thoughts on this or any other
TechBook:
TechBooks@emc.com


1  Configuring VMware vSphere on VNX Storage

This chapter contains the following topics:

Introduction ........................................................................................ 16
Management options......................................................................... 19
VMware vSphere on EMC VNX configuration road map ........... 24
VMware vSphere installation........................................................... 26
VMware vSphere boot from storage ............................................... 27
Unified storage considerations ........................................................ 33
Network considerations.................................................................... 48
Storage multipathing considerations .............................................. 50
VMware vSphere configuration....................................................... 64
Provisioning file storage for NFS datastores.................................. 71
Provisioning block storage for VMFS datastores and RDM volumes (FC, iSCSI, FCoE) ...... 76
Virtual machine considerations ....................................................... 80
Monitor and manage storage ........................................................... 92
Storage efficiency ............................................................................. 100


Introduction
EMC VNX series delivers uncompromising scalability and
flexibility for the midtier while providing market-leading simplicity
and efficiency to minimize total cost of ownership. Customers can
benefit from the following new VNX features:

• Next-generation unified storage, optimized for virtualized applications.
• Extended cache using Flash drives with FAST Cache.
• Fully Automated Storage Tiering for Virtual Pools (FAST VP) that can be optimized for the highest system performance and lowest storage cost on block and file.
• Multiprotocol support for file, block, and object, with object access through Atmos Virtual Edition (Atmos VE).
• Simplified management with EMC Unisphere for a single management framework for all NAS and SAN storage.
• Up to three times improvement in performance with the latest Intel multicore CPUs, optimized with Flash.
• 6 Gb/s SAS back end with the latest drive technologies supported: Flash, SAS, and NL-SAS.
• Expanded EMC UltraFlex I/O connectivity - Fibre Channel (FC), Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), Network File System (NFS) including parallel NFS (pNFS), Multi-Path File System (MPFS), and Fibre Channel over Ethernet (FCoE) connectivity for converged networking over Ethernet.

The VNX series includes five new software suites and three new
software packs, making it easier and simpler to attain the maximum
overall benefits.

Storage alternatives

VMware vSphere supports storage device access for hosts and virtual machines using the FC, FCoE, iSCSI, and NFS protocols provided by the VNX platform. VNX provides the CIFS protocol for shared file systems in a Windows environment.


The VNX system supports one active SCSI transport type at a time. An ESXi host can connect to a VNX block system with any supported adapter type; however, all of the adapters must be the same type, for example, FC, FCoE, or iSCSI. Connecting to a single VNX with different types of SCSI adapters is not supported. This restriction does not apply to NFS, which can be used in combination with any SCSI protocol.

VMware ESXi uses VNX SCSI devices to create VMFS datastores or raw device mapping (RDM) volumes. LUNs and NFS file systems are provisioned from VNX with Unisphere or through the VMware vSphere Client using the EMC VSI for VMware vSphere: Unified Storage Management (USM) feature. VNX platforms deliver a complete multiprotocol foundation for a VMware vSphere virtual data center, as shown in Figure 1.

Figure 1    VNX storage with VMware vSphere


The VNX series is ideal for VMware vSphere in the midrange for the following reasons:

• Provides configuration options for block (FC, FCoE, iSCSI) and file (NFS, CIFS) storage, allowing users to select the best option based upon capacity, performance, and cost.
• Has a modular architecture that allows users to mix Flash, SAS, and Near-Line SAS (NL-SAS) drives to satisfy application storage requirements.
• Scales quickly to address the storage needs of virtual machines on VMware ESXi servers.
• Provides management options through Unisphere and EMC Virtual Storage Integrator (VSI) for VMware vCenter. "Management options" on page 19 provides more information.
• Provides no single point of failure and five 9s availability, which improves application availability.

VMware administrators can use the following features to manage virtual storage:

• Thin Provisioning: Improves storage utilization and simplifies storage management by presenting virtual machines with sufficient capacity for an extended period of time.
• File Compression: Improves the storage efficiency of file systems by compressing virtual disks.
• File Deduplication: Eliminates redundant files in a file system.
• LUN Compression: Condenses data to improve storage utilization on inactive LUNs.
• FAST VP and FAST Cache: Automates sub-LUN data movement in the array to improve total cost of ownership.
• EMC Replication Manager: Provides a single interface to provision and manage application-consistent virtual machine replicas on VNX platforms.
• vStorage APIs for Array Integration (VAAI): Supports efficient SCSI LUN reservation methods that increase virtual machine scalability, and reduces I/O traffic between the host and the storage system during cloning or zeroing operations.


Management options
VMware administrators can use Unisphere or the Virtual Storage
Integrator (VSI) for VMware vSphere to manage VNX storage in
virtual environments.

EMC Unisphere

Unisphere is a common web-enabled interface for remote management of EMC VNX, Celerra, and CLARiiON platforms. It offers a simple interface to manage file and block storage and easily maps storage objects to their corresponding virtual storage objects. Unisphere has a modular architecture that enables users to integrate new features, such as RecoverPoint/SE management, into the Unisphere interface, as shown in Figure 2.

Figure 2    EMC Unisphere


VSI for VMware vSphere

Virtual Storage Integrator (VSI) is a vSphere Client plug-in that provides a single interface to manage EMC storage. The VSI framework enables discrete management components, which are identified as features, to be added to support the EMC products installed within the environment. This section describes the EMC VSI features that are most applicable to the VNX platform: Unified Storage Management, Storage Viewer, and Path Management.

Figure 3    VSI Feature Manager

VSI Unified Storage Management

VMware administrators can use the VSI USM feature to provision and mount new datastores and RDM volumes. For NFS datastores, use this feature to do the following:

• Provision new virtual machine replicas rapidly with full clones or space-efficient fast clones.
• Initiate file system deduplication to reduce the storage consumption of virtual machines created on NFS file systems.
• Simplify the creation of NFS datastores in accordance with best practices.
• Mount NFS datastores automatically to one or more ESXi hosts.
• Reduce the storage consumption of virtual machines using compression or Fast Clone technologies.


• Reduce the copy creation time of virtual machines using the Full Clone technology.

For VMFS datastores and RDM volumes on block storage, use this feature to do the following:

• Provision and mount new storage devices from storage pools or RAID groups.
• Assign tiering policies on FAST VP LUNs.
• Unmask VNX LUNs automatically to one or more ESXi hosts.
• Create VMFS datastores and RDM volumes in accordance with best practices.

VSI Storage Viewer

Storage Viewer enables the vSphere Client to discover and identify VNX storage devices. This feature performs the following functions:

• Merges data from several different storage mapping tools into seamless vSphere Client views.
• Enables VMware administrators to relate VMFS, NFS, RDM, and virtual disk storage to the backing storage devices presented by VNX.
• Presents VMware administrators with details of storage devices accessible to the ESXi hosts in the virtual data center.
• Provides storage mapping and connectivity details for VNX storage devices.

Figure 4 on page 22 illustrates how Storage Viewer can be used to identify the properties of an NFS datastore presented from a VNX storage system. Figure 5 on page 22 illustrates the use of Storage Viewer to identify the properties of VNX block devices.


Figure 4    Storage Viewer presentation of VNX NFS datastore details

Figure 5    Storage Viewer presentation of VNX block storage details


VSI: Path Management

This feature displays multipath properties (including the number of paths, the state of paths, and the path management policy) for the VMware Native Multipathing plug-in (NMP) and PowerPath/VE. It enables administrators to do the following:

• Change the multipath policy based on both storage class and virtualization object.
• Maintain consistent multipath policies across a virtual data center containing a wide variety of storage devices.

The VSI framework and its features are freely available from EMC. Some features are specific to storage platforms such as Symmetrix DMX and VNX. The framework, features, and supporting documents can be obtained from the EMC Powerlink website located at http://Powerlink.EMC.com/.


VMware vSphere on EMC VNX configuration road map

Figure 6 displays the configuration steps for VNX storage with VMware vSphere.

Figure 6    Configuration road map


The primary configuration blocks in Figure 6 on page 24 are:

• NIC and FC/FCoE/iSCSI HBA driver configuration with vSphere - After installing ESXi, configure the physical interfaces used to connect the ESXi host to VNX. "ESXi IP and FC driver configuration" on page 64 provides more details.
• VMkernel port configuration in vSphere - Configure the ESXi host VMkernel interface for IP storage connections to VNX NFS and iSCSI storage. "VMkernel port configuration in ESXi" on page 65 provides more details. A brief command-line sketch of this step follows this list.
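The VMkernel port configuration referenced above can also be performed from the ESXi command line. The following is a minimal sketch, assuming tech support mode or the equivalent vSphere CLI commands, and illustrative names and addresses (vSwitch1, vmnic2, port group IPStorage, VLAN 100, and the 10.1.1.0/24 storage network); adapt these to the environment.

    # Create a vSwitch for IP storage and attach a dedicated uplink
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1

    # Add a port group for the VMkernel interface and, if required, tag a VLAN
    esxcfg-vswitch -A IPStorage vSwitch1
    esxcfg-vswitch -v 100 -p IPStorage vSwitch1

    # Create the VMkernel port used for NFS and/or iSCSI traffic
    esxcfg-vmknic -a -i 10.1.1.21 -n 255.255.255.0 IPStorage

    # Verify the configuration
    esxcfg-vswitch -l
    esxcfg-vmknic -l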

After you install and configure VMware ESXi, complete the following
steps:
1. Ensure that network multipathing and failover are configured
between ESXi and the VNX platform. Storage multipathing
considerations on page 50 provides more details.
2. Complete the NFS, VMFS, and RDM configuration steps using
EMC VSI for USM:
a. NFS - Create and export the VNX file system to the ESXi host. Add NFS datastores to ESXi hosts from the NFS file systems created on VNX. "Provisioning file storage for NFS datastores" on page 71 provides details to complete this procedure using USM. A command-line sketch of this step appears after these steps.
b. VMFS - Configure a VNX FC/FCoE/iSCSI LUN and present
it to the ESXi server.
Configure a VMFS datastore from the LUN that was
provisioned from VNX. Provisioning block storage for VMFS
datastores and RDM volumes (FC, iSCSI, FCoE) on page 76
provides details to complete this procedure using USM.
c. RDM - Configure a VNX FC/FCoE/iSCSI LUN and present it
to the ESXi server.
Create and surface the LUN provisioned from VNX to a
virtual machine for RDM use. Provisioning block storage for
VMFS datastores and RDM volumes (FC, iSCSI, FCoE) on
page 76 provides details to complete this procedure using
USM.
3. Provision newly created virtual machines on NFS or VMFS
datastores and optionally assign newly created RDM volumes.
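As a complement to the USM workflow in step 2a, the NFS export and mount can also be performed from the command line. The sketch below is illustrative only: it assumes a VNX file system named NFS-DS-1 already exists on Data Mover server_2, a Data Mover interface at 10.1.1.50, and an ESXi VMkernel address of 10.1.1.21; verify the export options against the VNX for file command reference, and note that USM remains the recommended method because it applies the best-practice settings automatically.

    # On the VNX Control Station: export the file system to the ESXi VMkernel address
    server_export server_2 -Protocol nfs -option root=10.1.1.21,access=10.1.1.21 /NFS-DS-1

    # On the ESXi host: mount the export as an NFS datastore
    esxcfg-nas -a -o 10.1.1.50 -s /NFS-DS-1 NFS-DS-1

    # Verify that the datastore is mounted
    esxcfg-nas -l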


VMware vSphere installation

Install ESXi on a local disk of the physical server, on a SAN disk with a boot-from-SAN configuration, or on a USB storage device. There are no special VNX considerations when installing the hypervisor image locally. However, consider the following:

• Do not create additional VMFS partitions during the ESXi installation, because the installer does not create aligned partitions. "Virtual machine disk partitions alignment" on page 80 provides more information. A quick alignment check is sketched after this list.
• Install a VMware vCenter host as part of the VMware vSphere and VMware Infrastructure suite.
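Partition alignment can be verified with standard tools from within a Linux guest (or against a local device from the service console or tech support shell); the sketch below is illustrative, and the device name /dev/sdb is an assumption.

    # List partitions with start positions expressed in 512-byte sectors
    fdisk -lu /dev/sdb

    # A partition starting at sector 2048 is aligned to a 1 MB boundary
    # (2048 x 512 bytes = 1,048,576 bytes); a start sector of 63 indicates
    # an unaligned partition that should be recreated on a 1 MB boundary.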


VMware vSphere boot from storage

ESXi offers installation options for USB, Flash, or SCSI devices. Installing the hypervisor image on the SAN can improve performance and has the following benefits:

• Increases the availability of the hypervisor in the virtual environment.
• Places the configuration and environmental information on tier 1 storage, eliminating the local disk failure that would otherwise result in a host failure.
• Distributes the images across multiple spindles.
• Improves reliability through RAID-protected storage and optionally redundant host I/O paths to the boot device.
• Makes host replacement as easy as a BIOS modification and zoning updates, resulting in minimal downtime.

VMware vSphere boot from SAN FC/FCoE LUNs

Complete the administrative tasks for host cabling and storage zoning to ensure that when the host is powered on, the HBAs log in to the storage controllers on the VNX platform.

If this is an initial installation and zoning is complete, obtain the World Wide Names (WWNs) for the HBAs from the SAN switch, or from the Unisphere Host Connectivity page after the host initiators log in to the VNX SCSI targets:

1. Gather the information required to configure the environment using the selected front-end ports on the array. This information should include:
   • ESXi hostname
   • IP addresses
   • The HBA WWNs, if available
   • VNX management IP address and credentials
2. Power on the ESXi host.
3. Modify the host BIOS settings to disable internal devices that are not required and to establish the proper boot order.
4. Ensure that the following are enabled:
   • Virtual floppy or CD-ROM device.
   • Local device follows the CD-ROM in the boot order.
   • For software iSCSI, the iSCSI adapter is enabled for iSCSI boot.
5. Enable the FC, FCoE, or iSCSI adapter as a boot device.
6. Verify that the adapter can access the VNX platform by displaying the properties of the array controllers.
7. Access the Unisphere interface to view the Host Connectivity Status and to verify that the adapters are logged in to the correct controllers and ports. In some cases, a rescan of the storage adapters is required to establish the SCSI IT nexus. Although vSphere is integrated with VNX to automatically register initiator records for a running ESXi server, boot from SAN requires manual registration of the HBAs. Select the new initiator records and manually register them using the fully qualified domain name of the host. ALUA mode (failover mode 4) is required for VAAI support.

   Note: On some servers, the host initiators may not appear until the host operating system installation is started. Examples are ESXi installations and Cisco UCS, which lacks an HBA BIOS probe capability.

8. Create a LUN on which to install the boot image. The LUN does not need to be larger than 20 GB. Do not store virtual machines within this LUN.
9. Create a storage group and add the host record and the new LUN to it.
10. Rescan the host adapter to discover whether the new device is accessible. If the LUN does not appear, or appears as LUNZ, recheck the configuration and rescan the HBA.
11. Reserve a specific host LUN ID to identify the boot devices. For example, assign a host LUN number (HLU) of 0 to LUNs that contain the boot volume. Using this approach makes it easy to differentiate the boot volume from other LUNs assigned to the host. If the host accesses multiple storage systems, do not reuse the reserved HLU number when assigning LUNs. A command-line sketch of steps 9 through 11 follows this procedure.

Figure 7    Manual assignment of host logical unit for ESXi boot device

12. Ensure that the CD-ROM/DVD-ROM/USB/virtual media is in the caddy and precedes the local device in the boot order.
13. Install the ESXi code, select the DGC device, and follow the installation steps to configure the host.
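Steps 9 through 11 can also be performed with the VNX block CLI instead of Unisphere. The following is a minimal sketch, assuming the host has already been registered as esx01 (step 7), the boot LUN exists as array LUN 20, and naviseccli is authenticated to a storage processor at 10.0.0.1; option spelling can vary by release, so verify against the CLI reference for the installed VNX OE.

    # Create a storage group for the ESXi host
    naviseccli -h 10.0.0.1 storagegroup -create -gname ESX01_Boot

    # Add the boot LUN (array LUN 20) and present it as host LUN 0,
    # making the boot device easy to distinguish from data LUNs
    naviseccli -h 10.0.0.1 storagegroup -addhlu -gname ESX01_Boot -hlu 0 -alu 20

    # Connect the registered host record to the storage group
    naviseccli -h 10.0.0.1 storagegroup -connecthost -host esx01 -gname ESX01_Boot -o

    # Confirm the HLU/ALU mapping
    naviseccli -h 10.0.0.1 storagegroup -list -gname ESX01_Boot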


VMware vSphere boot from SAN iSCSI LUNs

With ESXi 4.1, VMware introduced support for booting from the software iSCSI initiator.

When booting from the VNX platform, the iSCSI protocol provides many of the same benefits as FC storage. iSCSI is easier to configure and less expensive than Fibre Channel options. However, there may be a slight difference in response time because iSCSI is not a closed-loop protocol like FC.

The network card must support software initiator boot for this configuration to work properly, and it should support 1 Gb/s or greater for iSCSI SAN boot. Check the VMware Hardware Compatibility List to verify that the device is supported before beginning this procedure. Access the iSCSI adapter configuration utility during the system boot to configure the HBA:

• Set the IP address and IQN name of the iSCSI initiator.
• Define the VNX iSCSI target address.
• Scan the target.
• Enable the boot settings and the target device.

The vendor documentation provides instructions to enable and configure the iSCSI adapter:

1. Some utilities use a default iSCSI Qualified Name (IQN). Each initiator requires a unique IQN for storage group assignment on the VNX platform.
2. Configure an iSCSI portal on the VNX platform using Unisphere.


Figure 8    iSCSI port management

Unisphere provides support for jumbo frames with valid MTU values of 1488-9000 bytes. When enabling jumbo frames, ensure that all components in the I/O path from the host to the storage interface support jumbo frames, and that the MTU sizes of the interface card on the ESXi host, the network ports, and the VNX port are consistent. An MTU configuration sketch for the ESXi host side appears after these steps.

3. Configure the first iSCSI target by specifying the IP address and the IQN name of the VNX iSCSI port configured in the previous step. Optionally, specify the CHAP properties for additional security of the iSCSI session.

Figure 9    iBFT interface for VNX target configuration

4. Configure the secondary target using the address information for the iSCSI port on storage processor B of the VNX platform.
5. Using Unisphere:
   • Register the new initiator record.
   • Create a new storage group.
   • Create a new boot LUN.
   • Add the newly registered host to the storage group.
6. Proceed with the installation of the ESXi image.
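The following is a minimal sketch of setting a 9000-byte MTU on the ESXi host for the jumbo frame configuration mentioned above, assuming the IP storage vSwitch and VMkernel port names used earlier in this chapter (vSwitch1 and the IPStorage port group); the physical switch ports and the VNX iSCSI/NFS ports must be set to a matching MTU in the network switch and in Unisphere, respectively.

    # Set the MTU on the vSwitch that carries iSCSI/NFS traffic
    esxcfg-vswitch -m 9000 vSwitch1

    # In ESX/ESXi 4.x the VMkernel NIC must be created with the desired MTU;
    # if it already exists with the default MTU, delete and recreate it
    esxcfg-vmknic -d IPStorage
    esxcfg-vmknic -a -i 10.1.1.21 -n 255.255.255.0 -m 9000 IPStorage

    # Verify the MTU values
    esxcfg-vswitch -l
    esxcfg-vmknic -l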


Unified storage considerations

Configuring the VNX array appropriately is critical to ensure a scalable, high-performance virtual environment. This section presents storage considerations for using VNX with vSphere.

With the introduction of storage pools and FAST VP, storage configuration is simplified so that storage devices can be created with differing service levels. The array handles data placement based upon the demands of the servers and their applications. Though pools have been introduced for simplicity and optimization, VNX preserves the RAID group option for internal storage devices used by VNX replication technologies, and for environments or applications with fixed resource reservations.


VNX supported disk types

Table 1 shows how the VNX platform enables users to mix drive types and sizes on the storage array and in storage pools to adequately support the applications.

Table 1    VNX disk types

Flash drives
  Available size: 100 GB, 200 GB
  Benefit: Extreme performance, lowest latency
  Suggested usage: Virtual machine applications with low response time and high-throughput requirements; large-capacity, high-performance VMware environments

Serial Attached SCSI (SAS)
  Available size: 10k and 15k rpm, 300 GB and 600 GB
  Benefit: Cost effective, better performance
  Suggested usage: Most tier 1 and tier 2 business applications, such as SQL, Exchange, and performance-based virtual applications that require a low response time and high throughput

NL-SAS drives
  Available size: 7200 rpm, 1 TB and 2 TB
  Benefit: Performance and reliability that is equivalent to SATA drives
  Suggested usage: Back up the VMware environment and store virtual machine templates and ISO images; a good solution for tier 2/3 applications with low throughput and response time requirements, that is, infrastructure services such as DNS, AD, and similar applications


RAID configuration options

VNX provides a wide range of RAID configuration algorithms to help address the performance and reliability requirements of VMware environments. RAID protection is provided within the VNX operating environment and used by all block and file devices. An understanding of the application and storage requirements in the computing environment helps to identify the appropriate RAID configuration. Table 2 lists the RAID options.

Table 2    RAID comparison table

RAID 0      Striped RAID (no data protection)                         RAID groups only
RAID 1      Data is mirrored across a pair of spindles                RAID groups only
RAID 1/0    Data is mirrored and striped across all spindles          RAID groups and pools
RAID 3      Striped with a dedicated parity disk                      RAID groups only
RAID 5      Striped with distributed parity among all disks           RAID groups and pools
RAID 6      Striped with distributed double parity among all disks    RAID groups and pools

The storage and RAID algorithm chosen is largely based on the throughput and data protection requirements of the applications or virtual machines. The most attractive RAID configuration options for VMFS volumes are RAID 1/0, RAID 5, and RAID 6. Parity RAID provides the most efficient use of disk space to satisfy the requirements of the applications. RAID 1/0 provides higher transfer rates than RAID 5, but consumes more disk space. Based upon testing performed in EMC labs, RAID 5 was chosen in most cases for virtual machine boot disks and the virtual disk storage used for application data. RAID 6 provides the highest level of protection against disk failure and is used when extra protection is required.


FAST VP

VNX FAST VP is the VNX feature that enables a single LUN to leverage the advantages of Flash, SAS, and NL-SAS drives through the use of pools. VNX supports three storage tiers, each using a different physical storage device type (Flash, SAS, and NL-SAS), and each tier offers unique advantages. FAST VP can leverage all three of these tiers at once, or any two at a time.

Note: Rotational speed is not differentiated within a pool tier. Therefore, disks with different speeds can be assigned to the same pool tier. However, that is not a recommended configuration.

FAST VP provides automated sub-LUN-level tiering to classify and place data on the most appropriate storage class. FAST VP collects I/O activity statistics at a 1 GB granularity (known as a slice). It uses the relative activity level of each slice to determine tier placement. Very active slices are promoted to higher tiers of storage. Less frequently used slices are candidates for migration to lower tiers of storage. Slice migration is performed manually or through an automated scheduler.

FAST VP is beneficial because it adjusts to the changing use of data over time. As storage patterns change, FAST VP moves slices among the tiers, matching the needs of the VMware environment with the most appropriate class of storage. VNX FAST VP currently supports a single RAID type across all tiers in a pool. Additionally, the RAID configurations are constructed using five disks for RAID 5, and eight disks for RAID 1/0 and RAID 6 pools. Pool expansion should adhere to these configuration rules and grow in similar increments to avoid parity overhead and unbalanced LUN distribution. In Figure 10, the tiering screen of Unisphere indicates that 47 GB of data has been identified to be moved to the Performance tier and 28 GB will be moved to the Extreme Performance tier. This relocation can be scheduled to run automatically or performed manually.

Figure 10    VNX FAST VP reporting and management interface

VNX FAST Cache

FAST Cache is an optimization technology that can greatly improve the performance of the VMware environment by using Flash drives as a second-level cache. FAST Cache combines hard disk drive (HDD) storage with Flash drives, identifying and promoting the most frequently used data to the highest class of storage, thus providing an order-of-magnitude performance improvement for that data. It is dynamic in nature and operates at a 64 KB extent granularity. As data blocks within an extent are no longer accessed or the access patterns change, existing extents are destaged to HDD and replaced with higher-priority data.


vStorage API for Array Integration (VAAI)

VAAI storage integration improves the overall performance of the ESXi block storage environment by offloading storage-related tasks to the VNX platform. It provides functions that accelerate common vSphere tasks such as Storage vMotion. An ESXi host connected to a VAAI-capable target device passes the SCSI request to the array and monitors its progress throughout the task. Storage blocks are migrated within the array at an accelerated rate, while limiting the impact on resources required by the host and the front-end ports of the VNX platform. The primary functions are as follows:

Copy: Initiated by vSphere Clone, Storage vMotion, and Deploy VM from Template tasks. With VAAI-enabled storage systems such as VNX, the host passes the copy request to the storage system, which performs the operation internally.

Zeroing of new blocks: Also called zero copy, this function is used to fill data in a newly created Virtual Machine Disk (VMDK) file that contains sparse or unallocated space. Rather than copying large runs of zeros into a new VMDK file, the hardware-accelerated init feature instantaneously creates a file with the proper allocations and initializes the blocks to zero, reducing the amount of repetitive traffic over the fabric from the host to the array.

Hardware Accelerated Locking: Addresses datastore contention that results from virtual machine metadata operations such as create, boot, and update. This VAAI feature adds an extent-based locking solution that enables metadata to be updated without locking the entire device. Heavy metadata operations, such as booting dozens of virtual machines within the same datastore, can take less time.

These VAAI capabilities improve storage efficiency and performance within the VMware environment. They enable dense datastore configurations with improved operational value. EMC recommends using VAAI on all VMFS datastores; the functionality is enabled by default.
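On an ESX/ESXi 4.1 host, the three VAAI primitives correspond to advanced settings that can be checked or, if needed for troubleshooting, disabled; the following is a minimal sketch using the host console commands (vicfg-advcfg provides the same options from the vSphere CLI), where a value of 1 means enabled.

    # Full copy (hardware-accelerated move)
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove

    # Block zeroing (hardware-accelerated init)
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit

    # Hardware-assisted locking
    esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking

    # Example: temporarily disable full copy (set back to 1 to re-enable)
    esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove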

VNX storage pools

VNX provides the capability to group disks into a higher-level storage abstraction called a storage pool. The VNX Operating Environment uses predefined optimization and performance templates to allocate available physical disks to file system and block storage pools. A storage pool is created from a collection of disks within the VNX platform. Storage pools are segmented into 1 GB slices that are used to create LUNs.
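Pool capacity and free space can also be checked from the block CLI in addition to Unisphere; a minimal sketch follows, assuming naviseccli is authenticated to a storage processor at 10.0.0.1 and a pool named Pool_0 (output fields vary by VNX OE release).

    # List all storage pools, including user, consumed, and available capacity
    naviseccli -h 10.0.0.1 storagepool -list

    # Show a single pool by name
    naviseccli -h 10.0.0.1 storagepool -list -name Pool_0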


The primary differences between pools and RAID groups are as follows:

• Pools can span the physical boundaries associated with RAID groups.
• Pools support Thin LUNs (TLUs).
• When configured to use FAST VP, pools can use a combination of any disk types on the system.
• Pools support LUN compression.

Management and configuration of storage pools are accomplished through Unisphere and the storage management wizards accessed in Unisphere.

Figure 11    Disk Provisioning Wizard for file storage


Thick LUNs

A Thick LUN is the default device created when provisioning from a


storage pool. Thick LUNs reserve storage space within the pool that
is equal to the size of the LUN (additionally, there is a small amount
of overhead for metadata). The pool space is protected and cannot be
used by any other storage device. Because the space is guaranteed, a
Thick LUN never encounters an out-of-space condition.

Thin LUNs

Thin LUNs (TLUs) are also created within storage pools. However, a
TLU does not reserve or allocate any user space from the pool.
Internal allocation reserves a few storage pool 1 GB slices when the
LUN is created. No additional storage allocation occurs until the host
or guest writes to the LUN. Select the Thin LUN checkbox in the LUN
creation page of Unisphere to create a TLU.
Note: After a device is written to at the guest level, the blocks remain allocated until the device is deleted or migrated to another thin device. To free deleted blocks, you must compress the LUN.

The primary difference between the thick and thin LUN types is the way storage is allocated within the pool. Thin LUNs reserve a 1 GB slice and then allocate 8 KB blocks from that slice on demand, when the host issues a new write to the LUN. Thick LUNs allocate space in 1 GB increments as new writes to the VMFS datastore are initiated. Another difference is the pool reservation. While both storage types perform on-demand allocation, thick LUN capacity is guaranteed within the pool and deducted from free space. Thin LUN capacity is not reserved or guaranteed within the storage pool, which is why monitoring the free space of pools that contain thin LUNs is important. Monitoring and alerting are covered in "Monitor and manage storage" on page 92 of this document. Because the goal of thin provisioning is economical use of storage resources, TLUs allocate space at a much more granular level than thick LUNs: a thin LUN reserves a 1 GB slice and allocates blocks in 8 KB increments as needed.
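For reference, pool LUNs of either type can also be created from the block CLI rather than the Unisphere checkbox described above. The following sketch assumes a pool named Pool_0 and a storage processor at 10.0.0.1; the switch names shown are from memory and should be verified against the naviseccli reference for the installed VNX OE release.

    # Create a 500 GB thin LUN (space is consumed from Pool_0 only as data is written)
    naviseccli -h 10.0.0.1 lun -create -type Thin -capacity 500 -sq gb -poolName Pool_0 -l 50 -name ESX_Thin_50

    # Create a 500 GB thick LUN (capacity is reserved in the pool at creation time)
    naviseccli -h 10.0.0.1 lun -create -type NonThin -capacity 500 -sq gb -poolName Pool_0 -l 51 -name ESX_Thick_51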

Comparison between pool LUNs and VNX OE for block LUNs

VNX OE for block (VNX OE) LUNs, or RAID group LUNs, are the traditional storage devices that were used before the introduction of storage pools. VNX OE LUNs allocate all of the disk space in a RAID group at the time of creation. VNX OE LUNs are the only available option when creating a LUN from a RAID group, and there is no thin option with VNX OE or RAID group LUNs.


The use of pool LUNs provides a simplified configuration and storage provisioning option. Pools can be much larger than RAID groups, and they support a broader variety of options, including FAST VP and FAST Cache support. Additionally, the benefit of pool LUNs is derived through the ability to make more efficient use of the storage space within VNX, with intelligent placement that aligns data with the usage patterns of the applications. Pools provide a storage efficiency solution that is supported by FAST VP and FAST Cache.

VNX OE for block LUNs are optimized for performance, with all of the space allocated at creation time using contiguous space in the RAID group. There is a high probability that VNX OE for block LUNs will have the best spatial locality of the three LUN types, which is an important consideration to achieve optimal performance from VNX storage. The next best performing option is the thick LUN, which will have better spatial locality than a thin LUN.

Thin LUNs preserve space on the storage system at the cost of a potentially modest increase in seek time due to reduced locality of reference; this applies only to spinning media and not when Flash drives are in use.

Thick LUNs have approximately a 10 percent performance overhead in comparison to VNX OE for block LUNs, whereas thin LUNs can have up to 50 percent overhead.

VNX for file volume management

Automatic Volume Management (AVM) and Manual Volume Management (MVM) are available for users to create and manage volumes and file systems for VMware. AVM and MVM allow users to do the following:

• Create and aggregate different volume types into usable file system storage.
• Divide, combine, or group volumes to meet specific configuration needs.
• Manage VNX volumes and file systems without having to create and manage the underlying volumes.

AVM works well for most VMware deployments. Virtualized environments consisting of databases and e-mail servers can benefit from MVM because it provides an added measure of control in the selection and layout of the storage used to support the applications.


VNX for file considerations with Flash drives

A Flash drive uses single-level, cell-based flash technology suitable for high-performance and mission-critical applications. VNX supports 100 GB and 200 GB Flash drives, which are tuned-capacity drives. Consider the following when using Flash drives with VNX for file:
- Enable the write cache and disable the read cache for Flash drive LUNs.
- The only AVM pools supported with Flash drives are RAID 5 (4+1 or 8+1) or RAID 1/0 (1+1).
- Create four LUNs per Flash drive RAID group and balance the ownership of the LUNs between the VNX storage processors. This recommendation is unique to Flash drives; traditional AVM configurations provide better spatial locality and performance when configured with two LUNs per RAID group.
- Use MVM to configure Flash drive volumes with drive configurations and requirements that are not offered through AVM.
- Unlike rotating media, striping across multiple dvols from the same Flash drive RAID group is supported.
- Set the stripe element size for the volume to 256 KB.

Figure 12    Creation of a striped volume through Unisphere
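Where MVM is used, a command line sketch of a comparable striped volume might look like the following. This is an assumption-laden illustration: the dvol names and object names are placeholders, and the exact nas_volume and nas_fs option names should be confirmed in the VNX Command Line Reference before use.

# Hypothetical MVM sketch; dvol names (d20-d23) and object names are placeholders.
# Stripe four disk volumes from a Flash drive RAID group with a 256 KB element size.
nas_volume -name ssd_stv1 -create -Stripe 262144 d20,d21,d22,d23
# Build a metavolume and a file system on the striped volume.
nas_volume -name ssd_mtv1 -create -Meta ssd_stv1
nas_fs -name nfs_ssd_fs1 -create ssd_mtv1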

LUN considerations with VNX and vSphere

Most vSphere configurations use VMFS datastores to support the folders and files that constitute the virtual machines.
Enhancements to VNX and vSphere enable larger LUN sizes; these enhancements are largely focused on the use of Flash drives for FAST Cache/FAST VP, VAAI support, and Storage I/O Control (SIOC). Separately, each feature improves a particular area of scalability:
- Flash drives support a significant increase in IOPS with lower response times. A Flash drive provides 10 times the number of IOPS of other drive types, which is beneficial for Flash LUNs, FAST Cache, and FAST VP LUNs.
- VAAI reduces the ESXi host resources required to perform vSphere storage-related administrative tasks, and SIOC alleviates the condition that occurs when storage resources are taxed beyond the required service levels.
- SIOC provides a mitigation solution for the edge conditions that may occur during very heavy I/O periods. SIOC ensures that critical virtual machine applications receive the highest priority during bursty I/O periods.
If storage is configured using these options, larger LUN sizes can be used. The maximum LUN size for vSphere, without using extents, is approximately 2 TB (2 TB minus 512 bytes).
Environments without Flash drives and SIOC

Because SIOC requires an Enterprise Plus license and not all systems have Flash drives, environments without these features must also be considered.
A single large LUN can become a point of resource contention because the VMkernel serially queues I/Os from all the virtual machines using the LUN. The VMware parameter Disk.SchedNumReqOutstanding prevents one virtual machine from monopolizing the FC queue; nevertheless, response time elongation becomes unpredictable when a long queue forms against the LUN.
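A minimal sketch of viewing and adjusting this parameter from the ESXi console follows. The advanced-option path reflects vSphere 4.x, and the value shown is only an example that should match the queue-depth guidance for your environment.

# View the current per-LUN outstanding request limit, then set an example value.
esxcfg-advcfg -g /Disk/SchedNumReqOutstanding
esxcfg-advcfg -s 32 /Disk/SchedNumReqOutstanding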
The LUN sizes within these environments should be based upon the performance requirements. The key criteria for deciding the LUN size are an understanding of the workload, the required IOPS for the applications and virtual machines, the response times of the applications, and the sizing for the peak periods of I/O activity. Balance the number of virtual machines running within a datastore against the I/O profile of the virtual machines and the capabilities of the storage devices.

Larger single-LUN implementations

The previous paragraphs present the traditional recommendations for single-LUN configurations. Several technologies have been introduced to alleviate some of the congestion that led to those recommendations.
Table 3 compares the use of single-LUN and multi-LUN configurations.
Table 3    Single-LUN and multi-LUN datastore comparison

The table contrasts a VNX OE for block single LUN, a metaLUN, and multi-LUN configurations across the following points:
- Easier management; one VMFS volume to manage unused storage.
- Small management overhead.
- Storage provisioning has to be on demand; one VMFS volume to manage (spanned).
- Similar to a single LUN, with the potential for additional drives.
- Can result in poor response time.
- Single SP with no load balancing.
- Multiple queues to storage ensure minimal response times.
- Opportunity to perform manual load balancing.
- Flash drives and FAST provide response time improvements.
- Still limited to a single SP with no load balancing.
- Limits the number of I/O-intensive virtual machines.
- Multiple VMFS datastores allow more virtual machines per ESXi server.
- Response time of limited concern (can be optimized).
- Improved support for virtual machines with Flash drives.
- All virtual machines share one LUN.
- Cannot leverage all available storage functionality.

General recommendations for storage sizing and configuration

VNX enables users with knowledge of the anticipated I/O workload to provide different service levels to virtual machines from a single storage platform. If workload details are not available, use the following general guidelines:
- Use a VMware file system or NFS datastore to store virtual machine boot disks. Most modern operating systems generate minimal I/O to the boot disk, most of which is response-time-sensitive paging activity. Separating the boot disks from application data mitigates the risk of response time elongation due to application-related I/O activity. If there are significant numbers of virtual machine disks on the datastore, such as in a Virtual Desktop Infrastructure (VDI) environment, consider using a FAST Cache enabled LUN to mitigate boot storms and paging overhead.
- When using Virtual Desktop configurations with linked clones, use FAST VP on 15k rpm RAID 5 drives with Flash drives to accommodate the hot regions of the VMFS file system.
- Databases such as Microsoft SQL Server or Oracle use an active log or recovery data structure to track data changes. Store log files on a separate virtual disk placed in a RAID 1/0 or RAID 5 VMFS datastore, NFS datastore, or RDM device.
- If a separate virtual disk is provided for applications (binaries, application logs, and so on), configure the virtual disk to use RAID 5 protected devices on 15k rpm SAS drives. However, if the application performs extensive logging, a FAST Cache enabled 15k rpm SAS RAID 1/0 device may be more appropriate.
- Ensure that the datastores are 80 percent or less full to enable administrators to quickly allocate space for user data and to accommodate VMware snapshots for making copies of the virtual machines.
- Infrastructure servers, such as DNS, perform the vast majority of their activity using CPU and RAM. Therefore, low I/O activity is expected from virtual machines supporting enterprise infrastructure functions. Use FAST VP Thin LUNs or NFS datastores with a combination of SAS and NL-SAS drives for these applications.

- Use RAID 1/0 protected devices on Flash drives or 15k rpm SAS drives for virtual machines that are expected to have a write-intensive workload.
- Use RAID 5 FAST VP pools with a combination of SAS and NL-SAS drives for large file servers whose storage is consumed by static files, because the I/O activity tends to be low. Medium-size SAS drives, such as the 300 GB, 15k rpm drive, may be appropriate for these virtual machines. Consider the 1 TB and 2 TB NL-SAS drives for virtual machines that are used for storing archived data. Configure 7.2k rpm NL-SAS drives in RAID 6 mode; this applies to all drives of 1 TB or greater.
- Applications with hot regions of data can benefit from the addition of FAST Cache. FAST Cache warms and pulls heavily used data into a Flash storage device where response time and IOPS are eight times faster than those of spinning media storage devices.
- Allocate RAID 1/0 protected volumes, Flash drives, or FAST Cache to enhance the performance of virtual machines that generate a high small-block random read I/O workload. Also consider dedicated RDM devices for these virtual machines.
- Enable SIOC to control bursty conditions. Monitor SIOC response times within vSphere; if they are continually high, rebalance virtual machines using Storage vMotion.
- Ensure that VAAI is enabled to offload storage tasks to the VNX storage system.

Number of VMFS volumes in an ESXi host or cluster

Virtualization increases the utilization of IT assets. However, the fundamentals of managing information in the virtual environment are the same as those in the physical environment. Consider the following best practices.
VMFS supports the concatenation of multiple SCSI disks to create a single file system. The allocation schemes used in VMFS spread the data across all LUNs supporting the file system, thus exploiting all available spindles. Use this functionality when using VMware ESXi hosts with VNX platforms.
Note: If a member of a spanned VMFS-3 volume is unavailable, the datastore remains available for use, except for the data on the missing extent. An example of this situation is shown in Figure 13 on page 47.

Although the loss of a physical extent is not likely with VNX platforms, good change control mechanisms are required to prevent the inadvertent loss of access.

Figure 13    Spanned VMFS-3 tolerance to missing physical extent

Use of VNX metaLUNs

A metaLUN is used to aggregate extents from separate RAID Groups into a single striped or concatenated LUN to overcome the physical space and performance limitations of a single RAID Group. VNX Storage Pools enable multi-terabyte LUNs; therefore, metaLUNs are most useful when an application requires a VNX OE LUN with reserved storage resources.
VNX metaLUNs can be used in conjunction with VMFS spanning, where multiple LUNs are striped at the VNX Operating Environment level and then concatenated as a VMFS volume to distribute the I/O load across all the disks.

Network considerations
The VNX platform supports many network configuration options for
VMware vSphere including basic network topologies. This section
lists items to consider before configuring the storage network for
vSphere servers.
Note: Storage multipathing is an important network configuration topic.
Review the information in Storage multipathing considerations on page 50
before configuring the storage network between vSphere and VNX.

Network equipment considerations

The considerations for network equipment are as follows:
- Use CAT 6 cables rather than CAT 5/5e cables. Although GbE works on CAT 5 cables, they are less reliable and robust. Retransmissions recover from errors, but they have a more significant impact on IP storage than on general networking use cases.
- With NFS datastores, use network switches that support a Multi-Chassis Link Aggregation technology such as cross-stack EtherChannel or Virtual Port Channeling. "Multipathing considerations - NFS" on page 56 provides more details.
- With NFS datastores, use 10 GbE network equipment. Alternatively, use network equipment that includes a simple upgrade path from 1 GbE to 10 GbE.
- With VMFS datastores over FC, consider using FCoE converged network switches and CNAs over 10 GbE links. These have similar fabric functionality and administration requirements as standard FC switches and HBAs, but at a lower cost. "FCoE network considerations" on page 49 provides more details.

IP-based network configuration considerations

The considerations for IP-based network configuration are as follows:

- Dedicate a physical switch or an isolated network VLAN for IP storage connectivity to ensure that iSCSI and NFS I/O are not affected by other network traffic.
- On the network switches used for the storage network, enable flow control, enable spanning tree protocol with either RSTP or port-fast enabled, and restrict bridge protocol data units on storage network ports.
- Configure jumbo frames for NFS and iSCSI to improve the performance of I/O-intensive workloads. Both VMware vSphere and VNX support jumbo frames for IP-based storage. Set jumbo frames on ESXi, the physical network switch, and VNX to enable them end-to-end in the I/O path.
- Ensure that the Ethernet switches have the proper number of port buffers and other internals to properly support NFS and iSCSI traffic.
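As a sketch of the ESXi side of the jumbo frame configuration, the following ESX/ESXi 4.x console commands set an MTU of 9000 on a vSwitch and create a VMkernel interface with the same MTU; the vSwitch name, port group name, and addresses are placeholders.

# Set the vSwitch MTU and create a jumbo-frame VMkernel port (placeholder values).
esxcfg-vswitch -m 9000 vSwitch1
esxcfg-vmknic -a -i 10.6.121.183 -n 255.255.255.0 -m 9000 "IPStorage"
# The physical switch ports and the VNX iSCSI ports or Data Mover interfaces
# must also be configured for an MTU of 9000 for the end-to-end path.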

FCoE network considerations

Native Fibre Channel over Ethernet (FCoE) support, included with the VNX platform, offers a simplified physical cabling option between servers and other peripheral hardware components such as switches and storage subsystems. FCoE connectivity allows the general server IP-based traffic and the I/O to the storage system to be carried in and out of the server through fewer, high-bandwidth IP-based physical connections.
Converged Network Adapters (CNAs) reduce the physical hardware footprint required to support the data traffic flowing into and out of the servers, while providing a high flow rate through the consolidated data flow network. High-performance block I/O, previously handled through a separate set of FC-based data traffic networks, can be merged through a single, IP-based network leveraging the CNAs that provide efficient FCoE support.
Additional configuration of the IP switches is necessary to enable FCoE data flow correctly. However, combining the IP and block I/O traffic on the same switch port does not compromise the server's ability to deliver equivalent service performance for applications on that server, because of the 10 Gb speed of the switch ports and the bandwidth capacity of these IP switches.
With the FCoE data frame support offloaded to the CNAs, there is no significant CPU or memory impact on the servers. Thus, application performance is not compromised by moving to a converged network as opposed to managing a separate block data I/O SAN traffic network and node-to-node IP traffic.
VNX includes 10 Gb FCoE connectivity options by adding expansion modules to the storage processors. Configuration options on the VNX are minimal, and you must complete most management tasks at the IP switch to enable and trunk the FCoE ports. Configure a separate VLAN and trunk for all FCoE ports.

Storage multipathing considerations

Multipathing and load balancing increase the level of availability for applications running on ESXi hosts. The VNX platform offers a nondisruptive upgrade (NDU) operation for the VMware native failover software and EMC PowerPath for block storage. In addition, configure VNX and vSphere advanced networking to increase storage availability and performance when accessing file storage.

Multipathing considerations - VMFS/RDM

When connecting an ESXi host to VNX storage with the FC/FCoE or iSCSI protocol, ensure that each HBA or network card has access to both storage processors. Figure 14 and Figure 15 on page 51 provide a common topology for FC/FCoE and iSCSI connectivity to the ESXi host.

Figure 14    FC/FCoE topology when connecting VNX storage to an ESXi host

Figure 15    iSCSI topology when connecting VNX storage to an ESXi host

Note: The iSCSI hardware-initiator configuration is similar to the FC HBA configuration.

With port binding enabled, configure a single vSwitch with two NICs so that each NIC is bound to one VMkernel port. These NICs can be connected to the same SP port on the same subnet, as shown in Figure 16.

Figure 16    Single virtual switch iSCSI configuration

After the iSCSI configuration is complete, use esxcli to activate the iSCSI multipathing connection with the following command:
# esxcli swiscsi nic add -n <port_name> -d <vmhba>
Run the following command to verify that the ports are added to the software iSCSI initiator:
# esxcli swiscsi nic list -d <vmhba>
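For example, with typical names where vmk1 and vmk2 are the VMkernel ports and vmhba33 is the software iSCSI adapter (placeholders to be replaced with the values in your environment):

esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic list -d vmhba33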

Multipathing and failover options

VMware ESXi offers multipath software in its kernel. This failover software, called the Native Multipathing Plug-in (NMP), contains policies for Fixed, Round Robin, and Most Recently Used (MRU) device paths. Additionally, EMC provides PowerPath Virtual Edition (PowerPath/VE) to perform I/O load balancing across all available paths. The following summary describes the relevance of each option.

NMP policies
- Round Robin: Provides primitive load balancing when used with the VNX arrays. However, there is no automated failback when a LUN is trespassed from one storage processor to another.

- MRU: Uses the first path detected when the host boots, and continues to use it as long as it remains available.
- Fixed: Uses a single active path for all I/O to a LUN. vSphere 4.1 introduced a new policy called VMW_SATP_FIXED_AP, which selects the array's preferred path for the LUN when VNX is set for ALUA mode. This policy offers automated failback but does not include load balancing.

Use VMW_SATP_FIXED_AP for the following reasons:
- With the default VNX failover mode of Asymmetric Active/Active (ALUA), the path selected is the preferred and optimal path for the LUN, so I/O operations always use the optimal path.
- It uses auto-restore, or failback, to assign LUNs to their default storage processor (SP) after an NDU operation. This prevents a single storage processor from owning all LUNs after an NDU.

- It sends I/O down a single path. However, if there are multiple LUNs in the environment, select a preferred path for each LUN to achieve static I/O load balancing.

Figure 17    VSI Path Management multipath configuration feature


Note: You can set the policies for both NMP and PowerPath by using the
EMC VSI Path Management feature for VMware vSphere. For details on how
to configure the above policies, refer to the EMC VSI for VMware vSphere:
Path Management document available on Powerlink.
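Where the VSI Path Management feature is not available, a minimal command line sketch for vSphere 4.x is shown below. The device identifier is a placeholder, the policy name should match what esxcli nmp psp list reports on your host, and the esxcli namespaces changed in later vSphere releases.

# List NMP devices, then set the path selection policy on one VNX LUN
# (placeholder device identifier; verify syntax for your ESXi release).
esxcli nmp device list
esxcli nmp device setpolicy --device naa.600601601234567890abcdef12345678 --psp VMW_PSP_FIXED_AP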

Using EMC PowerPath/VE multipathing and failover

PowerPath provides the most comprehensive pathing solution for multipathing I/O between a host and the VNX. It provides multiple options, from basic failover to I/O distribution across all available paths.
PowerPath is supported in FC and iSCSI (software and hardware initiator) configurations. The benefits of using PowerPath in comparison to VMware native failover are as follows:
- It has an intuitive CLI that provides an end-to-end view and reporting of the host storage resources, including HBAs.
- It eliminates the need to manually change the load-balancing policy on a per-device basis.
- It uses auto-restore to restore LUNs to the preferred SP when it recovers, ensuring balanced load and performance.
- It provides the ability to balance queues on the basis of queue depth and block size.

Note: PowerPath provides the most robust functionality. Though it requires a license, it is the recommended multipathing option for VNX.
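As a sketch, PowerPath/VE for ESXi is typically queried from a remote management host with the rpowermt utility; the host name below is a placeholder, and the full syntax and authentication options are described in the PowerPath/VE documentation.

# Display PowerPath/VE devices and path state for an ESXi host (placeholder host name).
rpowermt display dev=all host=esxi01.example.com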

Multipathing considerations - NFS

Multipathing for NFS is significantly limited in comparison to SCSI multipathing options. As a result, it requires manual configuration and distribution of the I/O workload.
A highly available storage network configuration between ESXi hosts and VNX should have the following characteristics:
- No single point of failure (NIC ports, switch ports, physical network switches, and VNX Data Mover network ports).
- Optimal load balancing of the workload among the available I/O paths.

Note: VMware vSphere supports the NFSv3 protocol, which is limited to a single TCP session per network link. Even if multiple links are used, an NFS datastore uses just one physical link for the data traffic to the datastore. Higher throughput can be achieved by distributing virtual machines among multiple NFS datastores. However, there are limits to the number of NFS mounts that ESXi supports. The default number of NFS mounts is 8, with a maximum value of 64 available through a host parameter change (NFS.MaxVolumes).
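A minimal sketch of checking and raising this limit from the ESXi console is shown below; the option path reflects ESX/ESXi 4.x, and related TCP/IP heap settings may also need adjustment, so verify the full procedure for your release.

# View and raise the NFS mount limit (example value).
esxcfg-advcfg -g /NFS/MaxVolumes
esxcfg-advcfg -s 64 /NFS/MaxVolumes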

Elements of a multipathing configuration over NFS

Figure 18 illustrates the recommended configuration, which addresses high availability and load balancing at all of these levels.

Figure 18    Multipathing configuration with NFS

The guidelines to achieve high availability and load balancing for NFS are as follows:
- Data Mover network ports and connections to the switch: Link aggregation on VNX Data Movers and network switches provides N+1 fault tolerance for port failures. It also enables load balancing between multiple network paths. The switch can be configured for static LACP for the Data Mover and ESXi NIC ports. The Data Mover also supports dynamic LACP.
Note: When the Cisco Nexus 1000v pluggable virtual switch is used on the ESXi hosts, configure dynamic LACP for the ESXi NIC ports.

- ESXi NIC ports: NIC teaming on the ESXi hosts provides fault tolerance against NIC port failure. Set the load balancing on the virtual switch to Route based on IP hash for EtherChannel.
- Physical network switch: Use multiple switches for physical switch fault tolerance and connect each Data Mover and ESXi host to both switches. If available, use Multi-Chassis Link Aggregation to span two physical switches while offering redundant port termination for each I/O path from the Data Mover and from the ESXi host.
Note: When using network switches that do not support a Multi-Chassis Link Aggregation technology, use Fail-Safe Network on the VNX Data Movers instead of link aggregation, and use routing tables on ESXi instead of NIC teaming. Use separate network subnets for each network path.

Configure multiple network paths for NFS datastores

This section describes how to build the configuration shown in Figure 18 on page 57.
At the VNX Data Mover level, create one LACP device with link aggregation. An LACP device uses two physical network interfaces on the Data Mover and IP addresses on the same subnet. At the ESXi level, create a single VMkernel port in a vSwitch and add two physical NICs to it. Configure the VMkernel IP address on the same subnet as the two VNX network interfaces.
Note: Separate the virtual machine network and the virtual machine storage network with different physical interfaces and subnets. This is recommended for good performance.

Complete the following steps to build a configuration with multiple paths for NFS datastores (steps 1 through 13 are performed using EMC Unisphere, and steps 14 through 22 are performed using vSphere Client).
To access Unisphere, complete the following steps:
1. Select the VNX platform from the Systems list box in the top menu bar. From the top menu bar, select Settings > Network > Settings for file. The Settings for files page appears.

Figure 19    Unisphere interface

2. Click Devices, and then click Create. The Create Network Device dialog box appears.
3. In the Device Name field, type a name for the LACP device.
4. In the Type field, select Link Aggregation.
5. In the 10/100/1000 ports field, select the two Data Mover ports that are used.
6. Enable Link Aggregation on the switches, the corresponding VNX Data Mover interfaces, and the ESXi host network ports.
7. Click OK to create the LACP device.
8. In the Settings for files page, click Interfaces.
9. Click Create. The Create Network Interface page appears.

Figure 20    Data Mover link aggregation for NFS server

10. Complete the following steps:
a. Type the details for the first network interface: name and IP address. (In Figure 20, the IP address is set to 10.244.156.102 and the interface name is set to DM2_LACP1.)
b. In the Device Name list box, select the LACP device that was created earlier.
c. Click Apply to create the first network interface and keep the Create Network Interface page open.
11. On the Create Network Interface page, type the details for the second network interface: name and IP address.
12. In the Device Name list box, select the same LACP device. (In Figure 20 on page 60, LACP1 is selected.)
13. Click OK to create the second network interface.
Note: As noted in Figure 20 on page 60, for simplicity, only the primary Data Mover connections are shown. Make similar connections between the standby Data Mover and the network switches.

14. Access vSphere Client and complete steps 15 through 19 for each ESXi host.
15. Create a vSwitch for all the new NFS datastores in this configuration.
16. Create a single VMkernel port connection in the new vSwitch. Add two physical NICs to it and assign an IP address for the VMkernel in the same subnet as the two network interfaces of the VNX Data Mover. (In Figure 21, the VMkernel IP address is set to 10.6.121.183 with physical NICs vmnic0 and vmnic1 connected to it.)
17. Click Properties. The vSwitch1 Properties dialog box appears.

Figure 21    vSphere networking configuration

18. Select the vSwitch, and then click Edit. The vSwitch1 Properties page appears.

Figure 22    vSwitch1 Properties screen

19. Click NIC Teaming, and select Route based on ip hash from the Load Balancing list box.
Note: The two vmnics are listed under the Active Adapters for the NIC team. If both corresponding ports on the switch are enabled for EtherChannel, data traffic to the ports is statically balanced using a hash function of the source and destination IP addresses.

IMPORTANT
This means that a single TCP session from a virtual machine to a specific NFS datastore always uses the same vmnic. However, two TCP sessions from two virtual machines accessing different datastores will use different vmnics (network paths), resulting in higher throughput.

Figure 23    VMkernel Properties screen

20. Provision an NFS datastore using USM. For the first NFS datastore, select the primary Data Mover in the Data Mover field, and for the Data Mover interface, assign the IP address of the first network interface that was created.
21. Provision the second NFS datastore using USM. For the second NFS datastore, select the primary Data Mover in the Data Mover field and assign the IP address of the second network interface that was created.
Note: "Provision storage for NFS datastore to a new file system using EMC VSI" on page 71 provides details on how to provision an NFS datastore with USM.
22. Create and distribute the virtual machines evenly across the datastores.
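For reference, the resulting mounts can also be sketched from the ESXi console with esxcfg-nas; the Data Mover interface addresses, export paths, and datastore names below are placeholders chosen to match the earlier example.

# Mount two NFS datastores, each through a different Data Mover interface,
# so the two TCP sessions use different network paths (placeholder values).
esxcfg-nas -a -o 10.244.156.102 -s /NFS-DS1 NFS-DS1
esxcfg-nas -a -o 10.244.156.103 -s /NFS-DS2 NFS-DS2
esxcfg-nas -l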

VMware vSphere configuration

VNX platforms provide configuration options that scale from midrange to high-end network storage. Although differences exist along the product line, there are common building blocks that can be combined to address a broad range of applications and scalability requirements.
Note: The considerations and settings described in this section require special attention when configuring VMware vSphere for VNX storage. ESXi and VNX settings are applied automatically when using the EMC VSI plug-in. The EMC VSI for VMware vSphere: Unified Storage Management Product Guide, available on Powerlink, provides further details.

ESXi IP and FC driver configuration

VMware provides drivers for supported iSCSI HBA, FCoE CNA, and NIC cards as part of the VMware ESXi distribution. The VMware Compatibility Guide provides additional details on qualified adapters. The EMC E-Lab Interoperability Navigator utility, available on EMC Powerlink, provides information about supported adapters for connectivity of VMware vSphere to VNX.

VMkernel port configuration in ESXi

The ESXi VMkernel port group enables the use of iSCSI and NFS storage. When ESXi is configured for IP storage, the VMkernel network interfaces are configured to access one or more iSCSI network portals on the VNX storage processors, or NFS servers on VNX Data Movers.
To configure the VMkernel interface, complete the following steps:
1. Select an unused network interface that is physically cabled to, or logically part of, the same subnet (VLAN) as the VNX iSCSI network portal.
2. To set the network access, complete the following steps:
a. Select the vSwitch to handle the network traffic for the connection and click Next.
b. In the Network Label field, type a name for the VMkernel port group.
3. Click Next. The VMkernel - IP Connection Settings dialog box appears.
4. To specify the VMkernel IP settings, do one of the following:
- Select Obtain IP settings automatically to use DHCP to obtain IP settings.
- Select Use the following IP settings to specify IP settings manually.
5. If Use the following IP settings is selected, provide the following details:
- Type the IP Address and Subnet Mask for the VMkernel interface.
- Click Edit to set the VMkernel Default Gateway for VMkernel services, such as vMotion, NAS, and iSCSI.

Figure 24    VMkernel port configuration

6. Click Next. The Ready to Complete dialog box appears.
7. Verify the settings and click Finish to complete the process.

The VMkernel interface is the I/O path to the data. Therefore, consider the network configuration and resources available for this I/O path. "Network considerations" on page 48 provides more details.

Storage I/O Control

Storage I/O Control (SIOC) offers a storage resource management capability with granular control over storage consumers in a clustered environment. To establish virtual storage prioritization, SIOC aggregates virtual machine disk share and IOPS values at the host level. These values are used to establish precedence and apportion the storage resources when the datastore response time exceeds predefined levels.
Virtual machine disk shares are assigned when the vmdk is created and can be modified at any time. The default assignment is normal, which equates to a disk share value of 1,000 shares. There are also classifications of high (2,000 shares), low (500 shares), and custom values that can be assigned to each vmdk. SIOC works at the host level by aggregating the virtual machine disk shares of all powered-on virtual machines.

Figure 25    Virtual disk shares configuration

When SIOC is enabled on a datastore, it is assigned a value called the congestion threshold. The congestion threshold is specified in milliseconds (ms) and represents the acceptable device latency for the datastore. Valid settings range from 10 ms to 100 ms, with a default value of 35 ms.

Figure 26    SIOC latency window

SIOC samples I/O response time at 4-second intervals. If the response time exceeds the congestion threshold, SIOC begins to throttle the host queue depth to the LUN backing the datastore. The percentage of throttling is based on the current workload of the host and its proportion of I/O resources.
When the congestion threshold is applied, the host congestion window value is used to reduce the available device queue for the host. The LUN queue depth value continues to be modified, up or down, until the response time falls below the congestion window value.
SIOC is only supported on VMFS-3 volumes, which means that it can only be configured on devices presented from block storage; it cannot be enabled on VNX file systems. Additionally, SIOC is not supported on multi-extent volumes, so create the volume from a single LUN with a single extent. In general, balance the response time with the needs of the application and the storage configuration.
The setting that is applied depends on multiple factors, including the following:
- The type of device used
- The number of disks supporting the LUN
- Other consumers of the spindles

Change the latency threshold based on the storage media type used:
- For SAS storage, the recommended latency threshold is 20 ms to 30 ms.
- For NL-SAS storage, the recommended latency threshold is 35 ms to 50 ms.
- For Flash drive storage, the recommended latency threshold is 10 ms to 20 ms.

Define a per-virtual-machine IOPS limit to avoid a single virtual machine flooding the array. For instance, limit the number of IOPS per virtual machine to 1,000.

Note: SIOC is also intelligent enough to detect non-VMware workloads on a shared storage system. If the SIOC LUN is accessed for some other purpose, such as replication or storage cloning, ESXi generates an error stating that an external workload has been detected. Detailed information is available in VMware KB article 1020651.

Network I/O Control

Similar to SIOC, Network I/O Control (NIOC) provides a way to manage and prioritize network resources at the cluster level. NIOC is an advanced networking feature available with vNetwork Distributed Switches for vSphere 4.1 and later.
vNetwork Distributed Switches were introduced with vSphere 4.0 and provide a convenient way to manage networking. However, resource management was confined to traffic-shaping policies that limit the rate of network traffic across the virtual switch interface.
With vSphere 4.1, a virtual switch can be configured with a weighting factor to prioritize network utilization. NIOC has several network classes, as shown in Figure 27, that enable finer control of the network resources within each network resource pool. Network resources are prioritized based upon the weighting factor assigned to each resource type. Additionally, a throughput value can be assigned to limit the resource utilization in Mbit/s for each host sharing that resource.

Figure 27    Network Resource Allocation interface

The ability to adjust network prioritization offers some flexibility in tuning for particular applications. For example, an environment with virtual machines running I/O-intensive applications over an iSCSI or NFS network interface may benefit from increasing the weighting of the VMkernel NIC/virtual port.

VMDirectPath

VMDirectPath provides a method for the virtual machine to access a PCI/PCIe device on the physical host. The host must support Intel VT-d or AMD IOMMU technology.
VMDirectPath enables guest operating systems to directly access an I/O device, bypassing the virtualization layer. This direct path, or passthrough, improves performance for virtual machines that use high-speed I/O devices, such as 10 Gigabit Ethernet or FCoE, to access the VNX storage.

Provisioning file storage for NFS datastores

The configuration of VNX NFS with VMware vSphere includes two primary steps:
1. Create a file system on the VNX and export it to ESXi.
2. Mount an NFS datastore in ESXi from the provisioned VNX file system.
Use the EMC VSI for VMware vSphere: Unified Storage Management Product Guide to complete these steps from vSphere Client.
Note: It is also possible to extend an existing NFS datastore using EMC VSI. The EMC VSI for VMware vSphere: Unified Storage Management Product Guide provides more details.

Provision storage for NFS datastore to a new file system using EMC VSI
Use this procedure when the vSphere Client is authorized to create a
new VNX file system. Complete the following steps to provision an
NFS datastore on a new VNX file system:
1. Access the vSphere Client.
2. Right-click an object (a host, cluster, folder, or data center).
Note: If you choose a cluster, folder, or data center, all ESXi hosts within
the object are attached to the newly provisioned storage.

3. Select EMC > Unified Storage.


4. Select Provision Storage. The Provision Storage wizard appears.
5. Select Network File System and click Next.
6. Type the datastore name.
7. Select the VNX Control Station from the Control Station list box.
If the VNX Control Station is not in the Control Station list box,
click Add. The Add Credentials wizard appears.
8. Select a Data Mover from the Data Mover Name list box.
9. Select a Data Mover interface from the Data Mover Interfaces list
box and click Next.

10. Select Create New NFS Export and click Next.

Figure 28    File storage provisioning with USM

11. Select a storage pool from the Storage Pool list box.
Note: The user sees all available storage within the storage pool. Ensure
that the storage pool selected is designated by the storage administrator
for use by VMware vSphere.

12. Type an initial capacity for the NFS export in the Initial Capacity
field, and select the unit of measure from the list box to the right.
13. If required, select Virtual Provisioning to indicate that the new
file systems are thinly provisioned.


Note: When a new NFS datastore is created with EMC VSI, Thin
Provisioning and Automatic File system extension are automatically
enabled. On the New NFS Export page, set the Initial Capacity and the
Max Capacity.

Figure 29    Creating a new NFS datastore with USM

14. If Virtual Provisioning is enabled for the file system, the maximum capacity is required. Type the maximum file system size limit.
15. Click Advanced. The Advanced Options dialog box appears. Of the features listed, the following settings are important for optimal VNX with VMware vSphere performance:

- High Water Mark: Specifies the percentage of consumed file system space at which VNX initiates automatic file system extension. Acceptable values are 50 to 99. (The default is 90 percent.)
- Direct Writes: Enhances write performance to the VNX file system. This mechanism enables well-formed NFS writes to be sent directly to disk without being cached on the Data Mover.
The Direct Writes mechanism is designed to improve the performance of applications with many connections to a large file, such as the virtual disk file of a virtual machine. It can enhance access to large files through the NFS protocol. When replication is used, Direct Writes are also enabled on the secondary file system to maintain performance in case of a failover. (A CLI sketch of this option follows this procedure.)
16. After reviewing the settings, click OK, and then click Finish.
After clicking Finish, the Unified Storage Management feature
does the following:
Creates a file system on the selected VNX storage pool.
Mounts the newly created file system on the selected VNX
Data Mover.
Exports the newly created file system over NFS and provides
root and access privileges to the ESXi VMkernel interfaces that
mount the NFS datastore. All other hosts are denied access to
the file system.
Creates the NFS datastore on the selected ESXi hosts.
Updates the selected NFS options on the chosen ESXi hosts.
17. Click the Summary tab to see the newly provisioned storage.
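As referenced above, the Direct Writes setting corresponds to the uncached write option on a VNX file system mount. The following is only a sketch under stated assumptions: the Data Mover name, file system name, and mount point are placeholders, and the mount option name and remount procedure should be confirmed in the VNX for File documentation.

# Hypothetical example: mount a file system with the uncached (Direct Writes) option.
server_mount server_2 -option rw,uncached nfs_ds_fs1 /nfs_ds_fs1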

Provisioning an NFS datastore from an existing file system using EMC VSI
Use this feature if the VMware administrator does not have storage
privileges or needs to use an existing VNX file system for the new
NFS datastore.
The storage administrator completes the following steps in advance:
1. Create the file system according to the needs of the VMware
administrator.
2. Mount and export the file system on an active Data Mover.

At this point, the file system is available to authorized ESXi hosts. Use USM to add the NFS datastore.
Complete the following steps to provision an NFS datastore on an existing VNX file system:
1. Follow steps 1 through 9 in "Provision storage for NFS datastore to a new file system using EMC VSI" on page 71.
2. Select Use Existing NFS Export and click Next.
3. From the NFS Export Name list box, select the NFS export that was created by the storage administrator earlier.
4. Select Advanced. The Advanced Options dialog box appears. The only feature displayed is Set Timeout Settings.
Note: Set Timeout Settings is selected by default, and it is recommended that the setting not be modified. "Virtual machine resiliency over NFS" on page 90 provides details about the recommended timeout settings when using VNX NFS with VMware vSphere.

5. Click Finish.
USM creates the NFS datastore and updates the selected NFS options
on the authorized ESXi hosts.

Provisioning block storage for VMFS datastores and RDM volumes (FC, iSCSI, FCoE)

vSphere reduces the complexity of managing the storage environment while offering potential performance and scalability benefits. VMware file system volumes created with vSphere Client are automatically aligned on 64 KB boundaries. If vSphere Client is not used, a manual alignment process is required to avoid performance loss on the datastore. Therefore, EMC strongly recommends using the VMware vSphere Client to create and format VMFS datastores.
RDM volumes have a SCSI pass-through mode that enables virtual machines to pass SCSI commands directly to the physical hardware. Utilities like admsnap and admhost are used to issue commands directly to the LUN when the virtual disk is in physical compatibility mode. In virtual compatibility mode, an RDM volume looks like a virtual disk in a VMFS volume, and certain advanced storage-based technologies, such as expanding an RDM volume at the virtual machine level using metaLUNs, do not work.
The creation of VMFS datastores and RDM volumes on block storage is accomplished using USM.
After USM is installed, right-click a vSphere object, such as a host, cluster, folder, or data center in vCenter.
Note: If you choose a cluster, folder, or data center, all ESXi hosts within the object are granted access to the newly provisioned storage.

1. Select EMC > Unified Storage.
2. Select Provision Storage. The Provision Storage wizard appears.
3. Select Disk/LUN and click Next.

Figure 30    Block storage provisioning with USM

4. Select a storage array from the Storage Array list box.
5. If there are no storage arrays in the Storage Array list box, click Add. The Add Credentials wizard appears. Add the VNX storage system by entering its credentials.
6. Select a storage processor to own the new LUN and select Auto Assignment Enabled. Click Next.
Note: Install and properly configure failover software for failover of block storage.
7. Select the storage pool or RAID group from which to provision the new LUN. Click Next.
8. Select VMFS Datastore or RDM Volume.
Note: Unlike VMFS datastores, RDM LUNs are bound to a single virtual machine and cannot be shared across multiple virtual machines, unless clustering at the virtual machine level. Use VMFS datastores unless a one-to-one mapping between physical and virtual storage is required.
9. For VMFS datastores, complete the following steps:
- Type a name for the datastore in the Datastore Name field.
- Select a maximum file size from the Maximum File Size list box.

10. Select a LUN number from the LUN Number list box.
11. Type an initial capacity for the LUN in the Capacity field, and select the unit of measure from the list box to the right.

Figure 31    Creating a new VMFS datastore with USM

12. Click the Advanced button to configure the VNX FAST VP policy
settings for the LUN. There are three tiering policy options:
Auto-Tier: Distributes the initial data placement across all
drive types in the pool to maximize spindle usage for the
LUN. Subsequent data relocation is based on LUN
performance statistics such that data is relocated among tiers
according to I/O activity.
Highest Available Tier: Sets the preferred tier for initial data
placement and subsequent data relocation (if applicable) to
the highest performing disk drives with available space.
Lowest Available Tier: Sets the preferred tier for initial data
placement and subsequent data relocation (if applicable) to
the most cost-effective disk drives with available space.
13. Click Finish.
14. At this point, USM does the following:
a. Creates a LUN in the selected Storage Pool.

b. Assigns the LUN to the designated SP.
c. Adds the LUN to the storage group associated with the selected ESXi hosts, making it visible to the hosts.
d. Creates the VMFS datastore on the selected ESXi hosts if VMFS is chosen.
15. Select Configuration > Storage to see the newly provisioned storage.
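If the new device does not appear immediately, a rescan can be triggered from the ESXi console as a sketch; the adapter name is a placeholder, and the same rescan is available from the vSphere Client.

# Rescan an adapter for new devices and list the SCSI devices seen by the host
# (placeholder vmhba number).
esxcfg-rescan vmhba2
esxcfg-scsidevs -c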

Virtual machine considerations

When using VNX storage, consider the following items to help achieve optimal performance and functionality in virtual machines:
- Virtual machine disk partition alignment
- Virtual machine swap file location
- Paravirtual SCSI adapter (PVSCSI)
- N Port ID Virtualization (NPIV)
- Virtual machine resiliency over NFS

Virtual machine disk partition alignment

The alignment of virtual machine disk partitions can improve application performance. Because a misaligned disk partition in a virtual machine can lead to degraded performance, EMC strongly recommends aligning virtual machines that are deployed over any storage protocol. The following recommendations provide the best performance for the environment:
- Create the datastore with the vSphere Client or the USM interface.
- The benefits of aligning boot partitions are generally marginal. If there is only a single virtual disk, consider adding a separate app/data disk partition.
- It is important to align the app/data disk partitions that sustain the heaviest I/O workload. Align partitions to a 1 MB disk boundary in both Windows and Linux.
Note: Windows 2008, Windows Vista, and Windows 7 disk partitions are aligned to 1 MB by default.

- For Windows, use the allocation unit size recommended by the application. Use a multiple of 8 KB if no allocation unit size is recommended.
- For NFS, use the Direct Writes option on VNX file systems. It is helpful with random write workloads and virtual machine disks formatted with a 4 KB allocation unit size.

Align virtual machine disk partitions

The disk partition alignment within virtual machines is affected by a long-standing issue with the x86 processor storage configuration. As a result, external storage devices are not always aligned in an optimal manner, which is true for VMware in most cases. The following examples illustrate how to align data partitions with VNX storage for Windows and Linux virtual machines.
Aligning Windows virtual machines
Note: This step is not required for Windows 2008, Windows Vista, or Windows 7, which align partitions on 1 MB boundaries for disks larger than 4 GB (64 KB for disks smaller than 4 GB).
To create an aligned data partition, use the diskpart.exe utility. This example assumes that the data disk to be aligned is disk 1:
1. At the command prompt, type diskpart.
2. Type select disk 1.

Figure 32    Select the disk

3. Type create partition primary align=1024 to create a partition aligned to a 1 MB disk boundary.
4. Type exit.
Set the allocation unit size of a Windows partition
Use Windows Disk Manager to format an NTFS partition. Select an
allocation unit that matches your application needs.
Note: The default allocation unit is 4 KB. However, larger sizes such as 64 KB
can provide improved performance when large files are stored within the
volume.
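For example, the same result can be achieved from a command prompt with format.exe; the drive letter and the 64 KB unit below are illustrative only, and the allocation unit should follow the application vendor's recommendation.

rem Quick-format the data volume as NTFS with a 64 KB allocation unit (example values).
format E: /FS:NTFS /A:64K /Q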

Align Linux virtual machines
Use the fdisk command to create an aligned data partition:
1. At the command prompt, type fdisk /dev/sd<x>, where <x> is the device suffix.
2. Type n to create a new partition.
3. Type p to create a primary partition.
4. Type 1 to create partition Number 1.
5. Select the defaults to use the complete disk.
6. Type t to set the partition's system ID.
7. Type fb to set the partition system ID to fb.
8. Type x to go into expert mode.
9. Type b to adjust the starting block number.
10. Type 1 to choose partition 1.
11. Type 2048 to set the starting block number to 2048 for a 1 MB disk
partition alignment.
12. Type w to write label and partition information to disk.
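A non-interactive alternative under the same 1 MB alignment assumption is sketched below with parted; the device name is a placeholder, a reasonably recent parted release is assumed, and the result can be verified afterwards with fdisk -lu.

# Create a single primary partition starting at 1 MiB (sector 2048) on a placeholder device.
parted -s /dev/sdb mklabel msdos
parted -s /dev/sdb mkpart primary 1MiB 100%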

Identify the alignment of virtual machines on Windows
Complete the following steps to identify virtual disk alignment:
1. From the Start menu, select Programs > Accessories > System Tools > System Information. The System Information dialog box appears.

Figure 33    Guest disk alignment validation

2. Locate the Partition Starting Offset property and validate that the value is 1,048,576 bytes, which indicates alignment to a 1 MB disk boundary.
Note: Optionally, type wmic partition get StartingOffset, Name at the command prompt to display the partition starting offset.

Virtual machine considerations

83

Configuring VMware vSphere on VNX Storage

Figure 34

Partition allocation
unit size

NTFS data partition alignment (wmic command)

To identify the allocation unit size of an existing data partition, use


the fsutil command. In the following example, the E: drive is the
NTFS data partition that is formatted with an allocated unit size of 8
KB.
At the command prompt, type fsutil fsinfo ntfsinfo <drive letter>.
The "Bytes Per Cluster" value identifies the allocation unit size of the
data partition.

Identify Linux virtual machine alignment

To identify the current alignment of an existing Linux data partition, use the fdisk command. In the following example, /dev/sdb is the data partition that was configured on a Linux virtual machine.
In the terminal session, type fdisk -lu <data partition>.

Figure 35    Output of a Linux partition aligned to a 1 MB disk boundary (starting sector 2048)

An unaligned disk shows the starting sector as 63.

Figure 36    Output for an unaligned Linux partition (starting sector 63)

Virtual machine swap file location

Each new virtual machine is configured with a swap file that stores
memory pages under certain conditions, such as when the balloon
driver is inflated within the guest OS. By default, the swap file is
created and stored in the same folder as the virtual machine.
In some cases, the virtual machine performance can be improved by
relocating the swap file to a separate high-performance device such
as a Flash drive LUN. Additionally, the swap file contains dynamic
data that is reconstructed each time the VM is booted. Backing up the
swap file is of little value, unless your interest is in forensics or trying to re-create a particular system state.
It is also possible to use a local datastore to offload up to 10 percent of
the network traffic that results from the page file I/O.
The tradeoff for moving the swap file to the local disk is that it may
result in additional I/O when a virtual machine is migrated through
vMotion or DRS. In such cases, the swap file must be copied from the
local device of the current host to the local device of the destination
host. It also requires dedicated local storage to support the files.
A better solution is to leverage a high-speed, low latency device such
as Flash drives to support the swap files.
If there is sufficient memory where page reclamation is not expected
(that is, each virtual machine has 100 percent of its memory reserved
from host physical memory), it is possible to use SATA drives to
support page files.


In the absence of this configuration option, use Flash drives for page
files where performance is a concern.

Paravirtual SCSI adapters

Paravirtual SCSI (PVSCSI) adapters are high-performance storage adapters that can result in greater throughput and lower CPU utilization. PVSCSI is best suited for SAN environments where hardware or applications drive very high throughput. PVSCSI adapters are recommended because they offer improved I/O performance and a reduction in ESXi host CPU usage.
PVSCSI adapters also reduce the cost of virtual interrupts by batching and processing multiple I/O requests. Starting with vSphere 4 Update 1, the PVSCSI adapter is supported for the virtual machine boot disk in addition to data virtual disks.
In testing with Windows 2003 and Windows 2008 guest operating systems, the PVSCSI adapter was found to improve the resiliency of VNX NFS-based virtual machines during storage network failures.
Paravirtual SCSI adapters are supported with the following guest operating systems:
- Windows Server 2003 and 2008
- Red Hat Enterprise Linux (RHEL) 5

Paravirtual SCSI adapters have the following limitations:
- Hot-add or hot-remove requires a bus rescan from within the guest.
- PVSCSI may not provide performance gains when the virtual disk has snapshots or when ESXi host memory is overcommitted.
- If RHEL 5 is upgraded to an unsupported kernel, data may not be accessible from the virtual machine's PVSCSI disks. Run vmware-config-tools.pl with the kernel-version parameter to regain access.
- Booting a Linux guest from a disk attached to a PVSCSI adapter is not supported. Booting a Microsoft Windows guest from a disk attached to a PVSCSI adapter is not supported in ESXi releases prior to ESXi 4.0 Update 1.
Detailed information is available in VMware KB article 1010398.

Note: Hot-adding a PVSCSI adapter to a virtual machine is not supported. You must configure PVSCSI on the storage controller when the virtual machine is created.

N Port ID Virtualization for RDM LUNs

N Port ID Virtualization (NPIV) within the FC protocol enables multiple virtual N_Port IDs to share a single physical N_Port. In other words, it is possible to define multiple virtual initiators through a single physical initiator. This feature enables SAN tools that provide QoS at the storage-system level to guarantee service levels for virtual machine applications.
NPIV does have some restrictions. To enable NPIV support, adhere to the following points:
- VMware NPIV support is limited to RDM volumes.
- To configure NPIV, it must be supported on the host HBAs and the FC switch.
- NPIV must be enabled on each virtual machine.
- To enable NPIV on a virtual machine, at least one RDM volume must be assigned to the virtual machine.
- LUNs must be masked to both the ESXi host and the virtual machine where NPIV is enabled.
Within VMware ESXi, NPIV is enabled for each virtual machine so that the physical HBAs on the ESXi host can assign virtual initiators to each virtual machine. As a result, a virtual machine has virtual initiators (WWNs) available for each HBA. These initiators can log in to the storage like any other host, providing the ability to provision block devices directly to the virtual machine through Unisphere.
Figure 37 on page 88 shows how to enable NPIV for a virtual machine. To enable the NPIV feature, present an RDM volume through the ESXi host to the virtual machine. Once NPIV is enabled, virtual WWNs are assigned to that virtual machine.

Figure 37    Enable NPIV for a virtual machine after adding an RDM volume

For some switches, the virtual WWN names must be entered manually within the switch interface and then zoned to the storage system ports. The virtual machine initiator records then appear within the VNX storage connectivity screen for registration, as shown in Figure 38 on page 89. A separate storage group should be created for each virtual machine that is NPIV enabled. However, LUNs assigned to the virtual machine storage group must be presented to the ESXi storage group as well.

88

Using EMC VNX Storage with VMware vSphere

Configuring VMware vSphere on VNX Storage

Figure 38 Manually register virtual machine (virtual WWN) initiator records

The following points summarize the steps required to configure NPIV:
1. Ensure that the HBA and FC switch support NPIV.
2. Assign an RDM volume to the ESXi host and then to the virtual
machine.
3. Enable NPIV for the virtual machine to create virtual WWNs.
4. Manually type the virtual WWNs within the switch interface.
5. Zone the virtual WWNs to the VNX platforms using the switch
interface. Add them to the same zone containing the ESXi HBA
and VNX storage ports.
6. Using Unisphere, manually register the initiator records for the
virtual machine using failover mode 4 (ALUA).
7. Create a new virtual machine storage group and assign the
virtual machine records to it.
8. To add LUNs to the virtual machine, ensure the following:
a. LUNs are masked to the ESXi hosts and the virtual machine
storage group.


b. LUNs have the same host LUN number (HLU) as the ESXi
hosts.
c. LUNs must be assigned as RDMs to each virtual machine.

Virtual machine resiliency over NFS

During a VNX Data Mover outage, customers may face several
challenges in production virtual environments, such as application
unavailability, guest operating system crashes, data corruption, and
data loss. In production environments, virtual machine availability
is the most important factor in ensuring that data is available when
required.
Several factors affect the availability of virtual machines such as Data
Mover failover due to connectivity issues or Data Mover reboot for
VNX OE File upgrades. These result in Data Mover downtime and
make the application unavailable for the duration of the operations.
The rationale for VMware resiliency
During VNX failure events, the guest OS loses connection to the NAS
datastore created on the VNX file system and the datastore becomes
inactive and unresponsive to the virtual machine I/O. Meanwhile,
virtual machines hosted on the NAS datastore start experiencing Disk
SCSI timeout errors in the OS system event viewer. To avoid these
errors, use the following best practices on the guest OSs to keep the
application and virtual machines available during VNX Data Mover
outage events.
EMC recommendations for VMware resiliency with VNX NFS
To avoid downtime during VNX Data Mover outage events:
• Configure the environment with at least one standby Data Mover to
  avoid a guest OS crash and the unavailability of applications.
• Install VMware Tools in the guest OS.
• Set the disk timeout value to at least 60 seconds within the guest
  OS. For Windows, modify the
  HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk key and
  set TimeoutValue to 120 (decimal). The following command performs
  the same task and can be used to automate the change on multiple
  virtual machines (%1 is the target computer name):

reg.exe add "\\%1\HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeoutValue /t REG_DWORD /d 120 /f
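For Linux guests, a similar effect can be achieved by raising the SCSI
device timeout through sysfs. This is a minimal sketch, assuming a
device named sdb and a 180-second value (both illustrative); a udev
rule is typically used to make the setting persistent across reboots:

cat /sys/block/sdb/device/timeout        # view the current timeout, in seconds
echo 180 > /sys/block/sdb/device/timeout # raise the timeout for the running system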


Monitor and manage storage


When using VNX storage with VMware vSphere, it is possible to
proactively monitor storage resource utilization through the vSphere
Client or datastore alarms in vCenter Server. When using VNX Thin
Provisioning (with or without ESXi Thin Provisioning), it is critical to
monitor the storage utilization in VNX to prevent an accelerated
out-of-space condition.
This section explains how to proactively monitor the storage
utilization of vSphere datastores within vSphere Client using the
EMC VSI for VMware vSphere: Storage Viewer feature. In
addition, it explains how to monitor the utilization of the underlying
VNX file system LUNs when they are thinly provisioned through
Unisphere.
Note: As described in VSI for VMware vSphere on page 20, the VSI feature
is available to map between datastores and their corresponding VNX storage
objects from the vSphere Client. Storage Viewer can also be used to gather
VNX-based storage utilization information. EMC VSI for VMware vSphere:
Storage Viewer Product Guide provides further information about the EMC
VSI feature and how to install it.

Monitor datastores using vSphere Client and EMC VSI

Use the vSphere Client to display the current utilization information
for NFS and VMFS datastores. Though this is useful for observing a
point in time, it is a manual process. You can configure vCenter to
trigger datastore alarms in response to selected events, conditions,
and state changes of the objects in the inventory. A
vSphere Client connected to a vCenter Server can be used to create
and modify alarms. Datastore alarms can be set for an entire data
center, host, or a single datastore.
Complete the following steps to create a Datastore Alarm:
1. From vSphere Client, select the datastore that you want to
monitor.
2. Select a datastore, right-click, and select Add Alarm.
3. Click General and type the required properties:
a. Type the alarm name and description.
b. In the Monitor list box, select Datastore.

92

Using EMC VNX Storage with VMware vSphere

Configuring VMware vSphere on VNX Storage

c. Select Monitor for specific conditions or state, for example, CPU
usage or power state.
d. Add a trigger to warn at 80 percent and to alert at 90 percent
capacity.

Figure 39 Actions tab

e. Add an action to generate e-mail notifications when the condition
occurs.
When using VNX Thin Provisioning, it is important to correlate the
storage information presented in vCenter with storage utilization
from the storage array. This is done from within vSphere Client using
the EMC VSI for VMware vSphere Storage Viewer feature.
To accomplish this task, complete the following steps:
1. From vSphere Client, select an ESXi host.
2. Click the EMC VSI tab on the right. This tab provides three
sub-views of EMC storage information for Storage Viewer, listed
in the Features Navigation panel, grouped according to their
context: Datastores, LUNs, and Targets.
3. Click Datastores from the Feature Navigation panel. The Storage
Viewer Datastores information appears on the right.

4. Select the desired datastore from the Datastores list. The Storage
Details window lists the storage devices or the NFS export
backing the selected datastore.
Note: The highlighted VP column in Storage Details has a value of Yes if
Thin Provisioning is enabled on the LUN. Figure 40 shows the
information that appears in Storage Viewer for a VMFS datastore
provisioned on a VNX LUN.

EMC VSI provides a comprehensive view of NFS datastores as well.

Figure 40 Storage Viewer: Datastores view - VMFS datastore

Thin Provisioning enables physical storage to be over-provisioned.
The expectation is that not all users or applications require their full
storage complement at the same time. They can share the pool and
also conserve storage resources. However, there is a possibility that
applications may grow rapidly and request storage from a storage
pool with insufficient capacity. This section describes a procedure
used to avoid running into this condition with VNX LUNs.
Unisphere is used to monitor storage pool utilization as well as
display the current space allocations. Users can also add alerts to
objects that must be monitored with event monitor and send alerts as
e-mail, page, or SNMP traps:

• Usable pool capacity is the total physical capacity available to
  all LUNs in the storage pool.
• Allocated capacity is the total physical capacity currently assigned
  to all thin LUNs.
• Subscribed capacity is the total host-reported capacity supported by
  the pool.
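These values can also be checked from the VNX block command line. This
is a minimal sketch, assuming the Navisphere/Unisphere CLI (naviseccli)
is installed and 10.0.0.1 is a storage processor management address
(hypothetical); option names can vary by release, so verify them
against the CLI reference for your system:

naviseccli -h 10.0.0.1 storagepool -list   # display capacity and subscription details for each storage pool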


When LUN allocations begin to approach the capacity of the pool, the
administrator is alerted. Two non-dismissible pool alerts are
provided:

• A warning event is triggered when the pool exceeds a user-defined
  value between 1 and 84 percent.
• A critical alert is triggered when the pool reaches 85 percent.
Both alerts trigger an associated secondary notification if defined.
Complete the following steps to configure a user-defined alert on the
storage pool:
1. Access EMC Unisphere.
2. Select the VNX platform from the Systems list box on the top
menu bar. From the top menu bar, select Storage > Storage
Configuration > Storage Pools for Blocks. The Pools page
appears.
3. Select the storage pool in which to set the alert. Click Properties
to display the Storage Pool Properties page.
4. Click the Advanced tab.
5. In the Percent Full Threshold list box, type or select a value to
generate an alert that the storage pool is reaching the threshold.
In Figure 41 on page 96, the Percent Full Threshold value in the
Advanced tab of the Storage Pool Properties dialog box was set at 70
percent. Therefore, alerts are sent after the utilization of the storage
pool reaches 70 percent.


Figure 41 Adjustable percent full threshold for the storage pool

Adding drives to the storage pool non-disruptively increases the
available usable pool capacity.
Note: An important concept to understand is that allocated capacity is only
reclaimed by the pool when LUNs are deleted. Removing files or freeing
space within a virtual machine disk does not free space within the pool.
Monitor VNX thinly provisioned file storage using EMC Unisphere.

Users must monitor the space utilization of over-provisioned storage
pools and thinly provisioned file systems to ensure that they do not
become full and deny write access. Notifications can be configured
and customized based on the file system, storage pool usage, and
time-to-fill predictions. Notifications are particularly important when
over-provisioned resources are configured in the environment.


VNX File System Notifications are used to proactively monitor VNX
file systems used for NFS datastores and generate SMTP (e-mail) or
SNMP (network management) alerts when an event is triggered.
Multiple notification settings can be applied to the same resource to
alert on a trend or a worsening condition.
Configure VNX file system Storage Usage notification

To configure a notification based on the percentage of the maximum
file system size that is used, complete the following steps:
1. Access EMC Unisphere to select the VNX platform.
2. From the top menu bar, select System > Monitoring and Alerts >
Notifications for Files.
3. Click Storage Usage and click Create. The Create Storage Usage
Notification page appears.

Figure 42 Create Storage Usage Notification interface



4. Complete the following steps:
a. In the Storage Type field, select File System.
b. In the Storage Resource list box, select the name of the file
system.
Note: Notifications can be added for all file systems.

c. In the Notify On field, select Maximum Size.
Note: Maximum Size is the auto-extension maximum size and is
valid only for auto-extending file systems.

d. In the Condition field, type the percentage of storage (percent
used) and select % Used from the list box.
Note: Select Notify Only If Over-Provisioned to trigger the
notification only if the file system is over provisioned. If this
checkbox is not selected, a notification is always sent when the
condition is met.

e. Type the e-mail or SNMP address, which consists of an IP address or
hostname and community name. Separate multiple e-mail addresses or
trap addresses with commas.
f. Click OK. The configured notification appears in the Storage
Usage page.

Figure 43 User-defined storage usage notifications

Configure VNX file system storage projection notification

To configure notifications based on the projected time until the file
system is full (including automatic file system extension), complete
the following steps:
1. Access EMC Unisphere to select the VNX platform.


2. From the top menu bar, select System > Monitoring and Alerts >
Notifications for Files.
3. Click Storage Usage and click Create.
4. Complete the following steps:
a. In the Storage Type field, select File System.
b. In the Storage Resource list box, select the name of the file
system.
Note: Notifications can be added for all file systems.

c. In the Warn Before field, type the number of days to send the
warning notification before the file system is projected to be
full.
Note: Select Notify Only If Over-Provisioned to trigger this notification only
if the file system is over provisioned.

d. Specify optional e-mail or SNMP addresses.


e. Click OK. The configured notification is displayed in the
Storage Projection page.

Figure 44 User-defined storage projection notifications


Note: There is no comparable capability for block storage. VSI provides a
useful way to monitor space utilization.


Storage efficiency
Thin Provisioning and compression are practices that administrators
can use to efficiently store data. This section describes how these
technologies are used in a vSphere and VNX environment.

Thinly provisioned storage

Thin Provisioning is a storage efficiency technology that exists within
VMware vSphere and EMC VNX. With Thin Provisioning, a host is
presented with a storage device that has not been fully allocated.
Only an initial allocation is performed using a portion of the device
capacity. Additional space is consumed when it is required by the
user, application, or operating system. When using vSphere with
VNX, the following thin provisioning combinations exist:

• On ESXi, using ESXi Thin Provisioning
• On VNX file systems, through thinly provisioned VNX file systems
• On VNX LUNs, through VNX Thin LUNs

When using thin provisioning, monitor the storage utilization to
prevent an accelerated out-of-space condition. When using thin
virtual disks on thin LUNs, the storage pool is the authoritative
resource for storage capacity. Monitor the pool to avoid an
out-of-space condition.
Virtual Machine Disk allocation

VMware offers three options for provisioning a virtual disk. They are
Thin, ZeroedThick (or Thick), and Eagerzeroedthick. A description of
each along with a summary of their impact on VNX Storage Pools is
provided in Table 4 on page 101. Any of the formats listed in the table
can be provisioned from any supported VNX storage device (Thin,
Thick, VNX OE, or NFS).


Table 4 Allocation policies when creating new virtual disks on a VMware datastore

Thin (NFS default)
  VMware kernel behavior: Does not reserve any space on the VMware file
  system on creation of the virtual disk. The space is allocated and
  zeroed on demand.
  Impact on VNX pool: Minimal initial VNX pool allocation. Allocation
  is demand based.

Zeroedthick (VMFS default)
  VMware kernel behavior: All space is reserved at creation, but is not
  initialized with zeroes. However, the allocated space is wiped clean
  of any previous contents of the physical media. All blocks defined by
  the block size of the VMFS datastore are initialized on the first
  write.
  Impact on VNX pool: Reserves the VMDK size within the LUN/pool.
  Allocation is performed when blocks are zeroed by the virtual
  machine.

Eagerzeroedthick
  VMware kernel behavior: Allocates all of the space and initializes
  every block with zeroes. This allocation mechanism performs a write
  to every block of the virtual disk.
  Impact on VNX pool: Full allocation of space in the VNX storage pool.
  No thin benefit.

RDM
  VMware kernel behavior: The virtual disk created in this mechanism is
  a mapping file that contains the pointers to the blocks of the SCSI
  disk it is mapping. However, the SCSI INQ information of the physical
  media is virtualized. This format is commonly known as the "virtual
  compatibility mode of raw disk mapping."
  Impact on VNX pool: Allocation is dependent on the type of file
  system or application.

RDMp
  VMware kernel behavior: This format is similar to the RDM format.
  However, the SCSI INQ information of the physical media is not
  virtualized. This format is commonly known as the "pass-through raw
  disk mapping."
  Impact on VNX pool: Allocation is dependent on the type of file
  system or application.
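The virtual disk formats in Table 4 can also be selected from the ESXi
command line when a virtual disk is created with vmkfstools. This is a
minimal sketch, assuming a datastore named datastore1 and a virtual
machine folder vm1 (both hypothetical); omitting the -d option creates
the zeroedthick default:

# Create a 20 GB thin virtual disk
vmkfstools -c 20g -d thin /vmfs/volumes/datastore1/vm1/vm1_thin.vmdk

# Create a 20 GB eagerzeroedthick virtual disk (full allocation up front)
vmkfstools -c 20g -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1_ezt.vmdk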

Thinly provisioned block-based storage

With respect to the type of VNX storage, Thin LUNs are the only
devices that support oversubscription. Thin LUNs are created from
storage pools that preserve space by delaying block allocation until it
is required by an application or guest operating system. Although
Thick LUNs are created from storage pools, their space is always
reserved and thus they have no thin provisioning benefits. Similarly,
the blocks assigned for VNX OE LUNs are always allocated within
RAID Groups with no thin-provisioned option.
When referring to block-based thin provisioning within this section,
the focus is exclusively on VNX Thin LUNs for VMFS or RDM
volumes.

VMFS datastores are thin friendly, meaning that a VMware file
system created on a Thin LUN uses a minimal number of extents
from the storage pool. A VMFS datastore reuses previously allocated
blocks thus benefiting from Thinly Provisioned LUNs. When using
RDM volumes, the file system of the guest OS dictates if the RDM
volume is thin friendly.
Virtual machine disk options with block storage
The default VMDK format with vSphere is zeroedthick, which reserves
space within the VMFS but does not initialize or zero the blocks
during creation.
RDM volumes are formatted by the guest operating system. Hence,
virtual disk options such as zeroedthick, thin, and eagerzeroedthick
only apply to VMFS volumes.
From an allocation standpoint, space is reserved at the VMFS level,
but it is not allocated until the blocks within the VMDK are zeroed. In
the example in Figure 45, a 500 GB VMDK has been created and 100
GB has been written to the disk. This results in 500 GB of file space
reserved from the VMFS file system and 100 GB of space allocated in
the VNX storage pool. Zeroedthick provides some performance
benefits in allocation time and potential space utilization within the
storage pool. After the blocks have been allocated, they cannot be
compressed again.
Note that Quick Format helps to preserve storage space. If a Windows
volume is given a full NTFS format, each block is zeroed, which
performs a full allocation at the storage pool level. Use Quick Format
for NTFS volumes to preserve space.
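For example, a quick format of a guest data volume (the drive letter
F: and the volume label are illustrative assumptions) avoids writing
zeroes to every block:

format F: /FS:NTFS /Q /V:Data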

Figure 45 Thick or zeroedthick virtual disk allocation

Thin virtual disks can be used to preserve space within the VMFS
datastore. The thin VMDK only allocates VMFS blocks needed by the
virtual machine for guest OS or application use. Thin VMDKs can be
created on a Thick LUN to preserve space within the file system or on
a Thin LUN to extend that benefit to the storage pool. In the example
in Figure 46, the same 500 GB virtual disk is created within a VMFS.
This time the disk is created in a thin-provisioned format. With this
option, the VMFS only uses 100 GB within the file system and 100 GB
within the VNX storage pool. Additional space is allocated when it is
required by the virtual machine. Additionally, the allocation unit is
the equivalent of the block size used to format the VMFS. So rather
than allocating at the 4k or 8k block that the virtual machine uses, the
minimum allocation size for ESXi is 1 MB, which is the default block
size for a VMFS volume, and can be as large as 8 MB, which is the
maximum block size used by VMFS. This is beneficial when using Thin
on Thin.

Figure 46 Thin virtual disk allocation


Figure 47 Virtual machine disk creation wizard

Zeroedthick virtual disks are created by default when neither the
"Allocate and commit space on demand" option nor the "Support
clustering" option is selected. In the example in Figure 47, neither
option is selected, so a zeroedthick VMDK is created.
Selecting the zeroedthick option for virtual disks on VMFS volumes
affects the space allocated to the guest file system (or writing pattern
of the guest OS device). If the guest file system initializes all blocks,
the virtual disk needs all the space to be allocated up front. When the
first write is triggered on a zeroedthick virtual disk, it writes zeroes
on the region defined by the VMFS block size and not just the block
that was written to by the application. This behavior affects the
performance of array-based replication software because more data,


which is not required, needs to be copied based on the VMFS block
size. However, it also alleviates some of the concerns about
fragmentation with Thin on Thin.
In ESXi, a virtual machine disk on thinly provisioned storage can be
configured as zeroedthick or thin.
When using the thin virtual disk format, the VMFS datastore is aware
of the space consumed by the virtual machine. However, VMFS
datastore free capacity must be monitored to avoid an out-of-space
condition; vSphere provides a simple alert when datastore thresholds
are reached.
In addition, with ESXi, when using vCenter features such as Cloning,
Storage vMotion, Cold Migration, and Deploying a template, the
zeroedthick or thin format remains intact on the destination
datastore. In other words, the consumed capacity of the source virtual
disk is preserved on the destination virtual disk and not fully
allocated.
Because the virtual machine is not thin-aware, there is the potential to
encounter an out-of-space condition when the storage pool backing a
Thin LUN reaches its capacity. If the Thin LUN cannot accommodate
any writes and block requests made by the host while in this state,
VMware ESXi 4.1 "stuns" the virtual machine (pauses it) and
generates a pop-up message on the vSphere Client alerting the user to
the problem, as shown in Figure 48.

Figure 48 Virtual machine out-of-space error message


After additional space has been added or reclaimed from the storage
pool, the virtual machine can resume execution without any adverse
effects by selecting the Retry option. If an application times out while
waiting for storage capacity to become available, the application
must then be restarted. The Stop option causes the virtual machine to
be powered off.
Thinly provisioned file-based storage

File-based thin provisioning with VNX is available using VNX Thin
Provisioning for file systems. Both USM and Unisphere are able to set
up Thin Provisioning on a file system.
When a new NFS datastore is created with EMC VSI, Thin
Provisioning and Automatic File system extension are automatically
enabled.

Figure 49 File system Thin Provisioning with EMC VSI: USM feature


Automatic File system Extension on the file system is controlled by
the High Water Mark (HWM) value located in the Advanced screen
as shown in Figure 49 on page 106. This value (percentage) dictates
when to extend the file system. By default, VSI sets HWM to 90
percent meaning that the file system is extended when the used
capacity reaches 90 percent. After the NFS datastore is created by VSI,
it is presented to VMware ESXi host with the file system's maximum
capacity.
The ESXi host is unaware of the file system's currently allocated
capacity. However, using the EMC VSI for VMware vSphere: Storage
Viewer, it is possible to view the currently allocated capacity of the
file system.
Additional virtual machines can be created on the datastore even
when the aggregated capacity of all their virtual disks exceeds the
datastore size. Therefore, it is important to monitor the utilization of
the VNX file system to identify and proactively address upcoming
storage shortage.
Note: Monitor and manage storage on page 92 provides further details on
how to monitor the storage utilization with VMware vSphere and EMC VNX.

The thin provisioning virtual disk characteristics are preserved when
a virtual machine is cloned, migrated to another datastore, or its
virtual disk is extended.
VNX-based block and file system operations that affect a datastore
are transparent to the virtual machine disks stored in them.
Virtual-provisioning characteristics of the virtual disk are preserved
during all the operations.


VMware vSphere virtual disks created on NFS datastores are always thin
provisioned. The virtual disk provisioning policy setting for NFS is
shown in Figure 50.

Figure 50 Provisioning policy for an NFS virtual machine's virtual disk

LUN Compression

VNX LUN Compression offers capacity savings to the users for data
types with lower performance requirements. LUNs presented to the
VMware ESXi host are compressed or decompressed as needed. As
shown in Figure 51 on page 109, compression is a LUN attribute that
can be enabled and disabled on a per-LUN basis. When enabled, data
on disk is compressed in the background. If the source is a VNX OE
or Thick Pool LUN, it undergoes an online migration to a thin LUN
when compression is enabled. Additional data written by the host is
initially stored uncompressed, and system-defined thresholds are
used to automatically trigger the compression of new data
asynchronously. Host reads of compressed data are decompressed in
memory but left compressed on disk. These operations are largely


transparent to the end user and, once enabled, the system
automatically manages the processing of new data in the
background.

Figure 51 LUN Compression property configuration

The inline read and write operations on compressed data affect the
performance of individual I/O threads, and therefore compression is
not recommended in the following cases:
• I/O-intensive or response-time-sensitive applications
• Active database or messaging systems
Compression can successfully be applied to more static data sets such
as archives (virtual machine templates) and non-production clones of
database or messaging system volumes running on virtual machines.


If compression is disabled on a compressed LUN, the entire LUN is
processed in the background. When the decompression process
completes, the LUN remains a Thin LUN and remains in the same
pool. Capacity allocation of the Thin LUN after decompression
depends on the original pre-compression LUN type.

File Deduplication and Compression

The VNX File Deduplication and Compression feature provides data
reduction for files through data compression and data deduplication.
The main objective of VNX File Compression is to improve file
storage efficiency by compressing files stored on a VNX file system.
Deduplication is used to eliminate redundant files in a file system
with minimal impact to the end user. Reducing data consumption in
the file system results in reduced cost per megabyte and improved
total cost of ownership of VNX.
VNX File Deduplication and Compression provide data reduction
cost savings capabilities in two usage categories:

• Efficient deployment and cloning of virtual machines that are stored
  on VNX file systems using NFS.
• Efficient storage of file-based business data stored on NFS/CIFS
  network shares accessed by virtual machines.

Efficient deployment and cloning of virtual machines stored on VNX file systems over NFS
VNX File Deduplication and Compression can target active virtual
disk files (VMDK files) for data compression and cloning purposes.
This feature is available for VMware vSphere virtual machines that
are deployed on VNX-based NFS datastores.
Virtual machine compression with VNX File Deduplication and
Compression
With this feature, the VMware administrator can compress a virtual
machine disk at the VNX level and thus reduce the file system storage
consumption by up to 50 percent. There is some CPU overhead
associated with the compression process, but VNX includes several
optimization techniques to minimize this performance impact.
Virtual machine cloning with VNX File Deduplication and
Compression
VNX File Deduplication and Compression provides the ability to
perform efficient, array-level cloning of virtual machines. Two
cloning alternatives are available:


• Full Clone: With this operation, a full virtual machine clone is
  created that is comparable to a native VMware vSphere clone
  operation. A full VNX virtual machine clone operation is performed
  on the storage system rather than at the ESXi level, saving the ESXi
  CPU cycles required to perform the native cloning operation. The
  result is an efficient virtual machine clone operation that can be
  up to 2 to 3 times faster than a native vSphere virtual machine
  clone operation.
• Fast Clone: With this operation, the cloned virtual machine holds
  only the blocks that have changed between the replica and the source
  virtual machine it refers to. This is very similar to a VNX LUN
  snapshot operation except that in this case the operation is done on
  files rather than LUNs. A fast clone is created in the same file
  system as the source virtual machine. Unchanged block reads are
  satisfied by the source files, and updated blocks are delivered by
  the fast clone files. Fast clone creation is an almost instantaneous
  operation because no data needs to be copied from the source virtual
  machine to the target device.

All of the compression and cloning operations available in VNX File
Deduplication and Compression are virtual machine based rather than
file system based. This provides the administrator with high
flexibility to use VNX File Deduplication and Compression with VMware
vSphere to further increase the VNX storage efficiency.
The EMC VSI for VMware vSphere: Unified Storage Management Product
Guide provides further information on how to efficiently compress and
clone virtual machines using USM and VNX File Deduplication and
Compression.
Efficient storage of file-based business data stored on NFS/CIFS network shares that are
mounted or mapped by virtual machines
VNX File Deduplication and Compression can also provide a high
degree of storage efficiency by eliminating redundant files with
minimal impact on the end-user experience. This feature will also
compress the remaining data.
VNX File Deduplication and Compression automatically targets files
that are the best candidates for deduplication and subsequent
compression in terms of the file access frequency and file size. When
using tiered storage architecture, VNX File Deduplication and
Compression can also be enabled on the secondary tier to reduce the
archived dataset size.


With VMware vSphere, VNX File Deduplication and Compression
are used on file systems that are mounted or mapped by virtual
machines using NFS or CIFS. This is most suitable for business data
such as home directories and network-shared folders. Similarly, use
VNX File Deduplication and Compression to reduce the space
consumption of archived virtual machines, thus eliminating
redundant data and improving storage efficiency of the file systems.


2
Cloning Virtual Machines

This chapter contains the following topics:

Introduction ...................................................................................... 114
Using EMC VNX cloning technologies ......................................... 115
Summary ........................................................................................... 126


Introduction
To help meet the ever-increasing demands put on resources, IT
administrators create exact replicas, or clones of existing, fully
configured virtual machines to quickly deploy a group of virtual
machines. Cloning works by creating a copy of the VMDKs and
configuration files on the virtual machine.
VMware vSphere provides two native methods to clone virtual
machines: the Clone Virtual Machine wizard in vCenter Server and
the VMware vCenter Converter.

• Clone Virtual Machine wizard: Accessible from the vCenter Server,
  the wizard enables users to clone a virtual machine on any datastore
  (with sufficient space) visible to the ESXi host or cluster to which
  the virtual machine is registered. In addition to creating an exact
  copy of the virtual machine, the wizard can also reconfigure the new
  virtual machine.
• VMware vCenter Converter: Integrated with the vCenter Server, this
  tool enables users to convert any machine that runs the Windows
  operating system into a virtual machine on an ESXi host. In
  addition, the tool can clone an existing virtual machine.

The VNX platform provides the following technologies to enable users
to efficiently clone virtual machines within the VNX platform without
consuming ESXi cycles:
• VNX SnapView for block storage using the FC, iSCSI, or FCoE protocol
• VNX SnapSure for file systems when using the NFS protocol
• VMware VAAI technology for block storage, which accelerates native
  virtual machine cloning
• VNX File Deduplication and Compression for individual virtual
  machine cloning


Using EMC VNX cloning technologies


This section explains how to use the EMC VNX series technologies to
clone virtual machines. The VNX platform-based technologies
produce exact copies of the source storage backing virtual machines.
Take the following precautions to ensure that all data is fully
committed and that the clone is recoverable:

• Shut down or quiesce the applications running on the virtual
  machines before initiating the creation of the clone to ensure that
  all data from memory is committed to the virtual disk.
• Perform system reconfiguration or sysprep to customize the operating
  system identity of the virtual machine and prevent network and
  software conflicts.

Replica virtual machines must be modified to avoid identity or
network conflicts with other systems, including the source. It is best
to customize Windows virtual machines before cloning with VNX
SnapView or VNX SnapSure. Run the Windows System Preparation
tool (sysprep) on the virtual machine to generate a new security
identifier and network address for the new virtual machine. Using
sysprep, a new virtual machine identity is created when the system is
booted for the first time and potential identity conflicts are limited
when the replica virtual machine is brought up onto the network.

Clone virtual machines at the LUN level with VNX SnapView


This section explains how to clone virtual machines over
FC/iSCSI/FCoE storage at the LUN level with VNX SnapView.
SnapView snapshots offer a logical point-in-time copy of a LUN. Use
SnapView technology to clone a VMFS datastore or RDM LUN
containing virtual machines.
SnapView enables users to create LUN-level copies for testing,
backup, and recovery operations. SnapView includes two flexible
options:

• Pointer-based, space-saving snapshots: SnapView snapshots use a
  pointer-based technique to create an image of a source LUN. Because
  snapshots use a copy-on-first-write technique, the target devices
  are available immediately after creation and only require a fraction
  of the disk space of the source LUN. A single source LUN can have up
  to eight snapshots, each reflecting a different view of the source.
• Highly functional, full-volume clones: SnapView clones create
  full-image copies of a source LUN that can be established,
  synchronized, fractured, and presented to a different host for
  backup or other business purposes. Because SnapView tracks changes,
  subsequent establish operations only require copying the changed
  tracks. A single clone source LUN can have up to eight simultaneous
  target clones.

Cloning VMFS datastores
VNX LUNs presented to the ESXi host are formatted as VMFS
datastores. SnapView clones create block-for-block replicas of the
VNX source LUN and also preserve the VMware file system and its
contents. A SnapView clone consumes the same amount of storage as
the source LUN (Thick or Thin LUN). SnapView can be managed
from Unisphere or Navisphere CLI.
To create and present a point-in-time SnapView clone, complete the
following steps:
1. Use the VM-aware Unisphere feature or EMC Storage Viewer to
identify the following:
a. VNX LUN number used by the VMFS datastore.
b. Virtual machines that reside in the VMFS datastore.
2. Define a clone group for each VNX LUN that holds the data to be
cloned.
3. Add clone target LUNs to each clone group. The addition of the
target devices automatically starts the SnapView clone
synchronization process.
4. After the clone volumes are synchronized with the source
volume, fracture them from the source volume to create a full
point-in-time copy.
Multiple VNX clones can be created from the same source LUN. To
use the clone, fracture it from the source LUN as shown in Figure 52
on page 117. Specify the consistent switch to perform a consistent
fracture in the Navisphere CLI. Any VMware ESXi host with access to
the clone volumes is presented with a consistent point-in-time
read/write copy of the source volumes at the moment of fracture.


Figure 52 Performing a consistent clone fracture operation

To create and present SnapView snapshots, complete the following
steps:
1. Use the VM-aware Unisphere feature to identify the source
devices to be snapped.
2. Use Unisphere to create a SnapView snapshot of the source
devices.
3. Use either Unisphere or Navisphere CLI, as shown in Figure 53
on page 118, to start a SnapView session on the source device.
4. Access the SnapView session by activating the SnapView
snapshot device session that was previously created.


Figure 53 Create a SnapView session to create a copy of a VMware file system

Clone virtual machines for RDM volumes

EMC recommends using a copy of the source virtual machine's
configuration file to provide access to devices from the virtual
machines. SnapView technology creates a logical point-in-time copy
of the RDM volume. This RDM volume is presented to a virtual
machine.


An RDM volume has a one-to-one relationship with a virtual
machine (or a virtual machine cluster). To clone a virtual machine
stored on an RDM volume, create a SnapView clone or a snapshot of
the LUN that is mapped by the RDM volume.
To create or add virtual machines from replicated RDMs to present to
an ESXi host:
1. Access the host where the clone is going to be created.
2. Create a folder or directory to hold the configuration files for
the new virtual machine. The datastore storage type can be local
disk, SAN, or NFS.
3. Use the command line utility scp, or the Datastore Browser in the
vSphere Client, to copy the configuration file of the source virtual
machine to the directory created in step 2. Repeat this step if the
4. Register the cloned virtual machine using the vSphere client.
5. Edit the virtual machine settings:
a. Remove the existing Hard Disk entry referring to the source
RDM.
b. Add a new Hard Disk as an RDM device and associate it with the
cloned RDM.
6. Power on the cloned virtual machine using the vSphere client.
Identify clones

This section explains how VMware ESXi hosts identify copies.


A fractured clone copy or an activated point-in-time snapshot copy is
ready for access and can be presented to the ESXi host through the
existing VNX storage group. The VMkernel assigns a unique
signature to all VMFS volumes when formatted with the VMware file
system. If the VMware file system is labeled, that information is also
stored on the device. Because storage array technologies create exact
replicas of the source volumes, all information, including the unique
signature (and label, if applicable), is replicated.
If a copy of a VMFS-3 volume is presented to any VMware ESXi host
or cluster group, the VMkernel automatically masks the copy. The
device holding the copy is determined by comparing the signature
stored on the device with the computed signature. For example, a
clone has a different unique ID than the source device it was created
from. Therefore, the computed signature for a clone device always


differs from the one stored on it. This enables the VMkernel to
identify the copy correctly. vSphere provides selective resignaturing
at an individual LUN level and not at the ESXi host level.
Note: There are host-level advanced options such as
LVM.EnableResignature = 1. However, they are not the recommended
approach to resignaturing with vSphere 4.

After a rescan, the user can either keep the existing signature of the
replica (LUN) or resignature the replica (LUN) if needed:

• Keep the existing signature: Presents the copy of the data with the
  same label name and signature as the source device. However, on
  VMware ESXi hosts with access to both the source and target devices,
  the parameter has no effect because VMware ESXi does not present a
  copy of the data if there are signature conflicts.
• Assign a new signature: Automatically resignatures the VMFS-3
  volume holding the copy of the VMware file system with the computed
  signature (using the UID and LUN number of the target device). In
  addition, it appends the label with "snap-x," where x is a
  hexadecimal number that can range from 0x2 to 0xFFFFFFFF.

To resignature a SnapView clone or snapshot LUN, complete the
following steps:
1. Unmount the mounted datastore copy.
2. Rescan the storage on the ESXi host so that it updates its view of
LUNs presented to it and discovers any LUN copies.
To assign a new signature to a vStorage VMFS datastore copy, complete
the following steps:
1. Log in to vSphere Client and select the host from the Inventory
area.
2. Click Configuration and click Storage in the Hardware area.
3. Click Add Storage.
4. Select the Disk/LUN storage type and click Next.
5. From the list of LUNs, select the LUN that displays a datastore
name in the VMFS Label column and click Next. The Select
VMFS Mount Options dialog box appears.


Note: The name present in the VMFS Label column indicates that the
LUN is a copy of an existing vStorage VMFS datastore.

6. Select Keep the existing signature or Assign a new signature and
click Next.

Figure 54 Assign a new signature

The Ready to Complete page appears.


7. Review the datastore configuration information and click Finish.
8. Browse for the virtual machine's VMX file in the newly created
datastore, and add it to the vCenter inventory.
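The detection, resignature, and mount operations can also be performed
from the ESXi command line with esxcfg-volume. This is a minimal
sketch, assuming a vSphere 4.x host and a replica of a datastore
labeled datastore1 (hypothetical); verify the option names against
your release before use:

esxcfg-volume -l              # list detected snapshot/replica VMFS volumes
esxcfg-volume -r datastore1   # assign a new signature to the copy
esxcfg-volume -M datastore1   # or mount it persistently and keep the existing signature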

Clone virtual machines on VNX NFS datastores with VNX SnapSure


The VNX SnapSure feature creates a logical point-in-time image
(checkpoint) of a production file system (PFS). In a VMware
environment, the production file system contains an NFS datastore
with metadata and virtual disks associated with the virtual machines
to be cloned. The checkpoint file system must be in a read/write
mode to clone a virtual machine with SnapSure. The writeable
checkpoint file system is created by using Unisphere as shown in
Figure 55 on page 122.


Figure 55 Create a writeable checkpoint for a NAS datastore

Execute the following command syntax in the CLI to create writeable
checkpoint file systems:
# fs_ckpt <NAS file system checkpoint> -Create -readonly n

To start the virtual machine, the VMkernel requires read/write and
root access to the checkpoint file system. Provisioning file storage
for NFS datastores on page 71 provides more details. Export the
checkpoint file system to the ESXi hosts providing them with root
level access.
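A minimal sketch of this export from the Control Station CLI, assuming
a Data Mover named server_2, a writeable checkpoint file system mounted
at /vm_ckpt1, and an ESXi VMkernel address of 10.0.0.21 (all
hypothetical); confirm the syntax against the VNX for File man pages:

server_export server_2 -Protocol nfs -option root=10.0.0.21,access=10.0.0.21 /vm_ckpt1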

To import multiple virtual machines on a checkpoint file system,
complete the following steps in vCenter:
1. Select an ESXi host with access to the checkpoint file system.
2. Click the Configuration tab to select the Add Storage Wizard.
3. Add the writeable checkpoint file system to the ESXi host as an
NFS datastore.
4. Browse for the new datastore and add the VMX files of the virtual
machines to the vCenter inventory.
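Steps 2 and 3 can also be performed from the ESXi command line with
esxcfg-nas. This is a minimal sketch, assuming the Data Mover NFS
interface is 10.0.0.10, the checkpoint export path is /vm_ckpt1, and
the datastore label is ckpt_ds (all hypothetical):

esxcfg-nas -a -o 10.0.0.10 -s /vm_ckpt1 ckpt_ds   # add the NFS export as a datastore
esxcfg-nas -l                                     # list NFS datastores to confirm the mount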

Clone virtual machines with native vCenter cloning and VAAI


This section explains how to clone individual virtual machines on
FC/iSCSI/FCoE storage using native vCenter cloning and VAAI.
VMware vCenter's native cloning technology consumes ESXi CPU,
memory, and storage resources. The amount of resources used is
directly proportional to the amount of data to be copied.
VAAI allows VMware vSphere (version 4.1 and later) to take
advantage of efficient disk-array storage functions as an alternative to
VMware host-based functions. These vStorage APIs enable close
integration between VMware vSphere and storage hardware to:

• Enable better quality of service to applications running inside
  virtual machines.
• Improve availability by enabling rapid provisioning.
• Increase virtual machine scalability.

vStorage API supports both VMFS datastores and RDM volumes and
works with the VNX platform FC, iSCSI, and FCoE protocols. The
storage systems must use VNX OE code, release 31 or later for the
host to use the new vStorage APIs. The Full Copy feature of the VAAI
suite offloads the virtual machine cloning operations to the storage
system.
Note: VAAI support is provided with VNX storage systems running VNX OE
for block version 5.31.

The host issues the Extended Copy SCSI command to the array and
directs the array to copy the data from a source LUN to a destination
LUN or to the same source LUN. The array uses its efficient internal
mechanism to copy the data and return Done to the host. Note that

the host can issue multiple Extended Copy commands depending on
the amount of data that needs to be copied. Because the array
performs the copy operation, unnecessary read and write requests
from the host to the array are eliminated, significantly reducing the
host I/O traffic. The Full Copy feature is only supported when the
source and destination LUNs belong to the same VNX platform. If the
source and destination datastores are not aligned (all datastores
created with the vSphere Client are aligned automatically), the Full
Copy operation fails and the data is copied by the traditional
host-based method. Administrators find the Full Copy feature useful
to:

• Create multiple copies of a virtual machine on the same LUN or a
  different LUN.
• Migrate a virtual machine by using Storage vMotion from one LUN to
  another LUN on the same storage system.
• Deploy virtual machines from a base template to the same LUN or a
  different LUN.

Clone individual virtual machines on NFS datastores


The VNX File Deduplication and Compression feature eliminates redundant
data from files to increase file system storage efficiency. VMware
vSphere and VNX File Deduplication and Compression provides
data reduction cost-saving capabilities by efficiently deploying and
cloning virtual machines stored on VNX for file systems.
The two cloning alternatives available to the VMware administrator are
the following:

• Full Clone: A full clone operation can be done across file systems
  from the same Data Mover. Full clone operations occur on the Data
  Mover rather than on the ESXi host, which frees ESXi cycles and
  network resources by eliminating the need to pass data to the ESXi
  host. By removing the ESXi host from the process, the virtual
  machine clone operation can complete two to three times faster than
  a native vSphere virtual machine clone operation.
• Fast Clone: A fast clone operation is done within a single file
  system. Fast clones are near-instantaneous operations executed at
  the Data Mover level with no external data movement. Unlike full
  clones, fast clone virtual machine images hold only the changes to
  the cloned virtual machines and refer to the source virtual machine
  for unchanged data. They are stored in the same folder as the source
  virtual machine.


Data deduplication virtual machine cloning operations are virtual
machine based rather than file system based and provide the
flexibility to use Data Deduplication with VMware vSphere to reduce
the storage consumption.
Use the EMC VSI for VMware vSphere: USM feature to configure
VNX File Deduplication and Compression. This feature enables the
VMware administrator to perform VNX platform based virtual
machine cloning operations within the VMware vSphere Client.
Figure 27 on page 70 shows an example of using USM on a VMware
vSphere virtual machine.
The EMC VSI for VMware vSphere: Unified Storage Management
Product Guide available on Powerlink provides more information on
the USM feature and how to use it with VMware vSphere.


Summary
The VNX platform-based technologies provide an alternative to
conventional VMware-based cloning. VNX-based technologies create
virtual machine clones at the storage layer in a single operation.
Offloading these tasks to the storage systems provides faster
operations with limited vSphere CPU, memory, and network
resource consumption.
VNX-based technologies provide options for administrators to:

• Clone a single or small number of virtual machines and maintain the
  granularity of an individual virtual machine.
• Clone a large number or all of the virtual machines on a datastore
  or LUN with no granularity of individual virtual machines.

Table 5 VNX-based technologies for virtual machine cloning options

Block storage (VMFS datastores or RDM)
  Individual virtual machine granularity for a small number of virtual
  machines: VMware native cloning with VAAI Full Copy
  No granularity for a large number of virtual machines: VNX SnapView

Network-attached storage (NFS datastores)
  Individual virtual machine granularity for a small number of virtual
  machines: VNX File Data Deduplication using Full Clone or Fast Clone
  No granularity for a large number of virtual machines: VNX SnapSure


3
Establishing a Backup and Recovery Plan for VMware vSphere on VNX Storage

This chapter contains the following topics:

Introduction ...................................................................................... 128
Virtual machine data consistency .................................................. 129
VNX native backup and recovery options ................................... 131
Backup and recovery of a VMFS datastore .................................. 134
Backup and recovery of RDM volumes ........................................ 138
Replication Manager ....................................................................... 139
vStorage APIs for Data Protection ................................................. 143
Backup and recovery using VMware Data Recovery ................. 145
Backup and recovery using Avamar ............................................. 148
Backup and recovery using NetWorker ....................................... 157
Summary ........................................................................................... 164


Introduction
The combination of EMC Data Protection technologies and VMware
vSphere offers many backup and recovery options for virtual
machines provisioned from VNX storage. It is important to determine
recovery point objective (RPO) and recovery time objective (RTO) so
that an appropriate method is used to meet the service level
requirements and minimize downtime.
At the storage layer, two types of backup are discussed in this
chapter: logical backup and physical backup. A logical backup
(snapshot) provides a view of the VNX file system or LUN at a
specific point in time. Logical backups are created rapidly and require
very little storage space so they can be created frequently. Restoring
from a logical backup can also be done quickly, dramatically reducing
the mean time to recover. The logical backup protects against logical
corruption of the file system or LUN, accidental deletion of files, or
other similar human errors.
A logical backup cannot replace a physical backup. A physical
backup creates a full copy of the file system or LUN on different
physical media. Although backup and recovery time may be longer, a
physical backup provides a higher level of protection because it can
withstand a hardware failure on the source device. Physical backups
guard against data unavailability caused by hardware failure.


Virtual machine data consistency


In environments where VNX provides storage to the ESXi host,
administrators use the backup technologies described in this chapter
for crash consistency. In a simplified configuration where the virtual
machine's guest OS, application data, and application logs are stored
on a single datastore, crash consistency is achieved by using one of
the VNX Data Protection technologies. Many application vendors,
especially database vendors, advocate separating data and log files
across different file systems or LUNs to achieve better performance.
Separating the files allows a virtual machine to have multiple virtual
disks (.vmdk files) spread across several datastores. It is critical to
maintain data consistency across datastores when backup or
replication occurs. Use VMware snapshots with the VNX consistency
group technologies to provide crash consistency in such scenarios.
A VMware snapshot is a software-based technology that operates on
a single virtual machine. When a VMware snapshot is taken, it
quiesces all I/Os from the guest operating system and captures the
entire state of a virtual machine including its configuration settings,
virtual disk contents, and optionally the contents of the virtual
machine's memory.
The virtual machine ceases to write to the existing virtual disk file,
and after a short pause it starts to write to a newly created virtual
disk delta file. Because the I/Os to the original virtual disks are
frozen, the virtual machine can revert to the snapshot by discarding
the delta files. If the snapshot is deleted, the delta file and virtual disk
files are merged to create a single file image of the virtual disk.
After the VMware snapshot is taken, initiate a virtual machine
backup by creating an EMC SnapSure checkpoint of a NAS datastore,
or a SnapView snapshot of a VMFS or RDM LUN. Snapshots of
datastores containing all virtual disks that belong to the virtual
machines constitute the entire backup set. All files related to a
particular virtual machine must be restored together to revert to the
system state when the VMware snapshot was taken. To use the
snapshot approach, organize the datastore to include only virtual
machines that can be backed up and restored together. Otherwise
restoring a LUN is not possible without impacting all the virtual
machines in the datastore.
If the backup set is intact, crash consistency is maintained even if the
virtual machine has virtual disks provisioned across different storage
types or protocols (VMFS, NFS, or RDM Virtual Mode).

To perform backup operations, complete the following steps:
1. Initiate a VMware snapshot. Set the flags to capture the memory
state and quiesce the file systems.
2. Create VNX checkpoints for file systems, or snapshots for block
storage systems containing the virtual machine disks to be backed
up.
Note: EMC Storage Viewer and Unisphere Virtualization views assist
with identification of the VNX storage devices backing each datastore.
VSI Storage Viewer on page 21 provides more details.

3. Delete the VMware snapshot.
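Step 1 of this procedure can also be scripted from the ESXi console
with vim-cmd. This is a minimal sketch, assuming console access and a
virtual machine whose inventory ID is 16 (hypothetical); the trailing
arguments request that memory be captured and that the guest file
systems be quiesced:

vim-cmd vmsvc/getallvms                                    # list virtual machines and their inventory IDs
vim-cmd vmsvc/snapshot.create 16 pre-backup "VNX backup" 1 1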


To perform restore operations, complete the following steps:
1. Power off the virtual machine.
2. Perform checkpoint or snapshot restore of all datastores
containing virtual disks that belong to the virtual machine.
3. Update the virtual machine status within the vSphere UI by
restarting the management agents on the console of the ESXi host.
Detailed information is available in VMware KB 1003490.
(Wait about 30 seconds for the refresh, and then proceed.)
4. Open the VMware Snapshot Manager, revert to the snapshot
taken during the backup operation, and then delete the snapshot.
5. Power on the virtual machine.
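A minimal command-line sketch of the restore steps follows. The VM ID and
snapshot ID are placeholders, and the exact argument order of the vim-cmd
snapshot subcommands varies between ESXi releases, so verify it with the
built-in help before use.

# Restart the ESXi management agents so the restored files are recognized
# (see VMware KB 1003490)
services.sh restart

# List the virtual machine's snapshots and note the snapshot ID
vim-cmd vmsvc/snapshot.get 42

# Revert to the snapshot taken during the backup, then remove it
vim-cmd vmsvc/snapshot.revert 42 <snapshotId> 0
vim-cmd vmsvc/snapshot.remove 42 <snapshotId>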
Replication Manager on page 139 provides details on how
Replication Manager supports creating VMFS datastore replicas in a
vSphere environment, and provides point-and-click backup and
recovery of virtual machine-level images and selective file restore in
versions 5.3.1 and later.


VNX native backup and recovery options


VNX provides native utilities to create replicas of file systems and
LUNs that support the ESXi environment. These utilities are not
meant for enterprise management of the vSphere environment.
Production environments should procure enterprise-level storage
management tools such as Replication Manager.

File System logical backup and restore using VNX SnapSure


Use VNX SnapSure to create near-line logical backups of individual
NFS datastores mounted on ESXi hosts. Unisphere provides the
interface to create one-time file system checkpoints and to define a
checkpoint schedule for automating the creation of new file system
checkpoints on VNX.
Note: SnapSure checkpoint file systems are accessed from the root of the file
system where they are created. The default location for checkpoints is a
hidden directory called .ckpt. To make the hidden directory visible to the
vCenter Datastore Browser, set the value of the Data Mover parameter
cfs.showChildFsRoot to 1, as shown in Figure 56 on page 132.
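The parameter can also be set from the Control Station command line. The
following is a brief sketch, assuming the NFS datastore is served by Data
Mover server_2.

# Make the hidden .ckpt checkpoint directory visible to NFS clients
server_param server_2 -facility cfs -modify showChildFsRoot -value 1

# Confirm the new setting
server_param server_2 -facility cfs -info showChildFsRoot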


Figure 56 ShowChildFsRoot parameter properties in Unisphere

All virtual machines sharing a datastore are backed up and recovered
simultaneously in a single operation. To recover an individual virtual
machine, complete the following steps:
1. Power off the virtual machine.
2. Browse to the appropriate configuration and virtual disk files of
the specific virtual machine.
3. Copy the files from the checkpoint and add them to the datastore
under the directory /vmfs/volumes/<ESXi filesystem>/VM_dir.
4. Power on the virtual machine.


Physical backup and restore using VNX File Replicator


Use VNX File Replicator to create a physical backup of NFS
datastores mounted on ESXi hosts. This is accomplished using the
/nas/bin/nas_replicate command from the Celerra CLI interface.
Back up multiple virtual machines simultaneously if they reside in
the same datastore. If image level granularity of the virtual machine
is required, move the virtual machine to its own datastore or consider
the EMC Replication Manager data protection solution.
File system replication can be local or remote. After the copy task has
completed, stop the replication to make the target file system a
stand-alone copy. If required, this target file system can be made
read-writeable. The target file system can be NFS mounted to any
ESXi host that allows selective restore of virtual machine files or
folders. If VMware snapshots already exist at the time of the backup,
the Snapshot Manager in the VI client may not report all VMware
snapshots correctly after a virtual machine restore. One way of
updating the GUI information is to remove the virtual machine from
the inventory and import it again. To recover an entire file system,
establish a replication session from the target file system to the
production file system with the nas_replicate command.
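The following is an illustrative nas_replicate sequence only; the session,
file system, storage pool, and interconnect names are placeholders, and the
available options should be verified against the VNX Command Line Interface
Reference.

# Create a replication session for the file system backing the NFS datastore
/nas/bin/nas_replicate -create nfs_ds01_copy -source -fs nfs_ds01 \
  -destination -pool clar_r5_performance -interconnect NYs3_LAs2 \
  -max_time_out_of_sync 10

# After the initial copy completes, stop the session to make the target a
# stand-alone copy that can be mounted read-write
/nas/bin/nas_replicate -list
/nas/bin/nas_replicate -stop nfs_ds01_copy -mode both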


Backup and recovery of a VMFS datastore


EMC SnapView for VNX provides the functionality to protect VMFS
datastores using either pointer-based replicas (snapshots), or full
volume copies (clones) of VNX LUNs. This storage system
functionality is exposed through Unisphere, Unisphere Snapshot
Configuration Wizard, or the admsnap utility. In enterprise
environments, LUN protection is controlled by Replication Manager
for simplified configuration, automation, and monitoring of the
replicas. The utilities covered in this section of the document offer a
manual method to create or restore a replica of a VNX LUN.
When a snapshot is activated, it tracks all the blocks of data for the
LUN. As the LUN is modified, original data blocks are copied to a
separate device in the reserved LUN pool.
Similarly, a clone private LUN pool is used to maintain various states
between the source and target LUN in a clone relationship. Configure
the reserved LUN pool and the clone private LUN pool before
performing these operations.
SnapView operates at a LUN level, which means that VNX replicas
are most effective for creating consistent virtual machine images when
the datastore is provisioned from a single LUN.
Note: Using metaLUNs instead of multi-extent LUNs simplifies the creation of
datastore snapshots and enhances their usefulness in business practices such
as backup and restore.

Additionally, if multiple virtual machines share the same VMFS
datastore, they are backed up and recovered together as part of the
snap or restore operation. Avoid grouping dissimilar virtual
machines that can be inadvertently overwritten during a restore
operation. It is possible to perform manual restores of individual
virtual machines when a snapshot is attached to the ESXi host.
To create and assign a SnapView snap by using the Snapshot
Configuration Wizard, complete the following steps from Unisphere:
1. Launch the wizard and identify the production server where the
source LUN exists.
2. Select the Storage System, and the LUN of interest to the
SnapView session.


Figure 57 Snapshot Configuration Wizard

3. Select the appropriate number of copies for each source LUN and
optionally assign the snapshot to other ESXi Hosts as shown in
Figure 58 on page 136.


4. Specify the snapshot name.


5. Select the host to present the snapshot image.
6. Review the configuration information and click OK to create and
mount the snapshots.
7. Rescan the ESXi hosts and verify that the storage appears in the
correct location.

Figure 58 Snapshot Configuration Wizard (continued)

If a copy of a VMFS volume is presented to any VMware ESXi cluster,
the ESXi host automatically masks the copy. The device holding the
copy is determined by comparing the signature stored on the device
with the computed signature. A clone, for example, has a different
unique ID from the source LUN it is associated with. Hence, the
computed signature for a clone is always different from the
computed signature of the source LUN. This ensures that the
VMware ESXi hosts identify the correct device.


vSphere provides two mechanisms to access copies of VMFS
volumes. Selective resignaturing is available for an individual LUN
using the Keep existing signature option or the Assign a new
signature option.

If Keep existing signature is selected for an individual LUN, the
copy of the data is presented with the same label name and
signature as the source device. However, on VMware ESXi hosts
that have access to both source and target devices, the parameter
has no effect because VMware ESXi never presents a copy of the
data if there are signature conflicts.

If Assign a new signature is selected for an individual LUN, the
VMFS-3 volume holding the copy of the VMware file system is
automatically resignatured with the computed signature (using
the UID and LUN number of the target device). In addition, the
datastore label is prefixed with snap-x, where x is a hexadecimal
number that can range from 0x2 to 0xffffffff.

Failure to set the parameters properly may result in the host viewing
the LUN as a new device, which can only be added to the
environment by formatting it as a new datastore. When the snapped
VMFS LUN is accessible from the ESXi host, the virtual machine files
can be copied from the snapped datastore to the original VMFS
datastore to recover the virtual machine.
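From the ESXi console, the esxcfg-volume command provides the same two
options for an individual VMFS copy; the datastore label shown below is a
placeholder.

# List detected VMFS snapshot/replica volumes
esxcfg-volume -l

# Mount the copy and keep its existing signature (only on hosts that do not
# also see the original LUN)
esxcfg-volume -m "datastore01"

# Or resignature the copy; it is then mounted with a snap-xxxxxxxx- prefix
esxcfg-volume -r "datastore01"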


Backup and recovery of RDM volumes


VNX presents LUNs to an ESXi host as VMFS formatted volumes or
RDM volumes. An RDM volume is a raw device mapped directly to
the virtual machine. RDMs provide capabilities similar to a VMFS
virtual disk, while retaining the properties of a physical device so
administrators can take full advantage of storage array-based data
protection technologies. EMC SnapView provides logical protection
of RDM devices to create snapshot images.
To back up an RDM volume physically, administrators can use
Replication Manager or RecoverPoint to create usable copies of the
device.
With RDM, administrators can create snapshots or clones in one of
the following ways:

Use the admsnap command or the Unisphere Snapshot
Configuration Wizard.

Use Replication Manager to create stand-alone snapshots or
clones of the RDM volumes and integrate them with Windows
applications for application-level consistency.

Note: Replication Manager functions only with RDM volumes created with
the physical compatibility mode option and formatted as NTFS volumes.


Replication Manager
EMC Replication Manager is a software solution that integrates with
EMC data protection technologies to simplify and automate
replication tasks. Replication Manager uses EMC SnapSure or EMC
SnapView to create local replicas of VNX datastores.
Replication Manager provides additional protection by creating
VMware snapshots of all online virtual machines before creating local
replicas. This step ensures that the operating system of the virtual
machine is in a crash-consistent state when the replica is created.
Replication Manager uses a physical or virtual machine to act as a
proxy host to manage all tasks within the VMware and VNX storage
environment. The proxy host must be configured to communicate
with the vCenter Server and the storage systems. It performs
enumeration of storage devices from the virtualization and storage
environment and performs the necessary management tasks to
establish consistent copies of the datastores and virtual machine
disks. Use the Replication Manager Job Wizard, as shown in
Figure 59 on page 140, to select the replica type and expiration
options. Replication Manager 5.2.2 must be installed for datastore
support.


Figure 59 Replication Manager Job Wizard

Select the Restore option in Replication Manager to restore the entire
datastore.
Complete the following steps before restoring the replicas:
1. Power off the virtual machines that are hosted within the
datastore.
2. Remove those virtual machines from the vCenter Server
inventory.
3. Restore the replica from Replication Manager.
4. After the restore is complete, import the virtual machines to the
vCenter Server inventory.


5. Revert to the VMware snapshot taken by Replication Manager to
obtain an operating system consistent replica and delete the
snapshot.
6. Manually power on each virtual machine.
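A command-line sketch of the inventory steps that surround the restore is
shown below; the VM ID and .vmx path are placeholders.

# Before the restore: record the VM ID, then remove the VM from inventory
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/unregister 42

# After the restore completes: register the VM from the restored datastore
vim-cmd solo/registervm /vmfs/volumes/datastore01/vm01/vm01.vmx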
Replication Manager creates a rollback snapshot for every VNX file
system that has been restored. The name of each rollback snapshot is
located in the restore details as shown in Figure 60 on page 141.
Delete the rollback snapshot after the contents of the restore are
verified.

Figure 60 Replica Properties in Replication Manager

Use the Mount option to restore a single virtual machine in
Replication Manager. Using this option, it is possible to mount a
datastore replica to an ESXi host as a read-only or read-write
datastore.
To restore a single virtual machine, complete the following steps:
1. Mount the read-only replica as a datastore on the ESXi host as
shown in Figure 61 on page 142.
2. Power off the virtual machine that resides on the production
datastore.


3. Remove the virtual machine from the vCenter Server inventory.


4. Browse to the mounted datastore.
5. Copy the virtual machine files to the production datastore.
6. Add the virtual machine to the inventory again.
7. Revert to the VMware snapshot taken by Replication Manager to
obtain an operating system consistent replica and delete the
snapshot.
8. Unmount the replica through Replication Manager.
9. Power on the virtual machine.

Figure 61 Read-only copy of the datastore view in the vSphere client


vStorage APIs for Data Protection


With vSphere 4.0, VMware introduced vStorage APIs for Data
Protection (VADP). Much like its predecessor, VMware Consolidated
Backup, VADP provides an interface into the vCenter environment
for creation and management of virtual machine snapshots. VADP
can be leveraged by data protection vendors to automate and
streamline nondisruptive virtual machine backups. A key feature of
VADP is Changed Block Tracking (CBT), which allows a data
protection application to identify modified content on the virtual
machine based upon a previous VMware snapshot. This reduces the
amount of data that needs to be backed up when using differential
backups of virtual machines.
The benefits are a reduction in the amount of time required to back
up an environment and storage savings that are achieved by backing
up only the required data blocks instead of the full virtual machine.
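CBT is normally enabled by the backup application through the vSphere API,
but VMware also documents enabling it manually for a powered-off virtual
machine by adding configuration parameters to its .vmx file. The entries
below are a sketch of that approach; the disk entry (scsi0:0) is an example
and must match the virtual machine's actual devices.

# Enable changed block tracking for the virtual machine
ctkEnabled = "TRUE"
# Enable tracking for each virtual disk device (example: scsi0:0)
scsi0:0.ctkEnabled = "TRUE"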


VADP integrates with existing backup tools and technologies to
perform full and incremental file backups of virtual machines.
Figure 62 on page 144 illustrates how VADP works.

Figure 62 VADP flow diagram


Backup and recovery using VMware Data Recovery


VMware Data Recovery is a disk-based backup and recovery solution
built on the VMware vStorage API for data protection, and it uses a
virtual machine appliance and a client plug-in to manage and restore
backups. VMware Data Recovery can be used to protect any kind of
OS. It incorporates capabilities such as block-based data
deduplication and performs only incremental backups after the first
full backup to maximize storage efficiency. VNX CIFS, iSCSI, and FC
storage can be used as destination storage for VMware Data
Recovery backups. Each virtual machine backup is stored on a target
disk in a deduplicated store.

Figure 63 VMware Data Recovery


During the backup, VMware Data Recovery takes a snapshot of the
virtual machine and mounts the snapshot directly to the VMware
Data Recovery virtual appliance. After the snapshot is mounted,
VMware Data Recovery begins streaming the blocks of data to the
destination storage as shown in Figure 63 on page 145. During this
process, VDR uses the VADP to identify changed blocks and
minimize the amount of data required to be backed up. VDR
deduplicates the stream of data blocks to ensure that redundant data
is eliminated prior to writing the backup to the destination disk.
VMware Data Recovery uses the CBT functionality on ESXi hosts to
identify the changes since the last backup. The deduplicated store
creates a virtual full backup based on the last backup image and
applies the changes to it. When all the data is written, VMware Data
Recovery dismounts the snapshot and takes the virtual disk out of
the snapshot mode. VMware Data Recovery supports only full and
incremental backups at the virtual machine level, and does not
support backups at the file level.
Figure 64 shows a sample backup screenshot.

Figure 64 VDR backup process


When using VMware Data Recovery, adhere to the following guidelines:

While a VMware Data Recovery appliance can protect up to 100
virtual machines, it supports the simultaneous use of only two
backup destinations. To use multiple backup destinations,
configure the backup jobs to run at different times and ensure
that the backup destination size does not exceed 1 TB.

A VMware Data Recovery appliance cannot use an NFS file
system as a backup destination. NFS storage can be used by
creating virtual machine disks within an NFS datastore and
assigning them to the VDR appliance.

VMware Data Recovery supports RDM virtual and physical
compatibility modes as backup destinations. Use the virtual
compatibility mode for RDM as a backup destination.

When using VMFS over iSCSI as a backup destination, choose the
block size that matches the storage requirements. Selecting the
default 1 MB block size allows a maximum virtual disk size of
256 GB.

Ensure that similar virtual machines are backed up to the same
destination. Because VMware Data Recovery performs data
deduplication within and across virtual machines, virtual
machines with the same OS have only one copy of the OS data stored.

The virtual machine must not have a snapshot named _data
recovery_ prior to a backup performed by VMware Data Recovery,
because VDR creates a snapshot named _data recovery_ as a part
of its backup procedure. If a snapshot with the same name already
exists, VDR deletes and re-creates it.

Backups of virtual machines with RDM can be performed only
when the RDM is running in virtual compatibility mode.

VMware Data Recovery provides an experimental capability
called File Level Restore (FLR) to restore individual files on
Windows virtual machines without restoring the whole virtual machine.

VMware Data Recovery copies only the state of the virtual
machine at the time of backup. Pre-existing snapshots are not
part of the VMware Data Recovery backup process.


Backup and recovery using Avamar


EMC Avamar is a backup and recovery software product. Avamar
software provides an integrated software solution to accelerate
backup and restore of virtual machine and application data in a
vSphere environment. Avamar provides source and global data
deduplication to reduce the amount of backup data that needs to be
copied across the network and stored on disk. Global deduplication
means that Avamar stores a single copy of each unique sub-file
variable length data segment for all protected physical and virtual
servers in the environment.
After completing an initial backup of the virtual machine, Avamar
can create full restore backups of the virtual machine that consume a
fraction of the space and time used to create the original. Avamar
integration with vCenter and VMware vStorage APIs allows it to
leverage the Change Block Tracking feature of vSphere to identify the
data blocks of interest for the backup job. It then applies
deduplication based upon the global view of stored data and copies
only globally unique blocks to the Avamar Storage Node or Avamar
Virtual Edition (AVE) server, greatly reducing backup times and the
storage capacity required to support the backup environment.
Avamar significantly reduces backup times, backup capacity
requirements, and ESXi host utilization of CPU, network, and
disk resources.

Architectural view of the Avamar environment


Avamar Server is a core component providing management of the
backup environment and the storage for the virtual machine backups.
The server provides the management, services, and file system
storage to support all backup and administrative actions. Avamar has
the following server types:

Avamar Data Grid - An all-in-one server that runs Avamar
software on a preconfigured EMC-certified hardware platform.
Options include a single and multi-node version that can use
internal or SAN storage.

Avamar Virtual Edition for VMware - AVE is a fully functional
Avamar Server that installs and runs as a virtual appliance
within a vSphere environment.


Both physical and virtual edition products provide the same
capabilities. However, AVE is easily deployed in vSphere
environments, and because it can be backed by VNX Block storage it
offers a simplified and optimal solution for protection of virtual
machines, applications, and user data. AVE can also perform
significantly faster in VMware environments than the Avamar
Datastore. Figure 65 illustrates a sample configuration with a DRS
cluster and multiple ESXi hosts with access to VNX Block LUNs.
These LUNs house the virtual machines in the environment. The
environment illustrates three types of virtual machines - production
virtual machines, image proxies, and file-level proxies.
The production virtual machines can run any VMware-supported OS
and serve any application role or function. In this scenario, the virtual
machines do not require an Avamar agent.

Figure 65 Sample Avamar environment


Avamar backups
Avamar provides the following backup options for vSphere
environments:

File Level Backup - File-level backups are enabled by installing
the Avamar Client inside the guest OS and registering the
client with an Avamar Server. This option provides a scheduled
backup of all files on the virtual machine and also allows the user
to manually back up and restore files to their desktop virtual
machine. The client capabilities are the same as those provided
when the Avamar client is installed on a physical desktop, laptop,
or server system.
After the client is installed, the administrative requirement is
minimal. Scheduled backups occur based upon administrative
policy, and the desktop user can perform manual backups and
restores as necessary. The Avamar client runs as a low-priority process
in the virtual machine to limit the impact of the backup operation
on other processes. From a vSphere standpoint, Avamar can
throttle virtual machine CPU to limit the amount of ESXi host
CPU resources consumed during backup operations.

Image Level Backups - Image-level backups allow the vSphere
environment to be backed up without installing a client within
each virtual machine. They use one or more Avamar virtual
machine Image Proxy servers that have access to the shared VNX
Storage environment.


The Image Proxy is provided as a downloadable OVA image accessible
through the web interface of the AVE server. As is the case with AVE,
the Image Proxy server is installed as a virtual machine appliance
within vCenter and a separate Image Proxy server is required for
Windows and Linux virtual machine image backups.

Figure 66 Sample proxy configuration

After you install and configure the proxy to protect either Windows
or Linux virtual machines and configure Avamar to protect the VM,
you can schedule backups or run them on demand. Avamar integrates with
vCenter and offers a similar management interface to import and
configure VM protection. Figure 66 shows a sample proxy
configuration.
Avamar Manager can also enable Change Block Tracking (CBT) for
virtual machines to further accelerate backup processing. With CBT
enabled, Avamar can easily identify and deduplicate the blocks that


VMware has flagged, without the need to perform any additional
processing. This allows for more efficient and faster backups of the
virtual machine image. Refer to Figure 67 for more detail.
Note: Changed block tracking is available with virtual machine hardware
version 7 and later. If you need to back up an older virtual machine, you
must first perform a virtual machine hardware upgrade before you can
enable CBT.
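One way to confirm the hardware version from the ESXi console is to check
the virtualHW.version entry in the virtual machine's .vmx file; the
datastore path below is an example.

# A value of 7 or higher supports changed block tracking
grep -i virtualHW.version /vmfs/volumes/datastore01/vm01/vm01.vmx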

Figure 67 Avamar backup management configuration options

When a backup job starts, Avamar signals the vCenter server to create
a new Snapshot image of each VMDK specified in the backup policy.
It uses the VADP SCSI hot-add capability to mount the snapshot to the image proxy. If
change block tracking is enabled, Avamar uses it to filter the data that
is targeted for backup. After Avamar establishes a list of blocks, it
applies deduplication algorithms to determine whether the segments are
unique. If they are, it copies them to the AVE server; otherwise, it
creates a new pointer referencing the existing segment on disk. The
image proxy then copies those blocks to the Avamar Virtual
Appliance which is backed by virtual disks created from VNX
storage.


As mentioned, multiple proxies are required to protect Windows and
Linux environments. The administrator can also deploy additional
proxies to provide scalability and allow simultaneous backups and
recoveries. For scalability, Avamar provides the ability to configure
each Image Proxy to protect multiple datastores from vCenter, or to
load balance backups across all of them in a round robin fashion.
Avamar data recovery
In addition to multiple backup options, Avamar provides multiple
recovery options as well. The two most common recovery requests
made to backup administrators are:

File-level recovery - Object-level recoveries account for the
majority of user support requests. Common actions requiring
file-level recovery are individual users deleting files,
applications requiring recoveries, and batch process-related
erasures.

System recovery - Although complete system recovery requests
are less frequent than those for file-level recovery, this
bare metal restore capability is vital to the enterprise. Some
common root causes for full-system recovery requests are virus
infestation, registry corruption, or unidentifiable unrecoverable
issues.

With the client installation, a user can perform self-service file
recovery by browsing the file system and identifying the files they
need to restore.
Virtual machine image restore
The image proxy can be used to restore an entire image. Image
backup content can be restored to the original virtual machine in the
location where it was created, to a pre-existing alternate virtual
machine with a configuration similar to the original, or to a new
virtual machine in a different location in the environment. Figure 68 shows a


virtual machine being restored to its current location. In this example,
the virtual machine was deleted from disk and is being restored to
the existing datastore.

Figure 68 Avamar virtual machine image restore

Avamar file-level recovery proxy


An Avamar file-level recovery proxy is a virtual machine that allows
individual files to be recovered into a virtual machine from a full
image backup. This virtual machine leverages the Avamar Virtual
File System (AvFS) to create a view that users can browse in a virtual
machine's VMDK file. From this view the administrator can select
any files or folders to restore to the virtual machine in the current
location, or specify a new location within the target virtual machine.
The Avamar file-level proxy feature is only available for Windows
virtual machines at this time.
The Avamar VMware Windows file-level restore feature is
implemented using a Windows proxy client virtual machine. The
Avamar and VMware software running on the Windows proxy
requires a CIFS share which is exported by the Avamar server. This
CIFS share provides a remote hierarchical filesystem view of the


backups stored on the Avamar server. VMware Image Backups are
accessed by way of the CIFS share in order to browse and restore
their contents.
When backups are selected for recovery, the FLR proxy server reads
the VMDK data from the Avamar system and creates a browse tree
that is presented to the administration GUI shown below in
Figure 69.

Figure 69 Avamar browse tree

Restore requests will pass from the Avamar system and through the
Windows FLR proxy and on to the machine that is being protected.
The recovery speed with this operation is limited to the FLR proxy's
ability to read in the data and send it to the machine that the
administrator is recovering to. Therefore, large data recoveries
through the FLR proxy are not advisable. In such cases, an
image-level out-of-place recovery is more efficient.
Note: For File Level Recovery to work, the target virtual machine must be
powered on and running VMware Tools.


Consider the following items when setting up the environment:

Avoid using File-Level restore to browse folders or directories
with thousands of files or sub-directories. A better approach
would be to restore the virtual machine and use the native
operating system to browse and identify the files you wish to
restore.

Avamar proxy clients do not need to be backed up, and backup of
proxy clients is not supported. The proxy client virtual machines
can be readily redeployed from template if needed.

Avamar image backup is dependent on reliable DNS service and
time synchronization. Network routing and firewall settings
must be properly configured to allow access to the network hosts
providing these services.

SSL certificate installation across the vCenter, ESXi hosts, and
Avamar proxy virtual machine appliances is highly desirable.
However, SSL certificate authentication can be turned off at the
Avamar server.

Use multiple network interfaces for HA configurations of the
Avamar Datastore Node.

Backups are a "crash-consistent" snapshot of the full virtual


machine image. Use the Avamar client for OS and application
consistent backups.

An image proxy will only perform one backup at a time. Parallel
processing can only be achieved by having more than one proxy
in an environment.

The image backup process requires temporary creation of a
VMware virtual machine snapshot.

Supported virtual disk types for Image Backup are:
Flat (version 1 and 2)
Raw Device Mapped (RDM) in virtual mode only (version 1 and 2)
Sparse (version 1 and 2)


Backup and recovery using NetWorker


NetWorker performs a full image-level backup for virtual machines
running any OS, as well as file-level backups for virtual machines
running Microsoft Windows without requiring a backup agent in the
guest hosts:

Agent - NetWorker agent architectures are particularly focused
on environments where application consistency is required. For
virtual machine backups requiring application integration, the
agent is used to place the application and operating system into a
consistent state before generating a snapshot and starting
subsequent backup tasks. The agent configuration does require
additional client administration to install and maintain agents on
all of the virtual machines. If crash-consistent or operating
system consistent images are sufficient, VADP may be a better
option.

VADP - NetWorker 7.6 SP2 introduces integration with VMware
environments to support virtual machine protection using VADP.
VADP is used in a NetWorker environment to make a snapshot
copy of a running virtual machine disk. The NetWorker
architecture offers the ability to design flexible backup
solutions that improve backup processes, reduce backup
windows, and reduce the amount of space required to store
backup images.


Figure 70 NetWorker virtualization topology view

When a backup is initiated through EMC NetWorker, it uses the
VADP API to generate virtual machine snapshots on the vCenter
server. The snapshots can be hot-added to a VADP Proxy host for
LAN-free backups. The snapshot created by NetWorker is identified
as _VADP_BACKUP_ as shown in Figure 71 on page 159.


Figure 71 VADP snapshot

VNX storage devices for NetWorker


NetWorker offers the flexibility to use multiple storage types as
targets for backup jobs. Supported storage types include standard
physical tape devices, virtual tape libraries, and Advanced File Type
Devices (AFTD) provisioned on VNX storage. An AFTD device can
be configured on the NetWorker server or Storage Node using block
storage (a LUN) or a NAS file system. NL-SAS LUNs or VNX FAST
Pools LUNs consisting of NL-SAS drives are ideal for AFTDs.


Figure 72 NetWorker configuration settings for VADP

When planning to use VADP with vSphere, consider the following
guidelines and best practices:

All virtual machines should have the latest version of VMware
Tools installed. Without VMware Tools, the backup created by
VADP is crash-consistent.

Image-level backups can be performed on virtual machines
running any OS. Perform file-level backups only on Windows
virtual machines.

RDM physical mode is not supported for VADP.


When an RDM disk in virtual mode is backed up, it is converted
to a standard virtual disk format. When it is restored, it will not
be in RDM format.

When using LAN mode, virtual disks cannot exceed 1 TB each.

The default backup mode is SAN. To perform LAN-based
backup, set TRANSPORT_MODE to nbd, nbdssl, or hotadd in the
config.js file (see the example after this list).

The hot-add transport mode does not support the backup of
virtual disks belonging to different datastores.

Before performing a file-level backup, VADP creates a virtual
machine snapshot named _VADP-BACKUP_. A NetWorker
backup will fail if a snapshot with the same name already exists.
To modify this default behavior, set the
PREEXISTING_VADP_SNAPSHOT parameter to delete in the
config.js file.

If a backup job fails, virtual machines can remain mounted in
snapshot mode. The NetWorker Monitoring Window provides an
alert when a snapshot needs to be manually removed.

VADP searches for the target virtual machines by IP address. The
virtual machine must be powered on the first time it is backed up
so that virtual disk information can be relayed to NetWorker
through the vCenter server. This information is cached on the
VADP proxy and used for subsequent backup jobs. A
workaround is to switch to virtual machine lookup by name by
setting VM_LOOKUP_METHOD=name in config.js.
Note: The backup will fail if there are duplicated virtual machine names.

Beginning with release 7.4.1 of NetWorker, users must add each
virtual machine to be backed up to NetWorker as a client. The
NetWorker client software is not required on the virtual machine.
With NetWorker release 7.4.1 or later, it is recommended that the
VADP method used to find virtual machines be based on the
virtual machine IP address (the default method).
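The VADP-related parameters mentioned in this list are set in the config.js
file on the NetWorker VADP proxy. The excerpt below is a hedged
illustration with example values rather than a complete file.

// Transport mode for virtual disk access (default is SAN)
TRANSPORT_MODE="hotadd"

// Delete a pre-existing _VADP-BACKUP_ snapshot instead of failing the job
PREEXISTING_VADP_SNAPSHOT="delete"

// Locate virtual machines by name instead of by IP address
VM_LOOKUP_METHOD="name"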


VNX NAS file system NDMP backup and restore using NetWorker
NetWorker provides two methods of storage integration with VNX
NFS datastores. VNX provides file systems for use as Advanced File
Type Devices (AFTD) or for configuration as a virtual tape library
unit (VTLU).
After configuring a VTLU on the VNX file system, configure
NetWorker as an NDMP target for backing up NFS datastores that
reside on the VNX platform. Configure NetWorker to use VNX File
System Integrated Checkpoints to create NDMP backups in the
following manner:
1. Create a Virtual Tape Library Unit (VTLU) on VNX NAS.
2. Create a library in EMC NetWorker.
3. Configure NetWorker to create bootstrap configuration, backup
group, backup client, and so on.
4. Run NetWorker backup.
5. Execute NetWorker Recover.
The entire datastore or individual virtual machines are available for
backup or recovery. Figure 73 shows NetWorker during the process.

Figure 73 NDMP recovery using NetWorker

To use VNX file backup with integrated checkpoints, set the
environment variable SNAPSURE=y. This feature automates
checkpoint creation, management, and deletion activities by entering


the environment variable in the qualified vendor backup software.


The setting of the SNAPSURE variable for creating a backup client
with EMC NetWorker is illustrated in Figure 74.

Figure 74 Backup with integrated checkpoint

When the variable is set in the backup software, a checkpoint of the
file system is automatically created (and mounted as read-only) each
time particular jobs are run and before the start of the NDMP backup.
This automated process allows production activity to continue
uninterrupted on the file system. When the backup completes, the
checkpoint is automatically deleted.
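In NetWorker, the variable is typically entered in the Application
Information attribute of the NDMP backup client. The following is a minimal
sketch; additional NDMP variables may be required for a given environment.

# Application Information attribute of the NDMP client resource
SNAPSURE=y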


Summary
This section has provided several backup options and examples of
virtual machine protection. There are native options and tools within
the VNX Storage System that provide the ability to create replicas or
Snapshots of the device backing the datastore. SnapSure, for example,
can be used to create a point-in-time copy of an NFS datastore. Similar
capabilities exist using LUN clones or snapshots for VNX Block
environments.
The VMware Data Recovery appliance can be deployed and configured
fairly easily and populated with VNX Block storage to support up to
100 virtual machines for each appliance.
In larger environments, EMC Avamar scales significantly better
and introduces considerable benefits through global data deduplication
and reduced resource requirements in all areas of backup. EMC
Avamar Virtual Edition for VMware and Avamar Image Proxy,
provided as virtual appliances, can be installed and configured
quickly with tight vCenter integration for vSphere environments.
These products can be backed by VNX storage providing an efficient
and scalable data protection solution.
EMC NetWorker offers an image protection option for vSphere with
tight integration with vCenter to create and manage individual VM
backup and restore options. NetWorker provides support for VNX OE
for Block storage as well as NDMP integration with VNX OE for File
Virtual Tape Libraries. Table 6 on page 165 summarizes some of
the backup technologies and products that can be used to establish
image and file backup approaches. VNX Storage System and vSphere
are integrated with many data protection solutions. The information
in this section and in the table is not meant to be a comprehensive
list of qualified products, but rather an example of the data
protection options and technologies that exist within EMC VNX and
VMware vSphere.


Table 6 on page 165 summarizes the backup and recovery options for
VNX with vSphere 4.
Table 6 Backup and recovery options

VMFS/NFS datastore
  Image level: Avamar Image Proxy, NDMP, VDR, EMC NetWorker, EMC SnapSure/SnapClone
  File level: Avamar Client or File Level Recovery, EMC SnapSure/SnapView/Replication Manager
RDM (physical)
  Image level: Replication Manager
  File level: N/A
RDM (virtual)
  Image level: VDR, Avamar Proxy, NetWorker
  File level: Avamar, NetWorker


4
Using VMware vSphere in Data Restart Solutions

This chapter contains the following topics:

Introduction ...................................................................................... 168
Definitions......................................................................................... 169
EMC remote replication technology overview ............................ 172
RDM volume replication................................................................. 187
Replication Manager........................................................................ 191
Automating Site Failover with SRM and VNX............................ 193
Summary ........................................................................................... 203


Introduction
With the growing number of virtualized servers, it is critical to have a
Business Continuity (BC) plan for the virtualized datacenter.
Administrators can use native VNX and EMC replication
technologies to create stand-alone Disaster Recovery (DR) point
solutions or combine them with VMware Site Recovery Manager to
provide an integrated disaster recovery solution.
This section focuses on using EMC replication technologies to
provide DR solutions. It covers remote replication technologies to
create full copy LUN replicas at the secondary site. Replicas can
satisfy business processes or be integrated into disaster recovery
solutions. These solutions normally involve a combination of virtual
infrastructure at multiple geographically-separated data centers with
EMC technologies replicating data between them.
Topics covered in this section include:

EMC Replication configurations and their interaction with ESXi hosts.

Integration of guest operating environments with EMC technologies.

Use of VMware vCenter Site Recovery Manager to manage and
automate a site-to-site disaster recovery with VNX.

A review of the following replication options and ESXi host
application-specific considerations:
EMC VNX Replicator
EMC MirrorView
EMC RecoverPoint


Definitions
The following terms are used in this chapter.

Dependent-write consistency: This is a state where data integrity
is guaranteed by dependent-write I/Os embedded in application
logic. A dependent-write I/O cannot be issued until a related
predecessor I/O is completed. Database management systems
provide a good example of the practice of dependent-write
consistency. Typically, the dependent-write is a data or index
write, while the predecessor write is a write to the log. Since the
write to the log must be complete before issuing the
dependent-write, the application thread is blocked until the log
write completes. The result is a write consistent database.

Disaster restart: Involves the implicit use of active logs during
system initialization to ensure transactional consistency. If a
database or application is shut down normally, consistency is
established very quickly. However, if a database or application
terminates abnormally, the restart process takes longer, and is
dependent on the number and size of transactions that were
in-flight at the time of termination. A replica image created from
a running database or application without any preparation is
considered to be restartable, which is similar to the state
encountered during a power failure. During restart, the image
achieves transactional consistency by completing committed
transactions and rolling back uncommitted transactions.

Disaster recovery: The process of rebuilding data from a backup
image, and applying subsequent logs to update the environment
to a designated point of consistency. The mechanism to create
recoverable copies of data depends on the database and
applications.

Roll-forward recovery: In some cases, it is possible to apply
archive logs to a Database Management System (DBMS) image to
roll it forward to a point in time after the image was created. This
capability offers a backup strategy that consists of a baseline
image backup and archive logs to establish the recovery point.

Recovery point objective (RPO): A consistent image used to
recover the environment. An RPO is the consistency point you
want to establish after a failure. It is defined by the acceptable
amount of data loss between the time the image was created and
when a failure occurs.


Zero data loss is the ideal goal, but it has added financial and
application considerations. For regulated business services such as
those in the financial sector, zero data loss may be a requirement,
resulting in synchronous replication of each transaction. That added
protection can impact application performance and infrastructure
costs.

Recovery time objective (RTO): The RTO is the maximum
amount of time allowed after the declaration of a disaster to
recover to a specified point of consistency. This includes the time
taken to:
Provision power and utilities
Configure server software and networking
Restore data at the new site
Roll the environment forward and validate data to a known
point of consistency
Some delays can be reduced or eliminated by choosing certain
disaster recovery options such as:
Establishing a hot site with preconfigured servers.
Implementing a storage replication solution to ensure that
applications are started with current data.
Like RPO, each RTO solution has a different cost profile. Defining
the RTO is usually a compromise between the cost of the solution
and the potential revenue loss when applications are unavailable.

Design considerations for disaster recovery and data restart

The effect of data loss or application availability varies from one
business to another. Data loss and availability are the business drivers
that determine baseline requirements for a disaster restart or disaster
recovery solution. When quantified, the acceptable loss of data is
referred to as the RPO, while loss of uptime is known as RTO.
When evaluating a solution, you must ensure that the RPO and RTO
requirements of the business are met. In addition, the solution's
operational complexity, cost, and its ability to return the entire
business to a point of consistency need to be considered. Each of
these aspects is discussed in the following sections.


Testing the solution


Tested, proven, and documented procedures are required for a
disaster recovery solution. Often, the test procedures are
operationally different from the disaster recovery procedures.
Operational procedures must be clearly documented. Companies
should periodically execute the actual disaster recovery to ensure that
the procedures are up to date. This could be costly to the business
because of the application downtime, but is necessary to ensure
validity of the disaster recovery solution.

Geographically distributed virtual infrastructure

VMware does not provide native tools to replicate data from one
ESXi host to another ESXi host at a geographically separated location.
Software-based replication technology can be used inside virtual
machines or the service console, but may add complexity and
consume significant network and CPU resources. Integrating VNX
storage system replication products with VMware technologies
enables customers to provide cost-effective disaster recovery and
business continuity solutions. Some of these solutions are discussed
in the following sections.
Note: Similar solutions are possible using host-based replication software
such as RepliStor. However, storage-array replication offers a disaster
restart solution with business-consistent views of data from multiple hosts,
operating systems, and applications.


EMC remote replication technology overview


Business continuity solutions

The business continuity solution for a production vSphere
environment requires a remote replication technology to ensure that
the data is being reliably applied to a secondary location.
For disaster recovery purposes, a remote replica of the VNX device
used to provide ESXi host storage is required. VNX offers advanced
data replication solutions to help protect file systems and SCSI LUNs.
In case of a disaster, an environment failover to the remote location is
accomplished with minimal administrator intervention.
Each replication session is managed independently with periodic or
synchronous advancement of the remote storage consistency point.
The update frequency is determined based on the WAN bandwidth,
RPO, and change rate.
EMC provides three primary replication options for the VNX storage
systems: EMC Replicator, which offers native asynchronous
replication for VNX File and NFS datastores; EMC MirrorView,
which offers native synchronous and asynchronous replication for
VNX Block; and EMC RecoverPoint, which provides synchronous and
asynchronous out-of-band replication for VNX Block and File datastores.

Table 7 EMC VMware replication options (replication technologies: EMC Replicator, EMC RecoverPoint CRR, MirrorView; storage types: NFS, VMFS, RDM)

Replication Manager and Site Recovery Manager support each of
these replication technologies. Therefore, you can build DR and BC
solutions using any of these technologies. Table 7 on page 172 lists the
replication capabilities and their target protection for vSphere.
In general, each technology provides a similar set of LUN and
Consistency Group replication capabilities. There are specific
architectural differences, but from a business process standpoint the
primary differences are functional and relate to the number of
supported replicas, manageability, and ease of replica accessibility at
the remote site.

Replicator provides the most comprehensive solution for replicating
file systems supporting NFS datastores in an ESXi environment.
While there is support for NFS with MirrorView and RecoverPoint,
Replicator provides the most flexibility for file replication within VNX.
MirrorView is integrated at the VNX OE Block level and provides a
considerable number of replica sessions for one-to-one replication of
many VNX LUNs. From a pure DR/replication standpoint it
provides the tightest block integration.
Note: Replicator does not offer a concept of consistency groups to ensure
application consistency across replicated file systems. To improve application
consistency, use one of the following options: put all virtual machines in a
single replicated file system, or replicate VNX OE File LUNs using MirrorView
or RecoverPoint.

RecoverPoint is the most flexible replication technology and
provides a level of granularity and integration that is considerably
more useful for integration with applications and business processes.
There are a significant number of point-in-time copies (bookmarks)
that can be taken to establish precise point-in-time images of the
virtual storage devices, allowing multi-purpose use of replicas in
addition to DR.


EMC Replicator

EMC Replicator offers native file system replication for NFS
datastores exported to ESXi servers. Replicator is an asynchronous
replication solution that provides file-system level replication to a
remote VNX File environment. User-defined update intervals keep
the file systems consistent with the production environment for
upwards of 600 file system sessions per VNX Data Mover.
Replicator uses a default update interval of 10 minutes. At each
interval, it creates a delta set file containing all changes since the
previous update, and transfers the file to the secondary location for
playback. Replication sessions can be customized with different update
intervals and quality of service settings, which prioritize content
updates between NFS datastores.
Since Replicator operates at a file-system level, all virtual machines in
an NFS datastore are available within the replicated datastore (file
system). It is a good practice to arrange virtual machines with similar
requirements together to improve the reliability and efficacy of the
disaster recovery solution. Organizing virtual machines at a file
system level allows prioritization policies to be applied in accordance
with RPOs.

Replicate a NAS file system

For remote file system replication, complete the following steps in
Unisphere:
1. Select Data Protection.
2. Click File Replication Wizard - Unisphere.


Figure 75 Replication Wizard

3. Select the replication type as File System.


4. Select Ongoing File System Replication to display the list of
destination VNX Network Servers.
5. Select the destination VNX system to create a read-only,
point-in-time copy of a source file system at the destination.

Note: The destination can be the same Data Mover (loopback
replication), another Data Mover in the same VNX cabinet, or a Data
Mover in a different VNX cabinet.

6. Select the network interface that is used to transfer the replication
delta sets.
Replicator requires a connection called an interconnect between
source and destination Data Movers. The wizard will default to
the first available network interface. If you wish to change that,
select a different interface and continue.
7. Specify a name for this replication session and select the source
file system that you wish to replicate to the remote location.

Figure 76 Replication Wizard (continued)

8. Select a file system at the destination to support the replication
session. If a file system does not exist, create one and click Next to
display Update Policy.


Note: When replication creates a destination file system, it automatically
assigns a name based on the source file system and ensures that the file
system size is the same as the source. Administrators can select a storage
pool for the destination file system, and also select the storage pool used
for checkpoints.

9. Select the interval at which to update the secondary site. EMC
Replicator accumulates all changes in the defined period, stores
them in a delta set file, and transfers the file to the remote
location where the changes are applied to the target file system. After the
file system reaches a synchronized state, it is operational and can
be used in a variety of ways. To use an NFS datastore at the
remote location, mount the file system as read-write by either
initiating a failover, reversing the replication direction, or
terminating the replication session. It is also possible to reverse
the replication to activate the environment at the remote location.
After the file system has been mounted read-write, present it to
the ESXi host and manually register the virtual machines.
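The same session can also be created from the VNX Control Station CLI.
The following is a minimal sketch, not a definitive procedure; the
session, file system, and interconnect names are placeholders, and the
option set assumes a typical VNX OE for File release:

   # Create a remote file system replication session with a 10-minute update interval
   nas_replicate -create nfs_ds01_rep -source -fs NFS_DS01 \
     -destination -fs NFS_DS01_replica -interconnect NYC_to_NJ \
     -max_time_out_of_sync 10

   # Check the state of the session after the initial full copy completes
   nas_replicate -info nfs_ds01_rep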

EMC MirrorView

MirrorView LUN replication

EMC MirrorView provides synchronous and asynchronous replication
of VNX block storage using Fibre Channel or iSCSI connections
between two separate VNX storage systems. Protection is assigned to
individual LUNs or a consistency group.
In an ESXi host environment, VMFS datastore LUNs can be replicated
to establish a synchronous datastore copy at the remote location.
Devices that are replicated to the secondary site go through an
initialization period to establish a block-for-block image of the source
device. The two usable states for a mirrored LUN are synchronized
and consistent. In the synchronized state, the secondary image is
identical to the source. In the consistent state, the secondary image is a
usable, write-ordered point-in-time copy of the source, but it may be
out of date because of outstanding updates that have not yet been
applied to the LUN. A slow or unavailable replication link between the
sites is a potential cause of this condition. A LUN or consistency group
at the remote location can be promoted and used by ESXi when it is in
either of these states. For multiple LUNs, it is a good practice to use a
consistency group. Table 8 lists the MirrorView limits for the VNX
platforms.
Table 8    VNX MirrorView limits

                                           VNX5100   VNX5300   VNX5500   VNX5700   VNX7500
  Maximum number of mirrors                    128       128       256       512      1024
  Maximum number of consistency groups          64        64        64        64        64
  Maximum number of mirrors per
  consistency group                             32        32        32        64        64

MirrorView consistency group

The MirrorView consistency group is a collection of mirrored devices
that function together as a unit within a VNX storage system. All
operations, such as synchronization, promotion, and fracture, are
applied to all components of the consistency group. If an event
impacts any component of the consistency group, I/O is suspended to
all components of the consistency group, preserving write-ordered
I/O to the LUNs and the applications they serve. Members of a
consistency group can span storage processors, but must be on the
same VNX storage system. Although synchronous and asynchronous
mirrors are supported on consistency groups, all LUNs of a
consistency group must be protected by the same replication mode.
VNX supports up to 32 or 64 LUNs per consistency group for
MirrorView (synchronous and asynchronous), depending on the
platform, as shown in Table 8. The following figure shows an example
of a consistency group with four LUNs.

Figure 77    Preserving dependent-write consistency with MirrorView consistency group technology

In this example, a communication failure between SP A on each of the
two VNX storage systems resulted in fracturing of the mirrored LUNs
owned by that SP. At the point of disruption, MirrorView fractures the
rest of the mirrors in the consistency group. While the secondary
images are fractured, updates to the primary volumes are not
propagated to the secondary volumes, thus preserving the consistency
of the data.
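As a hedged sketch, a MirrorView/S consistency group can be created
and populated with Navisphere Secure CLI; the group and mirror
names are placeholders, and the -creategroup and -addtogroup options
are assumed to be available in your release:

   # Create an empty MirrorView/S consistency group on the production array
   naviseccli -h <source-spa> mirror -sync -creategroup -name esx_cg01

   # Add existing synchronous mirrors to the group so they fracture and promote as a unit
   naviseccli -h <source-spa> mirror -sync -addtogroup -name esx_cg01 -mirrorname LU4
   naviseccli -h <source-spa> mirror -sync -addtogroup -name esx_cg01 -mirrorname LU5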
Asynchronous MirrorView (MV/A)

MirrorView/A is an asynchronous method of replicating up to 256
LUNs from one VNX to another. With MirrorView/A, host writes are
acknowledged immediately and buffered at the source VNX. At an
interval defined by the administrator, a differential view of the source
LUN is created and the changed blocks are copied to the remote VNX
to create consistent, write-ordered point-in-time copies of the
production LUN.
The asynchronous nature of MV/A replication implies a non-zero
RPO. MV/A is designed to provide customers with an RPO greater
than or equal to 30 minutes. A key benefit of this architecture is that
there are no distance limitations between the source and target VNX
storage systems.


Synchronous MirrorView (MV/S)

MirrorView/S provides synchronous replication for LUNs or
consistency groups and ensures that each I/O is replicated to a remote
system. Synchronous replication for vSphere maintains lockstep
consistency between primary and secondary storage. The write
operation from the virtual machine is not acknowledged until both the
primary and secondary VNX arrays have a copy of the data in their
write cache.
The trade-off is that MV/S cannot be used for locations separated by
distances greater than 100 kilometers, and that there is a propagation
delay (latency) associated with each I/O.
Use the following steps to set up MirrorView replication using
Navisphere Secure CLI commands. The Virtualization tab in
Unisphere enables users to identify LUN numbers and their
relationship to the VMFS datastores and RDM devices when
configuring MirrorView:
Note: The process of configuring synchronous and asynchronous MirrorView
replication is very similar. The only difference is that the -async option must
be specified for asynchronous replication.

1. Type the following command on the production VNX storage
array to enable the MirrorView relationship between the source
and destination arrays:

   naviseccli -h <source-spa> -user <User> -password <password> -scope 0 mirror -enablepath <remote_spa>

2. Create a mirror on the production array and add the source LUN
as shown in the following example:

   naviseccli -h <source-spa> -scope 0 -user <User> -password <password> mirror -sync -create -name LU4 -lun 4

3. Add the secondary image to the mirror as shown in the following
example:

   naviseccli -h <source-spa> -user <User> -password <password> -scope 0 mirror -sync -addimage -name LU4 -arrayhost <remote_spa> -lun 15

4. To access the secondary image at the remote site, promote the
images at the DR site:

   naviseccli -h <source-spa> mirror -sync -promoteimage -name <name>


Note: When the secondary images are in a synchronized or consistent
state, you can create consistent point-in-time copies of the secondary
image using SnapView clones or snapshots without promoting the
images and disrupting the MirrorView session.

Note: If using consistency groups, use the promotegroup command to
promote all mirror images:

   naviseccli -h <source-spa> mirror -sync -promotegroup -name <name>

The MirrorView/Synchronous Command Line Interface Reference
available on Powerlink provides details on using Unisphere CLI with
MirrorView.

Figure 78    EMC VMware Unisphere interface

Figure 79 on page 182 is a schematic representation of the business
continuity solution that integrates VMware vSphere and MirrorView.
The figure shows two virtual machines accessing VNX LUNs as RDM
volumes. The proposed solution provides a method to consolidate the
virtual infrastructure at the remote site. Because virtual machines can
run on any ESXi host in the cluster, fewer ESXi hosts are required to
support the replicated environment at the remote location.

Figure 79    Business continuity solution using MirrorView/S in a virtual infrastructure with VMFS

Fail MirrorView LUNs to a remote site using CLI

You can fail over MirrorView LUNs or consistency groups to start the
virtual environment at the remote site. In a planned failover, disable
or shut down the production VNX site before initiating these tasks.
To ensure no loss of data, synchronize the secondary MirrorView/S
LUNs before starting the failover process. The secondary image in a
MirrorView/A pair lags behind the primary image; perform a manual
update of the secondary image after the applications at the production
site are shut down.
Use the following commands to fail over (promote) LUNs or
consistency groups in a MirrorView/S relationship:

   naviseccli -h <source-spa> mirror -sync -promoteimage -name <name> -type normal
   naviseccli -h <source-spa> mirror -sync -promotegroup -name <name> -type normal


Achieve the same result for LUNs in a MirrorView/A relationship by
substituting the -sync option with -async, as shown in the following
example:

   naviseccli -h <source-spa> mirror -async -promoteimage -name <name> -type normal
   naviseccli -h <source-spa> mirror -async -promotegroup -name <name> -type normal

These commands make the following changes:

Sets the primary images on the production site to write disabled.

Reverses the mirror relationship of the devices. Therefore, the
devices at the remote site assume the primary role and are set to
write enabled.

Resumes the MirrorView link to allow updates to flow from the
remote data center to the production data center.

Allows the cloned virtual machines to be registered and powered
on using the vSphere client or command line utilities.

EMC RecoverPoint

EMC RecoverPoint provides local and remote LUN replication.
RecoverPoint continuous data protection (CDP) provides local
replication, RecoverPoint continuous remote replication (CRR)
provides remote replication, and RecoverPoint continuous local and
remote (CLR) data protection provides both local and remote
replication.

Figure 80    EMC RecoverPoint architecture overview

Administrators use RecoverPoint to:

Support flexible levels of protection without distance limitations
or performance degradation. RecoverPoint offers fine-grain
recovery of VMFS and RDM devices that reduces the recovery
point to zero.

Replicate block storage to a remote location through a cluster of
tightly coupled servers. VNX storage is replicated to a remote
location where it can be presented to the ESXi hosts.

Use write splitters that reside on the VNX arrays or in the SAN
fabric. The write splitter intercepts the write operations destined
for the ESXi datastore volumes and sends them to the
RecoverPoint appliance, which transmits them to the remote
location over IP networks as depicted in Figure 80 on page 183.

Provide a full-featured replication and continuous data protection
solution for VMware ESXi hosts. For remote replication, it uses
synchronous replication with a zero RPO or asynchronous
replication with a small RPO, enabling VMware protection from
data corruption while guaranteeing recoverability with little to
no data loss.

Virtual machine write splitting

For VMware, RecoverPoint provides a host-based write splitter to
support application integration for Windows virtual machines. The
driver monitors the write operations and ensures that a copy of each
write to a protected RDM volume is sent to the RecoverPoint
appliance. Because the KDriver runs in the virtual machine, the only
volumes that RecoverPoint can replicate with this splitter are the SAN
volumes attached to the virtual machine in physical RDM mode
(RDM/P).

RecoverPoint VAAI support

RecoverPoint provides limited support for VAAI when using the
VNX splitter. RecoverPoint supports a single VAAI primitive,
Hardware Accelerated Locking. The SCSI commands issued by
Hardware Accelerated Locking alleviate VMFS contention during
metadata updates, thus providing concurrent access and improved
performance.
This is the only supported VAAI primitive. All other VAAI SCSI
commands issued to a LUN protected by the VNX splitter receive an
unsupported response, which results in the SCSI request falling back
to the host for processing.


Note: The RecoverPoint SAN splitter does not include support for any VAAI
SCSI commands. Disable VAAI if the SAN splitter is used.

Using the Advanced Settings option of the ESXi host, VAAI features
can be disabled by setting the values of HardwareAcceleratedMove,
HardwareAcceleratedInit, and HardwareAcceleratedLocking to zero,
as shown in Figure 81 on page 185.

Figure 81    Disabling VAAI support on an ESXi host
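The same settings can be changed from the ESXi command line. This
is a sketch using the esxcfg-advcfg utility; the parameter paths shown
apply to vSphere 4.1/ESXi 5.0 hosts and should be verified for your
release:

   # Disable the three VAAI primitives (0 = disabled, 1 = enabled)
   esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
   esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedInit
   esxcfg-advcfg -s 0 /VMFS3/HardwareAcceleratedLocking

   # Confirm the current value of a setting
   esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking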

RecoverPoint provides consistency groups to organize the VNX
storage devices consumed by ESXi hosts. Each consistency group is
made up of the LUNs that must be protected. The consistency group
is also assigned a journal volume to maintain the bookmarks and the
various states provided by RecoverPoint. The RecoverPoint
appliances and the ESXi host HBAs are configured in separate storage
groups. Devices that require protection must be unmasked to both the
ESXi storage group and the storage group of the RecoverPoint
appliances.
The RecoverPoint management UI or CLI is used to configure the
consistency groups, apply policies, and manage storage access.


Note: All virtual disk devices (VMFS and RDM) that constitute a virtual
machine must be a part of the same consistency group. If application
consistency is required when using RDMs, install the RecoverPoint driver in
the Windows guest OS. Table 9 on page 186 summarizes the support options
available when using RecoverPoint for replication with VNX.

Table 9    EMC RecoverPoint feature support

  Feature                           Windows host           Array-based            Brocade/Cisco Intelligent
                                    write splitter         write splitter         Fabric write splitter
  Supports physical RDM             Yes                    Yes                    Yes
  Supports virtual RDM              No                     Yes                    Yes
  Supports VMFS                     No                     Yes                    Yes
  Supports VMotion                  No                     Yes                    Yes
  Supports HA/DRS                   No                     Yes                    Yes
  Supports vCenter Site             No                     Yes                    Yes
  Recovery Manager
  Supports P2V replication          RDM/P only             RDM/P and VMFS         RDM/P and VMFS
  Supports V2V replication          RDM/P only             RDM/P and VMFS         RDM/P and VMFS
  Supports guest OS BFS             RDM/P only             RDM/P and VMFS         RDM/P and VMFS
  Supports ESXi BFS                 No                     Yes                    Yes
  Maximum number of LUNs            255 (VMware            N/A                    N/A
  supported per ESXi host           restriction)
  Heterogeneous array support       EMC VNX, CLARiiON      EMC VNX and            EMC along with
                                    CX, Symmetrix, and     CLARiiON CX3/CX4       third party
                                    selected third-party
                                    storage
  Can be shared between             Yes                    Yes                    No
  RecoverPoint clusters

RDM volume replication

RDM volumes are managed and replicated as separate physical
devices. Since the RDM volumes are surfaced directly to the virtual
machine, a critical component of replication with RDM volumes is to
ensure that device paths are preserved or reconstituted at the
operating system level of the replicated virtual machines.
Replication Manager interacts with EMC replication technologies to
manage the remote replica and preserve those mappings. Replication
Manager can only be used with an RDM volume that is formatted as
NTFS and is in physical compatibility mode.

Configure remote sites for vSphere virtual machines with RDM

When an RDM is added to a virtual machine, a new virtual disk file is
created that maps the logical device to the physical device. The
virtual disk file contains the unique ID and LUN number of the
device it is mapping.
The virtual machine configuration is updated with the label of the
VMware file system containing the RDM and the name of the RDM.
When the VMware file system that contains the virtual machine is
replicated to a remote location, it will include the same configuration
and virtual disk mapping file information. However, the replicated
device has its own unique ID that results in a configuration error
when the virtual machine is started.
In addition to ensuring that the devices are presented in a
synchronized or consistent state, the SCSI devices must be mounted
to the virtual machine in the same order as the production site to
maintain the integrity of the application. This requires proper
assignment and mapping of the VNX device numbers on the target
VNX storage system.
The secondary images on the target VNX storage system are normally
presented as write disabled. Hence, they cannot be viewed by the
VMware ESXi host. The device state must be modified either through
a promotion or by the use of snapshots for the host to mount the
device in a usable state. Snapshots or clone LUNs can be presented to
the virtual machine or host as independent devices used for ancillary
purposes such as QA or backup.


The most important consideration is to ensure that data volumes are
presented to the guest OS in the proper order. This requires precise
mapping of the VNX device numbers on the secondary images to the
canonical names assigned by the VMkernel.
Determine the device mapping for the physical server and document
the disk order for the devices presented to the virtual machines on
the remote site. For example, consider a physical server on the
production site with its three application data disks as given in
Table 10.
Table 10    VNX to virtual machine RDM

  LUN number    Windows disk            Virtual device node
                \\.\PHYSICALDRIVE2      SCSI (0:1)
                \\.\PHYSICALDRIVE3      SCSI (0:2)
                \\.\PHYSICALDRIVE4      SCSI (0:3)

These three VNX LUNs are replicated to a remote VNX using LUNs
2, 3, and 4, respectively. The virtual machine at the remote site, which
already has a boot image disk configured as SCSI target 0:0, should be
presented the three RDM disks as SCSI disks 0:1, 0:2, and 0:3,
respectively.
Therefore, EMC recommends using a copy of the source virtual
machine's configuration file instead of replicating the VMware file
system. Complete the following steps to create copies of the
production virtual machine by using RDMs at the remote site:
1. Create a directory within a cluster datastore at the remote location
to store the replicated virtual machine files.
Note: Select a datastore that is not part of the current replication to
perform this one-time operation.

2. Copy the configuration file for the source virtual machine to the
directory. This task does not need to be repeated unless the
configuration of the source virtual machine changes.
3. Register the cloned virtual machine using the Virtual
Infrastructure client or the service console (see the example
following this procedure).
4. Generate RDMs on the target VMware ESXi hosts. The RDMs
should be configured to use the secondary MirrorView images.
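As an illustration of step 3, a virtual machine can be registered
directly on an ESXi host with the vim-cmd utility; the datastore and
.vmx path below are hypothetical:

   # Register the copied virtual machine configuration with the ESXi host
   vim-cmd solo/registervm /vmfs/volumes/remote_ds01/prodvm01/prodvm01.vmx

   # List registered virtual machines to confirm the new entry
   vim-cmd vmsvc/getallvms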


The virtual machine at the remote site can be powered on using either
the Virtual Infrastructure client or the service console.

Start virtual machines at a remote site after a disaster

Complete the following steps at the remote site to restart virtual
machines using the replicated copy of the data:
1. Promote the replica LUNs, file systems, or consistency groups at
the remote site. Promoting a LUN changes the state of the device
to write enabled, thus making it useable by the remote virtual
environment. Before promoting the storage, ensure that the
replicas are in a synchronized or consistent state.
2. Shut down the virtual machines that are used for ancillary
business operations at the remote site. Verify that the promoted
devices are added to the ESXi storage groups, allowing ESXi hosts
to access the secondary images.
3. For block storage, rescan the SCSI bus to discover the new
devices (see the example following this procedure). The VMware
file system label created on the source volumes is recognized by
the target VMware ESXi host.
Note: If VMware file system labels are not used, the virtual machine
configuration files need to be modified to accommodate changes in the
canonical names of the devices.

4. Power on the cloned virtual machines using either the vSphere
Client or CLI interfaces.
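A minimal sketch of the rescan in step 3 from the ESXi command line
follows; the adapter name vmhba2 is an assumption and should be
replaced with the HBAs in your environment:

   # Rescan a specific adapter for the newly promoted VNX devices
   esxcfg-rescan vmhba2

   # Rescan for VMFS volumes on the discovered devices
   vmkfstools -V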

Configure remote sites for virtual machines using VMFS

vSphere assigns a unique signature to all VMFS volumes when they
are formatted. The signature is generated using the unique ID (UID)
of the device and the host LUN number assigned through Unisphere.
The signature also includes the user-assigned label, which is
equivalent to the datastore name.
Select the Keep existing signature option and complete the following
steps to create virtual machines at the remote site:
1. Promote the secondary images so that they become read/write
enabled, and accessible by the VMware ESXi cluster group at the
remote data center.
vSphere does not allow duplicate object names (virtual machines)
within a vCenter data center. If the same vCenter server is used to
manage the VMware ESXi hosts at the production and remote
sites, add the servers to different data centers in vCenter.

2. Use the vSphere Client to initiate a SCSI bus rescan after surfacing
the target devices to the VMware ESXi hosts.
3. Use the vCenter client Add Storage wizard to select the replicated
devices holding the copies of the VMware file systems. Select the
Keep existing signature option for each LUN copy (a
command-line alternative is shown after this procedure). After all
devices are processed, the VMware file systems are displayed
under the Storage tab of the vSphere Client interface.
4. Browse the datastores with the vSphere Client and perform
selective registration of the virtual machines.
Note: When using replication from multiple sources it is possible to
duplicate virtual machine names. If this occurs, select a different variant
of the machine name during virtual machine registration.

5. The virtual machines on the VMware ESXi hosts at the remote site
will start without any modification if the following requirements
are met:
The target VMware ESXi host has the same virtual network
switch configuration. For example, the name and number of
virtual switches are duplicated from the source VMware ESXi
cluster group.
All VMware file systems that are used by the source virtual
machines are replicated.
The minimum memory and processor resource requirements
of all cloned virtual machines can be supported on the target
VMware ESXi hosts.
Devices such as CD-ROM and floppy drives are attached to
physical hardware or placed in a disconnected state on the
virtual machines.
6. Power on the cloned virtual machines using the VirtualCenter
Client or command line utilities when required.
When the virtual machines are powered on for the first time, a
message regarding msg.uuid.altered appears. Select I moved it to
complete the power on procedure.
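The handling of VMFS copies in step 3 can also be performed from the
ESXi command line. This is a sketch; the datastore label is
hypothetical:

   # List VMFS copies (snapshot/replica volumes) detected by the host
   esxcfg-volume -l

   # Mount a copy persistently while keeping its existing signature
   esxcfg-volume -M "ProdDatastore01"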


Replication Manager
Replication Manager supports all the replication technologies
introduced in Definitions on page 169. Replication Manager
simplifies the process of creating and mounting replicas by defining
application sets that execute against VNX and virtual machine
storage devices through the corresponding element manager.
Application sets also provide the option to create workflows to
prepare an application by setting it to hot standby mode or otherwise
ensuring that it is in a consistent state prior to the creation of the
replica.
In a VMware environment, Replication Manager uses a proxy host
(physical or virtual) to initiate management tasks with vCenter and
VNX storage systems. The Replication Manager proxy host can be the
same physical or virtual host that serves as a Replication Manager
server. A vCenter server with Replication Manager agent must be
installed in the environment. The proxy host must also be configured
with a Replication Manager agent, EMC Solutions Enabler, and
administrative access to the VNX storage systems.
Unless you require application consistency within the guest virtual
machine, there is no need to install Replication Manager on the
virtual machines or the ESXi hosts where the VNX storage resides.
Operations are sent from a proxy to create VMware snapshots of all
online virtual machines that reside on the VNX datastore. This step
ensures the operating system consistency of the resulting replica.
Figure 82 shows the NAS datastore replica in the Replication
Manager.

Figure 82    NFS replication using Replication Manager


The environment must have a proper DNS configuration, allowing
the proxy host to resolve the hostnames of the Replication Manager
server, the mount host, and the VNX Control Station.
After performing a failover operation, the destination storage can be
mounted as a datastore on the remote ESXi host. When a datastore
replica is mounted to an alternate ESXi host, Replication Manager
performs all tasks necessary to make it visible to the ESXi host. After
successful completion, further administrative tasks such as restarting
the virtual machines and applications may be automated with scripts
initiated by Replication Manager.
Unisphere provides the option to administratively fail over file
systems to a remote location. After the failover, the file systems are
mounted on the remote ESXi host and the virtual machines that
reside in the datastores are registered using the vSphere client. The
vSphere client datastore browser lists all the virtual machines in the
datastore. Select any virtual machine folder, locate the configuration
(.vmx) file, and right-click to display the menu to add the virtual
machine to Inventory as shown in Figure 83. The ESXi host names for
Virtual Machine Networks, VMkernel, and similar properties must be
identical to the source. Inconsistent network names will result in
accessibility issues.

Figure 83    Registering a virtual machine with ESXi


Automating Site Failover with SRM and VNX

VMware Site Recovery Manager (SRM) provides a standardized
framework to automate VMware site failover. SRM is a plug-in for
vCenter that allows users to create workflows to fail over some or all
of the resources of a production VMware environment to a secondary
VMware location. Orchestration of the failover is controlled by a
recovery plan that defines the assets that are failed over and the order
in which they are restored. SRM offers additional capabilities to
execute pre- and post-failover scripts to assist in the recovery and
establish a completely automated solution.
During the test failover process, the production virtual machines at
the protected site continue to run and the replication connection
remains active for all the replicated LUNs or file systems.
When the test failover command is run, SRM requests the VNX
storage system at the recovery site to create a writeable snapshot or
checkpoint. Based on the definitions in the recovery plan, these snaps
or checkpoints are discovered and mounted, and preparation scripts
or callouts are executed. Virtual machines are powered up and
optional post-power-on scripts or callouts are executed. The test
recovery plan is also used for failover so that the users are confident
that the test process is as close to a real failover as possible.
Companies realize a greater level of confidence knowing that their
users are trained on the disaster recovery process and can execute it
correctly each time. Users have the ability to add a layer of
test-specific customization to the workflow that is only executed
during a test failover to handle scenarios where the test may have
differences from the actual failover scenario. If virtual machine power
on is successful, the SRM test process is complete. Users can start
applications and perform validation tests, if required. Prior to
cleaning up the test environment, SRM uses a system callout to pause
the simulated failover. At this point, users verify that the test
environment is consistent with the expected results. After
verification, they acknowledge the callout and the test failover process
concludes by powering down and unregistering virtual machines,
and removing the snaps or checkpoints created for the test.
The actual failover is similar to the test failover, except that rather
than leveraging snaps or checkpoints at the recovery site while
keeping the primary site running, the storage array is physically
failed over to a remote location, the actual recovery site LUNs or
file systems are brought online, and the virtual machines are powered
on. vSphere attempts to power off the virtual machines at the
protected site if they are active when the failover command is issued.
However, if the protected site is destroyed, VMware will be unable to
complete this task. SRM does not allow a virtual machine to be active
on both sites. EMC Replicator has an adaptive mechanism that
attempts to ensure that RPOs are met, even with varying VMware
workloads, so that users can be confident that the crash-consistent
datastores recovered by SRM meet their predefined service-level
specifications.

EMC Site Recovery Adapter

SRM leverages the data replication capabilities of the underlying
storage system through an interface called Site Recovery Adapter
(SRA). VMware vCenter SRM 4 supports SRAs for EMC Replicator,
EMC MirrorView, and EMC RecoverPoint.
Each EMC SRA is a software package that enables SRM to implement
disaster recovery for virtual machines by using VNX storage systems
running replication software. SRA-specific scripts support array
discovery, replicated LUN discovery, test failover, failback, and actual
failover. Disaster recovery plans provide the interface for policy
definition that can be implemented for virtual machines running on
NFS, VMFS, and RDM.

Figure 84    VMware vCenter SRM configuration

Create SRM protection groups at the protected site

A protection group is made up of one or more replicated datastores
that contain virtual machines and templates. It specifies the items that
must be transitioned to the recovery site in the event of a disaster. A
protection group establishes virtual machine protection, and maps
virtual machine resources from the primary site to the recovery site.
There is typically a one-to-one mapping between an SRM protection
group and a VNX or RecoverPoint consistency group.


Note: There are exceptions, such as when RecoverPoint is used to protect
applications such as databases with separate consistency groups for binaries,
user databases, and system databases. In that case the SRM protection group
is made up of multiple consistency groups.

However, if your VNX model does not support the number of devices
being protected within a protection group, create multiple VNX
consistency groups for each protection group.

Note: The maximum number of consistency groups allowed per storage
system is 64. Both MirrorView/S and MirrorView/A count toward the total.

The VNX Open Systems Configuration Guide available on Powerlink
provides the updated synchronous and asynchronous mirror limits.

SRM recovery plan

The SRM recovery plan is a list of steps required to switch the
operation of the data center from the protected site to the recovery
site. The purpose of a recovery plan is to establish a reliable failover
that includes prioritized recovery of applications. For example, if a
database management server needs to be powered on before an
application server, the recovery plan can start the database
management server, and then start the application server. After the
priorities are established, the recovery plan should be tested to ensure
that the ordering of activities has been properly aligned for the
business to continue running at the recovery site.
The recovery plans are created at the recovery site and are associated
with a protection group created at the protected site. If different
recovery priorities are needed during failover, more than one
recovery plan may be defined for a protection group.


Test the SRM recovery plan at the recovery site

The SRM recovery plan must be tested to ensure that it performs as
expected. A recovery plan is shown in Figure 85.

Figure 85    SRM recovery plan

To test the plan, click the Test button on the menu bar. During the test,
the following events will occur:

Production virtual machines are shut down.

SnapView sessions are created and activated using the already
created snapshots.

All the resources created within the SRM protection group are
re-created at the recovery site.

Virtual machines power on in the order defined in the recovery
plan.

When all the defined tasks in the recovery plan are completed, SRM
will pause until you verify that the test ran correctly. After
verification of virtual machines and applications in the recovery site,
select the Continue button to revert the environment to its original
production state.

The VMware vCenter SRM Administration Guide available on
Powerlink and on the VMware website provides more information
on SRM recovery plans and protection groups.

Execute an SRM recovery plan at the recovery site

Executing an SRM recovery plan is similar to testing the
environment, except for the following differences:

Execution of the SRM recovery plan is a one-time activity.

SnapView snapshots are not involved during an executed SRM
recovery plan.

The MirrorView, RecoverPoint, or Replicator secondary copies are
promoted as the new primary production LUNs.

After executing a recovery plan, manual steps are needed to
resume operation at the original production site.

You should only execute an SRM recovery plan in the event of a
declared disaster or validation test.

SRM Failback scenarios

The steps required to restore the primary environment are directly
related to the nature of the disaster. The respective SRM integration
documentation on Powerlink provides details on how to address
different failback scenarios for SRM.
SRM failback is the process of restoring the production VMware
configuration after the production environment has been restored.
Failback requires that the storage infrastructure and configuration are
restored to a condition that can support the application data.
EMC provides failback solutions for each replication technology
supported by SRM. EMC MirrorView Insight for VMware (MVIV)
provides experimental support for site failback. It does not require
any reconfiguration because it discovers the existing environment
and automatically performs the failback by executing the Failover
option.
After completing the VNX configuration, the SRM administrator can
use the MirrorView Insight for VMware (MVIV) tool available with
the MirrorView SRA to verify and validate the underlying replication
configuration of the VNX storage systems, as well as the ESXi hosts.

MVIV also helps troubleshoot the configuration by presenting the
entire environment, including MirrorView and the VMware ESXi
hosts and datastores, on a single screen as shown in Figure 86.

Figure 86    MVIV reporting for SRM environments

For example, MVIV enumerates the environment to identify any of
the following conditions:

Datastores are replicated, but none of the RDMs attached to the
virtual machines within the datastores are replicated.

All RDMs attached to the virtual machines are replicated, but
none of the datastores for the virtual machines are replicated.

The target LUN is not attached to any ESXi host or other servers.

The MirrorView Insight for VMware (MVIV) Technical Note available on
Powerlink provides more details on MVIV.
You can use SRM to fail back. However, you must set up the array
managers, protection groups, and recovery plans from the primary
site to the disaster site.


When using EMC Replicator, VNX Failback Plug-in for VMware
vCenter SRM is a supplemental software package for VMware
vCenter SRM 4. This plug-in enables users to fail back virtual
machines and their associated datastores to the primary site after
executing disaster recovery through VMware vCenter SRM for VNX
storage systems running EMC Replicator and EMC SnapSure.
The plug-in does the following:

Allows users to type login information (hostname/IP, username,
and password) for two vCenter systems and two VNX systems.

Cross-references replication sessions with vCenter Server
datastores and virtual machines.

Allows users to select one or more failed-over EMC replication
sessions for failback.

Supports VNX NFS datastores.

Manipulates vCenter Server at the primary site to rescan storage,
unregister orphaned virtual machines, rename datastores,
register failed-back virtual machines, reconfigure virtual
machines, customize virtual machines, remove orphaned .vswp
files for virtual machines, and power on failed-back virtual
machines.

Manipulates vCenter Server at the secondary site to power off the
orphaned virtual machines, unregister the virtual machines, and
rescan the storage.

Identifies failed-over sessions created by EMC Replication
Manager and directs the user about how these sessions can be
failed back.

Install the VNX Failback plug-in for VMware vCenter SRM

Before installing the VNX Failback plug-in for VMware vCenter SRM,
install the VMware vCenter SRM on a supported Windows host (the
SRM server) at both the protected and the recovery sites.
Note: Install the EMC Replicator Adapter for VMware SRM on a supported
Windows host (preferably the SRM server) at both the protected and recovery
sites.

To install the VNX Failback plug-in for VMware vCenter SRM,
extract and run the executable VNX Failback plug-in for VMware
vCenter SRM.exe from the downloaded zip file. Follow the on-screen
instructions and provide the username and password for the vCenter
Server where the plug-in is registered.

Use the VNX Failback plug-in for VMware vCenter SRM

To run the VNX Failback plug-in for VMware vCenter SRM:
1. Open an instance of VI Client or vSphere Client to connect to the
protected site vCenter.
2. Click VNX Failback Plug-in.
3. Follow the on-screen instructions to connect to the protected and
recovery site VNX and vCenter systems.
4. Click Discover.
5. Select the required sessions for failback from the list in the Failed
Over Datastores, Virtual Machines, and Replication Sessions
areas.
6. Click Failback.

Note: The failback progress is displayed in the Status Messages area.

Recommendations and cautions for SRM with VNX


The recommendations and cautions are as follows:

Install VMware tools on the virtual machines targeted for failover
to ensure that the recovery plan does not generate an error when
attempting to shut them down. If the tools are not installed, the
plan will complete; however, the event is flagged as an error in
the recovery plan (which can be accessed by clicking the History
tab) even if the virtual machines fail over successfully.

Enable SnapView on the arrays with snapshots at both the
primary and secondary sites to test failover and failback.

Create alarms to announce the creation of new virtual machines
on the datastore so that the mirrors can be configured to include
the new virtual machines in the SRM protection scheme.

Complete the VNX-side configurations (MirrorView setup,
snapshot creation, and so on) before installing SRM and SRA.

Ensure that enough disk space is configured for both the virtual
machines and the swap file at the secondary site so that the
recovery plan test runs successfully and without errors.

If SRM is used for failover, use either SRM or MVIV for failback
because manual failback is cumbersome and requires selecting
each LUN individually and configuring the Keep the existing
signature or Assign a new signature option in vSphere on the
primary ESXi hosts. By default, SRM resignatures the datastores
using the Assign a new signature option and renames the VMFS
datastores in vSphere environments.

Testing a recovery plan only captures snapshots of the
MirrorView secondary image; it does not check for connectivity
between the arrays, nor does it verify whether MirrorView is
working properly. To verify connectivity between virtual machine
consoles, use the SRM connection. To check connectivity between
arrays, use SRM Array Manager or Unisphere.

The VNX Failback Plug-in for VMware vCenter Site Recovery
Manager Release Notes available on Powerlink provides further
information on troubleshooting and support when using the
plug-in.


Summary
Table 11 on page 203 lists the data replication solutions for VNX
storage presented to an ESXi host.

Table 11    Data replication solutions

  Type of virtual object     Replication
  NAS datastore              EMC Replicator, Replication Manager, VMware vCenter SRM
  VMFS/iSCSI                 EMC RecoverPoint, Replication Manager, VMware vCenter SRM
  RDM/iSCSI (physical)       EMC RecoverPoint, VMware vCenter SRM
  RDM/iSCSI (virtual)        EMC RecoverPoint, VMware vCenter SRM


5

Using VMware vSphere for Data Vaulting and Migration

This chapter contains the following topics:

Introduction ...................................................................................... 206
EMC SAN Copy interoperability with VMware file systems .... 207
SAN Copy interoperability with virtual machines using RDM . 208
Using SAN Copy for data vaulting ............................................... 209
Transitional disk copies to cloned virtual machines ................... 217
SAN Copy for data migration from CLARiiON arrays ............. 220
SAN Copy for data migration to VNX arrays .............................. 222
Summary ........................................................................................... 224


Introduction
For businesses, information is critical for finding the right customers,
building the right products, and offering the best services. This
requires the ability to create copies of the information and make it
available to users involved in different business processes in the most
cost-effective way possible. It can require users to migrate the
information between storage arrays as business requirements change.
Additionally, compliance regulations can impose data vaulting
requirements that may require users to create additional copies of the
data.
The criticality of the information also imposes strict availability
requirements. Few businesses can afford protracted downtime to
copy and distribute data to different user groups. Copying and
migrating data requires extensive planning and manual work. Due to
this complexity, the processes are susceptible to errors which can lead
to data loss.
VMware ESXi hosts and related products consolidate computing
resources to reduce the total cost of ownership. However, the
consolidation process can result in contention for compute and
storage resources between applications with different service-level
agreements.
VMware provides technologies such as Storage vMotion and VAAI to
help redistribute virtual machines between available datastores.
However, there is still no solution for a full-scale migration of
datastores from one storage location to another. In some cases, using
native tools to copy data in a vSphere environment can require
extended downtime of virtual machines, or stacking of storage
vMotion tasks. EMC offers technologies to migrate and copy data
from one storage array to another with minimal impact to the
operating environment. The purpose of this chapter is to discuss one
such technology, EMC SAN Copy, and its interoperability in
vSphere environments using VNX block storage.


EMC SAN Copy interoperability with VMware file systems


EMC SAN Copy is used to migrate or create copies of VMware file
system and RDM LUNs. Spanned VMware file systems that use
multiple VNX extents require that all members be replicated or
migrated to the target location. All virtual machines accessing a
spanned VMware file system need to be shut down before starting a
SAN Copy session. Alternatively, start a SAN Copy session from a
SnapView clone of the VMFS datastore that maintains a consistent
point-in-time copy of the VMware file system.


SAN Copy interoperability with virtual machines using RDM


RDM volumes created in physical compatibility mode provide virtual
machines with direct access to VNX storage devices. The virtual
machine I/O path is between the virtual machine and the device,
without passing through the ESXi host VMkernel. This limits some of
the advanced functionality provided by the VMware ESXi kernel.
However, providing virtual machines with dedicated access to
storage devices does provide advantages.
Storage array-based replication and migration are performed at the
VNX LUN level. When users add raw devices to virtual machines,
copies of data can be created on individual virtual machines using
storage array software. Since the virtual machines communicate
directly with the storage array, storage management commands can
be initiated on the virtual machine. To ensure that migrations do not
result in data loss, all virtual machines must be shut down before
performing the migration. A separate virtual machine on a separate
storage device is required to manage the SAN Copy activities.


Using SAN Copy for data vaulting


SAN Copy has different modes of operation. One mode is the
incremental mode that is used for push operations. In incremental
mode, SAN Copy propagates data from the production volume to a
volume of equal or greater size on a remote storage array. This
mechanism provides a data vaulting solution where a copy of
production data can be made available for ancillary business
processes on a cost-effective remote storage platform.
One risk of using synchronous replication is that it maintains a
consistent image at both locations. If a replication session is
inadvertently reversed from an inconsistent state, or a target virtual
machine is overwritten due to a partial failover, data can be lost.
protect against these conditions, maintain an offline golden image of
the virtual environment when building a disaster recovery or
migration solution. Take the golden image offline and preserve it to
avoid accidental modification.


A schematic representation of the data vaulting solution used for an
environment with a low I/O rate to the production volumes is shown
in Figure 87. To maintain data consistency at the remote location,
incremental SAN Copy uses reserved LUNs on the production
storage array to buffer data before it is copied to the target array. The
resulting performance overhead is not appropriate for environments
subject to a high rate of change. The performance penalty can be
minimized by modifying the solution presented in the figure.

Figure 87    Data vaulting solution using incremental SAN Copy in a virtual infrastructure


The solution shown in Figure 88 uses SnapView Clone technology to
create a copy of the production volume and use it as the source for
the incremental SAN Copy.
Figure 88    Minimum performance penalty data vaulting solution using incremental SAN Copy

Data vaulting of VMware file system using SAN Copy

Although Figure 87 on page 210 and Figure 88 depict virtual
machines accessing the storage as RDM volumes, VMFS LUN
replication is completed using the same process. First, identify the
appropriate devices constituting the VMware file systems and their
canonical names in the VMware ESXi host environment. Figure 89
shows an example using VMware file system version 3. The devices
have to be related to the VNX LUN numbers.

Figure 89    Identifying the canonical name associated with VMware file systems
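As a hedged alternative to the vSphere Client view, the canonical
names backing each VMware file system can also be listed from the
ESXi command line; output formatting varies by ESXi release:

   # List VMFS volumes with the backing device (canonical name) of each extent
   esxcfg-scsidevs -m

   # List all SCSI devices with their NAA identifiers for matching against VNX LUN WWNs
   esxcfg-scsidevs -l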

Figure 90 on page 212 shows how to use the Unisphere CLI and agent
to determine the WWNs of VNX devices that need to be replicated. In
addition, the VMware-aware Unisphere feature introduced in VNX
OE 29 provides mapping between VNX LUNs and VMware file
systems to help determine the LUNs that need to be replicated.

Figure 90    Using Unisphere CLI/Agent to map the canonical name to EMC VNX devices
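A hedged example of retrieving the WWN (UID) of a VNX LUN with
Navisphere Secure CLI follows; the SP address and LUN number are
placeholders, and the classic getlun command is assumed to be
available in your Unisphere CLI release:

   # Display the unique ID (WWN) of LUN 4 so it can be matched to the ESXi canonical name
   naviseccli -h <source-spa> -user <User> -password <password> -scope 0 getlun 4 -uid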

Identify the WWN of the remote devices for the data vaulting
solution. The WWN is a 128-bit number that uniquely identifies any
SCSI device. The WWN for a SCSI device can be determined by using
different techniques. Management software for the storage array can
provide the information. Solutions Enabler can be used to obtain the
WWN of devices on supported storage arrays (Symmetrix, HDS, and
HP StorageWorks).
After identifying the VNX devices that back the VMFS volumes, and
the WWN of the remote devices, the SAN Copy configuration and
session management can be initiated. The following procedure
describes the steps required to create a data vaulting solution using
SAN Copy:
1. In a SAN Copy configuration, VNX Storage Processor (SP) ports
act as host initiators that log in to the SP ports of the remote VNX
SP ports. Therefore, a SAN switch zone must be created with
VNX SP ports from the source and the target storage arrays.
2. Most modern storage arrays do not allow unrestricted access to
storage devices. The access to storage devices is enforced by LUN
masking that controls the host initiators that can access the
devices. The VNX SP ports need to be able to access the remote
devices to perform I/Os from the source devices. The next step in
the creation of a data vaulting solution is to provide the VNX SP
ports with appropriate access to the remote devices. The
management software for the storage array must be used to
provide access to the VNX SP ports to the appropriate LUNs on
the remote storage array.
3. SAN Copy incremental sessions internally communicate with
SnapView software to keep track of changes and updates for a
SAN Copy session. SnapView reserved LUN pool needs to be
configured with available LUNs before an incremental SAN Copy
session can be created. The number and size of these LUNs
depend on the rate of the change on the source LUN during the
SAN Copy update operation.
4. Create an incremental SAN Copy session between the source and
destination LUNs as shown in Figure 91 on page 214 and
Figure 92 on page 215. The attributes for the SAN Copy session
(session name, WWNs of the source and destination LUNs,
throttle value, latency, and bandwidth control) can be specified
when the session is created. The bandwidth value is required, but
the latency parameter can be left at its default value, in which case
SAN Copy measures latency by sending test I/O to the target.
(A CLI sketch of this and the following steps appears after this
procedure.)


5. There is no movement of data when SAN Copy sessions are
created. When a session is created, SAN Copy performs a series of
tests to validate the configuration. These include checks to ensure
that the VNX SP ports have access to the remote devices, and that
the remote devices are of equal or larger size than the source
devices.

Figure 91    Creating an incremental SAN Copy session

Figure 92    Creating an incremental SAN Copy session (continued)

6. Starting or activating the session created in the previous step
results in the propagation of a point-in-time copy of the data from
the source devices to the target devices.
7. SAN Copy provides a parameter (throttle) to control the rate at
which the data is copied from or to the source and target devices.
A throttle value of 10 will cause SAN Copy to use all available
system resources to speed up the transfer rate. The throttle value
can be changed dynamically after the session is created.
8. After the copy process is complete, the data on the remote devices
can be accessed by the virtual machines configured on a different
VMware ESXi host. However, EMC does not recommend this.
Incremental updates to the target volumes are possible only if the
remote devices are not actively accessed by the hosts.


9. To access the copy of the data on remote devices, EMC
recommends the use of snapshot technology native to the target
storage array. For example, if the target storage array is an EMC
VNX storage array, SnapView snapshots can be leveraged to
present a copy of the data to the virtual machines.
10. An incremental update of the remote device can be achieved by
restarting the previously created SAN Copy session. Incremental
updates can dramatically reduce the amount of data that needs to
be propagated from the source volume in cases where the amount
of data to be copied is a small fraction of the size of the source
volume.
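The following Navisphere Secure CLI sketch illustrates steps 4 and 6;
it assumes that the sancopy -create -incremental and -start options
shown here are available in your VNX OE release, and the session
name and WWNs are placeholders:

   # Step 4: create an incremental SAN Copy session from a source LUN to a remote destination LUN
   naviseccli -h <source-spa> sancopy -create -incremental -name vault_lun4 \
     -srcwwn <source_lun_wwn> -destwwn <remote_lun_wwn>

   # Step 6: start the session to push a point-in-time copy of the source to the remote array
   naviseccli -h <source-spa> sancopy -start -name vault_lun4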

Data vaulting of virtual machines configured with RDMs using SAN Copy

SAN Copy provides a storage array-based mechanism to push a
consistent point-in-time copy of the data on VNX devices to
supported third-party storage. When virtual machines are configured
with raw devices or RDMs, the use of SAN Copy is simplified.
Furthermore, by replicating data at the individual virtual machine
level, copying of unnecessary data can be eliminated.
Virtual machines configured with RDMs in physical compatibility
mode are aware of the presence of the VNX devices. Navisphere
CLI/Agent installed in the virtual machine can be used to easily
determine the devices that need to be replicated using SAN Copy.
After the devices are identified, the process to create and use SAN
Copy with virtual machines using raw devices is the same as that
given in Data vaulting of VMware file system using SAN Copy on
page 211.


Transitional disk copies to cloned virtual machines

Configure remote sites for virtual machines using VMFS

The disk resignaturing options have been covered in other sections of
this book and are listed in Identify clones on page 119. When using
SAN Copy for data vaulting of a production vSphere environment,
EMC recommends selecting the Keep existing signature option for
each individual LUN on the VMware ESXi host at the remote site.
The following procedure provides the process to create virtual
machines at the remote site:
1. Enable access to the copy of the remote devices for the VMware
ESXi cluster group at the remote data center. To preserve the
incremental push capabilities of SAN Copy, the remote devices
should never be accessed directly by the VMware ESXi hosts.
2. vCenter does not allow duplication of object names within a
vCenter data center. It is a good practice to avoid using the same
vCenter infrastructure to manage the VMware ESXi hosts at the
production and remote sites. If you must use the same vCenter
Server, add the replicated storage to a different data center in
vCenter to avoid name collisions.
3. After providing access for the VMware ESXi hosts at the target
site to the copy of the remote devices, scan the SCSI bus using the
service console or the vCenter client.
4. Use the vCenter client Add storage wizard to list the devices holding the copy of the VMware file systems replicated from the source devices. Select the Keep existing signature option for each LUN copy (a command-line sketch of steps 3 and 4 follows this procedure). After you select this option for all LUNs, the VMware file systems are displayed under the Storage tab in the vCenter client interface.
5. Register the virtual machines from the target device using the
vCenter client or the service console.
6. The virtual machines can be started on the VMware ESXi hosts at
the remote site without any modification if the following
requirements are met:


• The target VMware ESXi hosts have the same virtual network switch configuration. For example, the name and number of virtual switches are duplicated from the source VMware ESXi cluster group.
• All VMware file systems that are used by the source virtual machines are replicated.
• The VMFS labels are unique on the target VMware ESXi hosts.
• The target VMware ESXi hosts support the minimum memory and processor resource reservation requirements of all cloned virtual machines. For example, if 10 source virtual machines, each with a memory resource reservation of 256 MB, need to be cloned, the target VMware ESXi cluster should have at least 2.5 GB of physical RAM allocated to the VMkernel.
• Devices such as CD-ROM and floppy drives are attached to physical hardware or are started in a disconnected state when the virtual machines are powered on.
7. The cloned virtual machines can be powered on by using the vCenter client or command line utilities when required. The process for starting the virtual machines at the remote site is discussed in Start virtual machines at a remote site after a disaster on page 189.
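Steps 3 and 4 can also be performed from the ESXi command line. The sketch below assumes ESXi 5.x and a hypothetical datastore label; on ESX/ESXi 4.x the equivalent esxcfg-rescan and esxcfg-volume commands provide the same functions.

# Rescan all adapters so the LUN copies are discovered
esxcli storage core adapter rescan --all

# List VMFS volumes detected as copies (snapshots) of existing datastores
esxcli storage vmfs snapshot list

# Mount a copy while keeping its existing signature ("PROD_DS01" is a
# hypothetical datastore label)
esxcli storage vmfs snapshot mount -l "PROD_DS01"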

Configure remote sites for vSphere 4 virtual machines with RDM

When an RDM is generated, a virtual disk is created on a VMware file system that points to the physical device that is mapped. The virtual disk provides all the metadata about the physical device, including the unique ID, the device LUN number, the RDM name, and the name of the VMware file system that holds the RDM. If the VMFS holding the RDM vmdk is replicated and presented to an ESXi host at the remote site, the mapping file is not valid because it references a unique device that does not exist at that site. Therefore, EMC recommends using a copy of the source virtual machine's configuration file instead of replicating the VMware file system. The following steps create copies of the production virtual machine using RDMs at the remote site:
1. On the VMware ESXi cluster group at the remote site, create a
directory on a datastore (VMware file system or NAS storage)
that holds the files related to the cloned virtual machine. A
VMware file system on an internal disk, unreplicated
SAN-attached disk, or NAS-attached storage must be used to
store the files for the cloned virtual disk. This step has to be
performed only once.


2. Copy the configuration file for the source virtual machine to the
directory created in step 1. The command line utility scp can be
used for this purpose.
Note: This step has to be repeated only if the configuration of the source
virtual machine changes.

3. Register the cloned virtual machine using the vCenter client or the service console. This step does not need to be repeated.
4. Generate RDMs on the target VMware ESXi hosts in the directory created in step 1. The RDMs must be configured to address the copy of the remote devices (see the command sketch at the end of this section).
5. The virtual machine at the remote site can be powered on using
either the vCenter client or the service console when needed.
Note: The process listed in this section assumes that the source virtual
machine does not have a virtual disk on a VMware file system. The process to
clone virtual machines with a mix of RDMs and virtual disks is complex and
beyond the scope of this document.

The process to start the virtual machines at the remote site is the same
as given in Start virtual machines at a remote site after a disaster on
page 189.
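The following is a minimal command-line sketch of steps 2 through 4. The host names, datastore paths, and naa identifier are hypothetical, and the device name must be taken from the copy of the remote device that is actually presented to the target hosts.

# Step 2: copy the source virtual machine's configuration file to the
# directory created on the unreplicated datastore at the remote site
scp root@prod-esx01:/vmfs/volumes/prod_ds/appvm01/appvm01.vmx \
    /vmfs/volumes/local_ds/appvm01/

# Step 3: register the cloned virtual machine with the target host
vim-cmd solo/registervm /vmfs/volumes/local_ds/appvm01/appvm01.vmx

# Step 4: re-create the RDM pointer in physical compatibility mode against
# the copy of the remote device (the naa identifier is a placeholder)
vmkfstools -z /vmfs/devices/disks/naa.60060160xxxxxxxxxxxxxxxxxxxxxxxx \
    /vmfs/volumes/local_ds/appvm01/appvm01_rdm.vmdk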


SAN Copy for data migration from CLARiiON arrays


VMware ESXi hosts provide a limited set of tools to perform data
migrations. Furthermore, most of the native tools require extensive
downtime as the data is migrated from source devices to target
devices. The extended downtime is normally unacceptable for critical
business applications.
SAN Copy is frequently used to migrate data from CLARiiON
storage arrays to other supported storage arrays. One of the major
advantages that SAN Copy provides over other data migration
technologies is the capability to provide incremental updates. This
capability can be leveraged to provide a testing environment before
switching production workload to the migrated devices. In addition,
the incremental update capability can be used to minimize the outage
window when the production workload is switched from the source
devices to the migrated devices.
Note: The concepts presented here are similar to those presented for
MirrorView replication. SAN Copy is discussed in this chapter due to its
support for heterogeneous arrays. If the migration is between VNX arrays,
MirrorView may be a better alternative.

Migrate a VMware file system

The process to migrate a VMware file system to a VNX array using SAN Copy is the same as that given in Data vaulting of VMware file system using SAN Copy on page 211. A few additional steps, listed below, are needed to handle the new functionality introduced in the vSphere environment:
1. After you power off the virtual machines (Data vaulting of
VMware file system using SAN Copy on page 211), remove the
virtual machines from the vCenter infrastructure inventory.
2. Rescan the SCSI bus after providing the VMware ESXi hosts with
access to the migrated devices. Scan the SCSI bus using the
service console or the vCenter client.
3. The vCenter client Add storage wizard lists the devices holding
the replicated copy of the VMware file systems. Select the Keep
existing signature option for each migrated LUN to maintain the
existing VMFS metadata and display the original datastore name


in the Storage tab of the vCenter client interface. Selecting this option also obviates the need to manually edit the virtual machine configuration file (.vmx) for the existing virtual disks on the virtual machine.
4. Use the configuration files on the migrated volumes to add the virtual machines back to the vCenter infrastructure inventory. You can do this by using the vCenter client or the service console (a command-line sketch follows this procedure). Perform this step after step 5 of Data vaulting of VMware file system using SAN Copy on page 211, after the VMware file systems on the remote devices have been relabeled.
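Steps 1 and 4 can be scripted on the ESXi hosts with vim-cmd, as sketched below. The virtual machine ID, datastore label, and paths are hypothetical, and the commands assume the ESXi Shell (or the equivalent vCLI) is available.

# Step 1: list the inventory, then unregister the virtual machine by its ID
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/unregister 42        # 42 is a hypothetical inventory ID

# Step 4: after the migrated datastore is mounted, register the virtual
# machine again from its existing configuration file
vim-cmd solo/registervm /vmfs/volumes/PROD_DS01/appvm01/appvm01.vmx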

Migrate devices used as RDM

The steps required to perform the migration of the data are the same as those listed in SAN Copy for data migration from CLARiiON arrays on page 220.
An RDM volume contains information that is unique to the device and environment where it exists. Migrating a virtual machine to an environment where it will use a replica of the RDM invalidates the configuration, because the RDM file points to a device UUID that does not exist in the new environment. Since altering the device UUID is not permissible, the configuration of the virtual machine must be modified instead, which can pose a risk. As long as care is taken to map the source devices to their destinations, this can be accomplished quite easily through the vCenter client.
When the data for virtual machines containing RDM is migrated
using the process described in SAN Copy for data migration from
CLARiiON arrays on page 220, the virtual disk denoting the RDM
points to a device that does not exist. As a result, the virtual machine
cannot be powered on. Complete the following steps to ensure that
this does not occur:
1. The existing RDM should be deleted before the source devices are removed from the VMware ESXi hosts (step 5 in Data vaulting of VMware file system using SAN Copy on page 211). This can be achieved by using the rm command on the service console or the -U option of the vmkfstools utility.
2. The RDM should be re-created using the canonical name of the remote devices after the devices are discovered on the VMware ESXi hosts (step 5 in Data vaulting of VMware file system using SAN Copy on page 211). The virtual disk created during this process should have the same name as the one deleted in the previous step (see the sketch that follows these steps).
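These two steps reduce to a pair of vmkfstools operations, sketched here with hypothetical paths and a placeholder naa identifier for the migrated VNX device.

# Step 1: remove the stale RDM pointer before the source devices are removed
vmkfstools -U /vmfs/volumes/local_ds/appvm01/appvm01_rdm.vmdk

# Step 2: re-create the pointer, with the same name, against the canonical
# name of the migrated device (use -r instead of -z for virtual
# compatibility mode RDMs)
vmkfstools -z /vmfs/devices/disks/naa.60060160yyyyyyyyyyyyyyyyyyyyyyyy \
    /vmfs/volumes/local_ds/appvm01/appvm01_rdm.vmdk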


SAN Copy for data migration to VNX arrays


SAN Copy provides various modes of operation. In addition to the
incremental copy mode, SAN Copy supports the full copy mode in
which data from a supported storage system can be migrated to the
VNX storage system. The full copy option requires the source devices
to be offline since SAN Copy does not support incremental pull from
remote storage arrays. Use the following process to migrate VMware virtual infrastructure data from supported storage arrays to EMC VNX arrays:
1. The first step in any migration process that uses SAN Copy is the identification of the WWNs of the source devices. The management software for the source storage array should be used for this purpose. The device numbers of the VNX LUNs involved in the migration should also be identified (see the command sketch after this procedure).
2. After the appropriate information about the devices has been
obtained, create a full SAN Copy session of the clone volume on
the remote array. Figure 93 displays the options necessary to
create a full SAN Copy session.

Figure 93   Creating a SAN Copy session to migrate data to a VNX storage array

3. The virtual machines using the devices that are being migrated
must be shut down. The SAN Copy session created in the
previous step should be started to initiate the data migration from
the source devices to the VNX devices.
4. The LUN masking information on both the remote storage array
and the VNX array should be modified to ensure that the
VMware ESXi hosts have access to just the devices on the VNX.
Note that the zoning information may also need to be updated to
ensure that the VMware ESXi hosts have access to the appropriate
front-end Fibre Channel ports on the VNX storage system.
5. After the full SAN Copy session completes, a rescan of the fabric on the VMware ESXi hosts enables the servers to discover the migrated devices on the VNX. The VMware ESXi hosts also update the /vmfs structures automatically.
6. After the remote devices have been discovered, the virtual
machines can be restarted. Note that the discussion about virtual
machines using unlabeled VMFS or raw devices also applies for
the migrations discussed in this section.
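As an illustration of step 1 and the session creation in step 2, the unique IDs of the VNX destination LUNs can be read with Navisphere Secure CLI. The SP address, LUN number, session name, and the sancopy -create switches below are assumptions to be verified against the CLI reference for the installed release; the source device WWNs must be obtained from the third-party array's own management tools.

# Step 1: read the unique ID (WWN) of a VNX destination LUN
# (SP address and LUN number are hypothetical)
naviseccli -h 10.0.0.1 getlun 27 -uid

# Step 2: create a full SAN Copy session from the source WWN to the VNX LUN;
# switch names are illustrative only
naviseccli -h 10.0.0.1 sancopy -create -full -name vmfs_migration \
    -srcwwn <source_device_wwn> -destwwn <vnx_lun_wwn>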
When the amount of data being migrated from the remote storage
array to a VNX array is significant, SAN Copy provides a convenient
mechanism to leverage storage array capabilities to accelerate the
migration. Thus, by leveraging SAN Copy, one can reduce the
downtime significantly while migrating data to VNX arrays.


Summary
This section examined the use of SAN Copy as a data migration tool for vSphere. SAN Copy provides an interface between storage systems for one-time migrations or periodic updates between storage systems.
One of the unique capabilities of SAN Copy is that it interoperates between different storage system types. As a result, it is also useful for migrating data during storage system upgrades and can be a valuable tool for moving from an existing storage platform to VNX.
For additional details, refer to the Migrating Data From the EMC CLARiiON Array to a VNX Platform using SAN Copy white paper, available on Powerlink.EMC.com.
