
Disaster recovery solutions for HP StorageWorks Enterprise Virtual Array in a VMware Infrastructure 3 environment

Contents

Executive summary
Virtualization with VI 3
   Configuring a VM
Addressing IT challenges
   Disaster recovery
      Planned downtime
      Unplanned downtime
Combining BC and CA for disaster recovery
   Local replication
   Remote replication
   Overall business continuity solution
   More information
Implementing BC
   Creating a snapclone of a RDM LUN
   Adding a RDM LUN to a VM
   Creating a snapclone of a VMFS-3 LUN
   Adding a VMFS-3 LUN to a VM
   Creating snapshots
   Remapping mirrorclones
Implementing CA
   Creating a CA LUN
   Failing over to the secondary site
   Failing back to the primary site
Hardware proof-of-concept
For more information

Executive summary
Disaster recovery solutions are key components of today's data centers, even for small businesses. If you are deploying VMware Infrastructure 3 (VI 3), solution tools such as VMotion (live migration of an entire running virtual machine from one server to another), Distributed Resource Scheduling (DRS) (resource utilization monitoring and intelligent allocation across resource pools), High Availability (HA) (cost-effective failover protection), and Consolidated Backup (LAN-free, centralized virtual machine backups) offer a broad range of disaster recovery options. When you are planning and deploying an overall solution, however, you can supplement these VMware tools with the disaster recovery functionality offered by HP storage arrays to provide the business continuity needed to support your desired service levels.

This white paper outlines HP StorageWorks Business Copy (BC) and Continuous Access (CA) business continuity solutions for HP StorageWorks Enterprise Virtual Array (EVA) storage arrays, and provides detailed instructions for implementing BC and CA in a VI 3 environment. The proof-of-concept performed by HP and VMware to validate these implementations is also described.

Target audience: This white paper is intended for solutions architects and engineers engaged in the development, deployment, and operation of solutions for consolidation and storage virtualization. You should be familiar with the use of networking and Storage Area Network (SAN) technologies in a heterogeneous environment, and have a working knowledge of HP StorageWorks Command View, BC and CA, and VI 3 SAN implementations.

Virtualization with VI 3
VI 3 allows enterprises and small businesses alike to transform, manage, and optimize their IT environments through virtualization. Virtualization creates an abstraction layer that decouples the physical hardware from the operating system to deliver greater IT resource utilization and flexibility. Multiple virtual machines (VMs), running a range of operating systems (such as Microsoft Windows Server 2003 or Linux) and applications, run in isolation but side-by-side on the same physical server. VI 3 delivers comprehensive virtualization, management, resource optimization, application availability and operational automation capabilities in an integrated offering.

Figure 1 provides a logical view of the components of a VI 3 implementation.

Figure 1: VI 3 components

The components shown in Figure 1 include:
• VMware ESX Server: a production-proven virtualization layer running on physical servers; it abstracts processor, memory, storage, and networking resources so that they can be provisioned to multiple VMs.
• Virtual Machine (VM): a representation of a physical machine implemented in software; it includes an operating system and applications running on a virtual hardware configuration (CPU, system memory, disk storage, NIC, and more). The operating system sees a consistent, conventional hardware configuration regardless of the actual physical components. VMs support advanced capabilities such as 64-bit computing and virtual symmetric multiprocessing. A VM is encapsulated in files that can reside on any storage, making it easy to migrate the VM to any location (for example, as part of a disaster recovery strategy involving BC and CA).
• Virtual Machine File System (VMFS): a high-performance cluster file system for VMs.
• Raw Device Mapping (RDM): an RDM LUN supports direct access (Fibre Channel or iSCSI only) between the VM and the physical storage subsystem, and is useful with SAN snapshots and other layered applications running in a VM. Because RDM volumes allow SCSI commands to pass through directly from the guest operating system, they better enable scalable backups using native SAN features; by comparison, VMFS volumes filter all SCSI commands.
• Virtual Symmetric Multi-Processing (Virtual SMP): allows a single VM to use multiple physical processors simultaneously.
• VirtualCenter Management Server: provides a central point for configuring, provisioning, and managing a virtualized IT infrastructure.
• Virtual Infrastructure (VI) Client: an interface that allows administrators and users to connect remotely to the VirtualCenter Management Server or to individual ESX Server installations from any Windows PC.
• VI Web Access: a web interface for virtual machine management and remote console access.

Configuring a VM
A suitable VM configuration is a prerequisite for successful disaster recovery with BC and CA. For example, boot disk images and data disks for VMs should be located within a RAID array on the SAN. Figure 2 shows a sample configuration.

Figure 2: Sample configuration that is suitable for BC and CA business continuity solutions

HP recommends using a dedicated VMFS virtual disk to separate the VM's operating system from all other data. This VMFS virtual disk can be on a dedicated LUN or can be shared. Application and other data are maintained on one or more additional VMFS virtual disks or RDM LUNs. With this structure, the VMFS virtual disk containing the boot image can be kept to the smallest possible size, so that less time is required for cloning, exporting, and backup. You can also implement separate backup policies for the operating system virtual disk and the data disks.

Addressing IT challenges
Key challenges for today's IT managers include safeguarding critical data and ensuring that users can access that data with minimal disruption. Solutions have focused on providing and maintaining a high-availability environment, often delivering four-nines (99.99%) or five-nines (99.999%) availability. As described in this white paper, BC and CA functionality can be used to help provide business continuity in a VI 3 environment.

Disaster recovery
A disaster recovery solution must be able to accommodate both planned and unplanned downtime.

Planned downtime
Planned downtime (for equipment or software maintenance) can jeopardize data at local or centrally located data centers. You must be able to preserve this data at a centralized location. BC provides an ideal solution for data on EVA storage arrays, allowing you to create snapshots, snapclones, or mirrorclones of your critical data for backup purposes.

Unplanned downtime
Unplanned downtime results from a natural disaster or a sudden hardware or software failure and may impact an entire site. CA offers an ideal solution, allowing you to replicate your data to a safe offsite data center, away from harm.

Combining BC and CA for disaster recovery


For today's businesses, maintaining a cost-effective, highly available infrastructure that strongly protects the security, integrity, and confidentiality of valuable data and transactions is a top priority. In addition, this infrastructure must be flexible enough to respond dynamically to changing business needs and to accommodate unplanned events such as power outages and natural disasters. Downtime or data loss can have a devastating effect on the bottom line, no matter the size of your business.

To keep you protected, HP StorageWorks EVA storage arrays provide business continuity and availability options that scale from the smallest business with direct-attached storage to large enterprises with multiple terabytes and hundreds of sites. These solutions, BC and CA, can be deployed individually to meet backup and recovery needs, or easily combined to provide a disaster-tolerant solution that helps meet your regulatory data-retention requirements. BC and CA share an integrated management interface, Replication Solutions Manager (RSM), that supports a unique, unified approach to replication management. RSM can be used to manage all local and remote replication features across the full EVA storage array family, allowing you to protect your information in the event of a disaster.

Local replication
BC (HP StorageWorks Business Copy EVA software) is local replication software that allows you to create copies of virtual disks for backup and restore purposes. BC capabilities include:
• Create point-in-time copies (snapshots, snapclones, or mirrorclones) of virtual disks
• Specify properties for the copies, such as redundancy level (Vraid), read cache, and write protection
• Present the copies to hosts
• Allow immediate host I/O to snapshots and snapclones
• Instantly restore the contents of a virtual disk
Note: Features vary based on the particular array controller software version. For specific information, refer to HP StorageWorks 4000/6000/8000 Enterprise Virtual Array Software & Drivers.

Remote replication
CA (HP StorageWorks Continuous Access EVA software) is an array-based application that can be used to create, manage, and configure remote replication on the EVA product family. CA delivers the components necessary to meet your business continuity objectives in a cost-effective, easy-to-deploy package. The benefits of remote replication with CA include:
• Less expensive: faster, automated methods virtually eliminate complexity and result in fewer user errors
• Easy to deploy, for accelerated payback
• Less labor-intensive, optimizing the utilization of your valuable IT staff

Overall business continuity solution


BC and CA can be combined to provide a robust business continuity solution for a VI 3 environment. Snapshot capabilities allow VM images and applications to be backed up and, if desired, migrated to a physical host on another SAN. Data replication from the original EVA storage array ensures that required images will always be available.

More information
For more information on BC and CA, refer to the HP StorageWorks EVA replication software consolidated release notes at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00746629/c00746629.pdf.


Implementing BC
BC is local replication software that allows you to create copies of virtual disks for backup and restore purposes. This section provides instructions for creating snapclones and snapshots, and for remapping mirrorclones:
• Creating a snapclone of a RDM LUN
• Adding a RDM LUN to a VM
• Creating a snapclone of a VMFS-3 LUN
• Adding a VMFS-3 LUN to a VM
• Creating snapshots
• Remapping mirrorclones
These instructions have been validated by HP and VMware.

Creating a snapclone of a RDM LUN


The process for creating a snapclone of a RDM LUN can be summarized as follows:
• Identify the existing RDM LUN you wish to clone and select the cloning option from Command View.
• Present the newly-cloned LUN to ESX Server.
• Rescan the host.
• If desired, add the newly-cloned LUN to a VM as an RDM.
Note: You do not need to power down the VM before starting the cloning process.

1. Using HP StorageWorks Command View EVA (Command View), navigate to the LUN under Virtual Disks that you wish to clone, as shown in Figure 3. Select Create snapclone to begin the cloning process.

Figure 3: Selecting Vdiskdata2, the LUN you wish to clone

2. Create a unique name for the snapclone, as shown in Figure 4, so that you will be able to keep track of the LUNs you have cloned. Ensure that you select exactly the same performance characteristics (such as Vraid level and read cache policy) as configured on the LUN being cloned.

Figure 4: Naming the snapclone Copy of Vdiskdata2

3. As shown in Figure 5, a dialog box indicates that it may take a significant amount of time to create the clone. This is normal; select OK to continue.

Figure 5: Starting to create a snapclone of Vdiskdata2


4. While the LUN is being cloned, note the message indicating Snapclone in progress, as shown in Figure 6. Avoid any change in the fabric that could disrupt SAN connectivity.

Figure 6: The creation of snapclone Copy of Vdiskdata2 is in progress


5. When cloning is complete, note that a new entry, Copy of Vdiskdata2, is present under Virtual Disks, as shown in Figure 7.

Figure 7: The new LUN entry appears under Virtual Disks


6. The last step from Command View is to present the newly-cloned LUN to ESX Server. As shown in Figure 8, LUN 8, Copy of Vdiskdata2, is being presented to host DL580 (an HP ProLiant DL580 G2 server).

Figure 8: The newly-cloned LUN is presented to a host


7. ESX Server must perform a rescan to detect the newly-cloned LUN.

Note: LUNs with IDs 0 through 7 are currently shown under SCSI Target 0 in this example.

Figure 9: Updating the configuration of the host so that it will be able to recognize the newly-cloned LUN


8. From the host, initiate a rescan. On completion, verify that the newly-cloned LUN is present. As shown in Figure 10, LUN 8 (vmhba2:0:8) has been added to the list of LUNs for host DL580 (see Step 7).

Figure 10: Verifying that the newly-cloned LUN (LUN 8) is present on the host
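If you prefer the ESX Server service console to VI Client, the rescan in Step 8 can also be performed from the command line. The following is a minimal sketch; the HBA name (vmhba2) and LUN ID (8) are taken from this example and must be adjusted to match your configuration:

   # Rescan the HBA for newly presented LUNs
   esxcfg-rescan vmhba2
   # List the visible paths and confirm that vmhba2:0:8 now appears
   esxcfg-mpath -l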


Adding a RDM LUN to a VM


The newly-cloned RDM LUN can now be deployed with any VM.
Caution: Do not present a cloned LUN to the VM that was the source of the cloned data unless you first remove the original LUN from that VM. Consider the example of a VM named original with LUNs at drives E:\ and F:\. If you clone these LUNs (original E:\ and original F:\), creating new clones clone E:\ and clone F:\, you should not present the newly-cloned LUNs to VM original unless you first remove original E:\ and original F:\.

You can use the Add Hardware Wizard to add your newly-cloned LUN to a VM. Ensure you select the Raw Device Mappings option, as shown in Figure 11.
Note: For the remaining steps of the RDM procedure shown in Figure 11, follow the procedures in the Basic System Configuration guide from VMware (http://www.vmware.com/support/pubs/vi_pubs.html).

Figure 11: Selecting the Raw Device Mappings option while adding your newly-cloned RDM LUN to a VM
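As an alternative to the Add Hardware Wizard, an RDM mapping file can be created from the ESX Server service console with vmkfstools and then attached to the VM as an existing disk. The sketch below is illustrative only: it assumes the cloned LUN is visible as vmhba2:0:8:0 (as in this example) and that the mapping file is stored in a hypothetical VM directory on an existing VMFS datastore.

   # Create a physical compatibility mode RDM mapping file for the cloned LUN
   # (use -r instead of -z for virtual compatibility mode)
   vmkfstools -z /vmfs/devices/disks/vmhba2:0:8:0 \
     /vmfs/volumes/os-datastore/dl580-vm1/copy-of-vdiskdata2-rdm.vmdk

The resulting .vmdk file can then be added to the VM through the wizard's Use an existing virtual disk option.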


You can hot-add a new LUN to a running VM without powering it down. You must then perform a Rescan Disks operation from Windows Computer Management, as shown in Figure 12.

Figure 12: Selecting the Rescan Disks command after hot-adding a new LUN


After the rescan, newly-cloned LUNs (for example, Disk 1 and Disk 2 in Figure 13) are detected but remain offline and must be reactivated. Right-click the disk you wish to activate and select Reactivate Disk.

Figure 13: Reactivating a hot-added LUN


Creating a snapclone of a VMFS-3 LUN


The procedure for creating a snapclone of a VMFS-3 LUN can be summarized as follows:
• Identify the existing VMFS-3 LUN you wish to clone and select the cloning option from Command View.
• Present the newly-cloned LUN to ESX Server.
• Change the value of the LVM.EnableResignature setting to 1.
• Rescan the host.
• Restore the value of LVM.EnableResignature to 0.
• If desired, add any virtual disk on the newly-cloned LUN to any VM.
Note: You do not need to power down the VM before starting the cloning process.

Step-by-step instructions are provided below.


1. Follow Steps 1 through 6 of the Creating a snapclone of a RDM LUN procedure.

2. ESX Server must perform a rescan to detect the newly-cloned LUN. First, you must change the logical volume management setting LVM.EnableResignature, which is designed to prevent unknown data-replicated LUNs from being added to the ESX Server datastore and causing unexpected behavior. Changing LVM.EnableResignature from its default value of 0 (OFF) to 1 (ON) forces ESX Server to recognize data-replicated LUNs. From VI Client, as shown in Figure 9, select host DL580 by its IP address, 10.20.0.3, and then select Advanced Settings from the host's Configuration page.

3. As shown in Figure 14, change LVM.EnableResignature to 1.

Figure 14: Changing the LVM.EnableResignature setting from 0 to 1 so the host can detect the newly-cloned LUN
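The same setting can also be changed from the ESX Server service console with esxcfg-advcfg, which is convenient if you want to script the resignaturing change together with the rescan. A minimal sketch (the HBA name is taken from this example):

   # Check the current value of the resignaturing option (the default is 0)
   esxcfg-advcfg -g /LVM/EnableResignature
   # Enable resignaturing so that the cloned VMFS volumes are recognized
   esxcfg-advcfg -s 1 /LVM/EnableResignature
   # Rescan the HBA to detect the newly-cloned LUNs (Step 4)
   esxcfg-rescan vmhba2
   # Restore the default once the clones are visible (Step 6)
   esxcfg-advcfg -s 0 /LVM/EnableResignature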


4. Initiate a rescan operation on the host. On completion, verify that your newly-cloned LUNs are now present. As shown in Figure 15, the snapclones you presented to ESX Server in Step 6 are now listed as LUNs 14 and 15 (vmhba2:0:14 and vmhba2:0:15, respectively).

Figure 15: Verifying that the newly-cloned LUNs (LUN 14 and LUN 15) are present on the host


5. Note that the VMFS-3 LUNs have been renamed with snap- prefixes. The names of the newly-created LUNs, as shown in Figure 16, are snap-00000002-VDiskdata1 and snap-00000002-VDisklgdata2. These LUNs are correctly listed with VMFS format.

Note: During cloning, the file system's size and attributes are preserved.

Figure 16: After rescanning, the VMFS-3 LUNs have been renamed with snap- prefixes

6. After verifying that the desired server can successfully detect the newly-cloned LUNs, restore the value of LVM.EnableResignature to 0.


Adding a VMFS-3 LUN to a VM


Newly-cloned VMFS-3 LUNs can now be hot-added to any VM without powering it down.
Caution: Do not present a cloned LUN to the VM that was the source of the cloned data unless you first remove the original LUN from that VM. Consider the example of a VM named original with LUNs at drives E:\ and F:\. If you clone these LUNs (original E:\ and original F:\), creating new clones clone E:\ and clone F:\, you should not present the newly-cloned LUNs to VM original unless you first remove original E:\ and original F:\.

Useful tip
You may find the naming convention for cloned LUNs confusing when adding a new clone to a VM. To ensure that you select the correct LUN, make a note of which LUN ID is associated with which volume name. For example, Figure 17 shows the Storage view of host DL580's Configuration page, listing available storage. Cloned LUN ID 51 has been presented to ESX Server; the name associated with this LUN is snap-00000003-VDiskdata1. If you choose to add this LUN to a VM, look for the name snap-00000003-VDiskdata1.

Figure 17: Noting which LUN ID is associated with which volume name
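The same correlation can be checked from the ESX Server service console; the short sketch below simply lists the device-to-volume mappings (command output varies slightly between ESX 3.x releases):

   # Map vmhba paths to service console devices and VMFS UUIDs
   esxcfg-vmhbadevs -m
   # Datastore labels (for example, snap-00000003-VDiskdata1) are symbolic
   # links to the VMFS UUIDs listed above
   ls -l /vmfs/volumes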


You can use the Add Hardware Wizard to add a newly-cloned LUN to a VM. Ensure you select the Use an existing virtual disk option, as shown in Figure 18.

Figure 18: Selecting the Use an existing virtual disk option while you are adding a newly-cloned VMFS-3 LUN to a VM


As shown in Figure 19, the newly-created clones, snap-00000002-VDiskdata1 and snap-00000002-VDisklgdata2, are now visible when you browse datastores for LUNs that you can add to a VM.

Figure 19: Selecting the clone you wish to add to the VM

Creating snapshots
The procedures for creating snapshots of RDM or VMFS-3 LUNs are effectively the same as those for creating snapclones of RDM or VMFS-3 LUNs (Creating a snapclone of a RDM LUN or Creating a snapclone of a VMFS-3 LUN).

Remapping mirrorclones
Until it is broken or detached, a mirrorclone continues to mirror its original LUN. After the mirrorclone is detached, you can remap it to another ESX Server host in much the same way you would map a new snapclone or snapshot to the host (as described above).


Implementing CA
CA is an array-based application that can be used to create, manage, and configure remote replication on the EVA product family. This section provides step-by-step instructions for creating and utilizing a CA LUN:
• Creating a CA LUN
• Failing over to the secondary site
• Failing back to the primary site
These instructions have been validated by HP and VMware.

Creating a CA LUN
The process for creating and utilizing a CA LUN can be summarized as follows:
• Create a CA LUN by copying a data replication (DR) group from a source array (at the primary site) to a destination array (at the secondary site).
• Present the newly-created CA LUN to the ESX Server host at the secondary site.
• For a VMFS-3 LUN, change the value of the LVM.EnableResignature setting to 1.
• Rescan the host.
• Restore the original value of LVM.EnableResignature.
Step-by-step instructions are provided below.


1. Using Command View, select Create data replication group from the Data Replication Folder Properties page, as shown in Figure 20.

Figure 20: Using Command View to create a new DR group


2. Specify the source disk and other options, as shown in Figure 21.

Figure 21: Creating DR Group 001


3. Monitor the progress of the copying process, as shown in Figure 22.

Figure 22: The copy process is 42% complete


4. After the copying is complete, verify that the source LUN has write access and that the CA LUN has Inquiry only (read) access. Figure 23 shows the DR Group Properties page for the source, DR Group 001 on VMWARE 2. The destination, on VMWARE 1, is shown with Inquiry only access.

Figure 23: The destination only has read access


From the same example, Figure 24 shows the DR Group Properties page for the destination, DR Group 001 on VMWARE 1. The source is shown as VMWARE 2; host access is Inquiry only.

Figure 24: Confirming that the destination only has read access


Failing over to the secondary site


To initiate a failover from the primary site to the secondary site, simply select the DR Group on the destination array and invoke the Fail over option.
Caution: If you fail over a DR Group with LUNs belonging to a running VM, the VM will crash (in a Windows guest, you will see the so-called blue screen of death). This is normal: failover to the secondary site makes the boot image and data inaccessible to the VM, because the primary site is left with read-only access.

1. In this example, select DR Group 001 on destination array VMWARE 1. Select Fail over, as shown in Figure 25.

Figure 25: Selecting the new DR Group as the destination for the failover



2. After the failover of DR Group 001 on VMWARE 1 is complete, locate the CA LUN and present it to the ESX Server host at the secondary site. The Members view of the DR Group Properties page, shown in Figure 26, indicates that the CA LUN is located in the \\Virtual Disks\Source from DL380\Vdisk001_dl580vm1 folder.

Figure 26: Locating the CA LUN so that you can present it to a physical host at the secondary site


3. Select Vdisk001_dl580vm1, a CA LUN from DR Group 001, as shown in Figure 27. Present the CA LUN to host DL380 as LUN ID 20.

Figure 27: Presenting the CA LUN to the host at the secondary site


4. As with the previous procedures, mapping the CA LUN to the ESX Server host may require you to temporarily change one of the host's LVM settings. Use VI Client as follows:
• RDM LUN: only a rescan is needed.
• VMFS-3 LUN: change LVM.EnableResignature from 0 to 1.
You can now initiate a rescan operation on the remote host. On completion, restore the LVM setting to its original value.
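A consolidated sketch of the equivalent service console commands at the secondary site is shown below; it assumes the replicated LUN is visible through vmhba1 on the secondary host (adjust the HBA name for your configuration):

   # RDM LUN: a rescan is sufficient
   esxcfg-rescan vmhba1
   # VMFS-3 LUN: enable resignaturing, rescan, then restore the original value
   esxcfg-advcfg -s 1 /LVM/EnableResignature
   esxcfg-rescan vmhba1
   esxcfg-advcfg -s 0 /LVM/EnableResignature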
5. After the CA LUN has been mapped to the remote host, you can add it to any VM. Verify that the CA LUN is now shown with a snap- prefix, as shown in Figure 28, where Vdisk001_dl580vm1 now appears as snap-00000002-dl580-vm1.

Figure 28: The CA LUN has been detected by the physical host and is now shown with a snap- prefix


Failing back to the primary site


If you choose to fail the CA LUN back to its original location, you will need to rescan the appropriate ESX Server host. Remember to set LVM.EnableResignature to the appropriate value before the rescan, as described in the previous procedures. When you first power on the original VM with its failed-back CA LUN, the VM will be inaccessible, as shown in Figure 29.

Figure 29: After you power on the original VM, VI Client reports a general system error

In the example shown above, CA LUN snap-00000004-dl580-vm1 has just failed back to an ESX Server host with IP address 10.20.0.6. VM dl580-vm1-drgroup is inaccessible. Perform the following steps to make this VM accessible:
1. In VI Client, right-click the inaccessible VM.
2. Select the Remove from inventory command.


3. From the Storage section of the ESX Server host's Configuration page, locate the CA LUN that has just failed back. Select the Browse Datastore command, as shown in Figure 30.

Figure 30: Using VI Client to browse the datastore so that you can locate the .vmx file for the inaccessible VM


4. To re-enable the VM, locate its .vmx file and select the Add to Inventory command, as shown in Figure 31.

Figure 31: Re-enabling the VM by returning it to inventory
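If you prefer the service console, the same remove-and-re-add sequence can be performed with vmware-cmd. This is a sketch only; the .vmx paths shown are illustrative and must be replaced with the actual paths, and the VI Client procedure above remains the documented flow. The UUID question covered in Step 5 is still asked the first time the re-registered VM is powered on.

   # List the currently registered VMs and their .vmx paths
   vmware-cmd -l
   # Remove the inaccessible VM from inventory, using the path reported above
   vmware-cmd -s unregister /vmfs/volumes/<old datastore path>/dl580-vm1-drgroup/dl580-vm1-drgroup.vmx
   # Register the VM again from its .vmx file on the failed-back datastore
   vmware-cmd -s register /vmfs/volumes/snap-00000004-dl580-vm1/dl580-vm1-drgroup/dl580-vm1-drgroup.vmx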


5. Before you power on the VM you have just re-enabled, select the Keep option to retain its original Universally Unique Identifier (UUID), as shown in Figure 32.

Figure 32: Retaining the original UUID for the VM


Hardware proof-of-concept
HP and VMware carried out a hardware proof-of-concept to validate the procedures outlined in this white paper and the integrity of the replicated data. The proof-of-concept used Cognos 8 Business Intelligence (BI), in conjunction with a Microsoft SQL Server 2000 Service Pack 3a database, to generate a sales report. Following planned and unplanned failover events, it was possible to retrieve the report and the associated data. Figure 33 shows the hardware configuration for the proof-of-concept.

Figure 33: Hardware configuration

Source VMs were configured as follows:

Dl580-vm1
• One processor
• 256 MB RAM
• 10 GB RDM LUN for the operating system (Windows 2000 Advanced Server)
• 20 GB RDM LUN for Cognos 8 BI application server
• 25 GB RDM LUN for SQL Server 2000 Service Pack 3a


Dl580-vm2
• One processor
• 256 MB RAM
• 10 GB RDM LUN for the operating system (Windows 2000 Advanced Server)
• 20 GB RDM LUN for Cognos 8 BI application server
• 25 GB RDM LUN for SQL Server 2000 Service Pack 3a


For more information


HP virtualization with VMware: http://www.hp.com/go/vmware
VMware Infrastructure planning (HP ActiveAnswers): http://h71019.www7.hp.com/ActiveAnswers/cache/272102-0-0-0-121.html
HP StorageWorks EVA replication software release notes: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00746629/c00746629.pdf
Data storage from HP: http://www.hp.com/storage
VMware Virtual Infrastructure 3: http://www.vmware.com/products/vi/
VMware Storage/SAN Compatibility Guide for ESX Server 3.x: http://www.vmware.com/pdf/vi3_san_guide.pdf
VMware SAN Configuration Guide: http://www.vmware.com/pdf/vi3_esx_san_cfg.pdf
HP StorageWorks Continuous Access EVA: http://h18006.www1.hp.com/products/storage/software/conaccesseva/index.html
HP StorageWorks Business Copy EVA: http://h18006.www1.hp.com/products/storage/software/bizcopyeva/index.html

To help us improve our documents, please provide feedback at www.hp.com/solutions/feedback.

© 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. Microsoft and Windows are registered trademarks of Microsoft Corporation. 4AA1-0820ENW, Revision 2, April 2007
