
EMC RecoverPoint

Deploying with AIX Hosts

Technical Notes
P/N 300-004-905
Rev A02
June 27, 2011

These technical notes describe the required procedures and best practices for deploying RecoverPoint with AIX hosts:

Introduction
Environmental prerequisites
Configuring RecoverPoint
Working with AIX Virtual I/O
Working with HACMP

Revision history

Table 1 shows the revision history for this document.
Table 1  Revision history

Revision   Date            Description
A02        June 27, 2011   Added explanation of fast-fail mode
                           Added fast-fail best practices
                           Updated SCSI-2 support with AIX host-based splitters
                           Improved instructions for disabling SCSI reservations
                           Added section on dynamic tracking mode, including procedures
                           Updated instructions for FC ID to take dynamic tracking mode into account
                           Corrected instructions for first-time failover
                           Corrected instructions for planned failover and unplanned failover
                           Added instructions for disabling SCSI reservations when using Brocade splitter
                           Reviewed and corrected procedures
A01                        First release


Introduction
The EMC RecoverPoint system provides full support for data replication and disaster recovery with AIX-based host servers. This document presents information and best practices relating to deploying the RecoverPoint system with AIX hosts.
RecoverPoint can support AIX hosts with host-based, fabric-based, and array-based splitters. The outline of the installation tasks is similar for all splitter types, and similar to installation with any other host, but the specific procedures differ slightly. The workflow is:
1. Creating and presenting volumes to their respective hosts at each location; LUN masking and zoning host initiators to storage targets (not covered in this document).
2. Creating the file system or setting raw access on the host.
3. Configuring RecoverPoint to replicate volumes (LUNs): configuring volumes, replication sets, and consistency group policies; attaching to splitters; and first-time initialization (full synchronization) of the volumes.
4. Validating failover and failback.

Scope

The document is intended primarily for system administrators who are responsible for implementing their companies' disaster recovery plans. Technical knowledge of AIX is assumed.

AIX and replicating boot-from-SAN volumes

RecoverPoint does not support replicating AIX boot-from-SAN volumes when using host-based splitters. When using fabric-based or array-based splitters, boot-from-SAN volumes are replicated the same way as data volumes.

Related documents

EMC RecoverPoint Administrator's Guide
EMC RecoverPoint CLI Reference Guide
EMC RecoverPoint Installation Guide
EMC Deploying RecoverPoint with SANTap Technical Notes
EMC Deploying RecoverPoint with Brocade Splitter Technical Notes
EMC RecoverPoint Deployment Manager Product Guide

Supported configurations

Consult the EMC Support Matrix for RecoverPoint for information about supported RecoverPoint configurations, operating systems, cluster software, Fibre Channel switches, storage arrays, and storage operating systems.


Environmental prerequisites
This document assumes you have already installed a RecoverPoint
system and are either replicating volumes or ready to replicate. In
other words, it is assumed that the RecoverPoint ISO image is
installed on each RecoverPoint appliance (RPA); that initial
configuration, zoning, and LUN masking are completed; and that the
license is activated. In addition, it is assumed that AIX hosts with all
necessary patches are installed both at the production side and the
replica side. In the advanced procedures (LPAR-VIO, HACMP, etc.),
it is assumed that the appropriate hardware and environment are set
up.

Ensuring fast_fail mode for AIX-based hosts

For AIX hosts using host-based splitters, the replicated devices' Fibre Channel SCSI I/O Controller Protocol Device attribute (fc_err_recov) must be set to fast_fail. The default setting is delayed_fail.
Fast_fail logic is invoked when the Fibre Channel adapter driver detects a link event, such as a lost link between a storage device and a switch. If that happens, the Fibre Channel adapter waits a short amount of time, approximately 15 seconds, to allow the fabric to stabilize. If the Fibre Channel adapter driver then detects that the device is not on the fabric, it begins to fail all I/Os at the adapter driver. New I/Os and retries of the failed I/Os are failed immediately by the adapter until the adapter driver detects that the device has rejoined the fabric.
Although the fast_fail setting is not mandatory for fabric-based or storage-based splitters, it is required by some multipathing software; check your multipathing software's user manual. In environments where multipathing software is in use, enabling fast_fail is the best practice: it can decrease I/O failure times caused by loss of the link between a storage device and the switch, allowing faster failover to alternative paths.
To check the current setting of the attribute, run:
# lsattr -El <FC_SCSI_I/O_Controller_Protocol_Device>

For example:
# lsattr -El fscsi0

Check the output for the current value of fc_err_recov.


If it is necessary to change fc_err_recov for a device:
1. Run:
# chdev -l <FC_SCSI_I/O_Controller_Protocol_Device> -a fc_err_recov=fast_fail -P

The -P flag makes the change permanent; it takes effect the next time the host is rebooted.
2. Reboot the host machine.
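For example, assuming the Fibre Channel protocol device is fscsi0 (a hypothetical name; use lsdev to list the devices on your host), a minimal sequence is:

# lsattr -El fscsi0 | grep fc_err_recov
# chdev -l fscsi0 -a fc_err_recov=fast_fail -P
# shutdown -Fr

The first command confirms the current value; the chdev call records fast_fail in the ODM, and the reboot applies it. Repeat the chdev call for each fscsi device (fscsi1, fscsi2, and so on) before rebooting.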
Dynamic tracking mode

Previous releases of AIX required the user to unconfigure Fibre Channel storage devices and adapter device instances before making any changes in the storage area network that might result in a change in the N_Port ID (FC ID) of a storage port.
With AIX dynamic tracking enabled, the Fibre Channel adapter driver detects when the N_Port ID (FC ID) of a device changes, and reroutes the traffic for that device to the new address while the devices are still online.
Examples of events that can change the N_Port ID (FC ID):
- Recabling a storage device from one switch port to another
- When using a Brocade splitter in frame redirection mode, binding or unbinding storage targets
- When using a SANTap splitter, moving a virtual initiator from the back-end VSAN to the front-end VSAN
- When using a SANTap splitter in the following configuration: switch A and switch B are linked together with an interswitch link (ISL); switch A has an intelligent module running SANTap services and switch B does not; a single back-end VSAN spans switch A and switch B; storage port A is connected to switch A and storage port B is connected to switch B; and host A is zoned to both storage port A and storage port B. In this case, storage port B will have a different N_Port ID (FC ID) from storage port A even though they are in the same back-end VSAN.
Consequently, when dynamic tracking is available (AIX 5.2 and later), the best practice is to always enable it for the FC SCSI I/O Controller Protocol Device attribute of every storage device involved in replication.


To check the current setting of the dynamic tracking attribute for a Fibre Channel device <fscsi#>, run the following command:
# lsattr -El <fscsi#>

To enable dynamic tracking for Fibre Channel device <fscsi#>:
1. Run the following command:
# chdev -l <fscsi#> -a dyntrk=yes -P

When the -P flag is used, dynamic tracking is permanently enabled the next time the host is rebooted. Make this change for all relevant devices.
2. Reboot the host.
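Both dyntrk and fc_err_recov can be set in a single chdev call per adapter. A minimal sketch, again assuming the hypothetical device fscsi0:

# chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P
# lsattr -El fscsi0 | egrep 'dyntrk|fc_err_recov'

The lsattr call verifies that the attributes now read yes and fast_fail; because of -P, the values take effect after the next reboot.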

AIX and SCSI reservation

By default, AIX hosts use SCSI reservation. Whether RecoverPoint can support SCSI reservation depends on the SCSI reservation type (SCSI-2 or SCSI-3) and the code level of the storage array. If RecoverPoint cannot support the SCSI reservation, it must be disabled on the AIX host.
SCSI-3 reservation is supported if the consistency group's Reservation support is enabled in RecoverPoint.
SCSI-2 reservation is supported with host-based splitters according to Table 2.


Table 2  RecoverPoint SCSI-2 support with AIX host-based splitters

CLARiiON Flare 26 and later
  With PowerPath, standalone host: OK
  With PowerPath, host in cluster: OK
  Without PowerPath, standalone host: Disable reservations at host
  Without PowerPath, host in cluster: Only for fabric-based or array-based splitters

CLARiiON Flare 24 and earlier
  With PowerPath, standalone host: Disable reservations at host
  With PowerPath, host in cluster: Disable reservations at host
  Without PowerPath, standalone host: Disable reservations at host
  Without PowerPath, host in cluster: Disable reservations at host

Symmetrix 5772 or later
  With PowerPath, standalone host: OK
  With PowerPath, host in cluster: OK
  Without PowerPath, standalone host: OK
  Without PowerPath, host in cluster: OK

Symmetrix 5771 or earlier
  With PowerPath, standalone host: Disable reservations at host
  With PowerPath, host in cluster: Only for fabric-based splitters
  Without PowerPath, standalone host: Disable reservations at host
  Without PowerPath, host in cluster: Only for fabric-based splitters

For third-party storage arrays, consult the EMC Support Matrix or EMC Customer Service.
The procedure for disabling SCSI reservation on AIX hosts follows.
Disabling SCSI reservations on AIX host

A RecoverPoint appliance (RPA) cannot access LUNs that have been reserved with SCSI-2 reservations. For the RPA to be able to use those LUNs during replication, AIX SCSI-2 reservation on those LUNs must be disabled; that is, the AIX disk attribute reserve_policy must be set to no_reserve. For more information on the reserve_policy attribute, search for reserve_policy at www.ibm.com. On some AIX systems, reserve_lock must be set to no instead of setting reserve_policy to no_reserve.
To check whether reservations are enabled for a storage device, run:
# lsattr -El <disk_name> | grep reserve_

Check the reserve_policy or the reserve_lock setting.
To disable AIX reservation, change the value of reserve_lock as follows:

# chdev -l <disk_name> -a reserve_lock=no

On AIX systems using MPIO, set reserve_policy=no_reserve instead of reserve_lock=no.
If chdev returns device_busy, use one of the following workarounds:
- Close all applications using the disk device. Then run:
# sync
# umount <mount_point>
# varyoffvg <volume_group>

Then run chdev again as before. Then reactivate the volume group, mount the file systems, and restart the applications:
# varyonvg <volume_group>
# mount /dev/<logical_volume_name> /<mount_point>

- Use the same command with the addition of the -P flag:
# chdev -l <disk_name> -a reserve_policy=no_reserve -P

When the -P flag is used, SCSI reservations are permanently disabled the next time the host is rebooted. Make the change for all relevant devices, then reboot the hosts.
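A worked sketch of the permanent (reboot-based) workaround, assuming a hypothetical MPIO disk hdisk2:

# lsattr -El hdisk2 | grep reserve_
# chdev -l hdisk2 -a reserve_policy=no_reserve -P
# shutdown -Fr

After the reboot, rerun the lsattr command to confirm that reserve_policy now reads no_reserve. Repeat the chdev call for every replicated disk before rebooting.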
PowerPath device mapping

If you need to know the relationship between PowerPath logical devices (as used by the Logical Volume Manager) and the hdisks (as seen by the host), use the following PowerPath command:
# powermt display dev=<powerpath_device_#>

Example:
# powermt display dev=0

The output is in the following format:


Pseudo name=hdiskpower0
Symmetrix ID=000190300519
Logical device ID=011D
state=alive; policy=SymmOpt; priority=0; queued-IOs=0
==============================================================================
---------------- Host ---------------   - Stor -  -- I/O Path -  -- Stats ---
###  HW Path              I/O Paths     Interf.   Mode    State  Q-IOs Errors
==============================================================================
   1 fscsi1               hdisk15       FA 16cA   active  alive      0      2
   1 fscsi1               hdisk21       FA  2cA   active  alive      0      2
   0 fscsi0               hdisk3        FA 16cA   active  alive      0      2
   0 fscsi0               hdisk9        FA  2cA   active  alive      0      2

In this example, hdiskpower0 consists of four other disks (one per path): hdisk15, hdisk21, hdisk3, and hdisk9.
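To list this mapping for every PowerPath device on the host at once, run:
# powermt display dev=all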


Configuring RecoverPoint
To use RecoverPoint with AIX hosts, the following tasks must be completed:
- Installing RecoverPoint appliances and configuring the RecoverPoint system
- Installing splitters on AIX servers (unless using intelligent-fabric or array-based splitters)
- Making the required device adjustments when using fabric-based splitters
- Configuring consistency groups for the AIX servers
- Performing first-time failover

Installing and configuring RecoverPoint appliances

To install RecoverPoint appliances and configure the RecoverPoint system, refer to the EMC RecoverPoint Deployment Manager Product Guide for your version of RecoverPoint.

Installing host-based splitters on AIX servers

To install host-based splitters on AIX servers, refer to the EMC RecoverPoint Deployment Manager Product Guide for your version of RecoverPoint.

Required adjustments when using fabric-based splitters with AIX servers

When using the AIX operating system, adjustments are required to the RecoverPoint system because of the way AIX uses FC IDs and Physical Volume Identifiers.

FC ID

The AIX operating system uses the Fibre Channel identifier (FC ID, also known as N_Port ID) assigned by the fabric switch as part of the device path to the storage target. Fabric-based splitters rely on changing the FC ID to reroute I/Os. As a result, the AIX operating system may not recognize a volume that it accessed previously.
The procedures in this section should be carried out, even if dynamic tracking (see "Dynamic tracking mode") is enabled, to remove old references to unavailable devices (undefined devices).


Brocade splitter:
Brocade bindings, required for the Brocade splitter, change the FC IDs of storage devices presented to the host initiators. After binding a host initiator to its storage target(s), use the following procedure to allow AIX hosts to identify storage devices with changed FC IDs:
1. Discover the devices with the new FC ID:
# cfgmgr

2. Remove the old devices from the host. The old devices will be listed as not defined:
# rmdev -dl <disk_name>

This procedure is required for all production and all replica devices.
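For example, after cfgmgr the stale entries appear in the Defined state in lsdev output; a minimal sketch for removing one of them (hdisk4 is a hypothetical name, and the output line is illustrative):

# cfgmgr
# lsdev -Cc disk | grep Defined
hdisk4 Defined 00-08-02 EMC Symmetrix FCP Disk
# rmdev -dl hdisk4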
SANTap splitter:
To avoid the need for special procedures when a volume is attached
to a SANTap-based splitter, manually make the FC ID of storage
devices persistent. Refer to the section Persistent FC ID in EMC
Deploying RecoverPoint with SANTap Technical Notes for instructions.
When the persistent FC ID is used, no additional special procedures
are required, because the FC ID is not altered.
If persistent FC ID is not used, the following procedure must be
carried out:
1. Discover the devices with new FC IDs:
# cfgmgr

2. After moving the AIX initiators to the front-end VSAN, remove


the old devices from the host; the old devices will be listed as not
defined:
# rmdev -dl <disk_name>

If the persistent FC ID is not used, this procedure is required for all


production and replica storage devices. If initiators are moved to the
back-end VSAN, it will be necessary to repeat the procedure.
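When many devices are affected, the stale Defined entries can be removed in one pass. A sketch only; it assumes every disk in the Defined state is in fact a stale entry, so review the lsdev output before running it:

# for d in $(lsdev -Cc disk | grep Defined | awk '{print $1}'); do rmdev -dl $d; done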

Physical Volume Identifier

RecoverPoint replicates at the block level. As soon as the replica storage is initialized for replication, the Physical Volume Identifier (PVID) of each production storage device is copied to the corresponding replica storage device. However, the Object Data Manager (ODM) at the replica side knows each replica device by its pre-initialization PVID, and will not recognize the device with the replicated PVID. Perform the procedure in "First-time failover" to allow the AIX hosts at the replica to update their ODM and recognize the devices with the new PVIDs.

Configuring consistency groups

Configuring consistency groups for AIX servers is identical to the procedure for other operating systems. Refer to the EMC RecoverPoint Administrator's Guide for instructions.

First-time failover

You must perform this procedure on a consistency group before you can access images or fail over the consistency group, because RecoverPoint changes the Physical Volume Identifiers of the storage devices. Carry out the following procedure after first-time initialization of a consistency group:
1. Ensure that the initialization of the consistency group has finished.
2. Enable image access. For instructions, refer to "Accessing a Replica" in the EMC RecoverPoint Administrator's Guide.
3. At the replica-side host, test the data integrity. To do so, run the following commands on the replica-side host.
a. Discover the replica devices and force AIX to reread the devices:
# cfgmgr

b. For each storage device, update the device information:
# chdev -l <disk_name> -a pv=yes

c. List the physical volume information. The Physical Volume Identifier for each listed device should be identical to that of the corresponding production device:
# lspv

d. At the replica AIX host, import the volume group:
# importvg -y <vg_name> <disk_name>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.
e. Mount the volume by using the following command:
# mount /dev/<logical_volume_name> /<mount_point>

or update the /etc/filesystems file and use the mount -a option.
f. Verify that all required data is on the volume.
4. At the replica-side host, unmount the file systems and deactivate the volume group:
# umount <mount_point(s)>
# varyoffvg <volume_group_name>

5. Disable image access.
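A worked sketch of step 3 with hypothetical names (replica disk hdisk5, volume group datavg, logical volume datalv, mount point /data):

# cfgmgr
# chdev -l hdisk5 -a pv=yes
# lspv
# importvg -y datavg hdisk5
# mount /dev/datalv /data
# ls /data
# umount /data
# varyoffvg datavg

The lspv output should show hdisk5 carrying the same PVID as the production disk; the ls stands in for whatever data-integrity check is appropriate for the application.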

Failing over and failing back

Planned failover

After performing first-time failover, subsequent failovers do not require disabling and enabling the storage devices.
1. Ensure that the first initialization for the consistency group has been completed.
2. On the production-side AIX host, stop the applications. Then flush the local devices and unmount the file systems to stop I/O to the mount points:
# sync
# umount <mount_point>
# varyoffvg <volume_group_name>

3. Use the RecoverPoint Management Application to enable image access on the replica side. Refer to the EMC RecoverPoint Administrator's Guide for instructions.
4. On the replica-side AIX host:
a. Import the volume group:
# importvg -y <volume_group_name> <disk_name>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.
b. Activate the volume group:
# varyonvg <vg_name>

c. Mount the file systems using the following command:
# mount /dev/<logical_volume_name> /<mount_point>

or update the /etc/filesystems file and use the mount -a option.
5. Start applications on the replica-side host.
6. Verify the integrity of the image. If the image is valid, fail over to it. For instructions on how to fail over, refer to the EMC RecoverPoint Administrator's Guide.
7. For instructions on how to fail back to the production side, refer to the EMC RecoverPoint Administrator's Guide.
Unplanned failover

RecoverPoint is designed for recovering from disasters and unexpected failures. Use the following procedure.
1. Use the RecoverPoint Management Application to enable image access.
2. On the replica-side AIX host:
a. Import the volume group:
# importvg -y <volume_group_name> <disk_name>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.
b. Mount the volumes by using the following command:
# mount /dev/<logical_volume_name> /<mount_point>

or update the /etc/filesystems file and use the mount -a option.
c. Start applications on the replica-side host.
3. Verify the integrity of the image. If the image is valid, initiate failover. Refer to the EMC RecoverPoint Administrator's Guide for instructions.
Failing back

For instructions to fail back to the production side, refer to the EMC RecoverPoint Administrator's Guide.

Working with AIX Virtual I/O

The Virtual I/O (VIO) Server is part of the IBM PowerVM hardware feature, formerly known as Advanced Power Virtualization. The Virtual I/O Server allows sharing of physical resources between logical partitions (LPARs), including virtual SCSI and virtual networking. This allows more efficient utilization of physical resources through sharing between LPARs and facilitates server consolidation.
For more information about the Virtual I/O Server, search at www.ibm.com for "Virtual I/O Server Advanced Power Virtualization".

RecoverPoint and Virtual I/O

Deploying RecoverPoint in a system with the Virtual I/O Server and the IBM PowerVM hardware feature requires detailed knowledge of the volumes, and an understanding of the implications and special handling of the Virtual I/O configuration. The sections that follow present the recommended practices for such a deployment, and the steps for configuring Virtual I/O in several basic RecoverPoint scenarios.
Note: Due to the nature of the Virtual I/O implementation, you may replicate a disk volume only between two Virtual I/O systems or between two non-Virtual I/O systems. RecoverPoint does not support replicating between a Virtual I/O system and a non-Virtual I/O system.

Disabling SCSI reservations when using Brocade splitter with Virtual I/O Servers

When using an AIX host with a Brocade splitter and Virtual I/O Servers, it may be necessary to permanently disable SCSI reservations to avoid unexpected behavior. When SCSI reservations are permanently disabled, any new disks discovered after Brocade binding or unbinding appear without SCSI reservations on both Virtual I/O Server LPARs. This avoids unexpected behavior in the clustered Virtual I/O Server configuration.
In a clustered Virtual I/O Server configuration with LPARs sharing access to the same devices on each Virtual I/O Server, set reserve_lock = no.
To set reserve_lock = no, download and install EMC.AIX.5.3.0.4.tar on all Virtual I/O Servers. The tar file is available for download from ftp://ftp.emc.com/pub/elab/aix/ODM_DEFINITIONS.

After installing EMC.AIX.5.3.0.4.tar, run the following script on each Virtual I/O Server:
/usr/lpp/EMC/CLARiiON/bin/emc_reserve_v1.sh

The script detects the type of devices present on the host and provides the option to set reserve_lock=no either for Symmetrix or for CLARiiON arrays. To set reserve_lock=no for both types of arrays, follow the instructions in the script and, when prompted, choose option 1.
The script reruns the bosboot command after each change to update the Predefined Attribute settings (PdAt) in the boot image.
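A minimal install-and-run sketch (the extraction directory and the exact fileset selection are assumptions; the fileset names shown match those reported in the script output below, and you should install the filesets that match the arrays in your environment):

# tar -xvf EMC.AIX.5.3.0.4.tar
# installp -ac -d . EMC.Symmetrix.fcp.rte EMC.CLARiiON.fcp.rte
# /usr/lpp/EMC/CLARiiON/bin/emc_reserve_v1.sh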
A sample script output follows:
****************
* Begin Script *
****************
*****************************************************
Dual Virtual I/O Server with shared disk configurations require the device reserve
lock to be set to 'no' before configuring to VIOC.
*****************************************************
*****************************************************
* EMC.Symmetrix.fcp.rte installed. Okay to proceed with change *
*****************************************************
1. Set reserve_lock=no in the PdAt for Symmetrix FCP devices.
2. Set reserve_lock=yes in the PdAt for Symmetrix FCP devices.
Please select one of the above (1-2) : 1
************************
* Status BEFORE update *
************************
uniquetype = "disk/fcp/SYMM_DISK_BCV"      deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID1_BCV"     deflt = "no"
uniquetype = "disk/fcp/SYMM_RDF1_BCV"      deflt = "no"
uniquetype = "disk/fcp/SYMM_RDF2_BCV"      deflt = "no"
uniquetype = "disk/fcp/SYMM_1_RDF1_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_1_RDF2_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_5_RDF1_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_5_RDF2_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID5_BCV"     deflt = "no"
uniquetype = "disk/fcp/SYMM_6_RDF1_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_6_RDF2_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID6_BCV"     deflt = "no"
uniquetype = "disk/fcp/SYMM_RDF1"          deflt = "no"
uniquetype = "disk/fcp/SYMM_RDF2"          deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID1_RDF1"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID1_RDF2"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID5_RDF1"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID5_RDF2"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID6_RDF1"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID6_RDF2"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAIDS_RDF1"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAIDS_RDF2"    deflt = "no"
uniquetype = "disk/fcp/SYMM_VRAID"         deflt = "no"
uniquetype = "disk/fcp/SYMM_DISK_THIN"     deflt = "no"
uniquetype = "disk/fcp/SYMM_DISK"          deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID1"         deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID5"         deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID6"         deflt = "no"
uniquetype = "disk/fcp/SYMM_RAIDS"         deflt = "no"
uniquetype = "disk/fcp/SYMMETRIX"          deflt = "no"
* ODM updates - reserve_lock=no - installed sucessfully
************************
* Status AFTER update *
************************
uniquetype = "disk/fcp/SYMM_DISK_BCV"      deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID1_BCV"     deflt = "no"
uniquetype = "disk/fcp/SYMM_RDF1_BCV"      deflt = "no"
uniquetype = "disk/fcp/SYMM_RDF2_BCV"      deflt = "no"
uniquetype = "disk/fcp/SYMM_1_RDF1_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_1_RDF2_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_5_RDF1_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_5_RDF2_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID5_BCV"     deflt = "no"
uniquetype = "disk/fcp/SYMM_6_RDF1_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_6_RDF2_BCV"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID6_BCV"     deflt = "no"
uniquetype = "disk/fcp/SYMM_RDF1"          deflt = "no"
uniquetype = "disk/fcp/SYMM_RDF2"          deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID1_RDF1"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID1_RDF2"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID5_RDF1"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID5_RDF2"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID6_RDF1"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID6_RDF2"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAIDS_RDF1"    deflt = "no"
uniquetype = "disk/fcp/SYMM_RAIDS_RDF2"    deflt = "no"
uniquetype = "disk/fcp/SYMM_VRAID"         deflt = "no"
uniquetype = "disk/fcp/SYMM_DISK_THIN"     deflt = "no"
uniquetype = "disk/fcp/SYMM_DISK"          deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID1"         deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID5"         deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID6"         deflt = "no"
uniquetype = "disk/fcp/SYMM_RAIDS"         deflt = "no"
uniquetype = "disk/fcp/SYMMETRIX"          deflt = "no"
***************************************
* 'savebase' verification starting... *
***************************************
*****************************************************
* 'bosboot -ad /dev/ipldevice' verification starting... *
*****************************************************
bosboot: Boot image is 43652 512 byte blocks.
DONE:
*****************************************************
*** Dual Virtual I/O Server with shared disk configurations require the device
reserve lock to be set to 'no' before configuring to VIOC. ***
*****************************************************

*****************************************************
* EMC.CLARiiON.fcp.rte installed. Okay to proceed with change *
*****************************************************
1. Set reserve_lock=no in the PdAt for CLARiiON FCP devices.
2. Set reserve_lock=yes in the PdAt for CLARiiON FCP devices.
Please select one of the above (1-2) : 1
************************
* Status BEFORE update *
************************
uniquetype = "disk/fcp/CLAR_FC_idisk"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_LUNZ"       deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid0"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid1"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid10"     deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid3"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid5"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_VRAID"      deflt = "no"
* ODM updates - reserve_lock=no - installed sucessfully
************************
* Status AFTER update *
************************
uniquetype = "disk/fcp/CLAR_FC_idisk"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_LUNZ"       deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid0"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid1"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid10"     deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid3"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid5"      deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_VRAID"      deflt = "no"
***************************************
* 'savebase' verification starting... *
***************************************
*****************************************************
* 'bosboot -ad /dev/ipldevice' verification starting... *
*****************************************************
bosboot: Boot image is 43652 512 byte blocks.

Replicating Virtual I/O

Virtual I/O is configured in the following manner:
[Figure: an AIX LPAR hosting a Virtual I/O server and Virtual I/O clients. A LUN on the storage array backs a volume group on the Virtual I/O server; vdisks carved from that volume group are mapped to the Virtual I/O clients.]

Storage: a storage array, which contains the user data.
LUN: the LUN from the storage that is to be replicated with RecoverPoint and that contains the user data, from which the virtual disks are created.
AIX LPAR: dynamic partitioning of server resources into logical partitions (LPARs), each of which can support a virtual server and multiple clients.
Virtual I/O Server: the instance in the LPAR that runs the server portion of the Virtual I/O. This instance uses the physical HBA(s) and sees the real world.
vdisk: the portion of the LUN presented to the VIO client by the VIO server.
Virtual I/O Client: the instance in the LPAR that runs the client portion of the Virtual I/O. The client does not have access to the physical devices (HBA, LUN, and so on) but instead gets a virtual device via the Virtual I/O mechanism. Its disk is of the type Virtual SCSI Disk Drive.
Volume group: the Virtual I/O server has a volume group configured on the storage LUN; vdisks from the volume group are mapped to the Virtual I/O client.
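As a concrete illustration of this layering, the production-side objects might be created on the Virtual I/O Server as follows (a sketch only; hdisk4, datavg, vdisk1, vhost0, and vtscsi_data are hypothetical names):

# mkvg -y datavg hdisk4
# mklv -y vdisk1 datavg 100
# ioscli mkvdev -vdev vdisk1 -vhost vhost0 -dev vtscsi_data

mkvg creates the volume group on the storage LUN, mklv carves a logical volume out of it to serve as the vdisk, and mkvdev (used again in the failover procedures that follow) maps that vdisk to the Virtual I/O client.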


First-time failover

The need for first-time failover is explained in "Required adjustments when using fabric-based splitters with AIX servers". After first-time initialization of a consistency group:
1. Ensure that the initialization of the consistency group has finished.
2. Enable image access. For instructions, refer to "Accessing a Replica" in the EMC RecoverPoint Administrator's Guide.
3. At the replica-side host, test the data integrity. To do so, run the following commands on the replica-side host.
a. Discover the replica devices and force AIX to reread the devices:
# cfgmgr

b. For each storage device, update the device information:
# chdev -l <disk_name> -a pv=yes

c. List the physical volume information. The Physical Volume Identifier for each listed device should be identical to that of the corresponding production device:
# lspv

d. At the replica AIX host, import the volume group:
# importvg -y <vg_name> <disk_name>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.
e. Mount the volume by using the following command:
# mount /dev/<logical_volume_name> /<mount_point>

or update the /etc/filesystems file and use the mount -a option.
f. Verify that all required data is on the volume.
4. On the replica Virtual I/O Server, map the disk or disks to the Virtual I/O clients. Use the following command:
# ioscli mkvdev -vdev <device_name> -vhost <vhost_name> -dev <virtual_lun_name>

On some AIX systems, vadapter is used instead of vhost.
5. Use the following command to verify the mapping:
# ioscli lsmap -all

6. At the replica-side host, export the volume group:
# umount <mount_point(s)>
# varyoffvg <volume_group_name>
# exportvg <volume_group_name>

7. Disable image access.
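A worked sketch of the mapping step with hypothetical names (backing device vdisk1, virtual SCSI adapter vhost0, virtual target name vrp_data):

# ioscli mkvdev -vdev vdisk1 -vhost vhost0 -dev vrp_data
# ioscli lsmap -all

The lsmap output should list vrp_data as a virtual target device (VTD) under vhost0, backed by vdisk1.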

Failover

Planned failover

After the first failover has been completed (see "First-time failover"), subsequent failovers do not require disabling and enabling the storage devices.
A planned failover is used to make the remote side the production side, allowing planned maintenance of the local side. For more information about failovers, refer to the EMC RecoverPoint Administrator's Guide.
1. Ensure that the first initialization for the consistency group has been completed.
2. On the production-side Virtual I/O Client, stop the applications. Then flush the local devices and unmount the file systems (stop I/O to the mount points):
# sync
# umount <mount_point(s)>
# varyoffvg <volume_group_name>

3. Use the RecoverPoint Management Application to enable image access on the replica side. Refer to the EMC RecoverPoint Administrator's Guide for instructions.
4. On the replica-side AIX host:
a. Import the volume group:
# importvg -y <volume_group_name> <disk_name>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.
b. Mount the file systems using the following command:
# mount /dev/<logical_volume_name> /<mount_point>

or update the /etc/filesystems file and use the mount -a option.
5. On the replica-side Virtual I/O Server, map the disks to the Virtual I/O Client:
# ioscli mkvdev -vdev <device_name> -vhost <vhost_name> -dev <virtual_lun_name>

On some AIX systems, vadapter is used instead of vhost.
6. Use the following command to verify the mapping:
# ioscli lsmap -all

7. Run the application on the replica-side LPAR (logical partition).
8. Verify the integrity of the image. If the image is valid, initiate failover. For instructions on how to fail over, refer to the EMC RecoverPoint Administrator's Guide.
9. For instructions on how to fail back to the production side, refer to the EMC RecoverPoint Administrator's Guide.
Failing back

For instructions to fail back to the production side, refer to the EMC RecoverPoint Administrator's Guide.

Unplanned failover

RecoverPoint is designed for recovering from disasters and unexpected failures. Use the following procedure.
1. Use the RecoverPoint Management Application to access an image.
2. On the replica-side Virtual I/O Server:
a. Import the volume group:
# importvg -y <volume_group_name> <virtual_disk_name>

Even if the volume group contains multiple disks, it is sufficient to specify just one; all of the other disks in the volume group will be imported automatically using the information from the one specified.
b. Activate the volume group:
# varyonvg <vg_name>

c. Mount the volumes by using the following command:
# mount /dev/<logical_volume_name> /<mount_point>

or update the /etc/filesystems file and use the mount -a option.
3. On the replica-side Virtual I/O Server, map the disks to the Virtual I/O Client:
# ioscli mkvdev -vdev <device_name> -vhost <vhost_name> -dev <virtual_lun_name>

On some AIX systems, vadapter is used instead of vhost.
4. Use the following command to verify the mapping:
# ioscli lsmap -all

5. On the replica Virtual I/O client, rediscover the disks:
# cfgmgr

6. Run the application on the replica-side LPAR (logical partition).
7. Verify the integrity of the image. If the image is valid, initiate failover. Refer to the EMC RecoverPoint Administrator's Guide for instructions.

Failing back

For instructions to fail back to the production side, refer to the EMC RecoverPoint Administrator's Guide.

Working with HACMP

HACMP overview

HACMP (High Availability Cluster Multi-Processing) is a host cluster system that runs on the IBM AIX operating system.

First-time failover, failing over, and failing back

The procedures for first-time failover, failing over, and failing back HACMP clusters are identical to the procedures for stand-alone hosts, except for the commands for taking resources offline and bringing them online at the active side.
Use the following command to take resources offline (it also syncs, unmounts, and varies off the volume group):
# clRGmove -s false -d -i -g <resource_group_name> -n <node_name>

Use the following command to bring resources online (it also varies on the volume group and mounts the file systems):
# clRGmove -s false -u -i -g <resource_group_name> -n <node_name>
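For example, in a planned failover you might take the resource group offline at the production-side node and, after enabling image access, bring it online at the replica-side node (a sketch; app_rg, prodnode, and drnode are hypothetical names):

# clRGmove -s false -d -i -g app_rg -n prodnode
# clRGmove -s false -u -i -g app_rg -n drnode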

Copyright © 2011 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable
software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
All other trademarks used herein are the property of their respective owners.
