Technical Notes
P/N 300-004-905
Rev A02
June 27, 2011
Contents

Introduction
Environmental prerequisites
Configuring consistency groups
Working with AIX Virtual I/O
Working with HACMP
Revision history

Table 1 shows the revision history for this document.

Table 1  Revision history

Revision   Date             Description
A02        June 27, 2011
A01                         First release
Introduction
The EMC RecoverPoint system provides full support for data
replication and disaster recovery with AIX-based host servers.
This document presents information and best practices relating to
deploying the RecoverPoint system with AIX hosts.
RecoverPoint can support AIX hosts with host-based, fabric-based,
and array-based splitters. The outline of the installation tasks is
similar for all splitter types and for installation with any other
host, but the specific procedures differ slightly. The work flow of
the installation tasks for all splitter types is:
1. Creating and presenting volumes to their respective hosts at each
location; LUN masking and zoning host initiators to storage
targets (not covered in this document).
2. Creating the file system or setting raw access on the host.
3. Configuring RecoverPoint to replicate volumes (LUNs):
configuring volumes, replication sets, and consistency group
policies; attaching to splitters; and first-time initialization (full
synchronization) of the volumes.
4. Validating failover and failback.
Environmental prerequisites
This document assumes you have already installed a RecoverPoint
system and are either replicating volumes or ready to replicate. In
other words, it is assumed that the RecoverPoint ISO image is
installed on each RecoverPoint appliance (RPA); that initial
configuration, zoning, and LUN masking are completed; and that the
license is activated. In addition, it is assumed that AIX hosts with all
necessary patches are installed both at the production side and the
replica side. In the advanced procedures (LPAR-VIO, HACMP, etc.),
it is assumed that the appropriate hardware and environment are set
up.
Ensuring fast_fail mode for AIX-based hosts
For AIX hosts using host-based splitters, the replicated devices'
Fibre Channel SCSI I/O Controller Protocol Device attribute
(fc_err_recov) must be set to fast_fail. The default setting is
delayed_fail.
Fast_fail logic is called when the Fibre Channel adapter driver detects
a link event, such as a lost link between a storage device and a switch.
If that happens, the Fibre Channel adapter waits a short amount of
time, approximately 15 seconds, to allow the fabric to stabilize. If the
Fibre Channel adapter driver detects that the device is not on the
fabric at that point, it begins to fail all I/Os at the adapter driver. A
new I/O or future retries of the failed I/Os are failed immediately by
the adapter until the adapter driver detects that the device has
rejoined the fabric.
Although the fast_fail setting is not mandatory for fabric-based or
storage-based splitters, it is required by some multipathing software;
check your multipathing software's user manual. In environments
where multipathing software is in use, enabling fast_fail is the best
practice. Enabling fast_fail can decrease I/O failure times caused by
loss of the link between a storage device and the switch, allowing
faster failover to alternative paths.
To check the current setting of the attribute, run:
# lsattr -El <FC_SCSI_I/O_Controller_Protocol_Device>
For example:
# lsattr -El fscsi0
1. Run:
# chdev -l <FC_SCSI_I/O_Controller_Protocol_Device> -a fc_err_recov=fast_fail -P
The -P flag makes the change permanent, so that fast_fail remains
enabled after the next host reboot.
2. Reboot the host machine.
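The check-and-enable steps above can be staged as a small dry-run script that prints the chdev command for each adapter instead of executing it, so the commands can be reviewed first. The adapter names fscsi0 and fscsi1 are examples only; on a real host you would enumerate the FC SCSI protocol devices (for example, with lsdev) before running the generated commands.

```shell
#!/bin/sh
# Dry-run sketch: print the chdev command that enables fast_fail for each
# FC SCSI protocol device. Adapter names below are examples, not discovered.
for dev in fscsi0 fscsi1; do
    cmd="chdev -l $dev -a fc_err_recov=fast_fail -P"
    echo "$cmd"
done
```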
Dynamic tracking mode
Table 2 shows when SCSI reservations must be disabled at the host.
("Disable at host" means disable SCSI reservations at the host.)

Table 2  SCSI reservation handling

Storage                         With PowerPath                     Without PowerPath
                                Standalone host  Host in cluster   Standalone host  Host in cluster
CLARiiON Flare 26 and later     OK               OK                OK               Disable at host
CLARiiON Flare 24 and earlier   Disable at host  Disable at host   Disable at host  Disable at host
Symmetrix 5772 or later         OK               OK                OK               Disable at host
Symmetrix 5771 or earlier       Disable at host  Disable at host   Disable at host  Disable at host
Then run chdev again as before. Then reactivate the volume groups
and mount the file systems as follows:
# varyonvg <volume_group>
# mount /dev/<logical_volume_name> /<mount_point>
Restart applications.
To make the setting persistent across reboots, use the same command
with the addition of the -P flag:
# chdev -l <disk_name> -a reserve_policy=no_reserve -P
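The reservation change above can likewise be staged as a dry run that prints one chdev command per disk. The hdisk names below are placeholders; the real list would come from the disks in your replication sets.

```shell
#!/bin/sh
# Dry-run sketch: print the chdev command that persistently (-P) disables
# SCSI reservations for each replicated disk. Disk names are placeholders.
for disk in hdisk3 hdisk9 hdisk21; do
    cmd="chdev -l $disk -a reserve_policy=no_reserve -P"
    echo "$cmd"
done
```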
Example:
# powermt display dev=0
The output lists the pseudo-device's native hdisk paths and the fscsi
adapters through which each path is reached.
Configuring RecoverPoint
To use RecoverPoint with AIX hosts, the following tasks must be
completed:
1. Installing and configuring RecoverPoint appliances
2. Installing host-based splitters on AIX servers
3. Required adjustments when using fabric-based splitters with AIX
servers
FC ID
The AIX operating system uses the Fibre Channel identifier (FC ID,
also known as N_Port ID) assigned by the fabric switch as part of the
device path to the storage target. Fabric-based splitters rely on
changing the FC ID to reroute I/Os. As a result, the AIX operating
system may not recognize a volume that it accessed previously.
The procedures in this section should be carried out, even if dynamic
tracking (see "Dynamic tracking mode") is enabled, to remove old
references to unavailable (undefined) devices.
Brocade splitter:
Brocade bindings, required for the Brocade splitter, change the FC IDs
of storage devices presented to the host initiators. After binding a
host initiator to its storage target(s), use the following procedure to
allow AIX hosts to identify storage devices with changed FC ID:
1. Discover the devices with the new FC ID:
# cfgmgr
2. Remove the old devices from the host. The old devices will be
listed as not defined:
# rmdev -dl <disk_name>
This procedure is required for all production and all replica
devices.
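The cleanup in step 2 can be scripted by filtering lsdev-style output for stale entries, which appear in the Defined state after cfgmgr discovers their replacements. The sample output embedded below is illustrative only; on a real host you would pipe the output of `lsdev -Cc disk` directly into the filter.

```shell
#!/bin/sh
# Sketch: emit an "rmdev -dl" command for every disk that lsdev reports in
# the Defined (stale) state. The sample lsdev output below is made up.
lsdev_output='hdisk3 Available 02-08-02 EMC Symmetrix FCP RAID1
hdisk9 Defined 02-08-02 EMC Symmetrix FCP RAID1
hdisk21 Defined 04-08-02 EMC Symmetrix FCP RAID1'

# Keep only Defined entries and turn each into a removal command.
stale=$(printf '%s\n' "$lsdev_output" | awk '$2 == "Defined" { print "rmdev -dl " $1 }')
printf '%s\n' "$stale"
```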
SANTap splitter:
To avoid the need for special procedures when a volume is attached
to a SANTap-based splitter, manually make the FC ID of storage
devices persistent. Refer to the section Persistent FC ID in EMC
Deploying RecoverPoint with SANTap Technical Notes for instructions.
When the persistent FC ID is used, no additional special procedures
are required, because the FC ID is not altered.
If persistent FC ID is not used, the following procedure must be
carried out:
1. Discover the devices with new FC IDs:
# cfgmgr
Physical Volume Identifier
AIX hosts at the replica know the replica device by its PVID from
before initialization; they will not recognize the device with the
replicated PVID. Perform the procedure in "First-time failover" to
allow the AIX hosts at the replica to update their Object Data
Manager (ODM) and recognize the devices with new PVIDs.
Configuring consistency groups
First-time failover
For instructions on failing back to the production side, refer to the
EMC RecoverPoint Administrator's Guide.
RecoverPoint and Virtual I/O

Disabling SCSI reservations when using Brocade splitter with Virtual
I/O Servers
When using an AIX host with a Brocade splitter and Virtual I/O
Servers, it may be necessary to permanently disable SCSI reservations
to avoid unexpected behavior. When SCSI reservations are
permanently disabled, any new disks discovered after Brocade
binding or unbinding appear without SCSI reservations on both
Virtual I/O Server LPARs. This avoids unexpected behavior in the
clustered Virtual I/O Server configuration.
In a clustered Virtual I/O Server configuration with LPARs sharing
access to the same devices on each Virtual I/O Server, set
reserve_lock = no.
To set reserve_lock = no, download and install EMC.AIX.5.3.0.4.tar
on all Virtual I/O Servers. The tar file is available for download from
ftp://ftp.emc.com/pub/elab/aix/ODM_DEFINITIONS.
This script detects the type of devices present on the host and
provides the option to set reserve_lock=no either for Symmetrix or
for CLARiiON arrays. To set reserve_lock=no for both types of
arrays, follow the instructions in the script and when prompted,
choose option 1.
The script reruns the bosboot command after each change to update
the Predefined Attribute settings (PdAt) in the boot image.
A sample script output follows:
****************
* Begin Script *
****************
*****************************************************
Dual Virtual I/O Server with shared disk configurations require the device reserve
lock to be set to 'no' before configuring to VIOC.
*****************************************************
*****************************************************
* EMC.Symmetrix.fcp.rte installed. Okay to proceed with change *
*****************************************************
uniquetype = "disk/fcp/SYMM_DISK_BCV"
deflt = "no"
uniquetype = "disk/fcp/SYMM_RAID1_BCV"
deflt = "no"
[... one uniquetype/deflt = "no" pair for each of the remaining Symmetrix
device types: SYMM_RDF1_BCV, SYMM_RDF2_BCV, SYMM_1_RDF1_BCV,
SYMM_1_RDF2_BCV, SYMM_5_RDF1_BCV, SYMM_5_RDF2_BCV, SYMM_RAID5_BCV,
SYMM_6_RDF1_BCV, SYMM_6_RDF2_BCV, SYMM_RAID6_BCV, SYMM_RDF1, SYMM_RDF2,
SYMM_RAID1_RDF1, SYMM_RAID1_RDF2, SYMM_RAID5_RDF1, SYMM_RAID5_RDF2,
SYMM_RAID6_RDF1, SYMM_RAID6_RDF2, SYMM_RAIDS_RDF1, SYMM_RAIDS_RDF2,
SYMM_VRAID, SYMM_DISK_THIN, SYMM_DISK, SYMM_RAID1, SYMM_RAID5,
SYMM_RAID6, SYMM_RAIDS, and SYMMETRIX ...]
*
* ODM updates - reserve_lock=no - installed sucessfully
************************
* Status AFTER update *
************************
[... the same Symmetrix uniquetype/deflt pairs, all showing deflt = "no" ...]
***************************************
* 'savebase' verification starting... *
***************************************
*****************************************************
* 'bosboot -ad /dev/ipldevice' verification starting... *
*****************************************************
*****************************************************
* EMC.CLARiiON.fcp.rte installed. Okay to proceed with change *
*****************************************************
uniquetype = "disk/fcp/CLAR_FC_idisk"
deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_LUNZ"
deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid0"
deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid1"
deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid10"
deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid3"
deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_raid5"
deflt = "no"
uniquetype = "disk/fcp/CLAR_FC_VRAID"
deflt = "no"
* ODM updates - reserve_lock=no - installed sucessfully
************************
* Status AFTER update *
************************
[... the same CLARiiON uniquetype/deflt pairs, all showing deflt = "no" ...]
***************************************
* 'savebase' verification starting... *
***************************************
*****************************************************
* 'bosboot -ad /dev/ipldevice' verification starting... *
*****************************************************
bosboot: Boot image is 43652 512 byte blocks.
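As a follow-up check (an assumption on our part, not part of the EMC script's documented usage), the updated defaults can be queried from the ODM Predefined Attribute object class with odmget. This sketch only prints the queries rather than executing them; the two device types shown are taken from the sample output above.

```shell
#!/bin/sh
# Sketch: print an odmget query per device type to verify that the
# reserve_lock default is now "no". Queries are printed, not executed.
for ut in "disk/fcp/SYMMETRIX" "disk/fcp/CLAR_FC_VRAID"; do
    q="odmget -q \"uniquetype=$ut and attribute=reserve_lock\" PdAt"
    echo "$q"
done
```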
Replicating Virtual I/O

[Figure: Virtual I/O replication topology. Storage LUNs are presented
to a Virtual I/O Server, which exports them as virtual disks (vdisks)
to Virtual I/O client AIX LPARs; volume groups are built on the
virtual disks at each site.]
First-time failover
Failover
Planned failover
For instructions on failing back to the production side, refer to the
EMC RecoverPoint Administrator's Guide.

RecoverPoint is designed to recover from disasters and unexpected
failures. Use the following procedure.
1. At the RecoverPoint Management Application, access an image.
2. At the replica-side Virtual I/O Server, import the volume group:
a. Import the volume group:
# importvg -y <volume_group_name> <virtual_disk_name>
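The replica-side recovery sequence (the importvg step above, plus the varyonvg and mount steps shown earlier for stand-alone hosts) can be staged as a dry run that prints each command for review. All names below are placeholders.

```shell
#!/bin/sh
# Dry-run sketch of the replica-side recovery sequence. The volume group,
# virtual disk, logical volume, and mount point names are placeholders.
vg=app_vg; vdisk=hdisk9; lv=app_lv; mnt=/app
for cmd in \
    "importvg -y $vg $vdisk" \
    "varyonvg $vg" \
    "mount /dev/$lv $mnt"
do
    echo "$cmd"
done
```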
For instructions on failing back to the production side, refer to the
EMC RecoverPoint Administrator's Guide.
First-time failover, failing over, and failing back
The procedures for first-time failover, failing over, and failing back
HACMP clusters are identical to the procedures for stand-alone
hosts, except for the commands for taking resources off-line and
bringing them on-line at the active side.
Use the following command to take resources off-line:
# clRGmove -s false -d -i -g <resource_group_name> -n <node_name>
This command also syncs, unmounts, and varies off the volume group.

Use the following command to bring resources on-line:
# clRGmove -s false -u -i -g <resource_group_name> -n <node_name>
This command also varies on the volume group and mounts the file
systems.
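The two cluster commands can be wrapped in small helper functions for scripting. The functions below only print the clRGmove invocations so they can be reviewed before running; the resource-group and node names are placeholders.

```shell
#!/bin/sh
# Sketch: helpers that print the HACMP clRGmove commands for taking a
# resource group off-line (-d) and bringing it on-line (-u).
rg_offline() { echo "clRGmove -s false -d -i -g $1 -n $2"; }
rg_online()  { echo "clRGmove -s false -u -i -g $1 -n $2"; }

rg_offline app_rg node_a   # take resources off-line on the production node
rg_online  app_rg node_b   # bring resources on-line on the standby node
```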