
HPE 3PAR AIX and IBM Virtual I/O

Server Implementation Guide

Abstract
This implementation guide provides information for establishing communications between an HPE 3PAR StoreServ Storage
and AIX 7.2, AIX 7.1, AIX 6.1, AIX 5.3, or IBM Virtual I/O Server platforms. General information is also provided on the basic
steps required to allocate storage on the 3PAR StoreServ Storage that can then be accessed by the AIX or IBM Virtual I/O
Server host.

Part Number: QL226-99086


Published: February 2016

Copyright 2012, 2016 Hewlett Packard Enterprise Development LP


The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services
are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting
an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
Confidential computer software. Valid license from Hewlett Packard Enterprise required for possession, use, or copying. Consistent with FAR
12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed
to the U.S. Government under vendor's standard commercial license.
Links to third-party websites take you outside the Hewlett Packard Enterprise website. Hewlett Packard Enterprise has no control over and is not
responsible for information outside the Hewlett Packard Enterprise website.

Contents
1 Introduction..........................................................................................................5
Audience...............................................................................................................................................5
Supported Configurations.....................................................................................................................5
3PAR OS Upgrade Considerations.......................................................................................................5
3PAR Documentation...........................................................................................................................6

2 Configuring the 3PAR StoreServ Storage for FC................................................7


Configuring the 3PAR StoreServ Storage Port Running 3PAR OS 3.2.x or 3.1.x ...............................7
Configuring Ports on the 3PAR StoreServ Storage for a Direct Connection...................................8
Configuring Ports on the 3PAR StoreServ Storage for Fabric Connection.....................................9
Creating the Host Definition for FC...............................................................................................10
Connecting the 3PAR StoreServ Storage to the Host........................................................................12
Setting Up and Zoning the Fabric.......................................................................................................12
FC Smart SAN...............................................................................................................................14
3PAR Coexistence.........................................................................................................................14
Configuration Guidelines for FC Switch Vendors..........................................................................15
Target Port Limits and Specifications for FC.................................................................................16
3PAR Priority Optimization for FC.................................................................................................17
OS Specific Expected Behavior.....................................................................................................17
3PAR Persistent Ports for FC........................................................................................................17
3PAR Persistent Ports Setup and Connectivity Guidelines for FC...........................................18
HPE 3PAR Persistent Ports Limitations...................................................................................18
3PAR Express Writes..........................................................................................18

3 Connecting the Host with FC.............................................................................19


Checking the Host for the Current OS Version...................................................................................19
Installing the IBM FC HBA..................................................................................................................19
Setting up the IBM FC HBA for use with the 3PAR StoreServ Storage........................................19
Displaying Firmware and Driver Versions for the IBM FC HBA.....................................................20
Displaying the IBM FC HBA WWNs..............................................................................................21
Auto-detecting Topology................................................................................................................21
Setting Host FC HBA Parameters DynamicTracking and FastFail................................................21
Installing 3PAR ODM for AIX MPIO on the AIX Server (Local Boot Drive) when Using 3PAR OS
3.2.x or 3.1.x.......................................................................................................................................22
Installing 3PAR ODM for AIX MPIO...............................................................................................22
Displaying the Path Status After Installing 3PAR ODM.................................................................24
Additional Modules Available with 3PAR ODM for AIX..................................................................25
Additional 3PAR ODM Settings.....................................................................................................26
Load Balancing Policies................................................................................................................26
3PAR ODM 3.1.1.0 Update for AIX MPIO..........................................................................................26
Installing the 3PAR ODM 3.1.0.0 to use with Veritas..........................................................................28
Installing the Veritas DMP Multipathing Modules...............................................................................29
Configuring the Veritas DMP Multipathing.....................................................................................29
Connecting the Host with an FC Reservation Policy.....................................................................29

4 Allocating Storage for Access by the AIX or IBM Virtual I/O Server Host.........31
Creating Storage on the 3PAR StoreServ Storage.............................................................................31
Creating Virtual Volumes...............................................................................................................31
Creating Thinly Provisioned Virtual Volumes.................................................................................32
Creating Thinly Deduplicated Virtual Volumes..............................................................................32
Exporting LUNs to the Host...........................................................................................................32
Exporting VLUNs to the AIX or IBM Virtual I/O Server Host...............................................................33
Restrictions on Volume Size and Number.....................................................................................33
Scanning for New Devices on an AIX or IBM Virtual I/O Server Host...........................................34

Creating Virtual SCSI Devices for Connected LPARs........................................................................35


Growing VV Exported to AIX LPARs..................................................................................................39

5 Removing 3PAR Devices on an AIX or IBM Virtual I/O Server Host.................42


Removing FC Connected Devices on the Host..................................................................................42
Removing FC Devices on the host for the AIX host......................................................................42
Removing FC Devices on the host for the IBM Virtual I/O Server.................................................43
Removing FC Devices on the 3PAR StoreServ Storage....................................................................44
Removing FC Devices on the Storage for the AIX host................................................................44
Removing FC Devices on the Storage for the IBM Virtual I/O Server...........................................46
Removing the 3PAR MPIO for AIX Software......................................................................................47

6 Using IBM HACMP 5.5 with AIX........................................................................49


Installing IBM HACMP........................................................................................................................49
HACMP Parameters for 3PAR Storage..............................................................................................49

7 Using IBM PowerHA 7.1 and PowerHA 6.1 with AIX........................................50


Installing IBM PowerHA 7.1 or PowerHA 6.1......................................................................................50
PowerHA 7.1 and PowerHA 6.1 Parameters for 3PAR Storage.........................................................50

8 Booting from the 3PAR StoreServ Storage.......................................................51


Setting the Host FC HBA Parameters for a SAN Boot.......................................................................51
Assigning LUNs as the Boot Volume..................................................................................................51
Installing the AIX or IBM Virtual I/O Server Host OS for a SAN Boot.................................................51

9 Configuring File Services Persona....................................................................55


3PAR File Persona.............................................................................................................................55

10 Using Veritas Cluster Server with AIX Hosts...................................................56


11 Using Symantec Storage Foundation..............................................................57
Using Persistent Ports with Storage Foundation................................................................................57

12 AIX Client Path Failure Detection and Recovery.............................................58


AIX Client Automatic Path Failure Detection and Recovery...............................................................58
Setting Auto Path Failure Detection and Recovery............................................................................58
Direct Connect AIX Client Considerations..........................................................................................59

13 Migrating the IBM Virtual I/O Server................................................................60


VIOS Migration Using the IBM Migration DVD...................................................................................60
Requirements for Migrating VIOS.......................................................................................................60
Migrating from Previous VIOS Versions.............................................................................................60
Completing the VIOS Migration..........................................................................................................61

14 Cabling for IBM Virtual I/O Server Configurations...........................................62


Cabling and Configuration for Fabric Configurations (Dual VIO)........................................................62
Cabling and Configuration for Direct Connect Configurations (Dual VIO)..........................................63

15 PowerVM Live Partition Mobility......................................................................64


16 Support and other resources...........................................................................66
Accessing Hewlett Packard Enterprise Support.................................................................................66
Accessing updates..............................................................................................................................66
Websites.............................................................................................................................................67
Customer self repair...........................................................................................................................67
Remote support..................................................................................................................................67
Documentation feedback....................................................................................................................67

Index.....................................................................................................................68


1 Introduction
This implementation guide provides information for establishing communications between an
HPE 3PAR StoreServ Storage and AIX 7.2, AIX 7.1, AIX 6.1, AIX 5.3 platforms or an IBM Virtual
I/O Server (VIOS). General information is also provided on the basic steps required to allocate
storage on the 3PAR StoreServ Storage that can then be accessed by the AIX or IBM Virtual I/O
Server host.
NOTE:

For predictable performance and results with the 3PAR StoreServ Storage, the information
in this guide must be used in concert with the documentation provided by Hewlett Packard
Enterprise for the 3PAR StoreServ Storage and the documentation provided by OS, Host
Bus Adapter (HBA), and switch vendors for their respective products.

In addition to the OS patches mentioned in this guide, there might be additional patches
referenced at the Storage Single Point of Connectivity Knowledge (SPOCK) website.

For information about supported hardware and software platforms, see SPOCK (from SPOCK
Home under Explore Storage Interoperability With SPOCK, select Explore HPE 3PAR
StoreServ Storage interoperability):
http://www.hpe.com/storage/spock

Audience
This implementation guide is intended for system and storage administrators who monitor and
direct system configurations and resource allocation for the 3PAR StoreServ Storage.
The tasks described in this guide assume that the administrator is familiar with AIX, IBM Virtual
I/O Server, and the 3PAR OS.

Supported Configurations
FC connections are supported between the 3PAR OS and the AIX host in both a fabric-attached
and direct-connect topology.
NOTE:

iSCSI connections are not supported with AIX.

3PAR OS Upgrade Considerations


This implementation guide refers to new installations. For information about planning an online
3PAR OS upgrade, see the HPE 3PAR Operating System Upgrade Pre-Planning Guide at the
Hewlett Packard Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs
For complete details about supported host configurations and interoperability, see SPOCK (from
SPOCK Home under Explore Storage Interoperability With SPOCK, select Explore HPE 3PAR
StoreServ Storage interoperability):
http://www.hpe.com/storage/spock


3PAR Documentation
For the current version of this implementation guide and additional 3PAR storage documentation, see the Hewlett Packard Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs
By default, HPE 3PAR Storage is selected under Products and Solutions.

For supported hardware and software platforms, see the Single Point of Connectivity Knowledge (SPOCK) for Hewlett Packard Enterprise Storage Products website:
http://www.hpe.com/storage/spock

For Customer Self Repair procedures (media), see the Hewlett Packard Enterprise Customer Self Repair Services Media Library:
http://www.hpe.com/support/csr
Under Product category, select Storage. Under Product family, select 3PAR StoreServ Storage for HPE 3PAR StoreServ 7000, 8000, 10000, and 20000 Storage systems.

For all Hewlett Packard Enterprise products, see the Hewlett Packard Enterprise Support Center:
http://www.hpe.com/support/hpesc


2 Configuring the 3PAR StoreServ Storage for FC


NOTE: Hewlett Packard Enterprise recommends using default values to configure your host
unless otherwise specified in the following procedures.
This chapter describes how to establish a connection between a 3PAR StoreServ Storage and
an AIX or IBM Virtual I/O Server host using FC and how to set up the fabric when running 3PAR
OS 3.2.x or 3.1.x. For information on setting up the physical connection for a particular storage
system, see the appropriate Hewlett Packard Enterprise installation manual.
REQUIRED:
If you are setting up a fabric along with your installation of the 3PAR StoreServ Storage, see
Setting Up and Zoning the Fabric (page 12) before configuring or connecting your 3PAR
StoreServ Storage.

Configuring the 3PAR StoreServ Storage Port Running 3PAR OS 3.2.x or 3.1.x
Before connecting the 3PAR StoreServ Storage port to the host, configure the ports using the
procedures in the following sections.
This section does not apply when deploying HPE Virtual Connect direct-attach FC storage for
3PAR StoreServ Storage systems, where the 3PAR StoreServ Storage ports are cabled directly
to the uplink ports on the Virtual Connect FlexFabric 10 Gb/24-port Module for c-Class
BladeSystem. Zoning is automatically configured based on the Virtual Connect SAN Fabric and
server profile definitions.
For more information about Virtual Connect, Virtual Connect interconnect modules, and the
Virtual Connect Direct-Attach feature, see Virtual Connect documentation. To obtain this
documentation, search the Hewlett Packard Enterprise Support Center website:
http://www.hpe.com/support/hpesc
See the HPE SAN Design Reference Guide at SPOCK (from SPOCK Home under Design
Guides, select SAN Design Guide):
http://www.hpe.com/storage/spock


Configuring Ports on the 3PAR StoreServ Storage for a Direct Connection


To configure 3PAR StoreServ Storage ports for a direct connection to the AIX or IBM Virtual
I/O Server host, complete the following procedure.
NOTE: Direct connect is not supported by the 16 Gb FC 3PAR StoreServ Storage target
interface.
To identify the type of connection, use the showport -par command to display the ConnType
for each port:

loop denotes a direct connection

point denotes a fabric connection

cli % showport -par
N:S:P Connmode ConnType CfgRate MaxRate Class2   UniqNodeWwn VCN      IntCoal  TMWO    Smart_SAN
0:1:1 host     loop     auto    8Gbps   disabled disabled    disabled disabled enabled n/a
1:1:1 host     point    auto    8Gbps   disabled disabled    disabled disabled enabled n/a

With a Direct Connection to an 8 Gb FC 3PAR StoreServ Storage Target Interface


An 8 Gb FC 3PAR Interface is supported in direct connect on 3PAR OS 3.2.x or 3.1.x and should be configured in
Fibre Channel Arbitrated Loop topology mode. Run the following 3PAR CLI commands with the appropriate parameters
for each direct connect port:
1. Take the port offline using the controlport offline <node:slot:port> command. For example:
cli % controlport offline 0:1:1
2. Run the controlport config host -ct loop <node:slot:port> command, where -ct loop specifies
a direct connection.
cli % controlport config host -ct loop 0:1:1
3. Reset the port by using the controlport rst <node:slot:port> command on the 3PAR StoreServ Storage.
For example:
cli % controlport rst 0:1:1
After all the ports are configured, verify that they are configured for a host in a direct connection by using the
showport -par command on the 3PAR StoreServ Storage.
cli % showport -par
N:S:P Connmode ConnType CfgRate MaxRate Class2   UniqNodeWwn VCN      IntCoal  TMWO
0:1:1 host     loop     auto    8Gbps   disabled disabled    disabled disabled enabled

Configuring Ports on the 3PAR StoreServ Storage for Fabric Connection


To configure 3PAR StoreServ Storage ports for fabric connections on the 3PAR CLI, use the
following procedure. Complete this procedure for each port.
1. Check if a port is configured for a host port in fabric mode by using the 3PAR CLI showport
-par command on the 3PAR StoreServ Storage.
If the connection type ConnType value is point, the port is already configured for a fabric
connection. If the ConnType value is loop, the port is a direct connection and has not been
configured for a fabric connection.
cli % showport -par
N:S:P Connmode ConnType CfgRate MaxRate Class2   UniqNodeWwn VCN      IntCoal
0:4:1 host     point    auto    8Gbps   disabled disabled    disabled enabled

2. If the port has not been configured, take the port offline before configuring it for connection
to a host.
CAUTION: Before taking a port offline in preparation for a fabric connection, verify that it
was not previously defined and that it is not connected to a host, because this would interrupt
the existing host connection. If a 3PAR StoreServ Storage port is already configured for a
fabric connection, skip this step.
To take the port offline, run the controlport offline <node:slot:port> command
on the 3PAR StoreServ Storage. For example:
cli % controlport offline 1:5:1
3. To configure the port to the host, run the controlport config host -ct point
<node:slot:port> command on the 3PAR StoreServ Storage, where -ct point
indicates that the connection type is a fabric connection. For example:
cli % controlport config host -ct point 1:5:1
4. Reset the port by using the controlport rst <node:slot:port> command on the
3PAR StoreServ Storage. For example:
cli % controlport rst 1:5:1
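
After completing these steps for each port, the change can be confirmed by rerunning showport -par; a sketch of the expected result (values illustrative, columns abridged):

cli % showport -par
N:S:P Connmode ConnType CfgRate MaxRate ...
1:5:1 host     point    auto    8Gbps   ...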


Creating the Host Definition for FC


Before connecting the AIX or IBM Virtual I/O Server host to the 3PAR StoreServ Storage, create
a host definition that specifies a valid host persona on the 3PAR StoreServ Storage that is to be
connected to a host HBA port through a fabric or a direct connection. AIX and the Virtual I/O
Server use the host persona 8 for both the QLogic and Emulex HBAs. The following procedure
shows how to create the host definition.
NOTE: When the LUNs are presented to an AIX VIO client using NPIV, the AIX host OS
(persona 8) should be used. In the case of LUNs presented to the IBM Virtual I/O Server and
then presented to the AIX client using vSCSI, IBM Virtual I/O Server host OS (persona 8) should
be used. Both AIX and IBM Virtual I/O Server host OS types use persona 8, which means, from
a host perspective, that there is no difference in behavior from the array. The IBM Virtual I/O
Server host OS can be used as an administrative tool to keep track of which LUNs are presented
to the IBM Virtual I/O Server, and which are presented to the AIX VIO clients directly.
Creating the Host Definition:
NOTE: See the HPE 3PAR Command Line Interface Reference or the HPE 3PAR StoreServ
Management Console Users Guide for complete details about using the controlport,
createhost, and showhost commands.
These documents are available at the Hewlett Packard Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs


NOTE: Starting with 3PAR OS 3.2.2, the createhost command has been enhanced with
the -port option, which automatically creates zones on the switch or fabric if the target port is
a 16 Gb FC target and the switch supports enhanced zoning. See FC Smart SAN (page 14).
1. To create host definitions on the 3PAR StoreServ Storage from the 3PAR CLI:
cli % createhost [options] <hostname> [<WWN>...]
AIX host example:
cli % createhost -persona 8 AIXhost 1122334455667788 1122334455667799
IBM Virtual I/O Server example:
cli % createhost -persona 8 VIOS 1122334455667788 1122334455667799

2. To verify that the host was created, run the showhost command.
AIX example:
cli % showhost
Id Name    Persona    -WWN/iSCSI_Name- Port
 2 AIXhost AIX-legacy 1122334455667788 ---
                      1122334455667799 ---
IBM Virtual I/O Server example:
cli % showhost
Id Name Persona    -WWN/iSCSI_Name- Port
 6 VIOS AIX-legacy 1122334455667788 ---
                   1122334455667799 ---


Connecting the 3PAR StoreServ Storage to the Host


During this stage, connect the 3PAR StoreServ Storage to the host directly or to the fabric. These
procedures include physically cabling the 3PAR StoreServ Storage to the host or fabric.

Setting Up and Zoning the Fabric


NOTE: This section does not apply when deploying Virtual Connect Direct-Attach Fibre Channel
storage for 3PAR StoreServ Storage systems, where the 3PAR StoreServ Storage ports are
cabled directly to the uplink ports on the Virtual Connect FlexFabric 10 Gb/24-port Module for a
c-Class BladeSystem. Zoning is automatically configured based on the Virtual Connect SAN
Fabric and server profile definitions.
For more information about Virtual Connect, Virtual Connect interconnect modules, and the
Virtual Connect Direct-Attach Fibre Channel feature, see the Hewlett Packard Enterprise Support
Center:
http://www.hpe.com/support/hpesc
See also the HPE SAN Design Reference Guide at SPOCK (from SPOCK Home under Design
Guides, select SAN Design Guide):
http://www.hpe.com/storage/spock
Fabric zoning controls which FC end-devices have access to each other on the fabric. Zoning
also isolates the host and 3PAR StoreServ Storage ports from Registered State Change
Notifications (RSCNs) that are irrelevant to these ports.
Set up fabric zoning by associating the device World Wide Names (WWNs) or the switch ports
with specified zones in the fabric. Use either the WWN method or the port zoning method with
the 3PAR StoreServ Storage. The WWN zoning method is recommended because the zone
survives switch port changes when cables are moved around on a fabric.
Required:
Employ fabric zoning, by using the methods provided by the switch vendor, to create relationships
between host HBA/CNA ports and 3PAR StoreServ Storage ports before connecting the host
HBA/CNA ports or 3PAR StoreServ Storage ports to the fabrics.
FC switch vendors support the zoning of the fabric end-devices in different zoning configurations.
There are advantages and disadvantages with each zoning configuration, so determine what is
needed before choosing a zoning configuration.
The 3PAR StoreServ Storage arrays support the following zoning configurations:

One initiator to one target per zone

One initiator to multiple targets per zone (zoning by HBA). This zoning configuration is
recommended for the 3PAR StoreServ Storage. Zoning by HBA is required for coexistence
with other Hewlett Packard Enterprise storage systems.
NOTE:
For high availability and clustered environments that require multiple initiators to access
the same set of target ports, Hewlett Packard Enterprise recommends creating separate
zones for each initiator with the same set of target ports.
The storage targets in the zone can be from the same 3PAR StoreServ Storage, multiple
3PAR StoreServ Storages, or a mixture of 3PAR and other Hewlett Packard
Enterprise storage systems.
Configuring the 3PAR StoreServ Storage for FC

For more information about using one initiator to multiple targets per zone, see the HPE SAN
Design Reference Guide at SPOCK (from SPOCK Home under Design Guides, select SAN
Design Guide):
http://www.hpe.com/storage/spock
When using an unsupported zoning configuration and an issue occurs, Hewlett Packard
Enterprise might require implementing one of the supported zoning configurations as part of the
corrective action.
After completing the following tasks, verify the switch and zone configurations by using the
3PAR CLI showhost command, confirming that each initiator is zoned with the correct targets:

Complete configuration of the storage port to the host and connect to the switch.

Create a zone configuration on the switch following the HPE SAN Design Reference Guide
and enable the zone set configuration.

Use the showhost command to verify that the host is seen on the storage node.
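
As an illustration only, the following Brocade FOS commands create and enable a WWN-based zone for one host HBA port and two 3PAR target ports; the zone name, configuration name, and WWNs are placeholders, and other switch vendors use different syntax:

switch:admin> zonecreate "aix_p0_3par", "10:00:00:00:c9:4e:60:31; 20:51:00:02:ac:00:00:b3; 21:51:00:02:ac:00:00:b3"
switch:admin> cfgadd "fabric_cfg", "aix_p0_3par"
switch:admin> cfgenable "fabric_cfg"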


FC Smart SAN
Starting with 3PAR OS 3.2.2, the following 3PAR StoreServ Storage systems support Smart
SAN on 16 Gb FC targets:

3PAR StoreServ 20000 Storage

3PAR StoreServ 10000 Storage

3PAR StoreServ 8000 Storage

3PAR StoreServ 7000 Storage

Smart SAN for 3PAR, through its target-driven peer zoning (TDPZ) feature, enables customers
to automate peer zoning, which results in fewer zones and allows zones to be configured in
minutes. Through automation, it reduces the probability of errors and potential downtime.
Without Smart SAN, an administrator needs to preconfigure zones on the FC switch,
downtime. Without Smart SAN, an administrator needs to preconfigure zones on the FC switch,
before configuring hosts and VLUNs on the 3PAR StoreServ Storage. With Smart SAN, the
administrator can configure and control zoning directly from the 3PAR CLI.
For information about supported FC switches and their firmware revisions with Smart SAN, see
SPOCK:
http://www.hpe.com/storage/spock
For more information about Smart SAN for 3PAR, including configuration, see the HPE 3PAR
Smart SAN 1.0 User Guide at the Hewlett Packard Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs

3PAR Coexistence
The 3PAR StoreServ Storage array can coexist with other Hewlett Packard Enterprise storage
array families.
For supported Hewlett Packard Enterprise storage array combinations and rules, see the HPE SAN
Design Reference Guide at SPOCK (from SPOCK Home under Design Guides, select SAN
Design Guide):
http://www.hpe.com/storage/spock


Configuration Guidelines for FC Switch Vendors


Use the following FC switch vendor guidelines before configuring ports on fabrics to which the
3PAR StoreServ Storage connects.

Brocade switch ports that connect to a host HBA port or to a 3PAR StoreServ Storage port
should be set to their default mode. On Brocade 3xxx switches running Brocade firmware
3.0.2 or later, verify that each switch port is in the correct mode by using the Brocade telnet
interface and the portcfgshow command, as follows:
brocade2_1:admin> portcfgshow
Ports            0  1  2  3   4  5  6  7
-----------------+--+--+--+--+--+--+--+--
Speed            AN AN AN AN  AN AN AN AN
Trunk Port       ON ON ON ON  ON ON ON ON
Locked L_Port    .. .. .. ..  .. .. .. ..
Locked G_Port    .. .. .. ..  .. .. .. ..
Disabled E_Port  .. .. .. ..  .. .. .. ..
where AN:AutoNegotiate, ..:OFF, ??:INVALID.

The following fill-word modes are supported on a Brocade 8 Gb switch running FOS firmware
6.3.1a and later:
admin> portcfgfillword
Usage: portCfgFillWord PortNumber Mode [Passive]
Mode: 0/-idle-idle    - IDLE in Link Init, IDLE as fill word (default)
      1/-arbff-arbff  - ARBFF in Link Init, ARBFF as fill word
      2/-idle-arbff   - IDLE in Link Init, ARBFF as fill word (SW)
      3/-aa-then-ia   - If ARBFF/ARBFF failed, then do IDLE/ARBFF

Hewlett Packard Enterprise recommends setting the fill word to mode 3 (aa-then-ia),
which is the preferred mode, by using the portcfgfillword command. If the fill word is
not correctly set, er_bad_os counters (invalid ordered set) will increase when using the
portstatsshow command while connected to 8 Gb HBA ports, as they need the
ARBFF-ARBFF fill word. Mode 3 will also work correctly for lower-speed HBAs, such as 4
Gb/2 Gb HBAs. For more information, see the Fabric OS Command Reference Manual and
the FOS release notes, at the Brocade website:
http://www.brocade.com/en.html
NOTE: In addition, some Hewlett Packard Enterprise switches, such as the HPE SN8000B
8-slot SAN backbone director switch, the HPE SN8000B 4-slot SAN director switch, the
HPE SN6000B 16 Gb FC switch, or the HPE SN3000B 16 Gb FC switch automatically select
the proper fill-word mode 3 as the default setting.

McDATA switch or director ports should be in their default modes as G or GX-port (depending
on the switch model), with their speed setting permitting them to autonegotiate.

Cisco switch ports that connect to 3PAR StoreServ Storage ports or host HBA ports should
be set to AdminMode = FX and AdminSpeed = auto, so that the port speed autonegotiates.
NOTE: Enabling dynamic tracking on the AIX host or IBM Virtual I/O Server is recommended
for all fabric connections.

QLogic switch ports should be set to port type GL-port and port speed auto-detect. QLogic
switch ports that connect to the 3PAR StoreServ Storage should be set to I/O Stream Guard
disable or auto, but never enable.


Target Port Limits and Specifications for FC


To avoid overwhelming a target port and to ensure continuous I/O operations, observe the
following limitations on a target port:

Follow the instructions for setting the maximum number of initiator connections supported
per array port, per array node pair, and per array as shown in the HPE 3PAR Support Matrix
documentation at SPOCK (from SPOCK Home under Other Hardware, select 3PAR):
http://www.hpe.com/storage/spock

Maximum I/O queue depth per port on each 3PAR StoreServ Storage HBA model, as follows:

HBA              Protocol  Array                             Bus    Speed    Max. Queue Depth
Emulex LP11002   FC        F200, F400, T400, T800            PCI-X  4 Gbps   959
3PAR FC044X      FC        F200, F400, T400, T800            PCI-X  4 Gbps   1638
Emulex LPe12002  FC        3PAR StoreServ 7000               PCIe   8 Gbps   3276
Emulex LPe12004  FC        3PAR StoreServ 7000, 10000        PCIe   8 Gbps   3276
Emulex LPe16002  FC        3PAR StoreServ 7000, 8000, 10000  PCIe   16 Gbps  3072
Emulex LPe16004  FC        3PAR StoreServ 8000, 20000        PCIe   16 Gbps  3072

The I/O queues are shared among the connected host HBA ports on a first-come, first-served
basis.

When all queues are in use and a host HBA port tries to initiate I/O, it receives a target queue
full response from the 3PAR StoreServ Storage port. This condition can result in erratic I/O
performance on each host. If this condition occurs, each host should be throttled so that it
cannot overrun the 3PAR StoreServ Storage port's queues when all hosts are delivering
their maximum number of I/O requests.
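
For example, one way to throttle an AIX host is to lower the per-hdisk queue depth with the native chdev command; a minimal sketch, where the device name and value are placeholders and the -P flag defers the change until the next restart:

# chdev -l hdisk2 -a queue_depth=8 -P
# lsattr -El hdisk2 | grep queue_depth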
NOTE:
When host ports can access multiple targets on fabric zones, the target number assigned
by the host driver for each discovered target can change when the host is booted and
some targets are not present in the zone. This situation might change the device node
access point for devices during a host restart. This issue can occur with any
fabric-connected storage, and is not specific to the 3PAR StoreServ Storage.

The maximum number of I/O paths supported is 16.


3PAR Priority Optimization for FC


The 3PAR Priority Optimization feature, introduced in 3PAR OS 3.1.2 MU2, is a more efficient
and dynamic solution for managing server workloads and can be used as an alternative to
setting host I/O throttles. When using this feature, a storage administrator can share storage
resources more effectively by enforcing quality-of-service limits on the array.
No special settings are needed on the host side to obtain the benefit of 3PAR Priority Optimization,
although certain per target or per adapter throttle settings might need to be adjusted in rare cases.
For complete details of how to use 3PAR Priority Optimization (Quality of Service) on 3PAR
StoreServ Storage systems, see the HPE 3PAR Priority Optimization technical whitepaper:
http://www.hpe.com/info/3PAR-Priority-Optimization

OS Specific Expected Behavior


As noted in the HPE 3PAR Priority Optimization white paper, there is no limitation on the minimum
number of IOPS and/or Bandwidth that can be set on a given VVset QoS Rule. It is important
that the workloads of the various applications are fully understood before applying any rules.
Lowering the QoS cap beyond a sensible limit will result in higher I/O response times and reduced
throughput on the host and eventually Queue Full errors returned by the array to the host.
An AIX host receiving Queue Full errors can respond by logging disk errors and failing the path
to the volume. These errors on the AIX host can be identified by running the AIX errpt command
and are identified as SC_DISK type errors with a description PATH HAS FAILED. These can be
followed by additional SC_DISK entries with a description PATH HAS RECOVERED. If these errors
are observed following a lowering of a VVset QoS Rule, the Rule setting should be considered
suspect and the value might be too low for a sensible minimum limit.
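
To look for these entries on the AIX host, the error log can be filtered as follows (the output shown is illustrative):

# errpt | grep "PATH HAS"
DE3B8540   0216120816 P H hdisk1         PATH HAS FAILED
F31FFAC3   0216121316 I H hdisk1         PATH HAS RECOVERED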

3PAR Persistent Ports for FC


The 3PAR Persistent Ports (or virtual ports) feature minimizes I/O disruption during a 3PAR
StoreServ Storage online upgrade or node-down event. Port shutdown or reset events do not
trigger this feature.
Each FC target storage array port has a partner array port automatically assigned by the system.
Partner ports are assigned across array node pairs.
3PAR Persistent Ports allows a 3PAR StoreServ Storage FC port to assume the identity of a
failed port (WWN port) while retaining its own identity. When a given physical port assumes the
identity of its partner port, the assumed port is designated as a persistent port. Array port failover
and failback with 3PAR Persistent Ports is transparent to most host-based multipathing software,
which can keep all of its I/O paths active.
NOTE: Use of 3PAR Persistent Ports technology does not negate the need for properly installed,
configured, and maintained host multipathing software.
For a more complete description of the 3PAR Persistent Ports feature, its operation, and a
complete list of required setup and connectivity guidelines, see the following documents:

The technical whitepaper HPE 3PAR StoreServ Persistent Ports (Hewlett Packard
Enterprise document #4AA4-4545ENW) at the Hewlett Packard Enterprise Support Center:
http://www.hpe.com/support/hpesc

The HPE 3PAR Command Line Interface Administrator's Manual, "Using Persistent Ports
for Nondisruptive Online Software Upgrades" section, at the Hewlett Packard
Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs


3PAR Persistent Ports Setup and Connectivity Guidelines for FC


Starting with 3PAR OS 3.1.2, the 3PAR Persistent Ports feature is supported for FC target ports.
Starting with 3PAR OS 3.1.3, the Persistent Port feature has additional functionality to minimize
I/O disruption during an array port loss_sync event triggered by a loss of array port connectivity
to the fabric.
Follow the specific cabling setup and connectivity guidelines so that 3PAR Persistent Ports
function properly:

3PAR StoreServ Storage FC partner ports must be connected to the same FC fabric, and
preferably to different FC switches on the fabric.

The FC fabric must support NPIV, and NPIV must be enabled.

Configure the host-facing HBAs for point-to-point fabric connection (there is no support for
direct-connect "loops").
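
On arrays running 3PAR OS 3.1.2 or later, the partner port assigned to each target port can be inspected with showport; a sketch with most columns omitted:

cli % showport
N:S:P Mode   State ... Partner FailoverState
0:1:1 target ready ... 1:1:1   none
1:1:1 target ready ... 0:1:1   none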

HPE 3PAR Persistent Ports Limitations


When the FC target port fails over to its partner port as part of the 3PAR Persistent Ports feature,
I/O will be redirected by the 3PAR StoreServ Storage to the partner port. During the persistent
port failover, an AIX host might detect and report a temporary path loss in the AIX errpt log.
This applies to HPE 3PAR ODM installations as well as to Symantec Storage Foundation/DMP
installations with AIX and 3PAR StoreServ Storage.
Hewlett Packard Enterprise recommends enabling the 3par_pathmon.sh script that is provided
with HPE 3PAR ODM 3.1.0.0 when using the HPE 3PAR Persistent Port feature. See the README
provided with the 3PAR ODM 3.1.0.0 for instructions and usage.
The HPE Persistent Ports feature is not supported with iSCSI.

3PAR Express Writes

The Express Writes feature, introduced in 3PAR OS 3.2.1, optimizes performance for small-block
random writes. It is enabled by default on 8 Gb targets configured in host connection mode.
The Express Writes feature is only available on the 3PAR StoreServ 7000 and 10000 Storage
systems with 8 Gb targets, and is not supported on 3PAR StoreServ 20000, 10000, 8000, or 7000
Storage systems with 16 Gb targets.


3 Connecting the Host with FC


Checking the Host for the Current OS Version
Before connecting the 3PAR StoreServ Storage to the host, verify support for the host OS and
HBA driver versions. See the Storage Single Point of Connectivity Knowledge (SPOCK) website
for configuration and interoperability information:
http://www.hpe.com/storage/spock
For the AIX host:
NOTE: The following examples do not necessarily reflect supported versions or the latest
version of AIX and Virtual I/O Server. They are intended only as examples. See the SPOCK
website for supported versions of AIX and VIOS:
http://www.hpe.com/storage/spock
To determine the current release information for the host and HBA, find the current version of
the AIX host system by typing the oslevel -s command in the AIX CLI:
# oslevel -s
7100-02-02-1316

The results show the following details:

7100 represents the OS version.

02 indicates the technology level.

02 indicates the service pack.

1316 indicates a date code.

For the IBM Virtual I/O Server:


To determine the current IBM Virtual I/O Server version, run the ioslevel command in the IBM
Virtual I/O Server CLI interface. Example:
$ ioslevel
2.2.2.2

Installing the IBM FC HBA


Setting up the IBM FC HBA for use with the 3PAR StoreServ Storage
For HBA installation instructions, driver support, and usage guidelines, see the IBM Installation
and Usage Guide for each product type. The required drivers are located on Volume 1 of the
IBM Base Installation CDs and are supplied with the hardware kit from IBM.
After the installation of the host FC HBAs, power on the AIX or IBM Virtual I/O Server host.
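
After powering on, the host can rescan for the new adapters and confirm that they are available; a minimal sketch (adapter names vary by system):

# cfgmgr
# lsdev -Cc adapter | grep fcs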


Displaying Firmware and Driver Versions for the IBM FC HBA


If the IBM FC HBA is already installed in the host, verify support on the 3PAR StoreServ Storage
by checking the model version, FRU number, and firmware levels for each IBM FC HBA connecting
to the 3PAR StoreServ Storage.
NOTE: For the IBM Virtual I/O Server, perform the commands in this section from the IBM
Virtual I/O Server oem_setup_env environment. Commands are shown starting with a # on the
command line. To start the oem_setup_env environment from the padmin user account, use
the AIX CLI oem_setup_env command.

To display the individual HBA FC ports that belong to an installed FC adapter, type lsdev
| grep fcs on the AIX CLI:
# lsdev | grep fcs
fcs0 Available 04-00 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs1 Available 04-01 8Gb PCI Express Dual Port FC Adapter (df1000f114108a03)
fcs2 Available 05-00 4Gb FC PCI Express Adapter (df1000fe)
fcs3 Available 05-01 4Gb FC PCI Express Adapter (df1000fe)

To display the HBA type, type lscfg -vps -l fcs0 | grep -i "customer card"
on the AIX CLI. For example, with AIX 6.1 and later:
# lscfg -vps -l fcs0 | grep -i "customer card"
Customer Card ID Number.....577D

577D is the IBM HBA model number.

To display the HBA FRU number, type lscfg -vps -l fcs0 | grep -i "fru" on
the AIX CLI.
# lscfg -vps -l fcs0 | grep -i "fru"
FRU Number..................10N9824

10N9824 is the FRU number.

To display the Firmware levels for each installed IBM FC HBA, type lscfg -vps -l fcs0
| grep Z9 on the AIX CLI:
# lscfg -vps -l fcs0 | grep Z9
Device Specific.(Z9)........US1.10X5

US1.10X5 is the current HBA firmware.


Displaying the IBM FC HBA WWNs


To display the FC HBA WWNs, type lscfg -vps -l fcs0 | grep -i "network" on the
AIX CLI:
# lscfg -vps -l fcs0 | grep -i "network"
Network Address.............10000000C94E6031

where 10000000C94E6031 is the HBA WWN.

Auto-detecting Topology
IBM FC HBAs auto-detect the topology during a host restart.

Setting Host FC HBA Parameters DynamicTracking and FastFail


The following settings are required on the IBM FC HBAs for the DynamicTracking and
FastFail parameters.
Direct connect:

DynamicTracking disabled

FastFail enabled

Fabric connect:

DynamicTracking enabled

FastFail enabled

NOTE: Change these parameters on the AIX host or the IBM Virtual I/O Server. The host
requires a restart to enable these changes.
When dynamic tracking of FC devices is enabled, the FC adapter driver can detect when the
Fibre Channel N_Port ID of a device changes and can reroute traffic destined for that device to
the new address while the devices are still online.
The following events can cause an N_Port ID to change:

Moving a cable between a switch and storage device from one switch port to another.

Connecting two separate switches via an Inter-Switch Link (ISL).

Restarting a switch.

Setting the IBM FC HBA parameter of FastFail speeds up recovery time in the event of a path
failure.
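
As an alternative to the SMIT procedure that follows, the same attributes can be set directly from the AIX CLI on each FC SCSI protocol device; a sketch in which fscsi0 is a placeholder and the -P flag defers the change until the restart:

# chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P
# lsattr -El fscsi0 -a dyntrk -a fc_err_recov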


To set up DynamicTracking and FastFail on an IBM FC HBA, complete the following
procedure, using the SMIT Devices menu:
1. Select FC Adapter.
2. Select FC SCSI I/O Controller Protocol Device.
3. Select Change/Show Characteristics of a FC SCSI Protocol Device.
4. Select the appropriate FC SCSI Protocol Device.
5. Set the following options:
   Dynamic Tracking of FC Devices to Yes.
   FC Fabric Event Error RECOVERY Policy to FastFail.
   Apply change to DATABASE only to Yes.
6. Press Enter after making all changes.
7. Press F9 to exit the shell.
8. Restart the AIX host or IBM Virtual I/O Server.

Installing 3PAR ODM for AIX MPIO on the AIX Server (Local Boot Drive)
when Using 3PAR OS 3.2.x or 3.1.x
This section describes how to install the 3PAR ODM for the AIX MPIO.
NOTE: 3PAR OS 3.2.x or 3.1.x support the 3PAR ODM 3.1.0.0 for the AIX native multipathing
solution (default path control module (PCM) and the native MPIO framework).

Installing 3PAR ODM for AIX MPIO


This procedure applies to either a new installation or an existing installation where 3PAR StoreServ
Storage virtual volumes (VVs) already exist on an AIX 7.2, AIX 7.1, AIX 6.1, AIX 5.3, or IBM
Virtual I/O Server system. This installation must be performed by a user logged into either the
AIX system or IBM Virtual I/O server with root privileges.
NOTE: For the IBM Virtual I/O Server, the commands within this section are performed from
the IBM Virtual I/O Server oem_setup_env environment and start with a # on the command line.
To enter the oem_setup_env environment from the padmin user account, use the AIX CLI
oem_setup_env command.


Installation of the 3PAR ODM requires a system restart to become effective. The smit MPIO
can be used to configure or manage the MPIO environment. By default, the AIX MPIO is set to
active/active mode. Install the 3PAR ODM software for IBM AIX on either the AIX or the IBM
Virtual I/O server host. If you are using Veritas VxDMP, do not install the 3PAR ODM software
for IBM AIX, and instead install the 3PAR ODM software for Veritas VxDMP.
1. Load the distribution CD containing the 3PAR ODM for IBM AIX into the CD drive.
2. Use the smit command on the AIX CLI to install the 3PAR MPIO for IBM AIX from the
distribution CD.
a. At the AIX command line, enter the smit command.
b. Select Install and Update Software.
c. Select Install Software.
d. At the Input Device directory for software, enter /dev/cd0.
e. In the ACCEPT new license agreements?, select Yes.
f. Press Enter to start the installation.
If 3PAR MPIO has been previously installed, type smit update_all on the AIX CLI.
NOTE: Be sure to set the parameter ACCEPT new license agreements? to Yes.
3. Add 3par_pathmon.sh to /etc/inittab so that it runs after the AIX host restarts,
as in the following example:
# /usr/lpp/3PARmpio/bin/3par_pathmon.sh -a
The 3par_pathmon entry has been added to /etc/inittab
4. Restart the AIX server or IBM Virtual I/O Server.
5. After the server restarts, verify that 3par_pathmon.sh is running as follows:
# /usr/lpp/3PARmpio/bin/3par_pathmon.sh -s
The 3PAR path monitoring program has been started.

For upgrade and removal instructions, see the appropriate 3PAR ODM Software for IBM AIX
Readme.
NOTE: Hewlett Packard Enterprise recommends enabling the 3par_pathmon.sh script that
is provided with 3PAR ODM 3.1.0.0. This daemon-like program monitors the 3PAR device paths
that are in a Failed state. For all paths in a Failed state that the health check function cannot
bring back, a chpath command is used to enable them. See the README provided with the
3PAR ODM 3.1.0.0 for instructions and usage.


Displaying the Path Status After Installing 3PAR ODM


After the installation of the 3PAR ODM for AIX MPIO, verify that the 3PAR ODM has been installed
successfully:
# lslpp -l | grep -i 3par
  3PARmpio.64   3.1.0.0  COMMITTED  3PAR Multipath I/O for IBM

After the installation of the 3PAR ODM for AIX MPIO, check path status and verify the connection
between the host and 3PAR StoreServ Storage.

To display the disk devices available on the AIX or IBM Virtual I/O Server host, use the
lsdev -Cc disk command.
# lsdev -Cc disk
hdisk0 Available 00-08-00 SAS Disk Drive
hdisk1 Available 07-00-01 3PAR InServ Virtual Volume
hdisk2 Available 07-00-01 3PAR InServ Virtual Volume
hdisk3 Available 07-00-01 3PAR InServ Virtual Volume
hdisk4 Available 07-00-01 3PAR InServ Virtual Volume
hdisk5 Available 07-00-01 3PAR InServ Virtual Volume

To display the path status through the AIX CLI on hdisk1, use the lspath -l hdisk1
command:
# lspath -l hdisk1
Enabled hdisk1 fscsi4
Enabled hdisk1 fscsi4
Enabled hdisk1 fscsi4
Enabled hdisk1 fscsi4
Enabled hdisk1 fscsi4
Enabled hdisk1 fscsi5
Enabled hdisk1 fscsi5
Enabled hdisk1 fscsi5
Enabled hdisk1 fscsi5
Enabled hdisk1 fscsi5

To display the specific path information on hdisk1, use the lspath -l hdisk1 -F
"status name parent connection" command.
NOTE: The connection field shows the 3PAR target port WWN, whose second, third, and
fourth digits encode the node, slot, and port <n:s:p>. For example, 20510002ac0000b3
corresponds to port 0:5:1.
# lspath -l hdisk1 -F "status name parent connection"
Enabled hdisk1 fscsi4 20510002ac0000b3,0
Enabled hdisk1 fscsi4 20410002ac0000b3,0
Enabled hdisk1 fscsi4 21210002ac0000b3,0
Enabled hdisk1 fscsi4 21410002ac0000b3,0
Enabled hdisk1 fscsi4 21510002ac0000b3,0
Enabled hdisk1 fscsi5 20510002ac0000b3,0
Enabled hdisk1 fscsi5 20410002ac0000b3,0
Enabled hdisk1 fscsi5 21210002ac0000b3,0
Enabled hdisk1 fscsi5 21410002ac0000b3,0
Enabled hdisk1 fscsi5 21510002ac0000b3,0

To display the basic path status for ALL MPIO devices, use the lspath command.
# lspath
Enabled hdisk0 sas0
Enabled hdisk1 fscsi4
Enabled hdisk2 fscsi4
Enabled hdisk1 fscsi4
Enabled hdisk2 fscsi4
Enabled hdisk1 fscsi4
Enabled hdisk2 fscsi4
Enabled hdisk1 fscsi4
Enabled hdisk2 fscsi4
Enabled hdisk1 fscsi4
Enabled hdisk2 fscsi4
Enabled hdisk1 fscsi5
Enabled hdisk2 fscsi5
Enabled hdisk1 fscsi5
Enabled hdisk2 fscsi5
Enabled hdisk1 fscsi5
Enabled hdisk2 fscsi5
Enabled hdisk1 fscsi5
Enabled hdisk2 fscsi5
Enabled hdisk1 fscsi5
Enabled hdisk2 fscsi5

Additional Modules Available with 3PAR ODM for AIX


In the event of a failure, the customer support representative requires access to information
regarding the installation and configuration of the AIX or IBM Virtual I/O Server host. To access
this information, use the 3par_explorer.sh utility.
This utility is located in /usr/lpp/3PARmpio/bin.
Additional information regarding 3PAR MPIO utilities and other useful information can be found
in the HPE 3PAR ODM for IBM MPIO and Veritas VxDMP documentation at the Software Depot:
http://www.hpe.com/support/softwaredepot


Additional 3PAR ODM Settings


For details about AIX 3PAR ODM, see the 3PAR ODM 3.1 Base software for IBM documentation
at the Software Depot:
http://www.hpe.com/support/softwaredepot
NOTE: For AIX installations in other than en_US language convention, an ODM file must be
copied to additional locations to enable the OS to correctly display the 3PAR VV label of an hdisk
instance. The copy (cp) commands are:
cp /usr/lib/nls/msg/en_US/3par.cat /usr/lib/methods/3par.cat
and
cp /usr/lib/nls/msg/en_US/3par.cat /usr/lib/nls/msg/X/3par.cat
Where X is the language convention used at the time of installation.
IBM has addressed this issue with the APAR process and included a fix in the following OS
versions:

IV32569 6.1 TL8 SP2

IV32498 6.1 TL9 SP0

IV32581 7.1 TL2 SP2

IV33001 7.1 TL3 SP0

Load Balancing Policies


Two Load Balancing policies are available in 3PAR ODM 3.1.0.0:
Round Robin (the default Load Balancing Policy)
Failover
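
The policy in effect for a given device can be displayed, and changed, with the native AIX commands; a sketch in which hdisk1 is a placeholder and the attribute values follow the MPIO naming (round_robin, fail_over):

# lsattr -El hdisk1 | grep algorithm
# chdev -l hdisk1 -a algorithm=fail_over -P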

3PAR ODM 3.1.1.0 Update for AIX MPIO


3PAR ODM 3.1.1.0 update for AIX MPIO, an update to 3PAR ODM 3.1.0.0 for AIX MPIO, provides
the following enhancements.
3PAR ODM 3.1.1.0 for AIX MPIO contains the following additional features:

shortest_queue
  Adds the new shortest_queue algorithm to the existing failover and round_robin values.
  Allows path selection for I/O based on the minimum outstanding path queue entries.
  The default setting is round_robin. The native chdev command can be used to change
  to shortest_queue on a per-LUN basis.

timeout_policy
  Adds the new fail_path and disable_path settings to the existing retry_path value.
  Sets the default to fail_path, as recommended by IBM.

To install the 3PAR ODM 3.1.1.0 update for AIX MPIO:
1. Download the package to the target server for installation from the Software Depot:
http://www.hpe.com/support/softwaredepot
2. Unzip and untar the contents of the downloaded file and place the contents in
its own /tmp/dirname directory (replace dirname with a name of your choice).
NOTE: The 3PAR MPIO managed disk devices must be placed in a defined state prior
to applying the 3.1.1.0 update. To put disks in a defined state, use the rmdev -l hdiskX
command.
Rootvg on a SAN boot volume cannot be placed in the defined state. During the installation,
the default value of fail_path will be set for timeout_policy for a SAN boot rootvg
volume.
Any algorithm changes that need to be done on the SAN boot rootvg volume require the
-P flag used with the chdev command. For example:
# chdev -a algorithm=shortest_queue -l hdiskX -P
3. Use the smit command to install the product from the directory where the update was
placed.
NOTE: In the following procedure, be sure to set the parameter ACCEPT new license
agreements? to Yes.
a. At the AIX command line, enter the smit command.
b. Select Software Installation and Maintenance.
c. Select Install and Update Software.
d. Select Install Software.
e. At the Input Device directory for software, enter /tmp/dirname.
f. In the ACCEPT new license agreements?, select Yes.
g. Press Enter to start the installation.
4. Restore disks. Use the mkdev -l hdiskX command to restore disks to available.
5. A reboot is required after installing the update.


Installing the 3PAR ODM 3.1.0.0 to use with Veritas


When using Veritas Volume Manager, installation of the 3PAR ODM 3.1.0.0 for Veritas VxVM
for 3PAR OS 3.1.x will permit command tag queue support, allowing a queue depth greater than
one.
This procedure applies either to a new installation or to an existing installation where 3PAR
StoreServ Storage VVs already exist on an AIX 7.2, AIX 7.1, AIX 6.1, or AIX 5.3 system. A user
logged into the AIX system as the superuser or with root privileges must perform this installation.
Installation of the 3PAR ODM changes requires a system restart to become effective.
To Install the 3PAR ODM 3.1.0.0 Software for Veritas VxVM, use the following procedure:
1. Copy the 3parodm_vrts.tar.qz file to a temporary folder on your system.
2. Unzip and perform untar on the contents of the downloaded file.
3. Run the smit install command.
4. Select Install and Update Software > Install Software.
5. Press F4 and select the location of the unzipped files.
6. Press F4 for the software to install and then select the .bff file.
7. Press Enter. The smit install command installs the 3PAR ODM software. When installation ends, the command status displays: OK
8. To verify that the 3PAR ODM package was fully installed, restart the system, and then run the lslpp -l devices.fcp.disk.3PAR.vxvm.rte command. This command shows the package level and state.
   For more information about the 3PAR ODM 3.1.0.0 for Veritas VxVM, see the HPE 3PAR ODM 3.1 Software for Veritas VxVM Readme.
9. If the AIX system previously had 3PAR StoreServ Storage VVs defined, the hdisk definitions appear as 3PAR InServ Virtual Volume. Any newly created or exported 3PAR StoreServ Storage VVs will also have characteristics similar to those shown below:
# lsdev -Cc disk
hdisk0  Available  10-60-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk1  Available  20-58-01       3PAR InServ Virtual Volume
hdisk2  Available  20-58-01       3PAR InServ Virtual Volume
hdisk3  Available  20-60-01       3PAR InServ Virtual Volume
hdisk4  Available  20-58-01       3PAR InServ Virtual Volume
hdisk5  Available  20-60-01       3PAR InServ Virtual Volume
hdisk6  Available  20-60-01       3PAR InServ Virtual Volume

Existing 3PAR VVs and any newly created or exported 3PAR StoreServ Storage VVs have a default queue depth of 16. To display this value, run the following command on the AIX CLI, where x is the hdisk number:
# lsattr -El hdiskx | grep queue_depth

If required, the default queue depth of a 3PAR StoreServ Storage VV can be changed. Any change made to the queue depth requires a system restart to become effective. To change the queue depth device attribute, run the following command on the AIX CLI:
# chdev -l hdiskx -a queue_depth=yy -P
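For example, a minimal sketch that raises the queue depth to 32 on hdisk1 (the value 32 is illustrative only; choose a value suited to your workload):

# chdev -l hdisk1 -a queue_depth=32 -P
# shutdown -Fr

After the restart, rerun lsattr -El hdisk1 | grep queue_depth to confirm the new value.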


Installing the Veritas DMP Multipathing Modules


If Veritas Volume Manager is used for AIX in Storage Foundation, follow the Veritas Volume
Manager Installation and User Guide, available on the Symantec website:
http://www.symantec.com/
Install all prerequisite APARs as required in the Veritas Installation Guide for AIX.
The Veritas DMP layer in Veritas Volume Manager does not recognize the storage server volumes
as multipathed until 3PAR ODM 3.1.0.0 software for Veritas VxVM is installed.
The 3PAR ODM 3.1.0.0 software for Veritas VxVM can be downloaded from the Symantec
website at: http://www.symantec.com/

Configuring the Veritas DMP Multipathing


There are no special considerations or configuration modifications when using the 3PAR StoreServ
Storage, as long as the 3PAR ODM 3.1.0.0 software for Veritas VxVM is installed.

Connecting the Host with an FC Reservation Policy


A reservation policy, which determines the type of reservation methodology that the device driver
implements when the device is opened, can be used to limit device access from other adapters,
whether the adapters are on the same system or another system. The reservation policy on a
3PAR device is controlled by the predefined ODM attribute reserve_policy. Change the value
of reserve_policy by invoking the AIX chdev command on a 3PAR MPIO device.
Three different reservation policies can be set on 3PAR MPIO devices:

no_reserve
If you set 3PAR devices with this reservation policy, no reservation is made on the devices. A device without a reservation can be accessed by any initiator at any time. I/O can be sent from all the paths of the 3PAR device. This is the default reservation policy of 3PAR ODM 3.1.0.0.

single_path
If you set this reservation policy for 3PAR MPIO devices, only the fail_over path selection algorithm can be selected for the devices. With this reservation policy, all paths are open on a 3PAR device; however, only one path makes a reservation on the device. I/O can be sent only through this path.

PR_exclusive
With this reservation policy, a persistent reservation (PR) is made on the 3PAR device with a PR key. Any initiators that register with the same PR key can access the device. Normally, you should pick a unique PR key for a server; different servers should each have a different, unique PR key. I/O is routed to all paths of the MPIO device, because all paths of an MPIO device are registered with the same PR key.

NOTE: The PR_shared reservation policy is not supported by Hewlett Packard Enterprise at this time.
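For example, a minimal sketch of setting each policy with the chdev command (hdisk2 and the PR key value 0x1234 are placeholders; the PR_key_value attribute name is assumed from standard AIX MPIO usage and should be verified with lsattr):

# chdev -l hdisk2 -a reserve_policy=no_reserve -P
# chdev -l hdisk2 -a reserve_policy=single_path -P
# chdev -l hdisk2 -a reserve_policy=PR_exclusive -a PR_key_value=0x1234 -P

The -P flag defers each change until the next restart.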


NOTE: For AIX installations using a language convention other than en_US, you must copy an ODM message catalog file to enable the OS to correctly display the 3PAR VV label on the hdisk instance. Use the following commands to copy the file to the appropriate locations:
cp /usr/lib/nls/msg/en_US/3par.cat /usr/lib/methods/3par.cat
and
cp /usr/lib/nls/msg/en_US/3par.cat /usr/lib/nls/msg/X/3par.cat
where X is the language convention used at the time of installation.
IBM has addressed this issue with the APAR process and included a fix in the following OS
versions:


IV32569: AIX 6.1 TL8 SP2

IV32498: AIX 6.1 TL9 SP0

IV32581: AIX 7.1 TL2 SP2

IV33001: AIX 7.1 TL3 SP0


4 Allocating Storage for Access by the AIX or IBM Virtual I/O Server Host

Creating Storage on the 3PAR StoreServ Storage

This section describes the general procedures and commands that are required to create the VVs that can then be exported for discovery by the AIX or IBM Virtual I/O Server host.
For additional information, see the HPE 3PAR Command Line Interface Administrator's Manual.
For a comprehensive description of 3PAR OS commands, see the HPE 3PAR Command Line
Interface Reference at the Hewlett Packard Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs

Creating Virtual Volumes


Create volumes that are provisioned from one or more Common Provisioning Groups (CPGs).
Volumes can be either fully provisioned, thinly provisioned, or thinly deduplicated volumes.
Optionally, specify a CPG for snapshot space for provisioned volumes.
Using the 3PAR Management Console:
1. From the menu bar, select:
   Actions→Provisioning→Virtual Volume→Create Virtual Volume
2. Use the Create Virtual Volume wizard to create a base volume.
3. Select one of the following options from the Allocation list:

   Fully Provisioned

   Thinly Provisioned

   Thinly Deduplicated (supported with 3PAR OS 3.2.1 MU1 and later)

Using the 3PAR CLI:


Create a fully provisioned VV or TPVV:
cli % createvv [options] <usr_CPG> <VV_name>[.<index>] <size>[g|G|t|T]

For example:
cli % createvv -cnt 5 testcpg TESTLUNS 5g

For complete details on creating volumes for the 3PAR OS version that is being used on the
3PAR StoreServ Storage, see the following documents:

HPE 3PAR Management Console User Guide

HPE 3PAR OS Command Line Interface Reference

These documents are available at the Hewlett Packard Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs
NOTE: The commands and options available for creating a VV might vary for earlier versions
of the 3PAR OS.


Creating Thinly Provisioned Virtual Volumes


To create TPVVs (thinly provisioned virtual volumes), see the following documents:

3PAR StoreServ Storage Concepts Guide

HPE 3PAR Command Line Interface Administrator's Manual

HPE 3PAR OS Command Line Interface Reference

These documents are available at the Hewlett Packard Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs
NOTE: To create a TPVV, a 3PAR Thin Provisioning license is required.
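For example, a minimal sketch of creating a 100 GB TPVV with the 3PAR CLI (testcpg and THINLUN are placeholder names):

cli % createvv -tpvv testcpg THINLUN 100g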

Creating Thinly Deduplicated Virtual Volumes


NOTE: With 3PAR OS 3.2.1 MU1 and later, the 3PAR Thin Deduplication feature is supported. To create TDVVs (thinly deduplicated virtual volumes), a 3PAR Thin Provisioning license is required.
3PAR Thin Deduplication allows the creation of TDVVs from solid state drive (SSD) CPGs. A TDVV has the same characteristics as a TPVV, with the additional capability of removing duplicated data before it is written to the volume. TDVVs are managed like any other TPVV. A TDVV must be associated with CPGs created from SSDs.
For more information about 3PAR Thin Deduplication, see the following documents:

3PAR StoreServ Storage Concepts Guide

HPE 3PAR Command Line Interface Administrator's Manual

HPE 3PAR OS Command Line Interface Reference

HPE 3PAR Thin Technologies technical white paper containing best practices

These documents are available at the Hewlett Packard Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs
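For example, a minimal sketch of creating a 100 GB TDVV (the -tdvv option is assumed for this 3PAR OS level; SSD_CPG is a placeholder for a CPG created from SSDs):

cli % createvv -tdvv SSD_CPG DEDUPLUN 100g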

Exporting LUNs to the Host


This section explains how to export LUNs to the host as VVs, referred to as virtual LUNs (VLUNs).
CAUTION: If a configuration has two IBM Virtual I/O Servers, the LUN numbers when exported to each of the IBM Virtual I/O Servers must be identical; otherwise, data corruption will occur.
For example:
cli % createvlun -cnt 5 TESTLUNs.0 0 VIOS#one
cli % createvlun -cnt 5 TESTLUNs.0 0 VIOS#two


To export VVs as VLUNs, use the following command:

createvlun [-cnt <count>] <name_of_virtual_LUNs.int> <starting_LUN_number> <hostname/hostdefinition>

where:

-cnt specifies the number of identical VLUNs to create, using an integer from 1 through 999. If not specified, one VLUN is created.

<name_of_virtual_LUNs> specifies the name of the VV exported as a virtual LUN.

<starting_LUN_number> indicates the starting LUN number.

.int is the integer value. For every LUN created, the .int suffix of the VV name is incremented by one.

<hostname/hostdefinition> indicates the name of the host definition created earlier.

For example:
cli % createvlun -cnt 5 TESTLUNa.0 0 VIOS

Exporting VLUNs to the AIX or IBM Virtual I/O Server Host


This section describes how to discover exported devices on the AIX or IBM Virtual I/O Server host.

Restrictions on Volume Size and Number


Follow the guidelines for creating VVs and virtual LUNs (VLUNs) in the HPE 3PAR Command Line Interface Administrator's Manual while adhering to these cautions and guidelines:

This configuration supports sparse LUNs (meaning that LUNs might be skipped). LUNs
might also be exported in non-ascending order (such as 0, 5, 7, 3).

The 3PAR StoreServ Storage supports the exportation of VLUNs with LUNs in the range
from 0 to 65535.
NOTE:

AIX supports only 512 LUNs per host HBA port, 0-511.

Exported VLUNs will not be registered on the host until the cfgmgr command is run on the
host.

The maximum LUN size that can be exported to an AIX or IBM Virtual I/O Server host is 16
TB when the installed 3PAR OS version is 3.1.x. A LUN size of 16 TB on an AIX or IBM
Virtual I/O Server host is dependent on the installed AIX technology level, since some earlier
versions of AIX will not support an hdisk greater than 2 TB.

CAUTION: If the configuration uses two IBM Virtual I/O Servers, the LUN numbers when exported to each of the IBM Virtual I/O Servers must be identical. If this requirement is not adhered to, data corruption will occur.


Scanning for New Devices on an AIX or IBM Virtual I/O Server Host
This section describes the steps to scan for new devices on an AIX or IBM Virtual I/O Server
host.

LUN discovery on the AIX or IBM Virtual I/O Server host is accomplished by using the cfgmgr command on the AIX CLI or through the IBM Virtual I/O Server command line.

Following the completion of the cfgmgr command, display the exported LUNs by using the lsdev -Cc disk command on the AIX CLI or through the IBM Virtual I/O Server command line.
AIX example:
# lsdev -Cc disk
hdisk0 Available 00-08-00 SAS Disk Drive
hdisk1 Available 07-00-01 3PAR InServ Virtual Volume
hdisk2 Available 07-00-01 3PAR InServ Virtual Volume

IBM Virtual I/O Server example:

$ lsdev -type disk
name    status     description
hdisk0  Available  3PAR InServ Virtual Volume
hdisk1  Available  3PAR InServ Virtual Volume
hdisk2  Available  3PAR InServ Virtual Volume
hdisk3  Available  3PAR InServ Virtual Volume
hdisk4  Available  3PAR InServ Virtual Volume
hdisk5  Available  3PAR InServ Virtual Volume

To display the LUN number for each exported 3PAR StoreServ Storage LUN for AIX, use the lsattr -El hdisk1 | grep -i LUN command on the AIX CLI. For example:
# lsattr -El hdisk1 | grep -i LUN
lun_id          0x0                Logical unit number ID          False
# lsattr -El hdisk2 | grep -i LUN
lun_id          0x1000000000000    Logical unit number ID          False

To display the LUN number in HEX for each exported 3PAR StoreServ Storage LUN for the IBM Virtual I/O Server, use the lsdev -dev hdisk1 -attr | grep -i lun command through the IBM Virtual I/O Server command line. For example:
$ lsdev -dev hdisk1 -attr | grep -i lun
lun_id          0xa000000000000

To display the exported raw LUN capacity in megabytes for AIX, use the bootinfo -s
hdisk1 command on the AIX CLI. For example:
# bootinfo -s hdisk1
5120


To display the exported raw LUN capacity in megabytes, use the bootinfo -s hdisk1
command from the oem_setup_env environment on the VIOS CLI. For example:
# bootinfo -s hdisk1
71680

Creating Virtual SCSI Devices for Connected LPARs


This section describes the steps to create a virtual SCSI device to be used by a connected LPAR.
In this example, assuming that virtual SCSI devices have been defined in the managed profiles for the IBM Virtual I/O Server and that the LPAR is receiving its virtualized SCSI devices from the Virtual I/O Server, a virtual device is created from the physical backing device hdisk22 and mapped to virtual adapter vhost0.
WARNING! In an environment where two IBM Virtual I/O Servers are used to access the same storage, it is imperative to ensure that the LUN numbers are identical on each IBM Virtual I/O Server when virtualizing those devices to an attached client. Failure to do so will result in data corruption.
Verify that hdisk22 is connected to the IBM Virtual I/O Server by using the lsdev -type
disk | grep hdisk22 command.
$ lsdev -type disk | grep hdisk22
hdisk22          Available          3PAR InServ Virtual Volume

Check and verify the physical LUN number that is associated with hdisk22 by using the lsdev -dev hdisk22 -attr | grep lun_id command.
$ lsdev -dev hdisk22 -attr | grep lun_id
lun_id          0x1f4000000000000          Logical unit number ID          False


Or use the lspath -dev hdisk22 command to display the path status and the physical LUN number associated with hdisk22.
$ lspath -dev hdisk22
status   name     parent  connection
Enabled  hdisk22  fscsi0  22410002ac000044,1f4000000000000
Enabled  hdisk22  fscsi0  22510002ac000044,1f4000000000000
Enabled  hdisk22  fscsi0  23220002ac000044,1f4000000000000
Enabled  hdisk22  fscsi0  23410002ac000044,1f4000000000000
Enabled  hdisk22  fscsi0  23510002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  22410002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  22510002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  23220002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  23410002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  23510002ac000044,1f4000000000000

The physical LUN number associated with hdisk22 is "1f4". Create the virtual device by mapping hdisk22 to virtual adapter vhost0:
$ mkvdev -vdev hdisk22 -vadapter vhost0 -dev newvdev
newvdev Available

Verify any needed information regarding the newly created virtual device by using the lsmap
-all command and locating the hdisk22 backing device.


$ lsmap -all
SVSA             Physloc                                       Client Partition ID
---------------  --------------------------------------------  -------------------
vhost0           U8203.E4A.10DB5C1-V5-C11                      0x00000007

VTD              newvdev
Status           Available
LUN              0x9600000000000000
Backing device   hdisk22
Physloc          U789C.001.DQD2174-P1-C1-T1-W22410002AC000044-L1F4000000000000

Once the LUNs have been virtualized and exported to the appropriate VHOST device definitions using the mkvdev command, the devices will not be visible on the AIX guest client's LPAR until the cfgmgr command is executed.
CAUTION: If dual IBM Virtual I/O Servers are configured and using the same physical devices
from the 3PAR StoreServ Storage, it is important to verify that the same physical LUN numbers
are represented when creating virtual SCSI devices on each of the IBM Virtual I/O Servers.
Failure to perform this step when using dual IBM Virtual I/O Servers will result in data corruption.
Example of cautionary scenario:
Assume that dual IBM Virtual I/O Servers named VIOS1 and VIOS2 are configured.

From the IBM Virtual I/O Server named VIOS1, hdisk22 might be backed by physical device LUN 1f4.

However, from the IBM Virtual I/O Server named VIOS2, hdisk22 might be backed by physical device LUN 1f0.

From VIOS1:
$ lspath -dev hdisk22
status   name     parent  connection
Enabled  hdisk22  fscsi0  22410002ac000044,1f4000000000000
Enabled  hdisk22  fscsi0  22510002ac000044,1f4000000000000
Enabled  hdisk22  fscsi0  23220002ac000044,1f4000000000000
Enabled  hdisk22  fscsi0  23410002ac000044,1f4000000000000
Enabled  hdisk22  fscsi0  23510002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  22410002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  22510002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  23220002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  23410002ac000044,1f4000000000000
Enabled  hdisk22  fscsi1  23510002ac000044,1f4000000000000

where 1f4000000000000 indicates the physical LUN number.


From VIOS2:
$ lspath -dev hdisk22
status   name     parent  connection
Enabled  hdisk22  fscsi0  22410002ac000044,1f0000000000000
Enabled  hdisk22  fscsi0  22510002ac000044,1f0000000000000
Enabled  hdisk22  fscsi0  23220002ac000044,1f0000000000000
Enabled  hdisk22  fscsi0  23410002ac000044,1f0000000000000
Enabled  hdisk22  fscsi0  23510002ac000044,1f0000000000000
Enabled  hdisk22  fscsi1  22410002ac000044,1f0000000000000
Enabled  hdisk22  fscsi1  22510002ac000044,1f0000000000000
Enabled  hdisk22  fscsi1  23220002ac000044,1f0000000000000
Enabled  hdisk22  fscsi1  23410002ac000044,1f0000000000000
Enabled  hdisk22  fscsi1  23510002ac000044,1f0000000000000

where 1f0000000000000 indicates the physical LUN number.


However, hdisk23 on VIOS2 has the correct physical LUN number of 1f4000000000000.
$ lspath -dev hdisk23
status   name     parent  connection
Enabled  hdisk23  fscsi0  22410002ac000044,1f4000000000000
Enabled  hdisk23  fscsi0  22510002ac000044,1f4000000000000
Enabled  hdisk23  fscsi0  23220002ac000044,1f4000000000000
Enabled  hdisk23  fscsi0  23410002ac000044,1f4000000000000
Enabled  hdisk23  fscsi0  23510002ac000044,1f4000000000000
Enabled  hdisk23  fscsi1  22410002ac000044,1f4000000000000
Enabled  hdisk23  fscsi1  22510002ac000044,1f4000000000000
Enabled  hdisk23  fscsi1  23220002ac000044,1f4000000000000
Enabled  hdisk23  fscsi1  23410002ac000044,1f4000000000000
Enabled  hdisk23  fscsi1  23510002ac000044,1f4000000000000

Therefore, when the virtual SCSI devices are created on each of the IBM Virtual I/O Servers, it is necessary to use different hdisk definitions with the mkvdev command to create the virtual SCSI device:
With VIOS1, the command is:
$ mkvdev -vdev hdisk22 -vadapter vhost0 -dev newvdev
newvdev Available

With VIOS2, the command is:


$ mkvdev -vdev hdisk23 -vadapter vhost0 -dev newvdev
newvdev Available

In conclusion, when creating virtual SCSI devices to be used by connected LPARs, be sure to
first verify that the same physical LUN numbers are associated with each hdisk definition.


Growing VV Exported to AIX LPARs


This section explains how to grow 3PAR StoreServ Storage VVs that are mapped to AIX LPARs being served by the VIO servers. This section assumes that 3PAR StoreServ Storage thinly provisioned virtual volumes (TPVVs) are being used as the volumes exported to the served AIX Logical Partitions (LPARs). It is also assumed that scalable volume groups are created on all of the TPVVs being served to the AIX LPAR.
NOTE:

This feature is supported only with 3PAR OS 3.1.x and later.

In the following example, the AIX LPAR being served by dual VIO servers already has six 3PAR StoreServ Storage VVs that are accessible.
# lsdev -Cc disk
hdisk0  Available  Virtual SCSI Disk Drive
hdisk1  Available  Virtual SCSI Disk Drive
hdisk2  Available  Virtual SCSI Disk Drive
hdisk3  Available  Virtual SCSI Disk Drive
hdisk4  Available  Virtual SCSI Disk Drive
hdisk5  Available  Virtual SCSI Disk Drive

From the 3PAR StoreServ Storage, two new TPVVs will be created, exported to the VIO servers, and then mapped to the AIX LPAR. The new VVs accessible to the AIX LPAR will then have scalable volume groups created on each.
Create two TPVVs named vol1 and vol2 in CPG AIX, with an initial size of 7 GB each (syntax: createvv -tpvv <cpg> <vvname> <size>):
# createvv -tpvv AIX vol1 7G
# createvv -tpvv AIX vol2 7G

Export each of the created TPVVs with LUN IDs of 500 and 501 to each of the VIO servers:
# createvlun -f vol1 500 VIOS1
# createvlun -f vol2 501 VIOS1
# createvlun -f vol1 500 VIOS2
# createvlun -f vol2 501 VIOS2

On each VIO server, scan for the newly created TPVVs using the cfgdev command.
The new 3PAR StoreServ Storage TPVVs discovered on each VIO server in this example were
assigned the values hdisk7 and hdisk8 by each of the VIO servers.
Map each of the TPVVs to the AIX LPAR through both VIO servers:
$ mkvdev -vdev hdisk7 -vadapter vhost0 -dev vol1
vol1 Available
$ mkvdev -vdev hdisk8 -vadapter vhost0 -dev vol2
vol2 Available


On the AIX LPAR, scan for the new virtual SCSI devices previously mapped and list the disks:
# cfgmgr
# lsdev -Cc disk
hdisk0  Available  Virtual SCSI Disk Drive
hdisk1  Available  Virtual SCSI Disk Drive
hdisk2  Available  Virtual SCSI Disk Drive
hdisk3  Available  Virtual SCSI Disk Drive
hdisk4  Available  Virtual SCSI Disk Drive
hdisk5  Available  Virtual SCSI Disk Drive
hdisk6  Available  Virtual SCSI Disk Drive
hdisk7  Available  Virtual SCSI Disk Drive

Create scalable volume groups volume1 and volume2 on the two new disks:
# mkvg -S -s 1 -y volume1 hdisk6
0516-1254 mkvg: Changing the PVID in the ODM.
volume1
# mkvg -S -s 1 -y volume2 hdisk7
0516-1254 mkvg: Changing the PVID in the ODM.
volume2

Verify the volume group geometry on each of the newly created scalable volume groups. In this example, the partition size is 7099 megabytes for each.


# lspv hdisk6
PHYSICAL VOLUME:    hdisk6                   VOLUME GROUP:     volume1
PV IDENTIFIER:      00f96d4dd08caf79         VG IDENTIFIER     00f96d4d00004c000000014ed08cff97
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            1 megabyte(s)            LOGICAL VOLUMES:  0
TOTAL PPs:          10171 (10171 megabytes)  VG DESCRIPTORS:   2
FREE PPs:           10171 (10171 megabytes)  HOT SPARE:        no
USED PPs:           0 (0 megabytes)          MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  2035..2034..2034..2034..2034
USED DISTRIBUTION:  00..00..00..00..00
MIRROR POOL:        None

# lspv hdisk7
PHYSICAL VOLUME:    hdisk7                   VOLUME GROUP:     volume2
PV IDENTIFIER:      00f96d4dd08f1c69         VG IDENTIFIER     00f96d4d00004c000000014ed08f1ce3
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            1 megabyte(s)            LOGICAL VOLUMES:  0
TOTAL PPs:          10171 (10171 megabytes)  VG DESCRIPTORS:   2
FREE PPs:           10171 (10171 megabytes)  HOT SPARE:        no
USED PPs:           0 (0 megabytes)          MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  2035..2034..2034..2034..2034
USED DISTRIBUTION:  00..00..00..00..00
MIRROR POOL:        None


The 3PAR StoreServ Storage VVs will now be grown to different values for each of the exported
TPVVs:
# growvv vol1 205G
# growvv vol2 478G
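If desired, the new VV sizes can be confirmed from the array with the standard 3PAR CLI showvv command (a quick check only, not required by the procedure):

cli % showvv vol1 vol2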

NOTE: When TPVVs that are exported to VIO servers are grown, it is important to wait for a period of time before attempting to grow the AIX LPAR volume groups. In this case, a one-minute wait period was used. Wait times can vary depending on many factors on the VIOS; Hewlett Packard Enterprise testing has shown that two minutes is typically adequate.
When growing a VV that is exported to an LPAR served by VIO servers, it is advisable to stop any I/O in progress to the volume group being grown; otherwise, I/O stalls might be seen. An I/O stall is a period of time during which no I/O occurs to the VV being grown.
Following a wait as noted above, the volume groups can be changed on the AIX LPAR to reflect the new size:
# chvg -g volume1
0516-1712 chvg: Volume group volume1 changed. volume1 can include up to 1024 physical volumes with 262144 total physical partitions in the volume group.
# chvg -g volume2
0516-1712 chvg: Volume group volume2 changed. volume2 can include up to 1024 physical volumes with 524288 total physical partitions in the volume group.

The new hdisk sizes can be viewed using the bootinfo -s command, or the lspv command can be used to display the new volume group sizes:
# bootinfo -s hdisk6
217088
# bootinfo -s hdisk7
496640
# lspv hdisk6
PHYSICAL VOLUME:    hdisk6                     VOLUME GROUP:     volume1
PV IDENTIFIER:      00f96d4dd08caf79           VG IDENTIFIER     00f96d4d00004c000000014ed08cff97
PV STATE:           active
STALE PARTITIONS:   0                          ALLOCATABLE:      yes
PP SIZE:            1 megabyte(s)              LOGICAL VOLUMES:  0
TOTAL PPs:          217019 (217019 megabytes)  VG DESCRIPTORS:   2
FREE PPs:           217019 (217019 megabytes)  HOT SPARE:        no
USED PPs:           0 (0 megabytes)            MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  43404..43404..43403..43404..43404
USED DISTRIBUTION:  00..00..00..00..00
MIRROR POOL:        None

# lspv hdisk7
PHYSICAL VOLUME:    hdisk7                     VOLUME GROUP:     volume2
PV IDENTIFIER:      00f96d4dd08f1c69           VG IDENTIFIER     00f96d4d00004c000000014ed08f1ce3
PV STATE:           active
STALE PARTITIONS:   0                          ALLOCATABLE:      yes
PP SIZE:            1 megabyte(s)              LOGICAL VOLUMES:  0
TOTAL PPs:          496571 (496571 megabytes)  VG DESCRIPTORS:   2
FREE PPs:           496571 (496571 megabytes)  HOT SPARE:        no
USED PPs:           0 (0 megabytes)            MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  99315..99314..99314..99314..99314
USED DISTRIBUTION:  00..00..00..00..00
MIRROR POOL:        None


5 Removing 3PAR Devices on an AIX or IBM Virtual I/O Server Host

This chapter explains how to remove the 3PAR StoreServ Storage VVs from the AIX or IBM Virtual I/O Server host. Before physically disconnecting cables from the host or 3PAR StoreServ Storage, remove the VVs from each device in the following sequence:
1. AIX or IBM Virtual I/O Server host
2. 3PAR StoreServ Storage

NOTE: Performing a clean removal in this fashion ensures the hdisk entry is removed from the AIX device database so that if another LUN is exported in the future with the same LUN number and characteristics, a device mismatch does not occur on the AIX or IBM Virtual I/O Server host.

Removing FC Connected Devices on the Host


When removing 3PAR StoreServ Storage VVs from the AIX or IBM Virtual I/O Server host,
complete the following steps.

Removing FC Devices on the host for the AIX host

1. Locate and verify details of the VV by using the lsdev -Cc disk command.
2. Remove the hdisk definition from the AIX host by using the rmdev -dl hdiskN command.

   NOTE: Remove the VVs from the host before disconnecting the 3PAR StoreServ Storage from the host.

3. To ensure the VLUN is removed, use the lsdev -Cc disk command on the AIX CLI. For example:
# lsdev -Cc disk
hdisk0 Available 00-08-00 SAS Disk Drive
hdisk1 Available 07-01-01 3PAR InServ Virtual Volume
hdisk2 Available 07-01-01 3PAR InServ Virtual Volume
# rmdev -dl hdisk1
hdisk1 deleted
# lsdev -Cc disk
hdisk0 Available 00-08-00 SAS Disk Drive
hdisk2 Available 07-01-01 3PAR InServ Virtual Volume


Removing FC Devices on the host for the IBM Virtual I/O Server

Locate and verify details of the VV by using the lsdev -type disk command.

CAUTION: This procedure removes the mapped virtual defined device from an LPAR.

NOTE: If this device has been mapped by the IBM Virtual I/O Server to another LPAR, it is highly advisable to remove the mapping before attempting to remove the hdisk definition.

For example:
To remove hdisk22, first check to see whether hdisk22 is mapped to another LPAR by using the lsmap -all command. Scan the output for any backing device that shows up as hdisk22 and remove the mapping for this virtual vdev definition.
The backing device in this case for hdisk22 is as follows:
VTD              biglun
Status           Available
LUN              0x9600000000000000
Backing device   hdisk22
Physloc          U789C.001.DQD2174-P1-C1-T1-W22410002AC000044-L1F4000000000000

First, remove the mapping:


$ rmvdev -vdev hdisk22
biglun deleted

Then, remove the device:


$ rmdev -dev hdisk22
hdisk22 deleted


Removing FC Devices on the 3PAR StoreServ Storage


To remove a single exported VLUN from the AIX or IBM Virtual I/O Server host on the 3PAR
StoreServ Storage, complete the following steps.

Removing FC Devices on the Storage for the AIX host


1. Use the showvlun -host aixhost command on the 3PAR StoreServ Storage.
# showvlun -host aixhost
Active VLUNs
Lun VVName    HostName -Host_WWN/iSCSI_Name- Port  Type
  0 TESTLUN.0 aixhost  10000000C9759527      1:4:1 host
  1 TESTLUN.1 aixhost  10000000C9759527      1:4:1 host
  0 TESTLUN.0 aixhost  10000000C9759526      1:4:1 host
  1 TESTLUN.1 aixhost  10000000C9759526      1:4:1 host
  0 TESTLUN.0 aixhost  10000000C9759527      0:5:1 host
  1 TESTLUN.1 aixhost  10000000C9759527      0:5:1 host
  0 TESTLUN.0 aixhost  10000000C9759526      0:5:1 host
  1 TESTLUN.1 aixhost  10000000C9759526      0:5:1 host
  0 TESTLUN.0 aixhost  10000000C9759526      1:2:1 host
  1 TESTLUN.1 aixhost  10000000C9759526      1:2:1 host
  0 TESTLUN.0 aixhost  10000000C9759527      1:2:1 host
  1 TESTLUN.1 aixhost  10000000C9759527      1:2:1 host
  0 TESTLUN.0 aixhost  10000000C9759527      0:4:1 host
  1 TESTLUN.1 aixhost  10000000C9759527      0:4:1 host
  0 TESTLUN.0 aixhost  10000000C9759526      0:4:1 host
  1 TESTLUN.1 aixhost  10000000C9759526      0:4:1 host
  0 TESTLUN.0 aixhost  10000000C9759526      1:5:1 host
  1 TESTLUN.1 aixhost  10000000C9759526      1:5:1 host
  0 TESTLUN.0 aixhost  10000000C9759527      1:5:1 host
  1 TESTLUN.1 aixhost  10000000C9759527      1:5:1 host
-------------------------------------------------------
 20 total

VLUN Templates
Lun VVName    HostName -Host_WWN/iSCSI_Name- Port Type
  0 TESTLUN.0 aixhost  ----------------      ---  host
  1 TESTLUN.1 aixhost  ----------------      ---  host
------------------------------------------------------
  2 total

2. Use the removevlun -f TESTLUN.0 0 aixhost command on the 3PAR StoreServ Storage.

3. To verify that the VLUN is removed on the 3PAR StoreServ Storage, use the showvlun -host aixhost command.
# showvlun -host aixhost
Active VLUNs
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
1 TESTLUN.1 aixhost 10000000C9759527 1:4:1 host
1 TESTLUN.1 aixhost 10000000C9759526 1:4:1 host
1 TESTLUN.1 aixhost 10000000C9759527 0:5:1 host
1 TESTLUN.1 aixhost 10000000C9759526 0:5:1 host
1 TESTLUN.1 aixhost 10000000C9759526 1:2:1 host
1 TESTLUN.1 aixhost 10000000C9759527 1:2:1 host
1 TESTLUN.1 aixhost 10000000C9759527 0:4:1 host
1 TESTLUN.1 aixhost 10000000C9759526 0:4:1 host
1 TESTLUN.1 aixhost 10000000C9759526 1:5:1 host
1 TESTLUN.1 aixhost 10000000C9759527 1:5:1 host
------------------------------------------------------10 total
VLUN Templates
Lun VVName HostName -Host_WWN/iSCSI_Name- Port Type
1 TESTLUN.1 aixhost ---------------- --- host
-----------------------------------------------------1 total


Removing FC Devices on the Storage for the IBM Virtual I/O Server
1. Use the showvlun -host VIOS1 command on the 3PAR StoreServ Storage.
# showvlun -host VIOS1
Active VLUNs
Lun VVName            HostName -Host_WWN/iSCSI_Name- Port  Type
  0 VIOS1boot         VIOS1    10000000C9759841      3:5:1 host
 10 AIX61boot_client1 VIOS1    10000000C9759841      3:5:1 host
 50 VIOStest.0        VIOS1    10000000C9759841      3:5:1 host
 51 VIOStest.1        VIOS1    10000000C9759841      3:5:1 host
 52 VIOStest.2        VIOS1    10000000C9759841      3:5:1 host
 53 VIOStest.3        VIOS1    10000000C9759841      3:5:1 host

VLUN Templates
Lun VVName            HostName -Host_WWN/iSCSI_Name- Port Type
  0 VIOS1boot         VIOS1    ------------------    ---  host
 10 AIX61boot_client1 VIOS1    ------------------    ---  host
 50 VIOStest.0        VIOS1    ------------------    ---  host
 51 VIOStest.1        VIOS1    ------------------    ---  host
 52 VIOStest.2        VIOS1    ------------------    ---  host
 53 VIOStest.3        VIOS1    ------------------    ---  host

2. Use the removevlun -f VIOStest.3 53 VIOS1 command on the 3PAR StoreServ Storage.
3. To verify that the VLUN is removed on the 3PAR StoreServ Storage, use the showvlun -host VIOS1 command.
# showvlun -host VIOS1
Active VLUNs
Lun VVName            HostName -Host_WWN/iSCSI_Name- Port  Type
  0 VIOS1boot         VIOS1    10000000C9759841      3:5:1 host
 10 AIX61boot_client1 VIOS1    10000000C9759841      3:5:1 host
 50 VIOStest.0        VIOS1    10000000C9759841      3:5:1 host
 51 VIOStest.1        VIOS1    10000000C9759841      3:5:1 host
 52 VIOStest.2        VIOS1    10000000C9759841      3:5:1 host

VLUN Templates
Lun VVName            HostName -Host_WWN/iSCSI_Name- Port Type
  0 VIOS1boot         VIOS1    ------------------    ---  host
 10 AIX61boot_client1 VIOS1    ------------------    ---  host
 50 VIOStest.0        VIOS1    ------------------    ---  host
 51 VIOStest.1        VIOS1    ------------------    ---  host
 52 VIOStest.2        VIOS1    ------------------    ---  host

Removing the 3PAR MPIO for AIX Software


The 3PAR MPIO for AIX Software can be removed from an AIX or VIO Server.
To remove the software, first remove all devices presented from the 3PAR storage to the AIX or
VIO Server, as described in the sections above, and then remove the 3PARmpio.64 fileset using
smit. After the 3PARmpio.64 fileset is removed, reboot the AIX or VIO Server.
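For example, a quick check after the reboot that the fileset is gone (the fileset name is taken from this section; lslpp returns an error if the fileset is no longer installed):

# lslpp -l 3PARmpio.64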
If the IBM AIX or VIO Server contains a 3PAR SAN boot volume (rootvg), before removing the 3PAR MPIO for AIX Software, perform an alt_disk_copy to a non-3PAR disk (such as an internal SAS disk or other local disk), set this new non-3PAR disk as the boot device using bootlist -m normal <hdisk_device>, and then reboot the server. Once the server is booted from a non-3PAR volume, remove the 3PARmpio.64 fileset using smit, and then reboot the server after the software removal.
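For example, a minimal sketch of this sequence (hdisk0 is a placeholder for the internal, non-3PAR target disk):

# alt_disk_copy -d hdisk0
# bootlist -m normal hdisk0
# shutdown -Fr

The alt_disk_copy command clones the running rootvg to hdisk0, bootlist makes that disk the normal-mode boot device, and the restart brings the server up from the non-3PAR volume.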
If the software has already been removed with an active 3PAR SAN boot volume, reinstall the software and then follow the procedures above, or change the boot device's PdDvLn attribute from disk/fcp/3PAR_VV_MPIO to disk/fcp/mpioosdisk before installing another MPIO solution, such as the AntemetA 3PAR Peer Persistence Solution for IBM AIX Software.
NOTE: The 3PAR MPIO for AIX Software and the AntemetA 3PAR Peer Persistence Solution
for IBM AIX Software cannot co-exist on an IBM AIX or VIO Server. Only install one of these
MPIO solutions for 3PAR on an IBM AIX or VIO Server. If one of these MPIO solutions is already
installed, the software must be completely removed from the AIX or VIO Server before installing
the other solution.
The example below changes the PdDvLn attribute on a 3PAR SAN boot device, hdisk1:
# odmget -q name=hdisk1 CuDv > hdisk1.odm
1. Verify:
   # cat hdisk1.odm
   CuDv:
           name = "hdisk1"
           status = 0
           chgstatus = 3
           ddins = "scsidisk"
           location = "32-T1-01"
           parent = "fscsi1"
           connwhere = "W_2"
           PdDvLn = "disk/fcp/3PAR_VV_MPIO"
2. Edit the file, and then change the line PdDvLn = "disk/fcp/3PAR_VV_MPIO" to PdDvLn = "disk/fcp/mpioosdisk":
   # vi hdisk1.odm
3. Save the file.
4. Delete the odm entry for hdisk1:
   # odmdelete -q name=hdisk1 -o CuDv
   0518-307 odmdelete: 1 objects deleted.


5. Add the device using the modified PdDvLn:
   # odmadd hdisk1.odm
6. Verify:
# odmget -q name=hdisk1 CuDv > hdisk1.odm.new
# cat hdisk1.odm.new
CuDv:
name = "hdisk1"
status = 0
chgstatus = 3
ddins = "scsidisk"
location = "32-T1-01"
parent = "fscsi1"
connwhere = "W_2"
PdDvLn = "disk/fcp/mpioosdisk"


6 Using IBM HACMP 5.5 with AIX


Installing IBM HACMP
Hewlett Packard Enterprise supports HACMP 5.5 when using 3PAR ODM 3.1.0.0 for IBM AIX
using enhanced concurrent volume groups in multihost environments.
NOTE:

The use of non-concurrent volume groups is not supported.

Persistent reservation with HACMP 5.5 is not supported. Shared volume groups managed by
HACMP 5.5 and accessed through 3PAR MPIO must be enhanced concurrent mode.
There are no other special considerations for using HACMP. See the HPE 3PAR Multipath I/O 2.2 for IBM AIX User's Guide for additional information.
See IBM HACMP documentation for HACMP planning, setup and usage. These IBM HACMP
documents are of particular importance:

Concepts and Facilities Guide

Planning and Installation Guide

Administration Guide

To obtain this documentation, search at the IBM website:


http://www.ibm.com

HACMP Parameters for 3PAR Storage


There are no special settings required for IBM HACMP when using enhanced concurrent volume
groups that are different from settings specified by the IBM documentation for setting up enhanced
concurrent volume groups.


7 Using IBM PowerHA 7.1 and PowerHA 6.1 with AIX


Installing IBM PowerHA 7.1 or PowerHA 6.1
Hewlett Packard Enterprise supports PowerHA 7.1 (requires Fix Pack 3 minimum) and PowerHA
6.1 when using 3PAR ODM 3.1.0.0 for IBM AIX using enhanced concurrent volume groups in
multihost environments.
There are no other special considerations for using PowerHA 7.1 and PowerHA 6.1. See the
HPE 3PAR ODM 3.1 Readme for additional information.
See the IBM PowerHA 7.1 and PowerHA 6.1 documentation for planning, setup and usage.
These IBM PowerHA 7.1 and PowerHA 6.1 documents are of particular importance:

Concepts and Facilities Guide

Planning and Installation Guide

Administration Guide

PowerHA 7.1 and PowerHA 6.1 Parameters for 3PAR Storage


There are no special settings required for IBM PowerHA 7.1 and PowerHA 6.1 when using enhanced concurrent volume groups beyond those specified by the IBM documentation for setting up enhanced concurrent volume groups.


8 Booting from the 3PAR StoreServ Storage


For details about connecting the 3PAR StoreServ Storage to the host, see Configuring the 3PAR
StoreServ Storage for FC (page 7).

Setting the Host FC HBA Parameters for a SAN Boot


The IBM FC HBA does not require setting parameters on the host side to support booting from
the 3PAR StoreServ Storage.

Assigning LUNs as the Boot Volume


On the 3PAR StoreServ Storage, create a VV of appropriate size and export a VLUN to the host
definition that will be used to represent the AIX or IBM Virtual I/O Server host definition for booting
from a 3PAR StoreServ Storage.
For details, see Configuring the 3PAR StoreServ Storage Port Running 3PAR OS 3.2.x or 3.1.x
(page 7) and Exporting LUNs to the Host (page 32).
For the purpose of the AIX or IBM Virtual I/O Server installation, restrict the connection from the
host to the 3PAR StoreServ Storage to a single path. Only a single path should be available on
the 3PAR StoreServ Storage and a single path on the host to the VLUN VV that will be the AIX
or IBM Virtual I/O Server boot volume.

Installing the AIX or IBM Virtual I/O Server Host OS for a SAN Boot
Installation of the AIX 7.2, AIX 7.1, AIX 6.1, AIX 5.3, or IBM Virtual I/O Server operating system
is supported when it is booted from the 3PAR StoreServ Storage. Before installing AIX or the
IBM Virtual I/O Server operating system on a 3PAR StoreServ Storage VLUN VV, you must
remove or reduce paths to a single path to the 3PAR StoreServ Storage VLUN VV. After installing
AIX or the IBM Virtual I/O Server, install the 3PAR ODM 3.1.0.0 for AIX and then restart the AIX
or Virtual I/O server. With the 3PAR ODM 3.1.0.0 installed, multiple paths to the boot volume are
supported and can then be configured. Follow all recommended settings and guides as covered
in this document.
NOTE:

During the installation phase, restrict the number of paths to the 3PAR StoreServ Storage
AIX or IBM Virtual I/O Server boot volume to a maximum of one.

Once 3PAR ODM 3.1.0.0 for AIX or IBM Virtual I/O Server has been installed, it cannot be
de-installed for a host boot disk.


AIX or IBM Virtual I/O Server host boot is supported by the 3PAR StoreServ Storage. To install
the AIX or IBM Virtual I/O Server operating system on 3PAR StoreServ Storage volumes, complete
the following steps:
1. Configure the 3PAR StoreServ Storage for the AIX system. See Configuring the 3PAR
StoreServ Storage Port Running 3PAR OS 3.2.x or 3.1.x (page 7) and Exporting VLUNs
to the AIX or IBM Virtual I/O Server Host (page 33).
2. For AIX, perform the following tasks:
a. Enter the SMS Menu and choose the options to boot from the CD-ROM.
b. Follow the standard procedure as outlined by IBM for installing AIX on a bootable device.
For details, see IBM AIX documentation.
c. At the end of the installation options, prior to the OS install, change the installation
settings by selecting the 3PAR StoreServ Storage volume. Deselect any other installation
devices.
d. Install the AIX operating system on the selected 3PAR StoreServ Storage volume.
3. For the IBM Virtual I/O Server, perform the following tasks:
a. Select the 3PAR StoreServ Storage LUN from the IBM SMS Menu to contain the OS
image.
b. Follow the standard procedure as outlined by IBM for installing the IBM Virtual I/O Server
on a bootable device. For details, see IBM AIX documentation.
c. Install the IBM Virtual I/O Server operating system on the selected 3PAR StoreServ
Storage volume.
NOTE: After the initial installation, the 3PAR StoreServ Storage volume is configured with
the AIX default PCM.

4. On the AIX or Virtual I/O server, install the 3PAR ODM 3.1.0.0 for IBM AIX by completing the following tasks:

   NOTE: The commands within this section are performed from the IBM Virtual I/O Server oem_setup_env environment and are designated as starting with a "#" on the command line. To enter the oem_setup_env environment from the padmin user account, use the AIX CLI oem_setup_env command.

   NOTE: This installation must be performed by a user logged into the AIX system as the superuser or with root privileges.
a. Load the distribution CD containing the 3PAR MPIO/3PAR ODM 3.1.0.0 for IBM AIX into the CD drive.

   CAUTION: Do not connect the AIX or IBM Virtual I/O Server host to mixed HBA types on the 3PAR StoreServ Storage when using a direct connect mode. Boot failures or missing paths might result. Use only like HBA types on the 3PAR StoreServ Storage.

b. Use the smit update_all command to install the 3PAR MPIO/3PAR ODM 3.1.0.0 for IBM AIX from the distribution CD.
   Be sure to set the parameter ACCEPT new license agreements to Yes.
c. On the AIX CLI, use the bosboot -aDd /dev/ipldevice command.
   # bosboot -aDd /dev/ipldevice
   bosboot: Boot image is 25235 512 byte blocks.

5. Restart the AIX or IBM Virtual I/O Server host.

6. After the AIX or IBM Virtual I/O Server host completely boots and is online, connect additional paths to the fabric or directly to the 3PAR disk storage system by completing the following tasks:
   a. On the 3PAR StoreServ Storage, add the additional paths to the host definition already created. Use the 3PAR CLI createhost -add hostname WWN command to add the additional paths to the defined 3PAR StoreServ Storage host definition.
   b. On the AIX host CLI, execute the cfgmgr command. On the IBM Virtual I/O Server host CLI, in the oem_setup_env environment, execute the cfgmgr command.
   c. Verify that all paths appear on the 3PAR StoreServ Storage.
      For the AIX host:
      # showhost aixhost
      For the IBM Virtual I/O Server:
      # showhost VIOS

7. To add the additional paths to the boot device configuration, choose one of the following methods:

   Use the /usr/lib/methods/cfgefscsi -l fscsiX command on the AIX or IBM Virtual I/O Server CLI, where X is the additional path's FC SCSI controller protocol device.
   # /usr/lib/methods/cfgefscsi -l fscsi1
   hdisk0
   # /usr/lib/methods/cfgefscsi -l fscsi2
   hdisk0
   # /usr/lib/methods/cfgefscsi -l fscsi3
   hdisk0

   Use the cfgmgr -vl hdiskX command in the AIX or IBM Virtual I/O Server CLI, where X is the 3PAR StoreServ Storage volume for the host boot.
# cfgmgr -vl hdisk0
---------------attempting to configure device 'hdisk0'
Time: 0 LEDS: 0x626
invoking /usr/lib/methods/cfgscsidisk -l hdisk0
Number of running methods: 1
---------------Completed method for: hdisk0, Elapsed time = 0
return code = 0
****************** no stdout ***********
****************** no stderr ***********
---------------Time: 0 LEDS: 0x539
Number of running methods: 0
---------------calling savebase
return code = 0
****************** no stdout ***********
****************** no stderr ***********
Configuration time: 0 seconds


8. To verify that the AIX or IBM Virtual I/O Server host recognizes multiple paths, use the lspath -l hdiskX command, where X is the 3PAR StoreServ Storage volume for the host boot.
   # lspath -l hdisk0
   Enabled  hdisk0  fscsi0
   Enabled  hdisk0  fscsi1
   Enabled  hdisk0  fscsi2
   Enabled  hdisk0  fscsi3

9. Use the bosboot -aDd /dev/ipldevice command on the AIX or IBM Virtual I/O Server CLI.
10. Restart the AIX or the IBM Virtual I/O Server system.
All 3PAR StoreServ Storage VVs, including the selected 3PAR StoreServ Storage boot volumes
and any additional paths are now configured with the 3PAR PCM.


9 Configuring File Services Persona


3PAR File Persona
Starting with 3PAR OS 3.2.1 MU2, the 3PAR File Persona software is available. The 3PAR File
Persona software provides file services and access to file storage by network protocols such as:

Server Message Block (SMB)

Network File System (NFS)

Web Distributed Authoring and Versioning (WebDAV)

For information on supported 3PAR StoreServ Storage models and client configurations, see SPOCK (from SPOCK Home, under Explore Storage Interoperability With SPOCK, select Explore HPE 3PAR StoreServ Storage interoperability→Explore HPE 3PAR File Persona interoperability):
http://www.hpe.com/storage/spock
For a complete description of the 3PAR File Persona software, including required setup and guidelines, see the "Using the 3PAR File Persona software" chapter of the HPE 3PAR Command Line Interface Administrator's Manual, available at the Hewlett Packard Enterprise Storage Information Library:
http://www.hpe.com/info/storage/docs


10 Using Veritas Cluster Server with AIX Hosts


Veritas Cluster Server with AIX hosts requires the Veritas Array Support Library (ASL). No special
settings are required to use Veritas Cluster Server with 3PAR StoreServ Storage.
The 3PAR ODM 3.1.0.0 for Veritas VxDMP must be installed for correct identification of 3PAR
StoreServ Storage volumes. The 3PAR ODM 3.1.0.0 for Veritas VxDMP is available at the
Software Depot:
http://www.hpe.com/support/softwaredepot


11 Using Symantec Storage Foundation


As of 3PAR OS 3.1.2, the virtual volume (VV) WWN increased from 8 bytes to 16 bytes. The
increase in WWN length might cause the Symantec ASL to incorrectly identify the array volume
identification (AVID) number, subsequently resulting in use of a different naming convention for
DMP disk devices.
NOTE: This issue does not occur with Storage Foundation 6.1, which is compatible with both
8-byte and 16-byte WWNs.
The standard naming convention is as follows:
<enclosure_name><enclosure_number>_<AVID>
For example:
3pardata4_5876
3pardata4_5877
3pardata4_5878
If the VVs in use report a 16-byte WWN, the ASL extracts an AVID number of 0 for all VVs, and Symantec sequentially enumerates the DMP devices to generate unique DMP disk names. In this case, the resulting disk names would be:
3pardata4_0
3pardata4_0_1
3pardata4_0_2
The name scheme used does not impact DMP functionality. However, if you want the DMP name
to contain the VV AVID number, Symantec provides updated ASLs that will properly extract the
AVID number. If AVID naming is desired, use the following ASL versions:
Storage Foundation 5.1 (all): 3PAR ASL version 5.1.100.600 or later

Storage Foundation 6.0 to 6.0.4: ASL version 6.0.100.100 or later

Using Persistent Ports with Storage Foundation


3PAR OS 3.1.2 also introduced the Persistent Port feature to the 3PAR family of arrays. See
3PAR Persistent Ports for FC (page 17) for more information. For compatibility with Symantec
Storage Foundation, you must set the dmp_fast_recovery dmp tunable to off when a 3PAR
array is running 3PAR OS 3.1.2 or higher.
Use the vxdmpadm gettune dmp_fast_recovery command to view the current setting:
# vxdmpadm gettune dmp_fast_recovery
Tunable                Current Value    Default Value
------------------------------------------------------
dmp_fast_recovery      on               on

Use the vxdmpadm settune dmp_fast_recovery command to change the current setting (if needed):
# vxdmpadm settune dmp_fast_recovery=off
Tunable                Current Value    Default Value
------------------------------------------------------
dmp_fast_recovery      off              on


12 AIX Client Path Failure Detection and Recovery


This chapter explains how to set up the AIX client LPARs to enable client automatic path failure
detection and recovery in the event an IBM Virtual I/O Server goes down for any reason, as well
as considerations that should be given to AIX Clients directly connected to the 3PAR StoreServ
Storage.

AIX Client Automatic Path Failure Detection and Recovery


When one of the VIO servers goes down for any reason, the vscsi path coming from that server
shows as failed with the lspath command when executed from the virtualized AIX client.
# lspath
Failed hdisk0 vscsi0
Enabled hdisk0 vscsi1

Setting Auto Path Failure Detection and Recovery


Even though the VIO server comes back as available, the lspath command will still show one
of the paths as missing. For automatic path failure detection and recovery on the AIX client, set
the attributes hcheck_interval and hcheck_mode to 60 and nonactive, respectively. This
will cause a path failure to be detected automatically and recovered automatically once the path
has returned.
# chdev -l hdiskN -a hcheck_interval=60 -P
# chdev -l hdiskN -a hcheck_mode=nonactive -P

This task must be performed for each hdisk on the AIX client.
The AIX client will need to be restarted for the hcheck_interval attribute changes to take
effect.
As new disks are added to the AIX client, the command to set the hcheck_interval must be performed for each new hdisk added. The same is true if an hdisk is removed (using the rmdev -l hdiskN -d command) from the AIX client and added again later.
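For example, a minimal sketch that applies both attributes to every hdisk on the client in one pass (this assumes every hdisk on the client is a virtualized 3PAR-backed disk; adjust the device list if it is not):

# for d in $(lsdev -Cc disk -F name); do
>   chdev -l $d -a hcheck_interval=60 -a hcheck_mode=nonactive -P
> done

A restart is still required for the changes to take effect.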
For further information regarding path failure detection and recovery, see the IBM manual at the
IBM website:
https://www-304.ibm.com/webapp/set2/sas/f/vios/documentation/
configuring_mpio_for_the_virtual_client.pdf


Direct Connect AIX Client Considerations


FC direct-connect AIX clients attached to ports on the 3PAR StoreServ Storage require special
consideration when resetting the 3PAR StoreServ Storage port. If there was no I/O activity since
the 3PAR StoreServ Storage port reset occurred, the path to the AIX client might not be listed
by the 3PAR StoreServ Storage showhost command. The 3PAR StoreServ Storage will detect
the path after I/O activity occurs on the path, and then the showhost command will list the path.
See the following showhost command output examples:

The AIX client blues22 is an FC direct connect attached host with connections to 3PAR StoreServ Storage ports 0:2:4 and 1:2:4:
# showhost blues22
Id Name    Persona     -WWN/iSCSI_Name- Port
 3 blues22 AIX-legacy  10000000C9809802 0:2:4
                       10000000C9809803 1:2:4

The path is no longer listed by the showhost command after a reset occurs on the 3PAR StoreServ Storage port 0:2:4:
# showhost blues22
Id Name    Persona     -WWN/iSCSI_Name- Port
 3 blues22 AIX-legacy  10000000C9809802 ---
                       10000000C9809803 1:2:4

The showhost command lists the path to 0:2:4 again after some I/O activity occurs on the path:
# showhost blues22
Id Name    Persona     -WWN/iSCSI_Name- Port
 3 blues22 AIX-legacy  10000000C9809802 0:2:4
                       10000000C9809803 1:2:4


13 Migrating the IBM Virtual I/O Server


VIOS Migration Using the IBM Migration DVD
This section covers the precautions that must be adhered to if the IBM Virtual I/O Server is being migrated from a previous version of VIOS using the IBM migration DVD. This section applies only to IBM Virtual I/O Servers that have the VIOS boot disk located on the 3PAR storage array.
Hewlett Packard Enterprise strongly recommends backing up the VIOS system and configuration in case an issue occurs during the migration procedure.

Requirements for Migrating VIOS


Before starting the VIOS migration from a previous version of VIOS using the IBM migration DVD, reduce the number of paths to the VIOS boot disks on the 3PAR StoreServ Storage array to a single path. This can be accomplished by making the appropriate SAN zoning changes.

WARNING! Failure to reduce the paths to a single path before starting the migration might result in the inability to boot the VIOS from the 3PAR StoreServ Storage array once the migration has completed.

REQUIRED: Hewlett Packard Enterprise recommends backing up the VIOS system and system configuration in case an issue occurs during the migration procedure.

Once the number of paths has been reduced to a single path from the host to the 3PAR StoreServ Storage and from the fabric to the 3PAR StoreServ Storage volumes containing the VIOS boot disk and related system disks, proceed with the upgrade using the IBM Migration DVD.

Migrating from Previous VIOS Versions


Once the requirements called out in Booting from the 3PAR StoreServ Storage (page 51) have
been adhered to, follow the instructions covered in the IBM documentation on migrating your
IBM Virtual I/O Server. Install all mandatory updates as required that pertain to your environment
as described by IBM. IBM procedures to perform the VIOS migration are available in the
Configuring MPIO for the virtual AIX client document at the following IBM website:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html


Completing the VIOS Migration

1. When VIOS is booted from 3PAR StoreServ Storage, reduce the number of paths to the VIOS boot device and related system disks to a single path.
   For a direct connect environment, this will be from the host to the 3PAR StoreServ Storage.
   For a SAN environment, this will be from the fabric to the 3PAR StoreServ Storage volumes containing the VIOS boot disk and related system disks, and from the fabric to the VIOS hosts.

   WARNING! Failure to reduce to a single path might result in an inability to boot after migration.

2. Perform the migration using the IBM documentation.
3. Once the migration is complete, re-install 3PAR MPIO, making sure to use the following options:

   SOFTWARE to install                        3PARmpio
   AUTOMATICALLY install requisite software?  no
   OVERWRITE same or newer versions?          yes
   ACCEPT new license agreements?             yes

4. Perform the bosboot -aDd /dev/ipldevice command.
5. Restart the IBM Virtual I/O Server.
6. Add the additional paths to the VIOS system disks that were removed in Requirements for Migrating VIOS (page 60).
7. Use the bosboot -aDd /dev/ipldevice command.
8. Restart the IBM Virtual I/O Server.
9. Verify all paths as operational.


14 Cabling for IBM Virtual I/O Server Configurations


This chapter provides cabling and configuration details for connections between the 3PAR
StoreServ Storage and the IBM Virtual I/O Server.

Cabling and Configuration for Fabric Configurations (Dual VIO)


The following diagram shows the cabling and configuration for a fabric configuration:
Figure 1 Cabling and Configuration for Fabric Configurations


Cabling and Configuration for Direct Connect Configurations (Dual VIO)


The following diagram shows the cabling and configuration for a direct connect configuration:
Figure 2 Cabling and Configuration for Direct Connect Configurations


15 PowerVM Live Partition Mobility


IBM Live Partition Mobility (LPM) allows the migration of an active AIX partition from one physical
server to another. The partition can be powered off, or fully active with users logged in and I/O
in progress.
LPM is simply a function of the IBM Virtual I/O Server. To make an IBM Virtual I/O Server capable
of LPM, the servers must be licensed through the COD product enablement under PowerVM.
Additionally, when initially building an IBM Virtual I/O Server, the LPAR must be enabled as a
mover service partition. When building a mobile partition, the option to allow this partition to be
suspended must be enabled.
For detailed instructions about setting up this environment, see the IBM documentation on the
installation and configuration specifics of PowerVM.
All resources defined to a mobile partition must be virtualized. This includes disks presented to
the mobile partition by means of vSCSI or NPIV.
3PAR ODM must be installed on the IBM Virtual I/O Server that is hosting the mover service
partition. This package must be installed whether the IBM Virtual I/O Server is locally booted
from an internal hard drive or SAN booted from the 3PAR disk array. If N_Port ID Virtualization
(NPIV)-type disks are being used over virtualized FC on the mobile partition, the 3PAR ODM
must also be installed on the mobile partition.
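As a quick check that the 3PAR ODM package is present on both the mover service partition
and, for NPIV, the mobile partition, the lslpp command can be run from the oem_setup_env
shell; the fileset name returned depends on the ODM version installed:

# lslpp -l | grep -i 3par

If the command returns no output, install the 3PAR ODM package before attempting a migration.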
NOTE: All IBM Virtual I/O Server FC HBAs connected to the 3PAR StoreServ Storage array
must have both Dynamic Tracking and Fast_Fail enabled. Failure to set these options
might result in an error during pre-migration validation, especially if one of the FC paths has
failed.
To change the FC HBA options for Dynamic Tracking and Fast_Fail, enter the
oem_setup_env shell on each 3PAR-connected IBM Virtual I/O Server:

$ oem_setup_env

NOTE: An IBM Virtual I/O Server restart is required to make these changes permanent.


To set up Dynamic Tracking and Fast Fail on an IBM FC HBA, complete the following steps,
using the smit Devices menu:
1. Select FC Adapter.
2. Select FC SCSI I/O Controller Protocol Device.
3. Select Change/Show Characteristics of a FC SCSI Protocol Device.
4. Select the appropriate FC SCSI protocol device.
5. Set the following options:
   Dynamic Tracking of FC Devices to Yes
   FC Fabric Event Error RECOVERY Policy to FastFail
   Apply change to DATABASE only to Yes
6. Exit the shell.
7. Restart the IBM Virtual I/O Server.
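The same settings can also be applied from the command line instead of the smit menus. The
following is a sketch using the standard AIX chdev attributes for FC SCSI protocol devices;
fscsi0 is a placeholder, and the -P flag defers the change to the ODM database so that it takes
effect after the restart:

# chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P
# lsattr -El fscsi0 -a dyntrk -a fc_err_recov

Repeat for each 3PAR-connected fscsi device before restarting the IBM Virtual I/O Server.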

At this point, devices can be presented to the IBM Virtual I/O Server and exported to the mobile
LPAR by creating a vSCSI device mapped through a vhost.
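As an illustration, mapping a 3PAR-backed hdisk to a mobile LPAR's vhost adapter uses the
standard VIOS mkvdev command from the padmin shell; hdisk2, vhost0, and the device name
below are example values:

$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev mobile_lpar_disk

The lsmap -vadapter vhost0 command can then be used to confirm the new virtual target
device.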
There are no other 3PAR/SAN array-specific settings required to support LPM.
For information about supported versions of 3PAR ODM and PowerVM Virtual I/O Server, see
the appropriate interoperability information available on the SPOCK website:
http://www.hpe.com/storage/spock


16 Support and other resources


Accessing Hewlett Packard Enterprise Support

For live assistance, go to the Contact Hewlett Packard Enterprise Worldwide website:
www.hpe.com/assistance

To access documentation and support services, go to the Hewlett Packard Enterprise Support
Center website:
www.hpe.com/support/hpesc

Information to collect

Technical support registration number (if applicable)

Product name, model or version, and serial number

Operating system name and version

Firmware version

Error messages

Product-specific reports and logs

Add-on products or components

Third-party products or components

Accessing updates

Some software products provide a mechanism for accessing software updates through the
product interface. Review your product documentation to identify the recommended software
update method.

To download product updates, go to either of the following:

Hewlett Packard Enterprise Support Center Get connected with updates page:
www.hpe.com/support/e-updates

Software Depot website:
www.hpe.com/support/softwaredepot

To view and update your entitlements, and to link your contracts and warranties with your
profile, go to the Hewlett Packard Enterprise Support Center More Information on Access
to Support Materials page:
www.hpe.com/support/AccessToSupportMaterials
IMPORTANT: Access to some updates might require product entitlement when accessed
through the Hewlett Packard Enterprise Support Center. You must have an HP Passport
set up with relevant entitlements.


Websites

Website                                                   Link
Hewlett Packard Enterprise Information Library            www.hpe.com/info/enterprise/docs
Hewlett Packard Enterprise Support Center                 www.hpe.com/support/hpesc
Contact Hewlett Packard Enterprise Worldwide              www.hpe.com/assistance
Subscription Service/Support Alerts                       www.hpe.com/support/e-updates
Software Depot                                            www.hpe.com/support/softwaredepot
Customer Self Repair                                      www.hpe.com/support/selfrepair
Insight Remote Support                                    www.hpe.com/info/insightremotesupport/docs
Serviceguard Solutions for HP-UX                          www.hpe.com/info/hpux-serviceguard-docs
Single Point of Connectivity Knowledge (SPOCK)            www.hpe.com/storage/spock
Storage compatibility matrix
Storage white papers and analyst reports                  www.hpe.com/storage/whitepapers

Customer self repair


Hewlett Packard Enterprise customer self repair (CSR) programs allow you to repair your product.
If a CSR part needs to be replaced, it will be shipped directly to you so that you can install it at
your convenience. Some parts do not qualify for CSR. Your Hewlett Packard Enterprise authorized
service provider will determine whether a repair can be accomplished by CSR.
For more information about CSR, contact your local service provider or go to the CSR website:
www.hpe.com/support/selfrepair

Remote support
Remote support is available with supported devices as part of your warranty or contractual support
agreement. It provides intelligent event diagnosis and automatic, secure submission of hardware
event notifications to Hewlett Packard Enterprise, which will initiate a fast and accurate resolution
based on your product's service level. Hewlett Packard Enterprise strongly recommends that
you register your device for remote support.
For more information and device support details, go to the following website:
www.hpe.com/info/insightremotesupport/docs

Documentation feedback
Hewlett Packard Enterprise is committed to providing documentation that meets your needs. To
help us improve the documentation, send any errors, suggestions, or comments to Documentation
Feedback (docsfeedback@hpe.com). When submitting your feedback, include the document
title, part number, edition, and publication date located on the front cover of the document. For
online help content, include the product name, product version, help edition, and publication date
located on the legal notices page.


Index
Symbols
3PAR ODM
  additional modules available, 25
  additional settings, 26
  for AIX MPIO on AIX server, 22
  installing to use with Veritas, 28
  load balancing policies, 26
  verify installation, 24
  version update, 26

A
accessing
  updates, 66
allocating
  storage for access by AIX or IBM VIOS host, 31
Array Support Library see ASL
ASL
  Array Support Library, 56
assigning
  LUNs, 51

B
booting
  from storage, 51
  SAN, 51

C
cabling
  direct connect (Dual VIO), 63
  fabric (Dual VIO), 62
  for VIOS configurations, 62
checking
  host for current OS version, 19
CLI
  command line interface, 8
command line interface see CLI
configuring
  direct connect (Dual VIO), 63
  direct connection to host, 8
  fabric (Dual VIO), 62
  File Services Persona, 55
  ports, 7, 9
  ports for a direct connection, 8
  storage for FC, 7
  Veritas DMP multipathing, 29
connecting
  host with FC, 19
  to host, 12
contacting Hewlett Packard Enterprise, 66
creating
  storage, 31
  TDVVs, 32
  TPVVs, 32
  virtual SCSI devices for connected LPARs, 35
  VLUNs, 31
  VVs, 31
customer self repair, 67

D
data duplication, 32
deduplication, 32
deploying
  Virtual Connect Direct-Attach Fibre Channel storage, 7
devices
  removing, 42
  removing FC on host, 42
  removing FC on storage, 44
  scanning for new on host, 34
direct connect
  configuring, 63
documentation, 6
  providing feedback on, 67

F
fabric
  configuring, 62
  setting up for FC, 12
  zoning for FC, 12
failures
  client auto path detection and recovery, 58
  detection and recovery, 58
FC
  configuring storage, 7
  connecting host with, 19
  creating host definition, 10
  display IBM HBA firmware and driver versions, 20
  display IBM HBA WWNs, 21
  guidelines for FC switch vendors, 15
  host connection, 12
  persistent port setup, 18
  Persistent Ports, 17
  Priority Optimization, 17
  removing devices on host, 42
  removing devices on storage, 44
  reservation policy, 29
  set IBM HBA host parameters, 21
  setting up fabric, 12
  target port limits, 16
  target port specifications, 16
  zoning fabric, 12
features
  3PAR Express Scripts, 18
  3PAR Persistent Ports for FC, 17
  3PAR Priority Optimization for FC, 17
  File Persona, 55
  Smart SAN, 14
  Virtual Connect Direct-Attach Fibre Channel, 7
Fibre Channel see FC

H
HACMP
  installing, 49
  parameters, 49
HBA
  set up IBM FC, 19
HBAs
  display IBM FC firmware and driver versions, 20
  display IBM FC WWNs, 21
  FC, 51
  set FC host parameters, 21
host
  connecting with an FC reservation policy, 29
  creating definition for FC, 10
  installing OS for a SAN boot, 51
  setting FC HBA parameters for SAN boot, 51
host persona
  1, 11
  11, 10
  2, 11
  6, 10
  8, 10

I
IBM FC HBA
  auto-detect topology, 21
  installing, 19
installing
  3PAR ODM for AIX MPIO, 22
  HACMP, 49
  host OS for a SAN boot, 51
  IBM FC HBA, 19
  PowerHA, 50
  Veritas DMP multipathing modules, 29
IOPS, 17

L
Live Partition Mobility see LPM
logical partitions see LPARs
LPARs, 35
  logical partitions, 39
LPM
  Live Partition Mobility, 64
LUNs
  exporting to host, 32
  marked as offline after an upgrade, 5

M
migrating
  active partition, 64
  IBM VIOS, 60
  VIOS requirements, 60
  VIOS using DVD, 60

O
OS
  expected behavior, 17

P
Persistent Ports
  connectivity guidelines for FC, 18
  for FC, 17
  setting up for FC, 18
ports
  3PAR Persistent Ports for FC, 17
  configure, 7
  configuring, 9
  direct connection, 8
  direct connection with 8 GB host adapters, 8
  FC target port limits, 16
  FC target port specifications, 16
PowerHA
  installing, 50
  parameters, 50
  using, 50

R
remote support, 67
removing
  devices, 42
  FC devices on host, 42
  FC devices on storage, 44
restrictions
  volume size and number, 33

S
scanning
  for new devices on host, 34
setting
  auto path failure detection and recovery, 58
  host FC HBA parameters for SAN boot, 51
  IBM FC HBA host parameters, 21
setting up
  IBM FC HBA, 19
Smart SAN
  FC, 14
SPOCK
  Storage Single Point of Connectivity Knowledge, 5
storage
  booting, 51
  configuring for FC, 7
  creating, 31
  creating TDVV, 32
  creating VVs, 31
support
  Hewlett Packard Enterprise, 66

T
TDVV
  creating, 32
  thinly deduplicated virtual volumes, 32
thinly deduplicated virtual volumes see TDVV
thinly provisioned virtual volume see TPVV
TPVV
  creating, 31
  thinly provisioned virtual volume, 32

U
updates
  accessing, 66
upgrading, 5
  3PAR ODM 3.1.1.0 update for AIX MPIO, 26
  considerations, 5
using
  Persistent Ports with Storage Foundation, 57
  PowerHA, 50
  Symantec Storage Foundation, 57
  Veritas cluster server, 56

V
Veritas
  ASL, 56
  cluster server, 56
  configuring DMP multipathing, 29
  install DMP multipathing modules, 29
  use with 3PAR ODM, 28
VIOS
  complete migration, 61
  migrating, 60
  migrating from previous versions, 60
  migration requirements, 60
  migration using DVD, 60
  Virtual I/O Server, 5
virtual
  LUN, 32
  see also VLUN
Virtual Connect Direct-Attach Fibre Channel
  storage, 7
Virtual I/O Server see VIOS
virtual volumes see VVs
VLUNs
  creating, 31
  exporting to AIX or IBM VIOS host, 33
  virtual LUNs, 32
volumes
  number restrictions, 33
  size restrictions, 33
VVs, 32
  creating, 31
  exported to LPARs, 39
  fully provisioned, 31
  thinly deduplicated, 32
  thinly provisioned, 31
  virtual volume, 31
  virtual volumes, 22

W
websites, 67
  customer self repair, 67

Z
ZFS
  using with deduplication, 32