
Improving I/O performance on

IBM eServer pSeries servers

Ravi Singh
Technical Sales Specialist – System p
Southfield, MI
email: rsingh@us.ibm.com

TABLE OF CONTENTS

1. Introduction
2. Striping and maximum allocation policy
3. Comparison of striping and max. allocation policy
4. Filesystems and JFS logs
5. Creating VGs, LVs and Filesystems
6. I/O Performance Tuning
7. Guidelines
8. Example
9. References

Introduction

1.1. Pre-Requisites
The reader of these guidelines is expected to have a basic understanding of AIX LVM and LVM
commands.

1.2. AIX levels


The discussion here applies to the following AIX releases:
 AIX 4.3.3
 AIX 5.1
 AIX 5.2

1.3. Planning
AIX LVM provides multiple features and options for creating logical volumes and filesystems. It is
good practice to spend some time collecting information and discussing the data layout before
installing and customizing a new server. Useful information includes:

1. Are the disks striped at H/W level?


2. Will the data be striped across multiple filesystems by the dB (UDB, Sybase or Oracle)?
3. Are the disks RAID protected (either 1, 5 or 10)?
4. Filesystems required, with mount points and sizes.
5. Filesystem usage: Data, log, temp, dump and application binaries.

If the answers to (1) and (2) are no, then consider either striping or creating LVs with the
maximum allocation policy at the AIX LVM level.

This helps reduce I/O performance problems, which typically show up as the first bottleneck after
the dB is built, the data is loaded, and the system is rolled into production.

The discussion here assumes the following:

 Third party disks are mirrored for availability


 Disks are not striped.
 Multiple paths are configured for adapter availability and load balancing
 Database does not stripe the data across filesystems or containers or LVs.

1.4. Disclaimer
The discussion here should be used as a set of guidelines and does not guarantee optimum
performance. Server performance varies and depends on many factors, including varying peak
load, system hardware configuration, the software installed and tuned, the applications installed
and configured, I/O and network performance, and the tuning of hardware, OS and applications
for optimum performance. The method suggested here is one alternative for improving I/O
performance and does not by itself guarantee a given level of system performance.

1.5. Abbreviations used


VG: Volume Group
LV: Logical Volume
PV: Physical Volume
PP: Physical Partition
LP: Logical Partition

Striping and Maximum allocation policy

2.1. Logical Volume Striping


2.1.1. Concept
Striping is a technique for spreading the data in a logical volume across several physical
volumes in such a way that the I/O capacity of the physical volumes can be used in parallel to
access the data. This functionality was not offered before AIX Version 4, and the combination of
striping and mirroring was introduced in AIX Version 4.3.3. The primary objective of striping is
very high-performance I/O for large sequential files.

In an ordinary logical volume, the data addresses correspond to the sequence of blocks in the
underlying physical partitions. In a striped logical volume, the data addresses follow the sequence
of stripe units. A complete stripe consists of one stripe unit on each of the physical volumes that
contain part of the striped logical volume. The LVM determines which physical block on which
physical volume corresponds to the block being read or written. If more than one physical volume
is involved, the necessary I/O operations are scheduled simultaneously.

2.1.2. Logical Partitions mapping scheme


In a striped logical volume, all the logical partitions are mapped to physical partitions on these
physical volumes in a round-robin fashion. For example, logical partition 1 is mapped to
physical partition 1 on the first physical volume (PV1), logical partition 2 is mapped to
physical partition 1 on the second physical volume (PV2), and so on. In other words, the physical
partitions are equally allocated on each physical volume. This logical-partition-to-physical-partition
mapping scheme is achieved by setting the inter-physical volume allocation policy to maximum.

2.1.3. Physical partitions


When your applications access striped logical volumes, the storage area of each physical
partition is not used contiguously. These physical partitions are divided into chunks. The chunk
size may be 4 KB, 8 KB, 16 KB, 32 KB, 64 KB, or 128 KB. This size is determined at the creation
of the striped logical volume and cannot be changed afterwards. The chunk size is also called the
stripe unit size or stripe length. The number of physical volumes that accommodate the striped
logical volume is also called the stripe width.

2.1.4. Write phase


In the write phase, a caller (such as the journaled file system device driver) issues one write I/O
request to the AIX LVM device driver (step 1). This request is chopped into several chunks (step
2), and they are written to each physical disk in parallel (step 3). Since these writes (for example,
to chunks 1-1, 2-1, and 3-1) go to separate drives, they are executed in parallel, which improves
write performance compared to a single drive. Subsequently, if all the writes to each physical
volume return with no error (step 4), the LVM device driver returns success to the caller (step 5);
otherwise, it returns an error to the caller.

2.1.5. Read phase


In the read phase, a caller (such as the journaled file system device driver) issues one read I/O
request to the AIX LVM device driver (step 1). This request is split into several read calls for
chunks (step 2), and these smaller reads are sent to each physical disk. Since these reads go to
separate drives, they are executed in parallel (step 3), which improves the read performance
compared to a single physical drive.

2.2. Inter physical volume allocation policy (Range of Physical Volumes)
The inter-physical volume allocation policy is one of the attributes of the logical volumes. It
stipulates how many physical volumes are used for the allocation of the physical partitions for the
logical volume.

Minimum: If PVs are not specified while creating an LV, the system uses the minimum number of
PVs needed to create the LV, and the PPs are allocated contiguously, filling one PV before moving
to the next. By default the policy is set to m (minimum) to minimize the number of physical
volumes used for the allocation of physical partitions to a logical volume.

Maximum: This spreads the LV across multiple PVs. The policy is specified with the value x
(maximum) to use all the available PVs in the VG; alternatively, a set of PVs can be specified while
creating the LV.

To check the inter-physical volume allocation policy of a logical volume, use the lslv
command; the relevant line of the output is shown below:

………..
INTER-POLICY: minimum
………..
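
For example, assuming an existing logical volume named datalv1 (a placeholder name used only
for illustration), the policy can be checked with:

lslv datalv1 | grep INTER-POLICY

For an LV created with the maximum allocation policy, this prints INTER-POLICY: maximum.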

2.3. Logical Volume with Maximum Allocation Policy


At first glance, this looks like a striped logical volume. However, there is a distinct difference
between this logical volume and a striped logical volume: there is no chunk-level I/O striping
benefit. This physical volume allocation also does not provide as much I/O performance gain as
striping. The configuration is still beneficial if striping is not implemented, and it is called a poor
man's stripe (it is not an actual striped logical volume).

This spreads an LV across multiple PVs in chunks of the PP size (4 MB, 8 MB, 16 MB, 32 MB,
64 MB and so on), while striping spreads data at the stripe unit size (4 KB, 8 KB, ... 128 KB).
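
As a minimal sketch of creating such an LV from the command line (the VG, LV and disk names
below are placeholders; fuller examples appear in section 5.2.2 and chapter 8):

mklv -y poorlv -e x datavg 30 hdisk2 hdisk3 hdisk4

Here -e x sets the inter-physical volume allocation policy to maximum, so the 30 PPs are
allocated round-robin across hdisk2, hdisk3 and hdisk4.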

2.4. Pre-defined allocation policy


In a striped logical volume, the inter-physical volume allocation policy is forcibly pre-defined as
maximum to achieve the logical partitions mapping across multiple PVs. It cannot be changed
after the initial creation. If you try to force this value, then you would see the error message
described in section 3.3 “Prohibited options of the mklv command with striping”.

2.5. The reorganization relocation flag


The reorganization relocation flag is also one of the attributes of the logical volumes. It stipulates
whether the logical volume can be relocated or not. For a non-striped logical volume, it is set to y
(relocatable) to allow the relocation of the logical volume during the reorganization process.
For striped logical volumes, the Relocate parameter must be set to n (the default for striped logical
volumes). This means that the striped logical volumes are never reorganized during the
execution of the reorgvg command. If a striped logical volume was reorganized, then the
physical partitions allocation policy might not fit the logical partitions mapping scheme.

2.6. Reorganization and migration


Reorganization and migration of physical partitions are two different tasks and shouldn’t be
confused. The reorganization (executed by the reorgvg command) is a procedure that repacks
the actual location of physical partitions in one physical volume to generate contiguous allocation
of the physical partitions. The migration of physical partitions (executed by the migratepv
command) is a procedure that moves the entire physical partition from one physical volume to
another within a volume group. This reorganization relocation flag does not prohibit the migration
of the physical disk that accommodates the striped logical volume.

Comparison of striped and max. allocation policy LVs

3.1. Creation and extension of the striped logical volumes


This section provides some information and guidelines about the creation and extension of the
striped logical volumes.

 Specify a multiple of the stripe width as the number of logical partitions during the initial creation of
the striped logical volume.
 You cannot change the stripe width after the initial creation.
 Extension is made in multiples of the stripe width.

3.1.1. Initial creation


When you create a striped logical volume, you can only specify a number of logical
partitions that is a multiple of the stripe width. For example, if you attempt to create a striped logical
volume on three physical volumes, you can only specify multiples of three as the number of
logical partitions. If you specify any other value, it is rounded up to the next multiple of the width.
If the number of logical partitions is specified as two, it is rounded up to three.

The allocated logical partitions must be located equally on every physical volume composing
the striped logical volume. Suppose we have a volume group composed of two physical volumes:
one is 4.5 GB (named hdisk2) and the other 9.1 GB (named hdisk3). If we choose 8 MB as the
physical partition size for this volume group, there are 537 PPs on hdisk2 and 1084 PPs on hdisk3.

In this example, if we create a striped logical volume in this volume group, only 1074 logical
partitions (537 physical partitions per physical volume) can be allocated to it; an attempt to
allocate more will fail. The remaining space on hdisk3 cannot be used in the striped logical
volume. You can still use this space for non-striped logical volumes, but doing so can affect the
I/O performance of the striped logical volume.
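
As a sketch (the LV, VG and disk names are placeholders), a striped logical volume with a 64 KB
stripe unit size across three equally sized disks could be created as follows:

mklv -y stripelv -S 64K datavg 30 hdisk2 hdisk3 hdisk4

The 30 logical partitions requested are a multiple of the stripe width (three disks), as required.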

3.1.2. Cannot change the stripe width


You cannot add new physical volumes to an already created striped logical volume. To do this,
you have to recreate it (back it up and restore it). Because the actual data is already striped on
the physical volumes, if new physical volumes were added to the striped logical volume, all the
data residing on the existing physical volumes would have to be re-striped onto the physical
partitions of the newly added physical volume.

In other words, you cannot change the stripe width of an existing striped logical volume.

3.2. Extension is made by stripe width base


To extend a striped logical volume, you have to specify a multiple of the stripe width as the
number of logical partitions; otherwise the attempt to extend the striped logical volume will fail
(the extension procedure itself is not covered here). For example, if a striped logical volume is
spread over three physical volumes, the stripe width is three, so you have to specify a multiple
of three as the number of logical partitions for the extension.
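
For instance, for a hypothetical striped LV named stripelv spread over three disks, a valid
extension would be:

extendlv stripelv 6

whereas extendlv stripelv 4 would fail, because 4 is not a multiple of the stripe width.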

3.3. Prohibited options of the mklv command with striping


To create the striped logical volume, you must use the -S option of the mklv command. This
optional flag specifies that the logical volume is designated as a striped logical volume. There is a
fundamental difference between the mirroring and striping function, of course, other than the
functionality itself. You can always mirror an existing non-mirrored logical volume (including a
striped logical volume from AIX Version 4.3.3), and also remove the mirror from the mirrored
logical volumes. But, you cannot convert a non-striped logical volume to a striped logical volume,
or vice versa. The only way to create the striped logical volumes is to explicitly specify the -S flag
on the mklv command line or use the corresponding SMIT panel.

The AIX LVM provides many ways to control the physical partitions allocation of the logical
volumes. They are forced by optional flags of the mklv command. But, if you attempt to create a
striped logical volume, some of these optional flags cannot be used with the -S flag.

In AIX Version 4.3.3, prohibited optional flags with -S are -e, -d, -m, -s.

3.4. Prohibited options of the extendlv command with striping


Some limitations also apply to the extendlv command, used to extend the logical volume size. If
you attempt to extend the striped logical volumes, the following two command options are
prohibited.

 -m mapfile option
 Physical volume names

3.5. Guidelines for striped LVs


It is recommended that you keep in mind the following rules.

 Due to the lack of precise control for the physical partitions allocation, you may not place
the striped logical volumes as you want. The best way to avoid this situation is to
dedicate a volume group that accommodates the striped logical volumes. It also benefits
the I/O performance.
 If you create a striped logical volume, you should choose disks that have the same
characteristics (especially size). Otherwise, you cannot use the entire surface of the
physical volumes for the striped logical volumes.
 If you cannot avoid using different sized physical volumes for creating the striped logical
volumes, and you have to use the rest of the space of the larger physical volume(s), you
should minimize the I/O activity on that portion. Otherwise, that activity might affect the
striped logical volume I/O performance and you might lose the benefit of the striped
logical volumes.

3.6. Advantages of a Logical Volume with maximum allocation policy


An LV created with the maximum allocation policy across multiple disks does not have the limitations
described in sections 3.1 to 3.5. A filesystem created on it can be expanded later without restriction,
as discussed in the later chapters. The major advantage of an LV with the max. allocation policy is
that it is easier to create, manage, migrate, mirror and expand than a striped LV, although the
performance gain is not as large as with a striped LV. However, the performance gain compared to
an LV created contiguously (without the -ex flag in the mklv command) across multiple PVs is significant.

An LV created with this policy spreads across multiple PVs in chunks of one PP (the PP size is
specified while creating a VG), and the PPs are allocated round-robin across all the disks specified
on the mklv command. Hence, as the data grows across multiple PVs and is accessed randomly,
multiple disks perform I/O, thereby boosting performance. In a sequential access situation with
multiple dB instances running on the same server, contention for the same disk by the I/O requests
of different dB instances is reduced because the data is spread across multiple PVs.

Filesystems and JFS logs

4.1. The AIX journaled file system


A file system is a hierarchical structure (file tree) of files and directories. This type of structure
resembles an inverted tree with the root at the top and branches at the bottom. The branches
represent the directories and the leaves represent files. This file tree uses directories to organize
data and programs into groups, allowing the management of several directories and files at one
time.

A file system is a set of files, directories, and other structures. File systems maintain information
and identify where a file or directory's data is located on the disk. Besides files and directories, file
systems consist of:

 The superblock
 The i-nodes
 The data blocks
 The allocation bitmaps

Journaled file systems are created on top of logical volumes.

4.2 The JFS structure


This section describes the components of a file system. In addition to directories and files, there
are other elements that make the retrieval of information possible and efficient.

4.2.1 The Superblock


The superblock maintains information about the entire file system. The superblock is 4096 bytes
in size and starts at byte offset 4096 on the logical volume. It includes the following fields:

 Size of the file system


 Number of data blocks in the file system
 A flag indicating the state of the file system
 Allocation group sizes

The dumpfs command shows you the superblock, the i-node map, and the disk map
information for the file system or special device specified.

4.2.2 Logical blocks


A logical block contains file or directory data. These units are 4096 bytes in size. Logical blocks
are not tangible entities; however, the data in a logical block consumes physical storage space on
the disk. Each file or directory consists of 0 or more logical blocks. Fragments, as opposed to
logical blocks, are the basic units for allocated disk space in the journaled file system (JFS). Each
logical block allocates fragments for the storage of its data.

4.2.3 Disk i-nodes


Each file and directory has an i-node that contains access information such as file type, access
permissions, user ID and group ID (UID & GID), and number of links to that file. These i-nodes
also contain addresses for finding the location on the disk where the data for a logical block is
stored. Each i-node has an array of numbered sections. Each section contains an address for
one of the file or directory's logical blocks. These addresses indicate the starting fragment and the
total number of fragments included in a single allocation. For example, a file with a size of 4096
bytes has a single address on the i-node's array. Its 4096 bytes of data are contained in a single
logical block. A larger file with a size of 6144 bytes has two addresses. One
address contains the first 4096 bytes and a second address contains the remaining 2048 bytes (a
partial logical block). If a file has a large number of logical blocks, the i-node does not contain the
disk addresses. Instead, the i-node points to an indirect block that contains the additional
addresses.

The number of disk i-nodes available to a file system depends on the size of the file system, the
allocation group size (8 MB by default), and the ratio of bytes per i-node (4096 by default). These
parameters are given to the mkfs command at file system creation. When enough files have
been created to use all the available i-nodes, no more files can be created, even if the file system
has free space. The number of available i-nodes can be determined by using the df -v command.

4.2.4 Disk i-node structure


Each disk i-node in the journaled file system is a 128-byte structure. The offset of a particular
i-node within the i-node list of the file system produces the unique number (i-number) by which
the operating system identifies the i-node. A bit map, known as the i-node map, tracks the
availability of free disk i-nodes for the file system.

4.2.5 i-node addressing


The JFS uses the indirect blocks to address the disk space allocated to larger files. Indirect
blocks allow greater flexibility for file sizes. The indirect block is assigned using the i_rindirect field
of the disk i-node. This field allows three methods for addressing the disk space:
 Direct
 Single indirect
 Double indirect

The exact same methods are used to address disk space in compressed and fragmented file
systems.

4.2.6 Fragments
The journaled file system fragment support allows disk space to be divided into allocation units
that are smaller than the default size of 4096 bytes. Smaller allocation units or fragments
minimize wasted disk space by more efficiently storing the data in a file or directory's partial
logical blocks. The functional behavior of journaled file system fragment support is based on that
provided by Berkeley Software Distribution (BSD) fragment support. Similar to BSD, the JFS
fragment support allows users to specify the number of i-nodes that a file system has.

4.2.7 Fragments and number of bytes per i-node (NBPI)


The fragment size for a file system is specified during its creation. The allowable fragment sizes
for journaled file systems are 512, 1024, 2048, and 4096 bytes. For consistency with previous
versions of AIX, the default fragment size is 4096 bytes. Different file systems can have different
fragment sizes, but only one fragment size can be used within a single file system. Different
fragment sizes can also coexist on a single system (machine) so that users can select the
fragment size most appropriate for each file system.

4.2.7.1 Specifying fragment size and NBPI


Fragment size and the number-of-bytes-per-i-node (NBPI) value are specified during the file
system's creation with the crfs and mkfs commands or by using the System Management
Interface Tool (SMIT). The decision of fragment size and how many i-nodes to create for the file
system should be based on the projected number of files contained by the file system and their
size.
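
As a sketch (the VG name, mount point and size are placeholders), a JFS filesystem intended to
hold many small files could be created with a smaller fragment size and NBPI:

crfs -v jfs -g datavg -m /smallfiles -a size=2097152 -a frag=1024 -a nbpi=2048

Here size is given in 512-byte blocks (1 GB in this example), frag sets the fragment size to 1024
bytes, and nbpi sets the number of bytes per i-node to 2048.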

4.2.7.2 Identifying fragment size and NBPI
The file system fragment size and the number-of-bytes-per-i-node (NBPI) value can be identified
through the lsfs command or the System Management Interface Tool (SMIT). For application
programs, the statfs subroutine can be used to identify the file system fragment size.
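
For example, for a hypothetical filesystem /data1:

lsfs -q /data1

The -q flag queries the superblock and reports, among other things, the fragment size and NBPI
in addition to the normal lsfs output.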

4.2.8 File types


This section describes the types of files you can encounter in the AIX operating system.
 A regular file
 A special file
 A symbolic link
 A FIFO
 A directory
 A socket
 A sparse file (can be regular)

4.3. Concepts of JFSLOG


AIX uses a special logical volume called the log device as a circular journal for recording
modifications to file system meta-data. File system meta-data include the superblock, i-nodes,
indirect data pointers, and directories. When meta-data is modified, a duplicate transaction is
made to the JFS log. When a sync() / fsync() occurs, commit records are written to the JFS log to
indicate that modified pages in memory have been committed to disk.

The following are examples of when JFS log transactions occur:
 a file is being created or deleted
 a write() occurs for a file opened with O_SYNC
 fsync() or sync() is called
 a file is opened with O_APPEND
 a write causes an indirect or double-indirect block to be allocated

The use of a JFS log allows for rapid and clean recovery of file systems if a system goes down.
However, there may be a performance trade-off here. If an application is doing synchronous I/O
or is creating and/or removing many files in a short amount of time, then there may be a lot of I/O
going to the JFS log logical volume. If both the JFS log logical volume and the file system logical
volume are on the same physical disk, then this could cause an I/O bottleneck. The
recommendation would be to migrate the JFS log device to another physical disk. Information
about I/Os to the JFS log can be recorded using the filemon command. If you notice that a file
system and its log device are both heavily utilized, it may be better to put each one on a separate
physical disk (assuming that there is more than one disk in that volume group). This can be
done using the migratepv command or via SMIT.
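
As a sketch, assuming the log logical volume is named loglv00 and is to be moved from a busy
hdisk1 to a less busy hdisk2 in the same volume group:

migratepv -l loglv00 hdisk1 hdisk2

The -l flag restricts the migration to the physical partitions belonging to that one logical volume.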

Two things can be done to keep the jfslog from becoming a performance bottleneck. The first is
to increase the size of the jfslog, and the second is to create more than one jfslog per volume
group.

4.4. Increasing the JFS Log Size


With the larger filesystem sizes available in current versions of AIX, the number of concurrent
transactions within a filesystem can be expected to increase. When this happens, a large amount
of writing can take place in the jfslog.

By default, a jfslog is created with one logical partition in a volume group. When the rate of
writes to the jfslog increases beyond a threshold, so that there is not enough time to commit these
log entries, any further writes are suspended. They remain pending until all the outstanding writes
are committed and the meta-data and jfslog are in sync with each other.

If the jfslog is made bigger than the default, I/O can continue to proceed because the jfslog wrap
threshold would not be reached as easily. The steps taken to increase the jfslog would be as
follows:

1. Backup the file system.


2. Create the jfslog logical volume with two logical partitions now, instead of one.

# mklv -t jfslog -y LVname VGname 2 PVname

where LVname is the name of the jfslog logical volume, VGname is the name of the volume group
on which it is to reside, and PVname is the hdisk name on which the jfslog is to be located.

3. When the jfslog logical volume has been created, it has to be formatted:

# /usr/sbin/logform /dev/LVname

4. The next step is to modify the affected filesystem or filesystems and the logical volume control
block (LVCB).

# chfs -a logname=/dev/LVname /filesystemname

5. Finally, unmount and then mount the affected file system so that this new jfslog logical volume
can be used.

# unmount /filesystemname; mount /filesystemname

4.5. Creating more than one jfslog for a Volume Group


By default, one jfslog is created per volume group containing journaled filesystems. Sometimes,
because of heavy transactions taking place in more than one file system in this volume group, it
may be beneficial from a performance standpoint to create more jfslogs so that there would be
less sharing, and thus less resource contention, for them.

The steps outlined in the earlier section can be used to create these additional jfslogs. Where
possible, the jfslogs should be created on disks of relatively low activity so as to free the disk
resources to focus on the logging activity.

It is good practice to create one jfslog for each big filesystem, each on a separate PV. If
datavg has 16 disks, you can create loglv01 through loglv16, one on each of these disks, and
assign one jfslog to each filesystem when the filesystem is created.

4.6. Ramdisk filesystem


A ramdisk is created out of memory and can be used as a temporary storage area; a filesystem
can be created on the ramdisk. A good use for such a filesystem is database temp space, as its
contents are not useful and not required once the dB is stopped.

4.6.1. Command Description


The mkramdisk command is shipped as part of bos.rte.filesystems, which allows the user to
create a RAM disk. Upon successful execution of the mkramdisk command, a new RAM disk is
created, a new entry is added to /dev, the name of the new RAM disk is written to standard output,
and the command exits with a value of 0. If the creation of the RAM disk fails, the command prints
an error message and exits with a nonzero value.

The size can be specified in terms of MB or GB. By default, it is in 512 byte blocks. Suffix M will
be used to specify size in megabytes and G to specify size in gigabytes.

The names of the RAM disks are in the form of /dev/rramdiskx where x is the logical RAM disk
number (0 through 63). The mkramdisk command also creates block special device entries (for
example, /dev/ramdisk5) although use of the block device interface is discouraged because it
adds overhead. The device special files in /dev are owned by root with a mode of 600. However,
the mode, owner, and group ID can be changed using normal system commands.

Up to 64 RAM disks can be created.

Note: The size of a RAM disk cannot be changed after it is created.

The mkramdisk [ -u ] size[ M | G ] command is responsible for generating a major number,
loading the RAM disk kernel extension, configuring the kernel extension, creating a RAM disk, and
creating the device special files in /dev. Once the device special files are created, they can be
used just like any other device special files through normal open, read, write, and close system
calls.

RAM disks can be removed by using the rmramdisk command. RAM disks are also removed
when the machine is rebooted.
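
For example, to remove RAM disk 0:

rmramdisk ramdisk0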

4.6.2. An example
To set up a RAM disk that is approximately 20 MB in size and create a file system on that RAM
disk, enter the following:

mkramdisk 40000
ls -l /dev | grep ram
mkfs -V jfs /dev/ramdiskx
mkdir /ramdiskx
mount -V jfs -o nointegrity /dev/ramdiskx /ramdiskx

where x is the logical RAM disk number. By default, RAM disk pages are pinned. Use the -u flag
to create RAM disk pages that are not pinned.

Note: In AIX 5.1 and 4.3.3, the max. size of ramdisk is 2 GB.

Creating VGs, LVs and Filesystems

5.1. Creating VGs and Choosing no. of physical volumes

While creating LVs with the maximum allocation policy, choose between 8 and 16 disks in a VG,
depending on the size of the disks. More disks can be added to the VG later if the filesystems are to
be expanded.

 Create a VG with 8 to 16 disks for datavg, this will have filesystems for database only
(eg., data, log, temp, index).
 Create a second VG with 8 to 16 disks for dumpvg, this will have database dump
filesystems (eg., /dump1, /dump2……)
 Create a third VG with the required no. of PVs for appvg, this will have filesystems for
application binaries, temporary storage area and so on. (/global/site/vendor/Sybase,
/clocal/udb, …….)
 Choose the lowest possible PP size while creating the VGs to get a better spread of data
across multiple PVs or disks.

Grouping the filesystems on the basis of their usage is the first step in creating them. This helps
separate the PVs used for accessing data, dumps and applications, and gives better control over
managing the disks. As a guideline, do not create a VG with more than 32 disks; a smaller VG is
easier to manage and administer, and AIX LVM commands run faster.

During disk migration from one SAN to another, or from one frame of disks to another, it is easy to
use LVM mirroring: add the new set of disks to the VG, mirror onto them, and then break the mirror
on the old disks. To plan for such a migration later, create VGs with the big VG option enabled so
that a VG can have up to 128 PVs and 512 LVs.

It is equally important to have free PPs available in each VG and on each PV, so that a filesystem
can be expanded later and so that free PPs are available during disk migration.
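
A minimal sketch of creating such a VG from the command line (the VG name, PP size and disk
names are placeholders):

mkvg -B -y datavg -s 32 hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8 hdisk9

Here -B creates a big VG and -s 32 sets a 32 MB PP size; choose the smallest PP size that still
lets each PV and the VG reach their planned capacity.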

5.2. Creating LVs


Depending on the number of PVs in a VG and the filespace requirements, create LVs so that there is
still free space in the VG and each PV has free PPs on it. If a filesystem is to be expanded
later, this free space can be used. As a guideline, allow for a few GB of free space in the VG.

LVs can be created using SMIT, WSM or the command line.

5.2.1. SMIT interface


Using the fast path ‘smit mklv’, enter the
 VG name
 LV name
 No. of Logical Partitions
 Physical Volume Names: If you have 8 PVs in the VG, then specify hdisk1,
hdisk2….hdisk8
 Range of Physical Volumes: Select maximum

When you create the second LV, for Physical Volume Names start from hdisk2, hdisk3….hdisk8,
hdisk1. This makes sure the starting PP and PV for the second LV are not the same as for LV1.
Similarly, for the third LV the order can be hdisk3, hdisk4,…….hdisk1, hdisk2, and so on.

This method of creating LVs gives a better distribution of data and disk access. If these are used
for filesystems or containers to store either data, log or temp, then the probability of the same disk
being accessed at the same time by multiple filesystems gets reduced and may result in reduced
I/O wait.

5.2.2. Command Line


An example to create LVs similar to SMIT interface is given below. These commands are taken
from the script given in the Example Chapter.

mklv -ydatalv1 -ex datavg 66 hdiskpower11 hdiskpower12 hdiskpower13
mklv -ydatalv2 -ex datavg 66 hdiskpower12 hdiskpower13 hdiskpower11
mklv -ydatalv3 -ex datavg 66 hdiskpower13 hdiskpower11 hdiskpower12

Here, three LVs are created on three disks, each allocated starting from a different disk.

5.2.3. JFSlogs
As discussed in Section 4.5, create one JFSlog for each LV created and distribute them across
multiple PVs.

Using the above example of three LVs on three PVs, commands given below create three
JFSlogs.

mklv -y loglv11 -tjfslog datavg 1 hdiskpower11
mklv -y loglv12 -tjfslog datavg 1 hdiskpower12
mklv -y loglv13 -tjfslog datavg 1 hdiskpower13

Each of the JFSlogs created here will be used later while creating filesystems and will be assigned
to one filesystem only. These JFSlogs should be formatted before being assigned to filesystems,
as in the example below.

echo y|logform /dev/loglv11
echo y|logform /dev/loglv12
echo y|logform /dev/loglv13

In AIX 5.1 and 5.2, if JFS2 filesystems are used, then the type (-t flag) should be specified as
jfs2log.
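
For example (the log LV and disk names are placeholders), a JFS2 log could be created and,
as with the JFS logs above, formatted with logform before use:

mklv -y jfs2log01 -t jfs2log datavg 1 hdisk1
echo y|logform /dev/jfs2log01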

5.3. Creating Filesystems


Filesystems can be created using SMIT, WSM or the command line interface. If SMIT is used,
it is a two-step process.

5.3.1. SMIT interface


 Use the fast path 'smitty crfs', select JFS or JFS2, select the option for a previously defined
LV, select Large File Enabled, select the LV from the drop-down list, and then enter all the
required information in the next screen.
 By default, the system uses one jfslog/jfs2log for all filesystems in one VG. Using the
following commands, change the jfslog/jfs2log for the filesystem.
o chfs -a logname=/dev/LVname /filesystemname
o mount /filesystemname

5.3.2. Command line interface


While creating a filesystem from the command line, the jfslog/jfs2log can be specified with the -a
option, as in the example below.

crfs -vjfs -ddatalv1 -m/data1 -abf=true -Ano -prw -tno -afrag=4096 -anbpi=4096 -aag=8 -alogname=loglv11
mount /data1

The flag -abf=true indicates large file enabled and -alogname specifies the name of the
jfslog/jfs2log to be used.

5.4. Changing JFSlog for a filesystem


If the jfslog for a filesystem is to be changed, do not simply edit /etc/filesystems and change the
jfslog value in the stanza for that filesystem. The jfslog information is stored in /etc/filesystems as
well as in the Logical Volume Control Block (LVCB) on the disk. Hence you should unmount the
filesystem, change the jfslog using the chfs command, and then mount the filesystem again.

umount /filesystemname
chfs -a logname=/dev/LVname /filesystemname
mount /filesystemname

5.5. Ramdisk for temp space


For filesystems storing temporary files that have no life once the dB is shut down or the system is
rebooted, a ramdisk can be used. Since it is memory resident, it provides faster access to files
and does not involve disk I/O, thereby reducing I/O wait and improving system performance.

Creating and removing a ramdisk is discussed in section 4.6.

5.6. Expanding filesystems


If a filesystem created using maximum allocation policy is to be expanded later, the first choice is
to use the free PPs available on the PVs in the VG.

5.6.1. Using the free space in the VG


Expand the filesystem using 'smitty chfs' or from the command line using the chfs command. New
logical partitions will be allocated on the PVs that have free PPs, spreading the PPs across all of
them. With the maximum allocation policy, the added PPs are allocated round-robin across all
the PVs that have free PPs.
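
As a sketch for a hypothetical filesystem /data1, adding roughly 1 GB (sizes are given in 512-byte
blocks on these AIX levels):

chfs -a size=+2097152 /data1

The new PPs are taken round-robin from whichever PVs in the VG still have free PPs.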

5.6.2. Adding more PVs to the VG


If the VG does not have any free PPs, or if the free PPs available are not enough to expand the
filesystem, then you can add free PVs to the VG.

Guidelines you should observe here are:

 Set the max LP limit to a new value for each LV, if required. When a LV is created, it sets
the max. LP limit to either 512 or the no. of PPs allocated, if greater than 512.
 Add PVs equal to the no. of existing PVs in the VG, i.e., if the VG has 8 PVs then add
another 8 PVs. If this is not permissible, add PVs in multiples of two. This ensures you
have enough PVs to spread the PPs across multiple PVs.
 Add all the PVs in one step and expand the filesystem. If you add 4 PVs and expand the
filesystem in first step, later add another 4 PVs and expand again, then PPs will be
created round-robin across 4 PVs during 1st step and then across 4 PVs during 2nd step. If
you add all the 8 PVs in one step and then expand, PPs will be spread across 8 PVs.
 If you add only one PV to the VG and expand the filesystem, all the PPs will be created
contiguously on the newly added PV, which may become I/O bottleneck.

Expand the filesystem using 'smitty chfs' or from the command line using the chfs command.
Command line examples for expanding a filesystem by adding one PV at a time to the VG, and by
adding multiple PVs together, are given below. The resulting distribution of PPs across the PVs is
shown in the script output in chapter 8.

Note that a new JFSlog is created for each PV added; it can be used if a new filesystem is created
or if the jfslog of one of the existing filesystems is to be changed.

Example for adding one PV at a time


extendvg -f datavg hdiskpower14
echo 'Creating JFSlog loglv14'
mklv -y loglv14 -tjfslog datavg 1
echo y|logform /dev/loglv14
echo '\nNow expanding /data1, /data2 and /data3 by adding 22 PPs to each'
chfs -asize=+2793042 /data1
chfs -asize=+2793042 /data2
chfs -asize=+2793042 /data3

extendvg -f datavg hdiskpower15


echo 'Creating JFSlog loglv15'
mklv -y loglv15 -tjfslog datavg 1
echo y|logform /dev/loglv15
echo 'Now expanding /data1, /data2 and /data3 by adding 22 PPs to each'
chfs -asize=+2793042 /data1
chfs -asize=+2793042 /data2
chfs -asize=+2793042 /data3

Example for adding two PVs together


extendvg -f datavg hdiskpower14 hdiskpower15
echo 'Creating JFSlog loglv14'
mklv -y loglv14 -tjfslog datavg 1
echo y|logform /dev/loglv14
echo 'Creating JFSlog loglv15'
mklv -y loglv15 -tjfslog datavg 1
echo y|logform /dev/loglv15
echo 'Now expanding /data1, /data2 and /data3 by adding 22 PPs to each'
chfs -asize=+5586086 /data1
chfs -asize=+5586086 /data2
chfs -asize=+5586086 /data3

I/O performance tuning

6.1 Modifying I/O performance with vmtune


The vmtune command is used to modify the VMM parameters that control the behavior of the
memory-management subsystem. The vmtune command can only be invoked successfully by
the root user, and any changes made to its parameters will only remain in effect until the next
reboot. Running vmtune without any parameters gives you the current settings:

# /usr/samples/kernel/vmtune

• numclust
If a server has large memory, you should probably do some tuning so that when syncd runs,
there won't be a huge amount of I/O that gets flushed to disk. One of the things you should
consider is turning on the write-behind options using the vmtune command. This increases
performance by asynchronously writing modified pages in memory to disk rather than waiting for
syncd to do the flushing. Sequential write-behind initiates I/O for pages if the VMM detects that
writing is sequential. The file system divides each file into clusters of four dirty pages of 4 KB
each. These 16 KB clusters are not written to disk until the program begins to write the next 16
KB cluster. At this point, the file system forces the four dirty pages to be written to disk. By
spreading out the I/O over time instead of waiting for syncd, it prevents I/O bottlenecks from
taking place. A benefit derived from the clustering is that file fragmentation is diminished.

If it is envisaged that there will be sequential writes of very large files, it may benefit
performance to boost the numclust value to an even higher figure. Any integer greater than 0
is valid, and the default is 1 cluster. Care must be taken when changing this parameter to ensure
that the devices used on the machine support fast writes.

To turn on sequential write-behind:

# /usr/samples/kernel/vmtune -c 2

• maxrandwrt
Another type of write-behind supported by the vmtune command is the random write-behind. This
option can be used to specify the threshold (in 4 KB pages) for random writes to be accumulated
in memory before the pages are written to disk. This threshold is on a per-file basis. You may also
want to consider turning on random write behind. To turn on random write-behind, try the
following value:

# /usr/samples/kernel/vmtune -W 128

It should be noted that not every application will benefit from write-behind. In
the case of database index creation, it is actually beneficial to disable write-behind before the
creation activity. Write-behind can then be re-enabled after the indexes have been created.

• maxperm
If it is intended for the system to serve as an NFS file server, the large bulk of its memory would
be dedicated to storing persistent file pages rather than working segments. Thus, it would help
performance to push up the maxperm value to take up as much of the memory as possible. The
following command would do just that:

# /usr/samples/kernel/vmtune -P 100

The converse is true if the system is to be used for numerically intensive computations or in some
other application where working segments form the dominant part of the virtual memory. In such a
situation, the minperm and maxperm values should be lowered. Fifty percent would be a good
start.

• maxpgahead
If large files are going to be read into memory often, the maxpgahead should be increased from
its default value of 8. The new value could be any power of 2 value because the algorithm for
reading ahead keeps doubling the pages read. The flag to modify maxpgahead is -R. So, to set
the value to 16, you would enter:

# /usr/samples/kernel/vmtune -R 16

Turning on and tuning these parameters (numclust and maxrandwrt) can reduce an I/O
bottleneck because writes to disk do not have to wait on the syncd daemon and can instead
be spread more evenly over time. There will also be less file fragmentation because dirty pages
are clustered before being written to disk.

The other vmtune parameters that can be tuned for I/O performance are listed below. It is
important to bear in mind, though, that using large files does not necessarily warrant any tuning of
these parameters. The decision to tune them will depend very much on what type of I/Os are
occurring to the files.

The size of the I/O, whether it is raw or journaled file system I/O, and the rate at which the I/O is
taking place are important considerations.

• numfsbufs
This parameter specifies the number of file system buf structs. Buf structs are defined in
/usr/include/sys/buf.h. When doing writes, each write buffer will have an associated buffer header
as described by the struct buf. This header describes the contents of the buffer.

Increasing this value will help write performance for very large write sizes on devices that support
very fast writes. A filesystem will have to be unmounted and then mounted again after changing
this parameter in order for it to take effect.

# /usr/samples/kernel/vmtune -b 512

The default value for this parameter is 93 for AIX V4. Each filesystem gets two pages worth of buf
structs. Since two pages is 8192 bytes and since sizeof(struct buf) is about 88, the ratio is around
93 (8192/88=93). The value of numfsbufs should be based on how many simultaneous I/Os you
would be doing to a single filesystem. Usually, though, this figure will be left unchanged, unless
your application is issuing very large writes (many megabytes at a time) to fast I/O devices
such as HIPPI.

• lvm_bufcnt
This parameter specifies the number of LVM buffers for raw physical I/Os. If the striped logical
volumes are on raw logical volumes and writes larger than 1.125 MB are being done to these
striped raw logical volumes, increasing this parameter might increase throughput of the write
activity.

# /usr/samples/kernel/vmtune -u 16

The 1.125 MB figure comes about because the default value of lvm_bufcnt is 9, and the
maximum size the LVM can handle in one write is 128 KB. (9 buffers * 128 KB equals 1.125 MB.).

• hd_pbuf_cnt
This attribute controls the number of pbufs available to the LVM device driver. Pbufs are pinned
memory buffers used to hold I/O requests. In AIX V4, a single pbuf is used for each sequential I/O
request regardless of the number of pages in that I/O. The default allows you to have a queue
of at least 16 I/Os to each disk, which is quite a lot. So, it is often not a bottleneck. However, if
you have a RAID array which combines a lot of physical disks into one hdisk, you may need to
increase it.
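
A sketch of raising the pbuf count is shown below; on the AIX levels discussed here the vmtune
flag for hd_pbuf_cnt is -B, but verify it against the vmtune usage output on your system before
relying on it. Note also that hd_pbuf_cnt can be increased but not decreased without a reboot.

# /usr/samples/kernel/vmtune -B 256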

6.2 Paging Space


A side effect of using large amounts of memory is that there should be a large amount of paging
space made available to back it up. The actual amount of paging space to be defined on any
system varies and will depend on the workload.

The default hd6 created in the rootvg would, in most cases, be far from sufficient, and it will be
necessary to expand the hd6 paging space. Care must be taken when expanding hd6 to ensure
that it is one contiguous whole and not fragmented since fragmentation would impact
performance negatively.

Ideally, there should be several paging spaces of roughly equal sizes created, each on a separate
physical disk drive. You should also attempt to create these paging spaces on relatively lightly
loaded physical volumes so as to avoid causing any of these drives to become bottlenecks.

6.2.1. Paging space size guidelines


> On AIX 4.1 and 4.2
PS = 512 MB + (RAM in MB - 256) * 1.25 if RAM >= 256 MB or
PS = 192 MB + (RAM in MB) if RAM < 256 MB

> On AIX 4.3 and above


PS = 3 * RAM in MB if RAM < 96 MB
PS = 2 * RAM in MB if 96 MB <= RAM < 256 MB
PS = RAM in MB if 256 MB <= RAM <= 2 GB
PS = 2 GB if RAM > 2 GB

Where PS is the Paging Space.

These are guidelines; during performance tuning, monitor the paging space usage and increase
it as required.

6.2.2. Paging space for a dB server


 Create a VG ‘pagingvg’ with either two or three disks.
 Increase hd6 to 2GB
 Create new paging LVs of 2 GB in pagingvg, each LV on a separate disk
 Monitor the Paging Space usage and increase as required.
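
A sketch of the corresponding commands (the VG, disk names and LP counts are placeholders;
a 64 MB PP size is assumed, so 32 LPs is 2 GB):

chps -s 16 hd6                      # add LPs to hd6; the count needed depends on the rootvg PP size
mkps -a -n -s 32 pagingvg hdisk10   # create a 2 GB paging space and activate it immediately
mkps -a -n -s 32 pagingvg hdisk11

The -a flag activates the new paging space at every restart and -n activates it now.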

6.3. Modifying VMM with vmtune


Use of the vmtune command can either improve or degrade system performance; hence any
changes should be carefully evaluated, and one parameter should be tuned at a time so that the
performance effect can be analyzed.

The vmtune command is used to modify the VMM parameters that control the behavior of the
memory-management, CPU and I/O subsystems. The vmtune command can only be invoked
successfully by the root user, and any changes made to its parameters will only remain in effect
until the next reboot.

6.4. SSA adapters and loops

6.4.1. SSA Adapters and enclosure


If the adapter has fast write cache, make sure it is enabled on SSA logical disks. Check the
microcode level on SSA adapter and SSA enclosure, and upgrade if required.

6.4.2. SSA Loops


Each SSA adapter has four ports and supports a maximum of two loops. Hence, choosing the
right number of loops and the number of adapters in a loop is critical to getting the best I/O
performance when a server is installed.

The minimum number of disks that can be configured in a loop is 4, or one quadrant of the
enclosure. An SSA enclosure can hold 16 disks. Dummies can be used in each quadrant to
distribute the disks across all four quadrants. Hence, create loops with either 4 or 8 disks per loop.

6.4.3. SSA RAID5 disks


If you are creating RAID5 disks, follow the guidelines below.
 Choose 8 disks in a loop (6+Parity+Hot Spare) for each RAID5 disk.
 Create each RAID5 disk on a separate loop.
 Create LVs across multiple RAID5 disks using max. allocation policy discussed in section
2.3.

6.5. Asynchronous IO tuning


'lsattr -El aio0' gives the characteristics of the AIO kernel extension. Tune the following:
 Status: should be available
 maxservers and minservers: maxservers = no. of disks * 10, minservers = maxservers / 2
 maxreqs: a multiple of 4096.
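
A sketch of applying these settings for, say, a server with 30 disks (the numbers are placeholders;
verify the attribute names with lsattr -El aio0 on your AIX level):

chdev -l aio0 -a autoconfig=available -a minservers=150 -a maxservers=300 -a maxreqs=8192 -P
lsattr -El aio0

The -P flag records the change in the ODM so that it takes effect at the next reboot, which is
needed if aio0 is already in use.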

6.6. Fibre Channel adapters and SAN storage


 Check the microcode on FC adapters and upgrade if required.
 Check the third party storage device drivers and upgrade if required.
 If multiple paths are used for load balancing and failover, make sure they are available.

6.7. Adapter Placement


If a system has a lot of high-performance adapters on one system bus, that bus can become a
performance bottleneck. Hence, it is always better to spread these fast adapters across
several buses. Refer to the latest PCI Adapter Placement Reference Guide (SA38-0538-19 or
later) to make sure the adapters are placed in PCI slots as per the recommendations, to get
maximum performance.

6.8. Filemon
The filemon command is one of the performance tools used to investigate I/O related
performance. The syntax of the command can be found in the AIX documentation or in the
Performance and Tuning Guide.
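
A sketch of a typical collection run (the output file name and interval are arbitrary):

filemon -o /tmp/fmon.out -O lv,pv
sleep 60
trcstop

After trcstop, /tmp/fmon.out lists the most active logical and physical volumes for the 60-second
interval, which helps identify filesystems and JFS logs that share a busy disk.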

Guidelines

7.1. Initial setup


 Create VGs with big VG enabled (-B flag in mkvg command) so that, if needed, the VG
can have up to 128 PVs and 512 LVs.
 Choose the lowest possible no. for the PP size while creating VGs. This gives a better
distribution of data across multiple PVs.
 Create a separate VG for data, dump, application binaries and so on.
 Always create filesystems with large files enabled (-abf=true in crfs command).
 Create filesystems and jfslogs of type=jfs2 in AIX 5.1 and later.
 Create one jfslog on each PV in a VG (loglv01, loglv02…..loglv08, if there are 8 PVs in a
VG).
 Create LVs with the maximum allocation policy (-ex in mklv command), specifying all the PVs
in the VG.
 Do not use all PPs in PVs, always have few GBs of free space in a VG and few free PPs
on each PV.
 Do not allocate all the PPs in a VG to multiple LVs, if needed add an extra PV right in the
beginning. Using free PPs in a VG and few free PPs on each PV, you can easily expand
the filesystem later by spreading it across the same set of disks.
 Create each LV starting from a disk different from the previous LV. If you used hdisk1,
hdisk2….hdisk8 as the sequence for datalv1, use hdisk2, hdisk3………hdisk8, hdisk1 as
the sequence for datalv2. This gives each LV a different PV as the starting point.
 Allocate a separate JFSLog for each filesystem created on a separate disk.

7.2. Expanding filesystems


 Add a number of new disks equal to the number of disks already in the VG. If this is not possible,
add disks in multiples of two.
 Follow the guidelines in initial setup regarding keeping free PPs in each VG and each PV.
 If possible avoid adding PVs to the VG in multiple steps to expand a filesystem, instead
add all of them together and then expand.
 The expanded part of the filesystem will have PPs allocated in round-robin across the
new PVs added in the VG as the LV is already created with maximum allocation policy.

7.3. Paging Space


Refer to Section 6.2 for a discussion of setting up paging space.

7.4. VMM tuning


Refer to sections 6.1 and 6.3 for a detailed discussion of I/O tuning using vmtune.

7.5. ulimits
Tune ulimits for specific users, or for all users, so that files larger than 2 GB can be created.
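
A sketch of the corresponding commands for a hypothetical user dbadmin (the same limits can
also be set for all users through the default stanza in /etc/security/limits):

chuser fsize=-1 dbadmin
su - dbadmin -c "ulimit -a"

A value of -1 means unlimited; the user must log in again for the change to take effect.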

Example

8.1. Sample script “disk-layout.ksh”


#!/usr/bin/ksh
# Ravi Singh, IBM
# Create datavg, LVs datalv1, datalv2 and datalv3 on three PVs.
# LVs have maximum allocation policy to create each PP on a separate PV
# in round-robin.
# The starting PV for each LV is different.
# Create one JFSLog for each filesystem, each one on a separate PV.
# Create /data1 on datalv1 using loglv11, /data2 on datalv2 using
# loglv12 and /data3 on datalv3 using loglv13.
# Phase2 : Expand the filesystems already created.
# Option 1: Add one PV at a time and expand filesystems.
# Option 2: Add two PVs together and expand filesystems.
#
# 04/23/2003 : Ver 1

echo 'This script assumes hdiskpower11, 12, 13, 14 and 15 are free'
echo 'Creates VG datavg, LVs datalv1, datalv2 and datalv3'
echo 'Creates JFSLog loglv11, loglv12 and loglv13'
echo 'Creates filesystems /data1, /data2 and /data3'
echo 'Is it OK (y/n)?'
read OPT JUNK
if [ "$OPT" = "y" ]
then

echo 'Log file is /tmp/disk-layout.log'


echo 'Check if datavg exists, umount all filesystems and export vg'
lsvg -o|grep -i "datavg"
if [ $? = 0 ]
then

echo 'Unmounting Filesystems'


lsvg -l datavg|tail +3|awk '{print $7}'|while read FS
do
if [ "$FS" != "N/A" ]
then
echo "$FS"
fuser -cxku "$FS"
umount $FS
fi
done

echo 'Exporting datavg'


varyoffvg datavg
exportvg datavg
fi

echo 'Creating VG datavg'


mkvg -f -y datavg -s64 hdiskpower11 hdiskpower12 hdiskpower13

echo 'Creating JFS logs'
mklv -y loglv11 -tjfslog datavg 1 hdiskpower11
mklv -y loglv12 -tjfslog datavg 1 hdiskpower12
mklv -y loglv13 -tjfslog datavg 1 hdiskpower13

echo 'Formatting JFSlogs'

echo y|logform /dev/loglv11


echo y|logform /dev/loglv12
echo y|logform /dev/loglv13

echo 'Creating LVs'


mklv -ydatalv1 -ex datavg 66 hdiskpower11 hdiskpower12 hdiskpower13
mklv -ydatalv2 -ex datavg 66 hdiskpower12 hdiskpower13 hdiskpower11
mklv -ydatalv3 -ex datavg 66 hdiskpower13 hdiskpower11 hdiskpower12

echo 'Creating Filesystems'


crfs -vjfs -ddatalv1 -m/data1 -Ano -prw -tno -afrag=4096 -anbpi=4096 -aag=8 -alogname=loglv11

crfs -vjfs -ddatalv2 -m/data2 -Ano -prw -tno -afrag=4096 -anbpi=4096 -aag=8 -alogname=loglv12

crfs -vjfs -ddatalv3 -m/data3 -Ano -prw -tno -afrag=4096 -anbpi=4096 -aag=8 -alogname=loglv13

echo 'Mounting Filesystems'


mount /data1
mount /data2
mount /data3
(

date
echo 'This is an example to show'
echo '1> Creating LVs and Filesystems on three PVs with max allocation policy'
echo '2> Creating one JFSLog on a separate PV for each filesystem'
echo '3> Creating filesystems'
echo '4> When the VG is full, extending it by adding PV(s)'
echo '5> Expanding Filesystems'

echo '\nDisplaying layout of LVs'


echo
echo 'Each LV spreads on three PVs and each JFSlog on a separate PV'
echo 'Each Filesystem uses a separate JFSLog'
echo 'loglv11 for /data1, loglv12 for /data2 and loglv13 for /data3'
echo
lsvg -l datavg
echo '\n df -k'
df -k |grep data
echo
echo 'LV starts from hdiskpower11 to hdiskpower12 to hdiskpower13 in round-robin'
lslv -l datalv1
lslv -m datalv1
echo
echo 'LV starts from hdiskpower12 to hdiskpower13 to hdiskpower11 in round-robin'
lslv -l datalv2
lslv -m datalv2
echo
echo 'LV starts from hdiskpower13 to hdiskpower11 to hdiskpower12 in round-robin'
lslv -l datalv3
lslv -m datalv3
) > /tmp/disk-layout.log 2>&1

echo 'Enter 1 to add hdiskpower14 or 2 to add hdiskpower14 and 15'


read OPT JUNK
if [ "$OPT" = 1 ]
then

echo '\nNow extending VG datavg by adding one PV hdiskpower14'


extendvg -f datavg hdiskpower14
echo 'Creating JFSlog loglv14'
mklv -y loglv14 -tjfslog datavg 1
echo y|logform /dev/loglv14

echo '\nNow expanding /data1, /data2 and /data3 by adding 22 PPs to each'

chfs -asize=+2793042 /data1


chfs -asize=+2793042 /data2
chfs -asize=+2793042 /data3
(
echo '\n==========================================='
echo '\nPhase 2 : Adding more PVs and expanding already created LVs and FSs'
date
echo '\nAfter adding hdiskpower14 to datavg'
echo
lsvg -l datavg
echo '\nAll the added PPs now resides only on hdiskpower14'
lslv -l datalv1
echo '\n df -k'
df -k /data1

echo '\nAll the added PPs now resides only on hdiskpower14'


lslv -l datalv2
echo '\n df -k'
df -k /data2

echo '\nAll the added PPs now resides only on hdiskpower14'


lslv -l datalv3
echo '\n df -k'
df -k /data3

) >> /tmp/disk-layout.log 2>&1

echo 'Now extending VG datavg by adding second PV hdiskpower15'


extendvg -f datavg hdiskpower15
echo 'Creating JFSlog loglv15'
mklv -y loglv15 -tjfslog datavg 1
echo y|logform /dev/loglv15

echo 'Now expanding /data1, /data2 and /data3 by adding 22 PPs to each'

chfs -asize=+2793042 /data1
chfs -asize=+2793042 /data2
chfs -asize=+2793042 /data3
(
echo
date
echo '\nAfter adding hdiskpower15'
echo
lsvg -l datavg
echo '\nAll the added PPs now resides only on hdiskpower15'
lslv -l datalv1
echo '\n df -k'
df -k /data1
echo
lslv -m datalv1

echo '\nAll the added PPs now reside only on hdiskpower15'


lslv -l datalv2
echo '\n df -k'
df -k /data2
echo
lslv -m datalv2

echo '\nAll the added PPs now reside only on hdiskpower15'


lslv -l datalv3
echo '\n df -k'
df -k /data3
echo
lslv -m datalv3

) >> /tmp/disk-layout.log 2>&1

else

echo 'Now extending VG datavg by adding PVs hdiskpower14 and 15'


extendvg -f datavg hdiskpower14 hdiskpower15
echo 'Creating JFSlog loglv14'
mklv -y loglv14 -tjfslog datavg 1 hdiskpower14
echo y|logform /dev/loglv14
echo 'Creating JFSlog loglv15'
mklv -y loglv15 -tjfslog datavg 1 hdiskpower15
echo y|logform /dev/loglv15

echo 'Now expanding /data1, /data2 and /data3 by adding 43 PPs to each, spread round-robin over the two new PVs'
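# With both PVs added at once, +5586086 blocks (~2728 MB) rounds up to 43 PPs per
# file system, laid out round-robin across hdiskpower14 and hdiskpower15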

chfs -asize=+5586086 /data1

chfs -asize=+5586086 /data2

chfs -asize=+5586086 /data3


(
echo '\n==========================================='
echo 'Phase 2 : Adding more PVs and expanding already created LVs and FSs'
date
echo
echo '\nAfter adding hdiskpower14 and 15 to datavg'
lsvg -l datavg
echo
echo '\nAll the added PPs round-robin between hdiskpower14 and hdiskpower15'
lslv -l datalv1
lslv -m datalv1
echo '\n df -k'
df -k /data1
echo
echo '\nAll the added PPs round-robin between hdiskpower14 and hdiskpower15'
lslv -l datalv2
lslv -m datalv2
echo '\n df -k'
df -k /data2
echo
echo '\nAll the added PPs round-robin between hdiskpower14 and hdiskpower15'
lslv -l datalv3
lslv -m datalv3
echo '\n df -k'
df -k /data3
echo

echo
date

) >> /tmp/disk-layout.log 2>&1

fi

echo 'Log file is /tmp/disk-layout.log'

fi
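
The +size arguments passed to chfs above are raw 512-byte block counts. The short sketch below (not part of the original script; PPSIZE_MB, ADD_PPS and ADD_BLOCKS are illustrative names) shows one way to express an expansion in 64 MB PPs instead, so the request is already aligned to whole PPs and easier to read:

# Hypothetical helper: express a chfs expansion in PPs rather than 512-byte blocks.
# Assumes the 64 MB PP size set by mkvg -s64 above.
PPSIZE_MB=64                                  # PP size in MB
ADD_PPS=22                                    # number of PPs to add
ADD_BLOCKS=$(( ADD_PPS * PPSIZE_MB * 2048 ))  # 1 MB = 2048 x 512-byte blocks
chfs -asize=+$ADD_BLOCKS /data1               # here +2883584 blocks = exactly 22 PPs

Because the request is a whole number of PPs, the resulting file system size lands exactly on a PP boundary instead of being rounded up by the LVM.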

8.2. Screen output of the script


shcladv1b # ./disk-layout.ksh
This script assumes hdiskpower11, 12, 13, 14 and 15 are free
Creates VG datavg, LVs datalv1, datalv2 and datalv3
Creates JFSLog loglv11, loglv12 and loglv13
Creates filesystems /data1, /data2 and /data3
Is it OK (y/n)?
y
Log file is /tmp/disk-layout.log
Check if datavg exists, umount all filesystems and export vg
Creating VG datavg
datavg
Creating JFS logs
loglv11
loglv12
loglv13
Formatting JFSlogs
Creating LVs
datalv1

datalv2
datalv3
Creating Filesystems
Based on the parameters chosen, the new /data1 JFS file system
is limited to a maximum size of 134217728 (512 byte blocks)

New File System size is 8650752


Based on the parameters chosen, the new /data2 JFS file system
is limited to a maximum size of 134217728 (512 byte blocks)

New File System size is 8650752


Based on the parameters chosen, the new /data3 JFS file system
is limited to a maximum size of 134217728 (512 byte blocks)

New File System size is 8650752


Mounting Filesystems
Enter 1 to add hdiskpower14 or 2 to add hdiskpower14 and 15
2
Now extending VG datavg by adding PVs hdiskpower14 and 15
Creating JFSlog loglv14
loglv14
Creating JFSlog loglv15
loglv15
Now expanding /data1, /data2 and /data3 by adding 43 PPs to each, spread round-robin over the two new PVs
Filesystem size changed to 14286848
Filesystem size changed to 14286848
Filesystem size changed to 14286848
Log file is /tmp/disk-layout.log
shcladv1b #

8.3. Log file output (/tmp/disk-layout.log)


Mon May 12 15:03:45 EDT 2003
This is an example to show
1> Creating LVs and Filesystems on three PVs with max allocation policy
2> Creating one JFSLog on a separate PV for each filesystem
3> Creating filesystems
4> When the VG is full, extending it by adding PV(s)
5> Expanding Filesystems

Displaying layout of LVs

Each LV is spread across three PVs and each JFSlog is on a separate PV


Each Filesystem uses a separate JFSLog
loglv11 for /data1, loglv12 for /data2 and loglv13 for /data3

datavg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv11 jfslog 1 1 1 open/syncd N/A
loglv12 jfslog 1 1 1 open/syncd N/A
loglv13 jfslog 1 1 1 open/syncd N/A
datalv1 jfs 66 66 3 open/syncd /data1
datalv2 jfs 66 66 3 open/syncd /data2
datalv3 jfs 66 66 3 open/syncd /data3

df -k
/dev/datalv1 4325376 4189564 4% 17 1% /data1
/dev/datalv2 4325376 4189564 4% 17 1% /data2
/dev/datalv3 4325376 4189564 4% 17 1% /data3

LV starts from hdiskpower11 to hdiskpower12 to hdiskpower13 in round-robin


datalv1:/data1
PV COPIES IN BAND DISTRIBUTION
hdiskpower11 022:000:000 54% 000:012:010:000:000
hdiskpower12 022:000:000 54% 000:012:010:000:000
hdiskpower13 022:000:000 54% 000:012:010:000:000
datalv1:/data1
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0016 hdiskpower11
0002 0016 hdiskpower12
0003 0016 hdiskpower13
0004 0017 hdiskpower11
0005 0017 hdiskpower12
0006 0017 hdiskpower13
0007 0018 hdiskpower11
0008 0018 hdiskpower12
0009 0018 hdiskpower13
0010 0019 hdiskpower11
0011 0019 hdiskpower12
0012 0019 hdiskpower13
0013 0020 hdiskpower11
0014 0020 hdiskpower12
0015 0020 hdiskpower13
0016 0021 hdiskpower11
0017 0021 hdiskpower12
0018 0021 hdiskpower13
0019 0022 hdiskpower11
0020 0022 hdiskpower12
0021 0022 hdiskpower13
0022 0023 hdiskpower11
0023 0023 hdiskpower12
0024 0023 hdiskpower13
0025 0024 hdiskpower11
0026 0024 hdiskpower12
0027 0024 hdiskpower13
0028 0025 hdiskpower11
0029 0025 hdiskpower12
0030 0025 hdiskpower13
0031 0026 hdiskpower11
0032 0026 hdiskpower12
0033 0026 hdiskpower13
0034 0027 hdiskpower11
0035 0027 hdiskpower12
0036 0027 hdiskpower13
0037 0028 hdiskpower11
0038 0028 hdiskpower12
0039 0028 hdiskpower13
0040 0029 hdiskpower11
0041 0029 hdiskpower12
0042 0029 hdiskpower13
0043 0030 hdiskpower11
0044 0030 hdiskpower12
0045 0030 hdiskpower13
0046 0031 hdiskpower11
0047 0031 hdiskpower12
0048 0031 hdiskpower13
0049 0032 hdiskpower11
0050 0032 hdiskpower12
0051 0032 hdiskpower13
0052 0033 hdiskpower11
0053 0033 hdiskpower12
0054 0033 hdiskpower13
0055 0034 hdiskpower11
0056 0034 hdiskpower12
0057 0034 hdiskpower13
0058 0035 hdiskpower11
0059 0035 hdiskpower12
0060 0035 hdiskpower13
0061 0036 hdiskpower11
0062 0036 hdiskpower12
0063 0036 hdiskpower13
0064 0037 hdiskpower11
0065 0037 hdiskpower12
0066 0037 hdiskpower13

LV starts from hdiskpower12 to hdiskpower13 to hdiskpower11 in round-robin


datalv2:/data2
PV COPIES IN BAND DISTRIBUTION
hdiskpower12 022:000:000 0% 014:000:003:005:000
hdiskpower13 022:000:000 0% 014:000:003:005:000
hdiskpower11 022:000:000 0% 014:000:003:005:000
datalv2:/data2
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0038 hdiskpower12
0002 0038 hdiskpower13
0003 0038 hdiskpower11
0004 0039 hdiskpower12
0005 0039 hdiskpower13
0006 0039 hdiskpower11
0007 0040 hdiskpower12
0008 0040 hdiskpower13
0009 0040 hdiskpower11
0010 0001 hdiskpower12
0011 0001 hdiskpower13
0012 0001 hdiskpower11
0013 0002 hdiskpower12
0014 0002 hdiskpower13
0015 0002 hdiskpower11
0016 0003 hdiskpower12
0017 0003 hdiskpower13
0018 0003 hdiskpower11
0019 0004 hdiskpower12
0020 0004 hdiskpower13
0021 0004 hdiskpower11
0022 0005 hdiskpower12
0023 0005 hdiskpower13
0024 0005 hdiskpower11
0025 0006 hdiskpower12
0026 0006 hdiskpower13
0027 0006 hdiskpower11
0028 0007 hdiskpower12
0029 0007 hdiskpower13
0030 0007 hdiskpower11
0031 0008 hdiskpower12
0032 0008 hdiskpower13
0033 0008 hdiskpower11
0034 0009 hdiskpower12
0035 0009 hdiskpower13
0036 0009 hdiskpower11
0037 0010 hdiskpower12
0038 0010 hdiskpower13
0039 0010 hdiskpower11
0040 0011 hdiskpower12
0041 0011 hdiskpower13
0042 0011 hdiskpower11
0043 0012 hdiskpower12
0044 0012 hdiskpower13
0045 0012 hdiskpower11
0046 0013 hdiskpower12
0047 0013 hdiskpower13
0048 0013 hdiskpower11
0049 0014 hdiskpower12
0050 0014 hdiskpower13
0051 0014 hdiskpower11
0052 0041 hdiskpower12
0053 0041 hdiskpower13
0054 0041 hdiskpower11
0055 0042 hdiskpower12
0056 0042 hdiskpower13
0057 0042 hdiskpower11
0058 0043 hdiskpower12
0059 0043 hdiskpower13
0060 0043 hdiskpower11
0061 0044 hdiskpower12
0062 0044 hdiskpower13
0063 0044 hdiskpower11
0064 0045 hdiskpower12
0065 0045 hdiskpower13
0066 0045 hdiskpower11

LV starts from hdiskpower13 to hdiskpower11 to hdiskpower12 in round-robin


datalv3:/data3
PV COPIES IN BAND DISTRIBUTION
hdiskpower13 022:000:000 0% 000:000:000:008:014
hdiskpower11 022:000:000 0% 000:000:000:008:014
hdiskpower12 022:000:000 0% 000:000:000:008:014
datalv3:/data3
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0046 hdiskpower13
0002 0046 hdiskpower11
0003 0046 hdiskpower12
0004 0047 hdiskpower13
0005 0047 hdiskpower11
0006 0047 hdiskpower12
0007 0048 hdiskpower13
0008 0048 hdiskpower11
0009 0048 hdiskpower12
0010 0049 hdiskpower13
0011 0049 hdiskpower11
0012 0049 hdiskpower12
0013 0050 hdiskpower13
0014 0050 hdiskpower11
0015 0050 hdiskpower12
0016 0051 hdiskpower13
0017 0051 hdiskpower11
0018 0051 hdiskpower12
0019 0052 hdiskpower13
0020 0052 hdiskpower11
0021 0052 hdiskpower12
0022 0053 hdiskpower13
0023 0053 hdiskpower11
0024 0053 hdiskpower12
0025 0054 hdiskpower13
0026 0054 hdiskpower11
0027 0054 hdiskpower12
0028 0055 hdiskpower13
0029 0055 hdiskpower11
0030 0055 hdiskpower12
0031 0056 hdiskpower13
0032 0056 hdiskpower11
0033 0056 hdiskpower12
0034 0057 hdiskpower13
0035 0057 hdiskpower11
0036 0057 hdiskpower12
0037 0058 hdiskpower13
0038 0058 hdiskpower11
0039 0058 hdiskpower12
0040 0059 hdiskpower13
0041 0059 hdiskpower11
0042 0059 hdiskpower12
0043 0060 hdiskpower13
0044 0060 hdiskpower11
0045 0060 hdiskpower12
0046 0061 hdiskpower13
0047 0061 hdiskpower11
0048 0061 hdiskpower12
0049 0062 hdiskpower13
0050 0062 hdiskpower11
0051 0062 hdiskpower12
0052 0063 hdiskpower13
0053 0063 hdiskpower11
0054 0063 hdiskpower12
0055 0064 hdiskpower13
0056 0064 hdiskpower11
0057 0064 hdiskpower12
0058 0065 hdiskpower13
0059 0065 hdiskpower11
0060 0065 hdiskpower12
0061 0066 hdiskpower13
0062 0066 hdiskpower11
0063 0066 hdiskpower12
0064 0067 hdiskpower13
0065 0067 hdiskpower11
0066 0067 hdiskpower12

===========================================
Phase 2 : Adding more PVs and expanding already created LVs and FSs
Mon May 12 15:06:39 EDT 2003

After adding hdiskpower14 and 15 to datavg


datavg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv11 jfslog 1 1 1 open/syncd N/A
loglv12 jfslog 1 1 1 open/syncd N/A
loglv13 jfslog 1 1 1 open/syncd N/A
datalv1 jfs 109 109 5 open/syncd /data1
datalv2 jfs 109 109 5 open/syncd /data2
datalv3 jfs 109 109 5 open/syncd /data3
loglv14 jfslog 1 1 1 closed/syncd N/A
loglv15 jfslog 1 1 1 closed/syncd N/A

All the added PPs round-robin between hdiskpower14 and hdiskpower15


datalv1:/data1
PV COPIES IN BAND DISTRIBUTION
hdiskpower11 022:000:000 54% 000:012:010:000:000
hdiskpower12 022:000:000 54% 000:012:010:000:000
hdiskpower13 022:000:000 54% 000:012:010:000:000
hdiskpower14 022:000:000 54% 000:012:010:000:000
hdiskpower15 021:000:000 57% 000:012:009:000:000
datalv1:/data1
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0016 hdiskpower11
0002 0016 hdiskpower12
0003 0016 hdiskpower13
0004 0017 hdiskpower11
0005 0017 hdiskpower12
0006 0017 hdiskpower13
0007 0018 hdiskpower11
0008 0018 hdiskpower12
0009 0018 hdiskpower13
0010 0019 hdiskpower11
0011 0019 hdiskpower12
0012 0019 hdiskpower13
0013 0020 hdiskpower11
0014 0020 hdiskpower12
0015 0020 hdiskpower13
0016 0021 hdiskpower11
0017 0021 hdiskpower12
0018 0021 hdiskpower13
0019 0022 hdiskpower11
0020 0022 hdiskpower12
0021 0022 hdiskpower13
0022 0023 hdiskpower11
0023 0023 hdiskpower12
0024 0023 hdiskpower13
0025 0024 hdiskpower11
0026 0024 hdiskpower12
0027 0024 hdiskpower13
0028 0025 hdiskpower11
0029 0025 hdiskpower12
0030 0025 hdiskpower13
0031 0026 hdiskpower11
0032 0026 hdiskpower12
0033 0026 hdiskpower13
0034 0027 hdiskpower11
0035 0027 hdiskpower12
0036 0027 hdiskpower13
0037 0028 hdiskpower11
0038 0028 hdiskpower12
0039 0028 hdiskpower13
0040 0029 hdiskpower11
0041 0029 hdiskpower12
0042 0029 hdiskpower13
0043 0030 hdiskpower11
0044 0030 hdiskpower12
0045 0030 hdiskpower13
0046 0031 hdiskpower11
0047 0031 hdiskpower12
0048 0031 hdiskpower13
0049 0032 hdiskpower11
0050 0032 hdiskpower12
0051 0032 hdiskpower13
0052 0033 hdiskpower11
0053 0033 hdiskpower12
0054 0033 hdiskpower13
0055 0034 hdiskpower11
0056 0034 hdiskpower12
0057 0034 hdiskpower13
0058 0035 hdiskpower11
0059 0035 hdiskpower12
0060 0035 hdiskpower13
0061 0036 hdiskpower11
0062 0036 hdiskpower12
0063 0036 hdiskpower13
0064 0037 hdiskpower11
0065 0037 hdiskpower12
0066 0037 hdiskpower13
0067 0027 hdiskpower14
0068 0027 hdiskpower15
0069 0026 hdiskpower14
0070 0026 hdiskpower15
0071 0025 hdiskpower14
0072 0025 hdiskpower15
0073 0024 hdiskpower14
0074 0024 hdiskpower15
0075 0023 hdiskpower14
0076 0023 hdiskpower15
0077 0022 hdiskpower14
0078 0022 hdiskpower15
0079 0021 hdiskpower14
0080 0021 hdiskpower15
0081 0020 hdiskpower14
0082 0020 hdiskpower15
0083 0019 hdiskpower14
0084 0019 hdiskpower15
0085 0018 hdiskpower14
0086 0018 hdiskpower15
0087 0017 hdiskpower14
0088 0017 hdiskpower15
0089 0016 hdiskpower14
0090 0016 hdiskpower15
0091 0040 hdiskpower14
0092 0040 hdiskpower15
0093 0039 hdiskpower14
0094 0039 hdiskpower15
0095 0038 hdiskpower14
0096 0038 hdiskpower15
0097 0037 hdiskpower14
0098 0037 hdiskpower15
0099 0036 hdiskpower14
0100 0036 hdiskpower15
0101 0035 hdiskpower14
0102 0035 hdiskpower15
0103 0034 hdiskpower14
0104 0034 hdiskpower15
0105 0033 hdiskpower14
0106 0033 hdiskpower15
0107 0032 hdiskpower14
0108 0032 hdiskpower15
0109 0031 hdiskpower14

df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/datalv1 7143424 6919164 4% 17 1% /data1

All the added PPs round-robin between hdiskpower14 and hdiskpower15


datalv2:/data2
PV COPIES IN BAND DISTRIBUTION
hdiskpower12 022:000:000 0% 014:000:003:005:000
hdiskpower13 022:000:000 0% 014:000:003:005:000
hdiskpower11 022:000:000 0% 014:000:003:005:000
hdiskpower15 022:000:000 0% 014:000:004:004:000
hdiskpower14 021:000:000 0% 014:000:003:004:000
datalv2:/data2
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0038 hdiskpower12
0002 0038 hdiskpower13
0003 0038 hdiskpower11
0004 0039 hdiskpower12
0005 0039 hdiskpower13
0006 0039 hdiskpower11
0007 0040 hdiskpower12
0008 0040 hdiskpower13
0009 0040 hdiskpower11
0010 0001 hdiskpower12
0011 0001 hdiskpower13
0012 0001 hdiskpower11
0013 0002 hdiskpower12
0014 0002 hdiskpower13
0015 0002 hdiskpower11
0016 0003 hdiskpower12
0017 0003 hdiskpower13
0018 0003 hdiskpower11
0019 0004 hdiskpower12
0020 0004 hdiskpower13
0021 0004 hdiskpower11
0022 0005 hdiskpower12
0023 0005 hdiskpower13
0024 0005 hdiskpower11
0025 0006 hdiskpower12
0026 0006 hdiskpower13
0027 0006 hdiskpower11
0028 0007 hdiskpower12
0029 0007 hdiskpower13
0030 0007 hdiskpower11
0031 0008 hdiskpower12
0032 0008 hdiskpower13
0033 0008 hdiskpower11
0034 0009 hdiskpower12
0035 0009 hdiskpower13
0036 0009 hdiskpower11
0037 0010 hdiskpower12
0038 0010 hdiskpower13
0039 0010 hdiskpower11
0040 0011 hdiskpower12
0041 0011 hdiskpower13
0042 0011 hdiskpower11
0043 0012 hdiskpower12
0044 0012 hdiskpower13
0045 0012 hdiskpower11
0046 0013 hdiskpower12
0047 0013 hdiskpower13
0048 0013 hdiskpower11
0049 0014 hdiskpower12
0050 0014 hdiskpower13
0051 0014 hdiskpower11
0052 0041 hdiskpower12
0053 0041 hdiskpower13
0054 0041 hdiskpower11
0055 0042 hdiskpower12
0056 0042 hdiskpower13
0057 0042 hdiskpower11
0058 0043 hdiskpower12
0059 0043 hdiskpower13
0060 0043 hdiskpower11
0061 0044 hdiskpower12
0062 0044 hdiskpower13
0063 0044 hdiskpower11
0064 0045 hdiskpower12
0065 0045 hdiskpower13
0066 0045 hdiskpower11
0067 0031 hdiskpower15
0068 0030 hdiskpower14
0069 0030 hdiskpower15
0070 0029 hdiskpower14
0071 0029 hdiskpower15
0072 0028 hdiskpower14
0073 0028 hdiskpower15
0074 0014 hdiskpower14
0075 0014 hdiskpower15
0076 0013 hdiskpower14
0077 0013 hdiskpower15
0078 0012 hdiskpower14
0079 0012 hdiskpower15
0080 0011 hdiskpower14
0081 0011 hdiskpower15
0082 0010 hdiskpower14
0083 0010 hdiskpower15
0084 0009 hdiskpower14
0085 0009 hdiskpower15
0086 0008 hdiskpower14
0087 0008 hdiskpower15
0088 0007 hdiskpower14
0089 0007 hdiskpower15
0090 0006 hdiskpower14
0091 0006 hdiskpower15
0092 0005 hdiskpower14
0093 0005 hdiskpower15
0094 0004 hdiskpower14
0095 0004 hdiskpower15
0096 0003 hdiskpower14
0097 0003 hdiskpower15
0098 0002 hdiskpower14
0099 0002 hdiskpower15
0100 0001 hdiskpower14
0101 0001 hdiskpower15
0102 0053 hdiskpower14
0103 0053 hdiskpower15
0104 0052 hdiskpower14
0105 0052 hdiskpower15
0106 0051 hdiskpower14
0107 0051 hdiskpower15
0108 0050 hdiskpower14
0109 0050 hdiskpower15

df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/datalv2 7143424 6919164 4% 17 1% /data2

All the added PPs round-robin between hdiskpower14 and hdiskpower15


datalv3:/data3
PV COPIES IN BAND DISTRIBUTION
hdiskpower13 022:000:000 0% 000:000:000:008:014
hdiskpower11 022:000:000 0% 000:000:000:008:014
hdiskpower12 022:000:000 0% 000:000:000:008:014
hdiskpower14 022:000:000 0% 000:000:000:009:013
hdiskpower15 021:000:000 0% 000:000:000:009:012
datalv3:/data3
LP PP1 PV1 PP2 PV2 PP3 PV3
0001 0046 hdiskpower13
0002 0046 hdiskpower11
0003 0046 hdiskpower12
0004 0047 hdiskpower13
0005 0047 hdiskpower11
0006 0047 hdiskpower12
0007 0048 hdiskpower13
0008 0048 hdiskpower11
0009 0048 hdiskpower12
0010 0049 hdiskpower13
0011 0049 hdiskpower11
0012 0049 hdiskpower12
0013 0050 hdiskpower13
0014 0050 hdiskpower11
0015 0050 hdiskpower12
0016 0051 hdiskpower13
0017 0051 hdiskpower11
0018 0051 hdiskpower12
0019 0052 hdiskpower13
0020 0052 hdiskpower11
0021 0052 hdiskpower12
0022 0053 hdiskpower13
0023 0053 hdiskpower11
0024 0053 hdiskpower12
0025 0054 hdiskpower13
0026 0054 hdiskpower11
0027 0054 hdiskpower12
0028 0055 hdiskpower13
0029 0055 hdiskpower11
0030 0055 hdiskpower12
0031 0056 hdiskpower13
0032 0056 hdiskpower11
0033 0056 hdiskpower12
0034 0057 hdiskpower13
0035 0057 hdiskpower11
0036 0057 hdiskpower12
0037 0058 hdiskpower13
0038 0058 hdiskpower11
0039 0058 hdiskpower12
0040 0059 hdiskpower13
0041 0059 hdiskpower11
0042 0059 hdiskpower12
0043 0060 hdiskpower13
0044 0060 hdiskpower11
0045 0060 hdiskpower12
0046 0061 hdiskpower13
0047 0061 hdiskpower11
0048 0061 hdiskpower12
0049 0062 hdiskpower13
0050 0062 hdiskpower11
0051 0062 hdiskpower12
0052 0063 hdiskpower13
0053 0063 hdiskpower11
0054 0063 hdiskpower12
0055 0064 hdiskpower13
0056 0064 hdiskpower11
0057 0064 hdiskpower12
0058 0065 hdiskpower13
0059 0065 hdiskpower11
0060 0065 hdiskpower12
0061 0066 hdiskpower13
0062 0066 hdiskpower11
0063 0066 hdiskpower12
0064 0067 hdiskpower13
0065 0067 hdiskpower11
0066 0067 hdiskpower12
0067 0049 hdiskpower14
0068 0049 hdiskpower15
0069 0048 hdiskpower14
0070 0048 hdiskpower15
0071 0047 hdiskpower14
0072 0047 hdiskpower15
0073 0046 hdiskpower14
0074 0046 hdiskpower15
0075 0045 hdiskpower14
0076 0045 hdiskpower15
0077 0044 hdiskpower14
0078 0044 hdiskpower15
0079 0043 hdiskpower14
0080 0043 hdiskpower15
0081 0042 hdiskpower14
0082 0042 hdiskpower15
0083 0041 hdiskpower14
0084 0041 hdiskpower15
0085 0067 hdiskpower14
0086 0067 hdiskpower15
0087 0066 hdiskpower14
0088 0066 hdiskpower15
0089 0065 hdiskpower14
0090 0065 hdiskpower15
0091 0064 hdiskpower14
0092 0064 hdiskpower15
0093 0063 hdiskpower14
0094 0063 hdiskpower15
0095 0062 hdiskpower14
0096 0062 hdiskpower15
0097 0061 hdiskpower14
0098 0061 hdiskpower15
0099 0060 hdiskpower14
0100 0060 hdiskpower15
0101 0059 hdiskpower14
0102 0059 hdiskpower15
0103 0058 hdiskpower14
0104 0058 hdiskpower15
0105 0057 hdiskpower14
0106 0057 hdiskpower15
0107 0056 hdiskpower14
0108 0056 hdiskpower15
0109 0055 hdiskpower14

df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/datalv3 7143424 6919164 4% 17 1% /data3

Mon May 12 15:06:42 EDT 2003

References

9.1. Redbooks
• AIX Version 4.3 Differences Guide, SG24-2014
• AIX Version 5.2 Differences Guide, SG24-5765
• RS/6000 Performance Tools in Focus, SG24-4989
• Understanding IBM RS/6000 Performance and Sizing, SG24-4810
• AIX 64-bit Performance in Focus, SG24-5103-00
• AIX Logical Volume Manager, from A to Z: Introduction and Concepts, SG24-5432-00
• AIX 5L Version 5.2 Commands Reference, Volume 3
