Ravi Singh
Technical Sales Specialist – System p
Southfield, MI
email: rsingh@us.ibm.com
TABLE OF CONTENTS
1. Introduction
2. Striping and maximum allocation policy
3. Comparison of striping and maximum allocation policy
4. Filesystems and JFS logs
5. Creating VGs, LVs and Filesystems
6. I/O Performance Tuning
7. Guidelines
8. Example
9. References
Introduction
1.1. Prerequisites
The reader of these guidelines is expected to have a basic understanding of AIX LVM and LVM
commands.
1.3. Planning
AIX LVM provides multiple features and options for creating logical volumes and filesystems. It is
good practice to spend some time collecting information and discussing the data layout
before installing and customizing a new server. The useful information would be:
If the answers to (1) and (2) are no, then one should consider either striping or creating LVs with
maximum allocation policy at the AIX LVM level.
This helps to reduce I/O performance problems, which typically show up as the first bottleneck after
the database is built, the data is loaded, and the system is rolled into production.
1.4. Disclaimer
The discussion here should be used as guidelines and does not guarantee optimum performance.
Server performance varies and depends on many factors, including varying peak load,
system hardware configuration, the software installed and tuned, the applications installed and
configured, I/O and network performance, and the tuning of hardware, OS, and applications.
The method suggested here is one alternative for improving I/O performance
and does not guarantee a given level of system performance.
Striping and Maximum allocation policy
In an ordinary logical volume, the data addresses correspond to the sequence of blocks in the
underlying physical partitions. In a striped logical volume, the data addresses follow the sequence
of stripe units. A complete stripe consists of one stripe unit on each of the physical volumes that
contain part of the striped logical volume. The LVM determines which physical block on which
physical volume corresponds to the block being read or written. If more than one physical volume
is involved, the necessary I/O operations are scheduled simultaneously.
2.2. Inter-physical volume allocation policy (Range of Physical Volumes)
The inter-physical volume allocation policy is one of the attributes of the logical volumes. It
stipulates how many physical volumes are used for the allocation of the physical partitions for the
logical volume.
Minimum: If PVs are not specified while creating an LV, the system uses the minimum number of
PVs to create the LV, and the PPs are allocated contiguously from one PV to the next. By default
this policy is set to m (minimum) to minimize the number of physical volumes used for the
allocation of physical partitions to a logical volume.
Maximum: This makes the LV spread across multiple PVs. This policy is specified with the
value x (maximum) to use all the available PVs in the VG; alternatively, a set of PVs can be
specified while creating the LV.
To check the inter-physical volume allocation policy of a logical volume, you can use the lslv
command; the relevant line of its output is shown below:
...
INTER-POLICY: minimum
...
This spreads an LV across multiple PVs in chunks of the PP size (4 MB, 8 MB, 16 MB, 32 MB,
64 MB, ...), while striping spreads data at the stripe size (4 KB, 8 KB, ..., 128 KB).
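As a minimal sketch (VG, LV, and disk names are illustrative), an LV with the maximum
inter-physical volume allocation policy could be created and verified as follows:
# mklv -y datalv1 -e x -t jfs datavg 64 hdisk2 hdisk3 hdisk4 hdisk5
# lslv datalv1 | grep INTER-POLICY
The -e x flag selects the maximum policy; the 64 PPs are then allocated across the four listed disks.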
Comparison of striped and maximum allocation policy LVs
Specify a multiple of the stripe width (the number of physical volumes) as the number of logical
partitions during the initial creation of the striped logical volume.
You cannot change the stripe width after the initial creation.
Extension is made in multiples of the stripe width.
The allocated logical partitions must be distributed equally across every physical volume composing
the striped logical volume. Suppose we have a volume group composed of two physical volumes: one is
4.5 GB (named hdisk2) and the other is 9.1 GB (named hdisk3). If we choose 8 MB as the physical
partition size for this volume group, there are 537 PPs on hdisk2 and 1084 PPs on hdisk3.
In this example, if we create a striped logical volume in this volume group, then at most 1074 logical
partitions (537 physical partitions on each physical volume) can be allocated to this striped
logical volume; an attempt to create a larger striped logical volume will fail. The
remaining space on hdisk3 cannot be used in the striped logical volume. You can
still use this space for non-striped logical volumes, but doing so can affect the I/O performance of the
striped logical volumes.
In other words, you cannot change the stripe width of an existing striped logical volume.
There is a fundamental difference between the mirroring and striping functions, other than the
functionality itself. You can always mirror an existing non-mirrored logical volume (including a
striped logical volume, from AIX Version 4.3.3 onward), and also remove the mirror from mirrored
logical volumes. But you cannot convert a non-striped logical volume to a striped logical volume,
or vice versa. The only way to create striped logical volumes is to explicitly specify the -S flag
on the mklv command line or use the corresponding SMIT panel.
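A minimal sketch of creating a striped LV (names and sizes are illustrative): the -S flag sets the
stripe size, and the number of logical partitions must be a multiple of the number of disks:
# mklv -y stripedlv -S 64K -t jfs datavg 32 hdisk2 hdisk3
Here 32 LPs are spread in 64 KB stripe units across hdisk2 and hdisk3.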
The AIX LVM provides many ways to control the physical partition allocation of logical
volumes; they are controlled by optional flags of the mklv command. But if you attempt to create a
striped logical volume, some of these optional flags cannot be used with the -S flag.
In AIX Version 4.3.3, the optional flags prohibited with -S are -e, -d, -m, and -s; this rules out, in particular:
the -m mapfile option
physical volume names
Due to the lack of precise control over physical partition allocation, you may not be able to place
the striped logical volumes exactly as you want. The best way to avoid this situation is to
dedicate a volume group to the striped logical volumes. This also benefits
the I/O performance.
If you create a striped logical volume, you should choose disks that have the same
characteristics (especially size). Otherwise, you cannot use the entire surface of the
physical volumes for the striped logical volumes.
If you cannot avoid using different-sized physical volumes for the striped logical
volumes, and you have to use the rest of the space on the larger physical
volume(s), you should minimize the I/O activity on that portion. Otherwise, that activity
might affect the striped logical I/O performance and you might lose the benefit of the
striped logical volumes.
An LV created with this policy spreads across multiple PVs in chunks of one PP (the PP size is
specified while creating the VG), and the PPs are allocated round-robin across all the disks specified
on the mklv command. Hence, as the data grows across multiple PVs and is accessed
randomly, multiple disks perform I/O, thereby boosting performance. In a sequential
access situation, if multiple database instances are running on the same server, contention for
the same disk by the I/O requests of multiple instances is reduced because the data is spread across
multiple PVs.
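To verify how the PPs of such an LV are laid out, the placement can be listed per logical
partition (LV name illustrative):
# lslv -m datalv1
Each line of the map shows an LP number and the PP/PV it resides on; with maximum allocation
policy, consecutive LPs should land on different PVs in turn.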
Filesystems and JFS logs
A file system is a set of files, directories, and other structures. File systems maintain information
and identify where a file or directory's data is located on the disk. Besides files and directories, file
systems consist of:
The superblock
The i-nodes
The data blocks
The allocation bitmaps
The dumpfs command shows you the superblock, as well as the i-node map and disk map
information, for the file system or special device specified.
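For example (filesystem name illustrative):
# dumpfs /data1
The same information can be obtained by giving the device name instead, e.g. dumpfs /dev/datalv1.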
For a 6144-byte file, for example, the i-node contains two addresses: one address contains the first
4096 bytes and a second address contains the remaining 2048 bytes (a
partial logical block). If a file has a large number of logical blocks, the i-node does not contain the
disk addresses. Instead, the i-node points to an indirect block that contains the additional
addresses.
The number of disk i-nodes available to a file system depends on the size of the file system, the
allocation group size (8 MB by default), and the ratio of bytes per i-node (4096 by default). These
parameters are given to the mkfs command at file system creation. When enough files have
been created to use all the available i-nodes, no more files can be created, even if the file system
has free space. The number of available i-nodes can be determined by using the df -v command.
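For example, to check the i-node usage of a filesystem (name illustrative):
# df -v /data1
The Iused and Ifree columns show how many i-nodes are in use and how many remain.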
The exact same methods are used to address disk space in compressed and fragmented file
systems.
4.2.6 Fragments
The journaled file system fragment support allows disk space to be divided into allocation units
that are smaller than the default size of 4096 bytes. Smaller allocation units or fragments
minimize wasted disk space by more efficiently storing the data in a file or directory's partial
logical blocks. The functional behavior of journaled file system fragment support is based on that
provided by Berkeley Software Distribution (BSD) fragment support. Similar to BSD, the JFS
fragment support allows users to specify the number of i-nodes that a file system has.
4.2.7.2 Identifying fragment size and NBPI
The file system fragment size and the number-of-bytes-per-i-node (NBPI) value can be identified
through the lsfs command or the System Management Interface Tool (SMIT). For application
programs, the statfs subroutine can be used to identify the file system fragment size.
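For example (filesystem name illustrative):
# lsfs -q /data1
The -q flag queries the superblock and reports, among other attributes, the fragment size and the
nbpi value.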
JFS log transactions occur, for example, when:
a file is being created or deleted
a write() occurs for a file opened with O_SYNC
fsync() or sync() is called
a file is opened with O_APPEND
a write causes an indirect or double-indirect block to be allocated
The use of a JFS log allows for rapid and clean recovery of file systems if a system goes down.
However, there may be a performance trade-off here. If an application is doing synchronous I/O
or is creating and/or removing many files in a short amount of time, then there may be a lot of I/O
going to the JFS log logical volume. If both the JFS log logical volume and the file system logical
volume are on the same physical disk, then this could cause an I/O bottleneck. The
recommendation would be to migrate the JFS log device to another physical disk. Information
about I/Os to the JFS log can be recorded using the filemon command. If you notice that a file
system and its log device are both heavily utilized, it may be better to put each one on a separate
physical disk (assuming that there is more than one disk in that volume group). This can be
done using the migratepv command or via SMIT.
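A minimal sketch of moving a jfslog to a less busy disk with migratepv (LV and disk names are
illustrative):
# migratepv -l loglv00 hdisk0 hdisk4
This moves only the physical partitions of loglv00 from hdisk0 to hdisk4; the filesystems using
the log can stay mounted.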
Two things can be done to prevent the jfslog from becoming a performance bottleneck. The first is
to increase the size of the jfslog, and the second is to create more than one jfslog per volume
group.
By default, a jfslog is created with one logical partition in a volume group. When the amount
of writes to the jfslog increases beyond a threshold, such that there is not enough time to commit these
logs, any further writes are suspended. These pending writes remain suspended until all the
outstanding writes are committed and the metadata and the jfslog are in sync with each other.
If the jfslog is made bigger than the default, I/O can continue to proceed because the jfslog wrap
threshold would not be reached as easily. The steps taken to increase the jfslog would be as
follows:
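1. Choose a physical volume with relatively low I/O activity for the new jfslog.
2. Create a jfslog logical volume with more than one logical partition; a minimal sketch (the
two-LP size is illustrative):
# mklv -y LVname -t jfslog VGname 2 PVname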
where LVname is the name of the jfslog logical volume, VGname is the name of the volume group
on which it is to reside, and PVname is the hdisk name on which the jfslog is to be located.
3. When the jfslog logical volume has been created, it has to be formatted:
# /usr/sbin/logform /dev/LVname
4. The next step is to modify the affected filesystem or filesystems and the logical volume control
block (LVCB).
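A sketch of this step, using the chfs command as also shown in Section 5:
# chfs -a logname=/dev/LVname /filesystemname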
5. Finally, unmount and then mount the affected file system so that this new jfslog logical volume
can be used.
The steps outlined in the earlier section can be used to create these additional jfslogs. Where
possible, the jfslogs should be created on disks with relatively low activity so as to free the disk
resources to focus on the logging activity.
It is good practice to create one jfslog for each big filesystem, on a separate PV. If the
datavg has 16 disks, you can create loglv01 through loglv16, one on each of these disks, and
assign one jfslog to each filesystem when the filesystem is created.
The size can be specified in MB or GB; by default, it is in 512-byte blocks. The suffix M is used
to specify the size in megabytes and G to specify it in gigabytes.
The names of the RAM disks are in the form of /dev/rramdiskx where x is the logical RAM disk
number (0 through 63). The mkramdisk command also creates block special device entries (for
example, /dev/ramdisk5) although use of the block device interface is discouraged because it
adds overhead. The device special files in /dev are owned by root with a mode of 600. However,
the mode, owner, and group ID can be changed using normal system commands.
RAM disks can be removed by using the rmramdisk command. RAM disks are also removed
when the machine is rebooted.
4.6.2. An example
To set up a RAM disk that is approximately 20 MB in size and create a file system on that RAM
disk, enter the following:
mkramdisk 40000
ls -l /dev |grep ram
mkfs -V jfs /dev/ramdiskx
mkdir /ramdiskx
mount -V jfs -o nointegrity /dev/ramdiskx /ramdiskx
where x is the logical RAM disk number. By default, RAM disk pages are pinned. Use the -u flag
to create RAM disk pages that are not pinned.
Note: In AIX 5.1 and 4.3.3, the maximum size of a RAM disk is 2 GB.
Creating VGs, LVs and Filesystems
While creating LVs with maximum allocation policy, choose between 8 and 16 disks per VG,
depending on the size of the disks. More disks can be added to the VG later if the filesystems are to
be expanded.
Create a VG with 8 to 16 disks for datavg; this will have filesystems for the database only
(e.g., data, log, temp, index).
Create a second VG with 8 to 16 disks for dumpvg; this will have database dump
filesystems (e.g., /dump1, /dump2, ...).
Create a third VG with the required number of PVs for appvg; this will have filesystems for
application binaries, temporary storage area, and so on (/global/site/vendor/Sybase,
/clocal/udb, ...).
Choose the lowest possible PP size while creating the VGs to get a better spread of data
across multiple PVs or disks.
Grouping the filesystems on the basis of their usage is the first step while creating them. This helps
to separate the PVs used for accessing data, dumps, and applications, and gives better control for
managing disks. As a guideline, do not create a VG with more than 32 disks; a smaller VG is
easier to manage and administer, and AIX LVM commands run faster.
During disk migration from one SAN to another, or from one frame of disks to another, it is easy to
use LVM mirroring: add the new set of disks to the VG, mirror onto them, and then
break the mirror on the old disks. To plan for such a migration later, create VGs with the big VG
option enabled, so that a VG can have up to 128 PVs and 512 LVs.
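A minimal sketch of creating such a VG (names, PP size, and disks are illustrative): the -B flag
creates a big VG, and -s sets the PP size in MB.
# mkvg -B -y datavg -s 4 hdiskpower11 hdiskpower12 hdiskpower13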
It is equally important to have free PPs available in each VG and on each PV, so that a filesystem
can be expanded later and so that free PPs are available during disk migration.
LVs can be created using SMIT, WSM, or the command line.
When you create the second LV, for the physical volume names, start from hdisk2: hdisk2,
hdisk3, ..., hdisk8, hdisk1. This makes sure the starting PP and PV for the second LV are not the
same as for the first LV. Similarly, for the third LV, the order can be hdisk3, hdisk4, ..., hdisk2,
hdisk1, and so on.
This method of creating LVs gives a better distribution of data and disk access. If these LVs are
used for filesystems or containers to store data, log, or temp, then the probability of the same disk
being accessed at the same time by multiple filesystems is reduced, which may result in reduced
I/O wait.
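A minimal sketch of this rotation for three LVs on three disks (names and sizes are illustrative):
# mklv -y datalv1 -e x -t jfs datavg 66 hdisk1 hdisk2 hdisk3
# mklv -y datalv2 -e x -t jfs datavg 66 hdisk2 hdisk3 hdisk1
# mklv -y datalv3 -e x -t jfs datavg 66 hdisk3 hdisk1 hdisk2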
Here, three LVs are created on three disks, each allocated starting from a different disk.
5.2.3. JFSlogs
As discussed in Section 4.5, create one JFSlog for each LV created and distribute them across
multiple PVs.
Using the above example of three LVs on three PVs, the commands given below create three
JFSlogs.
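A minimal sketch, following the same pattern as the Example section (disk names are illustrative):
# mklv -y loglv11 -t jfslog datavg 1 hdisk1
# mklv -y loglv12 -t jfslog datavg 1 hdisk2
# mklv -y loglv13 -t jfslog datavg 1 hdisk3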
Each of the JFSlogs created here will be used later while creating filesystems, and each is assigned
to one filesystem only. These JFSlogs should be formatted before being assigned to filesystems;
an example is given below.
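For example, to format the first of the logs created above:
# logform /dev/loglv11
logform asks for confirmation before destroying any existing log data; answer y.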
In AIX 5.1 and 5.2, if JFS2 filesystems are used, then the type (-t flag) should be specified as
jfs2log.
crfs -vjfs -ddatalv1 -m/data1 -abf=true -Ano -prw -tno -afrag=4096 -anbpi=4096 -aag=8
-alogname=loglv11
mount /filesystemname
The flag -abf=true enables large files, and -alogname specifies the name of the
jfslog/jfs2log to be used.
To assign a different jfslog to an existing filesystem:
umount /filesystemname
chfs -a logname=/dev/LVname /filesystemname
mount /filesystemname
Set the max LP limit to a new value for each LV, if required. When an LV is created, the
max LP limit is set to either 512 or, if greater, the number of PPs allocated.
Add a number of PVs equal to the number of existing PVs in the VG; i.e., if the VG has 8 PVs, then
add another 8 PVs. If this is not permissible, add PVs in multiples of two. This ensures you
have enough PVs to spread the PPs across multiple PVs.
Add all the PVs in one step and then expand the filesystem. If you add 4 PVs and expand the
filesystem in a first step, then later add another 4 PVs and expand again, the PPs will be
created round-robin across 4 PVs during the first step and across the other 4 PVs during the second
step. If you add all 8 PVs in one step and then expand, the PPs will be spread across all 8 PVs.
If you add only one PV to the VG and expand the filesystem, all the new PPs will be created
contiguously on the newly added PV, which may become an I/O bottleneck.
Expand the filesystem using 'smitty chfs' or from the command line using the chfs command.
Command line examples for expanding a filesystem by adding one PV at a time to the VG and
adding multiple PVs to the VG are given below. The distribution of PPs across the PVs is given in
the Sample Output section.
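A minimal sketch of the two cases (VG, LV, filesystem, and disk names and the size delta are
illustrative; chfs -a size=+N takes 512-byte blocks, as in the Example section). Adding one PV
at a time:
# extendvg datavg hdisk9
# mklv -y loglv14 -t jfslog datavg 1 hdisk9
# chfs -a size=+2793042 /data1
Adding multiple PVs in one step (raising the max LP limit first, if needed):
# extendvg datavg hdisk9 hdisk10 hdisk11 hdisk12
# chlv -x 1024 datalv1
# chfs -a size=+2793042 /data1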
It can be seen that a new JFSlog is created for each PV added; this can be used if a new
filesystem is created or if the jfslog of an existing filesystem is to be changed.
I/O performance tuning
Running vmtune without any options displays the current settings:
# /usr/samples/kernel/vmtune
• numclust
If a server has large memory, you should probably do some tuning so that when syncd runs,
there won't be a huge amount of I/O that gets flushed to disk. One of the things you should
consider is turning on the write-behind options using the vmtune command. This increases
performance by asynchronously writing modified pages in memory to disk rather than waiting for
syncd to do the flushing. Sequential write-behind initiates I/O for pages if the VMM detects that
writing is sequential. The file system divides each file into clusters of four dirty pages of 4 KB
each. These 16 KB clusters are not written to disk until the program begins to write the next 16
KB cluster. At this point, the file system forces the four dirty pages to be written to disk. By
spreading out the I/O over time instead of waiting for syncd, it prevents I/O bottlenecks from
taking place. A benefit derived from the clustering is that file fragmentation is diminished.
If sequential writes of very large files are envisaged, it may benefit performance to boost the
numclust value to a higher figure. Any integer greater than 0
is valid, and the default is 1 cluster. Care must be taken when changing this parameter to ensure
that the devices used on the machine support fast writes.
# /usr/samples/kernel/vmtune -c 2
• maxrandwrt
Another type of write-behind supported by the vmtune command is the random write-behind. This
option can be used to specify the threshold (in 4 KB pages) for random writes to be accumulated
in memory before the pages are written to disk. This threshold is on a per-file basis. You may also
want to consider turning on random write behind. To turn on random write-behind, try the
following value:
# /usr/samples/kernel/vmtune -W 128
It should be noted that not every application benefits from write-behind. In
the case of database index creation, it is actually beneficial to disable write-behind before the
creation activity. Write-behind can then be re-enabled after the indexes have been created.
• maxperm
If it is intended for the system to serve as an NFS file server, the large bulk of its memory would
be dedicated to storing persistent file pages rather than working segments. Thus, it would help
performance to push up the maxperm value to take up as much of the memory as possible. The
following command would do just that:
# /usr/samples/kernel/vmtune -P 100
The converse is true if the system is to be used for numerically intensive computations or in some
other application where working segments form the dominant part of the virtual memory. In such a
situation, the minperm and maxperm values should be lowered. Fifty percent would be a good
start.
• maxpgahead
If large files are going to be read into memory often, maxpgahead should be increased from
its default value of 8. The new value should be a power of 2, because the read-ahead algorithm
keeps doubling the number of pages read. The flag to modify maxpgahead is -R. So, to set
the value to 16, you would enter:
# /usr/samples/kernel/vmtune -R 16
Turning on and tuning these parameters (numclust and maxrandwrt) can reduce an I/O
bottleneck because writes to disk do not have to wait on the syncd daemon, but rather can
be spread more evenly over time. There will also be less file fragmentation because dirty pages
are clustered before being written to disk.
The other vmtune parameters that can be tuned for I/O performance are listed below. It is
important to bear in mind, though, that using large files does not necessarily warrant any tuning of
these parameters. The decision to tune them will depend very much on what type of I/Os are
occurring to the files.
The size of the I/O, whether it is raw or journaled file system I/O, and the rate at which the I/O is
taking place are important considerations.
• numfsbufs
This parameter specifies the number of file system buf structs. Buf structs are defined in
/usr/include/sys/buf.h. When doing writes, each write buffer will have an associated buffer header
as described by the struct buf. This header describes the contents of the buffer.
Increasing this value will help write performance for very large write sizes on devices that support
very fast writes. A filesystem has to be unmounted and then mounted again after changing
this parameter for it to take effect.
# /usr/samples/kernel/vmtune -b 512
The default value for this parameter is 93 for AIX V4. Each filesystem gets two pages worth of buf
structs. Since two pages is 8192 bytes and since sizeof(struct buf) is about 88, the ratio is around
93 (8192/88=93). The value of numfsbufs should be based on how many simultaneous I/Os you
would be doing to a single filesystem. Usually, though, this figure will be left unchanged, unless
your application is issuing very large writes (many megabytes at a time) to fast I/O devices
such as HIPPI.
• lvm_bufcnt
This parameter specifies the number of LVM buffers for raw physical I/Os. If the striped logical
volumes are on raw logical volumes and writes larger than 1.125 MB are being done to these
striped raw logical volumes, increasing this parameter might increase throughput of the write
activity.
# /usr/samples/kernel/vmtune -u 16
The 1.125 MB figure comes about because the default value of lvm_bufcnt is 9 and the
maximum size the LVM can handle in one write is 128 KB (9 buffers * 128 KB = 1.125 MB).
• hd_pbuf_cnt
This attribute controls the number of pbufs available to the LVM device driver. Pbufs are pinned
memory buffers used to hold I/O requests. In AIX V4, a single pbuf is used for each sequential I/O
request regardless of the number of pages in that I/O. The default allows you to have a queue
of at least 16 I/Os to each disk, which is quite a lot. So, it is often not a bottleneck. However, if
you have a RAID array which combines a lot of physical disks into one hdisk, you may need to
increase it.
The default hd6 paging space created in rootvg would, in most cases, be far from sufficient, and it
will be necessary to expand it. Care must be taken when expanding hd6 to ensure
that it is one contiguous whole and not fragmented, since fragmentation would impact
performance negatively.
Ideally, there should be several paging spaces of roughly equal sizes created, each on a separate
physical disk drive. You should also attempt to create these paging spaces on relatively lightly
loaded physical volumes so as to avoid causing any of these drives to become bottlenecks.
These are guidelines; during performance tuning, monitor the paging space usage and
increase it as required.
The vmtune command is used to modify the VMM parameters that control the behavior of the
memory-management, CPU and I/O subsystems. The vmtune command can only be invoked
successfully by the root user, and any changes made to its parameters will only remain in effect
until the next reboot.
6.4. SSA adapters and loops
The minimum number of disks that can be configured in a loop is 4, or one quadrant of the enclosure.
An SSA enclosure can have 16 disks. Dummies can be used in each quadrant to distribute the disks
across all four quadrants. Hence, create loops with either 4 or 8 disks per loop.
6.8. Filemon
The filemon command is one of the performance tools used to investigate I/O related
performance. The syntax of the command can be found in the AIX documentation or in the
Performance and Tuning Guide.
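A minimal sketch of using filemon to find busy logical and physical volumes (the output file name
is illustrative):
# filemon -o fmon.out -O lv,pv
(run the workload of interest for a while)
# trcstop
The report in fmon.out lists the most active logical volumes, including any jfslogs, and the most
active physical volumes.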
Guidelines
7.5. ulimits
Tune the ulimits of specific users, or of all users, so that files greater than 2 GB can be created.
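A minimal sketch (the user name is illustrative): setting the fsize attribute to -1 in
/etc/security/limits removes the file-size limit for a user, for example:
# chuser fsize=-1 db2inst1
The user must log out and log in again for the new limit to take effect.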
Example
echo 'This script assumes hdiskpower11, 12, 13, 14 and 15 are free'
echo 'Creates VG datavg, LVs datalv1, datalv2 and datalv3'
echo 'Creates JFSLog loglv11, loglv12 and loglv13'
echo 'Creates filesystems /data1, /data2 and /data3'
echo 'Is it OK (y/n)?'
read OPT JUNK
if [ "$OPT" = "y" ]
then
echo 'Creating JFS logs'
mklv -y loglv11 -tjfslog datavg 1 hdiskpower11
mklv -y loglv12 -tjfslog datavg 1 hdiskpower12
mklv -y loglv13 -tjfslog datavg 1 hdiskpower13
crfs -vjfs -ddatalv2 -m/data2 -Ano -prw -tno -afrag=4096 -anbpi=4096 -aag=8 -alogname=loglv12
crfs -vjfs -ddatalv3 -m/data3 -Ano -prw -tno -afrag=4096 -anbpi=4096 -aag=8 -alogname=loglv13
date
echo 'This is an example to show'
echo '1> Creating LVs and Filesystems on three PVs with max allocation policy'
echo '2> Creating one JFSLog on a separate PV for each filesystem'
echo '3> Creating filesystems'
echo '4> When the VG is full, extending it by adding PV(s)'
echo '5> Expanding Filesystems'
echo '\nNow expanding /data1, /data2 and /data3 by adding 22 PPs to each'
chfs -asize=+2793042 /data1
chfs -asize=+2793042 /data2
chfs -asize=+2793042 /data3
(
echo
date
echo '\nAfter adding hdiskpower15'
echo
lsvg -l datavg
echo '\nAll the added PPs now reside only on hdiskpower15'
lslv -l datalv1
echo '\n df -k'
df -k /data1
echo
lslv -m datalv1
)
else
echo 'Now expanding /data1, /data2 and /data3 by adding 22 PPs to each'
echo
date
fi
fi
datalv2
datalv3
Creating Filesystems
Based on the parameters chosen, the new /data1 JFS file system
is limited to a maximum size of 134217728 (512 byte blocks)
datavg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
loglv11 jfslog 1 1 1 open/syncd N/A
loglv12 jfslog 1 1 1 open/syncd N/A
loglv13 jfslog 1 1 1 open/syncd N/A
datalv1 jfs 66 66 3 open/syncd /data1
datalv2 jfs 66 66 3 open/syncd /data2
datalv3 jfs 66 66 3 open/syncd /data3
df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/datalv1 4325376 4189564 4% 17 1% /data1
/dev/datalv2 4325376 4189564 4% 17 1% /data2
/dev/datalv3 4325376 4189564 4% 17 1% /data3
===========================================
Phase 2 : Adding more PVs and expanding already created LVs and FSs
Mon May 12 15:06:39 EDT 2003
df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/datalv1 7143424 6919164 4% 17 1% /data1
df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/datalv2 7143424 6919164 4% 17 1% /data2
df -k
Filesystem 1024-blocks Free %Used Iused %Iused Mounted on
/dev/datalv3 7143424 6919164 4% 17 1% /data3
References
9.1. Redbooks
AIX Version 4.3 Differences Guide, SG24-2014
AIX Version 5.2 Differences Guide, SG24-5765
RS/6000 Performance Tools in Focus, SG24-4989
Understanding IBM RS/6000 Performance and Sizing, SG24-4810
AIX 64-bit Performance in Focus, SG24-5103
AIX Logical Volume Manager, from A to Z: Introduction and Concepts, SG24-5432
AIX 5L Version 5.2 Commands Reference, Volume 3