
HP-UX LVM, Disk and FileSystem Tasks

Table of Contents
1. Basic Tasks
2. Recognizing and Initializing in LVM a newly added Disk / LUN (Discovery / Rescan)
3. Removing a Physical Volume
4. Creating a Volume Group
5. Adding a Disk to a Volume Group
6. Removing a Disk from a Volume Group
7. Removing a Volume Group
8. Creating a Logical Volume and Mounting its File System
9. Extending a Logical Volume and its FileSystem
10. Reducing a Logical Volume and its FileSystem
11. Adding a Disk to a Volume Group and Creating a Logical Volume
12. Adding a Disk, Creating a Volume Group and Creating a Logical Volume
13. Adding a Disk to a Volume Group and Extending a Logical Volume
14. Adding a LUN / External Disk, Extending the Volume Group and the Logical Volume
15. Importing and Exporting Volume Groups
16. Removing a Logical Volume
17. Moving Disks Within a System (LVM Configuration with Persistent Device Files)
18. Moving Disks Within a System (LVM Configuration with Legacy Device Files)
19. Moving Disks Between Systems
20. Moving Data to a Different Physical Volume
21. Replacing a Mirrored Non-Boot Disk
22. Replacing an Unmirrored Non-Boot Disk
23. Replacing a Mirrored Boot Disk
24. Creating a Spare Disk
25. Reinstating a Spare Disk
26. Changing Physical Volume Boot Types
27. Enabling and Disabling a Path to a Physical Volume
28. Creating an Alternate Boot Disk
29. Mirroring the Boot Disk
30. Mirroring the Boot Disk on HP 9000 Servers
31. Mirroring the Boot Disk on HP Integrity Servers
32. Backing Up a Mirrored Logical Volume
33. Backing Up and Restoring Volume Group Configuration
34. Quiescing and Resuming a Volume Group
35. Adding a Mirror to a Logical Volume
36. Removing a Mirror from a Logical Volume
37. Increasing the Primary Swap
38. Identifying Available Disks to be Used in a Volume Group
39. Creating a Physical Volume Group (PVG)
40. Creating (Mirroring) Logical Volumes on Specific Physical Volumes

1. Basic Tasks

Search for attached disk:


ioscan -fnC disk
Get Disk Info:
diskinfo /dev/rdsk/c0t1d0
Initialize a disk for use with LVM:
pvcreate -f /dev/rdsk/c0t1d0
Initialize the disk and check the disk for bad blocks:
mediainit /dev/rdsk/c0t1d0
Display volume group info:
vgdisplay -v vg01

2. Recognizing and Initializing in LVM a newly added Disk / LUN (Discovery / Rescan)

When a new disk is added to the server or a new LUN is assigned to the host, you have to run the following procedure so that the operating system discovers the disk/LUN and LVM can initialize it.
Check for the new hardware (the new devices will have a hardware path, but no device file associated with it):
ioscan -fnC disk | more
Create the device file for the hardware path:
insf
If using vpaths then create the vpath association:
/opt/IBMdpo/bin/cfgvpath
/opt/IBMdpo may not be the path of the sdd software, so run "whereis cfgvpath" to find the correct path.
Verify the new devices / vpaths:
ioscan -fnC disk
strings /etc/vpath.cfg
/opt/IBMdpo/bin/showvpath
Create the physical volume (Initialize the disk for use with LVM).
For each disk (not vpath) issue:
pvcreate -f /dev/rdsk/cxtxdx
For vpaths this information can be found in the file /etc/vpath.cfg.
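A quick way to spot which devices appeared after the rescan is to diff the output captured before and after running insf. A minimal sketch of the idea, using fabricated sample captures in place of real `ioscan -fnC disk` output (ioscan itself only exists on HP-UX):

```shell
# Fabricated "before" and "after" device lists standing in for ioscan output.
printf 'disk 0 c0t0d0\ndisk 1 c0t1d0\n' > /tmp/ioscan_before.txt
printf 'disk 0 c0t0d0\ndisk 1 c0t1d0\ndisk 2 c0t4d0\n' > /tmp/ioscan_after.txt

# Lines present only in the "after" capture are the newly discovered devices.
comm -13 /tmp/ioscan_before.txt /tmp/ioscan_after.txt
```

(comm expects its inputs to be sorted, which is easily arranged by piping the captures through sort.)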

3. Removing a Physical Volume

To remove a physical volume that is not used or referenced by LVM structures (volume groups and logical volumes), follow this quick procedure.
To remove a physical volume that is used or referenced by LVM structures, you first have to remove the logical volumes and the volume groups which rely on it by following the procedures detailed in the sections below (Removing a Logical Volume and Removing a Volume Group); then you can issue the commands in this section.
Identify the hardware path of the disk to remove:
ioscan -fnC disk
Remove the special device file:
rmsf -H <HW_path_from_ioscan_output>

4. Creating a Volume Group

Create the device structure needed for a new volume group.


In the mknod command the group number (the last parameter) is a hexadecimal number and it must be different for each volume group. For example, for the volume group vg02 (typically the second volume group created) that number would be 0x020000.
The default limit is 10 volume groups as set by the kernel parameter maxvgs.
mkdir /dev/vgdata
cd /dev/vgdata

mknod group c 64 0x010000


chown -R root:sys /dev/vgdata
chmod 755 /dev/vgdata
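The group minor number follows the pattern 0xNN0000, where NN is the hexadecimal sequence number of the volume group. A small sketch of how it can be derived (the vg_minor helper is mine, not an HP-UX command):

```shell
# Build the 0xNN0000 minor number for the Nth volume group on the system.
vg_minor() {
    printf '0x%02x0000\n' "$1"
}

vg_minor 1    # first volume group, e.g. vgdata
vg_minor 2    # second volume group, e.g. vg02
```

This prints 0x010000 and 0x020000, matching the convention described above.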
Create volume group vgdata - I suggest NOT using the default values for -s (physical_extent_size = 4 MB) and -e (max_physical_extents = 1016):
vgcreate -e 65535 -s 64 /dev/vgdata /dev/dsk/c0t0d4
vgdisplay -v vgdata
If you're expecting to use more than 16 physical disks, use the -p option (range from 1 to 256 disks).
If the system on which you're creating the volume group is a node of an HP ServiceGuard Cluster, then you have to present the new structure to the cluster to make it aware of it.
To do this:
1. Unmount the filesystem which relies on the logical volume and deactivate the volume group:
umount /datavol
vgchange -a n /dev/vgdata
2. Edit the control script file of the cluster package that will use the logical
volume to add the reference to it:
vi /etc/cmcluster/package_name/package_name.run
3. Export the volume group map:
vgexport -p -s -m /tmp/vgdata.map /dev/vgdata
4. Copy map file and the package control file to all nodes by using rcp or scp:
rcp /etc/cmcluster/package_name/package_name.run gldev:/etc/cmcluster/package_name/
5. Check and note the minor number in group file for the volume group:
ls -l /dev/vgdata/group
6. On the alternate node, create the control file using the same minor number of the primary node noted at step 5 (always on the alternate node):
mknod /dev/vgdata/group c 64 <minor_number_from_step_5>
7. Import the volume group on the alternate node:
vgimport -s -m /tmp/vgdata.map /dev/vgdata
8. Check whether the volume group can be activated on the alternate node and make a backup of the volume group configuration:
vgchange -a y /dev/vgdata
vgcfgbackup /dev/vgdata
9. On the alternate node, deactivate the volume group and check:
vgchange -a n /dev/vgdata
vgdisplay -v vgdata
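The minor number noted at step 5 is the sixth field of the ls -l output; a sketch of extracting it, run here against a fabricated sample line rather than a live /dev/vgdata/group:

```shell
# Fabricated `ls -l /dev/vgdata/group` output; field 6 is the minor number.
sample='crw-r-----   1 root   sys   64 0x010000 Apr  4 10:20 group'
minor=$(printf '%s\n' "$sample" | awk '{print $6}')
echo "$minor"   # -> 0x010000
```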
Note: when a volume group is created and no max_pe is specified, the maximum physical extents per volume (max_pe parameter) will be set to the max_pe of the largest physical volume (PV) or 1016, whichever is greater.
The effect of not specifying the max_pe parameter at volume group creation through the -e option is that any PV added to the volume group in the future, regardless of its size, will be limited to the volume group creation value of max_pe.
Therefore, consider increasing max_pe to accommodate PVs that may well be larger than the largest PV used to create the volume group.

The formula to use to determine the value is:


physical_extent_size * max_pe = size_of_the_disk
The default value for physical_extent_size is 4 MB and the maximum value for max_pe is 65535 (example: for an 18 GB disk use a value of 4608 for max_pe: 4 MB * 4608 = 18 GB).
There is also a default value of a maximum of 16 disks per volume group.
The following is an example of the creation of a volume group modifying these two parameters (max_pe = 4608, maximum number of disks = 24):
vgcreate -e 4608 -p 24 /dev/vgdata /dev/dsk/c0t0d4
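The max_pe arithmetic above is easy to check; a sketch of the formula with an assumed 18 GB disk and the default 4 MB extent size:

```shell
# max_pe = disk_size / physical_extent_size (both in MB)
disk_size_mb=18432   # 18 GB
pe_size_mb=4         # default physical extent size
max_pe=$((disk_size_mb / pe_size_mb))
echo "$max_pe"       # -> 4608, the value used in the vgcreate example
```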

5. Adding a Disk to a Volume Group

Run a discovery of the physical disks / LUNs:


ioscan -fnCdisk
Prepare the disk - if the disk was previously used in another volume group then use the -f option to overwrite the existing volume group information on the disk:
pvcreate -f /dev/rdsk/c0t4d0
vgdisplay -v | grep PV
Add the disk to the volume group:
vgextend vg01 /dev/dsk/c0t4d0
vgdisplay -v vg01

6. Removing a Disk from a Volume Group

Check whether the disk is still used:


pvdisplay /dev/dsk/c0t4d0
Look at the line starting with "Allocated PE": the number at the end of the line should be 0. If it is not, the disk is still in use.
Check the volume group layout, the number of physical volumes (PV) in the volume group, and the presence of the disk in the volume group:
vgdisplay -v vg01
vgdisplay -v vg01 | grep PV
vgdisplay -v vg01 | grep PV | grep c0t4d0
Remove disk from volume group and check the results:
vgreduce vg01 /dev/dsk/c0t4d0
vgdisplay -v vg01 | grep PV
vgdisplay -v vg01 | grep PV | grep c0t4d0
vgdisplay -v vg01
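The "Allocated PE" check can be scripted; a minimal sketch that parses captured pvdisplay text (the sample below stands in for a real `pvdisplay /dev/dsk/c0t4d0` run):

```shell
# Fabricated pvdisplay fragment; on a live system capture the real output.
sample='PV Name        /dev/dsk/c0t4d0
Allocated PE   0
Free PE        4340'

alloc=$(printf '%s\n' "$sample" | awk '/^Allocated PE/ {print $3}')
if [ "$alloc" -eq 0 ]; then
    echo "disk is free"
else
    echo "disk still in use"
fi
```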

7. Removing a Volume Group

Before removing a volume group, backup the data on the volumes of the volume group.
Identify all of the logical volumes in this volume group:
vgdisplay -v /dev/vg01
OR:
vgdisplay -v vg01 | grep "LV Name" | awk '{print $3}'
Kill the processes using the volumes and unmount all of the logical volumes with
a command such as the following repeated for each of the volumes in the volume
group:
fuser -ku /dev/vg01/lvhome
umount /dev/vg01/lvhome
You can avoid issuing the above two commands manually for each of the volumes in the volume group by using a for loop such as the following (customize it for your needs):
for vol_name in `vgdisplay -v vg01 | grep "LV Name" | awk '{print $3}'`; do fuser -ku $vol_name; sleep 2; umount -f $vol_name; echo Unmounted $vol_name; done
Check that all of the logical volumes in the volume group are unmounted:
bdf
bdf | grep vg01
After you have freed and unmounted all of the logical volumes, remove the volume group and make some checks:
vgexport /dev/vg01
vgdisplay -v vg01
vgdisplay -v | grep vg01
vgdisplay -v
Note: using vgexport to remove a volume group is easier and faster than using lvremove on each logical volume, vgreduce on each of the physical volumes (except the last one) and a final vgremove.
Moreover, another advantage is that the /dev/vg01 directory is also removed.
Anyway, for completeness, this is the common alternative procedure you can follow instead of using vgexport.
After you have freed and unmounted all of the logical volumes by using fuser and umount, issue the following command for each logical volume in the volume group:
lvremove /dev/vg01/lvoln
Then, issue the following command for each disk in the volume group:
vgreduce /dev/vg01 /dev/disk/diskn
You can avoid issuing the above two commands manually for each of the volumes and disks in the volume group by using for loops such as the following (customize them for your needs):
for vol_name in `vgdisplay -v vg01 | grep "LV Name" | awk '{print $3}'`; do lvremove $vol_name; sleep 2; echo Removed $vol_name; done
for disk_name in `vgdisplay -v vg01 | grep "PV Name" | awk '{print $3}'`; do vgreduce /dev/vg01 $disk_name; sleep 2; echo Removed $disk_name; done
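The vgdisplay parsing the loops rely on can be exercised against captured output; a sketch with a fabricated vgdisplay fragment (on a live system the third field is the full /dev/vg01/... path):

```shell
# Fabricated `vgdisplay -v vg01` fragment.
sample='LV Name        /dev/vg01/lvol1
LV Status      available/syncd
LV Name        /dev/vg01/lvol2'

# Same extraction as the loops above; echo stands in for lvremove/vgreduce.
for lv in $(printf '%s\n' "$sample" | awk '/^LV Name/ {print $3}'); do
    echo "would remove $lv"
done
```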
Finally, remove the volume group and make some checks:
vgremove vg01

vgdisplay -v vg01
vgdisplay -v | grep vg01
vgdisplay -v
If the system on which you're removing the volume group is a node of an HP ServiceGuard Cluster, then you have to make the cluster aware of the change. To do this follow the steps about deploying LVM configuration on HP ServiceGuard Cluster nodes at the end of the section "Creating a Volume Group".

8. Creating a Logical Volume and Mounting its File System

Run a discovery of the physical disks / LUNs:


ioscan -fnCdisk
Create a 100 GB logical volume lvdata:
lvcreate -n lvdata -L 100000 -p w vgdata
lvdisplay -v lvdata
Create the filesystem:
newfs -F vxfs /dev/vgdata/rlvdata
Create the mount point and mount the file system:
mkdir /datavol
mount /dev/vgdata/lvdata /datavol
If the volume is dedicated to a specific application (such as a database instance or an application server instance), then typically you may need to assign that mount point permissions according to the application's needs and its ownership to the application's user and group, for example:
chmod 755 /datavol
chown oracle:oragroup /datavol
To have the filesystem mounted at system boot you must edit /etc/fstab to add the entry corresponding to the new volume and its filesystem:
vi /etc/fstab
/dev/vgdata/lvdata /datavol vxfs defaults 0 2
If the system on which you're creating the logical volume is a node of an HP ServiceGuard Cluster, then you have to present the new structure to the cluster to make it aware of it.
To do this:
1. Unmount the filesystem which relies on the logical volume and deactivate the volume group:
umount /datavol
vgchange -a n /dev/vgdata
2. Edit the control script file of the cluster package that will use the logical
volume to add the reference to it:
vi /etc/cmcluster/package_name/package_name.run
3. Export the volume group map:
vgexport -p -s -m /tmp/vgdata.map /dev/vgdata

4. Copy map file and the package control file to all nodes by using rcp or scp:
rcp /etc/cmcluster/package_name/package_name.run gldev:/etc/cmcluster/package_name/
5. Check and note the minor number in group file for the volume group:
ls -l /dev/vgdata/group
6. On the alternate node, create the control file using the same minor number of the primary node noted at step 5 (always on the alternate node):
mknod /dev/vgdata/group c 64 <minor_number_from_step_5>
7. Import the volume group on the alternate node:
vgimport -s -m /tmp/vgdata.map /dev/vgdata
8. Check whether the volume group can be activated on the alternate node and make a backup of the volume group configuration:
vgchange -a y /dev/vgdata
vgcfgbackup /dev/vgdata
9. Still on the alternate node, create the mount point directory and assign the same permissions as on the primary node:
mkdir /datavol
chmod 755 /datavol
chown oracle:oragroup /datavol
10. Mount the filesystem which relies on the logical volume on the alternate node:
mount /dev/vgdata/lvdata /datavol
bdf
11. Unmount the filesystem, deactivate the volume group on the alternate node and check:
umount /datavol
vgchange -a n /dev/vgdata
vgdisplay -v vgdata
If the environment uses mirrored individual disks in physical volume groups (PVGs), check the /etc/lvmpvg file to ensure that each physical volume group contains the correct physical volume names for the alternate node.
When you use PVG-strict mirroring, the physical volume group configuration is recorded in the /etc/lvmpvg file on the configuration node: this file defines the physical volume groups which are the basis of mirroring and indicates which physical volumes belong to each physical volume group.
On each cluster node, the /etc/lvmpvg file must contain the correct physical volume names for the physical volume groups' disks as they are known on that node.
Physical volume names for the same disks could be different on different nodes.
After distributing volume groups to other nodes, make sure each node's /etc/lvmpvg file correctly reflects the contents of all physical volume groups on that node.

9. Extending a Logical Volume and its FileSystem

The logical volume extension consists of extending the volume itself and then the filesystem which relies on it.


Extend a logical volume to 200 MB:
lvextend -L 200 /dev/vgdata/lvdata
lvdisplay -v lvdata
If the "OnlineJFS" package is not installed, then the filesystem must be unmounted before you can extend it.
Check whether the "OnlineJFS" package is installed:
swlist -l product | grep -i jfs
If the package is not installed, then kill all processes that have open files on the volume, check that the volume has been freed, unmount it and check that it's been unmounted:
fuser -ku /dev/vgdata/lvdata
fuser -cu /dev/vgdata/lvdata
umount /dev/vgdata/lvdata
bdf
If you receive messages telling the volume/filesystem is busy and it cannot be unmounted, then force the unmount by using the umount -f option:
umount -f /dev/vgdata/lvdata
Defragment the filesystem (it's optional but a good practice):
fsadm -d -D -e -E /data
Extend the file system to 200 MB (extendfs works on the unmounted raw device), then remount it and check:
extendfs -F vxfs /dev/vgdata/rlvdata
mount /dev/vgdata/lvdata /data
bdf
If the "OnlineJFS" package is installed, then calculate the new total size in blocks (200 MB = 50 logical extents of 4 MB; 200 x 1024 = 204800 1 KB blocks) and perform the extension online:
fsadm -F vxfs -b 204800 /data
bdf
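The online-extension block count is the new total size in MB multiplied by 1024, since vxfs fsadm takes the new size in 1 KB sectors; a sketch (the 200 MB target matches the lvextend above):

```shell
# fsadm -b argument: new TOTAL filesystem size in 1 KB blocks.
new_size_mb=200
blocks=$((new_size_mb * 1024))
echo "$blocks"   # -> 204800
```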
If you want to set the largefiles option so the filesystem can support files greater than 2 GB, you can do it also after filesystem creation by issuing the following command:
fsadm -F vxfs -o largefiles /data
bdf

10. Reducing a Logical Volume and its FileSystem

Before reducing the volume, backup the volume data.


To reduce (or shrink) a logical volume in size it's not necessary to unmount the filesystem, but the logical volume must be unused, because not all applications can handle the size reduction while they're operating.
Important: shrink the filesystem before reducing the logical volume (with OnlineJFS: fsadm -F vxfs -b <new_size_in_blocks> <mount_point>); reducing the logical volume below the filesystem size destroys data.
Check if there are open files on the volume:
fuser -cu /dev/vg01/lvol5
Kill the processes that have open files on the volume:
fuser -ku /dev/vg01/lvol5

Reduce the logical volume size to 500 MB:


lvreduce -L 500 /dev/vg01/lvol5
lvdisplay -v lvol5
bdf
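If the filesystem is shrunk to match the reduced volume (possible online with OnlineJFS), the target block count follows the same MB-times-1024 rule; a sketch for the 500 MB example:

```shell
# Target size in 1 KB blocks for a 500 MB filesystem (fsadm -F vxfs -b ...).
target_mb=500
target_blocks=$((target_mb * 1024))
echo "$target_blocks"   # -> 512000
```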

11. Adding a Disk to a Volume Group and Creating a Logical Volume

Run a discovery of the physical disks / LUNs:


ioscan -fnCdisk
Prepare the disk - if the disk was previously used in another volume group then use the -f option to overwrite the existing volume group information on the disk:
pvcreate -f /dev/dsk/c0t5d0
Add the disk to the volume group and check the result:
vgextend vg00 /dev/dsk/c0t5d0
vgdisplay -v | grep PV
vgdisplay vg00
Create the logical volume:
lvcreate -n lvdata -L 100000 -p w /dev/vg00
Create the file system:
newfs -F vxfs /dev/vg00/rlvdata
Create the mount point for the new file system:
mkdir /datavol
If you want the new filesystem to be mounted at system boot, then edit the /etc/fstab file to add the entry for the newly created logical volume and its filesystem:
vi /etc/fstab
/dev/vg00/lvdata /datavol vxfs defaults 0 2
Mount the new filesystem:
mount -a
Note: the command mount -a mounts all of the filesystems in the /etc/fstab file, so it is useful to test whether the newly added entry in the file is syntactically right. If you're sure all of the entries in the file are already mounted and there's no "dangerous" entry that must be skipped but (for any reason) was not commented out, avoid using mount -a and issue an "explicit" mount by specifying the new filesystem to mount (mount -F fs_type /dev/vg00/lvdata /datavol).
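Before trusting mount -a, a new fstab line can be sanity-checked for the expected six fields; a small sketch (the entry string mirrors the one added above):

```shell
# An fstab entry must have exactly 6 fields:
# device  mountpoint  fstype  options  dump  pass
entry='/dev/vg00/lvdata /datavol vxfs defaults 0 2'
nfields=$(printf '%s\n' "$entry" | awk '{print NF}')
if [ "$nfields" -eq 6 ]; then
    echo "entry looks well-formed"
else
    echo "malformed entry"
fi
```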
Make some checks:
bdf
lvdisplay /dev/vg00/lvdata
If the system on which you're creating the logical volume is a node of an HP ServiceGuard Cluster, then you have to present the new structure to the cluster to make it aware of it. To do this follow the steps about deploying LVM configuration on HP ServiceGuard Cluster nodes at the end of the section "Creating a Logical Volume and Mounting its File System".

12. Adding a Disk, Creating a Volume Group and Creating a Logical Volume

Run a discovery of the physical disks / LUNs:


ioscan -fnCdisk
Prepare the disk - if the disk was previously used in another volume group then use the -f option to overwrite the existing volume group information on the disk:
pvcreate -f /dev/dsk/c0t5d0
Create the structure for the volume group:
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
chown -R root:sys /dev/vg01
chmod 755 /dev/vg01
chmod 640 /dev/vg01/group
Create the volume group on the disk and check the result:
vgcreate /dev/vg01 /dev/dsk/c0t5d0
vgdisplay -v | grep PV
Create the logical volume:
lvcreate -n lvdata -L 100000 -p w /dev/vg01
Create the file system:
newfs -F vxfs /dev/vg01/rlvdata
Create the mount point for the new file system:
mkdir /datavol
If you want the new filesystem to be mounted at system boot, then edit the /etc/fstab file to add the entry for the newly created logical volume and its filesystem:
vi /etc/fstab
/dev/vg01/lvdata /datavol vxfs defaults 0 2
Mount the new filesystem:
mount -a
Note: the command mount -a mounts all of the filesystems in the /etc/fstab file, so it is useful to test whether the newly added entry in the file is syntactically right. If you're sure all of the entries in the file are already mounted and there's no "dangerous" entry that must be skipped but (for any reason) was not commented out, avoid using mount -a and issue an "explicit" mount by specifying the new filesystem to mount (mount -F fs_type /dev/vg01/lvdata /datavol).
Make some checks:
bdf
lvdisplay /dev/vg01/lvdata
If the system on which you're creating the volume group is a node of an HP ServiceGuard Cluster, then you have to present the new structure to the cluster to make it aware of it. To do this follow the steps about deploying LVM configuration on HP ServiceGuard Cluster nodes at the end of the section "Creating a Logical Volume and Mounting its File System".

13. Adding a Disk to a Volume Group and Extending a Logical Volume

Run a discovery of the physical disks / LUNs:


ioscan -fnCdisk
Prepare the disk - if the disk was previously used in another volume group then use the -f option to overwrite the existing volume group information on the disk:
pvcreate -f /dev/dsk/c0t5d0
Add the disk to the volume group and check the results:
vgextend vg01 /dev/dsk/c0t5d0
vgdisplay -v | grep PV
vgdisplay vg01
Extend the logical volume and check the results:
lvextend -L 200000 /dev/vg01/lvdata
lvdisplay /dev/vg01/lvdata
bdf

14. Adding a LUN / External Disk, Extending the Volume Group and the Logical Volume

Discover the new LUN:


ioscan -fnCdisk > /tmp/ioscan1.txt
insf -eCdisk
ioscan -fnCdisk > /tmp/ioscan2.txt
diff /tmp/ioscan1.txt /tmp/ioscan2.txt
ioscan -fnC disk
Run SAM and check for "unused" hardware path:
sam
You will notice something like:
Hardware Path              Number of Paths  Use     Volume Group  Total MB  DES
1/10/0/0.115.10.19.98.1.3  2                Unused  --            8192      IBM
Get the LUN Info:
diskinfo /dev/rdsk/c33t1d3
Get the Volume Group Info:
vgdisplay -v vg01
Add disk to the Volume Group:
pvcreate -f /dev/rdsk/c0t4d0
vgextend vg01 /dev/dsk/c0t4d0
vgdisplay -v vg01
OR by using SAM:
1) Open SAM
2) Select Disks and File Systems --> Volume Groups
3) Arrow down to the volume group you want to extend (from bdf) and hit the space bar to select it
4) Tab once to get to the menu at the top and then arrow over to "Actions" and hit enter
5) Select "Extend" from the Actions menu
6) Select "Select Disk(s)..." and hit enter
7) Select the appropriate disk to add with the space bar and select OK, then select OK, which will expand the volume
8) Exit SAM
Check the File System Mounted on the Logical Volume to Extend and Take Note of the Space Info (kbytes used avail %used):
bdf /oradata
Extend the Logical Volume:
lvextend -L 11776 /dev/vg01/lvol3
Defragment the File System Mounted on the Logical Volume:
fsadm -d -D -e -E /oradata
Extend the File System Mounted on the Logical Volume:
fsadm -F vxfs -b 11776M /oradata
Check the File System Info to Verify the Current Space:
bdf /oradata

15. Importing and Exporting Volume Groups

1) Make the volume group unavailable by deactivating it:


vgchange -a n /dev/vgdata
2) Export the volume group from the disks while creating a logical volume map file:
vgexport -v -m data_map vgdata
3) Disconnect the drives and move to new system.
4) Move the data_map file to the new system.
5) On the new system recreate the volume group directory:
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x010000
chown -R root:sys /dev/vgdata
chmod 755 /dev/vgdata
chmod 640 /dev/vgdata/group
6) Import the disks to the new system:
vgimport -v -m data_map /dev/vgdata /dev/dsk/c2t1d0 /dev/dsk/c2t2d0
7) Enable the new volume group:
vgchange -a y /dev/vgdata

16. Removing a Logical Volume

Before removing the logical volume, backup the volume data.

Check whether any processes have open files on the logical volume:


fuser -cu /dev/vgdata/lvdata
Kill the processes with open files, check that the volume has been freed, unmount it and check that it's been unmounted:
fuser -ku /dev/vgdata/lvdata
fuser -cu /dev/vgdata/lvdata
umount /dev/vgdata/lvdata
bdf
If you receive messages telling the volume/filesystem is busy and it cannot be unmounted, then force the unmount by using the umount -f option:
umount -f /dev/vgdata/lvdata
Remove the logical volume:
lvremove /dev/vgdata/lvdata
If the system on which you're removing the logical volume is a node of an HP ServiceGuard Cluster, then you have to make the cluster aware of the change. To do this follow the steps about deploying LVM configuration on HP ServiceGuard Cluster nodes at the end of the section "Creating a Logical Volume and Mounting its File System".

17. Moving Disks Within a System (LVM Configuration with Persistent Device Files)

Deactivate the volume group:


vgchange -a n /dev/vgnn
Physically move your disks to their desired new locations.
Activate the volume group:
vgchange -a y /dev/vgnn

18. Moving Disks Within a System (LVM Configuration with Legacy Device Files)

Deactivate the volume group:


vgchange -a n /dev/vgnn
If you want to retain the same minor number for the volume group, examine the volume group's group file:
ls -l /dev/vgnn/group
Remove the volume group device files and its entry from the LVM configuration files:
vgexport -v -s -m /tmp/vgnn.map /dev/vgnn
Physically move your disks to their desired new locations.
To view the new locations:
vgscan -v
If you want to retain the minor number of the volume group device file, create it (the group file in this example has a major number of 64 and a minor number of 0x010000):
mkdir /dev/vgnn
mknod /dev/vgnn/group c 64 0x010000
Add the volume group entry back to the LVM configuration files:
vgimport -v -s -m /tmp/vgnn.map /dev/vgnn
Activate the newly imported volume group:
vgchange -a y /dev/vgnn
Back up the volume group configuration:
vgcfgbackup /dev/vgnn

19. Moving Disks Between Systems

Make the volume group and its associated logical volumes unavailable to users:
vgchange -a n /dev/vg_planning
Preview the removal of the volume group information from the LVM configuration files:
vgexport -p -v -s -m /tmp/vg_planning.map /dev/vg_planning
Remove the volume group information:
vgexport -v -s -m /tmp/vg_planning.map /dev/vg_planning
Connect the disks to the new system and copy the /tmp/vg_planning.map file to the new system.
Create the volume group Device Files:
mkdir /dev/vg_planning
mknod /dev/vg_planning/group c 64 0x010000
chown -R root:sys /dev/vg_planning
chmod 755 /dev/vg_planning
chmod 640 /dev/vg_planning/group
Get device file information about the disks:
ioscan -funN -C disk
Import the volume group:
vgimport -N -v -s -m /tmp/vg_planning.map /dev/vg_planning
Activate the newly imported volume group:
vgchange -a y /dev/vg_planning

20. Moving Data to a Different Physical Volume


To move the data in logical volume /dev/vg01/markets from the disk /dev/disk/disk4 to the disk /dev/disk/disk7:
pvmove -n /dev/vg01/markets /dev/disk/disk4 /dev/disk/disk7
To move all data off disk /dev/disk/disk3 and relocate it to the destination disk /dev/disk/disk5:

pvmove /dev/disk/disk3 /dev/disk/disk5


To move all data off disk /dev/disk/disk3 and let LVM transfer the data to available space within the volume group:
pvmove /dev/disk/disk3

21. Replacing a Mirrored Non-Boot Disk

Take note of the hardware paths to the disk:


ioscan -m lun /dev/disk/disk14
Halt LVM access to the disk
If the disk is not hot-swappable, power off the system to replace it.
If the disk is hot-swappable, detach it:
pvchange -a N /dev/disk/disk14
Physically Replace the disk.
If the system was not rebooted, notify the mass storage subsystem that the disk has been replaced:
scsimgr replace_wwid -D /dev/rdisk/disk14
Determine the new LUN instance number for the replacement disk:
ioscan -m lun
In this example, LUN instance 28 was created for the new disk, with LUN hardware path 64000/0xfa00/0x1c, device special files /dev/disk/disk28 and /dev/rdisk/disk28, at the same lunpath hardware path as the old disk, 0/1/1/1.0x3.0x0. The old LUN instance 14 for the old disk now has no lunpath associated with it.
If the system was rebooted to replace the failed disk, then ioscan -m lun does not display the old disk.
Assign the old instance number to the replacement disk (this assigns the old LUN instance number 14 to the replacement disk and the device special files for the new disk are renamed to be consistent with the old LUN instance number):
io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28
ioscan -m lun /dev/disk/disk14
Restore LVM configuration information to the new disk:
vgcfgrestore -n /dev/vgnn /dev/rdisk/disk14
Restore LVM access to the disk (if the disk is hot-swappable):
pvchange -a y /dev/disk/disk14
If the disk is not hot-swappable and you had to reboot the system, reattach the disk by reactivating the volume group as follows:
vgchange -a y /dev/vgnn

22. Replacing an Unmirrored Non-Boot Disk

Take note of the hardware paths to the disk:


ioscan -m lun /dev/disk/disk14

Halt LVM access to the disk


If the disk is not hot-swappable, power off the system to replace it.
If the disk is hot-swappable, disable user and LVM access to all unmirrored logical volumes.
For each unmirrored logical volume using the disk, kill all of the processes accessing the volume, then unmount it:
fuser -cu /dev/vg01/lvol1
fuser -ku /dev/vg01/lvol1
umount /dev/vg01/lvol1
Disable LVM access to the disk:
pvchange -a N /dev/disk/disk14
Physically Replace the disk.
If the system was not rebooted, notify the mass storage subsystem that the disk has been replaced:
scsimgr replace_wwid -D /dev/rdisk/disk14
Determine the new LUN instance number for the replacement disk:
ioscan -m lun
Assign the old instance number to the replacement disk (this assigns the old LUN instance number 14 to the replacement disk, and the device special files for the new disk are renamed to be consistent with the old LUN instance number):
io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28
ioscan -m lun /dev/disk/disk14
Restore LVM configuration information to the new disk:
vgcfgrestore -n /dev/vgnn /dev/rdisk/disk14
Restore LVM access to the disk (if the disk is hot-swappable):
pvchange -a y /dev/disk/disk14
If the disk is not hot-swappable and you had to reboot the system, reattach the disk by reactivating the volume group as follows:
vgchange -a y /dev/vgnn
Recover any lost data:
LVM recovers all the mirrored logical volumes on the disk, and starts that recovery when the volume group is activated.
For all the unmirrored logical volumes that you identified in Step 2, restore the data from backup and reenable user access.
For raw volumes, restore the full raw volume using the utility that was used to create your backup. Then restart the application.
For file systems, you must re-create the file systems first:
newfs -F fstype /dev/vgnn/rlvolnn

23. Replacing a Mirrored Boot Disk

Take note of the hardware paths to the disk:


ioscan -m lun /dev/disk/disk14
Halt LVM access to the disk

If the disk is not hot-swappable, power off the system to replace it.
If the disk is hot-swappable, detach it:
pvchange -a N /dev/disk/disk14
Physically Replace the disk.
If the system was not rebooted, notify the mass storage subsystem that the disk has been replaced:
scsimgr replace_wwid -D /dev/rdisk/disk14
Determine the new LUN instance number for the replacement disk:
ioscan -m lun
In this example, LUN instance 28 was created for the new disk, with LUN hardware path 64000/0xfa00/0x1c, device special files /dev/disk/disk28 and /dev/rdisk/disk28, at the same lunpath hardware path as the old disk, 0/1/1/1.0x3.0x0. The old LUN instance 14 for the old disk now has no lunpath associated with it.
If the system was rebooted to replace the failed disk, then ioscan -m lun does not display the old disk.
Assign the old instance number to the replacement disk (this assigns the old LUN instance number 14 to the replacement disk, and the device special files for the new disk are renamed to be consistent with the old LUN instance number):
io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28
ioscan -m lun /dev/disk/disk14
Restore LVM configuration information to the new disk:
vgcfgrestore -n /dev/vgnn /dev/rdisk/disk14
Restore LVM access to the disk (if the disk is hot-swappable):
pvchange -a y /dev/disk/disk14
If the disk is not hot-swappable and you had to reboot the system, reattach the disk by reactivating the volume group as follows:
vgchange -a y /dev/vgnn
Initialize boot information on the disk:
lvlnboot -v
lvlnboot -R /dev/vg00
lvlnboot -v

24. Creating a Spare Disk

Initialize the disk as an LVM disk:


pvcreate /dev/rdisk/disk3
Ensure the volume group has been activated:
vgchange -a y /dev/vg01
Designate one or more physical volumes as spare physical volumes within the volume group:
vgextend -z y /dev/vg01 /dev/disk/disk3
Alternately, you can change a physical volume with no extents currently allocated within it into a spare physical volume:
pvchange -z y /dev/disk/disk3

25. Reinstating a Spare Disk

After a failed disk has been repaired or a decision has been made to replace it, follow these steps to reinstate it and return the spare disk to its former standby status.
Physically connect the new or repaired disk.
Restore the LVM configuration:
vgcfgrestore -n /dev/vg01 /dev/rdisk/disk1
Ensure the volume group has been activated:
vgchange -a y /dev/vg01
Be sure that allocation of extents is now allowed on the replaced disk:
pvchange -x y /dev/disk/disk1
Move the data from the spare to the replaced physical volume:
pvmove /dev/disk/disk3 /dev/disk/disk1
The data from the spare disk is now back on the original disk or its replacement, and the spare disk is returned to its role as a standby empty disk.

26. Changing Physical Volume Boot Types

To change a disk type from bootable to nonbootable, follow these steps:


Use vgcfgrestore to determine if the volume group contains any bootable disks:
vgcfgrestore -l -v -n vg01
Run vgmodify twice, once with the -B n option and once without it. Compare the available values for max_pe and max_pv:
vgmodify -t -B n vg01 /dev/rdsk/c2t1d0
vgmodify -t vg01
Choose new values for max_pe and max_pv. Review the values by running vgmodify with the new settings and the -r option:
vgmodify -r -p 6 -e 56828 -B n vg01 /dev/rdsk/c2t1d0
Deactivate the volume group:
vgchange -a n vg01
Commit the changes by running vgmodify without the -r option:
vgmodify -p 6 -e 56828 -B n vg01 /dev/rdsk/c2t1d0
Activate the volume group:
vgchange -a y vg01
Run the vgcfgrestore or pvdisplay commands to verify that the disk type has changed:
vgcfgbackup vg01

vgcfgrestore -l -v -n vg01

27. Enabling and Disabling a Path to a Physical Volume

To detach a link to a physical volume:


pvchange -a n /dev/disk/disk33
If you are using LVM's alternate links for multipathed disks, each link uses a different legacy device file. In that situation, to detach all links to a physical volume, use N as the argument to the -a option:
pvchange -a N /dev/dsk/c5t0d0
To reattach a specific path to a physical volume:
pvchange -a y /dev/dsk/c5t0d0
Because detaching a link to a physical volume is temporary, all detached links in a volume group are reattached when the volume group is activated, either at boot time or with an explicit vgchange command:
vgchange -a y /dev/vg02

28. Creating an Alternate Boot Disk

With non-LVM disks, a single root disk contains all the attributes needed for boot, system files, primary swap, and dump. Using LVM, a single root disk is replaced by a pool of disks, a root volume group, which contains all of the same elements, divided among a root logical volume, a boot logical volume, a swap logical volume, and one or more dump logical volumes. Each of these logical volumes must be contiguous, that is, contained on a single disk, and they must have bad block relocation disabled.
If you newly install your HP-UX system and choose the LVM configuration, a root volume group is automatically configured (/dev/vg00), as are separate root (/dev/vg00/lvol3) and boot (/dev/vg00/lvol1) logical volumes. If you currently have a combined root and boot logical volume and you want to reconfigure to separate them after creating the boot logical volume, use the lvlnboot command with the -b option to define the boot logical volume to the system, taking effect the next time the system is booted.
If you create your root volume group with multiple disks, use the lvextend command to place the boot, root, and primary swap logical volumes on the boot disk. You can use pvmove to move the data from an existing logical volume to another disk if necessary to make room for the root logical volume.
Create a bootable physical volume:
On an HP Integrity server, partition the disk using the idisk command and a partition description file, then run insf.
Run pvcreate with the -B option.
On an HP Integrity server, use the device file denoting the HP-UX partition:
pvcreate -B /dev/rdisk/disk6_p2

On an HP 9000 server, use the device file for the entire disk:
pvcreate -B /dev/rdisk/disk6
Create a directory for the volume group:
mkdir /dev/vgroot
mknod /dev/vgroot/group c 64 0xnn0000
Create the root volume group, specifying each physical volume to be included:
vgcreate /dev/vgroot /dev/disk/disk6
Place boot utilities in the boot area:
mkboot /dev/rdisk/disk6
Add an autoboot file to the disk boot area:
mkboot -a "hpux" /dev/rdisk/disk6
Create the boot logical volume:
lvcreate -C y -r n -n bootlv /dev/vgroot
lvextend -L 512 /dev/vgroot/bootlv /dev/disk/disk6

Create the primary swap logical volume:


lvcreate -C y -r n -n swaplv /dev/vgroot
lvextend -L 2048 /dev/vgroot/swaplv /dev/disk/disk6

Create the root logical volume:


lvcreate -C y -r n -n rootlv /dev/vgroot
lvextend -L 1024 /dev/vgroot/rootlv /dev/disk/disk6

Specify that bootlv is the boot logical volume:


lvlnboot -b /dev/vgroot/bootlv
Specify that rootlv is the root logical volume:
lvlnboot -r /dev/vgroot/rootlv
Specify that swaplv is the primary swap logical volume:
lvlnboot -s /dev/vgroot/swaplv
Specify that swaplv is the dump logical volume:
lvlnboot -d /dev/vgroot/swaplv
Verify the configuration:
lvlnboot -v /dev/vgroot
Once the boot and root logical volumes are created, create file systems for them:
mkfs -F hfs /dev/vgroot/rbootlv
OR
mkfs -F vxfs /dev/vgroot/rrootlv
On HP Integrity servers, the boot file system can be VxFS:
mkfs -F vxfs /dev/vgroot/rbootlv

29. Mirroring the Boot Disk

After you create mirror copies of the root, boot, and primary swap logical volumes, if any of the underlying physical volumes fail, the system can use the mirror copy on the other disk and continue. When the failed disk comes back online, it is automatically recovered, provided the system has not been rebooted.
If the system reboots before the disk is back online, reactivate the volume group to update the LVM data structures that track the disks within the volume group. You can use vgchange -a y even though the volume group is already active.
To reactivate volume group vg00:
vgchange -a y /dev/vg00
As a result, LVM scans and activates all available disks in the volume group vg00, including the disk that came online after the system rebooted.
The procedure for creating a mirror of the boot disk is different for HP 9000 and HP Integrity servers. HP Integrity servers use partitioned boot disks.

30. Mirroring the Boot Disk on HP 9000 Servers

Make sure the device files are in place:


insf -e -H 0/1/1/0.0x1.0x0
Create a bootable physical volume:
pvcreate -B /dev/rdisk/disk4
Add the physical volume to your existing root volume group:
vgextend /dev/vg00 /dev/disk/disk4
Place boot utilities in the boot area:
mkboot /dev/rdisk/disk4
Add an autoboot file to the disk boot area:
mkboot -a "hpux" /dev/rdisk/disk4
The logical volumes on the mirror boot disk must be extended in the same order that they are configured on the original boot disk. Determine the list of logical volumes in the root volume group and their order:
pvdisplay -v /dev/disk/disk0 | grep 'current.*0000 $'
Mirror each logical volume in vg00 (the root volume group) onto the specified physical volume:
lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk4
lvextend -m 1 /dev/vg00/lvol2 /dev/disk/disk4
lvextend -m 1 /dev/vg00/lvol3 /dev/disk/disk4
lvextend -m 1 /dev/vg00/lvol4 /dev/disk/disk4
lvextend -m 1 /dev/vg00/lvol5 /dev/disk/disk4
lvextend -m 1 /dev/vg00/lvol6 /dev/disk/disk4
lvextend -m 1 /dev/vg00/lvol7 /dev/disk/disk4
lvextend -m 1 /dev/vg00/lvol8 /dev/disk/disk4
lvsync -T /dev/vg00/lvol*

Update the root volume group information:


lvlnboot -R /dev/vg00
lvlnboot -v
Specify the mirror disk as the alternate boot path in nonvolatile memory:
setboot -a 0/1/1/0.0x1.0x0

Add a line to /stand/bootconf for the new boot disk:
vi /stand/bootconf
l /dev/disk/disk4

31. Mirroring the Boot Disk on HP Integrity Servers

Create a Partition Description File:


vi /tmp/idf
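A minimal partition description file gives the number of partitions on the first line, followed by one line per partition with its type and size. The layout below follows the common HP example of three partitions (EFI, the HP-UX OS partition, and the HP Service Partition); the 500MB and 400MB sizes are illustrative, not requirements:

```
3
EFI 500MB
HPUX 100%
HPSP 400MB
```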
Partition the disk:
idisk -f /tmp/idf -w /dev/rdisk/disk2
Check the partition layout:
idisk /dev/rdisk/disk2
Create the device files for all the partitions:
insf -e -H 0/1/1/0.0x1.0x0
Create a bootable physical volume using the device file denoting the HP-UX partition:
pvcreate -B /dev/rdisk/disk2_p2
Add the physical volume to your existing root volume group:
vgextend vg00 /dev/disk/disk2_p2
Place boot utilities in the boot area. Copy EFI utilities to the EFI partition, and use the device special file for the entire disk:
mkboot -e -l /dev/rdisk/disk2
Add an autoboot file to the disk boot area:
mkboot -a "hpux" /dev/rdisk/disk2
The logical volumes on the mirror boot disk must be extended in the same order that they are configured on the original boot disk. Determine the list of logical volumes in the root volume group and their order:
pvdisplay -v /dev/disk/disk0_p2 | grep 'current.*0000 $'
Mirror each logical volume in vg00 (the root volume group) onto the specified physical volume:
lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk2_p2
lvextend -m 1 /dev/vg00/lvol2 /dev/disk/disk2_p2
lvextend -m 1 /dev/vg00/lvol3 /dev/disk/disk2_p2
lvextend -m 1 /dev/vg00/lvol4 /dev/disk/disk2_p2
lvextend -m 1 /dev/vg00/lvol5 /dev/disk/disk2_p2
lvextend -m 1 /dev/vg00/lvol6 /dev/disk/disk2_p2
lvextend -m 1 /dev/vg00/lvol7 /dev/disk/disk2_p2
lvextend -m 1 /dev/vg00/lvol8 /dev/disk/disk2_p2
lvsync -T /dev/vg00/lvol*

If lvextend fails with the message "m: illegal option", HP MirrorDisk/UX is not installed.
Update the root volume group information:
lvlnboot -R /dev/vg00
lvlnboot -v

Specify the mirror disk as the alternate boot path in nonvolatile memory:
setboot -a 0/1/1/0.0x1.0x0
Add a line to /stand/bootconf for the new boot disk:
vi /stand/bootconf
l /dev/disk/disk2_p2

32. Backing Up a Mirrored Logical Volume

You can split a mirrored logical volume into two logical volumes to perform a backup on an offline copy while the other copy stays online. When you complete the backup of the offline copy, you can merge the two logical volumes back into one. To bring the two copies back in synchronization, LVM updates the physical extents in the offline copy based on changes made to the copy that remained in use. You can use HP SMH to split and merge logical volumes, or use the lvsplit and lvmerge commands.
To back up a mirrored logical volume containing a file system, using lvsplit and lvmerge, follow these steps:
Split the logical volume /dev/vg00/lvol1 into two separate logical volumes:
lvsplit /dev/vg00/lvol1
Perform a file system consistency check on the logical volume to be backed up:
fsck /dev/vg00/lvol1b
Mount the file system:
mkdir /backup_dir
mount /dev/vg00/lvol1b /backup_dir
Perform the backup.
Unmount the file system:
umount /backup_dir
Merge the split logical volume back with the original logical volume:
lvmerge /dev/vg00/lvol1b /dev/vg00/lvol1

33. Backing Up and Restoring Volume Group Configuration

If you back up your volume group configuration, you can restore a corrupted or lost LVM configuration in the event of a disk failure or corruption of your LVM configuration information.
It is important that volume group configuration information be saved whenever you make any change to the configuration, such as adding disks to or removing disks from a volume group, changing the disks in a root volume group, creating or removing logical volumes, or extending or reducing logical volumes.
By default, vgcfgbackup saves the configuration of a volume group to the file /etc/lvmconf/volume_group_name.conf.

Backup Configuration:
vgcfgbackup -f pathname/filename volume_group_name
To run vgcfgrestore, the physical volume must be detached.
Restore Configuration (using the default backup file /etc/lvmconf/vgsales.conf):
pvchange -a n /dev/disk/disk5
vgcfgrestore -n /dev/vgsales /dev/rdisk/disk5
pvchange -a y /dev/disk/disk5
If the physical volume is not mirrored or the mirror copies are not current and available, you must deactivate the volume group with vgchange, perform the vgcfgrestore, and activate the volume group:
vgchange -a n /dev/vgsales
vgcfgrestore -n /dev/vgsales /dev/rdisk/disk5
vgchange -a y /dev/vgsales

34. Quiescing and Resuming a Volume Group

If you plan to use a disk management utility to create a backup image or snapshot of all the disks in a volume group, you must make sure that LVM is not writing to any of the disks when the snapshot is being taken; otherwise, some disks can contain partially written or inconsistent LVM metadata.
To keep the volume group disk image in a consistent state, you must either deactivate the volume group or quiesce it.
Deactivating the volume group requires you to close all the logical volumes in the volume group, which can be disruptive.
Quiescing the volume group enables you to keep the volume group activated and the logical volumes open during the snapshot operation, minimizing the impact to your system.
You can quiesce both read and write operations to the volume group, or just write operations.
While a volume group is quiesced, the vgdisplay command reports the volume group access mode as quiesced.
The indicated I/O operations queue until the volume group is resumed, and commands that modify the volume group configuration fail immediately.
By default, the volume group remains quiesced until it is explicitly resumed.
You can specify a maximum quiesce time in seconds using the -t option of the vgchange command: if the quiesce time expires, the volume group is resumed automatically.
The vgchange -Q option indicates the quiescing mode, which can be rw (quiesce both reads and writes) or w (quiesce writes only).
To quiesce a volume group for a maximum of ten minutes (600 seconds):
vgchange -Q w -t 600 vg08
To resume a quiesced volume group:
vgchange -R vg08

35. Adding a Mirror to a Logical Volume

Add a mirror copy to a logical volume:


lvextend -m 1 /dev/vg00/lvol1
Add a mirror copy to a logical volume forcing it onto a specified physical disk:
lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk4

36. Removing a Mirror from a Logical Volume

To remove mirror copies reducing it to 0 copies:


lvreduce -m 0 /dev/vg00/lvol1
To remove mirror copies reducing it to 1 copy:
lvreduce -m 1 /dev/vg00/lvol1
To remove the mirror copy from a specific disk reducing it to 0 copies:
lvreduce -m 0 /dev/vg00/lvol1 /dev/disk/disk4

37. Increasing the Primary Swap

Because of the contiguous allocation policy, you cannot extend the swap logical volume: so to increase the primary swap you create a bigger logical volume and modify the Boot Data Reserved Area (BDRA) to make it primary.
Create the logical volume on the volume group vg00:
lvcreate -C y -L 240 /dev/vg00
As the name of this new logical volume will be displayed on the screen, note it: it will be needed later. To ease the example, we'll assume the name of the new volume is /dev/vg00/lvol8.
Display the current root and swap logical volumes (lvol2 is the default primary swap):
lvlnboot -v /dev/vg00
Specify that lvol8 is the primary swap logical volume:
lvlnboot -s /dev/vg00/lvol8 /dev/vg00
Recover any missing links to all of the logical volumes in the BDRA and update the BDRA of each bootable physical volume in the volume group.
Update the root volume group information:
lvlnboot -R /dev/vg00
Reboot the system:
init 6

38. Identifying Available Disks to be Used in a Volume Group

List the disks / LUNs:


ioscan -funC disk
Some of the devices you see in the output of the previous command will be allocated, some will not.
List all of the physical volumes and their devices for all of the existing volume groups:
vgdisplay -v
vgdisplay -v | grep PV
This is a list of the devices that are in use (as they are part of a volume group).
Now, compare the two outputs: any device that's in the ioscan output, but NOT in the vgdisplay output, is available for use.
You can automate this process by using sed, awk and grep.
1. Create a file (here named wte_hitachi_disk) listing all of the disks that can be used (in this example HITACHI):
ioscan -fnC disk >> wte_disk_ioscan
cat wte_disk_ioscan | sed 1,2d | grep -v -e TOSHIBA -e c2t0d0 | xargs -n 10 | grep HITACHI | grep -vi subsystem >> wte_hitachi_disk
Create a file containing the ioscan -fnC disk output
cat will output the file containing the disk ioscan
sed deletes the first 2 header lines of the file
grep prints out any lines that DO NOT include TOSHIBA or c2t0d0
xargs groups the output into groups of 10
Next grep finds all of the lines with HITACHI in them
All of the lines that have HITACHI in them are saved to the file wte_hitachi_disk.
2. Refine the ioscan of HITACHI disks to include just the disk devices, sorted: this is a list of all HITACHI disks on the system:
awk '{ print $9 }' wte_hitachi_disk | sort -u > wte_hitachi_sorted_u
awk prints just the 9th field of the file
sort -u sorts the file and suppresses any duplicates
This is saved to the sorted file wte_hitachi_sorted_u.
Print a list of all the disks that are currently being used (a list of PVs):
vgdisplay -v | grep "PV Name" > wte_pvdisk_used
vgdisplay -v prints a verbose listing of all volume groups
grep only prints lines that contain PV Name
The list of PVs is saved to the file wte_pvdisk_used.
Refine the list of disks that are being used:
awk '{ print $3 }' wte_pvdisk_used | sort -u > wte_pvdisk_sorted_u
awk prints only the 3rd field (the disk device)
sort sorts the list, suppressing any duplicate entries
The results are saved to the file wte_pvdisk_sorted_u.
Compare the two files - the list of all the Hitachi disks on the system with the list of all disks being used:
diff wte_hitachi_sorted_u wte_pvdisk_sorted_u
diff compares the two files and prints out any differences. The difference will be one or more disks that the system sees, but that are not being used by LVM.
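The comparison above can be sketched end-to-end with fabricated sample data. The file names and disk paths below are invented for illustration; on a real system the two lists come from ioscan -fnC disk and vgdisplay -v as described in the steps above:

```shell
#!/bin/sh
# Work in a scratch directory with fabricated sample data.
dir=$(mktemp -d); cd "$dir"

# Fabricated sorted list of all disk devices the system sees
# (stands in for wte_hitachi_sorted_u).
printf '%s\n' /dev/dsk/c5t0d0 /dev/dsk/c5t1d0 /dev/dsk/c5t2d0 \
  | sort -u > all_disks

# Fabricated list of PVs already belonging to volume groups
# (stands in for wte_pvdisk_sorted_u).
printf '%s\n' /dev/dsk/c5t0d0 /dev/dsk/c5t2d0 | sort -u > used_disks

# comm -23 prints lines present only in the first file: disks the
# system sees but that LVM is not using, i.e. the free disks.
comm -23 all_disks used_disks
# → /dev/dsk/c5t1d0
```

Using comm -23 instead of diff yields the free disks directly with no diff markers, at the cost of requiring both input files to be sorted (which sort -u already guarantees here).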

39. Creating a Physical Volume Group (PVG)

Imagine an SC10 rack with 2 controllers. Some of the disks are on controller 1, some on controller 2. For high availability you would want a logical volume to be created on a disk that is on one controller and mirrored on a disk that is on the other controller. However, the concept of "controller" is unknown in LVM.
Hence PVGs.
You create one for the disks on one controller, another one for the disks on the other controller, then you make logical volumes PVG-strict (lvchange -s g ...).
PVGs increase not only I/O high availability but also performance.
By using PVGs, physical volumes can be grouped by controller: then logical volumes can be created on different PVGs. This way PVGs let you know where each disk is and which channels to mirror down, so with careful planning and diligent use of the LVM commands you can ensure I/O separation.

You can use two mirroring types.


Clean-mirror:
When creating the volume group you would set it up by putting A and B in PVG1, and C and D in PVG2: you do this by using the "-g" option at volume group creation to define the PVG name. When creating logical volumes you may want to mirror (for example) 1 copy; you can put the mirrored copy in a separate PVG - so that your mirrored copies are always in PVG2, for example. This would help in a disaster recovery situation. The lvcreate command has the "-s g" option to set up a PVG-strict mirror. To extend a logical volume you issue a straightforward lvextend, since it was already defined as PVG-strict.
Dirty-mirror:
The volume group is set up in a normal manner. When creating logical volumes, if you set up to mirror 1 copy and do not specify to LVM where to put the mirrored copy, then you may end up in a situation whereby the mirrored copy can reside anywhere in A, B, C, D, even on the disk where the primary copy resides. You can instruct lvextend to put the mirror copy on a specific disk, but then you would have to keep track of the PEs, and so on. If a PVG is set up, this is done automatically.
To create a physical volume group (PVG), create a file named /etc/lvmpvg with the following syntax:
VG vg_name
PVG pvg_name
pv_path
...
PVG pvg_name
pv_path
...
VG vg_name
PVG pvg_name
pv_path
[...]

For example, to use two PVGs in vg01 with c1t6d0 and c2t6d0 in one PVG (PVG0), and c3t6d0 and c4t6d0 in the other PVG (PVG1), the contents of the file /etc/lvmpvg should be:
VG /dev/vg01
PVG PVG0
/dev/dsk/c1t6d0
/dev/dsk/c2t6d0
PVG PVG1
/dev/dsk/c3t6d0
/dev/dsk/c4t6d0
Then create the physical volume groups:
vgcreate -g pvg1 /dev/vgname /dev/dsk/c5t8d0 /dev/dsk/c5t9d0
vgextend -g pvg2 /dev/vgname /dev/dsk/c7t1d0 /dev/dsk/c7t2d0
vgdisplay -v vgname
If the system on which you're creating the volume group is a node of an HP ServiceGuard Cluster, then you have to present the new structure to the cluster to make it aware of it. To do this, follow the steps about deploying LVM configuration on HP ServiceGuard Cluster nodes at the end of the section "Creating a Volume Group".

40. Creating (Mirroring) Logical Volumes on Specific Physical Volumes

After creating the /etc/lvmpvg file as described in the previous section, each copy of the mirror you create can be forced onto a different PVG.
To create a logical volume in the physical volume group:
lvcreate -m 1 -n volume_name -L size_in_mb -s g /dev/vg_name
lvdisplay -v lvhome
To create a RAID 0+1 you need at least two disks in each PVG; then you use the "-s g" and "-D y" options of the lvcreate command during the logical volume creation.
If the logical volume is already created but not mirrored yet, issue the following commands:
lvchange -s g /dev/vg01/lvhome
lvextend -m 1 /dev/vg01/lvhome
lvdisplay -v lvhome
If the system on which you're creating the volume group is a node of an HP ServiceGuard Cluster, then you have to present the new structure to the cluster to make it aware of it. To do this, follow the steps about deploying LVM configuration on HP ServiceGuard Cluster nodes at the end of the section "Creating a Logical Volume and Mounting the File System".