4.0 Administration
ES-310
Copyright 2004 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved.
This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and
decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of
Sun and its licensors, if any.
Third-party software, including font technology, is copyrighted and licensed from Sun suppliers.
Sun, Sun Microsystems, the Sun logo, Solaris, OpenBoot, Ultra, Sun Blade, Sun StorEdge, Solstice DiskSuite, RSM, SunPlex, Sun Fire, Java,
Sun BluePrints, Sun Enterprise, SunOS, and SunSolve are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and
other countries.
All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and
other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.
UNIX is a registered trademark in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd.
Adobe is a registered trademark of Adobe Systems, Incorporated. PostScript is a trademark or a registered trademark of Adobe Systems,
Incorporated, which may be registered in certain jurisdictions.
The OPEN LOOK and Sun Graphical User Interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun acknowledges
the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the computer industry.
Sun holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun’s licensees who
implement OPEN LOOK GUIs and otherwise comply with Sun’s written license agreements.
RESTRICTED RIGHTS: Use, duplication, or disclosure by the U.S. Government is subject to restrictions of FAR 52.227-14(g)(2)(6/87) and
FAR 52.227-19(6/87), or DFAR 252.227-7015 (b)(6/95) and DFAR 227.7202-3(a).
DOCUMENTATION IS PROVIDED “AS IS” AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS, AND
WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE
OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE
LEGALLY INVALID.
THIS MANUAL IS DESIGNED TO SUPPORT AN INSTRUCTOR-LED TRAINING (ILT) COURSE AND IS INTENDED TO BE
USED FOR REFERENCE PURPOSES IN CONJUNCTION WITH THE ILT COURSE. THE MANUAL IS NOT A STANDALONE
TRAINING TOOL. USE OF THE MANUAL FOR SELF-STUDY WITHOUT CLASS ATTENDANCE IS NOT RECOMMENDED.
Please
Recycle
Copyright 2004 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved.
This product or document is protected by copyright and distributed under licenses restricting its use, copying, distribution, and
decompilation. No part of this product or document may be reproduced in any form, by any means, without the prior written
authorization of Sun and its licensors, if any.
Third-party software, including font technology, is copyrighted and licensed from Sun suppliers.
Sun, Sun Microsystems, the Sun logo, Solaris, OpenBoot, Ultra, Sun Blade, Sun StorEdge, Solstice DiskSuite, RSM, SunPlex, Sun Fire, Java,
Sun BluePrints, Sun Enterprise, SunOS, and SunSolve are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and
other countries.
All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. in the U.S. and
other countries. Products bearing SPARC trademarks are based upon an architecture developed by Sun Microsystems, Inc.
UNIX is a registered trademark in the U.S. and other countries, exclusively licensed through X/Open Company, Ltd.
Adobe is a registered trademark of Adobe Systems, Incorporated. PostScript is a trademark of Adobe Systems, Incorporated, which may
be registered in certain jurisdictions.
The OPEN LOOK and Sun™ graphical user interface was developed by Sun Microsystems, Inc. for its users and licensees. Sun
acknowledges the pioneering efforts of Xerox in researching and developing the concept of visual or graphical user interfaces for the
computer industry. Sun holds a non-exclusive license from Xerox to the Xerox Graphical User Interface, which license also covers Sun's
licensees who implement OPEN LOOK GUIs and otherwise comply with Sun's written license agreements.
DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS, AND
WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR
NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE PERMITTED BY
APPLICABLE LAW.
THIS REFERENCE MANUAL MUST BE USED IN THE CONTEXT OF AN INSTRUCTOR-LED TRAINING (ILT) COURSE. IT IS NOT A
STANDALONE TRAINING TOOL. USE OF THE MANUAL FOR SELF-STUDY IS NOT RECOMMENDED.
Table of Contents
About This Course .............................................................Preface-xiv
Course Goals........................................................................ Preface-xiv
Course Map............................................................................ Preface-xv
Topics Not Covered.............................................................Preface-xvi
How Prepared Are You?....................................................Preface-xvii
Introductions ..................................................................... Preface-xviii
How to Use Course Materials ............................................ Preface-xix
Conventions ........................................................................... Preface-xx
Icons ............................................................................... Preface-xx
Typographical Conventions ..................................... Preface-xxi
Notes to the Instructor........................................................Preface-xxii
Sun Storage Concepts .....................................................................1-1
Objectives ........................................................................................... 1-1
Disk Storage Administration Introduction .................................... 1-2
VxVM Software Installation .................................................... 1-2
VxVM Initialization .................................................................. 1-2
RAID Volume Design.............................................................. 1-3
RAID Volume Creation............................................................ 1-3
RAID Volume Administration................................................ 1-4
Interfaces for Sun Storage Devices .................................................. 1-5
SCSI Overview........................................................................... 1-5
SCSI Interface Implementation .............................................. 1-6
SCSI Interface Standards.......................................................... 1-7
SCSI Priority.............................................................................. 1-9
SCSI Phases and the Move to Fibre Channel ........................ 1-9
Fibre Channel Technology....................................................... 1-9
Fibre Channel-Arbitrated Loop ........................................... 1-10
Advantages of FC-AL............................................................. 1-10
Fibre Channel Compared to SCSI........................................ 1-11
RAID Technology ............................................................................ 1-12
Host-Based RAID (Software RAID Technology)................ 1-12
Controller-Based RAID (Hardware RAID Technology) ... 1-13
iv
Copyright 2004 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision D
Disk Storage Concepts..................................................................... 1-14
Hot Swapping.......................................................................... 1-14
Storage Area Networking ..................................................... 1-16
Multihost Storage Access...................................................... 1-21
Multipath Storage Access ..................................................... 1-23
Storage Configuration Identification ............................................ 1-28
Conducting Physical Inventory ............................................ 1-28
Displaying Storage Configurations ...................................... 1-28
Identifying Controller Addressing...................................... 1-30
Identifying Device Path Components.................................. 1-31
Identifying DMP Devices...................................................... 1-34
Storage Array Firmware ................................................................. 1-35
Fibre Channel HBA Cards ..................................................... 1-35
Verifying Fibre Channel HBA Firmware ........................... 1-36
Verifying SPARCstorage Array 100 Firmware.................. 1-37
Verifying Sun StorEdge A5x00 Array Firmware............... 1-38
Verifying Sun StorEdge T3 Array Firmware ..................... 1-39
Verifying Sun StorEdge A5x00 Disk Drive Firmware ...... 1-40
Firmware Upgrade Best Practices........................................ 1-41
Exercise: Recording Your Storage Configuration ....................... 1-42
Preparation............................................................................... 1-42
Task 1 – Reviewing Sun Storage Features ........................... 1-42
Task 2 – Identifying Host Adapter Configurations ........... 1-44
Task 3 – Identifying Storage Array Configurations.......... 1-45
Task 4 – Verifying Storage Interface Firmware
Revisions ............................................................................... 1-46
Task 5 – Verifying Array Disk Drive Firmware
Revisions ............................................................................... 1-47
Exercise Summary............................................................................ 1-48
Managing Data ................................................................................. 2-1
Objectives ........................................................................................... 2-1
Virtual Disk Management ................................................................ 2-2
Availability................................................................................. 2-2
Performance ............................................................................... 2-2
Scalability .................................................................................. 2-3
Maintainability .......................................................................... 2-3
RAID Technology Introduction ....................................................... 2-4
Supported RAID Standards..................................................... 2-4
RAID Terminology .................................................................. 2-5
RAID Level Common Features ........................................................ 2-6
Concatenation – RAID 0........................................................... 2-6
Striping – RAID 0 ...................................................................... 2-8
Mirroring – RAID 1................................................................ 2-10
Mirrored Stripe – RAID 0+1 ................................................. 2-12
Mirrored Concatenation – RAID 0+1 .................................. 2-14
Striped Mirror – RAID 1+0 ................................................... 2-15
Protecting Storage Devices From Usage............................. 3-29
Global Exclusion .................................................................... 3-32
Installing the VEA............................................................................ 3-35
VEA Software Initialization.................................................. 3-36
VEA Client Software Startup ............................................... 3-37
Host Connection Window .................................................... 3-38
Resolving Low-Bandwidth Access Problems .................... 3-39
Using Basic VEA Features .............................................................. 3-40
Main Window Functional Areas........................................... 3-40
Resizing Display Panes .......................................................... 3-45
Modifying Preferences .......................................................... 3-46
Customizing the Grid Display ............................................. 3-47
Examining VEA Command Logs ......................................... 3-48
Using the VEA Search Tool .................................................. 3-49
Decoding VxVM Error Messages .................................................. 3-50
Exercise: Configuring VxVM.......................................................... 3-51
Preparation............................................................................... 3-51
Task 1 – Reviewing Key Lecture Points.............................. 3-52
Task 2 – Installing the VxVM Software .............................. 3-55
Task 3 – Verifying the VxVM System Files ........................ 3-57
Task 4 – Evaluating the Storage Configuration ................. 3-58
Task 5 – Installing the VEA Client Software ...................... 3-59
Task 6 – Starting the VEA Client Software......................... 3-60
Task 7 – Customizing the VEA GUI Appearance ............. 3-61
Task 8 – Navigating the VxVM Technical Manuals........... 3-61
Task 9 – Using the VxVM Error Numbering System........ 3-63
Exercise Summary............................................................................ 3-64
VERITAS Volume Manager Basic Operations ............................... 4-1
Objectives ........................................................................................... 4-1
VxVM Disk Group Functions........................................................... 4-2
Primary Functions of a Disk Group ....................................... 4-2
VxVM Disk Drives .................................................................... 4-3
Standard VxVM Disk Groups ................................................ 4-4
Shared VxVM Disk Groups .................................................... 4-5
Cross-Platform Data Sharing Disk Groups .......................... 4-6
VxVM Disk Group Operations ........................................................ 4-7
Verifying Disk Group Status ............................................................ 4-8
Using the vxdisk Command to Verify Disk Group
Status........................................................................................ 4-8
Using the vxdg Command to Verify Disk Group Status..... 4-8
Administering Disk Groups Using the vxdiskadm Utility.......... 4-9
Functional Overview ............................................................. 4-10
Creating a New Disk Group................................................. 4-11
Removing a Disk Drive From a Disk Group...................... 4-12
Creating Volumes Using the VEA GUI ........................................ 5-13
Disk Selection Method............................................................ 5-13
Using the Disk Selection Form............................................. 5-14
Using the Volume Attributes Form..................................... 5-15
Using the Create File System Form ..................................... 5-16
Creating Volumes Using the vxassist Command.................... 5-17
The vxassist Command ...................................................... 5-17
Specifying Volume Size.......................................................... 5-17
Using vxassist Command Options .................................. 5-18
Modifying Volume Access Attributes........................................... 5-20
Verifying Volume Ownership............................................... 5-20
Modifying Volume Ownership and Permissions............... 5-20
Adding a UFS File System to Existing Volumes ......................... 5-21
Using the VEA GUI to Add a File System........................... 5-21
Adding a File System From the Command Line............... 5-23
Enabling the Solaris OS UFS Logging Feature ................... 5-24
Administering Volume Logs.......................................................... 5-25
Using DRLs .............................................................................. 5-25
Using RAID-5 Logs................................................................ 5-26
Planning Log Placement ....................................................... 5-27
Adding a Volume Log From the VEA GUI......................... 5-28
Adding a Volume Log From the Command Line ............. 5-29
Removing Volume Logs Using the VEA GUI.................... 5-30
Removing Volume Logs From the Command Line.......... 5-31
Using the VEA GUI to Analyze Volume Structures ................... 5-32
Displaying Volume Layout Details ...................................... 5-32
Viewing Disk Volume Mapping and Performance........... 5-33
Exercise: Creating Volumes and File Systems ............................. 5-34
Preparation............................................................................... 5-34
Task 1 – Reviewing Key Lecture Points.............................. 5-35
Task 2 – Creating a Volume.................................................. 5-37
Task 3 – Adding a Volume Mirror ...................................... 5-39
Task 4 – Adding a File System to a Volume........................ 5-41
Task 5 – Adding a DRL ......................................................... 5-43
Task 6 – Resizing a Volume and File System..................... 5-45
Task 7 – Creating a RAID-5 Volume ................................... 5-47
Task 8 – Analyzing Volumes Using the VEA GUI............ 5-49
Task 9 – Verifying Ending Lab Status ................................. 5-54
Exercise Summary............................................................................ 5-55
VERITAS Volume Manager Advanced Operations ....................... 6-1
Objectives ........................................................................................... 6-1
Boot Disk Encapsulation and Mirroring......................................... 6-2
Optimizing the Boot Disk Hardware Configuration ........... 6-2
Boot Disk Encapsulation Prerequisites ................................. 6-3
Encapsulating the System Boot Disk..................................... 6-4
Basic Intelligent Storage Provisioning Administration .............. 6-54
Primary ISP Components ...................................................... 6-54
Using Storage Pool Set Templates ....................................... 6-56
Using Storage Pool Templates ............................................. 6-58
Using Application Volume Templates ............................... 6-60
Creating Application Volumes Using the vxvoladm
Command............................................................................. 6-62
Creating Application Volumes Using the VEA GUI ........ 6-63
Interpreting Application Volume Configurations ............ 6-65
Replacing Failed Disk Drives ......................................................... 6-66
Failure Behavior ...................................................................... 6-66
Evaluating Failure Severity ................................................... 6-67
General Disk Drive Replacement Process .......................... 6-70
Exercise: Performing Advanced Operations................................ 6-72
Preparation............................................................................... 6-72
Task 1 – Reviewing Key Lecture Points.............................. 6-73
Task 2 – Encapsulating the System Boot Disk ................... 6-76
Task 3 – Mirroring the System Boot Disk ............................ 6-78
Task 4 – Performing an Online Volume Relayout............. 6-80
Task 5 – Evacuating a Disk Drive ........................................ 6-82
Task 6 – Moving a Populated Volume................................. 6-82
Task 7 – Performing a Snapshot Backup ............................ 6-84
Task 8 – Creating a Layered Volume .................................. 6-85
Task 9 – Replacing a Failed Disk Drive .............................. 6-86
Task 10 – Using Intelligent Storage Provisioning .............. 6-88
Task 11 – Configuring a Best Practice Boot Disk............... 6-91
Exercise Summary............................................................................ 6-93
VERITAS File System Basic Operations........................................ 7-1
Objectives ........................................................................................... 7-1
Basic VxFS Features ........................................................................... 7-2
Extent-Based Space Allocation................................................ 7-2
File System Intent Logging ..................................................... 7-3
Installing the VxFS Software ............................................................ 7-4
Creating VxFS File Systems .............................................................. 7-5
Extended VxFS Mount Options ....................................................... 7-6
Intent Log Behavior .................................................................. 7-6
Error Handling Behavior ......................................................... 7-7
Other VxFS Mount Options..................................................... 7-7
Online File System Administration ................................................. 7-8
Online Defragmentation .......................................................... 7-8
Online Resizing ......................................................................... 7-8
Online Backup and Restore ..................................................... 7-8
Preface
Course Goals
Upon completion of this course, you should be able to:
● Install and initialize VERITAS Volume Manager (VxVM) software
● Define VxVM objects
● Describe public and private regions
● Start and customize the Volume Manager Storage Administrator
VERITAS Enterprise Administrator (VEA) graphical user interface
(GUI)
● Perform operations using the command-line interface
● Perform disk and volume operations
● Create redundant array of independent disks (RAID) volumes
● Set up dirty-region logs (DRLs)
● Perform common file system operations using the VEA GUI
● Create new disk groups, remove disks from a disk group, move disks
between disk groups, and deport and import disk groups between
servers
● Simulate disk failure and complete a disk recovery
● Create and manage hot-spare pools
● Manage and disable the hot-relocation feature
● Perform basic performance analysis
Course Map
The following course map enables you to see what you have
accomplished and where you are going in reference to the course goals.
[Course map diagram: module flow for the course, beginning with Sun Storage Concepts and Managing Data and continuing through the remaining modules]
Refer to the Sun Educational Services catalog for specific information and
registration.
Introductions
Now that you have been introduced to the course, introduce yourself to
the other students and the instructor, addressing the following items:
● Name
● Company affiliation
● Title, function, and job responsibility
● Experience related to topics presented in this course
● Reasons for enrolling in this course
● Expectations for this course
Conventions
The following conventions are used in this course to represent various
training elements and alternative learning resources.
Icons
Note – Indicates additional information that can help students but is not
crucial to their understanding of the concept being described. Students
should be able to understand the concept or complete the task without
this information. Examples of notational information include keyword
shortcuts and minor system adjustments.
Typographical Conventions
Courier is used for the names of commands, files, directories,
programming code, and on-screen computer output; for example:
Use ls -al to list all files.
system% You have mail.
Courier bold is used for characters and numbers that you type; for
example:
To list the files in this directory, type:
# ls
Palatino italics is used for book titles, new words or terms, or words that
you want to emphasize; for example:
Read Chapter 6 in the User’s Guide.
These are called class options.
Sun Storage Concepts
Objectives
Upon completion of this module, you should be able to:
● Describe the major disk storage administration tasks
● Describe Sun storage interface types
● Describe available RAID technologies including:
● Host-based RAID technology
● Controller-based RAID technology
● Describe disk storage concepts that are common to many storage
installations including:
● Hot swapping
● Storage area networking
● Multihost access
● Multipath access
● Identify storage configurations including:
● Conducting physical inventory
● Displaying storage configurations
● Identifying controller addresses
● Decoding logical device paths
● Verify storage array firmware revisions
Disk Storage Administration Introduction
VxVM Initialization
When you install VxVM, at least one disk drive must be brought under
VxVM control using the vxinstall utility. You can either encapsulate a
disk, which preserves existing data on the disk, or you can initialize a disk,
which effectively destroys existing data.
If you are not familiar with the device address strategy in your particular
installation, you might accidentally initialize the wrong disk drives. This
error could destroy valuable data, including the operating system.
In most cases, compromises are made when choosing among cost savings,
performance, availability, and maintainability.
You can configure the GUI to display command-line equivalents for each
operation.
Even though you might not be responsible for the design of your VxVM
volume structures, you must still be familiar with most aspects of your
particular storage devices.
Each of the basic interface types has two or more variations, which have
evolved over a period of several years. The interfaces have improved in
the following areas:
● Data transfer speed
● Data transfer latency
● Interface cable lengths
SCSI Overview
SCSI was initially implemented in the 1980s as a way of making the
interface between the host computer system and the disks independent of
the computer manufacturer. Prior to the introduction of SCSI, all the
computer manufacturers had their own way of connecting the host
computer system to the disk drives.
SCSI introduced the idea of intelligent disk drives where the host
computer system requested the transfer of a block of data from the disk.
The host system had no need to know the underlying disk geometry. It
issued a request to the disk for the transfer of a block of data. The shift of
intelligence from the host computer system to the disk allowed the same
disk to be used by different manufacturers, which ultimately led to
cheaper, faster, and larger disk drives.
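The block-level abstraction described above can be sketched in a few lines of Python. The geometry figures (heads and sectors per track) are invented for illustration; the point is that the host asks only for a logical block number, and the drive electronics translate it to a physical location internally.

```python
# Toy model of an "intelligent" SCSI disk: the host addresses logical
# blocks only; the drive maps them to its private geometry.
# The geometry numbers here are made up for illustration.

HEADS = 4            # read/write heads (one per recording surface)
SECTORS = 32         # sectors per track

def lba_to_chs(lba):
    """Translate a logical block address to (cylinder, head, sector)."""
    cylinder = lba // (HEADS * SECTORS)
    head = (lba // SECTORS) % HEADS
    sector = lba % SECTORS          # 0-based sector within the track
    return cylinder, head, sector

# The host simply says "give me block 1000"; the mapping is the
# drive's concern, so drives from different manufacturers can answer
# the same request.
print(lba_to_chs(1000))   # (7, 3, 8)
```

Because the mapping lives inside the drive, a vendor can change the geometry (or replace rotating media entirely) without the host noticing.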
The connection between the host system and the disk drives was made over the SCSI bus, for which a
set of standards was agreed upon. The speed and data capacity of the
SCSI bus have been increased to allow for the higher demands of today's
servers. One of the earliest problems faced with SCSI was the differing
cable lengths from the host system to the disk drives themselves. For the
SCSI bus to operate reliably over differing cable lengths, two electrical
connection methods were defined: single-ended (for short connection
lengths) and differential (for connection over longer cables).
Single-Ended SCSI
Differential SCSI
As shown in Figure 1-2, the data bits are sent as two equal and opposite
voltages, which allows the signal to travel farther without degradation.
Differential SCSI allows cable lengths of up to 25 meters.
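Why the differential scheme tolerates longer cables can be shown with a small numeric sketch. The voltage values are illustrative, not electrical specifications: noise picked up along the cable appears on both wires equally, and the receiver's subtraction cancels it.

```python
# Differential SCSI sketch: a bit is sent as two equal and opposite
# voltages. Noise induced along the cable is common to both wires,
# so subtracting the two lines at the receiver removes it.
# Voltage levels are illustrative only.

def send_differential(bit, noise):
    v_plus = (1.0 if bit else -1.0) + noise    # both wires pick up
    v_minus = (-1.0 if bit else 1.0) + noise   # the same noise
    return v_plus, v_minus

def receive_differential(v_plus, v_minus):
    # The common-mode noise term cancels in the difference.
    return (v_plus - v_minus) > 0

# Even with noise larger than the signal itself, the bit survives:
print(receive_differential(*send_differential(1, noise=2.5)))  # True
print(receive_differential(*send_differential(0, noise=2.5)))  # False
```

A single-ended receiver, which compares one wire against ground, has no such cancellation, which is why its cable-length limit is much shorter.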
[Table: SCSI-3 standards — for each interface type: speed, bus width, maximum number of targets, and maximum cable lengths for single-ended, high-voltage differential, and low-voltage differential wiring]
SCSI Priority
The bus arbitration mechanism for SCSI uses the SCSI target ID to
determine priority. Narrow SCSI has target addresses 0–7; target 7 has the
highest priority (and is usually the ID of the controller), and target 0 the
lowest. Wide SCSI continues the order with targets 15 down to 8, so
target 8 has the lowest priority of all. Performance can be affected through
injudicious use of SCSI target addresses.
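The arbitration order can be captured in a few lines of Python; this is a sketch of the standard wide-SCSI priority sequence described above.

```python
# SCSI bus arbitration priority, highest first. On an 8-bit (narrow)
# bus the order is simply 7 down to 0; a 16-bit (wide) bus continues
# with 15 down to 8, so target 8 arbitrates last.
WIDE_PRIORITY = list(range(7, -1, -1)) + list(range(15, 7, -1))

def arbitration_rank(target_id):
    """0 = wins arbitration first; 15 = wins last (wide bus)."""
    return WIDE_PRIORITY.index(target_id)

print(arbitration_rank(7))   # 0  -- the controller usually sits here
print(arbitration_rank(0))   # 7
print(arbitration_rank(8))   # 15 -- lowest priority on a wide bus
```

This is why, for example, placing a heavily used disk at target 8 on a busy wide bus can make it lose arbitration to every other device.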
Advantages of FC-AL
The FC-AL development effort is part of the American National Standards
Institute/International Organization for Standardization (ANSI/ISO)
accredited SCSI-3 standard. This standard helps to prevent the creation of
non-conforming, incompatible implementations. Virtually all major
system vendors are implementing FC-AL, as are all major disk drive and
storage system vendors.
FC-AL operates on both fiber-optic cable and copper wire, and it can be
used for more than just disk input/output (I/O). The Fibre Channel
specification supports high-speed system and network interconnects
using a wide variety of popular protocols, including:
● SCSI
● Internet Protocol (IP)
● Asynchronous Transfer Mode Adaptation Layer 5 for computer data (ATM AAL5)
● Fibre Channel Link Encapsulation (FC-LE)
● Institute of Electrical and Electronics Engineers specification for data
link layer transmission (IEEE 802.2)
RAID Technology
RAID virtual data structures can be created and managed by software
applications, or they can be a resident-hardware function of some storage
devices.
[Figure 1-3: Host-based RAID — a user or application accesses a 3-Gbyte virtual volume through the VM software; the volume is built from three 1-Gbyte physical disks (targets T1, T2, and T3) in a storage array on controller c4]
Although the physical paths to the three disk drives in Figure 1-3 still
exist, they are not accessed directly by users or applications. Only the
virtual volume paths are referenced by users.
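The virtualization shown in Figure 1-3 can be sketched as a simple block-address map. The sizes match the figure (three 1-Gbyte disks presented as one 3-Gbyte volume); the mapping shown is plain concatenation, which is only one of the layouts the volume manager can use.

```python
# Sketch of host-based RAID virtualization: three 1-Gbyte physical
# disks (targets T1-T3) presented to users as one 3-Gbyte virtual
# volume. Offsets are mapped by concatenation here; a striped layout
# would interleave them instead.
GBYTE = 1 << 30
DISKS = ["T1", "T2", "T3"]          # each 1 Gbyte
DISK_SIZE = GBYTE

def virtual_to_physical(offset):
    """Map a byte offset in the virtual volume to (disk, offset)."""
    if not 0 <= offset < len(DISKS) * DISK_SIZE:
        raise ValueError("offset outside the 3-Gbyte virtual volume")
    return DISKS[offset // DISK_SIZE], offset % DISK_SIZE

print(virtual_to_physical(0))                # ('T1', 0)
print(virtual_to_physical(2 * GBYTE + 42))   # ('T3', 42)
```

Applications see only the virtual offsets on the left of this mapping; the physical paths on the right remain hidden behind the volume management software.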
Software that runs on the host system creates and manages the virtual
volume structures.
[Figure: Controller-based RAID — user access to the disks passes through RAID configuration software resident on the storage array controller]
Note – The Sun StorEdge T3 array RAID structures are configured using
either the Sun StorEdge Component Manager software or resident storage
array operating system commands.
Hot Swapping
Most Sun storage arrays are engineered so that a failed disk drive can be
replaced without interrupting customer applications. The
disk-replacement process includes one or more software operations that
can vary with each disk storage platform.
In its basic form, the process to replace a failed disk drive that is under
VxVM control is as follows:
1. Use the VxVM vxdiskadm utility to logically remove the disk.
2. Use the VxVM vxdiskadm utility to logically install the new disk.
The VxVM disk replacement process is more complex for some storage
arrays, such as the Sun StorEdge A5x00 array. The Sun StorEdge A5x00
array procedure is as follows:
1. Use the VxVM vxdiskadm utility options 4 and 11 to logically
remove the disk and place it offline.
2. Use the luxadm utility’s remove_device command.
3. Use the luxadm utility’s insert_device command.
4. Run the vxdctl enable command to read in the new configuration.
5. Use the VxVM vxdiskadm utility option 5 to logically install the new
disk.
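The Sun StorEdge A5x00 sequence above can be sketched as a console transcript. This is a hedged illustration only: the enclosure name (BOX1) and slot designator (f1) are hypothetical, and vxdiskadm is an interactive menu utility, so the option numbers appear as comments.

```shell
# Hypothetical Sun StorEdge A5x00 disk-replacement transcript.
# Enclosure name (BOX1) and slot (f1) are examples only.

vxdiskadm                     # options 4 and 11: remove the disk for
                              # replacement, then take it offline
luxadm remove_device BOX1,f1  # logically remove the failed drive, then pull it
luxadm insert_device BOX1,f1  # insert the new drive and build device nodes
vxdctl enable                 # make VxVM rescan the new configuration
vxdiskadm                     # option 5: replace the failed disk
```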
Note – You must be familiar with the disk-replacement process for your
particular disk storage devices.
[Figure: a host adapter connects through fiber-optic cables to a TL_Port.]
SAN Definitions
SAN Fabric
Networks that use Fibre Channel switches are referred to as fabrics. The
term fabric characterizes a network of multiple switches as opposed to a
network with a single switch. Each connection in a fabric can use the full
100-Mbyte/sec Fibre Channel bandwidth.
There are two types of Fibre Channel devices, public and private.
Private devices do not have full Fibre Channel addressing capability. They
have only the Arbitrated Loop Physical Address (ALPA) portion of the
Fibre Channel physical address. These devices exist only on loops and,
unless the switch offers extra support, cannot communicate outside
their own loop.
You can configure the Fibre Channel switch ports to function in several
ways using switch management software. The primary reason for
different port functionality is to allow selective access between Fibre
Channel devices and host systems. You should only use the following
port configurations:
● Fabric port (F_Port) – A fabric port connects a Fibre Channel switch
to a fabric-aware node port (or N_Port) on an end-device.
● Segmented loop port (SL_Port) – Segmented loop ports provide
support for private arbitrated loops on a Fibre Channel switch. All
segmented loop ports in the same SL zone behave as one private
arbitrated loop (and so they share the same ALPA space).
● Translated loop port (TL_Port) – Translated loop ports provide
support for public and private loop devices on a Fibre Channel
switch. Translated loop ports translate between private and public
addresses, allowing public devices and private devices to
communicate with one another.
● Trunk port (T_Port) – A trunk port connects a Fibre Channel switch
to another Fibre Channel switch (this is known as cascading).
Zones
[Figure: a Sun Enterprise 3500 server with two host adapters connects to Switch 1, whose eight ports are divided into zones; a Sun StorEdge A5200 array attaches to one zone.]
The number of storage devices that can be attached to a host can grow to
the thousands with the advent of SANs with native fabric connectivity.
Probing all these devices at boot time and creating device nodes can
increase the boot time greatly. In addition, a host might not need access to
all of the storage devices it can access. The Sun StorEdge Network FC
Switch-16, Version 3.0, no longer creates device nodes for every storage
device attached. Instead, the administrator creates device nodes on
demand by using the cfgadm utility. The device nodes, once created, are
persistent across reboots.
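On-demand node creation with cfgadm can be sketched as follows. The attachment-point IDs (c4 and the port WWN) are hypothetical examples, not values from the text.

```shell
# Hypothetical on-demand device node creation with cfgadm.
# The attachment points (c4, c4::50020f2300004d21) are examples only.

cfgadm -al                                 # list fabric devices on each HBA
cfgadm -c configure c4                     # create nodes for all devices on c4
cfgadm -c configure c4::50020f2300004d21   # or configure one device by WWN
cfgadm -c unconfigure c4::50020f2300004d21 # remove the nodes when done
```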
Multi-Initiated SCSI
[Figure: two SCSI cards (bus in and bus out) share a multi-initiated SCSI bus with targets t9 through t14.]
The SCSI initiator values are changed using complex system firmware
commands. The process of changing these values varies with system
hardware platforms.
Do not change the external SCSI bus scsi-initiator-id globally; change it at the interface card level.
Read the documentation carefully, because the procedures are hardware-platform specific.
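You can inspect the current initiator ID from Solaris before touching anything. This sketch only shows how to check the values; the per-card change itself is made in the OpenBoot PROM nvramrc and is platform-specific, so consult your hardware documentation for that step.

```shell
# Check initiator IDs before changing anything (inspection only; the
# per-card change is made in the OpenBoot PROM nvramrc).

eeprom scsi-initiator-id                # show the global value, normally 7
prtconf -vp | grep scsi-initiator-id    # show values in the device tree
```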
[Figure: Hosts 1, 2, and 3 connect through interface boards A and B to a SPARCstorage Array 100 with a SOC host adapter; two hosts attach to Port A and Port B of a single interface board.]
Some Sun storage devices allow dual connections to a storage array from
a single host system. As shown in Figure 1-9, one host adapter can be
configured as a backup if the primary access path fails.
[Figure: a host system with two Ultra SCSI controller cards (C1 and C2) connects to both controllers of a storage array; the RDAC driver, beneath the RAID configuration software, manages the two paths to the drives.]
Applications directly interface with the RDAC driver and are unaware of
interface failure. If one of the dual-controller paths fails, the RDAC driver
automatically directs I/O to the functioning path.
Note – The Sun StorEdge A3500FC array uses a Fibre Channel interface
instead of the SCSI interface used on the other RDAC-controlled storage
arrays.
The Alternate Path (AP) software for the Solaris OS works with Dynamic
Reconfiguration (DR) to provide the ability to move all I/O off a system
board before removal for upgrade or repair. AP is not applicable to all
architectures.
[Figure: system boards #1 and #2, each with an HBA interface card (C1 and C2), connect to the drives over the system interconnect; the DMP driver manages both paths.]
DMP
Driver
Specific paths can be enabled and disabled with the VxVM vxdmpadm
command.
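A minimal sketch of path administration with vxdmpadm follows; the controller name c3 is a hypothetical example.

```shell
# Hypothetical DMP path administration; controller c3 is an example only.

vxdmpadm listctlr all       # list controllers and their DMP state
vxdmpadm disable ctlr=c3    # stop routing I/O through controller c3
vxdmpadm enable ctlr=c3     # resume I/O through c3 after maintenance
```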
Note – During a VxVM installation, you must take special steps to ensure
that the DMP feature is compatible with AP, SAN, and the Sun StorEdge
Traffic Manager software.
Sun StorEdge Traffic Manager is the official name for the MPxIO product.
The RDAC, AP, DMP, and Sun StorEdge Traffic Manager software can
coexist in some configurations, but you might need to selectively
configure each interface for use by only one of the applications. Table 1-4
compares some of the multipathing software features.
Use the luxadm probe option to locate several types of Sun storage
arrays, including Sun StorEdge T3 array logical unit numbers (LUNs). In
the following example, two SENA-type arrays were found along with two
single-LUN T3 storage arrays. The luxadm command does not identify the
exact model of storage.
# luxadm probe
Found Enclosure(s):
SENA Name:AA Node WWN:5080020000034ed8
Logical Path:/dev/es/ses0
Logical Path:/dev/es/ses1
Node WWN:50020f200000c193 Device Type:Disk device
Logical Path:/dev/rdsk/c2t1d0s2
Node WWN:50020f200000c367 Device Type:Disk device
Logical Path:/dev/rdsk/c3t1d0s2
[Figure: a host system with an internal SCSI bus (c0), a Sun StorEdge D1000 array on an Ultra SCSI controller (c1), and an FC-AL controller (c3) connected through an FC-AL hub to two Sun StorEdge A5x00 arrays.]
After the VxVM software is installed and licensed, you use the vxdmpadm
command to display the basic controller configuration. The following is
an example.
# vxdmpadm listctlr all
CTLR-NAME ENCLR-TYPE STATE ENCLR-NAME
=====================================================
c0 Disk ENABLED Disk
c2 SENA ENABLED SENA0
c3 SENA ENABLED SENA1
c4 T3 ENABLED T30
c5 T3 ENABLED T31
System drivers and applications use the device paths to access specific
disk drives.
# ls -l /dev/dsk/c2t1d0s2
lrwxrwxrwx 1 root root 74 Oct 21 21:01 /dev/dsk/c2t1d0s2 ->../..
/devices/pci@6,4000/pci@2/SUNW,qlc@5/fp@0,0/ssd@w50020f230000c193,0:c
# ls -l /dev/dsk/c3t1d0s2
lrwxrwxrwx 1 root root 74 Sep 24 22:46 /dev/dsk/c3t1d0s2 ->../..
/devices/pci@6,4000/pci@3/SUNW,qlc@4/fp@0,0/ssd@w21000020370c055a,0:c
Logical device paths to disk drives are found under the /dev/dsk
directory for block devices and under the /dev/rdsk directory for raw
devices.
The number of devices associated with each target depends on the type of
storage device. The relationship between target and device numbers for
software RAID Sun storage is as follows:
● SPARCstorage Array 100:
● Thirty disks
● Six targets, t0–t5
● Five devices (d0–d4) associated with each target
● SPARCstorage® RSM™ array:
● Two selectable target ranges
● Seven disks
● Seven targets, t0–t6 or t8–t14
● A single device (d0) associated with each target
● Sun StorEdge D1000 array:
● Two selectable target ranges
● Eight disks, t0–t3 and t8–t11
● Twelve disks, t0–t5 and t8–t13
● A single device (d0) associated with each target
● Sun StorEdge A5x00 array:
● Four selectable target ranges
● Fourteen disks, targets t0–t6 and t16–t22
● Twenty-two disks, targets t0–t10 and t16–t26
● A single device (d0) associated with each target
● Sun StorEdge MultiPack array:
● Two selectable target ranges for a six-disk model
● Six disks, targets t1–t6 or t9–t14
● Twelve disks, targets t2–t5 and t8–t15
● A single device (d0) associated with each target
● Sun StorEdge MultiPack-FC array:
● Fifteen selectable target ranges
● Six disks, targets t8–t13
● A single device (d0) associated with each target
Notice that the device paths for devices 1 and 2 have the same disk drive
identifier, 20370c0de8. Because the controller numbers are different,
devices 1 and 2 are connected to two different controller interfaces in the
same system.
Note – ISP 2100 and ISP 2200 are model numbers of integrated circuit
chips on the interface cards.
There are different luxadm command options for each generation of Fibre
Channel HBA cards. However, the most current version of luxadm has a
single option (fcode_download) that can be used to verify and upgrade
firmware on most Fibre Channel HBA cards. An example of the command
output follows.
# /usr/sbin/luxadm fcode_download -p
CONTROLLER STATUS
Vendor: SUN
Product ID: SSA110
Product Rev: 1.0
Firmware Rev: 3.6
Serial Num: 00000078CCF9
Accumulate Performance Statistics: Enabled
pSOSystem (129.150.47.115)
Login: root
Password:
T300 Release 1.00 1999/12/15 16:55:46 (129.150.47.115)
t3:/:<1> ver
Note – You can also use the format utility’s inquiry command option to
verify firmware revisions in selected disk drives.
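The format check mentioned in the note can be sketched as the following interactive transcript; the disk selection (0) is a hypothetical example.

```shell
# Hypothetical transcript of checking a drive's firmware with format.
# The disk number selected from the menu (0) is an example only.

format            # choose a disk from the menu, for example 0
format> inquiry   # prints Vendor, Product, and Revision for the drive
format> quit
```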
You should read all firmware-related patch README notes carefully. The
README notes frequently have specific warnings and procedure
requirements that can help prevent extended system downtime.
In some cases, permanent HBA damage can result if you try to upgrade
firmware from an old version to a new version. Review the patch
README notes for instructions informing you to first perform an
upgrade to an intermediate revision level.
Preparation
Ask your instructor to identify the system and storage that is assigned for
your use during this exercise.
If you want to simplify the task of documenting the training system configuration, you can precede this lab
with a short lecture describing your particular system configuration.
The answers are installation, initialization, volume design, volume creation, and volume administration.
The answers are Redundant Dual Active Controller Driver (RDAC), Solaris Alternate Path Driver (AP),
VERITAS Dynamic Multipathing driver (DMP), and Sun StorEdge Traffic Manager software.
The answers are Sun StorEdge RAID Manager, pSOS, or Sun StorEdge Component Manager.
5. Which of the following tools assist with swapping out a failed disk
drive?
a. vxdiskunsetup
b. vxdiskadm
c. vxdisk
d. vxdctl
e. luxadm
Before proceeding with this task, record the type of storage arrays
attached to your classroom system and how many of each type there are.
Determine this by visual inspection.
Type of Storage Arrays Number of Each Type
_______________ _____
_______________ _____
_______________ _____
Use the luxadm utility to determine detailed information about both
of these storage devices. The most commonly used commands are:
# luxadm probe
# luxadm display controller_number
# luxadm display enclosure_name
# luxadm display logical_path
# luxadm display enclosure_name, disk_location
# luxadm fcode_download -p
For each storage array, record the WWN, the enclosure name
(Sun StorEdge A5x00 array only), the number of disks present, and the
controller number.
WWN Enclosure Number of Controller
(12 or 16 digits) Name Disks Installed Number
_______________ _____ _____ _____
_______________ _____ _____ _____
_______________ _____ _____ _____
Note – For the Sun StorEdge A5x00 array, the WWN of the enclosure is
not used in the physical path. You must use the luxadm display command
to determine the WWN of the Sun StorEdge A5x00 units.
The luxadm utility does not recognize the Sun StorEdge D1000 array.
Specific array information must be gathered using other tools such as the
format utility and visual identification.
Use the format utility to determine the controller number of each Sun
StorEdge D1000 HBA card and the number of disks in each storage unit.
Controller Number Array Type Number of Disks
_______ __________ _________
_______ __________ _________
_______ __________ _________
_______ __________ _________
Record the firmware revision of at least one Sun StorEdge A5x00 disk
drive if you have this array model.
Controller Number Disk Drive Firmware Revision
_______ ______________________
_______ ______________________
_______ ______________________
_______ ______________________
Exercise Summary
Manage the discussion here based on the time allowed for this module, which was given in the “About This
Course” module. If you find you do not have time to spend on discussion, then just highlight the key concepts
students should have learned from the lab exercise.
● Experiences
Ask students what their overall experiences with this exercise have been. You might want to go over any
trouble spots or especially confusing areas at this time.
● Interpretations
Ask students to interpret what they observed during any aspects of this exercise.
● Conclusions
Have students articulate any conclusions they reached as a result of this exercise experience.
● Applications
Explore with students how they might apply what they learned in this exercise to situations at their workplace.
Managing Data
Objectives
Upon completion of this module, you should be able to:
● List the advantages of using virtual disk management
● Describe standard RAID terminology
● List the common features of each supported RAID level including:
● Concatenation – RAID 0
● Striping – RAID 0
● Mirroring – RAID 1
● Mirrored Stripe – RAID 0+1
● Mirrored Concatenation – RAID 0+1
● Striped Mirror – RAID 1+0
● Concatenated Mirror – RAID 1+0
● Striping with distributed parity – RAID 5
● Describe the optimum hardware configuration for each supported
RAID level
2-1
Copyright 2004 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision D
Virtual Disk Management
Availability
VxVM provides availability improvements in this area in the following
ways:
● Preventing failed disk drives from making data unavailable
The probability of a single disk drive failure increases with the
number of disk drives on a system. Data redundancy techniques
prevent failed disk drives from making data unavailable.
● Allowing file systems to grow while they are in use
Allowing file systems to grow while they are in use reduces the
system downtime and eases the system administration burden.
● Allowing multiple-host configurations
In a dual-host configuration, one host can take over disk drive
management for another failed host. This configuration prevents a
failed host from making data unavailable.
Performance
Many applications today require high data throughput levels. The VxVM
products can assist in this area by more efficiently balancing the I/O load
across disk drives.
Scalability
Traditionally, file system size has been limited to the size of a single disk
drive. Using VxVM techniques, you can create file systems that consist of
many disk drives. The fact that there are multiple disk drives is
transparent to all applications. The size limit of file systems is increased to
the UNIX limit of 1 terabyte (Tbyte).
Maintainability
Administering large installations can be much easier with the assistance
of well-designed tools. VxVM has both graphical and command-line tools
to assist administrators.
VxVM also has a number of command-line programs and utilities that are
useful and are preferred by many administrators. They can also be used in
shell programs to perform virtually all administration tasks.
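A few of the commonly used VxVM command-line utilities are sketched below; the disk group name datadg is a hypothetical example.

```shell
# Common VxVM command-line utilities; the disk group name is an example only.

vxprint -ht -g datadg   # show the record hierarchy for one disk group
vxdisk list             # list disks and their VxVM status
vxdg list               # list imported disk groups
```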
Note – RAID levels 2, 3, 4, and 6 are not available with VxVM. They are
not commonly implemented in commercial applications. RAID 0+1 and
RAID 1+0 are not true RAID levels but are abstractions composed of more
than one RAID level.
RAID Terminology
In the explanation of RAID levels in the following pages, a number of
technical terms are used to describe RAID structure components. The
following are some of the definitions:
● Stripe unit refers to a sequential group of data blocks on a single disk
drive. The stripe unit size is configurable.
● The terms disk drive and column are synonymous in RAID
discussions.
● Stripe width is the stripe unit size times the number of columns.
● Transfer rate and I/O per second (IOPS) are performance metrics:
● Transfer rate is the speed (measured in Mbytes per second) at
which a system can move data through its controller. In RAID
systems, read and write transfer rates can vary considerably.
High transfer rates are particularly valuable for applications
that must move large amounts of data quickly, such as
document imaging, data mining, or digital video applications.
● IOPS is a measure of the ability of a storage system to handle
multiple, independent I/O requests in a certain period of time.
RAID systems with high transfer rates do not always have good
IOPS performance. Database and transaction processing
systems are examples of applications that typically require high
I/O rate performance.
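The stripe width definition above is a simple product. For example, with an arbitrary 64-Kbyte stripe unit and four columns:

```shell
# Stripe width = stripe unit size x number of columns.
# A 64-Kbyte stripe unit across four columns gives a 256-Kbyte stripe width.
stripe_unit_kb=64
columns=4
echo $((stripe_unit_kb * columns))   # 256
```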
Concatenation – RAID 0
The primary reason for using concatenation is to create a virtual disk
drive that is larger than one physical disk drive. Concatenation obtains
more storage capacity by logically combining portions of two or more
physical disk drives. Concatenation also enables you to grow a virtual
disk drive by concatenating additional physical disk drive space to it. This
technique places no restriction on mixing drive sizes; member drives can
be of any size, so no storage space is lost.
The example in Figure 2-1 on page 2-7 shows the concatenation of three
physical disk drives. Each portion of the concatenation is managed by
VxVM and is called a subdisk. A subdisk is the basic unit that VxVM uses
to assemble and control all data storage areas.
[Figure 2-1: the array management software concatenates physical disks 1, 2, and 3 (blocks 1–1000, 1001–2000, and 2001–3000) into one virtual disk of blocks 1–3000.]
The term block represents a disk drive block or sector (512 bytes) of data.
Advantages
● One hundred percent of the disk drive capacity is available for user
data.
Limitations
Striping – RAID 0
The primary reason for using striping is to improve IOPS performance.
The performance increase comes from accessing the data in parallel.
Parallel access increases I/O throughput because all disk drives in the
virtual device are busy most of the time servicing I/O requests.
The array management software is responsible for making the array look
like a single virtual disk drive. Striping takes portions of multiple physical
disk drives and combines them into one virtual disk drive that is
presented to the application.
As shown in Figure 2-2 on page 2-9, the I/O stream is divided into
segments called stripe units (SUs), which are mapped across two or more
physical disk drives, forming one logical storage unit. The stripe units are
interleaved so that the combined space is made alternately from each
slice, and is, in effect, shuffled like a deck of cards. The stripe units are
analogous to the lanes of a freeway.
There is no data protection in this scheme and, because of the way that
striping is implemented, the loss of one disk drive results in loss of data
on all striped disk drives. Therefore, while this implementation improves
performance, it degrades reliability.
[Figure 2-2: stripe units SU 1 through SU 6 are interleaved across physical disks 1, 2, and 3 and presented by the array management software as a single virtual disk.]
Advantages
Limitations
Mirroring – RAID 1
The primary reason for using mirroring is to provide a high level of
availability or reliability.
The array management software takes duplicate copies of the data located
on multiple physical disk drives and presents one virtual disk drive to the
application, as shown in Figure 2-3.
[Figure 2-3: the array management software keeps duplicate copies of blocks 1–4 on two physical disks and presents them as one virtual disk.]
Advantages
Limitations
As shown in Figure 2-4, two drives are first striped and then mirrored.
The reliability is as high as with mirroring. Because the technique of
striping is also used, the performance is much better than when using just
mirroring.
[Figure 2-4: stripe units SU 1 through SU 8 are striped across two physical disks on each side; the two resulting striped virtual disks are then mirrored by the array management software into one virtual disk.]
Advantages
Limitations
As shown in Figure 2-5, two drives are first concatenated and then
mirrored for increased reliability. Because the technique of concatenation
is used, varied storage segments of dissimilar size can be combined to
maximize storage utilization.
[Figure 2-5: blocks 0–499 and blocks 500–999 are concatenated on each side, and the two concatenated virtual disks are mirrored by the array management software into one volume of blocks 0–999.]
Advantages
Limitations
Striped mirror volumes also have a quicker recovery time after a disk
drive failure because only a single stripe must be resynchronized instead
of an entire mirror. As a best practice, use striped mirror volumes for large
volumes where failure recovery time and performance are issues.
[Figure: each stripe unit is mirrored to form sub-volumes SV 1 and SV 2, which the array management software then stripes into one virtual disk.]
Advantages
Limitations
[Figure: blocks 1–1000, 1001–2000, and 2001–3000 are each mirrored, and the mirrors are concatenated by the array management software into one virtual disk.]
Advantages
Limitations
Three of the RAID levels introduced by the Berkeley Group have been
referred to as parity RAID because they use a common data protection
mechanism. RAID 3, 4, and 5 all use the concept of bit-by-bit parity to
protect against data loss.
[Figure: stripe units SU 1 through SU 12 are striped across disks 1–4 with parity distributed in rotation (P(1-3) on disk 4, P(4-6) on disk 3, P(7-9) on disk 2, and P(10-12) on disk 1); the array management software presents one virtual disk of SU 1 through SU 12.]
Ask students to discuss the impact of RAID 5 on performance, cost, failure, and recovery.
The layout specification to use for vxassist is layout=raid5 (or raid5nolog); logging is the default.
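The vxassist layout specification can be exercised as follows; the disk group (datadg) and volume names are hypothetical examples.

```shell
# Hypothetical vxassist invocations; disk group and volume names are
# examples only.

vxassist -g datadg make rvol 2g layout=raid5        # RAID-5 with a log (default)
vxassist -g datadg make rvol2 2g layout=raid5nolog  # RAID-5 without a log
```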
Advantages
Limitations
Point out that this is the reason for hardware-based RAID, such as that used in the Sun StorEdge A3500 and
Sun StorEdge A1000 arrays.
Preparation
The first several tasks in this exercise are group discussions about
optimizing hardware configurations to suit particular volume structures.
The last task involves identifying and recording your VxVM server
configuration and selecting six disk drives for use during the remainder of
this course.
You should work on VxVM servers in small groups of two or three. Each
group has six disk drives with which to work. Ideally, three of the disk
drives are on one array, and three are on a different array.
You must examine your classroom setup and determine which disk drives
you are going to use.
Caution – More than one group might be working on the same server. It is
essential that each group uses its own disk drives and does not
accidentally reconfigure disk drives that are being used by another group.
11. Which of the following RAID structures has the highest tolerance to
disk drive failures?
a. RAID 0+1
b. RAID 5
c. RAID 1
d. RAID 1+0
e. RAID 0
The answer is d.
[Figure: two arrays, each with targets t1 through t3.]
Multiple boards, HBAs, and arrays reduce the possibility of a catastrophic failure that disables an entire site.
RAID 0 (Concatenated)
[Figure: two volumes built from concatenated subdisks.]
Hint – Assume each subdisk or stripe is a different disk drive: losing one disk drive disables the volume.
Good – One system board, one HBA, and one array is as good as the concatenation gets. The stripe
performance can be improved a lot.
Better – One system board, four HBAs, and four arrays provide the best stripe performance.
Best – Four system boards, four HBAs, and four arrays provide a slight performance gain.
[Figure: two mirrors, each containing subdisks 1 through 4.]
Hint – Assume each subdisk is a different disk drive: losing two disk drives can disable the volume.
Good – One system board, two HBAs, and two arrays provide near maximum availability, but not much
possibility of performance increase.
Better – Two system boards, two HBAs, and two arrays provide a slight availability increase, but no
performance gains.
Best – There is not much else to do except use a SunPlex™ platform configuration.
RAID 5
[Figure: a RAID-5 volume.]
Hint – Assume each stripe or log is a different disk drive: losing two disk drives can disable the volume.
Good – One system board, one HBA, and one array: a single HBA failure disables the volume.
Better – Four system boards, four HBAs, and four arrays provide some availability increase and better
performance.
[Figure: a volume containing two mirrors.]
Hint – Assume each stripe is a different disk drive: losing two disk drives can disable the volume.
Good – Two system boards, two HBAs, and two arrays provide some availability and a performance gain from
striping.
Better – Two system boards, two HBAs, two hubs, and eight arrays increase availability and improve
performance.
Best – Eight system boards, eight HBAs, and eight arrays provide the best availability and performance.
[Figure: a striped volume of mirrors.]
Hint – Assume each mirror is a different disk drive: you can lose up to four disk drives without disabling the
volume.
Good – Two system boards, two HBAs, and two arrays provide availability and some performance gains from
striping. Primary mirrors are on one array, secondaries are on the other.
Better – Two system boards, two HBAs, two hubs, and eight arrays increase availability and improve
performance.
Best – Eight system boards, eight HBAs, and eight arrays provide the best availability and performance.
[Figure 2-15: four HBAs (c3, c4, c5, and c6), each with attached disk drives.]
1. Circle the disk drives in Figure 2-15 that you would use to build the
following RAID structures:
● A three-disk RAID-0 striped volume
● A two-disk RAID-0 concatenated volume
● A two-disk RAID-1 mirrored volume
● A four-disk RAID-5 volume (no log)
● A four-disk RAID-0+1 mirror-stripe volume
● A four-disk RAID-1+0 stripe-mirror volume
2. If all the disk drives in Figure 2-15 are 9 Gbytes in size, what is the
approximate data storage available for each of the structures?
Assume you are using entire disk drives. There is one disk drive left over for a spare.
You create and destroy disk groups and several different volume
structures, so it is important that the disk drives you select are not being
used by another group.
Use the following commands to select and record the logical addresses of
the six disk drives that your group chooses:
● format
● luxadm probe
● luxadm display
Record the logical paths to the six disk drives for your workgroup in the
form c2t3d4.
Disk: _______________ Disk: _______________
Disk: _______________ Disk: _______________
Disk: _______________ Disk: _______________
Caution – If there are other workgroups using the same VxVM server,
you must check with them to ensure that you are not using some of their
disk drives.
Exercise Summary
Manage the discussion here based on the time allowed for this module, which was given in the “About This
Course” module. If you find you do not have time to spend on discussion, then just highlight the key concepts
students should have learned from the lab exercise.
● Experiences
Ask students what their overall experiences with this exercise have been. You might want to go over any
trouble spots or especially confusing areas at this time.
● Interpretations
Ask students to interpret what they observed during any aspects of this exercise.
● Conclusions
Have students articulate any conclusions they reached as a result of this exercise experience.
● Applications
Explore with students how they might apply what they learned in this exercise to situations at their workplace.
Objectives
Upon completion of this module, you should be able to:
● List the key elements of pre-installation planning
● Research VxVM software patch requirements
● Install the VxVM software
● Initialize the VxVM software
● Verify the post-installation environment
● Prepare for virtual disk drive management
● Install the VEA client software
● Use the basic VEA features
● Use the VxVM error numbering system
Installation Planning
Installation Planning
VxVM installations vary in size from small desktop systems to large
servers with Tbytes of data storage. Regardless of the system size, the
installation should be carefully planned in advance.
System Downtime
During a new installation or an upgrade, some system downtime is
always required. Usually you should schedule system downtime so that it
occurs during off-peak system-usage time. Thorough pre-installation
planning usually minimizes the system downtime.
You have the option of not placing certain disk drives under VxVM
control. This option is useful if you have applications that are currently
using file systems or partitions, and you do not want to update the
applications’ references to these file systems or partitions.
In contrast, you might want to put your system disk under VxVM control
so that it can be mirrored.
You might also need to plan for new disk storage devices. In addition, you
might need to add more memory and larger backup tape systems to
compensate for the increased storage load.
Upgrade Resources
Some of the most frustrating installation issues can be discovering that
you are missing a CD-ROM, discovering that you do not have the needed
patches, or discovering that you have misplaced the installation
documents. Having all the required CD-ROMs, patches on the
appropriate media, and documentation minimizes your frustration. Not
only should you have the documentation (for example, release notes and
installation procedures), but you should read it. Reading the installation
documentation is the only way to ensure that you have all of the required
patches.
Licensing
VxVM uses license keys to control access. If you have a
SPARCstorage Array 100 or a Sun StorEdge A5x00 array attached to your
system, VxVM automatically configures a basic-use license. You can also
configure non-array drives connected to the same host. Other storage
arrays might require manual license installations.
Backups
Not only must you have backups, but you must verify them. If there is a
hardware failure or not enough space to facilitate the upgrade, you must
be able to recover or back out the software. Perform a complete backup
immediately prior to the installation process.
Ensure that you read all of the README notes in all of the patches.
Array firmware patches usually install new software drivers that are
sometimes designed to work only with a small range of array firmware
revisions.
Caution – If the mismatch between the system software drivers and the
array-resident firmware is too great, the storage arrays can become
unavailable. Correcting the problem can be difficult and might require
Sun support and hours of downtime.
The configuration used to produce the preceding output purposely uses some out-of-date components.
A configuration using all of the most recent components produces little or no output.
At the time of writing, PatchPro has still not been updated to reflect the current VxVM version.
Installing Patches
The following is a typical patch installation scenario for the configuration
shown in Figure 3-1 on page 3-5:
1. Pay close attention to the PatchPro listing Legend section and the
Patch Fixes column.
The order of the patches can be critical. Firmware patches must be
installed with care. You must carefully study all firmware patch
README notes before taking any action, especially in the following
areas:
● Keyword and Synopsis section
● Patches Required With This Patch section
● WARNING and Patch Installation Instructions sections
2. Examine the /var/sadm/patch directory to check for patches that
were installed after the operating system installation.
You can also use the patchadd -p command, but it displays many
screens of patches that are incorporated into the currently installed
operating system.
3. Verify all firmware levels before attempting to install firmware
patches.
Verifying firmware levels varies according to system and storage
types. Older products are checked using the luxadm command.
Newer products, such as the Sun StorEdge T3 array, require you to
use array-resident firmware programs to verify revision levels.
● support
The support directory contains a group of support tools used to
gather configuration information. Use these tools only under the
direction of technical support personnel. Sun technical support
personnel use a different information gathering tool called Explorer.
● veritas_enabled
The veritas_enabled directory contains many library files to
support a wide range of Sun and third-party storage arrays.
The example does not show the installation of the VxFS software packages.
The VRTSvmdoc package does not prompt for any user-input during
its installation.
The VRTSobgui package does not prompt for any user-input during
its installation.
At this point, the VxVM software is installed but not operational. If you
reboot the system, you see at least two VxVM error messages similar to
the following:
VxVM NOTICE V-5-2-3347 Volume Manager not started
VxVM NOTICE V-5-2-3365 VxVM not started
VxVM Provider initialization warning: Configuration
daemon is not accessible
Only the vxsvc daemon is running. VxVM must be initialized using the
vxinstall utility before it can start successfully.
Licensing information:
System host ID: 80960386
Host type: SUNW,Ultra-4
SPARCstorage Array or Sun Enterprise Network Array: found
Sep 3 11:38:17 ns-east-104 vxdmp: NOTICE: VxVM vxdmp V-5-0-34 added disk
array DISKS, datype = Disk
Sep 3 11:38:17 ns-east-104 vxdmp: NOTICE: VxVM vxdmp V-5-0-34 added disk
array 5080020000034ed8, datype = SENA
Sep 3 11:38:17 ns-east-104 vxdmp: NOTICE: VxVM vxdmp V-5-0-34 added disk
array 5080020000029e70, datype = SENA
Sep 3 11:38:17 ns-east-104 vxdmp: NOTICE: VxVM vxdmp V-5-0-34 added disk
array 60020f200000c3670000000000000000, datype = T3
Sep 3 11:38:17 ns-east-104 vxdmp: NOTICE: VxVM vxdmp V-5-0-34 added disk
array 60020f200000c1930000000000000000, datype = T3
Sep 3 11:38:18 ns-east-104 vxdmp: WARNING: VxVM vxdmp V-5-0-336
Unlicensed array S/N 60020f200000c3670000000000000000 installed
Sep 3 11:38:18 ns-east-104 vxdmp: WARNING: VxVM vxdmp V-5-0-336
Unlicensed array S/N 60020f200000c1930000000000000000 installed
Licensing Requirements
The configuration used in the following example features two
Sun StorEdge T3B arrays. According to the restrictions outlined in the
vxinstall output, an additional license must be installed.
Note – You can also use the vxlicinst utility to manually install a license
key at any time.
● /etc/rcS.d/S25vxvm-sysboot
This script file runs early in the boot sequence to configure the / and
/usr volumes. This file also contains configurable debugging
parameters.
● /etc/rcS.d/S35vxvm-startup1
This script file runs after the / and /usr volumes are available. It
also makes other volumes available that are needed by the Solaris OS
early in the Solaris OS boot sequence, such as swap and /var.
● /etc/rcS.d/S85vxvm-startup2
This script file starts I/O daemons, rebuilds the /dev/vx/dsk and
/dev/vx/rdsk directories, imports all disk groups, and starts all
volumes that were not started earlier in the boot sequence.
● /etc/rcS.d/S86vxvm-reconfig
This script file contains commands to execute the fsck utility on the
root partition before anything else on the system executes.
● /etc/rc2.d/S50isisd
This script file starts the ISIS service daemon (vxsvc) associated with
the VEA graphical interface during the system boot sequence.
● /etc/rc2.d/S94vxnm-vxnetd
This script file starts the vxnetd daemon if the VVR software option
is installed and licensed.
● /etc/rc2.d/S95vxvm-recover
This script file attaches and resynchronizes plexes and starts several
VxVM watch daemons, including: vxrelocd, vxcached, and
vxconfigbackupd. You can also modify this file to change the
default VxVM disk drive failure response from hot relocation to hot
sparing.
● /etc/rc2.d/S96vradmind
This script file starts the vradmind daemon if the VERITAS Volume
Replicator (VVR) software option is installed and licensed.
● /etc/rc2.d/S96vxrsyncd
This script file starts the vxrsyncd daemon if VVR is installed and
licensed.
Hostname: ns-east-104
VxVM vxvm-startup2 INFO V-5-2-503 VxVM general startup...
The system is coming up. Please wait.
NIS domain name is Ecd.East.Sun.COM
starting rpc services: rpcbind keyserv ypbind done.
Setting netmask of lo0 to 255.0.0.0
Setting netmask of hme0 to 255.255.255.0
Setting default IPv4 interface for multicast: add net 224.0/4: gateway
ns-east-104
syslog service starting.
volume management starting.
The system is ready.
Although there are many VxVM program files in the /usr/sbin directory,
only a few are commonly used. They include vxassist, vxstat, vxinfo,
vxprint, vxtask, vxinstall, vxdg, vxdisk, and vxdiskadm.
# ls /usr/sbin/vx*
vxadm vxdiskpr vxpool vxstat
vxassist vxdmpadm vxprint vxtask
vxcache vxedit vxrecover vxtemplate
vxclust vxexport vxrecover.wrap vxtrace
vxcmdlog vxibc vxrelayout vxtranslog
vxconfigd vxinfo vxrlink vxtune
vxdco vxinstall vxrsync vxusertemplate
vxdctl vxiod vxrvg vxvol
vxddladm vxmake vxscriptlog vxvoladm
vxdg vxmemstat vxsd vxvoladmtask
vxdisk vxmend vxsnap vxvset
vxdiskadd vxnetd vxsp
vxdiskadm vxnotify vxspcshow
vxdiskconfig vxplex vxstart_vvr
The script and program files in the /etc/vx/bin directory are called by
higher-level user commands and are not commonly used directly.
# ls /etc/vx/bin
egettxt vxckdiskrm vxedvtoc vxr5vrfy
strtovoff vxclustadm vxeeprom vxreattach
ugettxt vxclustipc vxencap vxrelocd
vsshutdown vxcntrllist vxevac vxresize
vxa5kchk vxconfigbackup vxldiskcmd vxroot
vxapslice vxconfigbackupd vxmirror vxrootmir
vxbadcxcld vxconfigrestore vxmksdpart vxslicer
vxbaddxcld vxconvarrayinfo vxnewdmname vxspare
vxbootsetup vxcxcld vxparms vxsparecheck
vxcached vxdarestore vxpartadd vxsplitlines
vxcap-part vxdevlist vxpartinfo vxswapctl
vxcap-vol vxdevpromnm vxpartrm vxtaginfo
vxcdsconvert vxdisksetup vxpartrmall vxunreloc
vxcheckda vxdiskunsetup vxprtvtoc vxunroot
vxchksundev vxdxcld vxr5check
The vxdisk utility shows the current VxVM status of all disk drives
attached to the system.
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:none - - online invalid
c0t1d0s2 auto:none - - online invalid
c2t1d0s2 auto:none - - online invalid
c2t3d0s2 auto:none - - online invalid
c2t5d0s2 auto:none - - online invalid
c2t16d0s2 auto:none testdg01 testdg online
c2t18d0s2 auto:none - - online invalid
c2t20d0s2 auto:none - - online invalid
c2t22d0s2 auto:none - - online invalid
c3t32d0s2 auto:none - - online invalid
c3t33d0s2 auto:sliced - - online
c3t35d0s2 auto:none - - online invalid
c3t37d0s2 auto:none - - online invalid
c3t50d0s2 auto:none - - online invalid
c3t52d0s2 auto:none - - online invalid
c4t1d0s2 auto:none - - online invalid
c5t1d0s2 auto:none - - online invalid
Disks that show a status of online invalid are not under VxVM control.
Disks that show a status of online have been initialized, but are not
assigned to a disk group. When online disks are added to a disk group,
they are assigned a name which appears in the DISK column. By default,
the disk name is derived from the name of the disk group.
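When many drives are attached, it can help to script the status check. A minimal sketch, run here against a few sample lines copied from the listing above (on a live system you would pipe `vxdisk list` directly):

```shell
# Sample `vxdisk list` output; drives with a STATUS ending in "invalid"
# are not under VxVM control.
cat <<'EOF' > /tmp/vxdisk.out
c0t0d0s2     auto:none       -            -            online invalid
c2t16d0s2    auto:none       testdg01     testdg       online
c3t33d0s2    auto:sliced     -            -            online
EOF
# Print the device names of drives not under VxVM control:
awk '$NF == "invalid" { print $1 }' /tmp/vxdisk.out
```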
What is not evident is that slice 7 of the disk c2t16d0 is mounted with a
file system. You must plan for all existing data before proceeding with
disk drive encapsulation or initialization.
Note – When you use VxVM software such as the vxdiskadm utility to
manage disk drives, the software takes extensive steps to detect any
existing data structures.
As shown in Figure 3-2 on page 3-25, a physical disk drive that has been
initialized by VxVM is divided into two sections called the private region
and the public region:
● The private region is used for configuration information.
● The public region is used for data storage.
By default, VxVM uses partitions 3 and 4 for the private and public
regions.
Private Region
VxVM requires a single cylinder for the private region. On larger drives,
one cylinder can store more than one Mbyte.
The public region is configured to be the rest of the physical disk drive.
The volume table of contents (VTOC) listing for a freshly initialized
VxVM disk drive is shown in the following example. Some output is
omitted for clarity.
# prtvtoc /dev/dsk/c2t22d0s2
...
...
* First Sector Last
* Partition Tag Flags Sector Count Sector
2 5 01 0 17682084 17682083
3 15 01 0 3591 3590
4 14 01 3591 17678493 17682083
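In the listing above, the private region (partition 3, tag 15) occupies 3591 sectors. At 512 bytes per sector, that works out to roughly 1.75 Mbytes, consistent with the single-cylinder, more-than-one-Mbyte figure mentioned earlier:

```shell
# Private region size from the prtvtoc listing: 3591 sectors of 512 bytes each.
awk 'BEGIN { printf "%.2f Mbytes\n", 3591 * 512 / (1024 * 1024) }'
```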
The disk header is a block stored in the private region of a disk drive that
defines the following important properties of the disk drive:
● Current host ownership of the disk drive
When a disk drive is part of a disk group that is in active use by a
particular host, the disk drive is stamped with that host’s host ID
(host name). If another VxVM system attempts to access the disk
drive, its VxVM daemons detect that the disk drive has a
nonmatching host ID (host name) and disallow access until the first
system releases the disk drive.
● Disk identifier
A 64-byte unique identifier is assigned to a physical disk drive when
its private region is initialized.
Kernel Log
The kernel log is kept in the private region on the disk drive and is
written by the VxVM kernel. The kernel log contains records describing
certain types of actions, such as transaction commits, plex detaches
resulting from I/O failures, dirty-region log failures, the first write to a
volume, and volume close information. The kernel log is used after a
crash or clean reboot to recover the state of the disk group just prior to the
crash or reboot.
Generally, you do not encapsulate disk drives with existing data unless
you want to increase availability or performance of the data through the
use of software RAID structures.
When a disk with existing data structures (such as a mounted file system)
is encapsulated, VxVM analyzes the disk structure and takes measures to
preserve all existing data and the disk partition map found on block zero.
Before encapsulation, slice 7 holds a mounted file system (some prtvtoc
output is omitted):
*                        First    Sector    Last
* Partition Tag Flags    Sector   Count     Sector
  2          5   01      0        17682084  17682083
  7          0   00      0        2100735   2100734  /Test
After encapsulation, slice 3 (tag 14) becomes the public region spanning
the whole drive, and slice 4 (tag 15) becomes the private region, placed at
the end of the drive:
*                        First    Sector    Last
* Partition Tag Flags    Sector   Count     Sector
  2          5   01      0        17682084  17682083
  3          14  01      0        17682084  17682083
  4          15  01      17674902 7182      17682083
Although the format utility displays all known storage devices, you must
use the vxdmpadm command as shown to display the storage types and
enclosure names for use in device exclusion.
# vxdmpadm listctlr all
CTLR-NAME ENCLR-TYPE STATE ENCLR-NAME
=====================================================
c0 Disk ENABLED Disk
c2 SENA ENABLED SENA0
c3 SENA ENABLED SENA1
c4 T3 ENABLED T30
c5 T3 ENABLED T31
The disk array type field (datype) displayed during the system boot
process equates to the ENCLR-TYPE field displayed in the output of the
vxdmpadm command.
Limited Exclusion
Caution – The three manual exclusion files do not prevent other VxVM
commands from seeing and operating on the storage devices. You can still
see and perform operations on all the devices using VxVM commands,
such as vxdg, vxdisk, vxdisksetup, and vxassist.
The manual exclusion files are used to protect specific storage devices
from being initialized or encapsulated after an initial software installation.
The exclusion files are also useful to protect specific SAN storage devices.
You can remove or rename the manual exclusion files after you complete
the initialization or encapsulation process.
The following example shows the format of each of the different
manual exclusion files.
# more /etc/vx/enclr.exclude
SENA0
# more /etc/vx/cntrls.exclude
c4
# more /etc/vx/disks.exclude
c0t0d0
c0t1d0
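The three exclusion files are plain text lists, one entry per line. A minimal sketch of creating them follows; it writes to a scratch directory rather than /etc/vx so that it does not touch a live configuration (on a real system the files must live in /etc/vx):

```shell
# Scratch stand-in for /etc/vx -- the real files belong in /etc/vx.
VXDIR=$(mktemp -d)
echo 'SENA0' > "$VXDIR/enclr.exclude"             # exclude by enclosure name
echo 'c4'    > "$VXDIR/cntrls.exclude"            # exclude by controller
printf 'c0t0d0\nc0t1d0\n' > "$VXDIR/disks.exclude"  # exclude individual disks
cat "$VXDIR/disks.exclude"
```

Remember that these files can be removed or renamed once the initial initialization or encapsulation work is complete.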
If you try to initialize all attached storage using the vxdiskadm utility, you
see exclusion messages similar to the following:
Select disk devices to add:[<pattern-list>,all,list,q,?]
all
/dev/vx/rdmp/c4t1d0s2
/dev/vx/rdmp/c0t0d0s2 /dev/vx/rdmp/c0t1d0s2
/dev/vx/rdmp/SENA0_0s2 /dev/vx/rdmp/SENA0_1s2
/dev/vx/rdmp/SENA0_2s2 /dev/vx/rdmp/SENA0_3s2
/dev/vx/rdmp/SENA0_4s2 /dev/vx/rdmp/SENA0_5s2
/dev/vx/rdmp/SENA0_6s2
Global Exclusion
There are two additional files, /etc/vx/vxvm.exclude and
/etc/vx/vxdmp.exclude, that should not be manually edited. They are
modified indirectly using the vxdiskadm utility option 17, Prevent
multipathing/Suppress devices from VxVM’s view.
# vxdiskadm
....
....
Select an operation to perform: 17
Exclude Devices
Menu: VolumeManager/Disk/ExcludeDevices
VxVM INFO V-5-2-1239
....
# reboot
....
....
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:none - - online invalid
c0t1d0s2 auto:none - - online invalid
c2t1d0s2 auto:none - - online invalid
c2t3d0s2 auto:none - - online invalid
c2t5d0s2 auto:none - - online invalid
c2t16d0s2 auto:none - - online invalid
c2t18d0s2 auto:none - - online invalid
c2t20d0s2 auto:none - - online invalid
c2t22d0s2 auto:none - - online invalid
c3t32d0s2 auto:none - - online invalid
c3t33d0s2 auto:none - - online invalid
c3t35d0s2 auto:none - - online invalid
c3t37d0s2 auto:none - - online invalid
c3t50d0s2 auto:none - - online invalid
c3t52d0s2 auto:none - - online invalid
c5t1d0s2 auto:none - - online invalid
# more /etc/vx/vxvm.exclude
exclude_all 0
paths
#
controllers
c4 /pci@6,4000/pci@4/SUNW,qlc@4/fp@0,0
#
product
#
pathgroups
#
Note – The format utility still sees the c4 controller and can use it
normally.
As shown in Figure 3-3, you can use the VEA client software to:
● Run the client software remotely on an administration system
● Run the client software on the VxVM server and display the VEA
GUI locally
● Run the client software on the VxVM server and display the VEA
GUI remotely
Figure 3-3: A remote system runs the VEA client software and connects
through the network to the VM server, which manages its disks, disk
groups, and volumes.
The VRTSob and VRTSobgui packages are both installed on the VxVM
server, which includes the VEA client interface and the VEA server
software. You can start the client software on the server. However, the
VEA client software package, VRTSobgui, is more commonly installed on
a remote administration workstation.
You can manually stop and start the VEA server software on the VxVM
server using the /etc/init.d/isisd script with its stop and start options.
Table 3-2 shows the options, and their functions, that you can use to
control the /opt/VRTSob/bin/vxsvc program directly.
You can also use the startup options, and their functions, shown in
Table 3-3 as needed.
If you enable the Remember password feature, the next time you connect
you select the hostname from the pull-down menu. The Username and
Password fields are automatically configured.
Note – The Actions menu entries change according to the type of objects
being displayed in the grid area. Some of the toolbar icons’ functions also
change as different objects are displayed in the grid area.
The menu bar in the VEA GUI has the functions shown in Figure 3-6. The
Actions menu entries change according to the type of objects currently
displayed in the grid area.
The tear-off menu opens a separate window relating to the tabs in the
current grid area display. The tear-off feature is useful when analyzing
multiple aspects of a grid area display.
Toolbar Buttons
The toolbar, shown in Figure 3-7, provides direct access to general VEA
functions. Some of the toolbar selections change according to the type of
objects being displayed in the grid area.
All the toolbar functions are available elsewhere in menus, but the toolbar
offers a convenient way to access commonly used functions.
The object tree window, shown in Figure 3-8, has an icon for every type of
VEA object that is referenced during VxVM administration. The objects
are arranged in a hierarchy starting with VxVM servers at the top.
You can expand small nodes on the object tree branches to display
detailed information about the node’s subject.
When you select an object tree icon with the first mouse button, expanded
configuration information about that object appears in the grid area.
The grid area display, shown in Figure 3-9, results from selecting
Enclosure in the object tree.
In the previous example, the object tree pane has been widened using its
resizing bar, and the message area has been fully collapsed using its
resizing arrow so that it is no longer visible.
Modifying Preferences
The Preferences window, accessed by selecting Preferences from the Tools
menu, contains three tabs.
● Appearance tab (shown in Figure 3-11) – Used to modify the general
look and feel of the VEA GUI.
Each type of grid display has data tabs associated with it. You can use the
tabs to display different information related to the current grid display.
By default, when a disk group is displayed in the grid area, the Disks tab
is active. In the example shown in Figure 3-12, the Volumes tab is selected
so that all volumes associated with the sdgA disk group are displayed.
In the example shown in Figure 3-13, a search was made for all volumes
at least 1 Gbyte in size.
The new error message numbers are grouped into two sections as follows:
● Errors V-5-0-2 through error V-5-0-386
● Errors V-5-1-90 through error V-5-1-5929
Preparation
If you are installing the VxVM software on a central server, your
instructor must perform the installation as a demonstration.
1. Ask your instructor for the location of the VxVM software.
VxVM location: ______________________________
2. If the lab system does not have certain Sun storage array models
attached, you are asked for a license string during the VxVM
VRTSvlic package installation. Ask your instructor for a
demonstration license string.
Demo license: _____________________________
The Adobe Acrobat Reader program must be available to the students to examine the PDF versions of the
VERITAS documents.
Discuss the classroom configuration and the process students should follow until everyone is clear about how
to proceed.
Divide students into groups depending on how many systems you have available for VxVM installation.
You can install and initialize the VxVM software yourself while students watch. It is up to you to determine
which method works best in your classroom configuration.
The answer is c.
The answer is d.
The answer is c.
6. What is the default amount of space that VxVM requires for a disk
drive’s private region?
a. 4800 sectors
b. 2048 sectors
c. 1648 sectors
d. 4096 sectors
e. 1024 sectors
The answer is b.
The answer is when you want to preserve existing data on the disk drive.
The answer is c.
The answer is c.
6. During the installation, answer yes to all questions unless you feel
there are serious problems.
If there are problems, ask your instructor for advice.
7. After the package installation is completed, log out and log back in
again.
The installation process usually alters search-path values.
8. Use the vxinstall utility to initialize VxVM and answer the
following key questions:
Do you want to use enclosure based names for all disks?
[y,n,q,?] (default: n) n
Do you want to setup a system wide default disk group?
[y,n,q,?] (default: y) n
# echo $MANPATH
/usr/man:/opt/VRTS/man:/opt/VRTSvlic/man
The correct syntax for .profile file entries in the Bourne shell
environment is:
PATH=$PATH:/opt/VRTS/bin:/opt/VRTSalloc/bin:/opt/VRTSvlic/bin:/opt/VRTSvmpro/bin:/etc/vx/bin
MANPATH=/usr/man:/opt/VRTS/man:/opt/VRTSvlic/man
export PATH MANPATH
The correct syntax for .cshrc file entries in the C shell environment is:
set path = ($path /opt/VRTS/bin /opt/VRTSalloc/bin
/opt/VRTSvlic/bin /opt/VRTSvmpro/bin /etc/vx/bin)
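After logging back in, a quick way to confirm the new search paths took effect is to check PATH directly. This is a hedged sketch: the PATH assignment below mirrors a subset of the .profile entry above so the check is self-contained.

```shell
# Append the VxVM directories as the .profile entry does, then verify
# each one is really present in PATH.
PATH=$PATH:/opt/VRTS/bin:/opt/VRTSvlic/bin:/etc/vx/bin
for d in /opt/VRTS/bin /etc/vx/bin; do
  case ":$PATH:" in
    *:"$d":*) echo "$d present" ;;
    *)        echo "$d MISSING" ;;
  esac
done
```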
# pkgadd -d . VRTSobgui
2. View the contents of the PDF manuals using the Adobe Acrobat
Reader (acroread) program
The hypertext links and search features of the Adobe Acrobat Reader
are useful when you are looking for specific information in the
manuals. Use the Control-F sequence to enable the Adobe Acrobat
Find window.
Exercise Summary
Manage the discussion here based on the time allowed for this module, which was given in the “About This
Course” module. If you find you do not have time to spend on discussion, then just highlight the key concepts
students should have learned from the lab exercise.
● Experiences
Ask students what their overall experiences with this exercise have been. You might want to go over any
trouble spots or especially confusing areas at this time.
● Interpretations
Ask students to interpret what they observed during any aspects of this exercise.
● Conclusions
Have students articulate any conclusions they reached as a result of this exercise experience.
● Applications
Explore with students how they might apply what they learned in this exercise to situations at their workplace.
Objectives
Upon completion of this module, you should be able to:
● Describe the function of VxVM disk groups
● List disk group administrative operations including:
● Initialize disk drives for VxVM use
● Create disk groups
● Add and remove disk drives for a disk group
● Import and deport disk groups
● Destroy a disk group
● Rename VxVM disk drives
● Administer disk groups using the vxdiskadm utility
● Administer disk groups using command-line programs
● Administer disk groups using the VEA GUI
4-1
Copyright 2004 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision D
VxVM Disk Group Functions
Easier Administration
Disk groups enable you to group disk drives into logical collections for
administrative convenience. You can group them according to department
or application. For example, you can create separate disk groups for sales,
finance, and development.
You can move a disk group and its components as a unit from one host
machine to another. This feature provides higher availability of the data:
if one system fails, another system running VxVM can import the failed
system’s disk group and provide access to it. The sequence is as follows:
● The first system deports the disk group. Deporting a disk group
disables access to that disk group by the first system.
● The second system imports the disk group and starts accessing all of
the disk drives in the disk group.
A host can import only disk groups with unique names. Therefore, all
disk groups on all systems should be given unique names, with the
exception of the rootdg disk group.
When you bring a disk drive under VxVM control, you can:
● Add it to a new or existing disk group
● Add it to the free-disk pool
The easiest operation is to add a disk drive to the free-disk pool. The
vxdisksetup command repartitions the disk drive into VxVM format,
and then a blank header is written to the disk drive.
If you add a disk drive to a disk group, the disk drive is assigned a
unique media name and it is associated with a disk group object. This
information is then written into the blank header on the disk drive.
Unless you intervene, the default media names that are assigned to disk
drives are based on either the disk group name or the logical device path
to the disk drive.
Default disk group-based media names are similar to dgX01 or DGa04. The
default device path-based media names assigned by some command-line
programs are similar to c3t0d16s2 or c5t4d0s2. Device path-based
media names can lead to confusing status and configuration listings.
Each disk group is owned by a single host system. The current ownership
is written into all configuration records. Many of the disk drives in the
disk group have a copy of the configuration record.
A disk group and all its components can be moved as a unit from one
host system to another. Usually, both host systems are connected to the
same dual-ported storage arrays.
As shown in Figure 4-1, even though a second host is attached to the same
storage array, access is allowed only to the current owner of the disk
group. A disk group can be deported from one host and imported by a
different host, but this is generally used as an emergency solution to a
catastrophic host system failure. When a disk group is imported by a
different host, the name of the new host is written into the disk-based
VxVM configuration records.
Figure 4-1: Host 1 and Host 2 are attached to the same storage array; the
volumes in the disk group are accessible to Host 1 only, and access by
Host 2 is blocked.
When a shared disk group is imported by any of the attached nodes, the
name of the Sun Cluster software cluster (cluster_name) is written into
the disk-resident VxVM configuration records, and the disk group is
automatically accessible by all of the attached nodes.
The related figure shows Host 1 and Host 2 attached through ports A0,
A1, B0, and B1 to a shared disk group and its volumes; the disk group is
owned by a cluster_name.
The CDS feature is not licensed by Sun, but by default, disk drives are
initialized in the CDS format.
The new cdsdisk partition map allocates all disk space to slice 7 as
shown in the following example.
Part Tag Flag Cylinders Size Blocks
0 unassigned wm 0 0 (0/0/0) 0
1 unassigned wm 0 0 (0/0/0) 0
2 backup wu 0 - 4923 8.43GB (4924/0/0) 17682084
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 - wu 0 - 4923 8.43GB (4924/0/0) 17682084
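In this cdsdisk layout, slices 2 and 7 both span all 4924 cylinders, 17682084 blocks. At 512 bytes per block, that matches the 8.43-Gbyte size shown:

```shell
# Slice size from the format listing: 17682084 blocks of 512 bytes each.
awk 'BEGIN { printf "%.2f GB\n", 17682084 * 512 / (1024 ^ 3) }'
```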
Technically, the cdsdisk format does not interfere with standard VxVM
operation. However, if you are not comfortable using the cdsdisk format
at your site, you can disable it so that the vxdiskadm utility uses the
sliced format when initializing disk drives.
Before initializing or encapsulating any disk drives for VxVM use, use the
vxdiskadm utility option 22, Change/Display the default disk
layouts, to modify the default disk format and private region size. The
changes are stored in the /etc/default/vxdisk file. Use the following
file format:
# more /etc/default/vxdisk
format=sliced
privlen=2048
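The defaults file is a simple key=value list, so it is easy to create or query from a script. The sketch below uses a scratch file as a stand-in; on a real system the settings belong in /etc/default/vxdisk:

```shell
# Scratch stand-in for /etc/default/vxdisk.
DEFAULTS=$(mktemp)
cat <<'EOF' > "$DEFAULTS"
format=sliced
privlen=2048
EOF
# Read a single setting back:
awk -F= '$1 == "format" { print $2 }' "$DEFAULTS"
```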
Functional Overview
The title displayed when the vxdiskadm utility starts up is Volume
Manager Support Operations. The vxdiskadm utility performs a wide
range of support functions, but also offers assistance in performing a
number of common administrative tasks.
Note – For clarity, many informational messages are omitted from the
previous example.
Remove a disk
Menu: VolumeManager/Disk/RemoveDisk
Note – Only selected options for each command are described in this
module. When appropriate, other options are described in later modules.
During the VxVM software installation and initialization, you might see
error messages, such as:
VxVM:vxconfigd: WARNING: Disk c3t35d0 in group hanfs:
Disk device not found
VxVM:vxconfigd: WARNING: Disk c2t18d0 in group hadbms:
Disk device not found
These errors can indicate that there are disk drives that still contain VxVM
configuration records from a previous installation. You can clear these
disk drives and return them to an uninitialized state by using the
vxdiskunsetup command, as shown in the following example:
# /etc/vx/bin/vxdiskunsetup -C c2t3d0
The vxdiskunsetup command will not clear a disk drive if the VxVM
configuration records indicate it is imported by some other host. The -C
option forces the de-partitioning of the disk drive in such a case. The disk
drives are returned to standard Solaris OS partitioning.
Note – You must initialize a disk drive before it can be added to a new or
existing disk group using the vxdg command.
To create a new disk group using the vxdg command, you must furnish
the disk drive logical access name (accessname) of at least one disk drive
to be added to the disk group. The VxVM accessname is essentially the
logical path to the disk drive in the form: c3t4d0. You should also specify
a media name for the disk drive. If you do not specify a media name, it
defaults to the accessname. The following shows a typical session to
initialize a new disk drive and add it to a new disk group.
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:none - - online invalid
c0t1d0s2 auto:none - - online invalid
c2t1d0s2 auto:none - - online invalid
c2t3d0s2 auto:none - - online invalid
c2t5d0s2 auto:none - - online invalid
c2t16d0s2 auto:none - - online invalid
c2t18d0s2 auto:none - - online invalid
c2t20d0s2 auto:none - - online invalid
c2t22d0s2 auto:none - - online invalid
c3t32d0s2 auto:none - - online invalid
c3t33d0s2 auto:none - - online invalid
c3t35d0s2 auto:none - - online invalid
c3t37d0s2 auto:none - - online invalid
c3t50d0s2 auto:none - - online invalid
c3t52d0s2 auto:none - - online invalid
# vxdisksetup -i c2t1d0
# vxdisksetup -i c2t3d0
# vxdg list
NAME STATE ID
newDG enabled 1065465185.43.ns-east-104
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:none - - online invalid
c0t1d0s2 auto:none - - online invalid
c2t1d0s2 auto:sliced ndg-01 newDG online
c2t3d0s2 auto:sliced ndg-02 newDG online
c2t5d0s2 auto:sliced ndg-03 newDG online
c2t16d0s2 auto:sliced ndg-04 newDG online
c2t18d0s2 auto:none - - online invalid
c2t20d0s2 auto:none - - online invalid
c2t22d0s2 auto:none - - online invalid
c3t32d0s2 auto:none - - online invalid
c3t33d0s2 auto:none - - online invalid
c3t35d0s2 auto:none - - online invalid
c3t37d0s2 auto:none - - online invalid
c3t50d0s2 auto:none - - online invalid
c3t52d0s2 auto:none - - online invalid
The removed disk drive is still initialized and is available for future use. It
is in the free-disk pool and shows a status of auto:sliced and online
with no DISK or GROUP entry.
# vxdg list
NAME STATE ID
sdga enabled 1064619733.28.ns-east-104
pdga enabled 1065123027.40.ns-east-104
When a disk group is deported, the host ID stored on all disk drives in the
disk group is cleared (unless a new host ID is specified with -h).
Therefore, the disk group is not reimported automatically when the
system is rebooted.
The disk group can be deported with the host ID unchanged or you can
change the host ID to another system during the deport operation.
Use the vxdg import command option to import a disk group again. For
example:
# vxdg import newDG
If you forget the name of a deported disk group, you can use the vxdisk
command to identify currently deported disk groups. The following
example shows the names of currently deported disk groups enclosed in
parentheses.
# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:none - - online invalid
c0t1d0s2 auto:none - - online invalid
c2t1d0s2 auto:cdsdisk a5kdg01 sdga online nohotuse
c2t3d0s2 auto:sliced a5kdg02 sdga online nohotuse
c2t5d0s2 auto:simple a5kdg03 sdga online nohotuse
c2t16d0s2 auto:sliced - (newDG) online
c2t18d0s2 auto:sliced - - online
c2t20d0s2 auto:sliced - - online
c2t22d0s2 auto:none - - online invalid
c3t32d0s2 auto:cdsdisk pdga01 pdga online
c3t33d0s2 auto:cdsdisk pdga02 pdga online
c3t35d0s2 auto:cdsdisk pdga03 pdga online
c3t37d0s2 auto:sliced - (newDG) online
c3t50d0s2 auto:none - - online invalid
c3t52d0s2 auto:sliced - - online
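Because deported group names appear in parentheses in the GROUP column, they are easy to extract with a script. A small sketch, run against a few sample lines copied from the listing above (on a live system you would pipe `vxdisk -o alldgs list` directly):

```shell
# Sample `vxdisk -o alldgs list` output lines.
cat <<'EOF' > /tmp/alldgs.out
c2t16d0s2 auto:sliced  -       (newDG) online
c2t18d0s2 auto:sliced  -       -       online
c3t32d0s2 auto:cdsdisk pdga01  pdga    online
EOF
# GROUP is field 4; deported groups are the parenthesized entries.
awk '$4 ~ /^\(.*\)$/ { gsub(/[()]/, "", $4); print $4 }' /tmp/alldgs.out | sort -u
```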
You can also use the vxdiskadm utility option 8, Enable access to
(import) a disk group, to identify deported disk groups. The
following is an excerpt of the resulting output.
Select disk group to import [<group>,list,q,?] (default:
list) list
# vxdg list
NAME STATE ID
sdga enabled 1064619733.28.ns-east-104
pdga enabled 1065123027.40.ns-east-104
You can use the following vxedit command to rename VxVM disk
drives:
# vxedit -g sdga rename a5kdg01 sdga01
# vxedit -g sdga rename a5kdg02 sdga02
# vxedit -g sdga rename a5kdg03 sdga03
If there are mounted volumes associated with the disk group, the
disk group destroy operation fails, as shown in Figure 4-13.
2. Click the third mouse button on the highlighted disk, and then select
Rename from the pop-up menu.
3. Complete the Rename Disk form, as shown in Figure 4-15.
Preparation
If the tasks in this exercise are performed by small groups using disk
drives residing on a central VxVM server, each group must take care to
not interfere with another group’s storage resources and structures.
Ask your instructor to provide two unique code letters for your
workgroup (A and B, C and D, E and F, and so on).
Workgroup code letters: dg ___ dg ___
Copy the logical paths to six disk drives for your work group from the
information in Module 2, ‘‘Managing Data” in ‘‘Task 9 – Selecting Disk
Drives for Use’’ on page 2-32.
Disk: _______________ Disk: _______________
Disk: _______________ Disk: _______________
Disk: _______________ Disk: _______________
Assign each workgroup’s two code letters (A/B, C/D, E/F) so they can create two unique disk group names,
such as dgA and dgB, or dgE and dgF. A workgroup consists of two or more students working with six disk
drives from one keyboard on an administration workstation.
An antiquated, but useful, management tool is to have each workgroup write the logical paths of their
assigned disk drives on a 3 by 5 card and tape the card to their display monitor.
Many of the tasks are performed twice: the first time using the VEA GUI
and the second time using command-line programs. For most tasks, you
must destroy the structures before creating them again. Destroying and
deleting structures is part of regular VxVM administration.
The answer is c.
The answer is b.
The answer is d.
MANPATH=/usr/man:/opt/VRTS/man:/opt/VRTSlic/man
3. If you are working from a remote administration system, log out of
the VxVM server.
4. On the remote administration system, complete the following steps:
a. Type the env shell command.
b. Verify that the /opt/VRTS/bin and /opt/VRTSob/bin
directories are part of the PATH variable.
c. Verify that the /opt/VRTS/man directory is part of the MANPATH
variable.
Note – You must substitute the VxVM accessname of your disk drives.
Do not proceed until all your assigned disk drives are uninitialized.
To initialize three of your assigned disk drives using the VEA GUI,
complete the following steps:
1. Expand the VxVM server node in the object tree and click Disks.
2. Select three of your assigned disk drives in the grid area by holding
down the Control key while clicking them with the mouse button.
3. Click the third mouse button on one of your highlighted disk drives,
and select Initialize Disk in the pop-up menu.
4. In the Initialize Disk form, click Yes To All.
After a short delay, three of your assigned disk drives should appear
in the grid area with a status of Free.
5. Complete the following steps:
a. Use the vxdiskunsetup command to return each of your
assigned disk drives to an uninitialized state.
b. Replace the accessname variable in the following command
with the address of your disk drives (for example, c4t3d0).
# vxdiskunsetup -C accessname
6. Verify that the status of your assigned disk drives is once again
online invalid.
# vxdisk list
To create a new disk group using the VEA GUI, complete the following
steps:
1. Click New Group in the Toolbar.
The initial New Disk Group Wizard form appears.
2. Click the Do not show this page next time box, and then click
Next.
The disk selection form appears.
3. Complete the disk selection form as follows:
a. Type the name of your new disk group.
b. In the Available disk column, select three of your assigned disk
drives, and then click Add.
c. Do not enter disk names.
d. Click Next.
The organization principle form appears.
Use the vxdg command to create a second disk group that contains your
three remaining assigned disk drives. Name the disk group according to
your second workgroup letter. For example, if your work group letters are
A and B, then this second disk group should be named dgB.
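The creation step itself uses the vxdg init form of the command. A
sketch, assuming dgB as the group name and c2t4d0 as the access name of
your first remaining disk drive (substitute your own values):

```
# vxdg init dgB dgB-01=c2t4d0
```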
2. Add the two remaining assigned disk drives to the new disk
group.
# vxdg -g dgname adddisk medianame=accessname
For example:
# vxdg -g dgB adddisk dgB-02=c2t5d0 dgB-03=c2t6d0
3. Use the vxprint command to verify the status of your new disk
group.
4. Use the vxdg rmdisk command to remove one of the disk drives
from your new disk group.
# vxdg -g dgname rmdisk medianame
For example: vxdg -g dgB rmdisk dgB-03
5. Use the vxdg adddisk command to add the same disk drive back
into your disk group.
To import and deport disk groups using the VEA GUI, complete the
following steps:
1. Display disk groups in the grid area.
2. Click the name of one of your disk groups and select Deport Disk
Group from its pop-up menu.
The Deport Disk Group form appears.
Note – You can rename a disk group during a deport operation. You can
also assign ownership to a different host. You might do this if you needed
to take the current host down for maintenance and wanted a different
host system to manage the disk group for a while. You might also rename
the disk group if the second host already had a disk group with the same
name.
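From the command line, the same deport-time options can be sketched as
follows, per the vxdg man page (newname and otherhost are placeholders):

```
# vxdg -n newname deport dgname     (rename the disk group while deporting)
# vxdg -h otherhost deport dgname   (assign ownership to a different host)
```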
3. Click OK.
To destroy a disk group using the VEA GUI, complete the following steps:
1. Display Disk Groups in the grid area.
2. Click one of your disk groups and select Destroy Disk Group from
its pop-up menu.
3. Click New Group in the Toolbar and recreate the disk group you
destroyed.
To rename disk drives using the VEA GUI, complete the following steps:
1. Display the disk drives from one of your disk groups in the grid
area.
2. Click one of the disk drives and select Rename Disk from its pop-up
menu.
3. Enter a unique media name for your disk drive.
Using the vxedit command, restore the original media name of the disk
drive you modified in “Using the VEA GUI to Rename Disk Drives” on
page 4-40.
# vxedit rename oldname newname
For example: vxedit rename zx12 dgB-03.
Note – Substitute your workgroup codes for the X and Y in dgX and dgY.
Exercise Summary
Manage the discussion here based on the time allowed for this module, which was given in the “About This
Course” module. If you find you do not have time to spend on discussion, then just highlight the key concepts
students should have learned from the lab exercise.
● Experiences
Ask students what their overall experiences with this exercise have been. You might want to go over any
trouble spots or especially confusing areas at this time.
● Interpretations
Ask students to interpret what they observed during any aspects of this exercise.
● Conclusions
Have students articulate any conclusions they reached as a result of this exercise experience.
● Applications
Explore with students how they might apply what they learned in this exercise to situations at their workplace.
Objectives
Upon completion of this module, you should be able to:
● Interpret volume structure listings
● Describe volume planning activities
● Create volumes using the VEA GUI
● Create volumes using the vxassist command
● Modify volume access attributes
● Add file systems to existing volumes
● Add and remove volume logs
● Use the VEA GUI to analyze volume structures
5-1
Copyright 2004 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision D
Interpreting Volume Structure Listings
Subdisks
A subdisk is a set of contiguous disk blocks. A subdisk must reside
entirely on a single physical disk drive. The public region of a disk drive
in a disk group can be divided into one or more subdisks. The subdisks
cannot overlap or share the same portions of a public region.
The smallest possible subdisk is a single sector (512 bytes), and the largest
subdisk is the entire VxVM public region.
By default, subdisks are named based on the VxVM media name of the
disk drive on which they reside. This relationship is shown in Figure 5-1.
Figure 5-1 Physical disks c3t12d0 and c4t33d0 presented as VxVM disks
disk01 and disk02, which are divided into subdisks disk01-01, disk01-02
and disk02-01, disk02-02, disk02-03.
Ask why spanning storage arrays might be a good idea. One answer is striping for performance or mirroring
for availability.
Plexes
The VxVM application uses subdisks to build virtual objects called plexes.
A plex consists of one or more subdisks located on one or more physical
disk drives. Figure 5-2 shows the relationship of subdisks to plexes in a
disk group named DGa.
Figure 5-2 Subdisks from VxVM disks disk01 and disk02 assembled into
plexes vol01-01 and vol01-02.
Volumes
A volume consists of one or more plexes. By definition, a volume with
two plexes is mirrored. Figure 5-3 shows the relationship of plexes in a
mirrored volume in a disk group named DGa.
Figure 5-3 Plexes vol01-01 and vol01-02 combined into the mirrored
volume vol01.
Three mirrors is usually as many as most customers ever have in critical applications.
The following vxdg output shows the amount of available disk space. The
OFFSET column represents the amount of space currently used and the
LENGTH column represents the amount of free space.
# vxdg -g newDG free
DISK DEVICE TAG OFFSET LENGTH FLAGS
newDG01 c2t16d0s2 c2t16d0 8388608 9291168 -
newDG02 c2t18d0s2 c2t18d0 0 17679776 -
newDG03 c2t20d0s2 c2t20d0 0 17679776 -
newDG04 c3t32d0s2 c3t32d0 4194304 13485472 -
newDG05 c3t33d0s2 c3t33d0 0 17679776 -
newDG06 c3t35d0s2 c3t35d0 0 17679776 -
Volume Planning
Creating volume structures is easy to do. It is also easy to make mistakes
unless you understand each aspect of the volume creation process.
Volume Distribution
A common mistake is to place all the disk drives in a single disk group.
The configuration records for a disk group cannot contain information for
more than 4096 objects. Each volume, plex, subdisk, and disk drive is
considered to be an object and requires 256 bytes of space in the private
region. The default private region length is 2048 blocks.
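The 4096-object limit follows from these numbers, treating the private
region as record space (the framing used above): a 2048-block region at
512 bytes per block holds 1 Mbyte, and at 256 bytes per configuration
record that is 4096 records. As shell arithmetic:

```shell
# 2048 blocks x 512 bytes/block, divided by 256 bytes per record
echo $((2048 * 512 / 256))
```

which prints 4096.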
Another reason for organizing disk drives into separate disk groups is
that you might want to deport a disk group and import it to another
connected host system. This action can be part of a disaster recovery plan
or a load balancing measure.
You can design a disk group so that it is better for particular tasks. Each
disk group shown in Figure 5-4 has three disk drives, and each disk drive
is in a different storage array.
Figure 5-4 Disk groups DGa (d1, d2, d3), DGb (d4, d5, d6), and DGc
(d7, d8, d9), each spanning controllers c0, c1, and c2 so that every
disk drive in a group is in a different storage array.
Disk groups organized in this manner are good for creating striped
volume types (such as RAID 5) and for mirrored volumes. The most
important feature is that each disk drive in the disk group is in a separate
enclosure and on a different controller.
Note – Exercise care with disk groups that span storage arrays. You must
be sure that the loss of an entire array does not disrupt both mirrors in a
volume or more than one column in a RAID-5 volume.
Another disk group structure, such as the one shown in Figure 5-5, would
be better used with simple concatenated volumes.
Figure 5-5 An alternative organization in which each disk group is
confined to a single controller and storage array.
If the volumes are large, static, read-only structures that need only a
periodic backup to tape, they do not need any higher level of reliability or
availability.
Note – The VEA GUI New Volume Wizard also has limited space research
capabilities during new volume creation.
If you do not specify disk resources when creating volumes, the VxVM
software automatically finds portions of unused disk space and assembles
them into a volume. This action can lead to a disorganized structure and
create poor performance for some volume types.
Rather than letting VxVM find space anywhere within a disk group, it is
frequently better to direct VxVM to use a particular disk drive for a new
volume.
The disk group shown in Figure 5-6 can be used in several different ways
depending on the type of volume structures you require:
● A RAID-5 volume might use disk drives d1, d2, and d3
● A concatenated volume might use disk drives d1, d4, and d7.
● A mirrored and concatenated volume might use disk drives d1, d4,
and d7 for one mirror and disk drives d3, d6, and d9 for a second
mirror.
Figure 5-6 A single disk group, DGa, containing disk drives d1 through
d9 spread across controllers c0, c1, and c2.
The vxdg command gathers a rough estimate of available disk space. The
following is an example of using vxdg on a 9-Gbyte disk drive.
# vxdg -g newDG free
DISK DEVICE TAG OFFSET LENGTH FLAGS
ndg-01 c2t1d0s2 c2t1d0 0 17674902 -
ndg-02 c2t3d0s2 c2t3d0 0 17674902 -
ndg-03 c3t32d0s2 c3t32d0 0 17674902 -
ndg-04 c3t33d0s2 c3t33d0 0 17674902 -
After creating a 6-Gbyte mirrored volume using the ndg-01 and ndg-03
disks, the following disk space is available:
# vxdg -g newDG free
DISK DEVICE TAG OFFSET LENGTH FLAGS
ndg-01 c2t1d0s2 c2t1d0 12586455 5088447 -
ndg-02 c2t3d0s2 c2t3d0 0 17674902 -
ndg-03 c3t32d0s2 c3t32d0 12586455 5088447 -
ndg-04 c3t33d0s2 c3t33d0 0 17674902 -
This is a basic concatenation that uses almost all of the available space, that is, 22 Gbytes.
RAID-5 volume column size is limited by the size of the smallest available column, which is 5,088,447
blocks in these examples. Additionally, approximately one disk drive’s worth of space is lost to parity.
The vxassist maxsize command is not usually needed unless you have
especially limited disk drive space and need to maximize its use. It is a
good practice to leave a small amount of space for log placement.
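When you do need it, the query form can be sketched as follows (the
disk group and layout are examples):

```
# vxassist -g newDG maxsize layout=raid5
```

which reports the largest volume of that layout that the disk group's
free space can hold.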
The New Volume Wizard layout choices are: Concatenated, Striped,
Mirrored Concatenation, Mirrored Stripe, and RAID 5.
The Mirror Across and Stripe Across check boxes let you choose how you
want stripes or mirrors distributed across your storage configuration. The
Mirror Across Tray applies to specific storage array models that have
separate disk drive trays in a single array. Unless you later specify a
striped or mirrored volume structure, these features do not perform any
useful function.
The Ordered check box enables an advanced function that uses the
specified storage first to concatenate, then to form columns, and
finally to create mirrors. Ordered allocation is an advanced subject
presented later in this course.
The New File System Details button enables newfs and mkfs option entry.
The Mount File System Details button allows volume ownership and
protection entry.
Note – The VEA server logs the commands that perform all functions in
the /var/vx/isis/command.log file. The log file is a useful learning tool.
The most basic form of the vxassist command, which creates a volume,
is:
# vxassist make vol02 50m
This form of the vxassist command is more explicit and guarantees that
the following are true:
● The disk group that is used is dg2.
● The name of the volume is newvol.
● The size of the volume is 2 Gbytes.
● This is a RAID-5 volume without a log and with three columns.
● All disk space comes from disk01, disk02, and disk03.
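Under those constraints, the explicit command would look something like
the following sketch (option spellings per the vxassist man page; this
is a reconstruction, not the course's exact listing):

```
# vxassist -g dg2 make newvol 2g layout=raid5,nolog ncol=3 disk01 disk02 disk03
```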
Although there are many vxassist command options, only a few are
commonly used. Some of them require careful study. Always read the
manual (man) pages and related documentation before attempting to use
most of these options.
The vxintro and vxassist man pages contain useful information that is
difficult to find elsewhere.
The first two variations are equivalent and create the same volume
structure. The last two are also equivalent.
The vxassist command can frequently determine the best way to use the
specified disk drives (media names) in a volume structure.
The owner, group, and mode are usually those of the root user. For some
volumes, especially raw volumes that are used by a database, the volume
ownership must be modified.
Caution – Do not use the chown, chgrp, or chmod command to set raw
volume attributes. This is because the attributes revert to their original
values after each system reboot. Change raw volume attributes using
VxVM commands.
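A persistent alternative is the vxedit set operation. A sketch, assuming
a raw database volume named dbvol in disk group dg2 that should be owned
by an oracle user and dba group (all of these names are illustrative):

```
# vxedit -g dg2 set user=oracle group=dba mode=0660 dbvol
```

Because VxVM records these attributes in its own configuration, they
survive system reboots.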
Note – You can also use the VEA GUI New Volume Wizard to set volume
ownership and permissions during initial volume creation. Or, later, you
can use the Actions menu File System entry to create a file system and
configure mount information for an existing volume.
Note – VxVM also supports the VxFS file system type, but the VxFS
features are licensed separately.
Using the New File System Details Form, shown in Figure 5-11, you can
make fundamental file system changes.
You enter a comma-separated list of mkfs file system options in the Extra
options text field. Consult the mkfs and mkfs_ufs man pages for more
detailed option information.
You can enter valid file system mount options, as shown in Figure 5-12.
By default, the newfs utility calculates the minimum free space based
on partition size (64 Mbytes ÷ partition size × 100), rounded down to the
nearest integer. The default value is limited to between 1 percent and
10 percent.
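For example, on a 2048-Mbyte partition the calculation works out to
3 percent (64 ÷ 2048 × 100 = 3.125, rounded down):

```shell
# minimum free space percentage for a 2048-MB partition
# (integer division performs the round-down)
echo $((64 * 100 / 2048))
```

which prints 3; results outside the 1 to 10 percent range are clamped
to that range.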
When the file system is full, the free space can only be accessed by user
root. It can act as an emergency overflow.
# newfs -m 10 /dev/vx/rdsk/newDG/vol_01
In very large file systems, you can safely set the minimum free space to a
smaller percentage.
The newfs utility calculates the number of bytes per inode based on file
system size. By default, the newfs utility calculates the number of inodes
as follows:
● 2048 bytes per inode for 0-1 Gbyte file system size
● 4096 bytes per inode for 1-2 Gbytes file system size
● 6144 bytes per inode for 2-3 Gbytes file system size
● 8192 bytes per inode for file system larger than 3 Gbytes
If you intend to create a large file system that will contain a small number
of very large files, you might be able to decrease the total number of
inodes, for example:
# newfs -i 10240 /dev/vx/rdsk/newDG/vol01
If the logging option is specified for a file system, then logging is
enabled for as long as the file system remains mounted. Logging is the
process of storing transactions (changes that make up a complete UFS
operation) in a log before the transactions are applied to the file
system. After a transaction is stored, it can be applied to the file
system. This process prevents file systems from becoming inconsistent,
thereby eliminating the need to run the fsck command. Because the fsck
command can be bypassed, logging reduces the time required to reboot a
system after a crash or an unclean halt. The default behavior is no
logging.
The log is allocated from free blocks on the file system, and it is sized at
approximately 1 Mbyte per 1 Gbyte of file system, up to a maximum of
64 Mbytes. Logging can be enabled on any UFS, including root (/). The
log created by UFS logging is continually flushed as it fills up. The log is
totally flushed when the file system is unmounted or when the
lockfs -f command is used.
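Given the 1-Mbyte-per-Gbyte rule with a 64-Mbyte cap, the approximate
log size can be sketched as shell arithmetic (an illustration of the
sizing rule quoted above, not of any UFS interface):

```shell
# approximate UFS log size in MB for a file system size in GB,
# capped at 64 MB
fs_gb=100
echo $(( fs_gb < 64 ? fs_gb : 64 ))
```

which prints 64 for a 100-Gbyte file system; a 10-Gbyte file system
would get a log of about 10 Mbytes.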
# mkdir /Test
# vi /etc/vfstab
/dev/vx/dsk/newDG/testvol /dev/vx/rdsk/newDG/testvol /Test ufs 1 yes logging
# mount /Test
Using DRLs
A DRL is a VxVM log file that tracks data changes made to mirrored
volumes. The DRL speeds recovery time when a failed mirror must be
synchronized with a surviving mirror.
The following example shows a mirrored volume with a DRL. Notice that
the log subdisk does not reside on either of the mirror disk drives.
# vxprint -g newDG mirvol
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE
v mirvol fsgen ENABLED 4096 - ACTIVE
pl mirvol-01 mirvol ENABLED 7182 - ACTIVE
sd ndg-01-01 mirvol-01 ENABLED 7182 0 -
pl mirvol-02 mirvol ENABLED 7182 - ACTIVE
sd ndg-03-01 mirvol-02 ENABLED 7182 0 -
pl mirvol-03 mirvol ENABLED LOGONLY - ACTIVE
sd ndg-02-03 mirvol-03 ENABLED 33 LOG -
When RAID-5 logging is used, a copy of the data and parity are written to
the RAID-5 log before being written to the disk drives.
RAID-5 logging is optional, but RAID-5 logs are created by default. You
should always run a system with RAID-5 logs to ensure data integrity.
The following example shows a RAID-5 volume with a log. Notice that
the log subdisk does not reside on either of the stripe disk drives.
The default log size for a RAID-5 volume is four times the full stripe
width (the stripe unit size × the number of stripe columns).
You should plan for both RAID-5 logs and DRLs in advance.
You should take special care with RAID-5 log placement because the data
written to all RAID-5 stripe units is also written to the log.
As shown in Figure 5-13, leaving a small amount of free space at the end
of all disk drives ensures that you always have alternate locations for log
placement or relocation.
Figure 5-13 Volume 01 and Volume 02 laid out with free space reserved
at the end of each disk drive.
If possible, a log should not reside on the same disk drive as its related
volume.
As shown in Figure 5-14, you can either let VxVM automatically assign a
suitable log disk or you can enable manual disk assignment.
Adding a DRL
To prevent I/O bottlenecks, a DRL should not reside on a disk drive used
by its related volume. It is best to specify the disk drive (media name)
where the DRL should be placed. If the DRL location is not specified, the
vxassist command assesses the available disk space and decides where
to place the log. The following example shows the addition of a DRL to a
mirrored volume.
# vxassist addlog mirvol ndg-02
# vxprint -g newDG mirvol
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE
v mirvol fsgen ENABLED 4096 - ACTIVE
pl mirvol-01 mirvol ENABLED 7182 - ACTIVE
sd ndg-01-01 mirvol-01 ENABLED 7182 0 -
pl mirvol-02 mirvol ENABLED 7182 - ACTIVE
sd ndg-03-01 mirvol-02 ENABLED 7182 0 -
pl mirvol-03 mirvol ENABLED LOGONLY - ACTIVE
sd ndg-02-03 mirvol-03 ENABLED 33 LOG -
The process for adding a RAID-5 log is the same as for adding a DRL.
The following example shows the addition of a log to a RAID-5 volume.
# vxassist addlog raidvol ndg-04
# vxprint -g newDG raidvol
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE
v raidvol raid5 ENABLED 4096 - ACTIVE
pl raidvol-01 raidvol ENABLED 7168 - ACTIVE
sd ndg-02-02 raidvol-01 ENABLED 3591 0 -
sd ndg-01-03 raidvol-01 ENABLED 3591 0 -
sd ndg-03-03 raidvol-01 ENABLED 3591 0 -
pl raidvol-02 raidvol ENABLED 3591 - LOG
sd ndg-04-02 raidvol-02 ENABLED 3591 0 -
The Quantity/Disk removal method provides two options. You can enter
the quantity of logs to be removed, or you can specify one or more disk
drives on which to preserve log copies.
The Volume Disk Map, shown in Figure 5-17, displays a map of each
volume and its associated disk drives.
Preparation
The purpose of this lab is to have you create and destroy VxVM objects
using both the VEA GUI and command-line programs. Each method has
advantages.
Note – Substitute your workgroup codes for the X and Y in dgX and dgY.
The answer is c.
3. Which of the following volume types has the fastest recovery time?
a. Striped volumes
b. Layered volumes
c. Mirrored volumes
The answer is b.
4. Which of the following volume types has the highest disk drive
failure tolerance?
a. Concatenated mirror
b. Mirrored concatenation
c. RAID 5
d. Striped mirror
The answer is d.
The answer is c.
The answer is c.
The answer is e.
To create a volume using the VEA GUI, complete the following steps:
1. Display your first disk group (dgX) in the grid area.
2. In the toolbar, select New Volume.
3. Configure the New Volume Wizard as follows:
● Manually select disks to use for the volume.
● Select only one of the disks for use in the new volume.
● Enter your assigned concatenated volume name.
● Select the Concatenated layout.
● Select Maxsize.
● After the Maxsize calculation has completed, type 200 in the
Size window and select MB from its pull-down menu.
● Select No file system.
● Review the final configuration parameters and click Finish.
Note – The Maxsize feature can be useful when you are trying to
maximize the size of a new volume and when you have limited disk drive
space.
4. Check the status of the new volume by using the vxprint command.
5. Verify that your new volume has a single plex with one subdisk and
that the volume and plex are ENABLED and ACTIVE.
6. Display the new volume in the grid area.
7. Click the new volume in the grid area and select Properties from its
pop-up menu.
8. Examine the volume’s properties and click Cancel when you are
finished.
To create a volume using the command line, use the man pages, as
necessary, to complete the following steps:
1. Open a window and use the rlogin or telnet command to log into
the VxVM server as user root.
2. Stop the volume you created in the previous procedure.
# vxvol -g disk_group stop volume_name
3. Recursively remove the volume.
# vxedit -g disk_group -rf rm volume_name
What is the purpose of the vxedit -f option?
_____________________________________________________________
4. Re-create the 200-Mbyte concatenated volume again by using the
vxassist command. You must specify the following items:
● The disk group the volume should be in (-g disk_group)
● The make option
● The name and size of the volume (volname 200m)
● The volume layout (layout=concat)
● The disk drive (media name) you want to use
5. Record the command you used to create the volume.
_____________________________________________________________
Note – If there is any problem with your new volume, consult with your
instructor.
3. Leave the first Add Mirror form configured with its default values
which should include the following:
● Number of mirrors to add: 1
● Layout=Concatenated
● Let Volume Manager decide which disks to use
4. On the VxVM server, check the status of the mirror
resynchronization by using the vxtask list command.
5. On the VxVM server, verify the state of your new mirror by using
the vxprint command.
You should now see two plexes in your volume. Large mirrors take a
while to synchronize.
Until the resynchronization is complete, the related plex is in a
TEMPRMSD state. Consult the vxinfo man page for volume state
definitions.
6. After the vxassist command returns, use the vxprint command to
verify that the volume has two plexes and that its status is ENABLED
and ACTIVE.
Consult with your instructor if you are having problems.
Note – You can also move a volume mirror to a different disk drive if it is
poorly placed and is causing a performance problem.
To add a file system using the VEA GUI, complete the following steps:
1. Click your mirrored volume in the grid area, and select File System
New File System from the pop-up menu.
2. Configure the New File System form as follows:
● Ensure that the File system type is ufs.
● Leave the Allocation at its default value (1024).
● Enter your assigned mount name.
● Select Create mount point.
● Deselect Read only and Honor setuid.
● Select Add to file system table and mount at boot.
● Set the fsck pass number to 2.
● Examine the New File System Details menu.
● Examine the Mount File System Details menu.
● Click OK.
3. On the VxVM server, verify that the following are true:
● The mount point is present in the root directory.
● Your file system is mounted.
● The mount entry is in the /etc/vfstab file.
● The df -kl output seems appropriate for the volume size.
Review the VEA command-line operations recorded in the log file on the
VxVM server.
# tail -45 /var/vx/isis/command.log
The most recent commands are appended to the end of the file. Not all the
details are logged in the command file, such as the edits to the
/etc/vfstab file.
# mkdir /Junk
# vi /etc/vfstab
/dev/vx/dsk/dgX/xvol-01 /dev/vx/rdsk/dgX/xvol-01 /Junk ufs 1 yes logging
Note – The file system vfstab and mount options enable the UFS logging
feature. UFS logging is not necessary, but it offers additional file system
protection and is part of the Solaris OS.
To add a DRL using the VEA GUI, complete the following steps:
1. Verify there is a disk drive available for the DRL within the same
disk group that does not contain either plex of the mirrored volume.
2. Display your volume in the grid area and click it with the third
mouse button.
3. Select Log Add in the volume pop-up menu.
4. In the Add Log window, complete the following steps:
a. Click Manual disk assignment.
b. Select a disk drive that is not part of the mirrored volume.
c. Click OK.
5. Return to the command line on the VxVM server and use the
vxprint command to verify the following:
● The mirrored volume now has a log plex
● The log is not on the same disk drive as either of the volume
mirrors
The following example shows the command sequence to remove and add
a DRL to a volume. Practice removing and adding a DRL from your
volume using the command line.
The disk media name you specify should be on a different disk drive than
the disk drives used by the volume mirrors.
# vxassist remove log volume_name
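The matching add step, using the same placeholder names, can be
sketched as:

```
# vxassist -g disk_group addlog volume_name medianame
```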
Note – There are other methods of increasing a volume’s size and a file
system’s size. However, the VEA GUI and vxresize command reliably
increase the size of both the volume and its related file system at the same
time.
To resize file systems using the command line, complete the following
steps:
1. Add 2 Mbytes to the size of your mirrored volume and file system
by using the following command:
# vxresize -F ufs -g disk_group volume_name +2m
Note – You can also express the +2m as a new volume length without the
plus sign. There are also -s and -x options that ensure the requested size
value is appropriate. You can also specify disk media names (for example,
disk01, disk02) that you want to be used for the new space.
2. Examine the new volume and file system to ensure that the changes
have taken place.
Large changes can take a long time.
Note – You cannot shrink a volume with a file system unless the file
system is of VxFS type. Read the vxresize man page for a complete
description of restrictions.
To resize file systems using the VEA GUI, complete the following steps:
1. In the grid area, complete the following steps:
a. Display your volume.
b. Click your volume with the third mouse button.
c. Select Resize Volume in the pop-up menu.
2. Configure the Resize Volume form, shown in Figure 5-19, as follows:
● Enter 2 in the Add By window and select MB from its pull-down menu.
● Let VxVM decide which disks to use for the additional space.
● Click OK.
6. Display the disk drives in your disk group in the grid area.
7. Select four of the disk drives by holding down the Control key
while clicking them with the left mouse button.
8. Select the New Volume button in the toolbar.
9. Configure the New Volume form as follows:
● Enter your assigned RAID-5 volume name.
● Enter 200 in the Size field.
● Select MB from the Size pull-down menu.
● Choose RAID 5 in the Layout area. The Number of Columns
field should automatically be set to 3, and logging should be enabled.
● Leave the default Stripe Unit Size at 32.
● Click Next.
Note – You can add the log later to better control its placement. The disk
drives are not necessarily used in the order you selected them.
2. Click the View menu with the third mouse button and select Data
Gathering Options from the pull-down menu.
5. Use the mkfile command to create some test data in your file
system volume, for example: mkfile 20m /Test/file1
6. Observe the activity levels in the Disk/Volume performance display.
Exercise Summary
Manage the discussion here based on the time allowed for this module, which was given in the “About This
Course” module. If you find you do not have time to spend on discussion, then just highlight the key concepts
students should have learned from the lab exercise.
● Experiences
Ask students what their overall experiences with this exercise have been. You might want to go over any
trouble spots or especially confusing areas at this time.
● Interpretations
Ask students to interpret what they observed during any aspects of this exercise.
● Conclusions
Have students articulate any conclusions they reached as a result of this exercise experience.
● Applications
Explore with students how they might apply what they learned in this exercise to situations at their workplace.
Objectives
Upon completion of this module, you should be able to:
● Encapsulate and mirror the system boot disk
● Configure a best practice boot disk
● Administer hot spares and hot relocation
● Evacuate all subdisks from a disk drive
● Move a disk drive without preserving data
● Move a populated disk drive to a new disk group
● Backup and restore a disk group configuration
● Describe how to import a disk group after a system crash
● Perform a volume snapshot backup
● Perform an online volume relayout
● Create VxVM layered volumes
● Perform basic Intelligent Storage Provisioning administration
● Replace a failed disk drive
Boot Disk Encapsulation and Mirroring
The accompanying diagram shows the rootdg disk group with rootvol-01
on SCSI controller c0 and its mirror rootvol-02 on SCSI controller c1,
and a separate newdg disk group on a storage array attached through
SOC controller c2.
The ideal boot disk hardware configuration has the following features:
● The boot disk and mirror are on separate interfaces.
● The boot disk and mirror are not in a storage array.
● Only the boot disk and mirror are in the rootdg disk group.
There must be at least 2048 sectors at the beginning or end of the boot
disk that are not assigned to any partition. These sectors are needed for
the private region. If necessary, VxVM takes the space from the end of the
swap partition, but this can create a difficult-to-manage boot disk
configuration.
Note – If your boot disk does not have any free cylinders, you boot the
system from CD-ROM in single-user mode. You use the format utility to
modify the swap partition size and relabel the disk.
A new disk group rootdg will be created and the disk device c0t0d0 will
be encapsulated and added to the disk group with the disk name rootdg01
This will update the /etc/vfstab file so that volume devices are used to
mount the file systems on this disk device. You will need to update any
other references such as backup scripts, databases, or manually created
swap devices.
You cannot boot the system using the device aliases until the OpenBoot
use-nvramrc? variable is set to true. After the variable is enabled, you
can boot from the primary or mirror boot device aliases, for example:
# eeprom "use-nvramrc?"=true
# init 0
ok boot vx-rootdg01
# init 0
ok boot vx-rootdg02
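Before relying on the aliases from the ok prompt, you can confirm that nvramrc actually contains them. The following is a minimal sketch, not part of the course labs: it checks a sample nvramrc value (mirroring the eeprom output shown later in this module) with standard shell tools. On a live system, substitute `nvramrc=$(eeprom nvramrc)` for the sample assignment.

```shell
#!/bin/sh
# Sketch: confirm that nvramrc contains the VxVM boot device aliases
# before attempting "boot vx-rootdg01" from the ok prompt.
# Sample value below; on a live system use:  nvramrc=$(eeprom nvramrc)

nvramrc='nvramrc=devalias vx-rootdg01 /pci@1f,4000/scsi@3/disk@0,0:a
devalias vx-rootdg02 /pci@1f,4000/scsi@3/disk@1,0:a'

for alias in vx-rootdg01 vx-rootdg02; do
    if echo "$nvramrc" | grep -q "devalias $alias"; then
        echo "$alias: alias present"
    else
        echo "$alias: alias missing" >&2
    fi
done
```

If either alias is reported missing, run the vxeeprom setup again before halting the system.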
VxVM has two reserved variables named defaultdg and bootdg. Unless
special action is taken, both of the variables are set to a value of nodg.
When you encapsulate the system boot disk and place it in the rootdg
disk group, the bootdg variable is automatically updated.
# vxdg bootdg
rootdg
The following is the boot disk partition map after the encapsulation
process is completed. The public region (slice 3) is mapped to the entire
disk. The private region (slice 4) is mapped to the last cylinder. The root
and swap data are in the same location as before the encapsulation.
# format -d c0t1d0
partition> p
Current partition table (original):
Total disk cylinders available: 4924 + 2 (reserved cylinders)
# vxprint
Disk group: rootdg
When the boot disk is mirrored, the structure of the boot disk and the
mirror disk are not identical. This can be confusing and can add difficulty
to service and recovery situations.
The partition maps of a worst-case boot disk and mirror disk configuration
are organized in a different manner, for example:
# format -d c0t0d0
...
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 3614 3.72GB (3615/0/0) 7808400
1 swap wu 3618 - 3879 276.33MB (262/0/0) 565920
2 backup wm 0 - 3879 4.00GB (3880/0/0) 8380800
3 - wu 0 - 3879 4.00GB (3880/0/0) 8380800
4 - wu 3615 - 3617 3.16MB (3/0/0) 6480
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
# format -d c0t1d0
...
Part Tag Flag Cylinders Size Blocks
0 root wm 3 - 3617 3.72GB (3615/0/0) 7808400
1 swap wu 3618 - 3879 276.33MB (262/0/0) 565920
2 backup wu 0 - 3879 4.00GB (3880/0/0) 8380800
3 - wu 3 - 3879 3.99GB (3877/0/0) 8374320
4 - wu 0 - 2 3.16MB (3/0/0) 6480
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
Caution – Ensure that the mirror disk is initialized in the VxVM sliced
format and not the cdsdisk format. The cdsdisk format must not be
used in the rootdg disk group.
Consult the Sun BluePrints™ document, Towards a Reference Configuration for VxVM Managed Boot Disks,
for a more detailed explanation.
5. Remove the rootdg01 disk from the rootdg disk group and
complete the following steps:
a. Reinitialize the rootdg01 disk.
b. Add the rootdg01 disk back into the rootdg disk group.
# vxdg -g rootdg rmdisk rootdg01
# vxdisksetup -i c0t0d0 format=simple
# vxdg -g rootdg adddisk rootdg01=c0t0d0
You can now replace a defective boot disk drive in the same manner as
any other VxVM disk drive, and then resynchronize the mirrors.
Hot-Relocation Functionality
Hot relocation first uses free space on disk drives that have been
designated as hot spares. If there are no designated hot spares, VxVM
uses available free space on any disk drive in the disk group that does
not have the nohotuse flag set.
Hot relocation can also be performed for subdisks that are part of a
RAID-5 volume.
Hot relocation is enabled by default and goes into effect, without system
administrator intervention, when a failure occurs.
You can also verify and modify disk drive hot-device status in the VEA
grid area, as shown in Figure 6-3.
A disk designated as a spare is used only for hot relocation. The vxassist
utility will not allocate a subdisk on that disk unless forced to by
command-line arguments. You designate disk drives as spares using the
vxedit command, and you verify the spare status of the disk drives with
the vxdisk command. The following example shows the command-line
process:
# vxedit -g dgY set spare=on dgY05
# vxdisk -g dgY list
DEVICE TYPE DISK GROUP STATUS
c2t1d0s2 sliced dgY04 dgY online
c2t3d0s2 sliced dgY05 dgY online spare
c2t5d0s2 sliced dgY01 dgY online
c3t32d0s2 sliced dgY06 dgY online
c3t33d0s2 sliced dgY02 dgY online
c3t35d0s2 sliced dgY03 dgY online
Note – If a disk drive is marked as a hot spare, the vxassist utility does
not create a subdisk on that disk drive unless the disk drive is specifically
designated in command-line arguments.
If hot relocation is enabled (the default), VxVM can use any disk drive
with free space when relocating a failed subdisk if there is no hot-spare
space available. If you do not want a disk drive to be used for hot
relocation, you can mark it for no hot use as follows:
# vxedit -g dgY set nohotuse=on dgY04
# vxdisk -g dgY list
DEVICE TYPE DISK GROUP STATUS
c2t1d0s2 sliced dgY04 dgY online nohotuse
c2t3d0s2 sliced dgY05 dgY online spare
c2t5d0s2 sliced dgY01 dgY online
c3t32d0s2 sliced dgY06 dgY online
c3t33d0s2 sliced dgY02 dgY online
c3t35d0s2 sliced dgY03 dgY online
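The precedence described above — spares first, then any online disk without the nohotuse flag — can be checked mechanically from the listing. This sketch classifies a sample of the vxdisk output with awk; on a live system you would pipe the real `vxdisk -g dgY list` output in instead.

```shell
#!/bin/sh
# Sketch: classify disks by hot-relocation eligibility from vxdisk list
# output.  Disks flagged "spare" are used first; disks flagged
# "nohotuse" are never used; other online disks are free-space fallback.

vxdisk_out='DEVICE     TYPE    DISK   GROUP  STATUS
c2t1d0s2   sliced  dgY04  dgY    online nohotuse
c2t3d0s2   sliced  dgY05  dgY    online spare
c2t5d0s2   sliced  dgY01  dgY    online'

classified=$(echo "$vxdisk_out" | awk 'NR > 1 {
    if ($0 ~ /spare/)         print $3 ": hot spare, used first"
    else if ($0 ~ /nohotuse/) print $3 ": excluded from hot relocation"
    else                      print $3 ": free-space fallback"
}')
echo "$classified"
```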
The Reserved for Allocator flag, if set, reserves a disk for exclusive use by
ISP utilities such as the vxpool and vxvoladm commands.
Monitoring Errors
You can also examine system error logs for evidence of disk drive
problems, but the email notification to root is usually sufficient.
Evacuation can only be performed on disk drives within the same disk group.
You should verify that the evacuation process is not going to create any of
the following conflicts:
● Both volume mirrors are on the same physical disk drive.
● More than one stripe column of a striped or RAID-5 volume is on the
same disk drive.
For this scenario, evacuate the disk dgX02 to disk dgX06. After the
evacuation has completed, replace disk dgX02. You might mark the new
dgX02 disk drive as the spare and unmark dgX06. You could also migrate
the stripe data from dgX06 back to the new dgX02 disk drive.
3. Use the pop-up menus available on the volume name and disk drive
name to perform any of the following actions:
● Stop any volumes on the disk drive that you want to remove.
● Delete the volume or volumes.
● Remove the disk from the disk group.
The disk drive is returned to the free disk pool and can now be
added to a different disk group.
The process of moving populated disks is easier if you have VERITAS FastResync. Currently, Sun does not
sell, license, or support this option. VERITAS FastResync enables the vxdg move/split/join options.
In the preceding example, the volume mirvol contains two plexes with a
single subdisk associated with each plex. The volume is associated with
two disk drives, olddg-01 and olddg-02.
Note – The VEA GUI’s Disk/Volume Map display can be very helpful
when you are trying to determine volume involvement with specific disk
drives.
Caution – If you are saving layered volumes that have sub-volumes (such
as striped mirror structures), you must add the r and L options to the
vxprint command. If you fail to do this, the saved configuration
information is incomplete.
Command Function
Note – Removing the definitions does not affect the data: it only removes
selected records from the configuration database. The -r option
recursively removes the volume and all associated plexes and subdisks.
Caution – It is critical that all of the disk drives retain their original media
names when they are added to the new disk group.
Caution – If the disk drives do not have their original media names, the
configuration reload fails.
Figure 6-7 shows how the kernel configuration table is checked by the
vxio driver before the driver attempts to access a virtual structure. The
disk-resident copies do not need to be examined.
[Figure 6-7: the vxio kernel driver consults the kernel configuration table before each device access. The vxconfigd daemon builds and updates the table from the disk-resident configdb records in the storage array, recording access errors as they occur, while the vxconfigbackupd daemon saves configuration copies under /etc/vx/cbr/bk when administrative modifications are made.]
When VxVM starts, the vxconfigd daemon imports disk groups that
belong to the VxVM server.
When disk groups are imported, the kernel configuration table is created
by the vxconfigd daemon, which reads the disk-resident configdb
records.
You use the vxconfigbackup command to back up the current disk group
configuration information. You restore a disk group configuration using
the vxconfigrestore command.
If there is damage to the disk group configuration records that are stored
in the private regions of one or more disk drives in a disk group, the disk
group import operation might fail. The vxconfigrestore command is
used to automatically correct the damaged configuration records or to
recreate a disk group from the beginning.
When restoring or repairing damaged disk group records, you must meet
the following criteria:
● Failed disk drives must be replaced prior to using the
vxconfigrestore command.
● Replacement disk drives must be initialized for VxVM use prior to
using the vxconfigrestore command.
● All disk drives must have the same physical configuration and
logical addresses as when the configuration backup was performed.
You use the -l option to specify a backup file location other than the
default location in /etc/vx/cbr/bk. The following example shows the
use of the vxconfigbackup command.
# vxdg list
NAME STATE ID
rootdg enabled 1066871088.21.ns-east-104
dgX enabled 1066748899.279.ns-east-104
# ls /etc/vx/cbr/mybackup
dgX.1066748899.279.ns-east-104
# ls /etc/vx/cbr/mybackup/dgX.1066748899.279.ns-east-104
1066748899.279.ns-east-104.binconfig
1066748899.279.ns-east-104.cfgrec
1066748899.279.ns-east-104.dginfo
1066748899.279.ns-east-104.diskinfo
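The backup directory name encodes both the disk group name and the disk group id. As a small self-contained illustration (the directory name is the one from the listing above), standard shell parameter expansion can split the two parts — useful when you need the dgid for a restore:

```shell
#!/bin/sh
# Sketch: split a vxconfigbackup directory name of the form
# <dgname>.<dgid> into its parts.  The dgid uniquely identifies the
# configuration when restoring.

bkdir="dgX.1066748899.279.ns-east-104"

dgname=${bkdir%%.*}   # text before the first dot -> disk group name
dgid=${bkdir#*.}      # text after the first dot  -> disk group id

echo "disk group: $dgname"
echo "dgid: $dgid"
```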
# ls -l /R5demo
total 28736
-rw------T 1 root other 1048576 Oct 27 11:37 file1
-rw------T 1 root other 3145728 Oct 27 11:37 file2
-rw------T 1 root other 10485760 Oct 27 19:40 file3
drwx------ 2 root root 8192 Oct 27 11:35 lost+found
# umount /R5demo
# vxvol -g dgX stop r5demo
# vxvol -g dgX stop stdemo
# vxdg deport dgX
If a disk group must be imported after a system crash, the process can be
more difficult. Following are some of the possible variations.
● Performing a typical import of a clean disk group:
# vxdg import disk_group_name
● Importing a disk group to another system after a crash:
# vxdg -C import disk_group_name
The -C option is necessary to clear the old host IDs that were left on
the disk drives after the crash.
# vxdg -fC import disk_group_name
Caution – The -f option forces an import in the event that all the disk
drives are not usable. This option can be dangerous on dual-hosted
storage arrays because the disk group might also be imported to another
host. A disk group that is imported to two host systems can become
corrupted.
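Before forcing an import, it is worth checking the hostid recorded on the group's disks. The sketch below (the record and host names are hypothetical samples in the vxdisk -s list format) flags a group whose last importer was a different host:

```shell
#!/bin/sh
# Sketch: warn before a forced import if the hostid recorded on a disk
# belongs to another host.  A non-blank foreign hostid may mean the
# disk group is still imported elsewhere (risk of corruption).

this_host=boulder

record='dgname: dgX
hostid: denver'

owner=$(echo "$record" | awk '/^hostid:/ { print $2 }')

if [ -n "$owner" ] && [ "$owner" != "$this_host" ]; then
    echo "WARNING: last imported by $owner; forcing may corrupt the group"
else
    echo "hostid clear; a normal import should be safe"
fi
```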
The difficult part is that you must know the unique rootdg group
identifier in advance. You can determine the
rootdg group identifier with the vxdisk command as follows:
# vxdisk -s list
Disk: c0t2d0s2
type: sliced
flags: online ready private autoconfig autoimport
imported
diskid: 791000525.1055.boulder
dgname: rootdg
dgid: 791000499.1025.boulder
hostid: boulder
Note – The vxdisk -s list command lists information for all attached
disk drives. Disk drives that belong to a cleanly deported disk group have
a blank hostid entry.
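The dgid field can be pulled out of the vxdisk -s list record mechanically. This sketch runs against the sample record above; on a live system, feed it the real command output.

```shell
#!/bin/sh
# Sketch: extract the dgid of rootdg from "vxdisk -s list" style
# records, for use when importing the group by identifier.

vxdisk_s='Disk: c0t2d0s2
type: sliced
flags: online ready private autoconfig autoimport imported
diskid: 791000525.1055.boulder
dgname: rootdg
dgid: 791000499.1025.boulder
hostid: boulder'

dgid=$(echo "$vxdisk_s" | awk '
    /^dgname:/ { name = $2 }
    /^dgid:/   { id = $2 }
    END { if (name == "rootdg") print id }')

echo "rootdg dgid: $dgid"
```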
Snapshot Process
You must satisfy the following prerequisites before the snapshot process
can be started:
● You must know the name of the volume to be backed up.
● You must provide a name for the new temporary snapshot volume.
● You can specify a specific disk drive to use for the snapshot copy.
● You must have sufficient unused disk space for the snapshot.
The final operation detaches the temporary snapshot mirror and attaches
it to a regular volume with a name of your choosing.
4. Use the vxprint command to verify the status of the new snapshot
volume.
In the following case, the snapshot mirror/volume is named
SNAP-stdemo. The parent volume is named stdemo.
# vxprint -g dgX
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE
dg dgX dgX - - - -
# fsck -y /dev/vx/rdsk/dgX/SNAP-stdemo
# mkdir /Temp
# umount /Temp
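The snapshot steps above can be sketched as a command sequence. This is a dry-run illustration — the run helper only prints each command — using the vxassist snapstart/snapshot pair and the volume names from the example. Remove the dry-run wrapper to execute the commands on a live VxVM system.

```shell
#!/bin/sh
# Sketch of the classic volume snapshot sequence, shown as a dry run:
# each command is printed rather than executed.

dg=dgX
vol=stdemo
snap=SNAP-$vol

run() { echo "+ $*"; }   # dry-run helper: print instead of execute

run vxassist -g $dg snapstart $vol         # attach temporary snapshot mirror
run vxassist -g $dg snapshot $vol $snap    # detach mirror into snapshot volume
run fsck -y /dev/vx/rdsk/$dg/$snap         # clean the snapshot file system
run mount /dev/vx/dsk/$dg/$snap /Temp      # mount the snapshot for backup
```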
The relayout feature can be used to perform many operations, such as:
● Adding more stripe columns to a RAID-5 volume
● Changing the stripe unit size of a volume
● Changing the type of volume from RAID 5 to mirrored or
concatenated
Note – You can also use the vxassist relayout command to accomplish
online volume relayout. Unless you explicitly allocate storage, the
vxassist command automatically determines where to get the
permanent and temporary disk space that might be needed.
The RAID 5 and RAID 0 stripes are using a total of 27,648 blocks on each
disk drive or about 14 Mbytes. All disk drives in the disk group have
more than enough space left for use during the relayout operations.
In the preceding example, the volume Layout was set to RAID 5 and
the Number of Columns was increased to 6. Use the Show Options
button to enter additional relayout criteria, such as which disk
drives to use.
After you fill out the Change Volume Layout form and start the relayout
process, a Relayout Status window, shown in Figure 6-12, appears. You
use the controls in the Relayout Status window to:
● Temporarily stop the relayout process (pause)
● Abort the relayout process
● Continue the process after a pause
● Undo the relayout changes (reverse)
The VEA Relayout Status Monitor window also displays the percentage
complete status.
Note – The relayout task could fail if the target volume was not originally
created using the VEA GUI or the vxassist command.
[Figure: a layered stripe-mirror volume — the top-level volume stripes across subdisks that are themselves backed by sub-volumes, each containing two mirrors and a log.]
You can also use the vxassist maxsize option as follows to calculate
maximum available space for a specific structure. The example defaults to
two mirrors and logging.
# vxassist -g dgX maxsize \
layout=stripe-mirror ncolumn=2 \
dgX01 dgX02 dgX03 dgX04 dgX05 dgX06
Note – If you use the VEA GUI New Volume wizard to configure layered
volumes, you can use the Max Size button to estimate maximum available
space.
By default, the New Volume wizard configures two columns, two mirrors
for each column, and enables logging (DRLs).
[Figure: a 10-Gbyte stripe-mirror volume vol01 (two columns, 64-Kbyte stripe width). The striped plex vol01-03 has two 5-Gbyte columns, subdisks vol01-S01 and vol01-S02, each backed by a mirrored sub-volume (vol01-L01 and vol01-L02) built from two 5-Gbyte sub-plexes (vol01-P01 through vol01-P04) on subdisks disk01-01 through disk04-01.]
[Figure: a 10.5-Gbyte concat-mirror volume vol01. The concatenated plex vol01-03 joins the 1.6-Gbyte subdisk vol01-S01 and the 8.9-Gbyte subdisk vol01-S02, each backed by a mirrored sub-volume (vol01-L01 and vol01-L02) of two sub-plexes.]
With the advent of hardware RAID LUN technology, such as the Sun
StorEdge 3510/6020/9910 models and related SAN technology, a system
administrator might be faced with analyzing thousands of devices whose
underlying characteristics and capabilities are hidden and unknown.
The components, commands, volumes, and storage designated for ISP use
cannot be used by traditional VxVM commands such as vxassist,
vxdiskadm, and vxvol.
All ISP operations are performed using the vxvoladm command, the
vxpool command, or the VEA GUI. There are no other commands or
tools used to create ISP pools or application volumes.
You configure both data pools and clone pools within a disk group.
A data storage pool is created within a standard disk group. One or more
LUNs from a disk group are assigned to a named storage pool. Any
subsequent storage pools created in the same disk group are
automatically defined as clone pools.
Clone pools are used only to hold full-sized instant snapshots of data pool
volumes in the same disk group. If the instant snapshot feature is not
licensed on your system, clone pools have no use.
Application volumes reside only in an ISP data storage pool. You create
application volumes using either the vxvoladm command or VEA GUI.
You associate ISP templates with storage pools so that volumes created in
a storage pool are restrained by a fixed set of configuration rules. The
hierarchy of ISP templates is shown in Figure 6-18.
[Figure 6-18: the hierarchy of ISP templates, relating the templates, the 25 capabilities, and the 8 variables (with 5-15, 1-2, and 0-2 associations at each level of the hierarchy).]
Each storage pool set provides two storage pool definitions. For example,
the storage pool set, mirrored_data_striped_clones, provides the
mirrored_volumes storage pool definition for the data pool and the
striped_volumes storage pool definition for the clone pool.
As shown in Figure 6-19, the VEA GUI Organize Disk Group Wizard
organizes existing disk groups using one of the pre-defined storage pool set
templates. Default data pool and clone pool names can be modified. You
assign disk group disks to the pools afterwards.
By default, the data and clone pool names are a variation of the pool
templates, mirrored_volumes and striped_volumes.
After initial pool creation, you use the vxpool command to associate disk
group media names with each pool. You can also modify the default
storage pool names. An example follows.
# vxpool -g dgSP adddisk mirrored_volumes1 \
dm=dgSP01,dgSP03,dgSP04,dgSP02,dgSP07,dgSP06
The following example shows the use of the vxpool command to create a
storage pool, assign disks to it, and associate a template with the pool.
The first pool created in a disk group is automatically a data pool.
# vxdisk -g dgX list
DEVICE TYPE DISK GROUP STATUS
c2t1d0s2 auto:cdsdisk dgX03 dgX online
c2t3d0s2 auto:cdsdisk dgX04 dgX online
c2t5d0s2 auto:cdsdisk dgX05 dgX online
c2t16d0s2 auto:cdsdisk dgX01 dgX online
c2t18d0s2 auto:cdsdisk dgX02 dgX online
c3t32d0s2 auto:cdsdisk dgX06 dgX online
c3t33d0s2 auto:cdsdisk dgX07 dgX online
c3t35d0s2 auto:cdsdisk dgX08 dgX online
c3t37d0s2 auto:cdsdisk dgX09 dgX online
c3t50d0s2 auto:cdsdisk dgX10 dgX online
# vxpool listpooldefinitions
any_volume_type
mirror_stripe_volumes
mirrored_prefab_raid5_volumes
mirrored_prefab_striped_volumes
mirrored_volumes
prefab_mirrored_volumes
prefab_raid5_volumes
prefab_striped_volumes
raid5_volumes
stripe_mirror_volumes
striped_prefab_mirrored_volumes
striped_volumes
By default, the autogrow policy for pools is set to 2 (diskgroup). The pool
can be grown by bringing in additional storage from the disk group
outside of the storage pool.
Storage pool attributes can be modified after initial pool creation. See the
vxpool man page for more details.
st r5pool - - - - DATA
dm dgX01 c2t16d0s2 - 17679776 - -
dm dgX02 c2t18d0s2 - 17679776 - -
dm dgX03 c2t1d0s2 - 17679776 - -
dm dgX06 c3t32d0s2 - 17679776 - -
dm dgX07 c3t33d0s2 - 17679776 - -
dm dgX08 c3t35d0s2 - 17679776 - -
The initial results are unexpected because the autogrow policy allowed
the use of disk group disks outside of the pool and the default for the
nmaxcols variable is 20. The result is a 9 column RAID-5 volume that
uses all of the disk group disks, even those outside of the defined pool.
# vxvoladm -g dgX remove volume r5vol
When the r5vol volume is removed, the extra disks are automatically
removed from the r5pool.
# vxpool -g dgX getpolicy r5pool
AUTOGROW SELFSUFFICIENT
diskgroup pool
When you create a new volume in a disk group that contains configured
storage pools, the New Volume Wizard is aware that a storage pool exists.
It automatically displays all possible volume configuration capabilities
based on templates associated with the data pool in the disk group.
Initially, none of the capabilities are enabled. If you do not enable any of
the capabilities, the completed volume is a simple concatenation.
If you save a custom template, the next time you create a volume, you are
offered the opportunity to use your custom template and bypass the
manual capability process.
st mirrored_volumes1 - - - - DATA
dm dgSP01 c2t16d0s2 - 17679776 - -
dm dgSP02 c2t18d0s2 - 17679776 - -
dm dgSP03 c2t1d0s2 - 17679776 - -
dm dgSP04 c2t20d0s2 - 17679776 - -
dm dgSP06 c2t3d0s2 - 17679776 - -
dm dgSP07 c2t5d0s2 - 17679776 - -
st striped_volumes1 - - - - CLONE
dm dgSP08 c3t32d0s2 - 17679776 - -
dm dgSP09 c3t33d0s2 - 17679776 - -
dm dgSP10 c3t35d0s2 - 17679776 - -
dm dgSP11 c3t37d0s2 - 17679776 - -
dm dgSP12 c3t50d0s2 - 17679776 - -
dm dgSP13 c3t52d0s2 - 17679776 - -
You must identify the physical path to the failed disk drive before you
proceed. The most common tools you use to do this are:
● The vxprint command
● The vxdisk command
● The /var/adm/messages file
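As a small illustration of the /var/adm/messages approach, the sketch below scans for the VxVM subdisk-failure warnings shown later in this module and reports the failed plex name. A sample log line is embedded so the script is self-contained; on a live system, point LOG at the real messages file.

```shell
#!/bin/sh
# Sketch: find VxVM subdisk failure warnings in a messages file and
# report the plex involved.  LOG defaults to /dev/null here so the
# embedded sample line drives the example.

LOG=${LOG:-/dev/null}

sample='Nov  7 21:11:40 ns-east-104 vxio: [ID 628984 kern.warning] WARNING: VxVM vxio V-5-0-386 dgX04-01 Subdisk failed in plex mirvol_01-02 in vol mirvol_01'

found=$( { cat "$LOG"; echo "$sample"; } |
    grep 'VxVM vxio' | grep 'Subdisk failed' |
    awk '{ for (i = 1; i <= NF; i++) if ($i == "plex") print "failed plex: " $(i+1) }')

echo "$found"
```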
Failure Behavior
A plex (mirror) is detached if a persistent I/O error is encountered. There
are several things to be aware of before proceeding:
● Disk block read errors might affect one subdisk, while other subdisks
on the same physical disk drive remain functional.
● Errors are not detected until read or write operations are attempted.
● Severe disk drive errors, such as general access failures, result in
relocation of all redundant subdisks associated with the failed disk.
They are relocated to either designated hot spare disks or to any disk
that does not have the nohotuse flag set.
Hardware RAID storage units, such as the Sun StorEdge T3 array, present
LUNs to attached systems. Each LUN is actually a portion of a hardware
RAID structure that is monitored internally in the storage array for disk
failures. Typically, the internal RAID volumes are redundant, such as
RAID 5 or RAID 1, and the storage array internally relocates the failing
data to a designated spare drive. Hardware RAID internal failures are
usually transparent to the VxVM software. Hardware RAID storage
devices usually notify the user root mail account of internally detected
problems.
failed disks:
dgX04
failed plexes:
mirvol_01-02
The Volume Manager will attempt to find spare disks, relocate failed
subdisks and then recover the data in the failed plexes.
The following messages are seen when a disk drive is experiencing access
problems, such as hard write errors.
Nov 7 21:11:40 ns-east-104 vxio: [ID 245403 kern.warning] WARNING: VxVM
vxio V-5-0-151 error on Plex mirvol_01-02 while writing volume mirvol_01
offset 16 length 4
Nov 7 21:11:40 ns-east-104 vxio: [ID 786473 kern.warning] WARNING: VxVM
vxio V-5-0-4 Plex mirvol_01-02 detached from volume mirvol_01
Nov 7 21:11:40 ns-east-104 vxio: [ID 628984 kern.warning] WARNING: VxVM
vxio V-5-0-386 dgX04-01 Subdisk failed in plex mirvol_01-02 in vol
mirvol_01
Nov 7 21:11:40 ns-east-104 vxvm:vxconfigd: [ID 976563 daemon.notice] V-
5-1-768 Offlining config copy 1 on disk c2t3d0s2:
Nov 7 21:11:40 ns-east-104 vxvm:vxconfigd: [ID 672837 daemon.notice]
Reason: Disk write failure
Nov 7 21:11:41 ns-east-104 vxvm:vxconfigd: [ID 905431 daemon.notice] V-
5-1-7909 Detached disk dgX04
The vxprint command is the easiest way to check the status of all volume
structures. In the following excerpt, the status of two plexes in a volume is
bad. One of the plexes is a log.
# vxprint
When VxVM detects a disk drive failure, it can place a failed plex in a
number of different states. The two most common states for a failed plex
are:
● DETACHED/IOFAIL
● DISABLED/NODEVICE
In the previous example, the VxVM media name is disk7 and the
physical path is c5t0d0.
When the VxVM software loses complete contact with a disk drive, the
physical path in the vxprint -ht command’s output might be blank. At
those times, you must determine the media name of the failed disk drive
from the vxprint command, and then use the vxdisk list command to
associate the media name with the physical device.
# vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:sliced rootdg01 rootdg online
c0t1d0s2 auto:sliced rootdg02 rootdg online nohotuse
c2t1d0s2 auto:sliced dgX03 dgX online
c2t3d0s2 auto:sliced - - online
c2t5d0s2 auto:sliced dgX05 dgX online
c2t16d0s2 auto:sliced dgX01 dgX online
c2t18d0s2 auto:sliced dgX02 dgX online
c2t20d0s2 auto:sliced dgX06 dgX online
- - dgX04 dgX failed nohotuse was:c2t3d0s2
When a disk drive fails and becomes detached, the VxVM software cannot
find the disk drive, but it still knows the physical path. This information is
the origin of the failed was status. This status means that the disk drive
has failed and that the physical path is the value displayed in the STATUS
column.
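The failed ... was status can also be harvested mechanically when many disks are involved. This sketch pulls the media name and last-known physical path out of a vxdisk list sample (the failed record matches the one shown above):

```shell
#!/bin/sh
# Sketch: report failed disks from "vxdisk list" output, recovering
# the last-known physical path from the "was:" field.

vxdisk_out='DEVICE     TYPE         DISK      GROUP   STATUS
c0t0d0s2   auto:sliced  rootdg01  rootdg  online
-          -            dgX04     dgX     failed nohotuse was:c2t3d0s2'

failed=$(echo "$vxdisk_out" | awk '/failed/ {
    path = "unknown"
    for (i = 1; i <= NF; i++)
        if ($i ~ /^was:/) path = substr($i, 5)
    print "failed disk: " $3 " last path: " path
}')

echo "$failed"
```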
Some storage arrays also require that specific disk replacement processes
are followed.
Disk drive failures in hardware RAID storage, such as the Sun StorEdge
T3 arrays, are frequently transparent to VxVM. This is because their
internal LUN structures are redundant and an internal spare is
automatically substituted for the failed drive.
Note – In some cases you might need to scan for new disk drives using
either the vxdiskconfig or vxdctl enable command.
Preparation
If your lab environment uses a central VxVM server instead of standalone
workstations, the ‘‘Encapsulating the System Boot Disk’’ section on page
6-4 and the ‘‘Mirroring the System Boot Disk’’ section on page 6-5 must be
performed as a demonstration on the VxVM server.
2. Which answer most accurately describes the best practice boot disk
configuration process?
a. Initialize, copy, mirror, delete
b. Encapsulate, mirror, delete, copy
c. Copy, delete, encapsulate, mirror
d. Mirror, encapsulate, initialize, copy
The answer is b.
The answer is b.
The answer is c.
The answer is a.
6. What is a key prerequisite for both hot spares and hot relocation?
a. Volumes must be striped
b. Volumes must be failure tolerant
c. Volumes must be mirrored
d. Volumes must be striped mirror
The answer is b.
The answer is b.
The answer is a.
The answer is c.
The answer is b.
The answer is c.
The answer is b.
A new disk group rootdg will be created and the disk device c0t0d0 will
be encapsulated and added to the disk group with the disk name rootdg01
VxVM vxroot INFO V-5-2-328 The Volume Manager will now set up your Boot
Disk as a managed disk.
VxVM vxroot INFO V-5-2-290 Saving original configuration...
80 blocks
rebooting...
Rebooting with command: boot
...
VxVM INFO V-5-2-3247 starting special volumes ( swapvol rootvol )...
VxVM vxvm-startup2 INFO V-5-2-503 VxVM general startup...
vxvm: NOTE: Setting partition /dev/dsk/c0t0d0s1 as the dump device.
5. Log into the system as user root and verify the boot disk
environment is correctly configured.
# vxprint
# eeprom nvramrc
# eeprom "use-nvramrc?"=true
# vxdg defaultdg
nodg
# vxdg bootdg
rootdg
Caution – This procedure assumes two identical disk drives. The primary
boot disk address is c0t0d0, and the mirror disk is c0t1d0. Ensure that
you are using the correct address or disk media name for each step in this
procedure.
At the prompt below, supply the name of the disk containing the volumes
to be mirrored.
4. Examine the OpenBoot PROM boot device aliases and verify that an
alias for the boot disk mirror was added.
# eeprom nvramrc
nvramrc=devalias vx-rootdg01
/pci@1f,4000/scsi@3/disk@0,0:a
devalias vx-rootdg02
/pci@1f,4000/scsi@3/disk@1,0:a
5. Halt the Solaris OS and boot from the VxVM boot disk device alias.
# init 0
...
ok boot vx-rootdg01
4. After the relayout has completed, examine the new volume structure
using the vxprint command and verify that the results are what
you anticipated.
5. Unmount the RAID-5 volume and delete it.
6. Create a two-disk mirrored volume, 200 Mbytes in size, with a
mounted file system.
7. Use the mkfile command to create some test files in the mirrored
volume's file system.
# mkfile 10m /Test/file1
# mkfile 20m /Test/file2
Caution – If you do not specify a destination disk drive for the move,
VxVM uses any available disk space in the disk group. This might result
in a poorly configured volume with performance problems.
Note – The disk drives should now be in the free disk pool.
Caution – Step 10 is critical. If the disk drives do not have their original
media names, the configuration reload fails.
11. Use the vxmake command to reload the saved configuration for the
volumename volume.
# vxmake -g new_dg -d volumename.save
The -d option specifies the description file to use for building
subdisks, plexes, and volumes.
12. Use the vxvol command to bring the mirrored volume back online.
# vxvol -g new_dg init active volumename
13. Mount the mirrored volume file system to return the mirrored
volume to service.
14. Unmount the mirrored volume and destroy its disk group.
15. Add all of your disk drives into a single disk group again.
3. Click the mirrored volume in the VEA GUI grid area, and select
Snapshot Interactive from the pop-up menu.
Note – Although it is not absolutely necessary, you can assign disk drives
for temporary use in the Volume Snapshot form.
7. Back up the new snapshot volume to tape (if possible). The following
example shows the process.
# fsck -y /dev/vx/rdsk/dgY/SNAP-vol02
# mkdir /vol02_backup_081202
# mount /dev/vx/dsk/dgY/SNAP-vol02 /vol02_backup_081202
# cd /
# tar cvf /dev/rmt/0 ./vol02_backup_081202
8. Unmount and delete the snapshot volume.
# umount /vol02_backup_081202
# vxedit -g dgY -rf rm SNAP-vol02
Caution – Before proceeding, ensure that all of the disks in the disk group
are initialized in a sliced format. If the disks are in a cdsdisk format, the
failure simulation will not work. If necessary, destroy the disk group and
recreate it using the vxdiskadm utility.
Note – If you intend to hot-swap the failed disk without rebooting the
system, you might also use the vxdiskadm utility option 11, Disable
(offline) a disk device, to stop all VxVM access, such as polling.
Note – In some cases you might need to scan for new disk drives using
the vxdctl enable command.
14. Use the vxdiskadm utility option 14, Unrelocate subdisks back
to a disk, to move the relocated volume components back to the
original disk location and complete the following steps:
● The only information you furnish is the media name of the
replacement disk.
● Answer no to Unrelocate to a new disk.
● Answer no to Use -f option.
15. Use the vxprint command to verify the mirrored volume is
returned to its original configuration.
The letters UR are added to the relocated subdisk name.
9. Click your disk group in the VEA GUI object tree and select New
Volume from its pop-up menu.
10. Create a mirrored ISP application volume as follows:
a. Click Next in the Select User Template window.
b. Click Data Redundancy and Data Mirroring capabilities, and
then click Next.
c. Leave the Let Volume Manager Decide button enabled, and
then click Next.
d. Complete the following steps:
1. Enter a volume name.
2. Set the size to 200 Mbytes
3. Click Next.
e. Do not create a file system on the volume.
f. Complete the following steps:
1. Enter a user template name.
2. Click Save.
3. Click Finish.
g. Use the VEA GUI and the vxprint command to examine the
resulting volume structure and storage pool organization.
h. Delete the mirrored ISP application volume.
11. Create another application volume in your data storage pool using
additional capabilities.
a. Click Next in the Select User Template window.
b. Click Mirrored DCO Logs, Data Redundancy and Data
Mirroring capabilities, and then click Next.
c. Leave the Let Volume Manager Decide button enabled, and
then click Next.
d. Complete the following steps:
1. Enter a volume name.
2. Set the size to 200 Mbytes.
3. Click Next.
e. Do not create a file system on the volume.
f. Click Finish.
12. Click your disk group in the VEA GUI object tree and select New
Storage Pool from its pop-up menu.
13. Create another storage pool in your disk group as follows:
a. Enter a storage pool name and click Next.
b. Click the RAID-5 storage pool template, and then click Next.
c. Examine the summary information, and then click Finish.
14. Display your storage pools in the VEA GUI grid area and verify that
the new storage pool is a clone pool.
15. Delete all of your storage pools and organize your disk group again
using a different storage pool set template, such as the Striped
Mirrored Data and Striped Snapshots template.
16. Create one more application volume using the new storage pool
capabilities.
The following section assumes that all disk drives and volume
components follow the default VxVM naming conventions.
5. Remove the rootdg01 disk from the rootdg disk group, and
complete the following steps:
a. Reinitialize the rootdg01 disk.
b. Add the rootdg01 disk back into the rootdg disk group.
# vxdg -g rootdg rmdisk rootdg01
# vxdisksetup -i c0t0d0 format=simple
# vxdg -g rootdg adddisk rootdg01=c0t0d0
You can now replace either the primary boot disk drive or its mirror in the
same manner as any other VxVM disk drive, and just resynchronize the
mirrors.
Exercise Summary
Manage the discussion here based on the time allowed for this module, which was given in the “About This
Course” module. If you find you do not have time to spend on discussion, highlight just the key concepts
students should have learned from the lab exercise.
● Experiences
Ask students what their overall experiences with this exercise have been. Go over any trouble spots or
especially confusing areas at this time.
● Interpretations
Ask students to interpret what they observed during any aspects of this exercise.
● Conclusions
Have students articulate any conclusions they reached as a result of this exercise experience.
● Applications
Explore with students how they might apply what they learned in this exercise to situations at their workplace.
Objectives
Upon completion of this module, you should be able to:
● Describe basic VxFS features
● Install the VxFS software
● Create VxFS file systems
● Use extended VxFS mount options
● Perform online VxFS administration tasks
7-1
Copyright 2004 Sun Microsystems, Inc. All Rights Reserved. Sun Services, Revision D
Basic VxFS Features
The VxFS intent log feature provides fast recovery following a system
crash or reboot. A file system check can be completed in seconds,
regardless of the file system size.
By allocating disk space in extents, disk I/O to and from a file can be done in units of multiple blocks, which is considerably faster than block-at-a-time operations.
The VxFS file system recovers within seconds of a system failure by using a tracking feature called intent logging. Intent logging is a logging scheme that records pending changes to the file system structure. During system recovery from a failure, the intent log for each file system is scanned, and operations that were pending are completed. The file system can then be mounted without a full structural check of the entire file system.
When the disk has a hardware failure, the intent log might not be enough
for recovery and, in such cases, a full fsck check must be performed.
However, when the failure is due to software rather than hardware, a
system can be recovered in seconds.
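The recovery model described above can be sketched as a toy in Python. This is an illustration of intent logging in general, not of VxFS internals: every structural change is recorded in a log before it is applied, so recovery only has to replay pending log entries instead of scanning the whole file system.

```python
# Toy model of intent logging (not VxFS internals). Each structural
# change is logged before it is applied; after a crash, recovery
# simply replays the pending entries.

def apply_op(fs_state, op):
    """Apply one structural change, e.g. ('create', 'file1')."""
    action, name = op
    if action == "create":
        fs_state.add(name)
    elif action == "remove":
        fs_state.discard(name)

def write_with_intent_log(fs_state, log, op, crash_before_apply=False):
    log.append(op)              # 1. record the intent first
    if crash_before_apply:
        return                  # simulated crash: op logged, not applied
    apply_op(fs_state, op)      # 2. then apply the change
    log.remove(op)              # 3. finally retire the log entry

def recover(fs_state, log):
    """Replay pending entries -- fast, independent of fs size."""
    for op in list(log):
        apply_op(fs_state, op)
        log.remove(op)

fs, log = set(), []
write_with_intent_log(fs, log, ("create", "a"))
write_with_intent_log(fs, log, ("create", "b"), crash_before_apply=True)
recover(fs, log)                # replays the pending create of "b"
```

After recovery the pending create has been completed and the log is empty, which is why the check takes time proportional to the log, not to the file system size.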
The default intent log size is currently 64 Mbytes. The fsadm command can be used to dynamically modify the intent log size. Larger intent logs can improve system performance because they reduce the number of times the log wraps around. However, an intent log that is too large can increase file system recovery time after a system failure.
Note – See the fsadm_vxfs and mkfs_vxfs man pages for more
information.
You create a VxFS file system using the mkfs command; the primary difference from creating a UFS file system is that you must specify a file system type of vxfs.
# mkfs -F vxfs /dev/vx/rdsk/dgX/mirvol
Note – Consult the mkfs_vxfs man pages for more information on other
VxFS mkfs command options, including inosize and ninode.
Note – Consult the mount_vxfs man pages for further details about other
available mount options, some of which relate to VxFS features that are
not licensed by Sun.
Online Defragmentation
The UFS software uses the concept of cylinder groups to limit
fragmentation. These are self-contained sections of a file system composed of inodes, data blocks, and bitmaps that indicate free inodes and data blocks. Allocation strategies in UFS attempt to place inodes and
related data blocks near each other. This strategy reduces fragmentation,
but does not eliminate it. Over time, the original ordering of free resources
can be lost. As files are added and removed, gaps between used areas of
the disk can still occur.
Online Resizing
When UFS file systems become too small or too large for their assigned disk space, the following methods are used to correct the problem:
● Users are moved to new or different file systems.
● Subdirectories are moved to other file systems.
● The file systems are backed up and restored to a different file system.
Preparation
VxFS is a separately licensed option. You must install a license key to
activate the software. Ask your instructor for a VxFS temporary license
key. Record the temporary key.
You can get VxVM, VxFS, and Shared Disk Group temporary licenses at the Sun Business Partners Web site
at http://webhome.ebay/partnersoftware/.
The answer is c.
The answer is d.
The answer is c.
The answer is a.
8. Verify that the VxFS man pages have been added to the
/opt/VRTS/man directory.
# man vxquot
# man vxdump
10. Verify that the special file system programs that are required early in
the boot process are present.
# ls /etc/fs/vxfs
mount qlogck system.preinstall
qlogattach qlogrec
Note – Use the Control-F sequence to enable the Adobe Acrobat Find
window.
To decrease the size of the VxFS file system created previously in this
exercise, complete the following steps.
1. Use the df -kl command to verify the amount of space available in
your new VxFS file system.
# df -kl
Filesystem kbytes used avail capacity Mounted on
....
/dev/vx/dsk/dgX/mirvol 26214400 39890 24538611 1% /VXFS
2. Use the mount command to verify that your VxFS file system was
mounted using the delaylog, largefiles, and
ioerror=mwdisable mount options.
3. Verify that you can create a file larger than 2 Gbytes in your file
system.
# mkfile 2500m /VXFS/file1
4. Delete all test files from your VxFS file system.
5. Click on the VxFS volume in the VEA GUI Object tree, and select
Resize Volume from its pop-up menu.
4. Defragment your VxFS file system. Substitute the name of your file
system.
# fsadm -de /VXFS
5. Verify that your VxFS file system's fragmentation and available space have improved.
# fsadm -D /VXFS
# df -kl
Note – The vxdump and vxrestore utilities are functionally the same as
the Solaris OS ufsdump and ufsrestore utilities.
3. Unmount your VxFS file system and mount it again using the
blkclear security option.
# umount /VXFS
# mount -F vxfs -o blkclear /VXFS
4. Use the mount command to verify that the VxFS file system’s mount
options are now read, write, setuid, blkclear, delaylog,
largefiles, and ioerror=mwdisable.
5. Run a performance test on your VxFS file system again.
# ptime mkfile 100m /VXFS/file1
real 1:10.047
user 0.078
sys 5.145
# rm /VXFS/file1
6. Unmount your VxFS file system and mount it again with no special
options.
# mount -F vxfs /VXFS
Exercise Summary
Manage the discussion here based on the time allowed for this module, which was given in the “About This
Course” module. If you find you do not have time to spend on discussion, then just highlight the key concepts
students should have learned from the lab exercise.
● Experiences
Ask students what their overall experiences with this exercise have been. You might want to go over any
trouble spots or especially confusing areas at this time.
● Interpretations
Ask students to interpret what they observed during any aspects of this exercise.
● Conclusions
Have students articulate any conclusions they reached as a result of this exercise experience.
● Applications
Explore with students how they might apply what they learned in this exercise to situations at their workplace.
Objectives
Upon completion of this module, you should be able to:
● Describe performance improvement techniques
● Use the vxstat and vxtrace performance analysis tools
● Describe RAID-5 write performance characteristics
Performance Improvement Techniques
[Figure: two storage arrays on controllers c3 and c4, one holding heavy-use volumes and the other holding low-use volumes]
In general, do not place file systems that have heavy I/O loading on the same disk drives. Separate them onto different storage arrays attached to different controllers.
Another type of performance problem can occur when a log plex is placed
on the same disk drive as its associated data plex. In the case of RAID-5
logs, you should always consider that the data written to all RAID-5
columns must also be written to the log.
As shown in Figure 8-2, leaving unused space on all disk drives ensures that you always have alternate locations to which to move logs. This is why using the Maxsize calculations of the vxassist command or the VEA might not be wise.
[Figure 8-2: log placement for Volume 01 and Volume 02]
The log placement shown in Figure 8-2 would not work well if both
volumes were heavily accessed. The configuration would work best if at
least one of the volumes has low write activity.
Striping
If you can identify the most heavily accessed volumes (for file systems or
database tables) during the initial design stages, then you can eliminate
performance bottlenecks by striping them across several devices. The
example in Figure 8-3 shows a volume (Hot_Vol) that was identified as
being a data-access bottleneck. The volume is striped across four disk
drives, leaving the remainder of those four disk drives free for use by less
heavily used volumes.
[Figure 8-3: Hot_Vol striped across four disk drives, with the remaining space on each drive holding light-use volumes]
Mirroring
RAID 0+1
Layered Volumes
RAID 5
In the example in Figure 8-4, you set the read policy of the volume labeled
Hot_Vol to prefer for the striped plex labeled Plex 1. In this way, read
requests are directed to the striped plex that has the best performance
characteristics.
You can change volume read policies either from the command line or
using the VEA GUI.
In the VEA GUI, highlight the volume in the grid area, and click Props in the toolbar. In the General properties tab, you can choose one of the following fixed read policy options:
● Based on Layout
● Round Robin
● Prefer (preconfigured)
[Figure 8-4: host system with a preferred stripe or mirror configuration across targets t1, t2, and t3 on two arrays]
The statistics include the number of reads, writes, atomic copies, verified reads, verified writes, plex reads, and plex writes for each volume. Because statistics are recorded at the volume, plex, and subdisk levels, one write to a two-plex volume results in at least five operations: one for each plex, one for each subdisk, and one for the volume.
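That accounting can be written down as a back-of-the-envelope model (an illustration of the counting rule stated above, not of the vxstat source):

```python
# Minimum number of operations recorded for one write to a volume:
# one per plex, one per subdisk, and one for the volume itself.
def min_ops_per_write(num_plexes, num_subdisks):
    return num_plexes + num_subdisks + 1

# A two-plex volume with one subdisk per plex:
print(min_ops_per_write(num_plexes=2, num_subdisks=2))  # 5
```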
A volume or disk drive with elevated read or write access times is not
necessarily a problem. If the slow response is not causing any apparent
problems for users or applications, then there might not be anything that
needs fixing.
Before obtaining statistics, clear (reset) all existing statistics by using the vxstat -r command. Clearing statistics eliminates any differences between volumes or disk drives due to volumes being created. It also removes statistics that are not typically of interest, such as information about booting.
After clearing the statistics, allow the system to run during typical system
activity. When monitoring a system that is used for multiple purposes, try
not to exercise any one application more than it would usually be
exercised.
After identifying a volume that has an I/O-related problem, you can use
the vxtrace command to determine which system process is responsible
for the I/O requests. The volume of interest in this example is named
ctrl.
# vxtrace -o dev ctrl
40122 START write vdev ctrl block 16 len 4 concurrency 1 pid 10689
40122 END write vdev ctrl op 40122 block 16 len 4 time 1
40123 START write vdev ctrl block 16 len 4 concurrency 1 pid 10689
40123 END write vdev ctrl op 40123 block 16 len 4 time 2
40124 START write vdev ctrl block 16 len 4 concurrency 1 pid 10689
40124 END write vdev ctrl op 40124 block 16 len 4 time 4
40125 START write vdev ctrl block 16 len 4 concurrency 1 pid 10689
40125 END write vdev ctrl op 40125 block 16 len 4 time 0
^C
Read-Modify-Write Operations
When less than 50 percent of the data disk drives are undergoing write
operations in a single I/O, the read-modify-write sequence is used.
[Figure: read-modify-write, in which the old data and old parity are read and XORed with the new data to produce the new parity]
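The parity arithmetic behind a read-modify-write can be sketched as follows. This is a simplified model in which each column is reduced to a single integer:

```python
# Read-modify-write parity update: read the old data and old parity,
# XOR out the old data, and XOR in the new data.
def rmw_parity(old_parity, old_data, new_data):
    return old_parity ^ old_data ^ new_data

# Four data columns of one-byte "blocks" plus a parity column.
data = [0b1011, 0b0110, 0b0001, 0b1100]
parity = data[0] ^ data[1] ^ data[2] ^ data[3]

# Rewrite column 1 using read-modify-write (2 reads + 2 writes).
new_value = 0b0101
parity = rmw_parity(parity, data[1], new_value)
data[1] = new_value

# The updated parity still equals the XOR of all data columns.
assert parity == data[0] ^ data[1] ^ data[2] ^ data[3]
```

Only the modified column and the parity column are touched, which is why this method wins when fewer than half the data columns change.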
Reconstruct-Write Operations
If more than 50 percent of the data stripe will be modified, the reconstruct-write method is used.
[Figure: reconstruct-write, in which the unmodified data columns are read and XORed with the new data to compute the new parity]
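A matching sketch of reconstruct-write, where the unmodified columns are read back and the parity is recomputed from scratch, together with the 50-percent rule that chooses between the two methods (again a simplified model, not VxVM code):

```python
from functools import reduce

# Reconstruct-write: recompute parity as the XOR of every data
# column, new and unchanged alike.
def reconstruct_parity(columns):
    return reduce(lambda a, b: a ^ b, columns)

# The 50-percent rule described above (simplified): read-modify-write
# when fewer than half the data columns change, reconstruct-write
# otherwise.
def choose_method(columns_written, data_columns):
    if columns_written < data_columns / 2:
        return "read-modify-write"
    return "reconstruct-write"

data = [0b1011, 0b0110, 0b0001, 0b1100]
data[0], data[1], data[2] = 0b0000, 0b1111, 0b1010  # 3 of 4 columns change
parity = reconstruct_parity(data)

print(choose_method(columns_written=3, data_columns=4))  # reconstruct-write
assert parity == data[0] ^ data[1] ^ data[2] ^ data[3]
```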
Preparation
Unless your instructor says otherwise, the instructor conducts the
performance demonstrations on the VxVM server system.
The answer is c.
The answer is d.
The answer is b.
The answer is b.
The answer is c.
The answer is a.
The answer is c.
Note – In this task, all steps are directed toward the instructor. The words
you and your refer to the instructor, not the students.
Note – As you move past the 50-percent stripe write into full-stripe write,
the I/O should move through the three write categories (M, W, and F). The
full stripe width is 16 Kbytes times 5 columns, which is equal to
80 Kbytes.
Tell the class that a full-stripe write covers only four of the five columns because one column is always used for parity. This is evident in the last test loop, which uses a 64-Kbyte transfer size.
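The arithmetic in the notes above can be checked directly, using the column count and stripe-unit size given there:

```python
# Full stripe width of the RAID-5 demo volume: a 16-Kbyte stripe
# unit across 5 columns, of which one holds parity.
stripe_unit_kb = 16
columns = 5

full_stripe_kb = stripe_unit_kb * columns        # includes parity
data_stripe_kb = stripe_unit_kb * (columns - 1)  # usable data per stripe

print(full_stripe_kb)  # 80
print(data_stripe_kb)  # 64 -- matches the 64-Kbyte transfer size
```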
2. Run the r5demo.sh script again, but substitute the name and path of
the new striped volume, stdemo.
Enter disk group name (default: dgX)
Enter the name of the demo volume (default: r5demo) stdemo
Enter the raw path to the demo volume (default: /dev/vx/rdsk/dgX/r5demo)
/dev/vx/rdsk/dgX/stdemo
Enter data file location (default: /testfile)
Note – The vxstat command does not display any statistics for the
striped volume, but the ptime results are informative.
Exercise Summary
Manage the discussion here based on the time allowed for this module, which was given in the “About This
Course” module. If you find you do not have time to spend on discussion, highlight just the key concepts
students should have learned from the lab exercise.
● Experiences
Ask students what their overall experiences with this exercise have been. Go over any trouble spots or
especially confusing areas at this time.
● Interpretations
Ask students to interpret what they observed during any aspects of this exercise.
● Conclusions
Have students articulate any conclusions they reached as a result of this exercise experience.
● Applications
Explore with students how they might apply what they learned in this exercise to situations at their workplace.