Veritas Storage
Foundation 6.0 for
Windows
100-002707-A
COURSE DEVELOPERS
Gene Henriksen
Paul Johnston
TECHNICAL
CONTRIBUTORS AND
REVIEWERS
Ankit Vaishnav
Feng Liu
Jason Chu
Rahul Sarda
Roshan Swamy
Sachin Dhorage
Vikram Kamat
Wally Heim
Table of Contents
Course Introduction
Storage virtualization............................................................................... Intro-2
Veritas Storage Foundation for Windows................................................ Intro-6
Course overview.................................................................................... Intro-10
Additional resources.............................................................................. Intro-12
Typographic conventions used in this course ....................................... Intro-13
Lesson 1: Virtual Objects
Physical and virtual data storage ................................................................. 1-3
SFW storage objects .................................................................................. 1-11
SFW RAID levels and volume layouts........................................................ 1-14
Lesson 2: Installing and Accessing SFW Interfaces
Storage Foundation 6.0 for Windows: Overview .......................................... 2-3
Installing SFW 6.0 ...................................................................................... 2-11
Upgrading to SFW 6.0................................................................................ 2-25
SFW user interfaces................................................................................... 2-28
Lesson 3: Working with Disk Groups and Volumes
Preparing disks and disk groups for volume creation................................... 3-3
Creating a volume ...................................................................................... 3-13
Displaying disk, disk group, and volume information ................................. 3-20
Removing volumes, disks, and disk groups ............................................... 3-32
Copyright 2012 Symantec Corporation. All rights reserved.
Course Introduction
CONFIDENTIAL - NOT FOR DISTRIBUTION
Storage virtualization
In this topic, you will learn how storage virtualization addresses the challenges of
storage management.
Challenges of storage management
Administrators must have the tools to skillfully manage large, complex, and
heterogeneous environments in order to create an efficient environment. Storage
virtualization helps businesses to simplify the complex IT storage environment
and gain control of capital and operating costs by providing consistent and
automated storage management.
With storage virtualization, the physical aspects of storage are masked to users.
Administrators can concentrate more on delivering access to necessary data and
less on physical aspects of storage.
The type of storage virtualization that you use depends on the following factors:
Heterogeneity of deployed enterprise storage arrays
Need for applications to access data contained in multiple storage devices
Importance of uptime when replacing or upgrading storage
Need for multiple hosts to access data within a single storage device
Value of the maturity of technology
Investments in a SAN architecture
Level of security required
Level of scalability needed
Through the support of RAID redundancy techniques, SFW protects against disk
and hardware failures, while providing the flexibility to extend the capabilities of
existing hardware.
Manageability
Storage management tasks are performed online in real time, eliminating the
need for planned downtime.
You can manage all online storage from an intuitive graphical user interface.
Storage Foundation provides consistent management across Windows, Solaris,
HP-UX, Linux, and AIX platforms.
Storage Foundation provides additional benefits for array environments, such
as inter-array mirroring.
Availability
Integrity of storage is maintained by true mirroring across all write operations.
Through software RAID techniques, storage remains available in the event of
hardware failure.
Hot relocation guarantees the rebuilding of redundancy in the case of a disk
failure.
Recovery time is minimized with logging and background mirror
resynchronization.
Performance
I/O throughput can be maximized by measuring and modifying volume layouts
while storage remains online.
Performance bottlenecks can be located and eliminated by using SFW analysis
tools.
Scalability
SFW 6.0 runs on 64-bit operating systems.
Storage can be deported from smaller platforms and then imported to larger
enterprise platforms seamlessly.
SFW can add new disk space to existing volumes.
Storage devices can be spanned.
When using SFW with RAID arrays, you can leverage the strengths of both
technologies:
You can use SFW to mirror between arrays to improve disaster recovery
protection against the failure of an array, particularly if one array is remote.
Arrays can be of different makes and types; that is, one array can be a RAID
array, and the other can be a JBOD.
SFW facilitates data reorganization and maximizes available resources.
SFW improves overall performance by making I/O activity parallel for a
volume through more than one I/O path (to and within the array).
You can use snapshots with mirrors in different locations, which is beneficial
for disaster recovery and off-host processing.
If you include Veritas Volume Replicator (VVR) in your environment, you can use
VVR to provide hardware-independent replication services.
SFW virtualizes both the physical disks and the logical LUNs that are presented
by a RAID array. Modifying the configuration of a RAID array may result
in changes in SCSI addresses of LUNs, which require modification of application
configurations. SFW provides an effective method of reconfiguring and resizing
storage across the logical devices that are presented by a RAID array.
Course overview
This training provides instruction on fundamental operational management
procedures for Veritas Storage Foundation for Windows.
Course resources
Administration course
This SFW Administration course covers fundamental operational management
procedures for Veritas Storage Foundation for Windows. This training shows you
how to install and configure Storage Foundation and how to manage disks, disk
groups, and volumes. This course also covers offline and off-host processing and
introduces basic troubleshooting, performance tuning, and recovery techniques.
Appendix A: Lab exercises
This section contains hands-on exercises that enable you to practice the concepts
and procedures that are presented in the lessons.
Additional resources
Reference book
Virtual Storage Redefined by Paul Massiglia provides an introduction to high-availability storage environments.
Title: Virtual Storage Redefined: Technologies and Applications for Storage
Virtualization
Author: Paul Massiglia with Frank Bunn
Publisher: Veritas Publishing
The following Symantec Web sites contain links to product guides, white papers,
demos, Webcasts, and other SFW-related information:
http://www.symantec.com/business/storage-foundation-high-availability-for-windows
https://sort.symantec.com/documents/doc_details/sfha/6.0/Windows/ProductGuides/
Typographic conventions used in this course
The following conventions are used throughout this course. All of the examples
below appear in Courier New font:
- File names and paths: C:\Program Files\Veritas\Veritas Volume Manager\logs\vxcli.log, C:\Program Files
- CLI command: vxdg list; The adddisk command...; the syntax to display the message Hi there! using the CLI is: echo Hi there!
- User input: shown with an instruction to type it
- System output: Hi there!
- URL: http://www.symantec.com
- CLI command or user input with a variable: if the surrounding text is bold, variables are nonbold; if the surrounding text is nonbold, variables are bold.
Note that all text in the PowerPoint slides is in Calibri font, and all regular text
in the FrameMaker book is in Times New Roman.
Lesson 1
Virtual Objects
A basic disk is a physical disk that contains primary partitions, extended partitions,
or logical drives. Partitions and logical drives on basic disks are known as basic
volumes. You can only create basic volumes on basic disks. On Windows, physical
disks are automatically initialized as basic disks. On a basic disk, storage space is
organized as basic volumes.
With basic disks, you can have up to four primary partitions, or three primary
partitions and one extended partition.
Dynamic disks
You can convert basic disks to dynamic disks. On dynamic disks, storage space is
organized as dynamic volumes. These dynamic volumes can span multiple disks
and can provide fault tolerance.
A dynamic disk contains dynamic volumes such as simple volumes, spanned
volumes, striped volumes, mirrored volumes, and RAID-5 volumes. Dynamic
disks use a database to track information about dynamic volumes on the disk and
about other dynamic disks in the computer. Each dynamic disk stores a replica of
the dynamic disk database. Therefore, you can repair a corrupted database on one
dynamic disk by using the database on another dynamic disk.
Some key advantages of dynamic disks include:
Using multidisk volumes to make better use of your available disk space by
combining areas of unallocated space in volumes
Improving I/O performance by providing for more I/O concurrency through
the use of multi-disk volume layouts
Making volumes fault tolerant
Note: Dynamic disks are not supported in Windows 7, Vista, and XP Home
Edition.
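Because each dynamic disk carries a replica of the disk group database, a corrupted copy can be rebuilt from any intact one. The following is a minimal Python sketch of that replica-repair idea; the disk names, checksum scheme, and data layout are hypothetical illustrations, not the actual on-disk format:

```python
import hashlib


def checksum(config: bytes) -> str:
    """Content hash stored alongside each replica of the database."""
    return hashlib.sha256(config).hexdigest()


def repair_replicas(replicas: dict) -> dict:
    """Replace corrupted replicas with a copy taken from any healthy disk.

    replicas maps disk name -> (config_bytes, stored_checksum).
    A replica is healthy when its stored checksum matches its content.
    """
    healthy = next(
        (cfg for cfg, stored in replicas.values() if checksum(cfg) == stored),
        None,
    )
    if healthy is None:
        raise RuntimeError("no intact replica of the disk group database")
    # Every disk in the group receives a fresh copy of the healthy database.
    return {disk: (healthy, checksum(healthy)) for disk in replicas}
```

In the same spirit as the text above, a corrupted database on one dynamic disk is repaired using the copy held by another disk in the group.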
Disk arrays
A collection of identical physical disks is referred to as a disk array. In addition to
the physical disks, a typical disk array can also contain a hardware controller,
cache memory, and power supply. The hardware controller allows balanced I/O
across multiple disks.
The rate at which I/O operations to an individual disk can be performed is limited
by the physical characteristics of the mechanical components from which it is
made.
Reads and writes on unmanaged physical disks are slow processes. Disk arrays and
multipathed disk arrays improve I/O speed and throughput.
Individual disks in a disk array are generally identified by logical unit numbers, or
LUNs. A LUN is a logical reference to a portion of a storage subsystem. A LUN
can comprise a disk, a section of a disk, a whole disk array, or a section of a disk
array in the subsystem. The LUNs are perceived and handled by the OS as physical
disks.
Multipathed disk arrays
Some disk arrays provide multiple ports to access the physical disks. These ports,
coupled with the host bus adaptor (HBA) controller and any data bus or I/O
processor local to the array, constitute multiple hardware paths to access the disk
devices. This type of disk array is called a multipathed disk array. A multipathed
disk array provides multiple ports to access a disk array.
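The behavior described above (rerouting around a failed path, spreading I/O across the surviving ones) can be sketched with a simple round-robin selector. This is an illustrative model only; the path names are invented, and real multipathing software also weighs path load and health:

```python
import itertools


class MultipathDevice:
    """Round-robin selection over the hardware paths to one disk array."""

    def __init__(self, paths):
        self._paths = list(paths)
        self._cycle = itertools.cycle(self._paths)

    def next_path(self):
        # Each I/O takes the next path in rotation, balancing the load.
        return next(self._cycle)

    def fail_over(self, dead_path):
        # A transfer that fails on one path is rerouted by dropping that
        # path from the rotation; remaining paths keep serving I/O.
        self._paths.remove(dead_path)
        self._cycle = itertools.cycle(self._paths)


# Hypothetical two-path device: one path per HBA/array-port pair.
dev = MultipathDevice(["hba0:port0", "hba1:port1"])
```

After `dev.fail_over("hba0:port0")`, all subsequent I/O flows through the surviving path until the failed one is restored.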
All users and applications access volumes as contiguous address space in a manner
similar to accessing a disk partition.
Each of the preceding features is discussed in detail in the lessons that follow.
However, FastResync, volume snapshots, and dynamic disk group split and join
require an additional license.
SFW 6.0 for Windows runs on Windows Server 2008. When you install SFW 6.0
on Windows Server 2008, Windows Disk Management and SFW can coexist. In
previous versions, coexistence was not supported: installing SFW 5.1 on
Windows Server 2003 makes Disk Management (previously known as Logical
Disk Manager, or LDM) unavailable.
Note: The Windows Server 2008 Disk Management does not support virtual
objects created by SFW, such as dynamic disks or dynamic volumes.
When a basic disk is upgraded to a dynamic disk, SFW creates two regions on the
disk:
Public region: The public region consists of the majority of the space on the
disk. The public region represents the available space that SFW uses to assign
to volumes and the space where applications store user data.
Private region: The private region stores metadata, that is, information about
virtual objects. The private region contains information about the SFW name
for the disk, which system owns that disk, which disk group the disks belong
to, and a globally unique identifier (GUID) for that disk.
The private region may also contain a database with all the objects associated
with that disk and other disks in its disk group. For redundancy, this database is
reproduced to the private region of multiple dynamic disks in the disk group.
The private region's ability to hold this information means that the disk group
is not dependent on the disk information present in the registry of the operating
system. Further, because both the private region and the public region are
created on the dynamic disk itself, the disk can be moved between systems. The
disk group can, therefore, be switched between systems without your having to
update registry information.
The default size of the private region is 1 MB, and it is located at the end of the
disk. This region is a small management overhead.
A disk group is a collection of SFW disks. Disks are grouped into disk groups for
ease of management. Organizing disks based on their usage facilitates tasks such
as server and storage provisioning and capacity management. For example, you
can organize data for accounting applications in a disk group called acctdg. A disk
group configuration is a set of records with detailed information about related
SFW objects in a disk group, their attributes, and their connections.
SFW objects cannot span disk groups. For example, a volume's subdisks, plexes,
and disks must be derived from the same disk group as the volume. You can create
additional disk groups as necessary. Disk groups enable you to group disks into
logical collections. Disk groups and their components can be moved as a unit from
one host machine to another.
Dynamic disks
A dynamic disk represents the public region of a physical disk that is under SFW
control. Some key advantages of dynamic disks include:
Using multidisk volumes to make better use of your available disk space by
combining areas of unallocated space in volumes
Improving I/O performance by providing for more I/O concurrency through
the use of multi-disk volume layouts
Making volumes fault tolerant
Volumes
A volume is a virtual object that stores data and consists of one or more plexes,
dependent on its layout. The plexes may map to multiple physical disks. A volume
is used by applications in a manner similar to a physical disk. Due to their virtual
nature, volumes are not restricted by the physical size constraints that apply to a
physical disk. A volume can span multiple disks.
A volume can be formatted with a file system and can be accessed by a drive letter
or a mount path.
Plexes
SFW uses subdisks to build virtual objects called plexes. A plex is a structured or
ordered collection of subdisks that represents one copy of the data in a volume. A
plex consists of one or more subdisks located on one or more physical disks. The
default plex name is volumename-##. A non-mirrored volume contains only one
data plex. However, mirrored volumes contain two or more plexes, one for each
copy of data. In short, a plex is a copy of data within a volume.
Subdisks
A dynamic disk can contain one or more subdisks, each of which is a range of
contiguous disk blocks in the disk's public region. Subdisks are used for allocating
space in a volume and cannot overlap. Blocks on a dynamic disk that are not part
of a subdisk are considered to be unallocated (or free) space, which can be used to
create new volumes or to extend existing ones. You can relocate subdisks to other
disks in the same disk group to improve performance or to alleviate hot spots
(these topics are discussed in other lessons).
In the slide example, the accdg disk group comprises the expvol and payvol
volumes. The expvol volume has one expvol-01 plex. The payvol volume has two
plexes named payvol-01 and payvol-02, where the payvol-02 plex is a mirror
image of the payvol-01 plex. The expvol-01 plex contains the Disk1-01, Disk2-02
and Disk3-02 subdisks, while the payvol-01 plex contains the Disk1-02 and
Disk2-01 subdisks and the payvol-02 plex consists of the Disk3-01 subdisk.
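The containment rules in the slide example can be sketched as a small data model. This is purely an illustration of the object hierarchy (disk group holds volumes, a volume holds plexes, a plex holds subdisks); the class names are hypothetical and do not correspond to any SFW API:

```python
from dataclasses import dataclass, field


@dataclass
class Plex:
    name: str
    subdisks: list  # ordered names of contiguous subdisk ranges


@dataclass
class Volume:
    name: str
    plexes: list = field(default_factory=list)

    @property
    def is_mirrored(self) -> bool:
        # Two or more data plexes means redundant copies of the data.
        return len(self.plexes) >= 2


@dataclass
class DiskGroup:
    name: str
    volumes: list = field(default_factory=list)


# Rebuild the slide example: accdg holds expvol (one plex) and payvol
# (two plexes, where payvol-02 mirrors payvol-01).
accdg = DiskGroup("accdg", [
    Volume("expvol", [Plex("expvol-01", ["Disk1-01", "Disk2-02", "Disk3-02"])]),
    Volume("payvol", [
        Plex("payvol-01", ["Disk1-02", "Disk2-01"]),
        Plex("payvol-02", ["Disk3-01"]),
    ]),
])
```

Walking this structure shows at a glance that payvol is mirrored while expvol is not.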
Basic volumes: In SFW, basic volumes refer to all the volumes that are on basic
disks (primary and extended partitions and logical drives). With SFW, you can
manage these basic volumes.
Volume layouts
Volume layouts are available for each RAID level and define how SFW writes data
across the disks in a volume. Each volume layout contains at least one plex (two
or more if mirrored) and one or more subdisks; the volume layout defines how
those subdisks and plexes are accessed.
Disk spanning
Disk spanning is the combining of disk space from multiple physical disks to form
one logical drive. Disk spanning has two forms:
Concatenation: Concatenation is the mapping of data in a linear manner
across two or more disks.
Striping: Striping is the interleaving of data in equal-sized stripe units
across two or more disks.
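Concatenation's linear mapping can be sketched as follows: a byte offset into the volume is translated to a position on whichever subdisk covers that part of the combined address space. The subdisk names and sizes are hypothetical, and real extents are measured in disk blocks rather than individual bytes:

```python
def locate_block(volume_offset: int, subdisks: list) -> tuple:
    """Map a volume offset to (subdisk name, offset within that subdisk).

    subdisks is an ordered list of (name, length) pairs; the concatenated
    volume is their back-to-back linear address space.
    """
    for name, length in subdisks:
        if volume_offset < length:
            return name, volume_offset
        volume_offset -= length
    raise ValueError("offset lies beyond the end of the volume")


# Two subdisks concatenated: the first 100 units live on Disk1-01 and the
# next 50 units continue seamlessly on Disk2-01.
layout = [("Disk1-01", 100), ("Disk2-01", 50)]
```

For example, offset 120 into this 150-unit volume falls 20 units into the second subdisk, even though the application sees one contiguous drive.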
Data redundancy
To protect data against disk failure, the volume layout must provide some form of
data redundancy. Redundancy is achieved in two ways:
Mirroring: Mirroring refers to maintaining two or more copies of volume
data. A mirrored volume uses multiple plexes to duplicate the information
contained in a volume. Although a volume can have a single plex, at least two
plexes are required for true mirroring (redundancy of data). Each of these
plexes must contain disk space from different disks for the redundancy to be
useful.
Parity: Parity is a value calculated using an exclusive OR (XOR) Boolean
function that is written to the volume along with the data. It is written to a
different column (or stripe unit) within each stripe and is used in the event of a
failure to reconstruct the lost data.
In comparison to the performance of striped volumes, write throughput of
RAID-5 volumes is slower, because parity information needs to be calculated
each time data is written. However, in comparison to mirroring, the use of
parity reduces the amount of space required.
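The XOR property behind parity reconstruction can be demonstrated in a few lines. This is a conceptual sketch with made-up two-byte stripe units, not the RAID-5 implementation itself:

```python
def xor_blocks(blocks):
    """XOR equally sized byte blocks together (the Boolean parity function)."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)


# Three data stripe units and their parity, as in one RAID-5 stripe.
d0, d1, d2 = b"\x0f\x0f", b"\xf0\x01", b"\x33\x44"
parity = xor_blocks([d0, d1, d2])

# If the column holding d1 fails, XOR-ing the surviving columns with the
# parity reconstructs the lost data.
rebuilt_d1 = xor_blocks([d0, d2, parity])
```

Because XOR is its own inverse, any single lost column (data or parity) can be rebuilt from the others, which is why one parity column per stripe costs far less space than a full mirror.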
Lab exercises and lab solutions for this lesson are located in the following
appendices:
Appendix A provides step-by-step lab instructions.
Appendix B provides complete lab instructions and solutions.
Lesson 2
Installing and Accessing SFW Interfaces
The Veritas Storage Foundation and High Availability Solutions 6.0 for Windows
release packages four Symantec products, which are listed as follows:
1 Veritas Storage Foundation and High Availability Solutions (SFWHA) 6.0
2 Veritas Storage Foundation (SFW) 6.0
3 Veritas Cluster Server (VCS) 6.0
4 Dynamic Multi-Pathing (DMP) 6.0
These products are used for enterprise data management and protection, high
availability, and disaster recovery in a Microsoft Windows environment.
The four products are listed and available for installation on a single Symantec
Product Installer GUI screen. In previous releases, these products were listed and
installed on different GUI screens. The installables for these four products are now
packaged and shipped on a single DVD or CD.
Veritas Volume Replicator (VVR): If you select this option, you do not have
to install VVR separately; it is installed during the SFW installation.
Dynamic multipathing: This option adds fault tolerance to disk storage by
using multiple paths between a computer and individual disks in an attached
disk storage system. Disk transfers that fail because of a path failure are
automatically rerouted to an alternate path. With dynamic multipathing, you
can configure, manage, and obtain status information about these multiple
paths. Dynamic multipathing also improves performance by allowing load
balancing among paths.
Replace Disk Management Snap-in with SFW VEA GUI: This option
replaces the Disk Management snap-in in the Windows Computer
Management and Server Manager consoles with the SFW Veritas Enterprise
Administrator (VEA) GUI. SFW then takes control of all disks and handles all
disk administration. This option is available only when you install SFW, and is
not available when you install SFWHA.
Microsoft Failover Cluster: This option provides failover and increased
availability of applications and services when a node fails in a Microsoft
cluster environment. SFW can support up to eight nodes in a cluster
environment that is set up under the Microsoft cluster software on Windows
Server 2008. This optional feature requires that the cluster hardware and the
Microsoft cluster software be installed and a cluster be set up before installing
SFW.
Veritas Cluster Server Options: This option provides failover and high
availability of applications and services when a node fails in any clustered
environment. SFW HA provides built-in VCS support to set up cluster disk
groups for a VCS cluster on a Windows Server 2008 system.
VCS monitors systems and services on a cluster and fails over services to a
different system in case of a system crash or a service failure. VCS provides
policy-based, application-focused failover management, which enables
applications to be failed over to any server in the cluster or SAN environment
and to consecutive servers as necessary. VCS supports up to 32-node clusters
in SAN and traditional client-server environments.
Fast Failover: This option improves failover time for the storage stack
configured in a clustered environment. Fast failover provides significant
reduction in the failover time taken by storage resources during service group
failovers.
Global Cluster Option (GCO): The GCO option allows for the management
of multiple VCS clusters and their applications from a single console. GCO is
also a disaster recovery tool that facilitates replication support after a site
failure. GCO ensures that applications are failed over, in addition to data. GCO
is for cluster-to-cluster failover.
SFW editions
Veritas Storage Foundation for Windows and Veritas Storage Foundation for
Windows High Availability 6.0 are available in both the standard and the
enterprise license editions. The Enterprise edition includes all the available
product options. However, the Standard edition does not include the FlashSnap,
Microsoft Failover Cluster, and Dynamic Multi-path options.
SFW Basic is a limited version of the Veritas Storage Foundation for Windows
software. SFW Basic has the same functions as SFW, except that SFW Basic is
limited in the number of dynamic volumes that it can support. The Veritas DMP
option is included; no other options are available in SFW Basic.
SFW Basic is a free version of the software and is limited to four volumes and two
processors for each physical server. SFW Basic needs to be downloaded
separately, and is not shipped with the SFW DVD/CD.
Keyless license
This is a new licensing scheme that lets you install SFW without any license key.
This scheme requires the host to be connected to and managed by Veritas
Operations Manager (VOM) 4.0. VOM is an application that lets you monitor and
manage the operations of Storage Foundation and Veritas Cluster Server
installations on multiple operating systems.
The Keyless license is similar to the evaluation license, and you can now use the
Keyless license for 60 days.
Note: Evaluation license keys are now deprecated.
Typically, if you install the product using the Keyless option, a message is logged
every day in the Event Viewer indicating that you must configure VOM within 60
days of product installation. Failing this, a non-compliance error is logged every
four hours.
Symantec requires that you perform one of the following tasks to convert the
keyless license into a permanent license:
Add the system as a managed host to a Veritas Operations Manager
Management Server.
Add an appropriate and valid license key on this system using the Symantec
Product Installer from Windows Add/Remove Programs.
The Veritas Storage Foundation for Windows software consists of the following
components:
Client software: The client software includes the console, the Veritas
Enterprise Administrator (VEA). The client enables you to configure and
manage storage attached to both local and remote hosts.
Server software: The server software, which runs on a managed node, is the
common repository for all storage objects.
Providers: The providers run on a managed server. Providers are similar to
drivers. Each provider manages a specific hardware or software storage
component. For example, there is a disk provider that manages all devices that
Windows Server 2008 detects as disks.
Providers discover the existing physical and logical entities and store that
information in the SFW distributed database. Providers update the database
whenever there is a change in the physical or logical entities present in the
hardware or software. For example, if a disk array is disconnected or turned
off, the providers for that array (disk, HBA, and possibly DMP) notify the
server software of the loss of service for that array. If the array is later
reconnected, the providers again notify the server software of the change in
service.
If you are installing DMP as an option, ensure that you have at least two I/O paths
from the server to the storage array for load balancing to happen.
The VVR Option requires a static IP for replication. If you install this option,
ensure that the system has at least one IP address configured that is not assigned by
Dynamic Host Configuration Protocol (DHCP).
Refer to the hardware and software compatibility lists, available from the
Symantec Support Web site, for detailed information about the system
requirements for installing SFW.
The installer lists the four products on its GUI. It also displays the Late Breaking
News link, the Windows Data Collector link, the SORT link, the Technical
Support link, and the links to basic SFW documentation.
The Late Breaking News link provides access to the latest information about
updates, patches, and software issues regarding this release. The Windows Data
Collector link is used to verify that your configuration meets all pertinent software
and hardware requirements. The SORT link takes you to the Symantec Operations
Readiness Tools site.
For all SFW documentation, including release notes and product guides, refer to
the following Web site:
https://sort.symantec.com/documents/doc_details/sfha/6.0/Windows/ProductGuides
Support resources
With each new release of the SFW software, changes are made that may affect the
installation or operation of SFW in your environment. By reading version release
notes and installation documentation that are included with the product, you can
stay informed of any changes.
For more information about specific releases of Veritas Storage Foundation, visit
the Support Web site at:
http://www.symantec.com/business/support/index?page=landing&key=15227
This site contains product and patch information, a searchable knowledge base of
technical notes, access to product-specific news groups and e-mail notification
services, and other information about contacting technical support staff.
For specific information on hardware and software compatibility, search for the
HCL and SCL compatibility documents and locate the latest version.
In order to install SFW, you need to click the Install Server Components button.
This button is present at two locations: under the Veritas Storage Foundation and
High Availability Solutions 6.0 tab and under the Veritas Storage Foundation
6.0 tab.
Read the License Agreement, select the I accept the terms of the License
Agreement option, and click Next.
In the Product Options window, select all the SFW options you want to install by
marking the appropriate check boxes. Click Next to continue.
If you select the User entered license key as your license scheme, the License
Details panel is displayed by default. On the License Details panel, type the
license key and then click Add. You can add multiple licenses for the
various product options you want to use.
In the Pre-install Summary screen, the installer checks the prerequisites for the
selected computers and displays the results in the Pre-install Report window. You
can review the report, and select the Save Report button to save the report. You
can mark the check box if you want to automatically reboot after the installation
completes, and click Next to proceed.
The Installation screen displays status messages and the progress of the
installation. If an installation fails, click Next to review the report and address the
reason for failure. You may have to either repair the installation or uninstall and reinstall. When the installation completes, review the status message and click Next.
Rebooting
Before you upgrade the product, you must upgrade your Windows operating
system to the supported minimum level. Symantec recommends that you perform
the Windows upgrade before upgrading the product.
The upgrade procedure is the same as the procedure followed earlier in this
lesson for installing the server components using the Symantec Product Installer
wizard.
Refer to the Veritas Storage Foundation and High Availability Solutions 6.0 for
Windows Installation and Upgrade Guide for more information on upgrading.
VEA GUI
VEA is a Java-based interface that consists of a server and a client. The VEA
server is installed as part of the SFW installation. You can install the VEA client on a
Windows machine that is running SFW. The VEA client can run on any machine
that supports the Java 1.4 Runtime Environment, which can be Windows, Solaris,
HP-UX, AIX, or Linux.
In the Help window, you can view help information in three ways:
Click a topic under the Contents tab.
Select a topic under the alphabetical index listing under the Index tab.
Search for a specific topic by clicking the Search tab.
Command-line interface
The Storage Foundation command-line interface (CLI) provides commands used
for administering SFW from the command window on a Windows system. You
can execute CLI commands individually for specific tasks or combine them into
scripts.
The SFW command set ranges from commands requiring minimal user input to
commands requiring detailed user input. Many of the SFW commands require an
understanding of Storage Foundation concepts. Most SFW commands require
administrator or other appropriate access privileges.
SFW has a log that captures commands issued through the CLI and the system
response to each command. The log file, vxcli.log, is typically located at
C:\Program Files\Veritas\Veritas Volume Manager\logs.
VOM
VOM is required for keyless licensing. You can convert a keyless license into a
permanent license by adding the system with the keyless license as a managed host
to a VOM Server. A VOM management server is a centralized management server
that you must set up before you can configure an SFW host as a managed host.
You can configure an SFW host as a managed host or as a stand-alone host.
This course does not cover VOM and managed hosts. For more information on
VOM, refer to the Veritas Operations Manager 4.1 Administrator's Guide and the
following Web sites:
https://sort.symantec.com/vom
https://sort.symantec.com/documents/doc_details/vom/4.1/Windows%20and%20UNIX/ProductGuides/
Lab exercises and lab solutions for this lesson are located in the following
appendices:
Appendix A provides step-by-step lab instructions.
Appendix B provides complete lab instructions and solutions.
Lesson 3
The private region is 1 MB in size and is located at the end of the disk. The private
region contains a database with disk configuration information, such as disk group
and volume information.
You can convert multiple basic disks to dynamic disks at one time. Also, you can
directly convert empty basic disks and basic disks containing partitions or logical
drives.
Empty basic disks: These disks are converted to empty dynamic disks. You
can create dynamic volumes on the new dynamic disks.
Basic disks containing partitions or logical drives: Storage Foundation
converts the partitions and logical drives into subdisks, and then places plex
(mirror) and volume wrappers around the appropriate subdisk objects. This
creates a dynamic volume of the appropriate type, preserving all data contained
within.
After a signature is written on a disk, the disk is displayed as a basic disk. You can
then create partitions on the basic disk, or you can upgrade the disk to dynamic and
create volumes. To write a signature on a disk, right-click the unsigned disk and
select Write Signature.
Disk groups ease the administration of high availability environments. Disk drives
can be shared by two or more hosts, but accessed by only one host at a time. If one
host crashes, the other host can take over its disk groups and therefore, its disks.
A dynamic disk group is created when the first basic disk in the system is upgraded
to dynamic. You can have multiple dynamic disk groups. You can create a new
dynamic disk group whenever you upgrade a basic disk to dynamic. A dynamic
disk is limited to one dynamic disk group; it cannot participate in multiple
dynamic disk groups. Disks within a dynamic disk group share a common
configuration. Dynamic volumes are created within a dynamic disk group and are
restricted to using disks within that group.
Primary and secondary dynamic disk groups
In SFW, the primary dynamic disk group is the disk group that contains the
computer's boot or system disk. Only one primary dynamic disk group can exist
on a single host computer. Additional groups that are created or imported on that
computer are secondary dynamic disk groups.
However, an SFW dynamic disk group cannot be a primary disk group. A primary
disk group in Windows Server 2008 must be a Microsoft Disk Management disk
group. All SFW disk groups are secondary.
Note: A primary disk group upgraded from Microsoft Disk Management with
SFW running on Windows Server 2008 always becomes a secondary
dynamic disk group.
Microsoft Disk Management disk group
A Microsoft Disk Management (MDM) disk group contains dynamic disks that
are controlled by Windows Server 2008. SFW can create and manage a Microsoft
Disk Management disk group on Windows Server 2008. However, dynamic
disks belonging to a Microsoft Disk Management disk group do not support many
SFW features, including operations on subdisks, private dynamic disk group
protection, cluster disk groups, hot relocation, S.M.A.R.T. monitoring, evacuate
disk, replace disk, changing the internal name of a disk, shred volume, and so on.
Cluster disk groups
SFW has a special category of dynamic disk groups for disks involved in the
support of clustering software, such as Veritas Cluster Server (VCS) or Microsoft
Failover Cluster (MFC). These shared dynamic disk groups are called cluster disk
groups.
A cluster disk group is designed to be moved between systems under the control of
cluster management software, such as Veritas Cluster Server. The new system then
starts up any applications needed to make use of that group, such as Oracle or
Exchange. A cluster disk group has multiple layers of protection so that only one
system in the cluster can use the disk group at any time. This prevents data
corruption.
For information about using clusters effectively, refer to the Veritas Cluster Server
6.0 Administrator's Guide or Microsoft Failover Cluster documentation.
Private dynamic disk group protection
This SFW feature enables you to partition shared storage on a SAN or shared array
for exclusive ownership by a single machine. You partition the storage by using a
secondary dynamic disk group. The private dynamic disk group protection feature
provides hardware locking to the disks in the secondary dynamic disk group
through a SCSI reservation thread.
Select:
Navigation path:
Input:
-R
-TLDM
-s
Refer to the Veritas Storage Foundation 6.0 for Windows Administrator's Guide
for detailed information on CLI commands.
Navigation path:
Input:
When the disk is placed under SFW control, the Type property changes to
Dynamic, and the Status property changes to Imported.
Note: There is no space between -g and the dynamic group name. If a space is
present, SFW assigns a random name to the disk group.
Creating a volume
This section covers all the steps involved in creating a volume.
Creating a volume: VEA
The steps involved in creating a volume using the New Volume Wizard are listed
on the slide.
Selecting disks
A disk group
Navigation path:
Input:
By default, SFW locates available space on all disks in the disk group and assigns
the space to a volume automatically based on the layout you choose.
Select:
Mirror Info: Symantec recommends mirroring. To mirror the volume, mark the
Mirrored check box.
In the Add Drive Letter and Path window, select one of the three choices:
Assign a drive letter: Use the pull-down menu to select a drive.
Do not assign a drive letter: If you do not want to assign a drive letter, select
this option. The volume is displayed as an icon with no name in the VEA
console. You can assign a drive letter later by right-clicking the volume and
then selecting the Modify Drive Letter option. If you do not assign a drive
letter, the partition is not accessible.
Mount as an empty NTFS folder: Type in a folder name or browse to select
the folder name. When you click the Browse button, click the New Folder
button in the Browse for drive path window. SFW can create a new folder for
you.
When you create a volume, you can place a file system on the volume and specify
options for mounting the file system. Mark the Format this volume check box and
specify:
File system type: Specify the file system type as FAT, FAT32, or NTFS.
FAT is a file system used by MS-DOS, Windows 3.x, and Windows 95/98.
Windows NT and Windows 2000 can also use the FAT file system, but
unlike NTFS, it provides no security, has lower overall performance, and
does not support online file compression.
FAT32 is an enhanced implementation of the FAT file system designed for
larger drives.
NTFS is an advanced file system designed for use specifically within the
Windows NT, 2000, 2003, and 2008 operating systems. Use this format if
you want to use file and folder compression or NTFS permissions. File and
folder compression are supported only on NTFS volumes.
File system options:
Allocation size: This is the smallest amount of space that can be allocated
or added to a file. A block can only be allocated to one file at a time.
File system label: If you do not enter a label, no default label is provided.
You can enter a label for or rename the file system later.
Perform a quick format: Formats the disk without checking for bad
sectors
Enable file and folder compression: Can only be used if you select the
NTFS format
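The effect of the allocation size can be shown with a small calculation. This is a conceptual sketch in Python, not SFW code, and the function name is hypothetical:

```python
def allocated_size(file_size, allocation_unit):
    """On-disk size of a file: a file occupies whole allocation units,
    so its size is rounded up to the next multiple of the unit."""
    units_needed = -(-file_size // allocation_unit)  # ceiling division
    return units_needed * allocation_unit

# A 10-byte file on a volume with a 4096-byte allocation unit still
# consumes a full 4096-byte unit.
print(allocated_size(10, 4096))
```

A larger allocation unit wastes more space on small files but can reduce fragmentation, which is why the default is recommended for general use.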
A file system provides an organized structure to facilitate the storage and retrieval
of files. You add a file system to a volume when you create the volume initially,
and you can replace the file system at any time after you have created the volume.
Select:
A volume
Navigation path:
Input:
In the syntax:
-g specifies the disk group in which to create the volume.
make is the keyword for volume creation.
volume_name is a name you give to the volume.
length specifies the number of sectors in the volume. You can specify the
length in kilobytes, megabytes, or gigabytes by adding a k, m, or g to the
length. If no unit is specified, megabytes are assumed.
type specifies the type of volume to be created. The default is a spanned
volume.
driveletter specifies the drive letter for the volume. The default is no
assignment of a drive letter to the volume.
You can specify many additional attributes, such as volume layout or specific
disks. For detailed descriptions of all attributes that you can use with vxassist,
type vxassist help on your system.
The vxassist utility does not format the drive and leaves it in RAW format. You
must execute the format command to mount the drive successfully. To format
the H: drive using NTFS, you type: format H: /fs:NTFS
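The way a length value with a k, m, or g suffix is interpreted can be sketched in a few lines. This helper is purely illustrative (it is not part of SFW) and assumes the suffix rules described above, with bare numbers treated as megabytes:

```python
def parse_length(spec):
    """Convert a vxassist-style length string to kilobytes.
    A trailing k, m, or g selects the unit; a bare number means megabytes."""
    multipliers = {"k": 1, "m": 1024, "g": 1024 ** 2}  # unit -> KB
    spec = spec.strip().lower()
    if spec[-1] in multipliers:
        return int(spec[:-1]) * multipliers[spec[-1]]
    return int(spec) * multipliers["m"]  # no suffix: megabytes assumed

print(parse_length("500"))  # 500 MB expressed in KB
print(parse_length("2g"))   # 2 GB expressed in KB
```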
In VEA, disks are represented under the Disks node in the object tree, in the Disk
View window, and in the grid for several object types, including controllers, disk
groups, enclosures, and volumes.
In the grid of the main window, under the Disks tab, you can identify many disk
properties, including disk name, disk group name, size of disk, amount of unused
space, and disk status. In particular, the status of a disk can be:
No Disk Signature: A disk without a signature is not usable by Windows. The
disk can be used only after a signature is written to it, at which point it becomes
a basic disk. Once the disk is basic, you can create partitions on it, or upgrade it
to dynamic and create volumes on it.
Online: The disk is accessible and has no known problems. This is the normal
disk status for basic disks.
Imported: The disk is in an imported disk group.
Offline: The disk is in a deported disk group. There is no separate Deported disk
status; Deported appears only in the disk group view, not in the disk view.
Disconnected: The system can no longer find the disk. The name of the disk
becomes Missing disk.
Import Failed: An import of the disk was unsuccessful.
Failing: I/O errors have been detected on a region of the disk. All the volumes
on the disk display Failed, Degraded, or Failing status, and you may not be
able to create new volumes on the disk.
Foreign: A disk may be foreign if:
This disk was moved to the system and has not been set up properly to be
accessed on the local system.
This disk contains a secondary disk group, and you have a dual-boot
system. When you switch between operating systems, the disk is marked as
foreign.
This disk was originally created on the local system, moved to another
system, and then moved back to the original system.
When you select a disk in the object tree, details of the disk layout are displayed in
the grid. You can access these details by clicking the associated tab:
Volumes: This page displays the volumes that use this disk.
Disk Regions: This page displays the disk regions (subdisks) of the disk.
Disk View: This page displays the layout of any subdisks created on this disk
media, and details of usage. The Disk View window has the same view of all
related disks with more options available. To launch the Disk View window,
select an object (such as a disk group or volume), and then select Actions >
Disk View.
Alerts: This page displays any problems with a drive.
The disk Properties window includes the capacity of the disk and the amount of
unallocated space. You can select the unit in which these values are displayed.
The disk group Properties window is displayed. This window contains basic disk
group properties, including the:
Disk group name, status, and type
Number of disks and volumes
Disk group version
Disk group size and amount of free space
You use the vxdisk list command to display basic information about all disks
attached to the system. The vxdisk list command displays the:
Internal disk name
Size
Free space
Status of each disk
Disk style
The Master Boot Record (MBR) disk style is limited to four primary partitions
and is available on MS-DOS, Windows 95/98, and later Windows versions.
Another style of disk is GUID Partition Table (GPT). GPT allows a maximum
of 128 primary partitions.
Note: The disks that belong to a deported dynamic disk group have a status of
offline and show a zero capacity.
This command gives the names and numbers of the volumes and the disks in the
dynamic disk group. This command also includes the dynamic disk group name,
its state (either Imported or Deported), and its dynamic disk group ID.
You can view volumes and volume details by selecting an object in the object tree
and displaying volume properties in the grid, as follows:
To view the volumes in a disk group, select a disk group in the object tree and
click the Volumes tab in the grid.
To explore detailed components of a volume, select a volume in the object tree
and click each of the tabs in the grid.
Disk View
The Disk View window displays a close-up graphical view of the layout of
subdisks in a volume. To display the Disk View window, select a volume or disk
group and select Actions > Disk View.
Warning: You can move subdisks in the Disk View window by dragging subdisk
icons to different disks or to gaps within the same disk. Moving subdisks
reorganizes volume disk space and must be performed with care.
This is one way to move volumes to other disks or arrays. You can also move
subdisks using the Move Subdisk command or the vxevac utility.
Volume Properties
In the syntax:
vxvol [-v] volinfo <volume_name|drive_letter>
To return a list with the volume name, disk group, and size for volume M, you
type:
vxvol volinfo M:
Removing a volume
Select:
A volume
Navigation path:
Input:
Prior to shredding a volume, ensure that the information has been backed up onto
another storage medium and verified, or that it is no longer needed.
The volume is entirely overwritten and immediately removed when the operation
has completed.
Evacuating a disk
Evacuating a disk moves the contents of the volumes on a disk to another disk. The
contents of a disk can be evacuated only to disks in the same disk group that have
sufficient free space.
Select:
The disk that contains the objects and data to be moved to another disk
Navigation path:
Input:
Removing a disk
If a disk is no longer needed in a disk group, you can remove the disk. After you
remove a disk from a disk group, the disk cannot be accessed.
Note: The remove operation fails if there are any subdisks on the disk. However,
the destroy disk group operation does not fail if there are volumes in the
disk group.
Before removing a disk, ensure that the disk contains no data, the data is no longer
needed, or the data is moved to other disks. Removing a disk that is in use by a
volume can result in lost data or lost data redundancy.
Select:
Navigation path:
Input:
Dynamic disk group name: The disk group that contains the
disk to be removed
Selected disks: The disk to be removed must be displayed in the
Selected disks field. Only empty disks are displayed in the list
of available disks as candidates for removal.
Note: If you select all disks for removal from the disk group, the disk group is
automatically destroyed.
Removing a disk: CLI
To remove a disk from a disk group using the command line, you use the vxdg
rmdisk command as follows:
vxdg -gdiskgroup_name rmdisk Harddisk#
This command reverts the referenced disk from a dynamic disk to a basic disk. You
can verify the removal by using the vxdisk list command to display disk
information.
Note: Microsoft Disk Management disk groups do not support the Destroy
Dynamic Disk Group command.
Lab exercises and lab solutions for this lesson are located in the following
appendices:
Appendix A provides step-by-step lab instructions.
Appendix B provides complete lab instructions and solutions.
Lesson 4
Each volume layout has different advantages and disadvantages. For example, a
volume can be extended across multiple disks to increase capacity, mirrored on
another disk to provide data redundancy, or striped across multiple disks to
improve I/O performance. The layouts that you choose depend on the levels of
performance and reliability required by your application.
Concatenated layout
A concatenated volume layout maps data in a linear manner onto one or more
subdisks in a plex. Subdisks do not have to be physically contiguous and can
reside on more than one SFW disk. Storage is allocated completely from one
subdisk before using the next subdisk in the span. Data is accessed in the
remaining subdisks sequentially until the end of the last subdisk. For example, if
you have 14 GB of data, then a concatenated volume can logically map the volume
address space across subdisks on different disks. The addresses 0 GB to 7 GB of
volume address space map to the first 8 GB subdisk, and addresses 8 GB to 13 GB
map to the second 6 GB subdisk. An address offset of 12 GB, therefore, maps to an
address offset of 4 GB in the second subdisk.
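The address arithmetic in this example can be expressed as a short sketch. This is illustrative Python, not part of SFW, and the function name is made up for the example:

```python
def concat_map(volume_offset_gb, subdisk_sizes_gb):
    """Map a concatenated-volume offset to (subdisk index, offset in subdisk).
    Storage is allocated completely from each subdisk before the next one."""
    for index, size in enumerate(subdisk_sizes_gb):
        if volume_offset_gb < size:
            return index, volume_offset_gb
        volume_offset_gb -= size
    raise ValueError("offset lies beyond the end of the volume")

# The 14 GB example from the text: an 8 GB subdisk followed by a 6 GB one.
# Volume offset 12 GB falls at offset 4 GB within the second subdisk.
print(concat_map(12, [8, 6]))
```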
Striped layout
A striped volume layout maps data so that the data is interleaved, or allocated in
stripes, among two or more subdisks on two or more physical disks. Data is
allocated alternately and evenly to the subdisks of a striped plex.
The subdisks are grouped into columns. Each column contains one or more
subdisks and can be derived from one or more physical disks. To obtain the
maximum performance benefits of striping, do not use a single disk to provide
space for more than one column.
All columns must be of the same size. The minimum size of a column must equal
the size of the volume divided by the number of columns. The default number of
columns in a striped volume is based on the number of disks in the disk group.
Copyright 2012 Symantec Corporation. All rights reserved.
Data is allocated in equally sized units, called stripe units, that are interleaved
between the columns. Each stripe unit is a set of contiguous blocks on a disk. The
stripe unit size can be in units of sectors, kilobytes, megabytes, or gigabytes. The
stripe width of striped volumes in blocks is 512K. This provides adequate
performance for most general-purpose volumes. You can improve the performance
of an individual volume by matching the stripe unit size to the I/O characteristics
of the application using the volume.
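The interleaving of stripe units across columns can be sketched as follows. The function and the 64 KB stripe unit in the example are illustrative assumptions, not SFW code or SFW defaults:

```python
def stripe_map(volume_offset, columns, stripe_unit):
    """Map a striped-volume offset to (column, offset within that column).
    Consecutive stripe units rotate round-robin across the columns."""
    unit_index, within_unit = divmod(volume_offset, stripe_unit)
    column = unit_index % columns            # which column holds this unit
    unit_in_column = unit_index // columns   # how deep into that column
    return column, unit_in_column * stripe_unit + within_unit

# Three columns, 64 KB stripe unit: the fourth stripe unit (offset 192 KB)
# wraps back to column 0, one stripe unit down.
print(stripe_map(192 * 1024, 3, 64 * 1024))
```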
Mirrored layout
Although a volume can have a single plex, at least two plexes are required to
provide redundancy of data. Therefore, a single plex can be considered to be a
potential mirror for data. Each of these plexes must contain disk space from
different disks to achieve redundancy.
SFW uses true mirrors, which means that all copies of the data are the same at all
times. When a write to a volume occurs, all plexes must receive the write before
the write is considered complete.
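The true-mirror semantics can be modeled as a toy sketch (illustrative Python only, not how SFW is implemented):

```python
def mirrored_write(plexes, offset, data):
    """The write completes only after every plex has received it,
    so all copies of the data stay identical at all times."""
    for plex in plexes:                  # every mirror gets the same write
        plex[offset:offset + len(data)] = data

plexes = [bytearray(8), bytearray(8)]    # a two-way mirrored volume
mirrored_write(plexes, 2, b"ab")
assert plexes[0] == plexes[1]            # true mirrors: identical copies
```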
RAID-5 layout
A RAID-5 volume layout has the same attributes as a striped plex, but it includes
one additional column of data that is used for parity. Parity provides redundancy.
Parity is a calculated value used to reconstruct data after a failure. While data is
being written to a RAID-5 volume, parity is calculated by performing an exclusive
OR (XOR) procedure on the data. The resulting parity is then written to the
volume. If a portion of a RAID-5 volume fails, the data that was on that portion of
the failed volume can be re-created from the remaining data and parity
information.
RAID-5 volumes keep a copy of the data and calculated parity in a plex that is
striped across multiple disks. Parity is spread equally across columns. Given a
five-column RAID-5 volume where each column is 1 GB in size, the RAID-5
volume size is 4 GB. One column of space is devoted to parity, and the remaining
four 1-GB columns are used for data.
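The XOR parity calculation and the capacity arithmetic can be demonstrated with a small sketch (illustrative Python, not SFW code):

```python
from functools import reduce

def xor_parity(blocks):
    """XOR corresponding bytes of the blocks to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*blocks))

# Four data columns plus one parity column, as in the 5-column example.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_parity(data)

# Lose one column: XOR of the survivors and the parity re-creates it.
rebuilt = xor_parity([data[0], data[1], data[3], parity])
assert rebuilt == data[2]

# Capacity: one column's worth of space holds parity, the rest holds data.
columns, column_gb = 5, 1
assert (columns - 1) * column_gb == 4    # 4 GB usable, as in the text
```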
The stripe width of RAID volumes in blocks is 512K. Each column must be the
same length but may be made from multiple subdisks of variable length. Subdisks
used in different columns must be located on separate physical disks.
RAID-5 requires a minimum of three disks for data and parity. When implemented
as Symantec recommends, an additional disk is required for the log.
RAID-5 cannot be mirrored.
Concatenation: Advantages
Removes size restrictions: Concatenation removes the restriction on the size
of storage devices imposed by physical disk size.
Offers better usage of free space: Concatenation enables better usage of free
space on disks by providing for the ordering of available discrete disk space on
multiple disks into a single addressable volume.
Simplifies administration: Concatenation enables large file systems to be
created and reduces overall system administration complexity.
Concatenation: Disadvantages
Striping: Disadvantages
Provides no redundancy: Striping alone offers no redundancy or recovery
features.
Single disk failure causes volume failure: Striping a volume increases the
chance that a disk failure results in failure of that volume.
Mirroring: Advantages
Improves reliability and availability: With concatenation or striping, failure
of any one disk makes the entire plex unusable. With mirroring, data is
protected against the failure of any one disk. Mirroring improves the reliability
and availability of a striped or concatenated volume.
Improves read performance: Reads benefit from having multiple places
from which to read the data.
Offers fast recovery through logging: Dirty region logging (DRL) greatly
speeds up the time that it takes to recover from a system crash for mirrored
volumes.
Mirroring: Disadvantages
Requires more disk space: Mirroring requires twice as much disk space,
which can be costly for large configurations. Each mirrored plex requires
enough space for a complete copy of the volume's data.
Provides slightly slower write performance: Writing to volumes is slightly
slower, because multiple copies have to be written in parallel. The overall time
the write operation takes is determined by the time that is needed to write to the
slowest disk involved in the operation.
The slower write performance of a mirrored volume is not generally significant
enough to decide against its use. The benefit of the resilience that mirrored
volumes provide outweighs the performance reduction.
RAID-5: Advantages
Provides redundancy through parity: With a RAID-5 volume layout, data
can be re-created from the remaining data and parity in case of the failure of
one disk.
Requires less space than mirroring: RAID-5 stores parity information, rather
than a complete copy of the data.
Improves read performance: RAID-5 provides similar improvements in read
performance as in a normal striped layout.
Offers fast recovery through logging: RAID-5 logging minimizes recovery
time in case of disk failure.
RAID-5: Disadvantages
Provides slower write performance: The performance overhead for writes
can be substantial, because a write can involve much more than simply writing
to a data block. A write can involve reading the old data and parity, computing
the new parity, and writing the new data and parity, as a multistep
read-modify-write operation.
Performs poorly after a disk failure: After one column fails, all I/O
performance goes down. This is not the case with mirroring, where a disk
failure does not have any significant effect on performance.
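The read-modify-write sequence described above reduces to one XOR identity: the new parity is the old parity XOR the old data XOR the new data. The sketch below is illustrative, not SFW code:

```python
from functools import reduce

def updated_parity(old_parity, old_data, new_data):
    """Small-write parity update: new_parity = old_parity XOR old_data
    XOR new_data, so the rest of the stripe need not be re-read."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

# Check the shortcut against recomputing parity over the whole stripe.
full = lambda blocks: bytes(reduce(lambda a, b: a ^ b, g) for g in zip(*blocks))
stripe = [b"\x01\x10", b"\x02\x20", b"\x03\x30"]
old_parity = full(stripe)
new_block = b"\x07\x70"
assert updated_parity(old_parity, stripe[1], new_block) == \
       full([stripe[0], new_block, stripe[2]])
```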
Size: Specify a size for the volume. The default unit is MB. If you click the Max
Size button, SFW determines the largest size possible for the volume based on the
layout selected and the disks to which the volume is assigned. The size of the
volume must be less than or equal to the available free space on the disks. For a
mirrored volume, SFW allocates additional free space for the volume's additional
plexes.
If you want the volume to reside on specific disks, you can designate the disks by
adding the disk names to the end of the command, such as Harddisk2. The disk
name can also be indicated by the internal disk name or by p#c#t#l#, where the
# characters correspond to the port, channel, target, and LUN of a disk.
Refer to the Veritas Storage Foundation 6.0 for Windows Administrator's Guide for
detailed information on CLI commands.
To create a striped volume, you add the layout type and other attributes to the
vxassist make command displayed on the slide.
In the syntax:
type=stripe designates the striped layout.
column=n designates the number of stripes, or columns, across which the
volume is created.
If you do not provide a number of columns, then SFW selects a number of
columns based on the number of free disks in the disk group. The minimum
number of stripes in a volume is two, and the maximum is eight.
stripeunit=size specifies the stripe unit size in blocks. The
default is 16K.
driveletter=letter specifies the drive letter to be assigned to the
volume. By default, no drive letter is assigned to the volume.
[disks...] To stripe the volume across specific disks, you can specify the
disk names at the end of the command. The order in which disks are listed on
the command line does not imply any ordering of disks within the volume
layout. By default, SFW selects any available disks with sufficient space.
When you create a mirrored volume, the volume initialization process requires that
the mirrors be synchronized. The vxassist command typically waits for the
mirrors to be synchronized before returning to the system prompt. To run the
process in the background, you add the -b option.
Basic disks use the disk partitioning mechanism used by Windows. A basic disk
can have up to either four primary partitions or three primary partitions plus an
extended partition:
You cannot subpartition or subdivide primary partitions.
You can subpartition extended partitions into multiple volumes called logical
drives. Use an extended partition if you want to have more than four drives.
Use SFW to create partitions on basic disks if you want computers running Linux,
Windows, or MS-DOS to access these partitions.
After you change a basic disk to a dynamic disk, the volumes on the disk cannot be
accessed by MS-DOS, Windows 95/98, or Windows NT.
When a basic disk that contains partitions is added to a dynamic disk group, the
partitions become simple volumes on the dynamic disk.
Select:
A basic disk
Navigation path:
Input:
Select disk and region: Select the disk and check the box for
the free space that you want to use.
Select Partition Type: Select the partition type (Primary or
Extended) and enter the size of the partition.
Assign a drive letter: Select a drive letter or drive path for this
partition.
Create File System: Mark the Format this volume check box.
Select the type of format that you want: FAT, FAT32, or NTFS.
Type a partition name in the File system label field. If you do
not enter a name, your partition is named New Volume by
default. Select an allocation unit size in bytes if you want to use
a size other than the default. However, Symantec strongly
recommends default settings for general use. Select a format
method: Perform a quick format or Enable file and folder
compression.
Select:
An extended partition
Navigation path:
Input:
Select disk and region: Select the disk and check the box for
the free space that you want to use.
Select Partition Type: The Logical drive button is selected by
default, and the window displays the largest logical drive size
that can be created in the extended partition.
Assign a drive letter: Select a drive letter or drive path for this
partition.
Create File System: Mark the Format this volume check box.
Select the type of format that you want. Type a partition name in
the File system label field. Select an allocation unit size in
bytes if you want to use a size other than the default. Select a
format method.
The partitions containing the startup and operating system files are commonly
named:
System partition: This partition is used for startup.
Boot partition: This partition is used for operating system files.
Note: The boot partition can be (but does not have to be) the same as the system
partition.
The system partition must be a primary partition that has been marked as active for
startup purposes. This partition must be located on the disk that the computer
accesses when starting up the system. There can only be one active system
partition at a time, which is displayed as Active in the status field. To use another
operating system, you must first mark its system partition as active before
restarting the computer.
The Mark Partition Active command enables you to designate a basic primary
partition as active. You can only use the Mark Partition Active command on a
basic primary partition, not on a dynamic volume.
To mark a partition as active, right-click the primary partition that contains the
startup files for the operating system that you want to activate and select Mark
Partition Active. A message states that the partition is marked active and that the
operating system on that partition is started when you restart the computer.
Lab exercises and lab solutions for this lesson are located in the following
appendices:
Appendix A provides step-by-step lab instructions.
Appendix B provides complete lab instructions and solutions.
Lesson 5
This method does not require downtime. This is useful in many situations, for
example, if a company purchases a new array. With SFW, you:
1 Add the new array to the SAN.
2 Zone the SAN so that the server can see the LUNs.
3 Rescan with VEA.
4 Add the LUNs from the new array to the disk group.
5 Mirror the volumes to the new array.
6 Remove the plexes on the old array.
7 Remove the LUNs that are on the old array from the disk group.
A mirrored volume requires at least two disks. You cannot add a mirror to a disk
that is already being used by the volume. A volume can have multiple mirrors, as
long as each mirror resides on separate disks.
Only disks in the same disk group as the volume can be used to create the new
mirror. Unless you specify the disks to be used for the mirror, SFW automatically
locates and uses available disk space to create the mirror.
A volume can contain up to 32 plexes (mirrors); however, the practical limit is 31.
One plex must be reserved for use by SFW for background repair operations.
Removing a mirror
When a mirror (plex) is no longer needed, you can remove it. You can remove a
mirror to provide free space, to reduce the number of mirrors, or to remove a
temporary mirror.
Caution: Removing a mirror results in the loss of data redundancy. If a volume
only has two plexes, removing one of them leaves the volume unmirrored.
Select:
Navigation path:
Input:
To verify that a new mirror has been added, view the total number of copies of the
volume as displayed in the main window. The total number of copies is increased
by the number of mirrors added.
Adding a mirror: CLI
Use the vxassist utility with the mirror option to add a mirror to an existing
volume. For example, to add two mirrors to volume Z: using Harddisk4 and
Harddisk5, you type:
vxassist mirror Z: Mirror=2 Harddisk4 Harddisk5
Instead of using the drive letter, you can provide a path:
vxassist mirror
\Device\HarddiskDmVolumes\DiskGroup1\Volume1 Mirror=2
Harddisk4 Harddisk5
Select:
Navigation path:
Input:
Breaking a mirror
Breaking (that is, breaking off) a mirror takes away a redundant mirror (or plex) of
a volume and assigns another drive letter to it. The data on the new volume is an
exact copy of the original volume at the time of breaking off.
The broken-off plex retains the other volume layout characteristics without the
mirror. For example, if you have a mirrored striped volume, the broken-off plex
becomes a striped volume.
Breaking off a plex of the mirrored volume does not delete the information, but it
means that the plex that is broken off no longer mirrors information from the other
plexes in the mirrored volume. For example, in a two-way mirrored volume,
breaking off a mirror creates two dynamic volumes on the disks with each volume
containing identical data. These volumes are no longer fault tolerant.
Navigation path:
Input:
Select which mirror to break: Select the mirror that you want
to break off.
For break off volume: Indicate whether to assign a drive letter
to the broken-off volume. You may either assign a specific drive
letter from the drop-down menu or accept the default.
Use the vxassist utility with the break option to break a mirror from an
existing volume. The syntax is displayed on the slide. You can use either the path
of the mirrored volume or its drive letter. The plex that is specified receives a new
drive letter.
To break a mirror from volume H and assign the new volume the drive letter Z,
type:
vxassist break H: plex=Volume1-01 DriveLetter=Z
Adding/Removing a log
Logging in SFW
When you enable logging, SFW tracks changed regions of a volume. You can then
use the log information to reduce plex synchronization times and speed the
recovery of volumes after a system failure. Although this feature is optional,
Symantec highly recommends logging, especially for large volumes.
You can add a log to a volume when you create the volume or at any time after
volume creation. The type of log that is created is based on the type of volume
layout.
Dirty region logging (DRL) is used with mirrored volume layouts. DRL keeps
track of the regions that have changed due to I/O writes to a mirrored volume.
Prior to every write, a bit is set in a log to record the area of the disk that is being
changed. In case of system failure, DRL uses this information to recover only the
portions of the volume that need to be recovered.
If DRL is not used and a system failure occurs, all mirrors of the volumes must be
restored to a consistent state by copying the full contents of the volume between its
mirrors. This process can be lengthy and I/O-intensive.
When you enable logging on a mirrored volume, one log plex is created by default.
The log plex uses space from disks already used for that volume, or you can
specify which disk to use. To enhance performance, consider placing the log plex
on a disk that is not already in use by the volume. You can create additional DRL
logs on different disks to mirror the DRL information.
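The region-tracking idea behind DRL can be sketched in a few lines of Python. This is a conceptual illustration only: the 64 KB region size, the bitmap representation, and all names are assumptions for the sketch, not SFW internals.

```python
REGION_SIZE = 64 * 1024  # assumed region granularity, not SFW's actual value

class DirtyRegionLog:
    def __init__(self, volume_size):
        nregions = -(-volume_size // REGION_SIZE)  # ceiling division
        self.dirty = [False] * nregions

    def before_write(self, offset, length):
        # Mark every region touched by the write as dirty *before* the
        # data is written, so a crash mid-write leaves a recoverable log.
        first = offset // REGION_SIZE
        last = (offset + length - 1) // REGION_SIZE
        for region in range(first, last + 1):
            self.dirty[region] = True

    def regions_to_recover(self):
        # After a system failure, only dirty regions need resynchronizing.
        return [r for r, d in enumerate(self.dirty) if d]

log = DirtyRegionLog(volume_size=1024 * 1024)    # 1 MB volume -> 16 regions
log.before_write(offset=70 * 1024, length=4096)  # write lands in region 1
print(log.regions_to_recover())                  # [1]
```

Without such a log, every region of every mirror would have to be copied after a crash; with it, recovery touches only the regions the log reports as dirty.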
Navigation path:
Input:
Navigation path:
Input:
Note: When you remove the only log from a volume, logging is no longer in
effect, and recovery time increases in the event of a system crash.
Adding a log: CLI
You can add a dirty region log to a mirrored volume by using the vxassist
addlog command, as follows:
vxassist addlog volume_name|drive_letter
For example, to add a log to the mirrored payvol volume in the acctdg disk group,
type:
vxassist addlog payvol
SFW recognizes that the layout is mirrored and adds a dirty region log.
You can specify additional attributes, such as the disks that must contain the log,
when you run the vxassist addlog command. When no disks are specified,
SFW uses space from the disks already in use by that volume, which may not be
best for performance.
Removing a log: CLI
You can remove a dirty region log by using the vxassist remove log
command with the name of the volume. The appropriate type of log is removed
based on the type of volume.
vxassist remove log volume_name|drive_letter
For example, to remove the dirty region log from drive H, you type:
vxassist remove log H:
One of the benefits of mirrored volumes is that you have more than one copy of
the data from which to satisfy read requests. You can specify which plex SFW
uses to satisfy read requests by setting the read policy. The read policy for a
volume determines the order in which volume plexes are accessed during I/O
operations.
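The two read policies that SFW offers (round-robin and preferred plex) can be modeled with a toy Python class; the class name, plex names, and round-robin counter below are invented for this sketch.

```python
class MirroredVolume:
    def __init__(self, plexes):
        self.plexes = plexes           # e.g. ["Volume1-01", "Volume1-02"]
        self.policy = ("round", None)  # assume round-robin as the default
        self._next = 0

    def set_read_policy(self, kind, preferred=None):
        # "round" alternates reads across plexes; "prefer" pins all
        # reads to one designated plex (the preferred plex).
        self.policy = (kind, preferred)

    def plex_for_read(self):
        kind, preferred = self.policy
        if kind == "prefer":
            return preferred
        plex = self.plexes[self._next % len(self.plexes)]
        self._next += 1
        return plex

vol = MirroredVolume(["Volume1-01", "Volume1-02"])
print([vol.plex_for_read() for _ in range(3)])  # alternates between plexes
vol.set_read_policy("prefer", "Volume1-02")     # like vxvol rdpol prefer
print(vol.plex_for_read())                      # Volume1-02
```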
A volume
Navigation path:
Input:
The vxvol rdpol command sets the volume read policy on a volume with
multiple mirrors to designate a specific plex to be used for reads. This plex is
referred to as the preferred plex. The syntax of the command is as follows:
vxvol -gdiskgroup_name rdpol round volume_name|drive_letter
vxvol -gdiskgroup_name rdpol prefer volume_name|drive_letter preferred_plex
Resizing a volume
You can expand a volume across all dynamic disks within a disk group, up to a
maximum of 256 disks. You can extend a volume while it remains online. You can
expand a volume only if:
The volume is formatted with NTFS.
There is unallocated space on the dynamic disks in the same disk group onto
which the volume is to be extended.
Important considerations:
You can only expand a system or boot volume in increments of the disk's
cylinder size and only into contiguous space at the end of the volume.
You cannot expand a volume on a disk that is part of a cluster disk group that
has applications running and is being monitored by the cluster management
software.
Select:
Navigation path:
Input:
Select:
Navigation path:
Input:
The new volume size can be specified in sectors, kilobytes (K), megabytes (MB),
gigabytes (GB), or terabytes (TB), and the specified value must be less than the
maximum size of the volume. The new volume size is displayed in the Veritas
Enterprise Administrator GUI.
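As a back-of-the-envelope aid, the unit conversions can be sketched as follows. The 512-byte sector size and the exact suffix spellings are assumptions for the illustration, not the syntax SFW enforces.

```python
SECTOR = 512  # assumed sector size in bytes

UNITS = {"": SECTOR, "S": SECTOR, "K": 1024,
         "MB": 1024 ** 2, "GB": 1024 ** 3, "TB": 1024 ** 4}

def to_sectors(spec):
    # Convert a size specification such as "2GB" or "500K" to sectors.
    spec = spec.strip().upper()
    for suffix in ("TB", "GB", "MB", "K", "S", ""):
        if spec.endswith(suffix):
            number = spec[:len(spec) - len(suffix)] if suffix else spec
            return int(number) * UNITS[suffix] // SECTOR
    raise ValueError(spec)

print(to_sectors("2GB"))   # 4194304
print(to_sectors("500K"))  # 1000
```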
The volume shrink feature existed in the previous release of SFW, but users had
to bring down the application accessing a volume before they could shrink the
volume. The online volume shrink feature overcomes this drawback: you can now
shrink a volume while applications continue to write to it.
Performing certain operations during an online shrink can crash the computer.
This is a known Microsoft issue, and there is no workaround for it. To avoid
the problem, abstain from performing those operations while a shrink is in
progress.
SFW also enables you to mount a volume at any empty folder on a local NTFS
volume. The volume can be a partition, a logical drive that was created in
Windows Disk Management, or a dynamic volume. For example, you can mount
the C:\Temp folder as another drive to provide additional disk space for
temporary files.
Select:
A volume
Navigation path:
Select Actions > File System > Change Drive Letter and
Path.
Input:
Drive paths are useful because they eliminate the 24-drive-letter limit on
hard-disk volume names (drive letters A and B are reserved for floppy drives).
The volume can be a partition, a logical drive that was created using the
Windows Disk Management utility, or a dynamic volume.
If Computer A fails, then Computer B, which is on the same SCSI bus as the
acctdg disk group, can take ownership or control of the disk group and all of its
components.
Before deporting a dynamic disk group, ensure that the disks are online and the
volumes are healthy. If the status is not healthy, repair the volumes before you
deport the disk group and move the disks.
The process of deporting a dynamic disk group puts the contained disks in the
Offline state and all volumes in the Stopped state. This placement applies only
while the dynamic disk group is deported. After an Import Dynamic Disk Group
command is issued, disks come back online and volumes return to the state they
were in at the time they were deported.
It is important to use the Deport Dynamic Disk Group command, especially if
you are moving hot-swappable disks between computers. The Deport Dynamic
Disk Group command stops access to disks. Using this command ensures that the
data has been flushed in a clean state before you move the disks to the other
computer. The Deport Dynamic Disk Group command also clears the host ID of
the computer on which the disk group is located so that it can be imported on
another computer.
Select:
Navigation path:
Input:
Deport a disk group in preparation for importing it to another computer. Disks and
volumes cannot be accessed until the disk group is imported.
Deporting a disk group: CLI
To deport the disk group, use the vxdg command, as follows:
vxdg -gdiskgroup_name [-f] deport
The -f option forces the disk group to be deported if one or more of its volumes
are still in use.
When you import a dynamic disk group, the system puts its host ID in the private
region of the disks in the disk group. If there is a previous host ID on the disks
because the dynamic disk group has not been deported from its previous host, the
import operation fails. However, you can force the import operation by clearing
the previous host ID while importing the dynamic disk group.
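The host-ID behavior described above can be modeled in a short Python sketch; the class, field, and host names are invented, and the real on-disk private region is far more involved.

```python
class DiskGroup:
    def __init__(self):
        self.host_id = None  # modeled stand-in for the private-region stamp

def deport(dg):
    # A clean deport clears the host ID so another host can import.
    dg.host_id = None

def import_dg(dg, my_host, clear=False):
    # Import fails if another host's ID is still stamped on the group,
    # unless the caller explicitly clears it (like the -C option).
    if dg.host_id is not None and dg.host_id != my_host and not clear:
        raise RuntimeError("import fails: disk group owned by another host")
    dg.host_id = my_host

dg = DiskGroup()
import_dg(dg, "ComputerA")        # first import stamps ComputerA's ID
try:
    import_dg(dg, "ComputerB")    # never deported, so this fails
except RuntimeError as err:
    print(err)
import_dg(dg, "ComputerB", clear=True)  # forced import clears the old ID
print(dg.host_id)                       # ComputerB
```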
Navigation path:
Input:
To import a disk group for use on a new computer, use the vxdg command:
vxdg -gdiskgroup_name [-n new_dg_name][-s][-C][-f]
import
-s
Imports the dynamic disk group as a cluster disk group
-C
Clears the original host ID and stamps a new host ID onto the dynamic disk
group
-f
Forces the import operation
For example, to import the disk group previously called DynamicGroup and
rename it acctgdg, you type:
vxdg -gDynamicGroup -n acctgdg import
Select:
Navigation path:
Note: You cannot upgrade to a specific disk group version by using VEA. You
can only upgrade to the current version. To upgrade to a specific version,
use the command line.
Upgrading a disk group: CLI
To upgrade a disk group from the command line, use the vxdg upgrade
command, as follows:
vxdg -gdiskgroup_name upgrade
Lab exercises and lab solutions for this lesson are located in the following
appendices:
Appendix A provides step-by-step lab instructions.
Appendix B provides complete lab instructions and solutions.
Lesson 6
Even when FlashSnap is performed on the same server, its mirror break-off and
join process is much faster and consumes fewer CPU cycles than ordinary mirror
break-off procedures. FlashSnap is made possible by several features in SFW.
These features are:
Snapshot commands
Use the Snapshot commands to create the mirrored volumes or snapshots that
are useful for backup or other resource-intensive processing purposes.
Dynamic disk group split and join
Dynamic disk group split and join supports the ability to split a dynamic disk
group into two disk groups so that the newly formed disk group can be moved
to another server. This functionality enables you to split a mirror for backup
and have a separate server handle the backup. After the backup is completed,
the split-off disk group is moved back to the original server and joined to its
former disk group, and the mirror is reassociated with its mirror set and
resynchronized. Dynamic disk group split and join also can be performed on
the same server for same-host backup or for reorganizing the disk groups on
the server.
FastResync
FastResync supports resynchronizing of mirrors by copying only changes for
the temporarily split mirror by using FastResync logging. This reduces the
time that it takes to rejoin a split mirror with the mirror set and also reduces the
server CPU cycles needed to complete the resynchronization.
These features are necessary for the FlashSnap procedure, but they can also be
used for other, more general purposes. However, to use these commands, you must
purchase the license that enables FlashSnap.
Note: Dynamic disks belonging to a Microsoft Disk Management disk group do
not support snapshot commands and dynamic disk group split and join
operations.
These OHP phases are covered in greater detail throughout the lesson.
Select:
Navigation path:
Input:
Do not use existing mirror for snap: You see this option if you
already have a mirrored volume. A dialog box to select disks is
displayed next.
Select existing mirror for snap: You see this option if you
already have a mirrored volume. If you select an existing mirror
to be used for the snapshot mirror, the command completes at
this point.
After the Prepare command completes, a new snapshot mirror is attached to the
volume. For example, if Vol01 (H:) has a snapshot mirror attached to it, the
new mirror is added to the Mirrors tab for the volume. In this example, the mirror
is identified as a snapshot mirror and has the Snapshot icon. After the snapshot
mirror is synchronized with the volume, its status becomes Snap Ready.
Note: The Prepare command replaces the Snap Start command in VEA and the
CLI.
In this example, the disk change object (DCO) log is created by the Prepare
command. The DCO volume is created to track the regions on a volume that are
changed while a mirror is detached. The DCO volume is not included in the tree
view of the VEA because it is not a usable volume for user data. To view the DCO
volume, you must use the Disk View. To access the Disk View, click the Disk
View tab in the right pane or select Disk View or DCO from a disk's or volume's
context menu.
Note: The terms disk change object and data change object are synonymous.
Creating a temporary mirror: CLI
You can create volume snapshots from the command line by using the vxassist
command.
To run vxassist prepare to create a snapshot mirror on the volume to be
backed up, the syntax is:
vxassist [-b] prepare volume_name|drive_letter
[plex=mirror_plex_name|disk_name]
If you specify the plex option as a mirror plex in the command, SFW converts a
specified mirror plex to a snap plex. This plex can be the plex name (such as
Volume2-01) or the GUID of the mirror plex. A GUID is a unique internal number
assigned to the plex. To determine the GUID for a given plex, use the vxvol -v
volinfo command for the mirrored volume that contains the plex.
If you specify the plex option as a disk, SFW creates the new snapshot on the
specified disk.
The vxassist prepare task creates a write-only mirror, which is attached to
and synchronized with the volume.
When fully synchronized, the mirror is used in the volume in the same way as any
other mirror. The mirror becomes part of the volume read policy, and all writes
also go to the mirror.
The mirror is ready to be used as a snapshot mirror. However, the mirror continues
to be updated until it is detached during the actual snapshot phase of the procedure.
Note: You can also use the Volume Shadow Copy Service (VSS) for SFW by using
vxsnap to create snapshots. See the Veritas Storage Foundation for
Windows Administrator's Guide for more information.
Select:
Navigation path:
Input:
The snapshot mirror is detached from the original volume, and a new volume is
created that is associated with the snapshot mirror. This process usually takes less
than a minute.
The snapshot mirror is no longer displayed on the Mirrors tab for the original
volume.
The new snapshot volume is displayed under the Volumes folder in the tree view.
The program assigns it the next available drive letter. In the example on the
slide, the new snapshot volume is shown with its assigned drive letter.
Navigation path:
Input:
The snapshot plex is detached from the snapshot volume and attached to the
original volume. The data in the volume is resynchronized so that the plexes are
consistent and the snapshot volume is removed. By default, the data in the original
plex is used for the merged volume.
Navigation path:
Input:
This permanently breaks the association between a snapshot and its original
volume. The snapshot volume becomes an independent volume. The original
volume returns to the state that it was in before the Prepare command was
executed. Dissociating a snapshot is one method that you can use to keep a
permanent image of a volume for storage.
Snap Abort aborts the changes made by the Prepare or Snap Back command. In
both these commands, a snapshot mirror plex is attached to a volume. Snap Abort
either deletes this snapshot mirror plex or converts the snapshot mirror plex to an
ordinary mirror. In cases where the deleted snap plex is the last snap plex and the
resulting volume is simple or striped, the Snap Abort command also deletes the
DCO log volume. The command cannot be performed directly after a Snap Shot
command.
You can also remove the volume by using the vxassist remove volume
command. To remove the volume, the syntax is:
vxassist remove [volume|mirror|log]
volume_name|drive_letter [plex=plex_name|!disk_name]
In a dynamic disk group split operation, you use the Split Dynamic Disk Group
command to split a dynamic disk group into two dynamic disk groups. You can
move a self-contained set of SFW objects from one imported disk group to a new
target disk group that is created as part of the operation. In order to perform a split
operation, the source disk group must exist, and the target disk group must not
exist.
With the Split Dynamic Disk Group command, you can take some but not all of
the disks from one dynamic disk group to another. The source dynamic disk group
retains its identity as the original, while the other dynamic disk group, called the
target disk group, becomes a new dynamic disk group.
The Split Dynamic Disk Group command assumes that the split-off disk group
contains all disks that are needed to make the volumes in the new disk group
complete. If the disks that you select to split the disk group result in incomplete
volumes, the logic built into the command adds the remaining disks needed to split
the disk group with complete volumes.
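That completion logic amounts to computing a closure over the selected disks. The sketch below is illustrative only; the layout dictionary is an invented example, not how SFW stores volume-to-disk mappings.

```python
def split_closure(volume_disks, requested):
    """volume_disks maps each volume name to the set of disks it occupies."""
    disks = set(requested)
    changed = True
    while changed:
        changed = False
        for vdisks in volume_disks.values():
            # If the selection touches a volume, the split must also take
            # the rest of that volume's disks to keep the volume complete.
            if disks & vdisks and not vdisks <= disks:
                disks |= vdisks
                changed = True
    return disks

layout = {"mirrorvol1": {"Harddisk5", "Harddisk6"},
          "mirrorvol2": {"Harddisk6", "Harddisk7"}}
# Requesting Harddisk5 and Harddisk7 pulls in Harddisk6, because both
# volumes would otherwise be split across the two disk groups.
print(sorted(split_closure(layout, {"Harddisk5", "Harddisk7"})))
```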
What is a dynamic disk group join?
In a disk group join operation, you use the Join Dynamic Disk Group command
to join two dynamic disk groups into one merged disk group. You can join two disk
groups that were originally split apart with the Split Dynamic Disk Group
command, but you can also join two dynamic disk groups that started out as
separate disk groups.
You can move all SFW objects from an imported source disk group to an imported
target disk group, and then the source disk group is removed when the join is
complete. The target disk group must exist in order to perform the join.
Reorganization and accessibility
In dynamic disk group split or join operations, volumes that are being relocated
into a different disk group are temporarily inaccessible during the process.
Therefore, before moving volumes between disk groups, you must stop all
applications that are accessing the volumes and unmount all file systems that are
configured in the volumes.
Before reorganizing disk groups
Before you reorganize disk groups, ensure that the following requirements are met:
Primary dynamic disk groups cannot be split because they can contain the
computer's boot and system disks.
The objects involved must be top-level objects, such as disks or volumes.
When you perform the operation, all component objects, such as plexes and
subdisks, are affected.
The objects involved must be self-contained objects. You cannot share a disk
or a volume between disk groups at any time.
Before performing a split or join operation, stop all applications that are accessing
the volumes and ensure that the volumes to be split are healthy. Similarly, ensure
that any disks to be split do not have a Missing status.
Select:
Navigation path:
Input:
If the dynamic disk group split is successful, you are able to view the new target
dynamic disk group in the tree view and in the right pane of the console. By
default, the new target disk group is in the Imported state if you use the VEA to
perform the split. If the Split Dynamic Disk Group command fails, an error
dialog box is displayed showing the reason for the failure. The Dynamic Disk
Group Split operation fails if the target disk group already exists or if a problem
occurs when the split operation is taking place.
-s
Makes the new dynamic disk group a cluster dynamic disk group
-y
Queries for split closure without performing the split, reporting any
additional disks needed to keep the volumes complete
-v
Splits all disks in the dynamic disk group that contain snapshot volumes
Note: If you use the command line to execute the split, the new target disk group
is in the Deported state by default. This is because it is assumed that you
want to deport the disk group, and then import it on another computer.
Examples
For example, you can query to determine whether Harddisk5 and Harddisk7 in a
dynamic disk group named Dynamic1 constitute all of the disks needed for a
dynamic disk group split in which all the volumes in the split-off dynamic disk
group are complete.
To query for the split closure, you type:
vxdg -gDynamic1 -y -n Dynamic1 split Harddisk5 Harddisk7
The output indicates that in order to have a successful split, or what is called split
closure, you must also add Harddisk6. To perform the actual split, you type:
vxdg -gDynamic1 -i -n Dynamic2 split Harddisk5 Harddisk6 Harddisk7
This command successfully splits the Dynamic1 dynamic disk group with the
target Dynamic2 dynamic disk group in the Imported state. The new dynamic disk
group has the Harddisk5, Harddisk6, and Harddisk7 disks.
In the example that follows, you designate the volumes to be included in a new
target disk group by typing:
vxdg -gDynamic1 -i -n Dynamic2 split
\Device\HarddiskDmVolumes\Dynamic1\mirrorvol1
\Device\HarddiskDmVolumes\Dynamic1\mirrorvol2
Note the path that is needed for volumes.
This command results in successfully splitting the Dynamic1 dynamic disk group
with the Dynamic2 target dynamic disk group in the Imported state. The new
dynamic disk group contains the
\Device\HarddiskDmVolumes\Dynamic2\mirrorvol1 and
\Device\HarddiskDmVolumes\Dynamic2\mirrorvol2 volumes.
SFW requires that the volumes transferred to another disk group must be
complete; that is, the source disk group cannot have missing disks. The disk group
type after the join is the type of the target disk group. For example, if the target
disk group before the join had private dynamic disk group protection, it has private
dynamic disk group protection after the join.
Select:
Navigation path:
Input:
If the join operation succeeds, the source dynamic disk group merges into the
target dynamic disk group, and the resulting dynamic disk group has the same
type as the target dynamic disk group. For example, if a cluster dynamic disk
group is joined to a normal dynamic disk group, the new dynamic disk group is
a normal dynamic disk group.
FastResync is based on the fact that if a mirror becomes unavailable, some or all of
the data that is on the disk can still be valid. The FastResync feature keeps track of
mirrors that have been detached and the updates that were applied when the
mirrors were unavailable. A bitmap, called a FastResync map, is used to track
changes.
By keeping track of updates missed while a mirror was offline, and then applying
only those updates when the mirror is back online, you can reduce
resynchronization times. The time it takes to recover a mirror depends on the
amount of change that occurred while the mirror was offline.
Although both FastResync and dirty region logging (DRL) keep track of regions
on a volume where the mirrors are not synchronized, they perform different
functions. FastResync keeps track of data store updates missed by a detached
mirror, while DRL keeps track of whether a write to a mirrored volume has been
completed on all mirrors. The write region on the volume is considered dirty
because the mirrors are out of synchronization until the write to all mirrors is
completed. Use DRL to resynchronize mirrors following a system crash.
Note: If you are using snapshot commands, you do not need to use the following
steps, because FastResync is automatically enabled for snapshot
commands. These steps are needed only when you want to enable
FastResync on a volume that will not be used with any snapshot
commands.
To enable FastResync for a volume, select the mirrored volume for which you
want to enable FastResync and select Actions > FastResync > Add.
To disable FastResync for a volume, select the mirrored volume for which you
want to disable FastResync and select Actions > FastResync > Remove.
Enabling and disabling FastResync: CLI
To enable or disable FastResync, use the vxvol command. This command turns
FastResync on or off for the specified mirrored volume. The syntax to enable or
disable FastResync is as follows:
vxvol set fastresync=on|off volume_name|drive_letter
Note: If you have initiated a snapshot operation on a volume, you cannot turn
FastResync off for that volume. If you try to do so, the command-line
interface returns an error message.
For example, to turn the FastResync feature on for the volume with drive letter J,
you type:
vxvol set fastresync=on J:
To turn the FastResync feature on for Volume1, which belongs to DynDskGrp1,
you type:
vxvol -gDynDskGrp1 set fastresync=on Volume1
DCO volume
The DCO volume is created when you enable FastResync or when a snapshot
operation is started. The DCO volume keeps track of the changes made to a
volume while a mirror is detached. The DCO volume is not visible in the tree view
in the left pane of the VEA. This volume is visible in the Disk View (when Volume
Details is not selected).
Note: It is important to wait until the FFR process is complete before accessing
and using the restored file. Data corruption can occur if the file is used
before the resynchronization is complete.
Navigation path:
Input:
The CLI command does not support resynchronization of multiple files. To use
Fast File Resync to resynchronize a single file in a snapshot volume to the original
volume, the format of the command is as follows:
vxfsync -gdiskgroup_name -m master_volume -s
snap_volume -f file_name
The vxfsync command is only available from the Storage Foundation folder
found at the following path: Program Files\Veritas\Veritas Volume
Manager.
For example, to use the snapshot volume, vol1_snap01, to resynchronize or restore
the test.dat file on the master or original volume, vol1, you type:
vxfsync -gtestdg -m vol1 -s vol1_snap01 -f test.dat
This section provides an outline of how to apply off-host processing by using the
FlashSnap procedure, which is a combination of the snapshot, FastResync, and
disk group split and join features of SFW. You can use this outline to set up a
regular backup cycle or to set up a replica of a production database for decision
support purposes. Configuring a database and performing the backup itself are
beyond the scope of this course.
13 Join the mirrored volume (or snapshot) back to its original volume by using
the Snap Back command.
Lab exercises and lab solutions for this lesson are located in the following
appendices:
Appendix A provides step-by-step lab instructions.
Appendix B provides complete lab instructions and solutions.
Lesson 7
When a problem occurs in a computer's storage subsystem, SFW alerts you with
error messages and error symbols placed on top of the disk or volume icons to
show the source of the problem.
Copyright 2012 Symantec Corporation. All rights reserved.
You can locate these problems using the Status column of the Disks View tab or
Volumes View tab. You can also see indications of abnormal status in the tree view
or the Disk View tab. If the status is not Healthy for volumes, Imported for
dynamic disk groups, or Online for disks, you need to determine whether there
is a problem and, if necessary, correct it.
You can display the status of basic and dynamic disks by selecting the Disks node
in the left pane of the VEA. In the Disks tab in the right pane, disk status tags are
displayed under the Status column.
You can display the status of basic and dynamic volumes by selecting the Volumes
node in the left pane of the console. You can view the status of all volumes by
clicking the Volumes folder in the left pane. In the General tab, volume status
tags are displayed under the Status column.
Event notification
SFW provides event notification by SMTP e-mail, by pager, and through SNMP
traps that can be displayed in any trap receiver, such as HP OpenView, CA
Unicenter, and IBM Tivoli.
You can configure the notification service to send messages to specific individuals,
to groups, or to a management console in the case of SNMP traps. The event
notification service is implemented through SFW's Rule Management utility. If
you want to set up event notification, you must use the Rule Management utility to
set up rules that send out notifications after certain events occur. You access the
Rule Management utility through SFW's Control Panel.
If you want to send notification messages by SMTP e-mail or pager, your first step
in configuring the notification service is to set up the SMTP mail server.
In addition to the events listed in SFW, Windows also records SFW entries in
the Event Viewer logs on Windows Server. Many enterprises use products that
monitor these OS logs to notify users of problems.
Creating rules
Creating rules involves the following steps:
1 In the Perspective bar, select the Control Panel. The Rule Manager icon is
displayed. Double-click the icon.
Note: The Perspective bar is located at the far left of the console, and it provides
quick access to different perspectives (views) of the system to which you
are connected.
2 In the Rule Manager window, click New Rule to start the New Rule wizard.
3 Follow the wizard prompts to perform the following:
a Select the type of rule you want to create. You have two options:
Create a rule for certain alerts that you identify by name.
Create a rule for all alerts that have a particular severity and/or
classification.
b Configure one or more actions to be taken when the events are detected:
Send email notifications.
Send SNMP trap notifications.
Execute a command.
Displaying alerts
SFW maintains an alert log that is used to report application events, also known as
alerts. The Alert Log displays messages associated with the selected objects. The
listing of events can help you identify significant incidents, such as a disk failure
or a disk addition.
Click the Logs tab in the Perspective bar to display the alert logs. For each alert
listed, you can see information about the date and time of the message, the
message text, and its class.
Severity levels
You can configure the Alert Log through the Log Settings dialog box. To access
this dialog box, select the Control Panel Perspective, select the host to which you
are connected, and double-click the Log Settings icon in the right pane. Specify
the maximum file size for the log.
Note: The Task Log is not implemented.
SFW records when a volume is first written to and marks it as dirty. When a
volume is closed by all processes or stopped cleanly by the administrator, all
writes have been completed, and SFW removes the dirty flag for the volume. Only
volumes that are marked dirty when the system reboots require resynchronization.
Not all volumes require resynchronization after a system failure. Volumes that
were never written or that had no active I/O when the system failure occurred do
not require resynchronization.
Two types of resynchronization:
Atomic-copy resynchronization refers to the sequential writing of all blocks of
the volume to a plex. This operation is used any time a new mirror is added to
a volume or an existing mirror is stale and has to be resynchronized.
Read-writeback resynchronization makes all plexes identical by alternately
copying regions between plexes.
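The contrast between the two modes can be illustrated with a toy Python model in which each plex is a list of region contents; this representation is an assumption for the sketch, not SFW's actual I/O path.

```python
def atomic_copy(source, target):
    # Sequentially write every block of the volume to the stale plex.
    target[:] = source

def read_writeback(plexes):
    # Region by region, read from one plex and write that data back to
    # the other plexes until all plexes are identical.
    for i in range(len(plexes[0])):
        value = plexes[0][i]
        for plex in plexes[1:]:
            plex[i] = value

good, stale = ["a", "b", "c"], [None, None, None]
atomic_copy(good, stale)        # new or stale mirror: full copy
print(stale)                    # ['a', 'b', 'c']

p1, p2 = ["a", "x", "c"], ["a", "y", "c"]
read_writeback([p1, p2])        # after a crash: make plexes agree
print(p1 == p2)                 # True
```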
The process of resynchronization can impact system performance and can take
time. To minimize the performance impact of resynchronization, SFW offers the
following solutions:
Dirty region logging (DRL) for mirrored volumes
RAID-5 logging for RAID-5 volumes
FastResync for mirrored and snapshot volumes
DRL logically divides a volume into a set of consecutive regions and keeps track
of the regions to which writes occur. A log is maintained that contains a status bit
representing each region of the volume. For any write operation to the volume, the
regions being written are marked dirty in the log before the data is written.
If a write causes a log region to become dirty when it was previously clean, the log
is synchronously written to disk before the write operation can occur. On system
restart, SFW recovers only those regions of the volume that are marked as dirty in
the dirty region log.
Log subdisks store the dirty region log of a volume that has DRL enabled. With
regard to log subdisks:
Only one log subdisk can exist per plex.
Multiple log subdisks can be used to mirror the dirty region log.
If a plex contains a log subdisk and no data subdisks, it is called a log plex.
Only a limited number of bits can be marked dirty in the log at any time. The dirty
bit for a region is not cleared immediately after writing the data to the region;
instead, it remains marked as dirty until the corresponding volume region becomes
the least-recently used.
The set of regions that are dirty at any one time is maintained in a linked list
in memory. The list holds a maximum of 2048 dirty regions. The oldest region is
cleared based on elapsed time.
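The capped dirty-region list can be sketched as follows. The 2048 limit comes from the text above; the data structure and eviction detail are illustrative, not SFW's implementation:

```python
from collections import OrderedDict

MAX_DIRTY = 2048   # maximum dirty regions tracked at once (per the text)

class DirtyRegionList:
    """Illustrative in-memory list of dirty regions: marking a region
    refreshes its position, and when the limit is exceeded the oldest
    (least recently used) region is cleared."""

    def __init__(self, limit=MAX_DIRTY):
        self.limit = limit
        self.regions = OrderedDict()     # region number -> None, in LRU order

    def mark_dirty(self, region):
        self.regions.pop(region, None)   # refresh if already dirty
        self.regions[region] = None
        if len(self.regions) > self.limit:
            self.regions.popitem(last=False)   # clear the oldest region

drl = DirtyRegionList(limit=3)           # tiny limit for demonstration
for r in (0, 1, 2):
    drl.mark_dirty(r)
drl.mark_dirty(0)                        # region 0 becomes most recently used
drl.mark_dirty(3)                        # limit exceeded: oldest region (1) cleared
```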
How the bitmaps are used in dirty region logging
DRL maintains two bitmaps: an active map and a recovery map. Both bitmaps are
zeroed when the volume is started initially and after a clean shutdown. As
regions transition to dirty, the log is flushed before the writes to the volume
occur.
If the system crashes, the active map is combined with the recovery map by a
logical OR (Boolean) operation.
Mirror resynchronization is then limited to the regions marked dirty in the
recovery map. The active map is reset at the same time, and normal volume I/O is
permitted.
Using two bitmaps in this fashion allows SFW to handle multiple system crashes.
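The two-bitmap recovery step can be sketched in Python; bitmaps are modeled here as sets of dirty region numbers, an illustration rather than SFW's on-disk bit arrays:

```python
# Illustrative two-bitmap DRL crash recovery (bitmaps as Python sets of
# dirty region numbers; real DRL uses on-disk bit arrays).

def recover_after_crash(active_map, recovery_map):
    """On restart after a crash: OR the active map into the recovery
    map, reset the active map, and return the regions to resync."""
    recovery_map |= active_map     # logical OR of the two bitmaps
    active_map.clear()             # active map reset; volume I/O can resume
    return recovery_map            # only these regions need resynchronization

active = {4, 7}                    # regions dirty at crash time
recovery = {2}                     # left over from an earlier crash
to_resync = recover_after_crash(active, recovery)
```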
The vxcbr utility is the CLI equivalent. vxcbr enables you to back up and
restore the SFW configuration. This utility does not back up and restore user
data; it stores only the SFW configuration data, that is, the disk group and
logical volume layout on a server.
When a partial disk failure occurs (that is, a failure affecting only some subdisks
on a disk), redundant data on the failed portion of the disk is relocated. Existing
volumes on the unaffected portions of the disk remain accessible. With partial disk
failure, the disk is not removed from SFW control and is labeled as FAILING,
rather than as FAILED. Before removing a FAILING disk for replacement, you
must evacuate any remaining volumes on the disk.
Note: Hot relocation is only performed for redundant (mirrored) subdisks on a
failed disk. Nonredundant subdisks on a failed disk are not relocated, but
the system administrator is notified of the failure.
When relocating subdisks, SFW attempts to select a destination disk with the
fewest differences from the failed disk. SFW:
1 Attempts to relocate to the same controller, target, and device as the failed
drive
2 Attempts to relocate to the same controller and target, but to a different device
3 Attempts to relocate to the same controller, but to any target and any device
4 Attempts to relocate to a different controller
5 Potentially scatters the subdisks to different disks
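The preference order above can be expressed as a ranking function. The (controller, target, device) tuples and names below are assumptions for illustration, not SFW internals:

```python
# Illustrative ranking of candidate replacement disks by similarity to
# the failed disk's (controller, target, device) address.

def relocation_rank(failed, candidate):
    """Lower rank = preferred destination, mirroring the order above:
    same c/t/d, then same c/t, then same c, then any other disk."""
    fc, ft, fd = failed
    cc, ct, cd = candidate
    if (cc, ct, cd) == (fc, ft, fd):
        return 0                   # same controller, target, and device
    if (cc, ct) == (fc, ft):
        return 1                   # same controller and target
    if cc == fc:
        return 2                   # same controller only
    return 3                       # different controller

failed_disk = (1, 0, 2)            # (controller, target, device)
candidates = [(2, 5, 0), (1, 0, 9), (1, 3, 0)]
best = min(candidates, key=lambda c: relocation_rank(failed_disk, c))
```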
A spare disk must have a signature written to it and be placed in a disk group as a
spare before it can be used for replacement purposes.
Hot relocation attempts to move all subdisks from a failing drive to a single
spare destination disk, if possible.
If no disks have been designated as spares, SFW automatically uses any
available free space in the disk group that is not on a disk already used by
the volume.
If there is not enough spare disk space, a combination of spare disk space and
free space is used. Free space that you exclude from hot relocation is not used.
In all cases, hot relocation attempts to relocate subdisks to a spare in the same disk
group that is physically closest to the failing or failed disk.
When hot relocation occurs, the failed subdisk is removed from the configuration
database. The disk space used by the failed subdisk is not recycled as free space.
The default for SFW is to have automatic Hot Relocation Mode active. This means
that if an I/O error occurs in a redundant subdisk, only that subdisk is
automatically relocated to another disk. The option to disable automatic hot
relocation mode is available from the Control Panel.
When you add a disk to a disk group, you can specify that the disk be added to the
pool of spare disks that are available to the hot-relocation feature of SFW. Any
disk in the same disk group can use the spare disk. Try to provide at least one
hot-relocation spare disk per disk group. While designated as a spare, a disk is
not used in creating volumes unless you specifically name the disk on the
command line.
Select:
A disk
Navigation Path:
Input:
Unrelocating a disk
The Undo Hot Relocation command is available only after a hot relocation or hot
sparing procedure has occurred. This command relocates subdisks back to their
repaired original disk or replacement disk. This command also restores a system to
its original configuration, less any failed volumes.
If hot relocation has scattered subdisks from a failed disk to several disks within a
dynamic disk group, the Undo Hot Relocation command moves all of the
subdisks back to a single disk without requiring you to find and move each subdisk
individually.
Select:
Navigation path:
Input:
It is not possible to return relocated subdisks to their original disks if their
disk group's relocation information has been cleared.
Replacing a failed or corrupted disk involves both physically replacing the disk
and then logically replacing the disk and recovering volumes in SFW.
Disk replacement: When a disk fails, you replace the corrupt disk with a new
disk. The replacement disk cannot already be in a disk group. Disk
replacement can be performed only on a disk that has failed. The VEA console
identifies the disk by renaming it Missing Disk.
If the disk replacement is successful, the replacement disk takes on the
attributes of the failed disk, including the disk name.
Volume recovery: When a disk fails and is removed for replacement, the plex
on the failed disk is disabled until the disk is replaced. Volume recovery
involves starting disabled volumes and resynchronizing mirrors.
After successful recovery, the volume is available for use again. Redundant
(mirrored) volumes can be recovered by SFW. Nonredundant (unmirrored)
volumes must be restored from backup.
Note: When a disk fails completely and hot relocation takes place, SFW removes
the disk from SFW control and marks the disk as FAILED. Partial disk
failure refers to disks that have been marked with a status of FAILING.
Navigation path:
Input:
The Rescan option rescans the SCSI bus for disk changes. This option also
performs the equivalent of the Refresh option. Symantec recommends that you
use Rescan every time you make disk changes, such as removing or adding a disk.
To rescan the system, select Actions > Rescan.
If you have errors on a volume, use Reactivate Volume to regenerate the volume.
If the underlying disks for a volume are sound, the volume returns to a healthy
state. To reactivate a volume, right-click the volume and select Reactivate
Volume.
Checking a disk
Run chkdsk.exe to ensure that the data on the volume is not corrupted. Even if
the disk and volumes are back online, it is important to check whether the
underlying data is intact. If the data is corrupted, you may need to replace it with
data from backup storage.
To run chkdsk.exe, open a command prompt window and type the following
command:
chkdsk x: /f
In the syntax, x specifies the drive letter of the volume to check, and /f fixes any
errors that are found.
Lab exercises and lab solutions for this lesson are located in the following
appendices:
Appendix A provides step-by-step lab instructions.
Appendix B provides complete lab instructions and solutions.
Lesson 8
Managing Performance
Monitoring I/O
Real-time I/O statistics
To begin collecting statistics, you must set the display options. To select the online
data display options, from the Tools menu, select Statistics View > Online Data
Display Options.
The Online Monitoring window displays real-time statistics for selected storage
objects. The display can include disks, subdisks, and volumes. To access the
Online Monitoring window, from the Tools menu, select
Statistics View > Online Monitoring.
Throttling tasks
The Task Throttling window enables you to determine the priority of certain
tasks.
Using task throttling causes an operation to pause for the specified amount of time
whenever a disk I/O is performed, allowing the CPU to perform other tasks.
By selecting the Throttle all tasks check box, you apply the priority in the text
field to all SFW tasks. To apply different priorities to individual tasks, clear the
check box, type the number of milliseconds in each task's text field, and click OK.
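The throttling behavior described above can be sketched as a pause after each disk I/O; the function and block list below are illustrative stand-ins, not SFW code:

```python
import time

# Illustrative task throttling: pause for a configured number of
# milliseconds after each disk I/O so the CPU can perform other work.

def throttled_copy(blocks, delay_ms):
    """Process each block, sleeping delay_ms after each I/O."""
    done = []
    for b in blocks:
        done.append(b)                   # stand-in for one disk I/O
        time.sleep(delay_ms / 1000.0)    # throttle pause between I/Os
    return done

result = throttled_copy([1, 2, 3], delay_ms=1)
```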
Historical statistics
To perform historic data collection, select logging options and objects, begin
logging, and analyze the data. You may also want to stop logging after you have
analyzed the information.
Selecting logging options
The first task in the setup process is to configure the settings in the Historical
Statistics Settings window.
Navigation path:
Input:
The next task in the setup for the historical statistics is to select the storage objects
that you want to monitor and start the historical statistics data collection.
Navigation path:
Input:
After you have made your selection and clicked OK, the historical data collection
begins. This data collection continues in the background until one of the following
occurs:
You stop data collection with the Stop Historical Data Collection option.
SFW is stopped.
The computer is rebooted.
Read and write requests per second and blocks per second are quick indicators of
the current (and ultimately sustainable) transfer rates.
SFW is fully integrated into the Windows OS and uses Windows performance
counters.
Capacity monitoring in SFW alerts you when any volume reaches certain size
thresholds. You configure capacity monitoring settings on a volume-by-volume
basis by right-clicking the volume and selecting Capacity Monitoring. By
default, capacity monitoring is off for all volumes.
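Conceptually, capacity monitoring compares a volume's usage against configured thresholds. The warning and critical percentages below are invented for illustration and are not SFW defaults:

```python
# Illustrative capacity-threshold check; the 80/90 percent levels are
# made-up examples, not SFW's default settings.

def capacity_alerts(used_mb, size_mb, warn_pct=80, critical_pct=90):
    """Return the alerts a volume would raise at its current usage."""
    pct = 100 * used_mb / size_mb
    alerts = []
    if pct >= warn_pct:
        alerts.append("warning")
    if pct >= critical_pct:
        alerts.append("critical")
    return alerts

alerts = capacity_alerts(used_mb=850, size_mb=1000)   # 85% full
```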
Dynamic relayout eliminates the need to create a new volume in order to obtain
a different volume layout. Relayout lets you convert an existing volume to any
of the layouts that you can select when creating a volume.
Supported transformations
By using dynamic relayout, you can change the layout of an entire volume or a
specific plex. Use dynamic relayout to change the volume or plex layout to or
from:
Concatenated
Striped
The volume relayout feature is implemented through the Add Mirror window.
That window has a section called Choose the layout.
Select:
Navigation path:
Input:
Using SmartMove
What is SmartMove?
The performance of mirror operations and subdisk moves can be enhanced with
the SmartMove feature. SmartMove helps reduce the resynchronization time
required by mirror operations and subdisk moves. The resynchronization time is
reduced by using the NTFS file system metadata to resynchronize only selected
regions. You can improve the performance of operations that involve mirrors, like
adding a mirror to a volume, off-host backup, and array migration, by using the
SmartMove feature.
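The SmartMove idea of consulting file system metadata can be sketched as follows. The allocation bitmap here is a plain list of booleans standing in for NTFS metadata, purely for illustration:

```python
# Illustrative SmartMove-style resync: consult a file system allocation
# bitmap and copy only the regions that are actually in use. The bitmap
# here is a plain list of booleans, not real NTFS metadata.

def smart_resync(source_plex, target_plex, allocation_bitmap):
    """Copy only allocated regions; return how many were copied."""
    copied = 0
    for i, in_use in enumerate(allocation_bitmap):
        if in_use:                         # skip free (unallocated) regions
            target_plex[i] = source_plex[i]
            copied += 1
    return copied

src = [b"d0", b"d1", b"d2", b"d3"]
dst = [b"", b"", b"", b""]
bitmap = [True, False, True, False]        # only regions 0 and 2 are in use
copied = smart_resync(src, dst, bitmap)    # half the regions are skipped
```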
Lab exercises and lab solutions for this lesson are located in the following
appendices:
Appendix A provides step-by-step lab instructions.
Appendix B provides complete lab instructions and solutions.
Lesson 9
Administering DMP
DMP is the method that SFW uses to manage two or more hardware paths to a
single disk in a storage array. A path is the connection between a computer and a
disk. In a storage area network (SAN), a path consists of a host bus adapter
(HBA), Fibre Channel cabling, a switch, an array controller, and a disk.
DMP modes
The paths on an array are set up to work in two ways: either in an Active/Active
mode, which provides load balancing of the data between multiple paths, or in an
Active/Passive mode, in which only one path is active and any remaining paths are
backups.
Active/Active: DMP performs load balancing by allocating the data transfer
across the possible paths. For example, if DMP implements a round-robin
algorithm, each path is selected in sequence for each successive data transfer to
or from a disk. That is, if two paths, A and B, are active, the first disk transfer
occurs on path A, the next on path B, and the next on path A again.
Active/Passive: A path is designated as the preferred path, and it is always
active. The other paths act as backups that are called into service if the current
operating path fails.
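The round-robin example above (alternating transfers between paths A and B) reduces to cycling through the active paths; a minimal sketch:

```python
import itertools

# Illustrative Active/Active round-robin path selection: each successive
# data transfer goes out on the next path in sequence.

def round_robin(paths):
    """Yield active paths in round-robin order for successive transfers."""
    return itertools.cycle(paths)

selector = round_robin(["A", "B"])
order = [next(selector) for _ in range(4)]   # paths used for 4 transfers
```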
DMP mode considerations
When you specify the DMP mode, consider the following points:
Refer to the documentation for your storage array to determine which DMP
mode it supports.
Note that after the appropriate array setting is made, all the disks in an array
have the same load balancing setting as the array.
Note that if the array is set to Active/Active, you can change the setting on an
individual disk so that it has a different load balancing setting from the one on
its array. You cannot change the load balancing setting if the array is set to
Active/Passive.
DMP DSMs
SFW offers dynamic multipathing as DMP device-specific modules.
Device-specific modules (DSMs)
DSMs are designed to support a multipath disk storage environment set up with
the Microsoft multipathing input/output (MPIO) solution.
The Microsoft MPIO solutions are designed to work in conjunction with DSMs
written by vendors. The MPIO driver package does not, by itself, form a complete
solution. Microsoft provides a sample DSM, which is designed to provide a
software interface between the multipath driver package and the hardware device.
The MPIO driver package includes generic code for vendors to adapt to their
specific hardware device so that usage and performance of the device can be
improved. Device-specific information is abstracted and exported to the bus driver
and to the disk objects under its control.
This joint solution allows vendors to design hardware solutions that are tightly
integrated with the Windows operating system, and also enables Microsoft to
correctly accommodate the non-generic characteristics of each vendor's storage
device (such as whether there are multiple active controllers or the controllers have
only standby capability), without having to design the MPIO solution in
anticipation of each possible difference. Compatibility with both the operating
system and other vendor storage devices is ensured by requiring that vendors
meet a set of standards (the Microsoft Logo program) designed to help ensure
proper vendor device functionality.
Use the vxdmpadm command to manage DSMs from the command line. The
keywords for vxdmpadm are:
dsminfo
arrayinfo
deviceinfo
pathinfo
arrayperf
deviceperf
pathperf
setattr dsm
setattr array
setattr device
setattr path
The disk_name parameter can contain the device name (such as Harddisk2) or
the internal disk name (such as Disk2). To specify the internal disk name, you
must use the -g option (for example, vxdmpadm -gDG1 arrayinfo Disk2).
The hashes in the p#c#t#l# parameters correspond to the port, channel, target,
and LUN of a disk.
The output of this command includes the array name, its type, the devices in the
array, and the monitor interval time. The output can also display tunable
parameters that affect the testing and failover of DMP paths.
For example, to display the array information for the array in which Harddisk5
participates, type:
vxdmpadm arrayinfo Harddisk5
You can set array properties by using the vxdmpadm setattr array
command.
This command sets the load balance policy and primary path of the array to which
the designated disk belongs. It also allows you to set tunable parameters (control
timer settings) that affect the testing and failover of the paths. The following
attributes apply:
loadbalancepolicy={FO|RR|RS|LQ|WP|LB|BP}
path#=state#
path#=weight#
blockshift=#
primarypath=#
testpathretrycount=#
scsicmdtimeout=#
kernalsleeptime=#
failoverretrycount=#
Managing disks
You can display information about a hard disk in an array by using the vxdmpadm
deviceinfo command.
This command displays the device name, the internal disk name, number of paths,
type, and status.
For example, to display DMP-related information about Harddisk5 and
Harddisk6, type:
vxdmpadm deviceinfo Harddisk5 Harddisk6
You can set disk properties by using the vxdmpadm setattr device
command.
For example, to set the properties for Harddisk6 in the array, type:
vxdmpadm setattr device loadbalancepolicy=FO
primarypath=1-1-0 Harddisk6
When performance testing is complete, you can clear the statistics counters by
using the iostat, cleardeviceperf, cleararrayperf, or
clearallperf option with the vxdmpadm command.
When you first set up an array under DMP, you must ensure that you have the load
balancing setting you want for the paths in the array. After the setup is complete,
all of the disks in the array have the same load balancing setting by default.
path is favored for data transfer. If two or more paths have the same weight and
are the lowest weight of all paths, then these paths are used each in turn, in
round-robin fashion, for the data transfer. For example, if you have three active
paths, path A with weight of 0, path B with weight of 0, and path C with
weight of 9, DMP DSMs use path A for one data transfer and then use path B
for the next. Path C is in standby mode and is used if path A or path B fails.
Least Blocks: This option selects the path with the least number of blocks of
I/O in its queue for the next data transfer. For example, if you have two active
paths, path A with one block of I/O and path B with none, DMP DSMs select
the path with the least number of blocks of I/O in its queue, path B, for the next
data transfer.
Balanced Path: This option is designed to optimize the use of caching in disk
drives and RAID controllers. The size of the cache depends on the
characteristics of the particular hardware. Generally, disks and LUNs are
logically divided into a number of regions or partitions. I/O to and from a given
region is sent on only one of the active paths. Adjusting the region size to be
compatible with the size of the cache is beneficial so that all the contiguous
blocks of I/O to that region use the same active path.
Fail Over Only (Active/Passive): With this option, you can specify one path to
be used for data transfer. The specified path is called the Primary Path, and is
the only path used for data transfer. This option does not provide load
balancing among paths.
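For example, the Least Blocks policy described above reduces to choosing the active path with the smallest I/O queue; a minimal sketch with invented queue depths:

```python
# Illustrative Least Blocks selection: pick the active path with the
# fewest queued blocks of I/O. Path names and queue depths are made up.

def least_blocks_path(queues):
    """queues maps path name -> blocks of I/O currently queued."""
    return min(queues, key=queues.get)

path = least_blocks_path({"A": 1, "B": 0})   # path B has an empty queue
```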
If an array is set to Active/Active and one or more of its disks has already been set
to Active/Passive, you can change those disks back to Active/Active.
Several load balancing options are available when using the vxdmpadm
command. Use the loadbalancepolicy attribute with the applicable option.
Control timer settings for an array are a set of tunable parameters that affect
the testing of a path's status or health.
The vxdmpadm command has four options for the control timer:
Test Path Retry Count
SCSI Command Timeout
Kernel Sleep Time
Failover Retry Count
Lab exercises and lab solutions for this lesson are located in the following
appendices:
Appendix A provides step-by-step lab instructions.
Appendix B provides complete lab instructions and solutions.
Appendix A
With standard mirroring, Volume Manager does not include any feature that
enables different management for volumes mirrored across sites. Therefore, there
is no difference between a volume mirrored locally and a volume mirrored across
multiple sites with standard mirroring. For example, the hot relocation daemon is
unable to determine which disks belong to which sites and may easily replace
a failed disk at a remote location with a disk at the local site.
With SF 6.0, new features are added to Volume Manager to make it site-aware. The
remote mirroring features in Volume Manager are enabled by an enterprise license.
These features are described in detail in the following section.
Commands such as vxdg and vxassist have new options that enable the user
to set site-based allocation of the physical resources.
During volume creation, you can specify the volume site type as Site Separated.
This ensures that the volume is restricted to the disks on the selected site.
You assign a disk to a site by tagging the disk with the site name. You can use
arbitrary tag names when tagging disks for other purposes, such as support for
hardware cloning. For site awareness, you must use the site tag. When you assign
a value to this tag, it is considered the name of the site to which the disk
belongs. The commands used to set or modify the disk tags related to site
awareness are listed on the slide.
The syntax for these commands is as shown on the slide. For more information,
refer to the Veritas Storage Foundation Administrator's Guide.
You can use the vxassist make command to create a volume for site-based
allocation.
Use the vxassist mirror command to add a mirror to an existing site-based volume.
Appendix B
Data Insight for Storage manages unstructured data growth and implements
chargeback, helping you to reclaim misused storage.
Disaster Recovery Advisor conducts automated, non-disruptive HA/DR testing
across your heterogeneous environment.
The slide lists the supported operating systems for the Management Server and the
Managed Hosts. Many UNIX/Linux operating systems are also supported.
This slide lists the system requirements for the Management Server and the
Managed Hosts.
The Web browsers that the Veritas Operations Manager console supports are:
Internet Explorer versions 6.x to 9.x
Firefox 3.x to 6.x
Veritas Operations Manager uses the default ports as displayed on the slide to
transfer information.
You can install the Veritas Operations Manager Management Server on a Windows
host using the Veritas_Operations_Manager_CMS_4.1_Win.exe file.
Accept the End User License Agreement, and then click Install.
You can install the Veritas Operations Manager host component on a Windows host
by running the .msi file on it.
1 Log on to the target host as a user with administrator privileges.
2 Make sure that the host where you plan to install host management meets or
exceeds system and operating system requirements.
In the Add Hosts wizard panel, select the Agent option to add the host(s) to
the Management Server. Enter the host details, and then click Next.
In the Results panel, verify that the host has been added successfully. Click
OK.
Verify that the host management program is installed and the required service has
started.
For Internet Explorer 7.0, or later, on Windows Server 2008, if the Web pages are
not automatically displayed, add each Web site to the Trusted Sites list.
To set up Internet Explorer 7.0, or later, on Windows Server 2008 for Veritas
Operations Manager, perform the following steps:
1 In Internet Explorer, select Tools > Internet Options.
2 Select the Security tab.
3 Click Sites to add the following Web sites:
https://hostname:5634/ - URL to configure Veritas Operations Manager
https://hostname:14161/ - URL to launch Veritas Operations Manager
where hostname is the name of the Management Server host.
To set up Firefox 3.0, or later, for Veritas Operations Manager, perform the
following steps:
1 On the security exception page that is displayed when you attempt to open a
Veritas Operations Manager Web page, click the Or you can add an
exception link.
2 Click Add Exception.
Note: For Firefox 3.6.x, or later, users should first click the I Understand the
Risks button before they click the Add Exception button.
3 In the Add Security Exception dialog box, verify that the location is one of
the following:
https://hostname:5634/ - URL to configure Veritas Operations Manager
https://hostname:14161/ - URL to launch Veritas Operations Manager
where hostname is the name of the Management Server host.
4 Click Get Certificate.
5 Select the Permanently store this exception check box.
6 Click Confirm Security Exception.
If a red arrow is displayed after the Administer link, you cannot perform
Windows managed host administration.
Index
A
active/active 9-4
active/passive 9-4
D
DWDM A-2
dynamic disk
benefits 1-4, 1-11
dynamic disk group 1-13
dynamic group 1-13
dynamic volume 1-13
H
historical statistics 8-6
displaying 8-8
monitoring 8-7
hot relocation 7-15
I
I/O statistics
historical 8-6
monitoring 8-3
viewing 8-4
installation
version release differences 2-14
L
log
dirty region log 5-9
M
mirror
removing 5-4
mirrored 4-5
multipathed disk array 1-5
P
parity
definition 1-15
plex
definition 1-12
preferred plex 5-11
private region 1-10
public region 1-10
R
RAID
definition 1-14
RAID-5 4-6
read policy
preferred plex 5-11
round robin 5-11
relayout 8-12
removing a mirror 5-4
round robin 5-11
S
site awareness
assigning disks to a site A-7
assigning hosts to a site A-8
definition A-2, A-5
site tag A-7
software packages
optional 2-4
spare disk 7-19
split and join 6-14
status description
disk 7-4
volume 7-5
striped 4-4
T
Technical Support
web site 2-14
V
VEA
changing volume layout 8-15
using 2-30
VERITAS Enterprise Administrator 2-28
volume
basic 1-13
definition 1-12
dynamic 1-13
status tags 7-5
volume layout
changing online 8-12
comparing layout types 4-7
concatenation 1-14
mirrored 4-5
mirroring 1-15
parity 1-15
RAID-5 1-15, 4-6
striped 4-4
striping 1-15
volume snapshot
creating 6-6