Unisys e-@ction
Navisphere 5.x
Windows Manager
Administrator’s Guide
UNISYS
Printed in USA
July 2001 6864 5738–001
NO WARRANTIES OF ANY NATURE ARE EXTENDED BY THIS DOCUMENT. Any product or related information
described herein is only furnished pursuant and subject to the terms and conditions of a duly executed agreement to
purchase or lease equipment or to license software. The only warranties made by Unisys, if any, with respect to the
products described in this document are set forth in such agreement. Unisys cannot accept any financial or other
responsibility that may be the result of your use of the information in this document or software material, including
direct, special, or consequential damages.
You should be very careful to ensure that the use of this information and/or software material complies with the laws,
rules, and regulations of the jurisdictions with respect to which it is used.
The information contained herein is subject to change without notice. Revisions may be issued to advise of such
changes and/or additions.
Notice to Government End Users: This is commercial computer software or hardware documentation developed at
private expense. Use, reproduction, or disclosure by the Government is subject to the terms of Unisys standard
commercial license for the products, and where applicable, the restricted/limited rights provisions of the contract data
rights clauses.
EMC Navisphere Server Software for Windows Administrator’s Guide (069-001067)
Unisys e-@ction Navisphere 5.x Server Software for Windows Administrator’s Guide (6864 5969)
Contents
Preface.............................................................................................................................xv
Toolbar.........................................................................................3-9
Workspace.................................................................................3-10
Components of Trees, Connectivity Map, and Detailed View ......3-11
Accessible, Inaccessible, and Unsupported Storage Systems ......3-11
Icons...........................................................................................3-12
Storage-System Menu .............................................................3-16
Main Window..................................................................................3-31
Application Icon ......................................................................3-32
Menu Bar ..................................................................................3-32
Toolbar.......................................................................................3-34
Workspace.................................................................................3-35
Status Bar ..................................................................................3-37
Window Configuration ..........................................................3-37
Preface
Convention: Meaning

this typeface: Text (including punctuation) that you type verbatim: all commands, pathnames, filenames, and directory names. It also indicates the name of a dialog box, field in a dialog box, menu, menu option, or button.

this typeface: Represents variables for which you supply the values; for example, the name of a directory or file, your username or password, and explicit arguments to commands.

↵: Represents the Enter key. (On some keyboards this key is called Return or New Line.)
1
About EMC Navisphere Manager
Terminology

Term: Meaning

C-series storage system: A C3000, C2x00, C1900, or C1000 series storage system.

Non-RAID Group storage system: A storage system whose SPs are running Core or Base Software without RAID Group functionality.

RAID Group storage system: A storage system whose SPs are running Core or Base Software with RAID Group functionality.

Shared storage system: A storage system with the EMC Access Logix™ option, which provides data access control (Storage Groups) and configuration access control. A shared storage system is always a RAID Group storage system.

Unshared storage system: A storage system without the EMC Access Logix option.
[Figure: A Navisphere management station running the Graphical User Interface, connected to managed storage systems; one storage system runs an SP Agent on each of its SPs (SP A and SP B)]
[Figure: A Navisphere management station running the Graphical User Interface, connected through hubs to a managed unshared C-series storage system and two managed unshared FC-series storage systems, each with SP A and SP B]
Configuration Management
Manager lets you configure the storage systems on local and remote
servers. Using Manager you can
• Configure the Host Agents and SP Agents.
• Set the configuration, memory, and cache properties for storage
systems.
• Combine physical disks into RAID Groups and create logical
units (LUNs) on those RAID Groups on storage systems with
RAID Group support; or combine physical disks into LUNs on
storage systems without RAID Group support.
• Mirror a LUN to a remote server using the MirrorView™ option
to provide for disaster recovery.
• Change the user-defined parameters of LUNs, such as their
rebuild time and storage processor (SP) owner.
• Update the Core Software and programmable read-only memory
(PROM) code that controls storage systems.
Using Manager on shared storage systems, you can
• Set the access control and fair access properties for a storage
system.
• Combine LUNs into Storage Groups and connect servers to
Storage Groups to provide the servers access to specific LUNs on
the storage system.
• Change the user-defined properties of a Storage Group, such as
the LUNs it contains and its name.
• Copy a LUN at an instant in time using the SnapView™ option.
Manager Architecture
Manager communicates with the Host Agent running on the same or other servers on the network, and also with the SP Agent running in any FC4700 storage system. The management station and the remote Agents communicate with each other over a TCP/IP network.
In a shared storage-system environment, the Host Agent on a server
communicates with ATF or CDE, which in turn communicates with
the Base or Core Software running in a storage system’s storage
processors (SPs). All shared storage systems have Fibre Channel
interfaces to the server, so the Host Agent uses a SCSI protocol over a
Fibre Channel (FC) connection to communicate with the Base or Core
Software.
With FC4700 storage systems, the management station uses a
separate network connection to perform management functions. In
addition to the Agent running on the host, an SP Agent runs in each
SP. The architectural components of a shared storage system are
shown in Figure 1-3.
[Figure 1-3: Architectural components of a shared storage system: a shared FC4700 storage system running Core Software with Access Logix (optional MirrorView, optional SnapView) and an SP Agent, reached over an FC and management connection; and a shared non-FC4700 storage system running Base Software with Access Logix, reached over an FC connection]
No storage systems with SCSI server interfaces are shared storage systems,
and only certain models of storage systems with Fibre Channel server
interfaces are shared storage systems.
[Figure 1-3 (continued): A server running the Host Agent and ATF or CDE, with an FC and management connection to the storage systems]
NOTE: Until you enable data access control for a shared storage system, any
server connected to it can write to any LUN on it. To ensure that servers do
not write to LUNs that do not belong to them, the procedures below assume
that either just one server is physically connected to the shared storage system
or that just one server has been powered up since the servers were connected
to the storage system. You will use this server (called the configuration
server) to configure the storage system.
3. Enable data access control for the storage system (Chapter 6).
4. Enable configuration access control for the storage system and
enable configuration access for the configuration server
(Chapter 6).
5. Create RAID Groups and LUNs in the RAID Groups (Chapter 7).
6. Connect other servers to the storage system or power up other
servers connected to the storage system.
7. Create Storage Groups and connect each server to its Storage
Group (Chapter 8).
8. Make the LUNs available to the server’s operating system. (See
the Navisphere Server Software Administrator’s or User Guide
for the server’s operating system.)
9. For an FC4700 storage system with the MirrorView remote mirror
option, set up and use remote mirrors (Chapter 9).
10. For an FC4700 storage system with the SnapView snapshot copy
option, set up the snapshot cache and snapshot (Chapter 10).
After you have configured all the storage systems connected to the
configuration server, you can physically connect other servers to the
storage system, or power up the other servers connected to the
storage system.
2
Installing and Running Manager
This manual assumes that you are familiar with the Windows environment
for your management station.
What Next?
Continue to the next section to install the new revision of Manager.
Installing Manager
The host on which you install Manager must have the following
hardware and software:
• Color graphics console with a minimum resolution of 1024 x 768
pixels.
• Windows NT 4.0 operating system with Service Pack 5 or higher
or Windows 2000.
• TCP/IP Services configured with connections to the servers with
storage systems that Manager will manage.
For the latest information on which hosts you can use and the required
software revisions and service packs, refer to the Manager Release Notes.
Any user who can access the management station can change or delete the Manager files you just installed. If you want to limit access to these files, change their permissions.
What Next?
Continue to the next section, Starting a Manager Session.
You will use this server (called the configuration server) to send the storage
system the configuration commands that you issue from Manager on a
Navisphere management station. The Agent configuration file on each SP
(FC4700 Series) or on the configuration server (FC4500 Series) must be set up
to give you configuration access from the management station.
Task: Install the storage systems and connect them to the servers directly or through hubs or switches.
Described in: Storage-system installation and service manual and hub or switch documentation.

Task: Set up the servers whose storage systems you want to manage. (Setup includes installing CDE or ATF, if using, and installing the Host Agent.)
Described in: Server software manual for the server. HBA driver manual for the server.
Any user can run a Manager session from any management station
on which Manager is installed to monitor storage systems. However,
only an authorized user can use Manager to configure or reconfigure
a storage system. A user is authorized if the Agent on the SPs
(FC4700 Series) or on the server (non-FC4700 Series) is set up with
configuration access for the user, as described in Chapter 5.
! CAUTION
The Agent allows more than one Manager session to access the
same storage system at the same time. As a result, two authorized
users are able to configure or reconfigure the same storage system
at the same time, but doing this may damage the data.
To Start a Manager Session
Before starting a session, make sure that all storage systems you want to manage with this session are powered up, and that the Host Agent is running on all servers connected to these storage systems.
1. Log in to the Windows management station as Administrator or as a user with administrative privileges.
2. From the Windows taskbar, follow the path below to start either
Manager or all installed Navisphere management applications:
Manager
Start →Programs →Navisphere version →Navisphere Manager
All management applications
Start →Programs →Navisphere version →Navisphere Enterprise
3. Click anywhere in the Navisphere Manager splash screen or wait
for the screen to close automatically in 3 seconds.
The Main window opens using the values in the default
application configuration file. (For information on the application
configuration, see the section Window Configuration on page 3-36).
Manager first looks for the file containing the list of servers
(hosts) that were managed when you closed your last Manager
session. The default file for this list is
drive:\install_directory\Profiles\username\HostAdmin.txt
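The default path above is built from the drive, the install directory, and the username. A minimal sketch of how the parts combine; the helper below is hypothetical (not part of Navisphere), and the drive and install directory in the example are placeholders:

```python
def default_host_list_path(drive, install_directory, username):
    # Builds drive:\install_directory\Profiles\username\HostAdmin.txt,
    # the default file holding the list of hosts managed in the last
    # Manager session. All three arguments are placeholders you would
    # substitute for your own installation.
    return "{}:\\{}\\Profiles\\{}\\HostAdmin.txt".format(
        drive, install_directory, username)

# Hypothetical example:
# default_host_list_path("C", "Navisphere", "administrator")
# returns 'C:\\Navisphere\\Profiles\\administrator\\HostAdmin.txt'
```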
If you start Manager while an Agent is starting up, Manager may receive a
time-out error from that Agent. If such a time-out occurs, Manager displays a
dialog box informing you of the time-out. Once the Agent is running, you can
either restart Manager or add the server to the list of managed hosts using the
Agent Selection dialog box (page 2-10).
What Next?
Continue to the next section, Setting User Options for Manager.
To Set the User Options for Manager
1. In the Main window, follow the menu path
View →Options
A User Options dialog box opens, similar to the following.
b. In Save File Path, type or select the path to use for the default
application configuration file.
For information on the application configuration, see the
section Window Configuration on page 3-36.
c. In Polling Interval, type or select the number of seconds for
the polling interval.
d. Select the Automatic Polling check box to enable automatic
polling for the session, or clear it to disable automatic polling
for the session.
When automatic polling is enabled for the session, an individual storage system is polled only if automatic polling is enabled for that storage system. You enable automatic polling for an individual storage system on the General tab of the Storage System Properties dialog box for the storage system (page 6-10).
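The two-level polling rule in step d can be sketched as follows. The class and attribute names are hypothetical; they only illustrate the session-level setting and the per-storage-system setting described above:

```python
class StorageSystem:
    def __init__(self, name, auto_poll_enabled):
        self.name = name
        # Per-storage-system setting from the General tab of the
        # Storage System Properties dialog box.
        self.auto_poll_enabled = auto_poll_enabled

class PollingSession:
    def __init__(self, auto_polling, polling_interval):
        # Session-wide settings from the User Options dialog box.
        self.auto_polling = auto_polling          # Automatic Polling check box
        self.polling_interval = polling_interval  # seconds

    def systems_to_poll(self, systems):
        # A storage system is polled only when automatic polling is
        # enabled for BOTH the session and that storage system.
        if not self.auto_polling:
            return []
        return [s for s in systems if s.auto_poll_enabled]

session = PollingSession(auto_polling=True, polling_interval=60)
systems = [StorageSystem("A-9512", True), StorageSystem("B-7733", False)]
# Only "A-9512" qualifies for automatic polling in this example.
```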
What Next?
Continue to the next section, Selecting Storage Systems to Manage.
4. When the Managed Agents box contains all the desired Agents,
click OK.
The dialog box closes, and Manager does the following:
• Adds the newly selected subnets and new Host Agents and SP Agents to the host file.
• Contacts each managed Agent whose location is in the file to
determine the state of the storage systems connected to it.
• For each storage system that it finds, displays a
storage-system icon in the Equipment and Storage trees in
each open Enterprise Storage dialog box.
The application starts searching the subnets for any NAS devices.
When it finds any devices, it displays an icon and the locations for
the devices in the Unmanaged NAS Devices box. The Scanning
subnets status bar tracks the progress of the search.
What Next?
3
Trees, Connectivity Map, and Main Window
This manual assumes that you are familiar with the Windows environment
for your management station.
Trees
Trees show the relationships between the physical and logical
components of managed storage systems. Trees are analogous to the
hierarchical folder structure of Microsoft Windows Explorer.
The Equipment tree shows icons for the physical components of the managed storage systems, and for the servers and their host bus adapter (HBA) ports to which the managed storage systems are connected.
The Storage and Hosts trees show icons for the logical components of
the managed storage systems. The Storage tree shows the icons from
a storage-system viewpoint, and the Hosts tree shows them from a
host viewpoint.
A tree appears in the selected tab in the open Enterprise Storage
dialog boxes in the Main window. The Equipment tree appears in the
Equipment tab; the Storage tree appears in the Storage tab; and the
Hosts tree appears in the Hosts tab, as shown on the following pages.
The managed storage systems are the base components in the
Equipment and Storage trees. These trees display a storage-system
icon for each managed storage system. The managed servers are the
base icons for the Hosts tree. This tree displays a host icon for each
managed server. It also displays an icon for each unmanaged server
connected to a managed storage system.
You can expand and collapse the storage-system or host icons to
show icons for their components (such as SP icons, disk icons, LUN
icons, RAID Group icons) just as you can expand and collapse the
Explorer folder structure. You use the icons to perform operations on
and display the status and properties of the storage systems, their
components, and their host connections.
You select icons on a tree in the same way that you select items in
other Microsoft Windows applications.
To select a single icon:
Click the icon.
To select multiple icons:
Do either of the following:
• Press Shift while left-clicking the first icon and last icon to select
the first and last icon and all icons between them.
• Press Ctrl while left-clicking the icons you want to select.
Connectivity Map
The Connectivity Map shows the logical connections for each
currently managed storage system and the hosts using its storage. It
uses the same icons as the tree views to represent the storage system
and hosts. You use the icons to perform operations on and display the
status and properties of the storage systems, their components, and
their host connections.
The managed storage systems and the hosts to which they connect
are the base components in the Connectivity Map. The map displays
a storage-system icon for each managed storage system and a host
icon for each host connected to a managed storage system. If the
hosts are connected to storage systems through switches, one switch
icon is shown between the hosts and storage systems.
You can display:
• The connectivity between hosts and storage systems.
• A detailed view of a storage system.
Detailed View
The Detailed View window provides a graphical view of the
relationships among the servers connected to the selected storage
system and the Storage Groups (shared storage systems only), SPs,
LUNs, RAID Groups, and disks in the storage system.
The Detailed View window uses the same icons as the tree views to
represent servers, storage systems, LUNs, RAID Groups, Storage
Groups, SPs, and disks. You can right-click any of these icons to
display the single-select menu for the component.
To Display a Detailed View
From any tree view, right-click the icon for the storage system and click Detailed View, or in the Connectivity Map, double-click the icon for a storage system.
Toolbar
The buttons on the Detailed View window toolbar change the appearance and content of the window. The toolbar buttons are toggle buttons.

Select View: Wide angle view (view with small icons) or normal view (view with large icons). The default is the wide angle view.

Disk IDs: Yellow label on each disk with the disk ID. The default is not to display disk ID labels.

LUN Devices: File system mappings below each LUN. The default is to display the mappings.

Host Connections: Connection lines from each server to either the Storage Group to which it can perform I/O (shared storage systems) or the SPs to which it is connected (unshared storage systems). The default is to hide these lines.
Container: Storage-System Type: Function

Storage Group container (shared): Represents a Storage Group in the storage system, and contains an icon for each LUN in the group. It identifies the Storage Group it represents by name. Right-clicking a Storage Group container displays the Storage Group menu.

Storage Processor container (unshared): Represents a storage processor (SP) in the storage system, and contains an icon for each LUN owned by the SP. It identifies the SP it represents by name (SP A or SP B). Right-clicking a Storage Processor container displays the SP menu.

Unowned LUNs container (all): Contains an icon for each LUN in the storage system not owned by an SP.

RAID Group container (unshared RAID-Group or shared): Represents a RAID Group in the storage system, and contains an icon for each disk in the group. It identifies the RAID Group it represents by name and type. Right-clicking a RAID Group container displays the RAID Group menu.

Unassigned disk container (unshared RAID-Group or shared): Contains icons for each disk in the storage system that is not assigned to a RAID Group.

Enclosure container (unshared, non-RAID Group): Represents an enclosure in the storage system, and contains an icon for each disk in the enclosure. It identifies the enclosure it represents by the enclosure name. Right-clicking an enclosure container displays the enclosure menu.
Term Explanation
inaccessible Manager has never been able to communicate with the storage system. A storage system can be
inaccessible for any of these reasons:
• The Agent is not running on the server. In this case, Manager displays an error message when you try
to select the server for management. Manager does not display an icon for the storage system that is
inaccessible for this reason.
• The Agent running on the server was started by a user who was not logged in as root or with
Administrative privileges. Manager displays an icon for a storage system that is inaccessible for this
reason, and the icon indicates that the storage system is inaccessible.
• The storage system’s name is wrong in the Agent configuration file on its server. Manager displays an
icon for a storage system that is inaccessible for this reason, and the icon indicates that the storage
system is inaccessible.
unsupported The storage system’s device entry in the Agent configuration file on its server is one that Manager does
not support. Examples are an internal disk on the server and a 7-slot storage system with SCSI disks.
Grey, or green and grey (marker: none): The component and all of its components are working normally.

Faded grey, or faded green and grey (marker: none): The component is a ghost; that is, it is an FC4700 SP that is not managed or is part of a non-FC4700 storage system that is not managed.
The main components of the Equipment and Storage trees are the icons for the managed storage systems, and the main components of the Hosts tree are the icons for the servers connected to managed storage systems. Server icons are described below; storage-system icons are described on page 3-14; and the icons for storage-system components are described on page 3-14.
Icons for Servers
The icons for the managed servers (hosts) and their host bus adapters (HBAs) connected to managed storage systems appear in the Equipment tree. The icons for all servers (managed or unmanaged) connected to managed storage systems appear in all trees and the Connectivity Map.
Figure 3-4 Icon Images and Descriptions for Servers and Server HBAs
You display the properties of a server using the menu associated with
the host icon for the server.
Storage-System Icons
Icons for individual storage systems appear in all the trees and the Connectivity Map. In the Host tree, icons for individual storage systems connected to a host appear under a multiple-storage-systems icon.
Table 3-7 Individual Storage-System Icon Images
Storage-System Descriptions
A storage-system description has the following format:
storage_system_name [type]
where storage_system_name is a name that uniquely identifies the storage system. For a storage system connected to a server running Agent revision 4.X or 5.X, its format is either A-serial# or B-serial#, where A or B identifies either SP A or SP B as the SP used for communications with the storage system, and serial# is the unique serial number of enclosure 0 in an FC-series storage system or the chassis in a C-series storage system.
You can change this name.
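The A-serial# / B-serial# format described above can be split mechanically on the hyphen. A minimal sketch; parse_storage_system_name is a hypothetical helper for illustration, not a Navisphere function:

```python
def parse_storage_system_name(name):
    """Return (communicating SP, serial number) for a default
    storage-system name of the form A-serial# or B-serial#."""
    sp, sep, serial = name.partition("-")
    if sp not in ("A", "B") or not serial:
        raise ValueError("not an A-serial# or B-serial# name: %r" % name)
    # A or B names the SP used for communications with the storage
    # system; serial is the serial number of enclosure 0 (FC-series)
    # or the chassis (C-series).
    return ("SP " + sp, serial)

# parse_storage_system_name("A-94012345") returns ('SP A', '94012345')
```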
If automatic polling for the session (background polling) is enabled, the word
“polling” appears in brackets after the description in each storage-system
icon during a poll operation.
To Assign a Custom Name to a Storage System
1. In the Enterprise Storage dialog box, click the Equipment or Storage tab.
2. Right-click the icon for the storage system whose name you want to change, and then click Set Name.
3. In the Set Storage System Name dialog box, type the new name and click OK.
Changing the name does not affect the Agent configuration file.
Storage-System Menu
You can perform operations on storage systems using the menu associated with the storage-system icon. You can display this menu for single or multiple storage systems.
Option: Use to

Software Installation: Update existing software or install new software on the storage system.

Faults: Display the Fault Status Report for the storage system.

Create Storage Groups: Create Storage Groups on the storage system (shared storage systems only).

Connect Hosts: Connect servers to a Storage Group on the storage system so the servers can perform I/O to the LUNs in the Group.

Manage MirrorView Connections: Add or remove logical connections between storage systems that are physically connected, managed, and have MirrorView installed.

Create Remote Mirror: Create a remote mirror of a LUN in the storage system (FC4700 storage systems only).

Detailed View: Display a graphical view of the relationships between the servers connected to the storage system and storage-system components.

Start Snapshot Session: Start a snapshot session on the storage system (FC4700 storage systems only).

SnapView Summary: Display the status of any snapshots and snapshot sessions for the selected storage system.

Manage NAS (IP4700 series only): Open the Web-based network-attached file server (NAS) device management tool. This menu option appears only for the IP4700 series.
Option: Use to

Software Installation: Update existing software or install new software on all selected storage systems if they are all the same type of storage system.
Storage Groups (Storage tree, Host tree): Storage Groups in the storage system or accessible from the host.

StorageGroupname (Storage tree, Host tree): Individual Storage Group in the storage system or accessible from the host. StorageGroupname is the name of the Storage Group.

PSM LUN (Storage tree, Host tree, Detailed View): LUN in an FC4700 storage system reserved exclusively for storage-system SPs to store critical information.

LUN LUNID [RAID 5; hostnames - devicename] or LUN LUNID [RAID 5; hostnames - devicename - mirrorstatus] (Storage tree, Host tree, Detailed View): RAID 5 LUN in a RAID Group or storage system. LUNID is the ID assigned when you bound the LUN; it is a hexadecimal number. hostnames is a list of the names of each server connected to the storage system. devicename is the device name for the LUN on those servers. See Note at end of table.

LUN LUNID [RAID 3] or LUN LUNID [RAID 3; mirrorstatus] (Storage tree, Host tree, Detailed View): RAID 3 LUN in a RAID Group or storage system. LUNID is the ID assigned when you bound the LUN; it is a hexadecimal number.

LUN LUNID [RAID 1/0] or LUN LUNID [RAID 1/0; mirrorstatus] (Storage tree, Host tree, Detailed View): RAID 1/0 LUN in a RAID Group or storage system. LUNID is the ID assigned when you bound the LUN; it is a hexadecimal number. See Note at end of table.

LUN LUNID [RAID 1; mirrorstatus] (Storage tree, Host tree, Detailed View): RAID 1 LUN in a RAID Group or storage system. LUNID is the ID assigned when you bound the LUN; it is a hexadecimal number.
Table 3-11 Basic Storage Component Icons: Images and Descriptions (cont)
LUN LUNID [RAID 0; mirrorstatus] (Storage tree, Host tree, Detailed View): RAID 0 LUN in a RAID Group or storage system. LUNID is the ID assigned when you bound the LUN; it is a hexadecimal number.

LUN LUNID [Disk; mirrorstatus] (Storage tree, Host tree, Detailed View): Individual disk LUN in a RAID Group or storage system. LUNID is the ID assigned when you bound the LUN; it is a hexadecimal number.

LUN LUNID [Hot Spare] (Storage tree, Host tree, Detailed View): Hot spare in a RAID Group or storage system. LUNID is the ID assigned when you bound the LUN; it is a hexadecimal number.

Unowned LUNs (Storage tree, Host tree, Detailed View): LUNs, such as hot spares, that are not owned by either SP.
Table 3-11 Basic Storage Component Icons: Images and Descriptions (cont)
RAID Group RAIDGroupID [RAIDtype] (Storage tree, Host tree): Individual RAID Group identified by RAIDGroupID in the storage system. RAIDGroupID is the ID assigned when you created the RAID Group; it is a hexadecimal number between 0x00 and 0x1F. RAIDtype is Unbound if no LUNs are bound on the Group. Available RAID types are RAID 5, RAID 3, RAID 1/0, RAID 1, RAID 0, Disk, or Hot Spare. For example, 0x03 [RAID 5].

Disk diskID (Equipment tree, Storage tree, Detailed View): For an FC-series storage system, the disk in the enclosure and slot identified by diskID, which has the format m-n, where m is the enclosure number and n is the slot in the enclosure containing the disk. For a C-series storage system, the disk in the slot identified by diskID, which has the format mn, where m is the letter (A, B, C, D, or E) of the SCSI bus for the slot and n is the position on the bus containing the disk.
Note
If the storage system has the MirrorView option, mirrorstatus indicates the LUN’s remote mirror status, which can be any of the
following:
Mirrored - LUN is the primary image LUN of a remote mirror.
Mirrored/No Secondary Image - Remote mirror does not contain secondary image.
Secondary Copy - LUN is a secondary image LUN for a remote mirror.
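The two diskID formats in the Disk diskID entry (m-n for FC-series, mn for C-series) can be told apart by the hyphen. A minimal sketch under that assumption; parse_disk_id is a hypothetical helper, not part of Navisphere:

```python
def parse_disk_id(disk_id):
    if "-" in disk_id:
        # FC-series format m-n: enclosure number and slot number.
        enclosure, slot = disk_id.split("-", 1)
        return {"series": "FC", "enclosure": int(enclosure),
                "slot": int(slot)}
    # C-series format mn: SCSI bus letter (A-E) and position on the bus.
    bus, position = disk_id[0], disk_id[1:]
    if bus not in "ABCDE":
        raise ValueError("unknown disk ID format: %r" % disk_id)
    return {"series": "C", "bus": bus, "position": int(position)}

# parse_disk_id("0-3") returns {'series': 'FC', 'enclosure': 0, 'slot': 3}
# parse_disk_id("B2") returns {'series': 'C', 'bus': 'B', 'position': 2}
```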
Remote Mirrors (Storage tree): Container for all remote mirrors in the storage system. This icon appears even when no remote mirror instances are defined on the storage system.

Remote Mirror Image imagename - imagetype [state] (Storage tree): imagename is the name of the image. imagetype identifies whether the image is a primary or secondary image. The image can have one of these states:

In-Sync (or identical or congruent): Secondary image is identical to the primary. This state persists only until the next write to the primary image, at which time the image state becomes Consistent.

Consistent: Secondary image is identical to the primary, or it was identical in the past. If the mirror is not fractured, the software will try to make the secondary image In-Sync after receiving no I/O for a given period of time (the quiesce threshold).

Synchronizing: Software is applying changes to the secondary image to mirror the primary, but the current contents of the secondary are not known and are likely not usable.

Out-of-Sync: None of the above; the secondary image requires synchronization with the primary image.

Snapshot cache (Storage tree): Container for SP A’s and SP B’s snapshot caches.

Snapshot cache - SP A (Storage tree): SP A’s snapshot cache, which consists of any LUNs owned by SP A selected to participate in snapshot sessions.

Snapshot cache - SP B (Storage tree): SP B’s snapshot cache, which consists of any LUNs owned by SP B selected to participate in snapshot sessions.

Snapshot sessions (Storage tree): Container for all snapshot sessions running in the storage system. This icon appears even when no snapshot sessions are active in the storage system.

Snapshot session name (Storage tree): Individual snapshot session running in the storage system.
Table 3-13 Menu Options for a Single Basic Storage Component (cont)

LUN LUNID [RAIDtype]
    Unbind LUN - Unbind the LUN, destroying all the data on it and making its disks available for another LUN or RAID Group.
    Update Host Information - Scan SCSI devices (including storage systems) connected to all servers connected to the storage system, and update the Navisphere server information based on the results of the scan.
    Add to Storage Groups - Add the LUN to one or more Storage Groups.
    Create Secondary Image LUN - Create a secondary image of the LUN on another storage system.
    Create a Snapshot - Create a virtual LUN that maintains a snapshot of the data on the LUN at this moment.

SP A or SP B
    Event Log - Display the event log for the storage processor (SP).
    Reset Statistics Logging - Set statistics for LUNs, disks, and storage-system caching to zero.
LUN LUNID [RAIDtype]
    Unbind LUN - Unbind all selected LUNs, destroying all the data on them and making their disks available for another LUN or RAID Group.
    Update Host Information - Scan SCSI devices (including storage systems) connected to all servers connected to the storage system, and update the Navisphere server information based on the results of the scan.

RAID Group RAIDGroupID [RAIDtype]
    Properties - Display the properties of all selected RAID Groups.
    Destroy - Dissolve all selected RAID Groups.

Remote Mirror
    Add Secondary Image - Add a new secondary image for the remote mirror.
    Force Destroy - Destroy the remote mirror when Destroy will not work.
Table 3-15 Menu Options - Single MirrorView and SnapView Components (cont)

Remote Mirror Image
    Synchronize - Start synchronizing an Out-of-Sync secondary mirror image.
    Fracture - Fracture the secondary mirror image from the primary mirror image.

Snapshot Session
    Stop Snapshot Session - Stop the selected snapshot session.

Snapshot
    Start Snapshot Session - Start a snapshot session on the storage system.
Enclosure 0
    DPE (Disk-Array Processor Enclosure) in any FC-series storage system except an FC5000 series.

Enclosure 0 Fan B
    SP fan pack in enclosure 0 in an FC-series storage system with a DPE.

Power Supplies
    Power supplies in the enclosure for an FC-series storage system, or in the storage system for a C-series storage system.
Enclosure n Power Supply A
    Power supply in power supply slot A in enclosure n in an FC-series storage system.

Enclosure n Power Supply B
    Power supply in power supply slot B in enclosure n in an FC-series storage system.

Standby Power Supplies
    SPSs connected to enclosure 0 of an FC-series storage system that supports write caching.

Battery Backups
    BBUs in a C-series storage system that supports write caching.
Enclosure n Fan A, Enclosure 0 Fan B, FAN A, FAN B
    State - Display the state of a fan pack or fan module.

Enclosure 0 SPS A, Enclosure 0 SPS B, BBU
    Properties - Display the state of the SPS or BBU.

Enclosure n Fan A, Enclosure 0 Fan B, FAN A, FAN B (multiple selections)
    State - Display the state of all selected fan packs or fan modules.

Enclosure 0 SPS A, Enclosure 0 SPS B, BBU (multiple selections)
    Properties - Display the state of all selected SPSs or BBUs.
Main Window

[Figure: the Main window, with callouts for the application icon, menu bar, toolbar, storage-system selection filters, the Equipment, Storage, and Host tabs, the workspace, and the status bar.]
Application Icon
The Application icon on the left side of the title bar shows the overall status of all storage systems managed by the current Manager session.

Menu Bar
From the menu bar in the Main window you can display these menus: File, View, Operations, Window, and Help.

File Menu
Select Agents - Change the list of agents that the Manager session uses to determine which storage systems to manage.
Exit - Exit the Manager session and close the Main window.
View Menu
Options - Set the network timeout, the name and location of the host file, the name and location of the save file, and the automatic polling interval, and enable or disable automatic polling (background polling) for the Manager session.

Operations Menu
Automatic Polling - Enable or disable automatic polling (background polling) for the Manager session.
Poll All Storage Systems - Manually poll all managed storage systems; that is, survey them once for status changes.
Software Installation - Update the software on the managed storage systems you select.
Faults - Display a list of any faulted storage systems and their faulted components.
Failover Status - Display the status of the Application Transparent Failover (ATF) or CLARiiON Driver Extensions (CDE) software on the servers connected to the managed storage systems.
SnapView Summary - Display the status of all storage-system snapshots and active snapshot sessions.
Window Menu
Tile Horizontally - Tile horizontally the open Enterprise Storage dialog boxes.
Tile Vertically - Tile vertically the open Enterprise Storage dialog boxes.

Help Menu
Contents & Index - Display the online help table of contents and index.
Toolbar
The buttons on the toolbar in the Main window let you perform operations on all managed storage systems at once. To perform operations on individual storage systems, use the menu associated with the storage-system icon (page 3-16). When you position the cursor over a toolbar button, a brief description of the button appears.
Workspace
The workspace in the Main window contains the dialog boxes that you use to perform storage-system tasks. It always contains at least one Enterprise Storage dialog box, unless you have closed it. You can open additional Enterprise Storage dialog boxes in the workspace. If you have installed any additional Navisphere applications on the management station, another type of dialog box may open in the workspace when you start the application.
The Equipment tab displays the Equipment tree; the Storage tab
displays the Storage tree; and the Hosts tab displays the Hosts tree.
You use the Equipment tree to manage the physical components of
the managed storage systems; the Storage tree to manage the logical
components of the managed storage systems; and the Hosts tree to
manage the LUNs and the storage systems to which the servers
connect.
You perform operations on
• All managed storage systems using the menu bar, or on selected
managed storage systems using the menu associated with the
storage-system icon
• Selected managed storage-system components using the menu
associated with the component’s icon
• Selected servers using the menu associated with the host icon
Status Bar
The status bar in the Main window contains information fields that provide the following:
• Automatic Polling indicator. If Automatic Polling is highlighted, automatic polling is enabled for the session; if it is dimmed, automatic polling is disabled for the session.
• Feedback about application operation.
• A brief description of a toolbar button when you position the cursor over the button.
Window Configuration
When the Main window opens, it uses the default application
configuration values for the following:
• The size and position of the Main Window and any open
Enterprise Storage dialog boxes.
• In the Enterprise Storage dialog boxes, any Filter By and Filter
For settings and the selected tab.
If you change any of these values (for example, you filter for FC4700 storage systems and select the Storage tab), you can save them to either
• the default application configuration file, so future sessions open the Main window with these values, or
• a custom application configuration file.
2. In Save File Path, type or select the path to use for the default
application configuration file.
3. In the Save As dialog box, select the folder to hold the new custom configuration file.
4. In File name, enter the name for the new custom configuration
file.
5. Click Save.
6. In the confirmation window that opens, click Yes.
The current application configuration values are saved to the new
custom application configuration file.
What Next?
• To install software on an FC4700 storage system, continue to
Chapter 4.
• To configure the remote Agent, go to Chapter 5, Configuring the
Remote Agent.
Chapter 4
Installing Software on an FC4700 Storage System
If you select multiple files in this dialog box, the system automatically
encloses all file names in double quotation marks and separates them
from the other file names with a space.
Storage System
    Name of each storage system selected for software installation. Displays a separate listing for each package file selected for installation on that system. (Therefore, if you select three packages to be installed on Storage System A, Storage System A is listed three times.)
Once you click Upgrade, you cannot cancel the installation process.
To view status updates in the Software Operation Status dialog box, you
must enable automatic polling for all storage systems undergoing a
software operation.
Storage System
    Name and icon for each storage system selected for a software operation.
You may want to sort the packages list by status. To do this, click the
Status column header.
• Package name
• Current version of software
What Next? Continue to the next section to commit the software you installed.
4. Click Revert.
Chapter 5
Configuring the Remote Agent
You can use Manager to configure Navisphere 4.3 or higher remote Agents
only. How you do this depends on whether you have FC4700 or non-FC4700
series storage systems.
What Next?
To continue editing the SP Agent configuration file, go to the next
section. If you have finished editing the file, go to Chapter 6 to set
storage-system properties.
Setting a Polling Interval
Polling Interval lets you specify the number of seconds between each poll of the storage system. Valid values are 10, 20, 30, 60, 120, 180, 240, 300, 600, 1200, and 1800.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system for which you want to set the polling interval.
3. Right-click the icon for the SP (A or B), and then click Properties.
4. In Polling Interval, select a valid polling interval value.
5. Click OK in the Agent tab to save your changes and close the SP Properties dialog box.
What Next?
Go to Chapter 6 to set storage-system properties.
Any user who can log in to a host that is a Navisphere management station
can monitor the status of any of the managed storage systems.
where name is the person's username. The format of this name differs depending on whether the person will be using Manager on a local or remote host.
For a local host - The format is user, where user is the person's user account name.
For a remote host - The format is user@hostname, where user is the person's user account name and hostname is the name of the remote host.
For example, if you want to allow user anne to edit the Host Agent
configuration file and configure a host’s storage system using the
Navisphere Manager running on either remote host img01 or remote
host img02, you must add the following entries to the server’s agent
configuration file:
user anne@img01
user anne@img02
For these changes to take effect, you must save the agent configuration file,
and then restart the Host Agent.
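Putting this section's entries together, a hypothetical excerpt of a server's agent configuration file might look like the following. The anne@img01 and anne@img02 entries come from the example above; administrator is an assumed local account name, shown only to illustrate the local-host format:

```
user administrator
user anne@img01
user anne@img02
```

After saving an edit like this, restart the Host Agent so the change takes effect.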
What Next?
You can now use Manager to edit the Host Agent configuration file so the Agent can communicate with the storage system. The Agent tab in the Host Properties dialog box lets you remotely configure a Navisphere Agent (including basic settings, communication channels, and privileged users) on a supported host.
Scanning for Devices
Before the Host Agent can communicate with a storage system, you must add a communication channel (device entry) to the Host Agent configuration file.
If, when you were installing the server software, you edited the Host Agent configuration file to include the entry, device auto auto "auto", you do not need to add communication channel device entries to the Host Agent configuration file. These are created dynamically each time you start the Host Agent. Go to the section, Updating Parameters on page 5-11.
If the Host Agent configuration file was never edited to add device
entries, the Communications Channels list is empty.
Scan SCSI Buses lets you view specific information for all
midrange storage devices and non-midrange storage devices.
b. Click Close to close the dialog box and return to the Host
Properties - Agent tab.
What Next?
To add privileged users, go to Adding Privileged Users on page 5-10.
To make changes to the Communications Channels list, go to
Updating the Communications Channels List on page 5-9.
To Update Host Agent parameters, go to Updating Parameters on
page 5-11.
Adding Devices
The remote Host Agent lets you add new devices to the Communication Channels list.

Deleting Devices
When you delete a device, you remove it from the Communication Channels list. Once removed from the list, the device can no longer be used to manage the storage system.
1. In the Advanced Device Configuration dialog box, select the
device that you want to delete.
2. Click Delete Device.
The device is deleted from the Communication Channels list.
3. Click Close to return to the Agent tab.
4. Click Apply in the Agent tab to save your changes and continue
editing the agent configuration file, or click OK to save your
changes and close the Host Properties dialog box.
Clearing Devices
Clearing devices removes all the current devices from the Communication Channels list.

Adding Privileged Users
Privileged users can configure the storage system, including binding and unbinding LUNs. When you add a privileged user, the system adds the user to the host's agent.config file.
What Next?
Go to the next section to change the polling interval, set the serial line
baud rate for the storage system, or select the size of the log to be
transferred.
Updating Parameters
Updating parameters includes setting the polling interval, the serial line baud rate, and the log entries to transfer. To update parameters, you must have privileges.
Polling Interval - Lets you specify the number of seconds between each poll of the storage system. Valid values are 10, 15, 30, 60, and 120.
Serial Line Baud Rate - Lets you select the serial communication baud rate. Valid values are 9600, 19200, and 38400.
Log Entries to Transfer - Lets you select the log size to be transferred. Valid values are 100, 2048, 5000, and All.
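The three value sets above can be captured in a small validation sketch. This is a hypothetical helper for illustration only (the function name and its use are assumptions; Navisphere validates these values in the Agent tab itself), with the sets taken directly from this section:

```python
# Valid values for the remote Host Agent parameters described above.
VALID_POLLING_INTERVALS = {10, 15, 30, 60, 120}   # seconds between polls
VALID_BAUD_RATES = {9600, 19200, 38400}           # serial line baud rate
VALID_LOG_SIZES = {100, 2048, 5000, "All"}        # log entries to transfer

def check_agent_params(polling, baud, log_size):
    """Return True if all three parameter values are ones this
    section lists as valid."""
    return (polling in VALID_POLLING_INTERVALS
            and baud in VALID_BAUD_RATES
            and log_size in VALID_LOG_SIZES)
```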
What Next?
Continue to Chapter 6 to set storage-system properties.
Chapter 6
Setting Storage-System Properties
When you set up a storage system with SPs, you can change its
general, memory, cache, data access, and configuration access
properties or use the default values for these properties.
For all shared storage systems - The Data Access tab is visible.
For non-FC4700 storage systems - The Configuration Access tab is visible.
For FC4700 storage systems - The Storage tab is visible, and if MirrorView is
installed, the Remote Mirrors tab is visible.
If you want to use read or write caching or create RAID 3 LUNs, you
must set certain storage-system memory and cache properties. If you
are using caching, you may also want to change the time for running
the self-test of each standby power supply (SPS) or the battery
backup unit (BBU).
This chapter describes:
• Setting Storage-System Configuration Access Properties
(non-FC4700 Storage Systems).........................................................6-2
• Setting Storage-System General Configuration Properties........6-10
• Setting Storage-System Memory Properties ................................6-14
• Setting the Cache Properties...........................................................6-21
• Setting the Storage-System Hosts Property .................................6-24
• Setting the Battery Test Time ..........................................................6-30
Configuration Access
Non-FC4700 shared storage systems provide configuration access
control. This feature lets you restrict the server ports that can send
configuration commands to the storage system. We recommend that
you use the storage system’s configuration access control to give this
privilege to one or two servers only.
By default, any user whose username is entered in a Host Agent
configuration file can configure any non-FC4700 shared storage
system connected to the server.
Such a privileged user can perform any configuration task, such as
binding or unbinding LUNs from the management station.
Configuration access control lets you restrict the servers that can send
configuration commands from a privileged user to an attached
storage system. Without configuration access control, any server can
send configuration commands from a privileged user to any
connected storage system.
Configuration access is governed by a management login password
that you set when you set up the storage system.
All servers can send certain LUN configuration commands to the storage
system even when configuration access to the storage system is disabled for
them. These commands set the user-defined properties on the General,
Cache, and Prefetch tabs in the LUN Properties dialog box, which are the
properties listed below.
If configuration access control is not enabled for a storage system, any server
connected to the storage system can send configuration commands to the
storage system.
You will need this password to enable or disable configuration access for
the storage system, and to enable configuration access for a host. If no
one can remember the current password, then you must connect a
management station to the serial port on a storage-system SP to change
the password.
What Next?
Continue to the next section to enable configuration access for
servers.
IMPORTANT: Before you can enable configuration access for a server to the
storage system, you must enable configuration access control for the storage
system (see page 6-4).
2. Under Host Access Status, look for an entry for each initiator
(HBA) connected to the storage system.
If the host access status for the selected host is Disabled and it
should be Enabled, click the Basic tab and repeat the procedure,
To Enable Configuration Access for Servers on page 6-7.
If the host access status for the selected host is Enabled and it
should be Disabled, click the Basic tab and repeat the procedure,
To Disable Configuration Access for Servers on page 6-8.
3. Click OK to close the Properties dialog box.
The servers with access enabled can now send configuration
commands to the storage system. The servers with access
disabled cannot send configuration commands to the storage
system.
What Next?
Continue to the next section, Setting Storage-System General
Configuration Properties.
When enable automatic polling is set (the default) for a storage system,
automatic polling of that storage system occurs only if automatic polling
(background polling) for the Manager session is enabled (that is, only when
Automatic Polling is selected on the Operations menu on the Main window
toolbar). By default, automatic polling for the session is disabled.
You can change the automatic polling interval for all managed storage
systems, but not for an individual one.
What Next?
• If you need to allocate memory for caching or for binding RAID 3 LUNs, continue to the next section, Setting Storage-System Memory Properties.
• If you do not need to allocate memory for caching or binding
RAID 3 LUNs, go to Setting the Storage-System Hosts Property on
page 6-24.
! CAUTION
Before you bind a RAID 3 LUN, do the following:
! CAUTION
Changing the RAID 3 partition size causes the application to reboot
the storage system. This terminates all outstanding I/O to the
storage system.
Write Cache Memory - Sets the size in Mbytes of the write cache
on each SP.
RAID 3 Memory - Sets the size in Mbytes of the RAID 3 memory
partition on both SPs.
4. Under User Customizable Partitions, type the size or move the
slider to adjust the size of each memory partition that you want to
change.
When you do this, Manager reassigns memory in one of two
ways:
• From free memory to a partition whose size you are increasing
• To free memory from a partition whose size you are decreasing
The pie charts reflect the changes in memory assignment.
As a general guideline, we recommend that you make the
write-cache partition about twice the size of the read-cache
partition on each SP. For example, if total memory for each SP is
256, you can assign 150 to the write-cache partition and 75 to the
read-cache partition on each SP. For precise allocation, type the
size instead of using the slider.
5. When you complete the memory assignment, do one of the
following:
• Click Apply to save your changes and leave the dialog box
open so that you can change other storage-system properties.
• Click OK to save your changes and close the dialog box.
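The 2:1 write-to-read guideline above can be sketched numerically. This is a hypothetical helper for illustration only; in practice you set the sizes on the dialog box's sliders or fields, and you may deliberately leave some memory unassigned, as the 150/75 example for 256 Mbytes does:

```python
def suggest_cache_partitions(free_mb):
    """Split an SP's free cache memory (in Mbytes) so the write-cache
    partition is about twice the size of the read-cache partition,
    per the guideline in this section. Allocates all of free_mb."""
    write_mb = (2 * free_mb) // 3
    read_mb = free_mb - write_mb
    return {"write_cache_mb": write_mb, "read_cache_mb": read_mb}
```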
What Next?
Your next action depends on whether you assigned memory to the
read-cache or write-cache memory partitions.
Memory assigned to cache partitions - Continue to the next section,
Setting Storage-System Cache Properties.
Memory not assigned to cache partitions - Go directly to Chapter 7,
which describes how to create LUNs.
Disks
    FC4400/4500, FC4700, FC5600/5700: 0-0 through 0-8
    FC5200/5300: 0-0 through 0-4
    C1900, C2x000, C3x00: A0, B0, C0, D0, E0
    C1000: A0 through A4
Page Size
Page size sets the number of Kbytes stored in one cache page. The storage processors (SPs) manage the read and write caches by pages instead of sectors. The larger the page size, the more contiguous sectors the cache stores in a single page. The default page size is 2 Kbytes.
As a general guideline, we recommend the following page sizes:
• For general file server applications: 8 Kbytes
• For database applications: 2 or 4 Kbytes
High Watermark
    Definition: Percentage of dirty pages in the write cache which, when reached, causes the SPs to begin flushing their write caches.
    Impact: A low value for the high watermark causes the SPs to begin flushing their write caches sooner than a high value.
    Default: 96%

Low Watermark
    Definition: Percentage of dirty pages in the write cache which, when reached, causes the SPs to stop flushing their write caches.
    Impact: A high value for the low watermark causes the SPs to stop flushing their write caches sooner than a low value.
    Default: 80%
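The high and low watermarks described above form a simple hysteresis loop: flushing starts when dirty pages reach the high watermark and stops once they fall to the low watermark. A minimal sketch, assuming the default values and a hypothetical function name (the SPs implement this internally):

```python
def update_flush_state(dirty_pct, currently_flushing, high=96, low=80):
    """Decide whether an SP should be flushing its write cache.
    Between the two watermarks the current state is kept, which is
    what gives the behavior its hysteresis."""
    if dirty_pct >= high:
        return True   # high watermark reached: begin flushing
    if dirty_pct <= low:
        return False  # low watermark reached: stop flushing
    return currently_flushing
```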
SP A Read Caching
    Function: Enables or disables storage-system read caching for SP A.
    Method: Enables or disables the read cache on SP A.

SP B Read Caching
    Function: Enables or disables storage-system read caching for SP B.
    Method: Enables or disables the read cache on SP B.
The minimum write cache and read cache partition size is 1 Mbyte for non-FC4700 storage systems, and 3 Mbytes for FC4700 storage systems.
What Next?
Your next action depends on whether the storage system is a shared
storage system.
Shared storage system - Continue to the next section, Setting the
Storage-System Hosts Property.
Unshared storage system - If you are using caching and want to
change the battery test time, go to the section, Setting the Battery Test
Time on page 6-30; otherwise, go to Chapter 7 to create LUNs on the
storage system.
For FC4700 storage systems, Enforce Fair Access is always enabled, but appears dimmed and is unavailable for change.
This section:
• Describes fair access to storage-system resources
• Explains how to set the enforce fair access property for a
non-FC4700 storage system
What Next?
Your next action depends on which storage system you are setting
up:
For an FC4700 storage system - If you have not set the IP address for
the SPs and ALPAs, go on to the next section, Setting the SP Network
and ALPA Properties (FC4700 Series Only). If you have set them and
you plan to change the battery and use caching, go to the section
The network properties are initially set by EMC service personnel to work at
your site. Do not change any value unless you are moving the SP to another
LAN or subnet. If you change any value, after you click OK or Apply, the SP
will restart and use the new value.
To Set the SP Network Properties
1. In the Enterprise Storage dialog box, click the Equipment or Storage tab.
2. Right-click the SP whose properties you want to change.
3. Click Properties, and then click the Network tab.
5. After specifying the new network name and address settings you
want, click OK or Apply.
6. Click Yes to confirm the change and close the dialog box.
The SP restarts using the new values specified.
What Next?
The SP network properties are independent of other SP properties;
there is no related setting you need to change next. Depending on
your reason for changing this SP’s network properties, you may want
to change one or more network properties of the other SP in this
storage system.
The SCSI IDs are initially set by EMC service personnel to work at your site.
Do not change any value unless you are installing a new SP and need to
change its SCSI IDs from the SP ship values of 0 and 0.
If you change any value, after you click OK or Apply, the SP will restart and
use the new values.
What Next?
The SP port ALPA addresses (SCSI IDs) are independent of other SP
properties; there is no related setting you need to change next.
Depending on your reason for changing this SP’s port SCSI IDs, you
may want to change the IDs of the other SP in this storage system.
If you are using caching and want to change the battery test time,
continue on to the next section, Setting the Battery Test Time; otherwise,
go directly to Chapter 7 to create LUNs on the storage system.
The Battery Test Time dialog box opens.
6. In Test Every, click the day on which you want the test to run.
7. In at, enter the time for the test to start in the format hh:mm, where hh is the hour in 24-hour format and mm is the minutes.
For example, for 2:47 PM, enter 14:47.
8. Click OK to apply the settings and close the dialog box.
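The 24-hour conversion in step 7 can be sketched as follows. The helper and its name are hypothetical, shown only to make the hh:mm format concrete:

```python
def to_battery_test_time(hour12, minute, pm):
    """Convert a 12-hour clock time to the hh:mm 24-hour string the
    Battery Test Time dialog box expects, e.g. 2:47 PM -> "14:47"."""
    hour24 = (hour12 % 12) + (12 if pm else 0)
    return f"{hour24:02d}:{minute:02d}"
```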
What Next?
Continue to Chapter 7 to create LUNs on the storage system.
Chapter 7
Creating LUNs and RAID Groups
You can create LUNs on any storage system with SPs, that is, any
storage system except an FC5000 series storage system (JBOD
configuration). You can create RAID Groups on any storage system
that supports the RAID Group feature.
This chapter describes the following:
• LUNs, LUN RAID Types, and Properties.......................................7-2
• Creating LUNs in a Non-RAID Group Storage System .............7-10
• Creating RAID Groups....................................................................7-20
• Creating LUNs on RAID Groups...................................................7-27
• Verifying or Editing Device Information in the Host Agent
Configuration File (Non-FC4700 storage systems) .....................7-38
LUNs A logical unit (LUN) is a grouping of one or more disks into one span
of disk storage space. A LUN looks like an individual disk to the
server’s operating system. It has a RAID type and properties that
define it.
You can have Manager create standard LUNs using the disks and
default property values that it selects, or you can create your own
custom LUNs with the disks and property values that you select. In a
storage system that supports RAID Groups, you create LUNs on
RAID Groups; therefore, you need to create a RAID Group before you
create a LUN.
RAID Types The RAID type of a LUN determines the type of redundancy, and
therefore, the data integrity provided by the LUN.
The following RAID types are available:
RAID 5 - An individual access array, which provides data integrity
using parity information that is stored on each disk in the LUN. This
RAID type is best suited for multiple applications that transfer
different amounts of data in most I/O operations.
RAID 3 - A parallel access array, which provides data integrity using
parity information that is stored on one disk in the LUN. This RAID
type is best suited for single-task applications, such as video storage,
that transfer large amounts of data in most I/O operations.
RAID 1 - A mirrored array, which provides data integrity by
mirroring (copying) its data onto another disk in the LUN. This RAID
type provides the greatest data integrity at the greatest cost in disk
space, and is well suited for an operating system disk.
RAID 0 - An individual access array without parity, which provides
the same individual access features as the RAID 5 type, but does not
have parity information. As a result, if a disk in the LUN fails, the
information on the LUN is lost.
RAID Type     Number of Disks
RAID 5        3 - 16
RAID 1        2
RAID 0        3 - 16
Disk          1
Hot Spare     1
Element Size The element size is the number of disk sectors (512 bytes) that the
storage system can read or write to a single disk without requiring
access to another disk. (This assumes that the transfer starts at the
first sector in the stripe). The element size can affect the performance
of a RAID 3, RAID 5 or RAID 1/0 LUN. For non-FC4700 storage
systems, a RAID 3 LUN has a fixed element size of one sector. For
FC4700 storage systems, a RAID 3 LUN has a fixed element size of 16
sectors.
The smaller the element size, the more efficient the distribution of
data read or written. However, if the element size is too small for a
single I/O operation, the operation requires access to more than one
disk, which degrades performance.
Rebuild Priority The rebuild priority is the relative importance of reconstructing data
on either a hot spare or a new disk that replaces a failed disk in a
LUN. It determines the amount of resources the SP devotes to
rebuilding instead of to normal I/O activity.
Rebuild Priority     Target Time
HIGH                 6
MEDIUM               12
LOW                  18
The rebuild priorities correspond to the target times listed above. The
storage system attempts to rebuild the LUN in the target time or less.
The actual time to rebuild the LUN depends on the I/O workload,
the LUN size, and the LUN RAID type.
For a RAID Group with multiple LUNs, the highest priority specified
for any LUN on the group is used for all LUNs on the group. For
example, if the rebuild priority is High for some LUNs on a group
and Low for the other LUNs on the group, all LUNs on the group will
be rebuilt at High priority.
You set the rebuild priority for a LUN when you bind it, and you can
change it after the LUN is bound without affecting the data on the
LUN.
Verify Priority The verify priority is the relative importance of checking parity
sectors in a LUN. If an SP detects parity inconsistencies, it starts a
background process to check all the parity sectors in the LUN. Such
inconsistencies can occur after an SP fails and the LUN is taken over
by the other SP. The priority determines the amount of resources the
SP devotes to checking parity instead of to normal I/O activity.
Valid verify priorities are ASAP (as soon as possible), HIGH,
MEDIUM, and LOW. A verify operation with an ASAP or HIGH
priority checks parity faster than one with a MEDIUM or LOW
priority, but may degrade storage-system performance. The default
priority is LOW, and though a verify with this priority may take
many hours, it is adequate for most LUNs.
You set the verify priority for a LUN when you bind it, and you can
change it after the LUN is bound, without affecting the data on the
LUN.
Default Owner The default owner is the SP that assumes ownership of the LUN
when the storage system is powered up. If the storage system has two
SPs, you can choose to bind some LUNs using one SP as the default
owner and the rest using the other SP as the default owner. The
primary route to a LUN is the route through the SP that is its default
owner, and the secondary route is through the other SP.
LUNs that are not currently owned by an SP are unowned. A hot spare that is
not in use is an unowned LUN.
Enable Read Cache Enable read cache enables (default) or disables read caching for a
LUN. For a LUN with read caching enabled to actually use read
caching, the read cache on the SP that owns the LUN must also be
enabled. If the read cache for the SP owning the LUN is enabled, then
the memory assigned to that read cache is shared by all LUNs that are
owned by that SP and have read caching enabled.
Generally, you should enable read caching for every RAID type that
supports it. If you want faster read performance on some LUNs than
on others, you may want to disable read caching for the lower
priority LUNs.
You enable or disable read caching for a LUN when you bind it. You
can also enable or disable read caching after the LUN is bound
without affecting its data.
Enable Write Cache Enable write cache enables (default) or disables write caching for a
LUN. For a LUN with write caching enabled to actually use write
caching, the write cache for the storage system must also be enabled.
If the storage-system write cache is enabled, then the memory
assigned to the write cache is shared by all LUNs that have write
caching enabled.
Generally, you should enable write caching for every RAID type that
supports it (especially for a RAID 5 or RAID 1/0 LUN). If you want
faster write performance on some LUNs than on others, you may want
to disable write caching for the lower priority LUNs.
You enable or disable write caching for a LUN when you bind it. You
can also enable or disable write caching after the LUN is bound,
without affecting its data.
Enable Auto Assign Enable auto assign enables or disables (default) auto assignment for a
LUN. Auto assignment controls the ownership of the LUN when an
SP fails in a storage system with two SPs.
With auto assignment enabled, if the SP that owns the LUN fails and
the server tries to access that LUN through the second SP, the second
SP assumes ownership of the LUN so the access can occur. The
second SP continues to own the LUN until the failed SP is replaced
and the storage system is powered up. Then, ownership of the LUN
returns to its default owner.
If auto assign is disabled in the previous situation, the other SP does
not assume ownership of the LUN, so the access to the LUN does not
occur.
If you are running Application Transparent Failover (ATF) software
on a UNIX server connected to the storage system, you must disable
auto assignment for all LUNs that you want the software to fail over
to the working SP when an SP fails.
You enable or disable auto assignment for a LUN when you bind it.
You can also enable or disable it after the LUN is bound, without
affecting the data on it.
LUN properties are not available for the Hot Spare RAID type because it is
simply a replacement disk for a failed disk in a LUN.
Alignment Offset Alignment offset aligns the host Logical Block Address (LBA)
to a stripe boundary on the LUN, which improves storage-system
performance. Problems can arise when a host operating system
records private information at the start of a LUN. This private data
can shift the RAID stripe alignment, so that data I/O crosses a RAID
stripe boundary and storage-system performance is degraded.
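To make the stripe-crossing problem concrete, here is a small sketch (hypothetical values and function name, not Navisphere code) showing how a 63-sector private-data area at the start of a LUN can push otherwise aligned I/O across a stripe-element boundary, and how an alignment offset compensates:

```python
# Illustrative sketch (not Navisphere code): how a host metadata area at the
# start of a LUN can push I/O across stripe-element boundaries. The sector
# counts below are hypothetical examples.

def crosses_element(host_lba, length, element_size, alignment_offset=0):
    """Return True if an I/O crosses a stripe-element boundary after the
    alignment offset shifts host LBA 0 back onto a stripe boundary."""
    start = host_lba - alignment_offset
    end = start + length - 1
    return start // element_size != end // element_size

# Suppose the host OS reserves 63 sectors of private data, so user I/O
# begins at LBA 63. Without an alignment offset, an 8-sector write that
# the host considers aligned spans sectors 63..70 and crosses an element:
print(crosses_element(63, 8, 64))                       # True
# With an alignment offset of 63, the same write stays in one element:
print(crosses_element(63, 8, 64, alignment_offset=63))  # False
```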
Table 7-5 Default LUN Property Values for Different RAID Types (partial)
Default owner: SP A for one SP; Auto for two SPs. Auto distributes the LUNs as
equally as possible between the two SPs.
Enable auto assign state: Cleared (for all RAID types)
What Next? What you do next depends on whether the storage system supports
RAID Groups.
For a non-RAID Group storage system - Continue to the next
section, Creating LUNs in a Non-RAID Group Storage System on
page 7-10.
For a RAID Group storage system - Go to the section Creating RAID
Groups on page 7-20.
! CAUTION
Before you bind a RAID 3 LUN, you must assign memory for it to
the RAID 3 memory partition. If this partition does not have
adequate memory for the LUN, you will not be able to bind it.
Changing the size of the RAID 3 memory partition reboots the
storage system.
4. In the RAID Type list, click the RAID type for the new LUN.
Only supported RAID types for the storage system are available.
You cannot change the RAID type without unbinding the LUN
(losing its data), and then rebinding it with a new ID.
5. In the LUN ID list, click the ID for the new LUN.
Each LUN in a storage system has a unique LUN ID, which is a
hexadecimal number. The default ID for the LUN is the smallest
available one. You cannot change the ID without unbinding the
LUN (and thus losing its data), and then binding a new LUN with
the new ID.
6. In the Number of Disks list, click the number of disks to include
in each LUN.
Only numbers supported for the selected RAID type are
available.
7. Click Apply to bind the LUN.
8. In the dialog box that opens, click Yes to confirm the bind
operation.
A LUN icon for the new LUN appears in the Storage tree under
the icon for the SP that owns it.
Binding LUNs may take as long as two hours. Some storage systems
have disks that have been preprocessed at the factory to speed up
binding. You can determine the progress of a bind operation from
Percent Bound on the LUN Properties dialog box (page 11-24).
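The "smallest available ID" default described in step 5 can be sketched as follows; this is an illustrative model of the rule, not Manager's actual implementation:

```python
# Illustrative sketch (hypothetical) of the "smallest available ID" default.
def default_lun_id(used_ids):
    """Return the smallest non-negative integer not already in use."""
    candidate = 0
    while candidate in used_ids:
        candidate += 1
    return candidate

# With LUN IDs 0, 1, 2, and 4 already bound, the default for the next LUN
# is 3, displayed in hexadecimal:
print(hex(default_lun_id({0, 1, 2, 4})))  # 0x3
```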
What Next?
Go to the section Verifying or Editing Device Information in the Host
Agent Configuration File (Non-FC4700 storage systems) on page 7-38.
RAID Type    Recommendations
RAID 5 Binding five disks uses disk space efficiently. In a C-series storage system,
selecting disks on different internal SCSI buses provides the greatest data
integrity.
RAID 3 In a C-series storage system, selecting disks on different internal SCSI buses
provides the greatest data integrity.
RAID 1/0 Disks are paired into mirrored images in the order in which you select them.
The first and second disks you select are a pair of mirrored images; the third
and fourth disks you select are another pair of mirrored images; and so on.
For highest data integrity in a C-series storage system, the first disk you select
in each pair should be on a different internal SCSI bus than the second disk
you select.
RAID 0 In a C-series storage system, selecting disks on different internal SCSI buses
provides the highest data integrity.
RAID 1 None
Disk None
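The RAID 1/0 pairing rule above (disks mirrored in the order you select them) can be sketched as follows; the disk names are hypothetical:

```python
# Sketch of the RAID 1/0 pairing rule: consecutively selected disks are
# paired into mirrored images. Disk names below are hypothetical.
def mirror_pairs(selected_disks):
    """Pair the 1st and 2nd selections, the 3rd and 4th, and so on."""
    return list(zip(selected_disks[0::2], selected_disks[1::2]))

# For highest data integrity in a C-series system, each pair mixes buses:
print(mirror_pairs(["bus0_disk0", "bus1_disk0", "bus0_disk1", "bus1_disk1"]))
# [('bus0_disk0', 'bus1_disk0'), ('bus0_disk1', 'bus1_disk1')]
```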
4. Click Advanced.
The Advanced Bind LUN dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.
5. In the RAID Type list, click the RAID type for the new LUN.
Only supported RAID types for the storage system are available.
You cannot change the RAID type without unbinding the LUN
(thereby losing its data), and then rebinding it with a new ID.
All disks in a LUN must have the same physical capacity to fully use
the storage space on the disks. The physical capacity of a disk bound
as a hot spare must be at least as great as the physical capacity of the
largest disk module in any LUN on the storage system.
c. For each disk under Selected Disks that you do not want in
the LUN, click the disk, and then click ←.
The disk moves into Available Disks.
d. For each disk under Available Disks that you want in the
LUN, click the disk, and then click →.
The disk moves into Selected Disks.
e. When Selected Disks contains all the disks you want in the
LUN, click OK.
f. In the LUN ID list, click the ID for the new LUN.
Each LUN in a storage system has a unique LUN ID, which is
a hexadecimal number. The default ID is the smallest available
one. You cannot change the ID without unbinding it (thereby
losing its data), and then binding a new LUN with the new ID.
9. Under LUN Properties, change any of the user-defined properties
that you want to have new values:
Binding LUNs may take as long as two hours. Some storage systems
have disks that have been preprocessed at the factory to speed up
binding. You can determine the progress of a bind operation from
Percent Bound on the LUN Properties dialog box (page 11-24).
12. If you want additional LUNs on the storage system, repeat steps 3
through 11.
13. When you have bound all the LUNs you want on the storage
system, click Close.
14. Reboot each server connected to the storage system to make the
LUNs in the storage system visible to the server.
A LUN bound with read caching enabled uses caching only if the
read cache is enabled for the SP that owns it (page 6-21). Similarly, a
LUN bound with write caching enabled uses caching only if
storage-system write caching is enabled (page 6-21).
What Next?
Go to the section Verifying or Editing Device Information in the Host
Agent Configuration File (Non-FC4700 storage systems) on page 7-38.
RAID Groups A RAID Group is a set of disks on which you bind one or more LUNs.
Each LUN you bind on a RAID Group is distributed equally across
the disks in the Group.
The RAID Group supports the RAID type of the first LUN you bind
on it. Any other LUNs that you bind on it have the same RAID type.
The number of disks you can have in a RAID Group is determined by
the number of disks available for the RAID type of the LUNs that you
will bind on it (page 7-3).
You can expand a RAID Group by adding one or more disks to it.
Expanding a RAID Group does not automatically increase the user
capacity of already bound LUNs. Instead, it distributes the capacity
of the LUNs equally across all the disks in the RAID Group, freeing
space for additional LUNs.
If you expand a RAID Group that has only one bound LUN with a
user capacity equal to the user capacity of the RAID Group, you can
choose to have the user capacity of the LUN equal the user capacity
of the expanded Group. Whether you can actually use the increased
user capacity of the LUN depends on the operating system running
on the servers connected to the storage system.
If you unbind and bind LUNs on a RAID Group, you may create gaps
in the contiguous space across the Group’s disks. This activity,
fragmenting the RAID Group, leaves you with less space for new
LUNs. You can defragment a RAID Group to compress these gaps
and provide more contiguous free space across the disks.
Defragmentation may also shorten file access time, since the disk
read/write heads need to travel less distance to reach data.
When a disk in a RAID Group is replaced or fails, the rebuild
operation reconstructs the data on the replacement disk or hot spare
one LUN at a time, starting with the first LUN.
What Next?
Your next action depends on whether you want to create standard or
custom RAID Groups.
Standard RAID Groups - Continue on to the next section, Creating
Standard RAID Groups.
Custom RAID Groups - Go to the section Creating Custom RAID
Groups on page 7-23.
To Create Standard RAID Groups
1. In the Enterprise Storage dialog box, click the Equipment or Storage tab.
2. Right-click the icon for the storage system on which you want to
bind LUNs, and then click Create RAID Group.
The Create RAID Group dialog box opens (similar to the
following).
3. In the RAID Group ID list, click the ID for the new RAID Group.
Each RAID Group in a storage system has a unique ID, which is a
hexadecimal number. The default ID is the smallest available
number.
If you change the ID, the RAID Group is destroyed; thus, its
LUNs are unbound and lose all their data. You then recreate the
RAID Group with the new ID.
4. In the Support RAID Type list, click the RAID type for the new
RAID Group.
Only supported RAID types for the storage system are available.
5. In the Number of Disks list, click the number for the new RAID
Group.
Only numbers supported for the RAID type are available.
6. Click Apply to create the RAID Group.
7. In the dialog box that opens, click Yes to confirm the RAID Group
creation operation.
An unbound RAID Group icon for the new RAID Group appears
in the Storage tree under the RAID Groups icon.
8. If you want another RAID Group on the storage system, repeat
steps 2 through 7.
9. When you have created all the RAID Groups you want on the
storage system, click Close.
What Next?
When you have created the RAID Groups you want, go to the section
Creating LUNs on RAID Groups on page 7-27 to create one or more
LUNs on each of them.
To Create Custom RAID Groups
1. In the Enterprise Storage dialog box, click the Equipment or Storage tab.
2. Right-click the icon for the storage system on which you want to
create the RAID Group, and then click Create RAID Group.
The Create RAID Group dialog box opens (similar to the
following).
3. Click Advanced.
4. In the RAID Group ID list, click the ID for the new RAID Group.
Each RAID Group in a storage system has a unique ID, which is a
hexadecimal number. The default ID is the smallest available
number.
If you change the ID, the RAID Group is destroyed; thus, its
LUNs are unbound and lose all their data. You then recreate the
RAID Group with the new ID.
5. From Choose Disks under Disks, either click Automatically to
have Manager choose the disks for the new RAID Group or click
Manually to choose the disks for the new LUN yourself.
6. If Automatically is selected, follow these steps (if Manually is
selected, proceed to step 7):
a. In the Support RAID Type list, click the RAID type for the
new RAID Group.
b. In the Number of Disks list, click the number for the new
RAID Group.
Only numbers supported for the RAID type are available.
c. Go to step 8.
7. If Manually is selected:
a. In the Support RAID Type list, click the RAID type for the
new RAID Group.
b. Under Manual Disk Selection, click Select.
A Disk Selection dialog box opens, similar to the following.
All disks in a RAID Group must have the same physical capacity to
fully use the storage space on the disks. The physical capacity of a
RAID Group that supports the Hot Spare RAID type must be at least
as great as the physical capacity of the largest disk module in any
LUN on the storage system.
d. For each disk under Selected Disks that you do not want in
the RAID Group, click the disk, and then click ←.
The disk moves into Available Disks.
e. For each disk under Available Disks that you want in the
RAID Group, click the disk, and then click →.
The disk moves into Selected Disks.
f. When Selected Disks contains all the disks you want in the
RAID Group, click OK.
8. Under RAID Group Parameters, change any of the user-defined
properties for which you want to change the values:
a. In the Expansion/Defragmentation Priority list, click the
priority for the new RAID Group.
b. Select the Automatically Destroy check box to enable
automatic dissolution of the RAID Group when the last LUN
is unbound, or clear the check box to disable automatic
dissolution.
9. Click Apply to create the RAID Groups.
10. In the dialog box that opens, click Yes to confirm the RAID Group
creation operation.
An unbound RAID Group icon for each new RAID Group
appears in the Storage tree under the RAID Groups icon.
11. If you want additional RAID Groups on the storage system,
repeat steps 2 through 10.
12. When you have created all the RAID Groups you want on the
storage system, click Close.
What Next?
When you have created the RAID Groups you want, continue to the
next section to create one or more LUNs on each of them.
! CAUTION
Before you bind a RAID 3 LUN do the following:
4. In RAID Type, select the RAID Type for the new LUN.
If you change the ID, the LUN is unbound and loses all its data.
You then bind a new LUN with the new ID.
6. In the RAID Group list, click the ID of the RAID Group on which
you want to bind the new LUNs.
The list displays only those RAID Groups available for the
selected RAID type. The RAID Group IDs range from 0 through
243; the RAID Group ID is assigned when the RAID Group is
created.
If you change the ID, the RAID Group is destroyed; thus, its
LUNs are unbound and lose all their data. You then recreate the
RAID Group with the new ID.
If the storage system does not have the RAID Group you want, you can
create one by clicking New, which opens the Create RAID Group dialog
box (page 7-22 for a standard RAID Group; page 7-23 for a custom RAID
Group).
Block Count is not available for all FC4700 storage systems. Refer to
the Manager release notes.
The LUN size property is unavailable for a RAID Group that supports
the RAID 3, Disk, or Hot Spare RAID type because each of these LUNs
uses all the disk space on the RAID Group.
All disks in a LUN must have the same capacity to fully use the storage
space on the disks. The capacity of a disk bound as a Hot Spare must be
at least as great as the capacity of the largest disk module in any LUN on
the storage system.
Binding LUNs can take as long as two hours. Some storage systems have
disks that have been preprocessed at the factory to speed up binding.
You can determine the progress of a bind operation from Percent Bound
on the LUN Properties dialog box (page 11-24).
10. If you want to create another LUN on a RAID Group, repeat steps
3 through 9.
11. When you have created all the LUNs you want on the storage
system, click Close.
All the LUNs you create have read caching enabled. However, they
can only use read caching if the read cache is enabled for the SP that
owns them (page 6-21). If the storage system supports write caching,
all LUNs that you create have write caching enabled. However, they
can only use write caching if storage-system write caching is enabled
(page 6-21).
What Next?
What you do after you have created all the LUNs you want depends
on whether the storage system is shared or unshared.
Unshared storage system - Reboot each server connected to the
storage system to make the LUNs in the storage system visible to the
server, and then go to the section Verifying or Editing Device
Information in the Host Agent Configuration File (Non-FC4700 storage
systems) on page 7-38.
Shared storage system - Go to Chapter 8, Setting Up Access Logix, to
create Storage Groups containing the LUNs you bound.
3. Right-click the icon for the storage system on which you want to
bind LUNs, and then click Bind LUN.
A Bind LUN (RAID Groups) dialog box opens, which is similar
to the following.
4. Click Advanced.
An Advanced Bind LUN (RAID Groups) dialog box opens,
which is similar to the following.
5. In RAID Type, select the RAID type for the new LUN.
This sets the parent RAID type for the new LUN.
6. Under RAID Group Selection, in the RAID Group for new LUN
list, click the ID of the RAID Group on which you want to bind
the new LUNs.
The list displays only those RAID Groups available for the
selected RAID type. The RAID Group IDs range from 0 through
243; the RAID Group ID is assigned when the RAID Group is
created.
Each RAID Group in a storage system has a unique ID, which is a
hexadecimal number. The default ID is the smallest available
number.
If you change the ID, the RAID Group is destroyed; thus, its
LUNs are unbound and lose all their data. You then recreate the
RAID Group with the new ID.
If the storage system does not have the RAID Group you want, you can
create one by clicking New, which opens the Create RAID Group dialog
box (page 7-22 for a standard RAID Group; page 7-23 for a custom RAID
Group).
7. Under LUN Properties, in the LUN ID list, click the ID for the
new LUN.
Each LUN in a storage system has a unique ID, which is a
hexadecimal number. The default ID is the smallest available
number.
If you change the ID, the LUN is unbound and loses all its data.
You then bind a new LUN with the new ID.
8. Under LUN Properties, change any of the user-defined properties
that you want to have new values:
a. In the Element Size list, click the new element size.
b. In the Rebuild Priority list, click the new rebuild priority.
c. In the Verify Priority list, click the new verify priority.
d. Select the Enable Read Cache check box to enable read
caching for the new LUNs, or clear it to disable read caching
for them.
Block Count is not available for all FC4700 storage systems. Refer to
the Manager release notes.
The LUN size property is unavailable for a RAID Group that supports
the RAID 3, Disk, or Hot Spare RAID type because each of these LUNs
uses all the disk space on the RAID Group.
All disks in a LUN must have the same capacity to fully use the storage
space on the disks. The capacity of a disk bound as a Hot Spare must be
at least as great as the capacity of the largest disk module in any LUN on
the storage system.
Binding LUNs may take as long as two hours. Some storage systems
have disks that have been preprocessed at the factory to speed up
binding. You can determine the progress of a bind operation from
Percent Bound on the LUN Properties dialog box (page 11-24).
A LUN that is bound with read caching enabled uses caching only if the read
cache is enabled for the SP that owns it (page 6-21). Similarly, a LUN bound
with write caching enabled uses caching only if storage-system write caching
is enabled (page 6-21).
What Next?
What you do after you have created all the LUNs you want depends
on whether the storage system is unshared or shared.
Unshared storage systems - Reboot each server connected to the
storage system to make the LUNs in the storage system visible to the
server, and then go to the section Verifying or Editing Device
Information in the Host Agent Configuration File (Non-FC4700 storage
systems) (page 7-38).
Shared storage system - Go to Chapter 8, Setting Up Access Logix,
to create Storage Groups containing the LUNs you bound.
What Next?
AIX, HP-UX, Linux, or NetWare on the server views the LUNs in a
storage system as identical to standard single disk drives. For AIX,
HP-UX, Linux, or NetWare to use the LUNs, you must make them
available to the operating system as described in the Navisphere
Server Software Administrator’s or User Guide for the operating
system.
CDE or ATF Is Not Installed and There Were No Bound LUNs Before You Created LUNs
If you are binding LUNs in a storage system connected to a Solaris
server and no LUNs exist on the storage system, edit the device
information in the agent configuration file on each server connected
to the storage system in one of the following ways:
1. Open the agent configuration file.
2. Enter the following line:
device auto auto "auto"
3. Save the agent configuration file.
4. Stop and then start the Agent.
or
1. Open the agent configuration file.
2. Add a clspn entry for each SP in the storage system.
For information on clspn entries, see the Agent manual for UNIX
environments.
3. Comment out (insert a # before) any device entries for the SPs in
the storage system.
4. Save the agent configuration file.
5. Stop and then start the Agent.
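For the first approach, the edited agent configuration file might look like the following excerpt. This is an illustrative sketch only; the file's actual location and full entry syntax are described in the Agent manual for your operating system:

```
# Hypothetical excerpt of a host agent configuration file.
# A single "device auto auto" entry asks the Agent to discover
# storage-system devices automatically instead of listing each
# SP device explicitly.
device auto auto "auto"
```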
What Next?
Solaris on the server views the LUNs in a storage system as identical
to standard single disk drives. For Solaris to use the LUNs, you must
make them available to the operating system as described in the
Navisphere Server Software Administrator’s or User Guide for
Solaris.
What Next?
Solaris on the server views the LUNs in a storage system as identical
to standard single disk drives. For Solaris to use the LUNs, you must
make them available to the operating system as described in the
Navisphere Server Software Administrator’s or User Guide for
Solaris.
What Next?
Windows NT or Windows 2000 on the server views the LUNs in the
storage system as identical to standard single disk drives. For
Windows NT or Windows 2000 to use the LUNs, you must make
them available to the operating system as described in the
Navisphere Server Software Administrator’s or User Guide for
Windows.
8
Setting Up
Access Logix
The Data Access tab of the Storage System Properties dialog box
opens (similar to the following). For information on the
properties in the dialog box, click Help.
4. Select the Access Control Enabled check box, and then click
Apply.
5. Click Yes to confirm that you want to enable data access control.
6. Click OK to apply your changes and close the Properties dialog
box.
Unique ID The unique ID is the unique identifier for the Storage Group. It is
assigned automatically to the Storage Group when you create it. You
cannot change this ID.
Storage Group Name By default, Storage Group Name has the format Storage Group n,
where n is the total number of Storage Groups plus one. You can
change the default name when you create the group or at any later
time.
Sharing Sharing sets the sharing state of the Storage Group to dedicated or
sharable. A sharable state lets you connect the group to multiple
servers, and is primarily used for clustered environments. A
dedicated state lets you connect the group to just one server. The
default setting for a new Storage Group is dedicated.
LUNs in Storage Group LUNs in Storage Group lists the LUNs currently in the Storage
Group. You cannot select the list entries. Each entry in the list consists
of the following fields:
Field Meaning
Capacity User capacity, that is, the amount of space for user data
on the LUN
Connected Hosts Connected hosts lists the servers currently connected to the Storage
Group. You cannot select the list entries. Each entry in the list
consists of the following fields:
Field Meaning
Manager can determine what operating system is running on the server only
if the revision of the Agent running on the server is greater than 4.1.
Used Host Connection Paths: An advanced property that does the following:
• Lists all paths that connect the selected server to the Storage Group
• Tells whether each path is enabled or disabled.
If the check box for a path is selected, the path is enabled. If the check box is cleared, the path is disabled. All paths to a host are either enabled or disabled.
Each path consists of the following fields:

Field       Meaning
HBA         Device name for the HBA in the server connected to the storage system
HBA Port    Unique ID for the port on the HBA connected to the storage system
SP ID       SP A or SP B
You create Storage Groups using the Create Storage Group dialog
box. The procedure in this section tells you how to open this dialog
box from the storage-system menu. You can also open it by clicking
New in the Connect Hosts to Storage dialog box or in the Data
Access tab of the Storage System Properties dialog box.
To Create Storage Groups: Before you can create Storage Groups on a storage system, you must have enabled data access control for the storage system, as described in Creating Storage Groups on page 8-7.
3. In the Storage System list, select the name of the storage system
on which you want to create a Storage Group.
4. If you want to assign the Storage Group your own name, enter
the name in Storage Group.
5. From Sharing, either click Dedicated to allow a single host to
access the new Storage Group, or click Sharable to allow multiple
hosts to access it.
6. If you want to assign LUNs from other Storage Groups to the new
group (which we do not recommend), click Show LUNs in Other
Storage Groups.
The Unassigned LUNs list is updated to include the LUNs in all
Storage Groups on the storage system.
7. Assign one or more LUNs to the new group by selecting the
LUNs from the Unassigned LUNs list and clicking →.
The LUNs move to the Selected LUNs list.
If the Storage Group is dedicated, you can connect it to only one server.
11. Click OK to apply your changes, and close the Connect Hosts
dialog box.
The selected hostname appears in Hosts Connected To Storage
Group in the Create Storage Group dialog box.
12. If you want to create another Storage Group, click Apply;
otherwise, click OK.
13. In the confirmation dialog box that opens, click Yes to create the
Storage Group and connect the selected servers to it.
14. If you clicked Apply, repeat steps 6 through 13 to create another
Storage Group.
15. When you have created all the Storage Groups you want on the
storage system, click OK.
Each server you selected for a Storage Group should now have a
connection to the Storage Group through each server’s HBA ports
(initiators) connected to the storage system.
What Next?
Continue to the next section to verify the connections to the Storage
Groups you just created.
6. On each server tab in the Used Host Connection Paths list, look
for an enabled path for each SP on the storage system with the
Storage Group.
A path is enabled if its check box is selected. If each SP has an
enabled path, then the server is correctly connected to the Storage
Group.
What Next?
For Non-FC4700 storage systems - Continue to the next section
Verifying or Editing Device Information in the Host Agent Configuration
File (Non-FC4700 storage systems) on page 8-15.
For FC4700 storage systems - You must make the LUNs available to
the operating system as described in the Navisphere Server Software
Administrator’s or User Guide for the operating system.
What Next?
AIX, HP-UX, Linux, or NetWare on the server views the LUNs in a
storage system as identical to standard single disk drives. For AIX,
HP-UX, Linux, or NetWare to use the LUNs, you must make them
available to the operating system as described in the Navisphere
Server Software Administrator’s or User Guide for the operating
system.
What Next?
Solaris on the server views the LUNs in a storage system as identical
to standard single disk drives. For Solaris to use the LUNs, you must
make them available to the operating system as described in the
Navisphere Server Software Administrator’s or User Guide for
Solaris.
Auto Detect does not find FC4700 storage systems because they are managed
through their SP Agents and not through the Host Agent on the server.
What Next?
Windows NT or Windows 2000 on the server views the LUNs in the
storage system as identical to standard single disk drives. For
Windows NT or Windows 2000 to use the LUNs, you must make
them available to the operating system as described in the Navisphere Server Software Administrator’s or User Guide for Windows.
9
Setting Up and Using MirrorView
The features in this chapter function only with a storage system that has the
optional MirrorView software installed.
MirrorView Overview
MirrorView is a software application that maintains a copy image of a
logical unit (LUN) at separate locations in order to provide for
disaster recovery; that is, to let one image continue if a serious
accident or natural disaster disables the other.
The production image (the one mirrored) is called the primary image;
the copy image is called the secondary image. Each image resides on
a storage system. The primary image receives I/O from a host called
the production host; the secondary image is maintained by a separate
storage system that can be a standalone storage system or connected
to its own computer system. Both storage systems are managed by
the same management station, which can promote the secondary
image if the primary image becomes inaccessible.
The following figure shows two sites and a primary and secondary
image that includes one LUN.
[Figure: Highly available cluster. File, mail, and database servers (running operating systems A and B) connect through paired adapters to SP A and SP B on two storage systems; the LUNs are organized into Storage Groups, including a Cluster Storage Group and a Mail Server Storage Group.]
MirrorView Terminology
Active state: Condition in which a remote mirror is running normally.
Consistent state (of image): Condition in which a secondary image is identical to either the current primary image or to some previous instance of the primary image. This means that the secondary image is potentially recoverable when it is promoted.
Fracture log: A bitmap, maintained in SP memory, that indicates which portions of the primary image might differ from the secondary image(s). Used to shorten the synchronization process after fractures. Because the log is maintained in SP memory, if the SP that controls the primary image fails, the fracture log is lost and full synchronization of the secondary image(s) is needed.
Note: This is a double-failure scenario.
Image state: Condition of an image. The image states are in-sync, consistent, synchronizing, and out-of-sync. See States.
Inactive state: Remote mirror state in which the mirror is unavailable for host I/O. Attempts to write to or read from a mirror in the inactive state result in the error STATUS_INVALID_DEVICE_STATE.
In-sync state: The state in which the data in the secondary image is identical to that in the primary. On the next I/O, the image state will change to consistent. Also see States.
MirrorView mirroring: A feature that provides the means for disaster recovery by maintaining one or more copies (mirrors) of LUNs at distant locations. MirrorView can work in conjunction with, but is independent of, the other major CLARiiON® software features.
Out-of-sync state: Remote mirror state in which the software does not know how the primary and secondary images differ; therefore a full synchronization is required to make the secondary image(s) usable. Also see Image state.
Promote (to primary): The operation by which the administrator changes a secondary image of a remote mirror to the primary image. As part of this operation, the previous primary image becomes a secondary image. If the previous primary image is unavailable when you promote the secondary image (perhaps because the primary site suffered a disaster), the software does not include it as a secondary image in the new mirror.
Primary image: The LUN that serves as a source for the remote mirrored LUN, which is the secondary image. There is one primary image and zero or one secondary images. A remote mirror is ineffective for recovery unless it has at least one secondary image.
Quiesce threshold (or idle threshold): The time period after which, without I/O from the host, any secondary image in the consistent state and not fractured is marked as being in the in-sync state (the default is 60 seconds). An administrator can promote an in-sync secondary image to primary image with no synchronization action required, whereas promoting a consistent image might lose the latest updates unacknowledged to the host.
Remote mirror: A LUN that is mirrored at different sites. The LUN at one site is designated as the primary image, and a LUN at another site is called a secondary image. The software maintains the secondary image as a byte-for-byte copy of the primary image. If the system at the primary site fails, a secondary image may be promoted to take over the primary role, thus allowing access to the data at a remote location.
Remote mirror image (image for short): The LUN at one site that participates in a remote mirror. The image can be either the primary or a secondary image.
Secondary image: A LUN that contains a copy of the primary image LUN. There can be zero or one secondary images.
States: There are two types of states: remote mirror states and image states. The remote mirror states are inactive, active, and attention. The image states are in-sync, consistent, synchronizing, and out-of-sync. Note that I/O can occur to the primary image only when the remote mirror is in the active state.
Synchronize: The process of updating each secondary image with changes from a primary image. There are several levels of synchronization: synchronization based on a fracture log, synchronization based on the optional write intent log, and full synchronization (virtually a copy). Synchronization based on the fracture or write intent log requires copying only part of the primary image to the secondary image(s).
Synchronizing state: The state of a secondary image in the process of synchronization. The data in the image is not guaranteed to be usable until the synchronize operation completes. Thus, an image in the synchronizing state cannot be promoted to the primary image. Also see States.
Write intent log (WIL): A record of changes that were made to the primary image but have not yet been written to all secondary images. This record is stored in persistent memory on a private LUN reserved for the mirroring software. If the primary storage system fails (not catastrophically), the optional write intent log can be used to quickly synchronize the secondary image(s) when the primary storage system becomes available. This avoids the need for full synchronization of the secondary images, which can be a very lengthy process.
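The image-state transitions defined above, including the quiesce/idle threshold, can be sketched as a small state function. This is a hypothetical illustration, not the MirrorView implementation; the event names are invented for the sketch, and the assumption that a completed synchronization lands in the in-sync state is inferred from the definitions above.

```python
# Hypothetical sketch of secondary-image state transitions (not MirrorView code).

QUIESCE_THRESHOLD = 60  # seconds without host I/O (the documented default)

def next_state(state, event, idle_seconds=0):
    if event == "start_sync":
        return "synchronizing"
    if state == "synchronizing" and event == "sync_complete":
        return "in-sync"                 # assumed landing state after sync
    if state == "in-sync" and event == "host_io":
        return "consistent"              # next I/O moves in-sync to consistent
    if state == "consistent" and event == "idle" and idle_seconds >= QUIESCE_THRESHOLD:
        return "in-sync"                 # quiet, unfractured image marked in-sync
    if event == "fracture_log_lost":
        return "out-of-sync"             # full synchronization now required
    return state

s = next_state("out-of-sync", "start_sync")   # synchronizing
s = next_state(s, "sync_complete")            # in-sync
s = next_state(s, "host_io")                  # consistent
s = next_state(s, "idle", idle_seconds=60)    # back to in-sync
assert s == "in-sync"
```

An image that stays consistent but idle for less than the threshold remains consistent in this model, which is why promoting it might lose the latest unacknowledged updates.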
Cross Mirroring
The primary or secondary role applies to an image on the storage
device. A storage system can maintain both primary and secondary
images on the same system, just not in the same mirror.
MirrorView Example: The following figure (a copy of the previous one) shows a sample remote mirror configuration:
[Figure: Two storage systems, each with SP A and SP B, connected to servers through paired adapters; the LUNs are organized into Storage Groups, including a Cluster Storage Group and a Mail Server Storage Group.]
Restoring the Original Mirror Configuration After Recovery of a Failed Primary Image
If the old primary image becomes accessible after a failure, and the
old mirror is repaired, the old mirror cannot communicate with the
new mirror.
To restore the mirror on the primary host to its original configuration
after the primary image is recovered, do the following:
1. If present, remove the secondary image from the new mirror.
This is the original primary image (LUN xxxx) from the original
mirror.
2. Destroy the original mirror using the Navisphere Manager Force
Destroy menu option.
3. Add a secondary image to the new mirror using the LUN that
was the primary image for the original mirror (LUN xxxx).
4. Synchronize the secondary image.
The following table shows how MirrorView might help you recover
from system failure at the primary and secondary sites. It assumes
that the mirror is active and is in the in-sync or consistent state.
Failure: Host or storage system running primary image fails.
Option 1 - Catastrophic failure; repair is difficult or impossible. On the standby host, the mirror goes to the attention state. At the secondary site, an administrator promotes the secondary image and then takes other prearranged recovery steps required for application startup on the standby host.
Note: Any writes in progress when the primary storage image fails may not propagate to the secondary image. Also, if the remote image was fractured at the time of the failure, any writes since the fracture will not have propagated.

Failure: Host or storage system running secondary image fails.
The mirror goes to the attention state, yet access to the primary image continues. The administrator has a choice: if the secondary can easily be fixed (for example, if someone pulled out a cable), the administrator could have it fixed and let things resume. If the secondary can't easily be fixed, the administrator can reduce the minimum number of secondary images allowed (if the mirror requires a secondary image) to let the mirror become active. The secondary can be fixed and its image added and synchronized later.
Write Intent Log: The write intent log keeps track of writes that have not yet been made
to the secondary mirror image. It provides for fast recovery when the
primary storage system fails. When the primary fails and is
recovered, the write intent log is used to synchronize the data on the
secondary mirror image. Otherwise, a full resynchronization would
be required for the secondary mirror image.
The write intent log consists of two private 128 Mbyte LUNs, one
assigned to each SP in the storage system.
By default, newly created remote mirrors do not use this feature and
therefore, it is not necessary to allocate space for the write intent log.
However, if you decide to use this feature for even one remote mirror,
then it becomes necessary to allocate the private disk space for the
write intent log.
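The saving that log-based synchronization provides over a full copy can be sketched as follows. This is a hypothetical illustration, not MirrorView code: a log (fracture log or write intent log) marks which chunks of the primary changed, so recovery copies only those chunks instead of the entire image.

```python
# Hypothetical sketch of log-based partial synchronization (not MirrorView code).

CHUNKS = 8  # toy LUN divided into 8 chunks

def write(primary, log, chunk, data):
    primary[chunk] = data
    log[chunk] = True            # remember which region changed

def synchronize(primary, secondary, log):
    copied = 0
    for i in range(CHUNKS):
        if log[i]:               # copy only chunks flagged as changed
            secondary[i] = primary[i]
            log[i] = False
            copied += 1
    return copied                # a full synchronization would copy all CHUNKS

primary = ["old"] * CHUNKS
secondary = list(primary)
log = [False] * CHUNKS

write(primary, log, 2, "new")
write(primary, log, 5, "new")
assert synchronize(primary, secondary, log) == 2  # partial copy, not 8
assert secondary == primary
```

If the log itself is lost (for a fracture log, when the controlling SP fails), there is no record of which chunks differ, which is why a full synchronization is then required.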
To Allocate the Write Intent Log: The Allocate Write Intent Log dialog box contains two sets of controls to specify a RAID Group for each SP. The controls behave the same for both SPs, except as specifically noted.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Right-click the storage system for which you want to allocate the
write intent log, click Properties, and then click the Remote
Mirrors tab.
If the write intent log is already allocated, Allocate Write Intent Log
changes to Reallocate Write Intent Log.
The RAID type assigned to the write intent log’s RAID Group will be
RAID 0.
If you selected User Specified, and the RAID Group you select for the
write intent log has no LUNs (it is Unbound), the Select RAID Type
dialog box opens. Assign a RAID Type for the write intent log’s LUNs.
No Previous Write Intent Log: The application attempts to allocate the new space for the write intent log (bind new LUNs) and specify those LUNs as the write intent log LUNs.

Previous Write Intent Log: The application attempts to deallocate (unbind) the current LUNs assigned to the write intent log and allocate (bind) new ones to the log.
Use either the basic or advanced Create Remote Mirror dialog box to
create a remote mirror.
Creating a Remote Mirror - Basic: The Create Remote Mirror basic dialog box lets you create a remote mirror with the minimum amount of information. It assumes the default values for some of the more advanced parameters.
The primary LUN and secondary LUN must have the same number of
blocks. You set the block count when you bind the LUN.
Ensure that a secondary image exists on the secondary system and that it
matches the requirements of the primary image.
Verify the status of the logical connections between storage systems. See Managing MirrorView Connections on page 9-36.
Creating a Remote Mirror - Advanced: The Advanced Create Remote Mirror dialog box lets you supply your own values for the advanced parameters.
The primary LUN and secondary LUN must have the same number of
blocks. You set the block count when you bind the LUN.
Ensure that a secondary image exists on the secondary system and that it
matches the requirements of the primary image.
Verify the status of the logical connections between storage systems. See Managing MirrorView Connections on page 9-36.
You can also identify remote mirrors on a storage system using the
Storage System Properties - Remote Mirrors tab.
Always view and modify remote mirror properties from the primary storage
system. Information displayed from the secondary storage system may not
be accurate, especially if the primary storage system has lost contact with the
secondary storage system.
8. To apply any changes and close the dialog box, click OK. To apply
any changes and leave the dialog box open, click Apply.
9. Use these buttons to access other options:
• Click System Properties to open the Storage System
Properties dialog box for the storage system selected in
Storage System.
• For each LUN selected in the Images LUNs list, click LUN
Properties to open the LUN Properties dialog box.
• Click Promote to promote the secondary mirror image so that
it becomes the primary image for the remote mirror. The
current primary, if accessible, is demoted so that it is now a
secondary mirror image for the remote mirror.
3. In the Storage System box, verify that you have selected the right storage system. If not, select another from the list.
The Storage System list includes only those storage systems that
support MirrorView.
Status: Connected
Description: Connection is usable and fully established. (SP A <-> SP A and SP B <-> SP B)
Action: None needed, unless you want to remove a logical connection.

Status: Partially Connected
Description: Connection is usable, but not fully established. (SP A <-> SP A, but SP B <-> SP B does not exist)
Action: Establish a logical connection between SP B on both storage systems, or remove the existing connection.

Status: Unusable (one-way)
Description: Connection is unusable since the connection is a one-way connection. (SP A > SP A, or SP B < SP B)
Action: Try to establish a two-way connection between one (Partially Connected) or both SPs (Connected), or remove any unusable connections.

Status: Unmanaged
Description: Connection is not verifiable since the storage system is unmanaged.
Action: Manage the storage system and then try to establish logical connections, or remove any connections.

Status: Unknown
Description: The connection status cannot be determined because the storage system is either unmanaged or inaccessible.
Action: Manage the storage system or determine why the storage system is inaccessible.
You can also deactivate a remote mirror using the Remote Mirrors
Properties-General tab.
Advanced Add Secondary Image: The Advanced Add Secondary Image dialog box lets you supply your own values for the advanced parameters.
1. In the basic Add Secondary Image dialog box, click Advanced to
open the Advanced Add Secondary Image dialog box.
The Advanced Add Secondary Image dialog box opens, similar
to the following. For information on the properties in the dialog
box, click Help.
Create Secondary Image LUN: The Create Secondary Image LUN dialog box lets you create a secondary image that is the same LUN size and RAID type as the primary image.
To create a secondary image LUN, there must be a RAID Group on the secondary storage system that matches the RAID type of the primary image. If one does not exist, you can create one.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the primary
image resides.
3. Double-click the RAID Groups icon, and then double-click the
icon for the RAID Group on which the primary LUN resides.
4. Right-click the icon for the LUN for which you want to create a
secondary image LUN, and then click Create Secondary Image
LUN.
The Create Secondary Image LUN dialog box opens, similar to
the following. For information on the properties in the dialog box,
click Help.
7. In Select RAID Group, select the RAID Group for the secondary
image LUN.
Select RAID Group only lists valid RAID Groups for the
secondary LUN. A RAID Group is valid if it is the same RAID
type as the primary LUN or unbound.
If a valid RAID Group does not exist on the secondary storage system,
click New RAID Group to open the Create RAID Group dialog box.
If the RAID Group you select for the secondary image has no LUNs (it is
Unbound), the Select RAID Type dialog box opens. Assign a RAID Type
for the secondary mirror image LUN here.
To Promote a Secondary Image: If the existing primary image is accessible, you should deactivate the mirror and remove the primary image from any Storage Groups before promoting the secondary image.
You can also use the Remote Mirrors Properties - Secondary Image tab
to promote a secondary image.
To Synchronize a Secondary Image: You can also use the Remote Mirrors Properties - Secondary Image tab to synchronize a secondary image.
To Fracture a Secondary Image:
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the secondary image resides.
3. Double-click the Remote Mirrors container node.
Remote mirrors appear under the Remote Mirrors container
node.
4. Right-click a secondary mirror image, and then click Fracture.
You can also use the Remote Mirrors Properties - Secondary Image tab
to fracture a secondary image.
You can also use the Remote Mirrors Properties - Secondary Image tab
to remove a secondary image.
! CAUTION
Force Destroy should only be used in disaster recovery situations.
Normal safety checks are bypassed during the force destroy
operation. The force destroy operation can cause SP failures if used
incorrectly.
To use Destroy, the storage system hosting the primary image must
be managed by the application and you must do the following:
• Remove all secondary images from the remote mirror. See
Removing a Secondary Image from a Remote Mirror on page 9-49.
• Deactivate the mirror. See Deactivating a Remote Mirror on
page 9-39.
To use Force Destroy, the primary image should be removed from
any Storage Groups. See Adding or Removing LUNs from Storage
Groups on page 12-38.
Force Destroy destroys a remote mirror regardless of whether there are any
secondary images.
! CAUTION
Force Destroy should only be used in disaster recovery situations.
Normal safety checks are bypassed during the force destroy
operation. The force destroy operation can cause SP failures if used
incorrectly.
10
Setting Up and Using SnapView
The features in this chapter function only with a storage system that has the
optional SnapView software installed.
SnapView Overview
SnapView is a software application that captures a snapshot image of
a LUN and retains the image independently of any subsequent
changes to the LUN. The snapshot image can serve as a base for
decision support, revision testing, backup, or in any situation where
you need a consistent, stable image of real data.
SnapView can create or destroy a snapshot in seconds, regardless of
the LUN size, since it does not actually copy data. The snapshot
image consists of the unchanged LUN blocks and, for each block that
changes from the snapshot moment, a copy of the original block. The
software stores the copies of original blocks in a private LUN called
the snapshot cache. For any block, the copy happens only once, when
the block is first modified. In summary:
snapshot = unchanged-blocks-on-source-LUN + cached-blocks
As time passes, and I/O modifies the LUN, the number of blocks
stored in the snapshot cache grows. However, the snapshot,
composed of all the unchanged blocks — some from the source LUN
and some from the snapshot cache — remains unchanged.
The snapshot does not reside on disk like a conventional LUN.
However, the copy appears as a conventional LUN to another host.
The snapshot is readable and writable by any other host. This host
can access the copy for data processing analysis, testing, or backup.
A snapshot is accessible for only as long as the snapshot session lasts. If the storage system loses power while the session is running, the snapshot is lost.
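The copy-on-first-write behavior described above can be sketched in a few lines. This is a hypothetical model, not SnapView code; the class and method names are invented for illustration. It shows the summary formula: the snapshot is read from unchanged blocks on the source LUN plus original blocks preserved in the cache, and each block is copied into the cache only once, on its first modification.

```python
# Hypothetical model of copy-on-first-write snapshots (not SnapView code).

class Snapshot:
    def __init__(self, source):
        self.source = source      # live source LUN (a dict: block -> data)
        self.cache = {}           # snapshot cache: original blocks only

    def write_source(self, block, data):
        # Copy the original block into the cache once, then update the LUN.
        if block not in self.cache:
            self.cache[block] = self.source[block]
        self.source[block] = data

    def read_snapshot(self, block):
        # snapshot = unchanged-blocks-on-source-LUN + cached-blocks
        return self.cache.get(block, self.source[block])

lun = {0: "a", 1: "b", 2: "c"}
snap = Snapshot(lun)
snap.write_source(1, "B1")
snap.write_source(1, "B2")            # second write: no new cache copy
assert snap.read_snapshot(1) == "b"   # snapshot still sees the original
assert snap.read_snapshot(0) == "a"   # unchanged block read from the source
assert len(snap.cache) == 1           # only the first change was cached
```

Because only first-time changes are copied, the cache grows with the amount of modified data rather than with the size of the LUN, which is why snapshot creation is nearly instantaneous.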
[Figure: Continuous I/O to the source LUN, with original blocks preserved in the snapshot cache and the snapshot composed from both.]
SnapView Components
SnapView uses three components: a production host, a second host,
and a snapshot session.
The production host
• runs the customer applications that you want to copy.
• owns the source LUN.
SnapView Requirements
SnapView has the following requirements:
• The snapshot cache must be established on one or more LUNs
that do not belong to a Storage Group.
• The source and cache LUNs must be owned by the same SP.
• You can use the snapshot in only one snapshot session at a time.
• The source LUN and snapshot must be assigned to different
Storage Groups. The source LUN Storage Group must be
accessible to the production host and the snapshot Storage Group
to the other host.
• The production and second hosts must run the same operating
system.
Snapshot Session: The following figure shows how a snapshot session starts, runs, and stops.
[Figure: Three panels - before the session starts, at session start (2:00 pm), and at the start of operation (2:02 pm) - showing the production host, the second host, the source LUN, and the snapshot cache (pointers to chunks).]
Setting Up SnapView
Before starting a snapshot session, you must complete the following
tasks.
• Make sure that you have bound LUNs available for the snapshot
cache.
• Create snapshots and, for shared storage systems, assign the
snapshots to Storage Groups.
• Configure the snapshot cache.
The snapshot cache must be established on one or more LUNs that do not belong to a Storage Group.
Snapshot Cache Size: An adequate snapshot cache is essential. Since the snapshot cache stores only blocks of the source LUN’s original data when that data is first updated on the source LUN, a general guideline for cache size is 10% of the size of the LUN you want to copy. For example, if the LUN you want to copy belongs to SP A and is 10 Gbytes in size, you will need a snapshot cache size of at least 1 Gbyte for SP A. See Configuring an SP’s Snapshot Cache on page 10-12.
If you intend to write to the snapshot, make sure that the snapshot cache is
large enough to store these writes, since all writes to the snapshot are stored
in the snapshot cache.
The same SP that owns the snapshot source LUNs must own the
snapshot cache LUNs. The SP manages the cache space and
apportions it to all source LUNs that are involved in a snapshot
session. Therefore, if you plan to create snapshots for LUNs owned
by SP A and LUNs owned by SP B, configure the snapshot cache for
both SPs.
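The sizing guideline above, applied per SP, works out as in this sketch. It is an illustration of the 10% rule of thumb only; the function name is invented, and actual cache needs depend on the write rate to the source LUNs (and on writes to the snapshot itself, which also consume cache).

```python
# Sketch of the 10% snapshot-cache sizing guideline (illustrative only).

def snapshot_cache_gbytes(source_lun_gbytes):
    """Suggested minimum snapshot cache size: roughly 10% of the source."""
    return source_lun_gbytes / 10

# The documented example: a 10-Gbyte LUN owned by SP A needs at least
# a 1-Gbyte snapshot cache configured for SP A.
assert snapshot_cache_gbytes(10) == 1.0

# The cache is managed per SP, so size each SP's cache from the LUNs it owns.
sp_a_luns = [10, 20]   # Gbytes of source LUNs owned by SP A
sp_b_luns = [40]       # Gbytes of source LUNs owned by SP B
assert snapshot_cache_gbytes(sum(sp_a_luns)) == 3.0
assert snapshot_cache_gbytes(sum(sp_b_luns)) == 4.0
```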
4. In Storage System and Snapshot Source LUN, verify that you are
creating the snapshot for the correct source LUN on the correct
storage system.
5. In Snapshot Name, enter a unique name for the snapshot.
If you enter an invalid name and then click OK, the application displays
an error message.
The default value for Server Accessibility is None - no host has access to
the new snapshot.
Before you can access a snapshot, you must add the new snapshot to a
Storage Group and connect a host to the Storage Group.
To Configure an SP’s Snapshot Cache: If you plan to create snapshots for LUNs owned by SP A and LUNs owned by SP B, configure the snapshot cache for both SPs.
4. In Chunk Size (Blocks), select the chunk size (size of each cache
write) for both SPs.
5. In Available LUNs, select the LUNs that you want to add to the
snapshot cache for each SP.
Available LUNs lists only those LUNs that are eligible for
inclusion in the snapshot cache.
7. To remove LUNs from the Member LUNs list, select the LUNs
you want to remove, and then click Remove from Cache.
8. When you have added all the LUNs you want to the snapshot
cache, click OK to apply the changes and close the dialog box.
The snapshot cache LUNs are added to either the SP A Cache or
SP B Cache container in the Storage tree.
We recommend that you assign the snapshot to a Storage Group other than
the Storage Group that holds the source LUN.
If the host that will have access to the snapshot already connects to a Storage
Group, add the snapshot to that Storage Group. If you create a new Storage
Group for the snapshot and then connect the host to the new Storage Group,
the host will be removed from the original Storage Group and will no longer
have access to the LUNs in that Storage Group.
3. Double-click the icon for the SP that owns the snapshot source
LUN.
4. Double-click the icon for the Snapshots container.
5. Right-click the icon for the snapshot you want to add to a Storage
Group, and then click Add to Storage Groups.
6. In available Storage Groups, select the Storage Group to which
you want to add the snapshot.
The Storage Group moves to Selected Storage Groups.
7. Click OK to add the snapshot to the Storage Group.
8. To connect a host to the Storage Group, right-click the Storage
Group, and then click Connect Hosts.
The Connect Hosts to Storage dialog box opens. To connect the
host to the Storage Group, refer to page 8-9.
Destroying a Snapshot
When you destroy a snapshot, the following is true:
• If the snapshot is participating in a snapshot session, the
application stops the session prior to destroying the snapshot.
• If the snapshot belongs to one or more Storage Groups and you
destroy the snapshot, the hosts connected to the Storage Groups
will no longer have access to the destroyed snapshot.
To Destroy a Snapshot
1. Right-click the icon for the snapshot you want to destroy, and then click Destroy Snapshot.
2. In the confirmation dialog box, click Yes to destroy the snapshot.
The application removes the snapshot icon from the Snapshots
container in the Storage tree.
Using SnapView
Use SnapView to start and stop a snapshot session; to display the status and properties of the snapshot cache, snapshot sessions, and snapshots; and to verify that the snapshot cache is the necessary size.
A snapshot is accessible for only as long as the snapshot session lasts. If the storage system loses power while the session is running, the snapshot is lost.
Normal Mode Snapshot Session
A normal snapshot session stores both a copy of the unchanged source LUN data and statistical data, such as the number of writes and reads to the cache. When a session is active, a host can read data from the snapshot.
To Start a Session in Normal Mode
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Right-click the icon for the storage system on which you want to
start the snapshot session, and click Start Snapshot Session.
If you do not enter a unique name and then click OK, the software
displays an error message.
On the second host, if you have installed the admsnap utility (supplied
with the SnapView software), then use admsnap as follows to make the
new session available for use.
If you have not installed admsnap, then you must reboot the second host
or, using some other means, cause it to recognize the new device created
when you started the snapshot session. Installing admsnap is explained
in the admsnap Host Management Utility Administrator’s Guide.
Simulation Mode Snapshot Session
Starting a session in simulation mode helps you verify that you have correctly configured the size of the snapshot cache. Unlike a session run in normal mode, a session run in simulation mode does not store a copy of the unchanged source LUN data. It records only the statistical data, such as the number of writes to the cache that would have occurred had this not been a simulation. This data provides a reasonable approximation of how large the snapshot cache should be for this session. We recommend that you make the snapshot cache larger than required so that you do not run out of cache disk space.
While the session is running, use the Snapshot Session Properties dialog box
to monitor the snapshot cache usage for the SP. (See To Monitor the Snapshot
Cache Usage on page 10-24.)
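To make that approximation concrete, here is a back-of-the-envelope model. It is our own illustration, with invented names and an assumed 20% safety margin, not a Navisphere formula: the cache space a session would have needed is roughly the number of cache writes reported by the simulation times the chunk size.

```python
BLOCK_SIZE_BYTES = 512  # one disk block

def estimated_cache_bytes(cache_writes, chunk_size_blocks, margin_pct=20):
    """Estimate required snapshot cache from simulation statistics:
    each cache write copies one chunk of original data, so the space
    needed is writes x chunk size, padded by a safety margin so you
    do not run out of cache disk space."""
    needed = cache_writes * chunk_size_blocks * BLOCK_SIZE_BYTES
    return needed + needed * margin_pct // 100

# 10,000 simulated cache writes at a 128-block chunk size:
print(estimated_cache_bytes(10_000, 128))   # → 786432000 (about 750 Mbytes)
```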
What Next?
While the session is running, monitor the session to determine if the
snapshot cache is the needed size. See To Monitor the Snapshot Cache
Usage on page 10-24.
To Display Snapshot Properties
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the storage system on which the snapshot resides.
One way to display the snapshot icon is the following:
a. Double-click the Storage Groups icon, and then double-click
the Storage Group on which the snapshot LUN resides.
b. Double-click Snapshot.
3. Right-click the snapshot icon for which you want to display
properties, and click Properties.
To Display Snapshot Cache Properties
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the storage system on which the snapshot cache
resides, and double-click the Snapshot Cache icon.
3. Right-click SP A or SP B, and click Properties.
To Display Snapshot Session Properties
You can view statistics, such as total reads and writes, for an active session. You can also monitor the snapshot cache usage for the SPs and determine if the cache is the necessary size.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the
snapshot session is running, and double-click Snapshot Sessions.
3. Right-click the icon for the session you want to monitor, and click
Properties.
4. To display the statistics for this session, click the Statistics tab.
To Monitor the Snapshot Cache Usage
1. To help determine if the snapshot cache is the necessary size, under Snapshot Cache in the dialog box, check the value for Session Usage for SP A or SP B (%).
If the SP usage registers at 80% to 90%, you may want to increase
the size of the snapshot cache. (See Configuring an SP’s Snapshot
Cache on page 10-12.)
2. To display a list of all LUNs participating in this session, click the
Member LUNs tab.
3. Click Close to close the dialog box.
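The 80% to 90% rule of thumb in step 1 can be captured in a trivial check. The function name and the threshold default below are our own illustration, not part of the Navisphere software.

```python
def cache_usage_advice(usage_percent, threshold=80):
    """Mirror the guidance above: if an SP's session usage reaches the
    80%-90% range, suggest enlarging the snapshot cache."""
    if usage_percent >= threshold:
        return "increase snapshot cache size"
    return "cache size adequate"

print(cache_usage_advice(85))   # → increase snapshot cache size
print(cache_usage_advice(40))   # → cache size adequate
```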
Monitoring Storage-System Operation
Because the Agent does not always poll a storage system every time a
client application requests a poll, the client application represents
only the information that the Agent currently has for the storage
system. If the information for a storage system has changed, but the
polling interval has not elapsed at the time of the poll request, the
Agent does not poll the storage system. As a result, it cannot notify
the client application of the change.
For example, suppose the Agent polling interval is 60 seconds. If a
client application sends the Agent a request to poll a storage system
at 6:00:00, the Agent polls the storage system and notifies the client
application of any change in the storage system. In this situation, the
client application reflects the current state of the storage system after
the poll request.
As determined by the polling interval, the Agent does not poll the
storage system again until at least 6:01:00. If a client application
requests a poll of the storage system between 6:00:00 and 6:01:00, the
application reflects only the state of the storage system at 6:00:00.
Thus, if a disk in the storage system fails at 6:00:25, the client
applications that request a poll of the storage system between 6:00:25
and 6:01:00 are not notified of the disk failure.
The Agent does not poll the storage system again until it receives the
first client application request to poll the storage system after 6:01:00.
At this time, the Agent:
• polls the storage system
• updates its information on the storage system
• notifies the requesting client application of the disk failure
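The caching behavior described above amounts to a time-gated poll: a request triggers a real poll only if the polling interval has elapsed; otherwise the cached (possibly stale) state is returned. The sketch below is our own model of that logic, not Agent code; the class and parameter names are invented.

```python
import time

class AgentPollCache:
    """Model of the Agent polling behavior: request_poll() queries the
    storage system only when the polling interval has elapsed, so the
    state it returns may be up to interval_s seconds old."""

    def __init__(self, poll_fn, interval_s=60, clock=time.monotonic):
        self.poll_fn = poll_fn        # actually queries the storage system
        self.interval_s = interval_s
        self.clock = clock
        self.last_poll = None
        self.state = None

    def request_poll(self):
        now = self.clock()
        if self.last_poll is None or now - self.last_poll >= self.interval_s:
            self.state = self.poll_fn()   # real poll: state is refreshed
            self.last_poll = now
        return self.state                 # otherwise: cached state

# A fake clock makes the 60-second window easy to demonstrate.
t = [0.0]
agent = AgentPollCache(poll_fn=lambda: f"state@{t[0]}", clock=lambda: t[0])
print(agent.request_poll())   # → state@0.0  (real poll at 0:00)
t[0] = 25.0                   # a disk fails at 0:25 ...
print(agent.request_poll())   # → state@0.0  (cached; failure not yet seen)
t[0] = 61.0
print(agent.request_poll())   # → state@61.0 (interval elapsed: real poll)
```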
To Disable or Re-enable Automatic Polling or Set Polling Priority for an Individual Storage
System
1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. Right-click the icon for the storage system whose automatic
polling properties you want to change, and click Properties.
The Properties dialog box for the storage system opens, similar to
the following. For information on the properties in the dialog box,
click Help.
For all shared storage systems - The Data Access tab is visible.
For non-FC4700 storage systems - The Configuration Access tab is visible.
For FC4700 storage systems - The Storage tab is visible, and if MirrorView is
installed, the Remote Mirrors tab is visible.
3. Under Configuration:
a. Clear the Enable Automatic Polling check box to disable
automatic polling for the storage system, or select it to
re-enable automatic polling for the storage system.
b. In the Automatic Polling Priority box, enter the desired
priority.
Manager will poll the storage system only if automatic polling for the
session (background polling) is enabled.
1. If icons for any of the storage systems you want to monitor do not
appear in the Enterprise Storage dialog box, follow these steps:
a. In the Main window, select the File menu and click Select
Agents.
The Agent Selection dialog box opens.
b. For each server whose storage systems are missing from the
Main window, do one of the following:
• If the server is in the Agents list, click the icon and click →.
• If the server is not in the Agents list, enter the server
hostname in Agent to Add, and click →.
c. When Managed Agents contains just the storage systems that
you want to manage, click OK.
2. If you want to monitor only some of the storage systems on a
managed server, right-click the icon for the server in the
Enterprise Storage dialog box, and click Unmanage.
3. Either enable automatic polling for all the storage systems, or
manually poll them periodically.
4. Periodically look at the Application icon or the storage-system
icons.
If you are managing many storage systems, it is more convenient
to look at the Application icon than the storage-system icons.
You can update the information the icons reflect by clicking Poll
on the Main window toolbar. As soon as the Agent on the server
for a storage system polls a selected storage system for
information, it responds to your request for updated information.
See the following tables for descriptions of the icon states.
Grey - All managed storage systems are in a normal operating state. For more information about a storage system, display its Properties dialog box by right-clicking its icon and clicking Properties.
Flashing blue - All managed storage systems are in a transitional operating state. For more information about a storage system, display its Properties dialog box by right-clicking its icon and clicking Properties.
Flashing orange - One or more storage systems are faulted. Look for orange storage-system icons, and go to Storage-System Faults on page 11-10.
Storage-System Faults
For information about all storage systems with faults, start with step
1 in the following procedure. For information about a specific storage
system with faults, start with step 2 in the following procedure.
You can also display the Fault Status Report dialog box by right-clicking
the storage-system icon, and then clicking Faults.
• If the dialog box has different tabs, click a tab to view the
additional properties.
d. For each orange icon that does not have a menu associated
with it, do the following:
• Double-click the icon to display the icons for its
components.
• For each orange component icon, repeat steps c and d.
e. For more information about a FRU represented by an orange
icon, go to the selection listed below for the FRU.
Orange Disk Icon
An orange disk icon indicates that the disk it represents is in one of the states listed below.
State - Meaning
Removed - Removed from the enclosure; applies only to a disk that is part of a LUN.
You can determine the state of a disk from the General tab of its Disk
Properties dialog box (page 11-26).
! CAUTION
Removing the wrong disk can introduce an additional fault that
shuts down the LUN containing the failed disk. Before removing a
disk, be sure to verify that the suspected disk has actually failed by
checking its orange check or fault light or the SP event log for the
SP that owns the LUN containing the disk. In addition to checking
the log for messages about the disk, also check for any other
messages that indicate a related failure, such as a failure of a SCSI
bus or a general shutdown of an enclosure. Such a message could
mean the disk itself has not failed. A message about the disk will
contain its module ID.
The icon for a working hot spare in a RAID Group may be orange
instead of blue if you replace the failed disk that the hot spare is
replacing while the hot spare is transitioning into a group. When
this happens, the icon for a working SP is orange instead of green.
The Fault Status Report dialog box says the storage system is
normal instead of transitioning, the state property for the hot spare
is faulted instead of transitioning, and the state of the SP is normal
(the correct state).
After you confirm the failure of a disk, the system operator or service
person should replace it, as described in the storage-system
installation and service manual.
You must replace a failed disk with one of the same capacity and format.
The rest of this section discusses a failed disk in a RAID 0 or individual disk LUN or in a RAID 5, 3, 1, or 1/0 LUN, and a failed vault disk when storage-system write caching is enabled.
Orange SP Icon
An orange SP icon indicates that the SP it represents has failed. When an SP fails, one or more LUNs may become inaccessible and the storage system's performance may decrease. In addition, the SP's check or service light turns on, along with the check or service light on the front of the storage system.
If the storage system has a second SP and ATF (Application
Transparent Failover) software is running on the server, the LUNs
that were owned by the failed SP may be accessible through the
working SP. If the server is not running failover software and a
number of LUNs are inaccessible, you may want to transfer control of
the LUNs to the working SP (Chapter 12).
! CAUTION
The icon for a working SP may appear orange instead of green
when you replace a failed disk in a RAID Group with a working
hot spare while it is transitioning into the group to replace the
failed disk. When this happens, the icon for the hot spare is orange
instead of blue. The Fault Status Report dialog box says the storage
system is normal instead of transitioning, the state property for the
SP is normal (the correct state), and the state property for the hot
spare is faulted instead of transitioning.
Orange LCC Icon
An orange link control card (LCC) icon indicates that the LCC it represents has failed. In addition, the LCC's fault light turns on, along with the service light on the front of the storage system.
When an LCC fails, the SP it is connected to loses access to its LUNs,
and the storage system’s performance may decrease. If the storage
system has a second LCC and the server is running failover software,
the LUNs that were owned by the SP connected to the failed LCC
may be accessible through the SP connected to the working LCC. If
the server is not running failover software, you may want to transfer
control of the inaccessible LUNs to the SP that is connected to the
working LCC (Chapter 12).
The system operator or service person can replace the LCC under
power, without interrupting applications to accessible LUNs. The
storage-system installation and service manual describes how to
replace an LCC.
For any C-series storage system, an orange Fan A icon and a green
and grey Fan B icon indicate that its fan module has one fault. An
orange Fan A icon and an orange Fan B icon indicate that its fan
module has two or more faults.
Drive Fan Pack
If one fan fails in a drive fan pack, the other fans speed up to compensate so that the storage system can continue operating. If a second fan fails and the temperature rises, the storage system shuts down after about two minutes.
If you see an orange Fan A icon in an FC-series storage system, the
system operator or a service person should replace the entire drive
fan pack as soon as possible. The storage-system installation and
service manual describes how to replace the fan pack.
Do not remove a faulted drive fan pack until a replacement unit is available.
You can replace the drive fan pack while the DPE or DAE is powered up.
If the drive fan pack in a DPE is removed for more than two minutes,
the SPs and the disks power down. The SPs and disks power up
when you reinstall a drive fan pack.
If the drive fan pack in a DAE is removed for more than two minutes,
the Fibre Channel interconnect system continues to operate, but the
disks power down. The disks power up when you reinstall a drive
fan pack.
SP Fan Pack
If one fan fails in an SP fan pack, the other fans speed up to compensate so that the storage system can continue operating. If a second fan fails and the temperature rises, the storage system shuts down after about two minutes.
If you see an orange Fan B icon, the system operator or a service
person should replace the entire fan pack or module as soon as
possible. The storage-system installation and service manual
describes how to replace the fan pack or module.
Do not remove a faulted SP fan pack until a replacement unit is available. You
can replace the fan pack when the DPE is powered up. If the fan pack is
removed for more than two minutes, the SPs and the disks power down.
They power up when you reinstall an SP fan pack.
Fan Module
Each C-series storage system has one fan module. The following table shows the number of fans per number of slots in the enclosure.
Slots - Fans
30-slot - 9
20-slot - 6
10-slot - 3
If any fan fails, the fault light on the back of the fan module turns on.
The storage system can run after one fan fails; however, if another fan
failure occurs, the storage system shuts down after two minutes.
If you see an orange Fan A icon in a C-series storage system, the
system operator or a service person should replace the entire fan
module as soon as possible. The storage-system installation and
service manual describes how to replace the fan module.
Swinging the fan module away from the enclosure or removing it for more
than two minutes may cause the storage system to overheat. To prevent
damage to the disks from overheating, the storage system shuts down if you
unlatch or remove the fan module for more than two minutes. You should
not leave a fan module unlatched or removed for more than the absolute
minimum amount of time that you need to replace it.
Orange SPS Icon
An orange standby power supply (SPS) icon indicates that the SPS it represents has an internal fault. When the SPS develops an internal fault, it may still be able to run on line, but the SPs disable write caching. The storage system can use the write cache only when a fully charged, working SPS is present.
However, if the storage system has a second SPS, write caching can
continue when one SPS has an internal fault or is not fully charged.
The status lights on the SPS indicate when it has an internal fault,
when it is recharging, and when it needs replacing because its battery
cannot be recharged.
Each week, the SP runs a battery self-test to ensure that the
monitoring circuitry is working in each SPS. While the test runs,
storage-system write caching is disabled, but communication with
the server continues. I/O performance may decrease during the test.
When the test is finished, storage-system write caching is re-enabled
automatically. The factory default setting has the battery test start at
1:00 a.m. on Sunday, which you can change (Chapter 6).
When the SPS Fault light or the SPS Replace Battery light is lit, the
system operator or service person should replace the SPS as soon as
possible. The SPS installation and service manual describes how to
replace an SPS.
If the storage system has two SPSs, you can replace one of them while
the DPE is powered up, but we recommend that you disable
storage-system write caching before removing the SPSs. (Chapter 6).
Orange BBU Icon
An orange battery backup unit (BBU) icon indicates that the BBU in a C-series storage system is in one of the states listed below.
You can determine the state of a BBU from its Properties dialog box
(page 11-26).
When a BBU fails:
• Storage-system write caching is disabled and storage-system
performance may decrease.
You can determine whether storage-system write caching is
disabled from the Cache tab of the storage-system Properties
dialog box (page 11-22).
Storage-system write caching remains disabled until the BBU is
replaced.
• The BBU service light turns on, indicating that the BBU is either
charging or not working.
After a power outage, a BBU takes 15 minutes or less to recharge.
From total depletion, recharging takes an hour or less.
Each week, the SP runs a self-test to ensure that the BBU’s monitoring
circuitry is working. While the test runs, storage-system caching is
disabled, but communication with the server continues. I/O
performance may decrease during the test. When the test is finished,
storage-system caching is re-enabled automatically. The factory
default time for the BBU test to start is 1:00 a.m. on Sunday, which
you can change (Chapter 6).
A system operator or service person can replace a failed BBU under
power without interrupting applications. The storage-system
installation and service manual describes how to replace the BBU.
To Display SP Properties
1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. Double-click the icon for the storage system with the SP whose
properties you want to display.
3. If the Equipment tab is displayed, then for any FC-series storage
system except an FC5000 series, double-click the Enclosure 0 icon.
4. Double-click the SPs icon.
5. Right-click the icon for the SP whose properties you want to
display, and click Properties.
The SP Properties dialog box opens with the General tab
displayed. For a description of each property, click Help in the
dialog box.
6. If you want to view the other SP properties, click one of the
following tabs. The tab opens in the SP Properties dialog box.
Click Help for more information.
• For SP cache information, click the Cache tab.
• For SP statistics, click the Statistics tab.
• If the selected SP is in an FC4700 storage system and you
want to view its additional properties:
• For SP network information, click the Network tab.
• For information on the SCSI IDs for the SP's front-end
ports, click the ALPA tab.
• For information on the SP Agent, click the Agent tab.
4. Right-click the icon for the RAID Group whose properties you
want to display, and click Properties.
The RAID Group Properties dialog box opens with the General
tab displayed.
5. If you want to display information about the LUNs on the RAID
Group, click the Partitions tab.
The Partitions tab is displayed in the RAID Group Properties
dialog box. For a description of each property, click Help in the
dialog box.
5. Right-click the icon for the SPS or BBU whose properties you
want to display, and click Properties.
The Battery Test Time dialog box opens. For a description of each
property, click Help in the dialog box.
Displaying Events
You can display all the events in the log or filter the events in the log to display the following:
• All events as of a specified date and time.
• All events for all components (that is, all FRUs and LUNs) or a
specified component.
• All events as of a specified date and time for all components or a
specified component.
You specify all components or an individual component by selecting
an entry from the Filter by and Filter for lists. The default selection
for Filter by is All and Filter for is unavailable.
You specify the date and time using one of the following formats or
by selecting an entry from a list.
Format - Meaning
MM/DD/YY HH:MM:SS - Display all events logged since hour HH, minute MM, second SS on month MM, day DD, year YY.
Yesterday - Display all events logged since 00:00:00 on the previous day.
Last week - Display all events logged since 00:00:00 seven days ago.
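As an illustration of how these filter selections translate into a cutoff time, here is a hedged sketch; the function name is ours and Navisphere's own parsing may differ.

```python
from datetime import datetime, timedelta

def filter_cutoff(selection, now=None):
    """Compute the 'show events as of' cutoff for the entries above:
    'Yesterday' is 00:00:00 on the previous day, 'Last week' is
    00:00:00 seven days ago, and an explicit MM/DD/YY HH:MM:SS string
    is parsed as given."""
    now = now or datetime.now()
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    if selection == "Yesterday":
        return midnight - timedelta(days=1)
    if selection == "Last week":
        return midnight - timedelta(days=7)
    return datetime.strptime(selection, "%m/%d/%y %H:%M:%S")

# With "now" fixed at July 15, 2001, 13:30:
now = datetime(2001, 7, 15, 13, 30)
print(filter_cutoff("Yesterday", now))          # → 2001-07-14 00:00:00
print(filter_cutoff("Last week", now))          # → 2001-07-08 00:00:00
print(filter_cutoff("07/01/01 06:00:00", now))  # → 2001-07-01 06:00:00
```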
! CAUTION
Clearing events permanently deletes all the events in the log file.
Not all the events in the SP log may be displayed if the Agent is configured to limit the number of entries that it retrieves on startup.
To Display Only Events for a Specific Component from a Specific Date and Time
1. In Show Events as of, enter the desired date and time.
The list of events is updated to show only events from the new
date and time.
2. In the Filter by list, click the type of component whose events you
want to display.
The Filter for list selection updates to the default component of
the specified component type. The list of events is updated to
show only events for the default component of the specified
component type.
3. In the Filter for list, click the specific component whose event you
want to display.
The list of events is updated to show only events for the specified
component.
To sort the events in the table:
Click the header for the column you want to use to sort the events in
the table.
The events are sorted by the values in the selected column. The first
time you click a column to sort the events, they are listed in ascending
order. The second time you click the same column, the events are
listed in descending order. The third time you click the same column,
the events are listed in ascending order, and so on.
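The ascending/descending toggle described above amounts to flipping a sort flag whenever the same column is clicked twice. A minimal sketch, with invented names, not Navisphere code:

```python
class EventTable:
    """Model of the column-sort behavior: clicking a new column sorts
    ascending; clicking the same column again flips the order."""

    def __init__(self, events):
        self.events = list(events)   # each event is a dict of columns
        self.sort_column = None
        self.ascending = True

    def click_column(self, column):
        if column == self.sort_column:
            self.ascending = not self.ascending   # same column: flip order
        else:
            self.sort_column, self.ascending = column, True
        self.events.sort(key=lambda e: e[column], reverse=not self.ascending)
        return self.events

table = EventTable([{"code": 2}, {"code": 1}])
print(table.click_column("code"))   # first click: ascending
print(table.click_column("code"))   # second click: descending
print(table.click_column("code"))   # third click: ascending again
```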
You can also open the event logs from the Event Monitor Configuration
window. To do so, right-click the icon for the monitoring host (SP Agent)
for which you want to view the storage-system logs, and click View
Events.
Event Code - The hexadecimal code for the type of event that
occurred.
Description - An abbreviated description of the message. See the
Storage-System and Navisphere Event Messages Reference for a more
detailed description of codes.
Subsystem - The name of the storage system that the event is
about.
SP - The name of the SP that the event is about.
Host - Name of the host that the Agent is running on.
Filtering Events
If the event log contains many events, you can reduce the number displayed by filtering them. Filtering events lets you view only the event types you specify.
1. In the Events window, click Filter.
The Event Filter dialog box opens.
2. When you have finished viewing the event detail, you can do any
of the following:
• Click Next to view the next event detail.
• Click Previous to view the previous event detail.
• Click Close to close the Event Detail dialog box.
3. For more information about the properties in the dialog box, click
Help.
Printing Events
You can print all the events displayed in the Events window by clicking the Print button in the window. Since the number of displayed events may be very large, we recommend that you save the events to a file, and print the file using another application, such as Microsoft Excel or Microsoft Word, as follows:
1. In the Events window, click Save.
A Save as dialog box opens.
2. In File name, enter the name of the file in which you want to save
the events displayed in the Events window.
3. In Save as type, select Text Files (*.txt) from the list.
4. Open the file in another application, such as Microsoft Excel or
Notepad.
5. Highlight only the text that you want to print, and copy the text
to the clipboard.
6. Paste the events on a fresh page in the application.
7. Print your file.
Clearing Events
! CAUTION
Clearing events permanently deletes all the events in the log file.
If multiple events occur at the same time (or close enough that the
zooming level does not allow separate pixels for each event), the
color of the event marker is that of the highest priority event. The
height of the event marker shows how many events are represented
by the event marker.
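The marker behavior described above (one bucket per pixel, color from the highest-priority event, height from the event count) can be modeled as follows. The function and its priority convention are illustrative assumptions, not Navisphere internals.

```python
def bucket_events(events, start, end, width_px):
    """Group events into one bucket per pixel of the timeline: each
    marker's color comes from its highest-priority event and its
    height reflects how many events it represents.  'events' are
    (timestamp, priority) pairs; a larger priority is more severe."""
    scale = width_px / (end - start)
    buckets = {}
    for ts, priority in events:
        px = min(int((ts - start) * scale), width_px - 1)
        count, top = buckets.get(px, (0, priority))
        buckets[px] = (count + 1, max(top, priority))
    return buckets   # pixel -> (marker height, marker priority/color)

# Three events on a 50-pixel timeline covering 100 seconds: the first
# two land on the same pixel, so they merge into one two-high marker
# whose color is that of the higher-priority event.
print(bucket_events([(0, 1), (1, 3), (50, 2)], 0, 100, 50))
# → {0: (2, 3), 25: (1, 2)}
```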
Use the Zoom In and Zoom Out buttons on the toolbar to change the
displayed time interval of the timeline.
The default time interval for the timeline is twelve hours. The time
intervals you can display are:
• 2 days
• 1 day
• 12 hours
• 6 hours
• 1 hour
• 10 minutes
The system updates the timeline as events occur unless you freeze the
timeline with the Stop button. When the system updates the timeline,
it also updates the start and end times above the timeline.
When you move the mouse over an event marker, the timeline
displays information about that marker’s event. If the event marker
represents more than one event, the timeline displays information for
the most recent event with the highest priority.
The graphic in the upper right corner of the timeline window
represents the category of event codes displayed.
When you click an event marker, information about the events in that
marker appears in a separate Event Selection window.
Zoom Out - Displays the timeline at the next largest time interval. This button is not active when you are already displaying the maximum allowed time.
Time - Updates the timeline so that the end time is the current time. This does not change the current time interval.
Stop - A toggle switch to stop and start timeline updates. If you click this button, the system will not update the timeline with new events until you click the button again.
Subsystem Name of the storage system that generated the event. Displays N/A for
non-device event types.
Event Code Displays the numerical code that pertains to the particular event.
The list of events includes all events that the selected event
marker represents.
2. In the Line column, select an event by clicking its severity icon or
line number. (In the above example, the second event in the
window is selected.)
3. Click OK.
The Event Selection dialog box closes and the Event Detail
dialog box opens, similar to the following. For more information
about the dialog box, click Help.
The Previous and Next buttons appear dimmed and are unavailable
when you open this dialog box from within Event Monitor.
12
Reconfiguring LUNs, RAID Groups, and Storage Groups
Reconfiguring LUNs
After you bind a LUN, you can change all of the LUN’s properties
without unbinding it (and thus losing its data), except for the
following:
• Unique ID
• Element size
• RAID type
To change any of these three properties, follow the procedures below.
Changing the LUN Enable Read Cache or Enable Write Cache Properties
Changing the enable read cache or enable write cache properties for a
LUN does not affect the data stored on the LUN.
1. Display the icon for the LUN whose cache properties you want to
change.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN
whose properties you want to change.
c. Double-click the icon for the SP that owns the LUN.
2. Right-click the icon for the LUN whose properties you want to
change, and click Properties.
The LUN Properties dialog box for the LUN opens.
3. Click the Cache tab.
4. Select the Read Cache Enabled check box to enable read caching
for the LUN, or clear it to disable read caching for the LUN.
5. Select the Write Cache Enabled check box to enable write caching
for the LUN, or clear it to disable write caching for the LUN.
6. Click OK to apply the settings and close the dialog box.
A LUN with read caching enabled uses default values for its
prefetching properties. The next section describes how to change
these properties.
A LUN with read caching enabled can use read caching only if the read cache
for the SP that owns it is enabled. Similarly, a LUN with write caching
enabled can use write caching only if the storage-system write cache is
enabled. To enable the read cache for an SP or the storage-system write cache,
see Chapter 6.
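The dependency described in this note reduces to a pair of AND conditions, sketched here with illustrative names:

```python
# Sketch of the caching rule above: a LUN actually caches only when both
# its own enable flag and the owning SP's (or the storage-system's) cache
# are enabled. Function and parameter names are illustrative.

def effective_read_cache(lun_read_enabled, sp_read_cache_enabled):
    return lun_read_enabled and sp_read_cache_enabled

def effective_write_cache(lun_write_enabled, system_write_cache_enabled):
    return lun_write_enabled and system_write_cache_enabled

print(effective_read_cache(True, False))  # False: the SP read cache is off
```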
Changing the Rebuild Priority, Verify Priority, or Auto Assign Property for a LUN
Changing the rebuild priority, verify priority, or auto assign property
for a LUN does not affect the data stored on the LUN.
1. Display the icon for the LUN whose properties you want to
change.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN
whose properties you want to change.
c. Double-click the icon for the SP that owns the LUN.
2. Right-click the icon for the LUN whose properties you want to
change, and click Properties.
Prefetch multiplier 4
Segment multiplier 4
Idle count 40
We recommend that you use the default values, unless you are certain that
the applications accessing the LUN will benefit from changing the values.
To Change Prefetch Properties
1. Display the icon for the LUN whose prefetch properties you want
to change.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN
whose properties you want to change.
c. Double-click the icon for the SP that owns the LUN.
2. Right-click the icon for the LUN whose properties you want to
change, and click Properties.
The LUN Properties dialog box for the LUN opens.
3. Click the Prefetch tab.
The LUN Properties - Prefetch tab opens, similar to the
following. For information on the properties in the dialog box,
click Help.
4. If you want to use the default values, select the Use Default
Values check box, and click OK to apply the default values and
close the dialog box.
The auto assign property of a LUN and the ATF software or its
equivalent can also transfer control of a LUN from one SP to another. For
information on the auto assign property, see Chapter 6; for information
on ATF, see the ATF manual. If you have failover software on the server,
you should use it to handle the failure situations just listed, instead of the
procedure in this section.
To Transfer Default Ownership of a LUN
1. Display the icon for the LUN whose default SP owner you want
to change.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN
whose properties you want to change.
c. Double-click the icon for the SP that owns the LUN.
2. Right-click the icon for the LUN whose properties you want to
change, and click Properties.
The LUN Properties dialog box for the LUN opens, similar to the
following.
If the MirrorView feature is installed, the LUN Properties dialog box has a
Mirror tab.
Allow at least 3 minutes for the storage system to power up and become
ready. Polling may fail while the storage system is reinitializing.
Unbinding a LUN
Typically, you unbind a LUN only if you want to do any of the
following:
• Destroy a RAID Group on a RAID Group storage system. (You
cannot destroy a RAID Group with LUNs bound on it.)
• Add disks to the LUN. (If the LUN is the only LUN in a RAID
Group, you can add disks to it by expanding the RAID Group.)
• Use the LUN’s disks in a different LUN or RAID Group.
• Recreate the LUN with different capacity disks.
In any of these situations, you should make sure that the LUN
contains the disk you want. In addition, if the LUN is part of a
Storage Group, you must remove it from the Storage Group before
you unbind it.
This section describes how to do the following:
• Determine which disks make up a specific LUN (page 12-15).
• Remove a LUN from the Storage Groups that contain it when you
know which server uses the LUN (page 12-15) or which storage
system contains the LUN (page 12-19).
• Unbind a LUN (page 12-22).
7. In the Selected LUNs list under Select LUNs for Storage Group,
select the LUN you want to remove from the group, and click ←.
The LUN moves from Selected LUNs to Unassigned LUNs.
8. Click OK to save the change and return to the Storage System
Properties dialog box.
9. For each Storage Group containing the LUN, repeat steps 5
through 8.
10. In the Storage System Properties box, click OK to remove the
LUN from the Storage Groups and close the dialog box.
To Unbind a LUN
You cannot unbind a LUN in a Storage Group until you remove the
LUN from the group as described in one of the two previous
procedures.
! CAUTION
Unbinding a LUN destroys any data on it. Before unbinding a
LUN, make a backup copy of any data on it that you want to retain.
Do not unbind the last LUN owned by an SP connected to a
NetWare or Solaris server unless it is absolutely necessary. If you
do unbind it, do the following:
1. For each server with access to the LUN that you want to unbind,
follow the step below for the operating system running on the server:
AIX or HP-UX - Unmount all file systems on the server associated
with the LUN, and deactivate and then export the volume group
associated with the LUN.
NetWare - Unmount all volumes on all partitions that are
associated with the LUN, and then delete these volumes and
partitions.
Solaris - Unmount all partitions that are associated with the LUN.
Windows - Stop all processes on the partitions associated with
the LUN and delete the partitions.
2. Display the icon for the LUN you want to unbind.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN you
want to unbind.
c. Double-click the icon for the SP that owns the LUN.
3. Right-click the icon for the LUN to unbind, and click Unbind.
To Change the User Capacity of a LUN
1. Back up any data you want to retain on the LUN whose user
capacity you want to change.
2. Unbind the LUN (page 12-22).
3. Bind the LUN with the new user capacity (page 7-10 for a LUN on
a non-RAID Group storage system; page 7-27 for a LUN on a
RAID-Group storage system).
RAID 5 3 through 16
RAID 0 3 through 16
To Expand a RAID Group
1. Display the icon for the RAID Group that you want to expand.
One way to display the icon for a RAID Group is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the RAID
Group you want to expand.
c. Double-click the RAID Groups icon.
2. Right-click the icon for the RAID Group you want to expand, and
click Properties.
All disks in a RAID Group must have the same capacity to fully use the
storage space on the disks. The capacity of a RAID Group that supports
the Hot Spare RAID type must be at least as great as the capacity of the
largest disk module in any LUN on the storage system.
5. Under Available Disks, for each disk that you want to add to the
RAID Group, click the icon for the disk and then click →.
The disk icon moves into Selected Disks.
6. If the RAID Group contains only one LUN with a user capacity
equal to the RAID Group’s user capacity and you want that
LUN’s user capacity to increase by the user capacity of the added
disks, select the Expand LUN with RAID Group check box.
Otherwise, clear the check box.
7. When Selected Disks contains only the icons for the disks you
want to add to the RAID Group, click OK.
The RAID Group expansion dialog box closes and the expansion
operation starts. Percent Expanded in the RAID Group
Properties dialog box displays the percentage of the operation
that is completed. When the percentage is 100, the operation is
finished.
What Next?
What you do next depends on whether you cleared or selected the
Expand LUN with RAID Group check box.
Check box cleared - You can bind additional LUNs on the RAID
Group.
Check box selected - You need to make the additional space on the
LUN available to the operating system on the server as follows:
• AIX - Change the size of the file system on the LUN using the
following command: chfs -a size=size, where size is the new
capacity of the LUN in 512-byte blocks.
• Solaris - Change the size of the file system on the LUN using the
Solstice DiskSuite command growfs.
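As a worked example of the AIX step, the size argument is the capacity expressed in 512-byte blocks, so an assumed 18-GB LUN converts as follows (the capacity figure is illustrative, not from the guide):

```python
# Convert a LUN capacity in gigabytes to the 512-byte block count that
# the AIX chfs -a size= step above expects.

def capacity_to_512_blocks(gigabytes):
    """Capacity in GB (2**30 bytes) -> number of 512-byte blocks."""
    return gigabytes * 1024 * 1024 * 1024 // 512

blocks = capacity_to_512_blocks(18)  # an assumed 18-GB LUN
print(blocks)                        # 37748736
```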
Before you can destroy a RAID Group, you must unbind all the LUNs on it.
Unbinding a LUN destroys all the data on it.
! CAUTION
Unbinding a LUN destroys any data on it. Before unbinding a
LUN, make a backup copy of any data on it that you want to retain.
Do not unbind the last LUN owned by an SP connected to a
NetWare or Solaris server unless it is absolutely necessary. If you
do unbind it, you will have to do the following:
1. For each server with access to any LUN in the RAID Group that
you want to destroy, follow the step below for the operating
system running on the server:
AIX or HP-UX:
a. Unmount all file systems on the server associated with each
LUN in the RAID Group.
b. Deactivate and then export the volume group associated with
each LUN.
NetWare:
a. Unmount all volumes on all partitions that are associated with
each LUN in the RAID Group.
b. Delete these volumes and partitions.
Solaris:
Unmount all partitions that are associated with each LUN in the
RAID Group.
Windows:
Stop all processes on the partitions associated with each LUN in
the RAID Group and delete the partitions.
2. Display the icon for the RAID Group that you want to destroy.
One way to display the RAID Group icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the RAID
Group you want to destroy.
c. Double-click the RAID Groups icon.
3. Unbind the LUNs in the RAID Group you want to destroy as
follows:
a. Double-click the icon for the RAID Group you want to destroy.
b. For each LUN in the RAID Group, right-click its icon, click
Unbind LUN, and then click Yes in the confirmation dialog
box that opens.
4. Right-click the icon for the RAID Group to destroy, and click
Destroy.
A confirmation dialog box opens warning you that destroying a
RAID Group destroys all data stored on the Group and asking
you to confirm the destroy operation.
5. Click Yes to confirm the operation.
Removing a LUN from a Storage Group makes the LUN inaccessible to the
servers connected to the Storage Group. Adding a LUN to a Storage Group
makes the LUN accessible to the servers connected to the Storage Group.
You can add a selected LUN to or remove a selected LUN from one or
more Storage Groups using the Select Storage Groups dialog box,
which you display from the LUN icon (page 12-38).
You can add one or more LUNs to or remove one or more LUNs from a
selected Storage Group using the Modify Storage Group dialog box,
which you display from the Storage Group icon (page 12-39).
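The access rule in the note above can be sketched as simple set operations; the group, server, and LUN names here are hypothetical:

```python
# Sketch of Storage Group access: a server sees exactly the LUNs in the
# Storage Groups it is connected to. All names are illustrative.

storage_groups = {"SG1": {"LUN0", "LUN1"}, "SG2": {"LUN2"}}
connections = {"serverA": {"SG1"}, "serverB": {"SG1", "SG2"}}

def visible_luns(server):
    """Union of the LUNs in every Storage Group the server is connected to."""
    luns = set()
    for group in connections[server]:
        luns |= storage_groups[group]
    return luns

storage_groups["SG1"].discard("LUN1")   # remove a LUN from the group
print(sorted(visible_luns("serverA")))  # ['LUN0'] - LUN1 is no longer accessible
```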
2. Right-click the icon for the Storage Group to which you want to
add the LUN or from which you want to remove it, and click Properties.
A Properties dialog box for the Storage Group opens.
3. Click Select LUNs.
A Modify Storage Group dialog box opens, similar to the
following.
3. For the servers that you want to connect to the selected Storage
Group, do the following:
a. If a server is connected to a different Storage Group, select the
Show Hosts Connected to Other Storage Groups check box.
All servers connected to Storage Groups on the storage system
are listed in the Available Hosts list.
All servers connected to a Storage Group lose access to the LUNs in the
Storage Group after you destroy the group. The LUNs in a Storage Group are
not unbound when you destroy it.
To Destroy a Single Storage Group
1. Display the icon for the Storage Group you want to destroy.
One way to display the Storage Group icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for a storage system with the Storage
Group you want to destroy.
c. Double-click the Storage Groups icon.
2. Right-click the icon for the Storage Group you want to destroy,
and click Destroy.
3. In the confirmation dialog box, click Yes to destroy the Storage
Group and close the Destroy Storage Groups dialog box.
The Storage and Hosts trees are updated to reflect the change.
To Destroy One or More Storage Groups
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Right-click the icon for the storage system with the Storage
Groups you want to display, and click Properties.
The Storage System Properties dialog box opens.
3. Click the Data Access tab.
The Data Access tab of the Storage System Properties dialog box
opens, similar to the following.
4. In the Storage Groups list, select the Storage Groups that you
want to destroy, and click Destroy.
5. In the confirmation dialog box, click Yes to destroy the Storage
Groups and close the Destroy Storage Groups dialog box.
The Storage and Hosts trees are updated to reflect the change.
13
Reconfiguring Storage Systems
When you install Core Software, at least two of the database disks
must be online, and ideally, all of them should be online. A disk is
online if it is fully powered up and not faulted; that is, if Current
State is Normal on its Disk Properties dialog box. If you try to power
up the storage system without two of these disks in place, the
powerup fails.
The file for the new Core Software revision must be on a host that can be
reached across a network from the server connected to the storage systems
whose Core Software you want to upgrade.
To Upgrade Core Software
1. In the Enterprise Storage dialog box in the Main window, click
either the Equipment tab or the Storage tab to display the
storage-system tree.
2. Right-click the system or systems on which you want to install or
upgrade the Core Software.
Disk IDs by storage-system type:
FC4700, FC5600/5700 - 0-0 through 0-8
FC4400/4500, FC5200/5300 - 0-0 through 0-4
C1900, C2x000, C3x00 - A0, B0, C0, D0, E0
C1000 - A0 through A4
Setting Up Caching
1. Assign memory to the partitions for the caches you will use
(page 6-15).
2. Enable the storage-system (SP) caches that you will use, and set
the other storage-system cache properties (page 6-18).
3. Enable read or write caching for each LUN that you want to use
read or write caching, as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN that
will use caching.
c. Double-click the SP icon that owns the LUN.
d. Right-click the LUN icon, and then click Properties.
e. Click the Cache tab.
f. Select the Read Cache Enabled check box to enable read
caching for the LUN.
g. Select the Write Cache Enabled check box to enable write
caching for the LUN.
h. Click OK to apply the changes and close the LUN Properties
dialog box.
To Replace a Group of Disks That Does Not Include All the Database Disks
You do not need to power off the storage system during the following
procedure.
To Replace a Group of Disks That Does Include All the Database Disks
! CAUTION
Do not power off the storage system during the following
procedure.
b. Type the name of the new Host Agent in the Agent to Add
box, and click →.
The Host Agent displays in the Managed Agents box.
c. Click OK to manage the Host Agent and close the dialog box.
An icon for each Host Agent displays in the Equipment and
Hosts trees.
5. In the Storage Groups list, select the Storage Group that you
want to connect to the new server and click Connect Storage.
6. In the Available Hosts list, select the new server and click ↓.
The new server moves into the Hosts to be Connected list.
7. Click OK to apply your changes.
8. In the confirmation dialog box that opens, click Yes to connect the
new server to the Storage Group.
A
Troubleshooting Manager Problems
Index
device information in Agent configuration file, editing
  AIX server 7-38, 8-15
  HP-UX server 7-38, 8-15
  NetWare server 7-38, 8-15
  Solaris server 7-39, 8-16
  Windows server 7-40, 8-17
dialog boxes
  Advanced Bind LUN 7-15
  Advanced Bind LUN (RAID Groups) 7-33
  Agent Selection 2-11
  Battery Test Time 6-31
  Bind LUN 7-11
  Bind LUN (RAID Groups) 7-29
  Connect Hosts to Storage, advanced 12-44
  Create RAID Group
    advanced 7-24
    basic 7-22
  Create Snapshot 10-10
  Create Storage Group 8-7
  Disk Properties 13-3
  Disk Selection 7-16
  Enable Management Login 6-6
  Enterprise Storage 3-34
  Failover Status 11-21
  Fault Status Report 11-10
  Host Properties 5-6
  Host Properties, Storage tab 12-16
  LUN Properties
    Cache tab 12-4
    General tab 12-6
    Prefetch tab 12-10
  RAID Group Properties
    General tab 12-26
    Partitions tab 12-32
  Software Installation 4-4, 13-4
  SP Properties 5-2
  Storage Group Properties
    Advanced tab 8-12
    General tab 8-11
  Storage System Properties 4-11
    Cache tab 6-22
    Configuration Access tab 6-6
    Data Access tab 8-3
    General tab 6-12
    Hosts tab 6-25
    Memory tab 6-15
  Storage System Selection 4-5, 13-4
  User Options 2-9, 3-37
disable size property, defined 12-8
disk IDs button 3-9
Disk Selection dialog box 7-16
disk type, defined 7-3
disk-array storage system, see storage system
disks
  cache vault 11-14
  database 13-2
  failure states 11-11
  faulted 11-11
    rebuilding on hot spare 11-13
    what to do when orange 11-11
  icons for 3-21
  in LUN 12-15
  in RAID Group 12-33
  increasing capacity of 13-9
  menu 3-24, 3-25
  number in RAID types 7-3
  properties, displaying 11-26
displaying the connectivity map 3-6
drive fan pack, faulted 11-16
dual board unbind error A-2

E
element size property
  defined 7-4
  setting when binding LUN
    non-RAID Group storage system 7-18
    RAID Group storage system 7-34
enable auto assign property
  defined 7-7
  setting when binding LUN
    non-RAID Group storage system 7-18
    RAID Group storage system 7-35
enable automatic polling property
  default value 11-4
  defined 6-10
  setting 6-11
Enable Management Login dialog box 6-6
enable read cache property
  defined 7-6
  setting after binding LUN 12-3
  setting when binding LUN
    non-RAID Group storage system 7-18
low watermark, defined 6-19
LUN
  defined 7-4
  displaying 11-24
  prefetch
    defined 12-7
    setting 12-9
  setting after binding LUN 12-3
  setting when binding LUN
    non-RAID Group storage system 7-17
    RAID Group storage system 7-34
  see also LUN properties
LUNs in Storage Group, defined 8-5
mirrored write cache, defined 6-20
page size, defined 6-19
RAID Group 7-21
  displaying 11-25
  setting after creating Group 12-25
Read Cache Memory partition 6-17
server, displaying 11-23, 11-24
sharing, defined 8-4
SP A read caching, defined 6-20
SP A statistics logging
  defined 6-11
  setting 6-11
SP B read caching, defined 6-20
SP B statistics logging
  defined 6-11
  setting 6-11
SPs, displaying 11-25, 11-26
Storage Group name, defined 8-4
Storage Group, defined 8-4
storage-system
  cache, defined 6-18
  configuration access, defined 6-2, 6-3
  data access
    defined 8-2
    setting 8-2
  displaying 11-22
  general configuration
    defined 6-10
    setting 6-11
  hosts, defined 6-24
unique ID (for Storage Group), defined 8-4
used host connection paths, defined 8-6
write caching, defined 6-21

R
RAID 0
  LUN, icon for 3-20
  type, defined 7-2
RAID 1
  LUN, icon for 3-19
  type, defined 7-2
RAID 1/0
  LUN, icon for 3-19
  type, defined 7-3
RAID 3
  LUN, icon for 3-19
  type, defined 7-2
RAID 5
  LUN, icon for 3-19
  type, defined 7-2
RAID Group Properties dialog box
  General tab 12-26
  Partitions tab 12-32
RAID Group storage system
  binding LUNs 7-27
    custom 7-32
    standard 7-28
  creating RAID Groups 7-20
    custom 7-23
    standard 7-22
  see also storage system
RAID Groups
  binding LUNs on
    custom 7-32
    standard 7-28
  creating 7-20
    custom 7-23
    standard 7-22
  defined 7-20
  defragmenting 12-30
  destroying 12-33
  disks in 12-33
  expanding 12-27
  icons for 3-20
W
Window menu 3-33
Write Cache Memory partition 6-17
write caching
hardware requirements for 6-18
storage-system
enabling or disabling 6-23
setting properties for 6-21
write intent log, allocating 9-16