
Unisys e-@ction

Navisphere 5.x

Windows Manager
Administrator’s Guide

Printed in USA
July 2001 6864 5738–001

© 2001 Unisys Corporation.


All rights reserved.

NO WARRANTIES OF ANY NATURE ARE EXTENDED BY THIS DOCUMENT. Any product or related information
described herein is only furnished pursuant and subject to the terms and conditions of a duly executed agreement to
purchase or lease equipment or to license software. The only warranties made by Unisys, if any, with respect to the
products described in this document are set forth in such agreement. Unisys cannot accept any financial or other
responsibility that may be the result of your use of the information in this document or software material, including
direct, special, or consequential damages.

You should be very careful to ensure that the use of this information and/or software material complies with the laws,
rules, and regulations of the jurisdictions with respect to which it is used.

The information contained herein is subject to change without notice. Revisions may be issued to advise of such
changes and/or additions.

Notice to Government End Users: This is commercial computer software or hardware documentation developed at
private expense. Use, reproduction, or disclosure by the Government is subject to the terms of Unisys standard
commercial license for the products, and where applicable, the restricted/limited rights provisions of the contract data
rights clauses.

Correspondence regarding this publication can be e-mailed to doc@unisys.com.

© 2000, 2001 EMC Corporation. All rights reserved.


Unisys and e-@ction are registered trademarks of Unisys Corporation in the United States and other countries.
EMC², EMC, CLARiiON, and Navisphere are registered trademarks, and Access Logix, MirrorView, and SnapView are
trademarks of EMC Corporation.
All other brands and products referenced in this document are acknowledged to be the trademarks or registered
trademarks of their respective holders.
Errata Sheet

About This Guide


This Administrator’s Guide covers the EMC Navisphere Manager program for the EMC/Clariion
FC4000 and FC5000 series Fibre Channel storage subsystems. This document also
applies to the Unisys OEM versions of the OSR6000 series of Unisys e-@ction
Entry-Level Fibre Channel storage subsystems and the OSR7000 series of Unisys
e-@ction Midrange Fibre Channel storage subsystems.

EMC and Unisys Corresponding Models


The EMC/Clariion subsystem models mentioned in this guide correspond to Unisys
subsystem models, as shown in the following table. The table also lists the storage
type and the Unisys e-@ction category of each model.

EMC/Clariion Model    Unisys Model    Storage Type    Unisys e-@ction Category

FC5000                OSR701/702      JBOD            JBOD
FC5300                OSR6700         RAID            Entry-Level
None                  OSR6800         SAN             Entry-Level
FC5700                OSR7700         RAID            Midrange
FC4500                OSR7800         SAN             Midrange
FC4700                OSR7900         SAN             Midrange

Document Cross Reference


The following table lists the EMC documents referenced in this guide for which there is a
Unisys equivalent document.

For references to this EMC document…

    EMC Navisphere Server Software for Windows Administrator’s Guide (069-001067)

See this Unisys document…

    Unisys e-@ction Navisphere 5.x Server Software for Windows Administrator’s
    Guide (6864 5969)

For references to this EMC document…

    EMC Navisphere Command Line Interface (CLI) Version 5.X Reference (069-001038)

See this Unisys document…

    Unisys e-@ction Navisphere 5.x Command Line Interface (CLI) Reference Guide
    (6864 5761)

For references to this EMC document…

    EMC FC-Series and C-Series Storage System and Navisphere Event Codes
    Version 5.X Reference (069-001061)

See this Unisys document…

    Unisys e-@ction Navisphere 5.x Event Codes Reference Guide (6864 5753)

For references to this EMC document…

    EMC Navisphere Event Monitor Administrator’s Guide (069-001037)

See this Unisys document…

    Unisys e-@ction Navisphere 5.x Windows Event Monitor Administrator’s Guide
    (6864 5746)

For references to this EMC document…

    EMC Navisphere Analyzer Version 5.X Administrator’s Guide (069-001043)

See this Unisys document…

    Unisys e-@ction Navisphere 5.x Windows NT Analyzer Installation and
    Operations Guide (6864 5787)



Contents

Preface.............................................................................................................................xv

Chapter 1 About EMC Navisphere Manager


Terminology.......................................................................................1-2
Navisphere Management Environments.......................................1-3
Configuration Management ............................................................1-6
Fault and Problem Monitoring .......................................................1-7
Manager Architecture.......................................................................1-8
Storage-System Configuration and Management......................1-10
Shared Storage-System Configuration and Management .1-10
Unshared Storage-System Configuration and
Management............................................................................1-11

Chapter 2 Installing and Running Manager


Removing Manager or Supervisor .................................................2-2
Installing Manager............................................................................2-3
Starting a Manager Session .............................................................2-5
Setting User Options for Manager..................................................2-8
Selecting Storage Systems to Manage ..........................................2-10
Selecting Storage Systems for Direct or SAN Attach .........2-10
Selecting Storage Systems for NAS Attach ..........................2-14
Managing a NAS Device................................................................2-17

Chapter 3 Trees, Connectivity Map, and Main Window


Trees ....................................................................................................3-2
Connectivity Map .............................................................................3-6
Detailed View ....................................................................................3-7


Toolbar.........................................................................................3-9
Workspace.................................................................................3-10
Components of Trees, Connectivity Map, and Detailed View . 3-11
Accessible, Inaccessible, and Unsupported
Storage Systems .......................................................................3-11
Icons...........................................................................................3-12
Storage-System Menu .............................................................3-16
Main Window..................................................................................3-31
Application Icon ......................................................................3-32
Menu Bar ..................................................................................3-32
Toolbar.......................................................................................3-34
Workspace.................................................................................3-35
Status Bar ..................................................................................3-37
Window Configuration ..........................................................3-37

Chapter 4 Installing Software on an FC4700 Storage System


FC4700 Storage-System Software ...................................................4-2
Installing Storage-System Software .............................................4-4
Identifying the Source of a Software Installation
Problem .....................................................................................4-12
Committing (Finalizing) the Software Installation ....................4-13
Reverting Back to the Previous Software Version......................4-14
Displaying Status for All Installed Software Packages .............4-14

Chapter 5 Configuring the Remote Agent


Configuring SP Agents - FC4700 Series.........................................5-2
Setting a Polling Interval ..........................................................5-3
Configuring Host Agents Remotely (Non-FC4700 Series) .........5-4
Adding a User to the Agent Configuration File....................5-4
Scanning for Devices.................................................................5-5
Updating the Communications Channels List.....................5-9
Adding Privileged Users ........................................................5-10
Updating Parameters .............................................................. 5-11

Chapter 6 Setting Storage-System Properties


Setting Storage-System Configuration Access Properties
(non-FC4700 Storage Systems) .......................................................6-2
Configuration Access ................................................................6-2
Storage-System Configuration Access Properties ................6-3
Enabling Configuration Access Control for a Non-FC4700
Shared Storage System..............................................................6-4


Enabling and Disabling Configuration Access for Servers. 6-6


Setting Storage-System General Configuration Properties...... 6-10
General Configuration Properties ........................................ 6-10
Setting the General Configuration Properties .................... 6-11
Setting Storage-System Memory Properties............................... 6-14
Assigning Memory to Partitions........................................... 6-15
Setting Storage-System Cache Properties ................................... 6-18
Storage-System Cache Properties ......................................... 6-18
Setting the Cache Properties.................................................. 6-21
Setting the Storage-System Hosts Property................................ 6-24
Fair Access to the Storage-System Resources ..................... 6-24
Enabling Fair Access to a Non-FC4700 Storage System.... 6-24
Setting the SP Network and ALPA Properties (FC4700 Series
Only) ................................................................................................ 6-26
Setting the SP Network Properties ....................................... 6-26
Setting the SP ALPA (Arbitrated Loop Physical Address)
Properties ................................................................................. 6-28
Setting the Battery Test Time ........................................................ 6-30

Chapter 7 Creating LUNs and RAID Groups


LUNs, LUN RAID Types, and Properties..................................... 7-2
LUNs........................................................................................... 7-2
RAID Types................................................................................ 7-2
Number of Disks in a LUN...................................................... 7-3
LUN Properties ......................................................................... 7-4
Creating LUNs in a Non-RAID Group Storage System ........... 7-10
Creating Standard LUNs on a Non-RAID Group Storage
System....................................................................................... 7-10
Creating Custom LUNs in a Non-RAID Group Storage
System....................................................................................... 7-14
Creating RAID Groups .................................................................. 7-20
RAID Groups ........................................................................... 7-20
Creating Standard RAID Groups ......................................... 7-22
Creating Custom RAID Groups............................................ 7-23
Creating LUNs on RAID Groups................................................. 7-27
Creating Standard LUNs on a RAID Group ....................... 7-28
Creating Custom LUNs on a RAID Group ......................... 7-32
Verifying or Editing Device Information in the Host Agent
Configuration File (Non-FC4700 storage systems) ................... 7-38
Verifying or Editing Host Agent Device Information on an
AIX, HP-UX, Linux, or NetWare Server .............................. 7-38


Verifying or Editing Agent Device Information on a Solaris


Server.........................................................................................7-39
Verifying or Editing Agent Device Information on a
Windows Server ......................................................................7-40

Chapter 8 Setting Up Access Logix


Setting the Storage-System Data Access Property .......................8-2
Data Access Control and Storage Groups..............................8-2
Enabling Data Access Control for a Storage System ...........8-2
Storage Group Properties ................................................................8-4
Unique ID ...................................................................................8-4
Storage Group Name ................................................................8-4
Sharing ........................................................................................8-4
LUNs in Storage Group ............................................................8-5
Connected Hosts........................................................................8-5
Used Host Connection Paths ...................................................8-6
Creating Storage Groups .................................................................8-7
Verifying Server Connections to a Storage Group ..................... 8-11
Verifying or Editing Device Information in the Host Agent
Configuration File (Non-FC4700 storage systems)....................8-15
Verifying or Editing Host Agent Device Information on an
AIX, HP-UX, Linux, or NetWare Server ..............................8-15
Verifying or Editing Agent Device Information on a Solaris
Server.........................................................................................8-16
Verifying or Editing Agent Device Information on a
Windows Server ......................................................................8-17

Chapter 9 Setting Up and Using MirrorView


MirrorView Overview......................................................................9-2
MirrorView Terminology.................................................................9-3
MirrorView Features and Benefits .................................................9-6
MirrorView Example.................................................................9-8
How MirrorView Handles Failures .............................................9-10
Primary Image Fails ................................................................9-10
Failure of the Secondary Image.............................................9-12
MirrorView Operations Overview ...............................................9-14
Allocating the Write Intent Log ....................................................9-16
Write Intent Log .......................................................................9-16
To Allocate the Write Intent Log ...........................................9-16
Creating a Remote Mirror..............................................................9-20
Creating a Remote Mirror - Basic ..........................................9-20


Creating a Remote Mirror - Advanced ................................ 9-23


Activating a Remote Mirror.......................................................... 9-26
Identifying Remote Mirrors on a Storage System ..................... 9-27
Viewing or Modifying Remote Mirrors or Images.................... 9-28
To View or Modify a Remote Mirror’s General
Properties ................................................................................. 9-28
To View or Modify a Primary Remote Mirror Image........ 9-31
To View or Modify a Secondary Remote Mirror Image.... 9-32
Managing MirrorView Connections............................................ 9-36
Deactivating a Remote Mirror...................................................... 9-39
Adding a Secondary Image to a Remote Mirror........................ 9-40
Add Secondary Image............................................................ 9-40
Advanced Add Secondary Image......................................... 9-42
Create Secondary Image LUN............................................... 9-44
Promoting a Secondary Image to Primary ................................. 9-45
To Promote a Secondary Image............................................. 9-45
Synchronizing a Secondary Image .............................................. 9-46
To Synchronize a Secondary Image...................................... 9-47
Fracturing a Secondary Image ..................................................... 9-48
To Fracture a Secondary Image............................................. 9-48
Removing a Secondary Image from a Remote Mirror .............. 9-48
Destroying a Remote Mirror......................................................... 9-50
To Destroy a Remote Mirror Using Destroy....................... 9-51
To Destroy a Remote Mirror Using Force Destroy ............ 9-51

Chapter 10 Setting Up and Using SnapView


SnapView Overview ...................................................................... 10-2
SnapView Components.......................................................... 10-3
SnapView Requirements........................................................ 10-4
SnapView Operations Overview .......................................... 10-4
Snapshot Session ..................................................................... 10-8
Setting Up SnapView ..................................................................... 10-9
Binding LUNs for the Snapshot Cache ................................ 10-9
Configuring an SP’s Snapshot Cache................................. 10-12
Adding a Snapshot to a Storage Group .................................... 10-15
Destroying a Snapshot................................................................. 10-16
Using SnapView ........................................................................... 10-17
Starting a Snapshot Session ................................................. 10-17
Stopping a Snapshot Session ............................................... 10-20
Displaying Snapshot Component Properties........................... 10-21
Displaying Status of All Snapshots and Snapshot Sessions .. 10-25
To Display Status for all Snapshots.................................... 10-25


To Display Status for all Snapshot Sessions ......................10-25

Chapter 11 Monitoring Storage-System Operation


Updating Storage-System Information ....................................... 11-2
Currentness of Manager’s Storage-System Information....11-2
Automatic and Manual Polling for Managed Storage-System
Information...............................................................................11-4
Setting Automatic Polling Properties ...................................11-4
Manually Polling Storage Systems .......................................11-7
Monitoring Storage-System Operation ....................................... 11-8
Storage-System Faults...........................................................11-10
Orange Disk Icon ................................................................... 11-11
Orange SP Icon....................................................................... 11-14
Orange LCC Icon ................................................................... 11-15
Orange Fan A or Fan B Icon.................................................11-16
Orange Power Supply or VSC Icon ....................................11-18
Orange SPS Icon..................................................................... 11-19
Orange BBU Icon ................................................................... 11-20
Monitoring Failover Software..................................................... 11-21
Displaying Storage-System Component and Server Status ... 11-22
Displaying NAS Device Status ................................................... 11-28
Using the SP Event Log - Non-FC4700 Series........................... 11-29
Displaying the Event Log for an SP ....................................11-29
Displaying Events.................................................................. 11-31
Using the SP Event Logs - FC4700 Series .................................. 11-34
Displaying the Event Logs for an FC4700 SP ....................11-34
Filtering Events ...................................................................... 11-35
Viewing Event Details ..........................................................11-36
Saving Events to a Log File ..................................................11-37
Printing Events....................................................................... 11-37
Clearing Events......................................................................11-37
Opening the Events Timeline Window..............................11-38
Using the Events Timeline Window .......................................... 11-38
Description of the Events Timeline Window ....................11-39
Viewing Events Represented by Event Markers ..............11-40
Viewing Event Details from the Timeline..........................11-41

Chapter 12 Reconfiguring LUNs, RAID Groups, and Storage Groups


Reconfiguring LUNs ......................................................................12-2
Changing the LUN Enable Read Cache or Enable Write
Cache Properties ......................................................................12-3


Changing the Rebuild Priority, Verify Priority, or Auto


Assign Property for a LUN.................................................... 12-5
Changing LUN Prefetch (Read Caching) Properties ......... 12-7
Transferring the Default Ownership of a LUN ................ 12-11
Unbinding a LUN ................................................................. 12-14
Changing the User Capacity of a LUN .............................. 12-23
Reconfiguring RAID Groups...................................................... 12-25
Changing the Expansion/Defragmentation Priority or the
Automatically Destroy After Last LUN Unbound
Property of a RAID Group .................................................. 12-25
Expanding a RAID Group ................................................... 12-27
Defragmenting a RAID Group............................................ 12-30
Destroying a RAID Group................................................... 12-33
Reconfiguring Storage Groups................................................... 12-36
Changing the Name or Sharing State of a Storage
Group...................................................................................... 12-36
Adding or Removing LUNs from Storage Groups.......... 12-38
Connecting Servers to a Storage Group or Disconnecting
Servers from a Storage Group............................................. 12-41
Destroying Storage Groups ................................................. 12-46

Chapter 13 Reconfiguring Storage Systems


Upgrading Storage-System Software .......................................... 13-2
Upgrading Software on a Non-FC4700 Storage System ... 13-2
Upgrading a Storage System to Support Caching..................... 13-7
Installing the Hardware Components for Caching............ 13-7
Setting Up Caching ................................................................. 13-8
Replacing Disks with Higher Capacity Disks............................ 13-9
Connecting a New Server to a Shared Storage System .......... 13-11
Disconnecting a Server from a Shared Storage System .......... 13-14

Appendix A Troubleshooting Manager Problems


Unable to Connect to a Storage-System Server or Time Out
Connecting to a Storage-System Server ...................................... A-2
Dual Board Unbind Error .............................................................. A-2
Caller Not Privileged ..................................................................... A-3
LUN Not Visible on a Solaris Server ........................................... A-3
LUNs Are Unowned ...................................................................... A-3
Enclosure x: Bypass Error ............................................................. A-4

Index ................................................................................................................................ i-1

Tables

2-1 Default Settings for User Options .............................................................. 2-9


3-1 Toolbar Buttons ............................................................................................. 3-9
3-2 Detailed View Containers ......................................................................... 3-10
3-3 Accessible, Inaccessible, and Unsupported Storage Systems .............. 3-11
3-4 Icon Colors ................................................................................................... 3-12
3-5 Menu Options for Single Servers ............................................................. 3-13
3-6 Menu Option for Multiple Servers ........................................................... 3-13
3-7 Individual Storage-System Icon Images ................................................. 3-14
3-8 Multiple Storage-System Icon Image ....................................................... 3-15
3-9 Storage-System Menu Options for a Single Storage System ................ 3-17
3-10 Storage-System Menu Options for Multiple Storage Systems ............. 3-18
3-11 Basic Storage Component Icons: Images and Descriptions ................. 3-20
3-12 MirrorView and SnapView Storage Component Icons ........................ 3-23
3-13 Menu Options for a Single Basic Storage Component .......................... 3-24
3-14 Menu Options for Multiple Basic Storage Components ....................... 3-26
3-15 Menu Options - Single MirrorView and SnapView Components ...... 3-27
3-16 Hardware Component Icon Images and Descriptions ......................... 3-28
3-17 Menu Options for a Single Hardware Component ............................... 3-29
3-18 Menu Options for Multiple Hardware Components ............................ 3-30
3-19 Application Icon Image ............................................................................. 3-31
3-20 Filters for Displaying Managed Storage Systems .................................. 3-34
6-1 Hardware Requirements for Write Caching .......................................... 6-18
7-1 Number of Disks You Can Use in RAID Types ....................................... 7-3
7-2 Disks That Cannot Be Hot Spares .............................................................. 7-4
7-3 Valid Rebuild Priorities ............................................................................... 7-5
7-4 LUN Properties Available for Different RAID Types ............................. 7-8
7-5 Default LUN Property Values for Different RAID Types ...................... 7-8
7-6 Restrictions and Recommendations for Creating LUNs ...................... 7-13
7-7 Default RAID Group Property Values .................................................... 7-21
7-8 Maximum Number of LUNs Per RAID Group ...................................... 7-21


11-1 Default Automatic Polling Property Values ........................................... 11-4


11-2 Application Icon States .............................................................................. 11-9
11-3 Storage-System Icon States ........................................................................ 11-9
11-4 Disk Failure States ..................................................................................... 11-11
11-5 Cache Vault Disks ..................................................................................... 11-14
11-6 Number of Fans in a C-Series Storage System ...................................... 11-17
11-7 BBU Failure States ..................................................................................... 11-20
12-1 Default Prefetch Properties Values ........................................................... 12-8
12-2 Available Prefetch Properties Values - General ..................................... 12-9
12-3 Available Prefetch Properties Values - Constant ................................... 12-9
12-4 Available Prefetch Properties Values - Variable ..................................... 12-9
12-5 Number of Disks You Can Use in RAID Types .................................... 12-27
13-1 Database Disks for Different Storage-System Types ............................. 13-3
13-2 Hardware Requirements for Write Caching ........................................... 13-7
13-3 Database Disks ............................................................................................ 13-9

Figures

1-1 Sample Navisphere Shared Storage Configuration ................................. 1-4


1-2 Sample Navisphere Unshared Storage Configuration ............................ 1-5
1-3 Architectural Components of a Shared Storage Configuration ............. 1-8
1-4 Architectural Components of an Unshared Storage System .................. 1-9
3-1 Sample Partially Expanded Equipment Tree ........................................... 3-3
3-2 Sample Partially Expanded Storage Tree .................................................. 3-4
3-3 Sample Partially Expanded Hosts Tree ..................................................... 3-5
3-4 Icon Images and Descriptions for Servers and Server HBAs ............... 3-13
3-5 Main Window ............................................................................................. 3-31
3-6 Detailed View Toolbar Buttons ................................................................ 3-33
9-1 Remote Mirror Configuration Sample ...................................................... 9-2
9-2 Sample Remote Mirror Configuration ...................................................... 9-8
10-1 Snapshot Session Overview ...................................................................... 10-3
10-2 How a Snapshot Session Starts, Runs, and Stops .................................. 10-8

Preface

This manual describes how to install and use EMC Navisphere


Manager on a host running the Microsoft Windows NT or
Windows 2000 operating system. You use EMC Navisphere
Manager to configure and manage disk-array storage systems. These
storage systems may be connected to a host running the Windows NT
or Windows 2000 operating system or one of several UNIX
operating systems, such as the Sun Solaris operating system.
You should read this manual if you are a system administrator who is
responsible for using EMC Navisphere Manager to configure and
manage storage systems. This manual assumes that you are familiar
with the Windows operating system on the host.

How This Manual Is Organized

Chapter 1    Introduces EMC Navisphere Manager and outlines the tasks you will perform to configure and manage storage systems.

Chapter 2    Explains how to install EMC Navisphere Manager, start an EMC Navisphere Manager session, and set EMC Navisphere user options.

Chapter 3    Describes the EMC Navisphere storage-system trees and the Main and Detailed View windows that you use to monitor and configure storage systems.

Chapter 4    Explains how to install software options, such as SnapView™ or MirrorView™, on an FC4700 storage system.


Chapter 5    Explains how to configure Remote Agents for FC4700 and non-FC4700 series storage systems.
Chapter 6 Explains how to set the general, memory, and cache
properties; how to set the SP and ALPA properties;
and how to set the battery test time for a storage
system.
Chapter 7 Explains how to create logical units (LUNs) that are
composed of disks. This chapter also explains how to
create RAID Groups (and the LUNs on the RAID
groups).
Chapter 8 Explains how to use the optional Access Logix™ feature — software that lets you create Storage Groups on a storage system so multiple hosts can have their own LUNs.
Chapter 9 Introduces the optional EMC MirrorView™ remote
mirroring feature — the ability to maintain a mirrored
copy of current information at a remote location. This
chapter also explains how to monitor the remote
mirror, modify the remote mirror, and perform other
activities.
Chapter 10 Introduces the optional EMC SnapView™ feature —
software that captures a snapshot image of a LUN that
can be used for decision support, testing, or backup
while the production work continues. This chapter
also describes how to create snapshot images.
Chapter 11 Describes how EMC Navisphere Manager indicates
overall storage-system operation and updates its
information about storage systems. This chapter also
explains how to monitor storage-system operation
and display storage-system event messages.
Chapter 12 Explains how to reconfigure LUNs, RAID Groups,
and Storage Groups.
Chapter 13 Explains how to reconfigure storage systems.
Appendix A Explains how to troubleshoot problems that may
occur when you run EMC Navisphere Manager.


Other Navisphere Manuals


For AIX Storage-System Servers - Navisphere 5.X
EMC Navisphere Server Software for AIX Administrator’s Guide
(069001075)
EMC Navisphere Command Line Interface (CLI) Version 5.X Reference
(069001038)
Storage-System and Navisphere Event Codes Version 5.X Reference
(069001061)
For HP-UX Storage-System Servers - Navisphere 5.X
EMC Navisphere Server Software for HP-UX Administrator’s Guide
(069001076)
EMC Navisphere Command Line Interface (CLI) Version 5.X Reference
(069001038)
Storage-System and Navisphere Event Codes Version 5.X Reference
(069001061)
For Linux Storage-System Servers - Navisphere 4.X
EMC Navisphere Server Software for Linux Administrator’s Guide
(069001054)
EMC Navisphere Command Line Interface (CLI) Version 4.X Reference
(069001058)
Storage-System and Navisphere Event Codes Version 4.X Reference
(069001041)
For NetWare Storage-System Servers - Navisphere 4.X
EMC Navisphere Server Software for NetWare Administrator’s Guide
(069001055)
EMC Navisphere Command Line Interface (CLI) Version 4.X Reference
(069001058)
Storage-System and Navisphere Event Codes Version 4.X Reference
(069001041)
For Solaris Storage-System Servers - Navisphere 5.X
EMC Navisphere Server Software for Solaris Administrator Guide
(069001068)
EMC Navisphere Command Line Interface (CLI) Version 5.X Reference
(069001038)
Storage-System and Navisphere Event Codes Version 5.X Reference
(069001061)


For Windows Storage-System Servers - Navisphere 5.X


EMC Navisphere Server Software for Windows Administrator Guide
(069001067)
EMC Navisphere Command Line Interface (CLI) Version 5.X Reference
(069001038)
Storage-System and Navisphere Event Codes Version 5.X Reference
(069001061)
Navisphere Event Monitor
EMC Navisphere Event Monitor Version 5.X Administrator Guide
(069001037)
Storage-System and Navisphere Event Codes Version 5.X Reference
(069001061)
Navisphere Integrator
EMC Navisphere Integrator Version 5.X User Guide (069001071)
Navisphere Analyzer
EMC Navisphere Analyzer Version 5.X Administrator Guide (069001043)

Conventions Used in This Manual

This manual uses the following format conventions:

Convention    Meaning

this typeface    Text (including punctuation) that you type verbatim: all commands, pathnames, filenames, and directory names. It also indicates the name of a dialog box, a field in a dialog box, a menu, a menu option, or a button.

this typeface    Represents a system response (such as a message or prompt), a file, or a program listing.

this typeface    Represents variables for which you supply the values; for example, the name of a directory or file, your username or password, and explicit arguments to commands.

x → y    Represents a menu path. For example, Operations → Poll All Storage Systems tells you to click Poll All Storage Systems on the Operations menu.

↵    Represents the Enter key. (On some keyboards this key is called Return or New Line.)

1
About EMC Navisphere Manager

EMC Navisphere Manager provides a graphical user interface that


lets you configure and manage disk-array storage systems connected
to hosts on a network.
This chapter briefly describes the following:
• Terminology ........................................................................................1-2
• Navisphere Management Environments........................................1-3
• Configuration Management .............................................................1-6
• Fault and Problem Monitoring ........................................................1-7
• Manager Architecture........................................................................1-8
• Storage-System Configuration and Management .......................1-10


Terminology

Term    Meaning

Analyzer    EMC Analyzer (a performance analyzer).

ATF    EMC Application Transparent Failover.

C-series storage system    A C3000, C2x00, C1900, or C1000 series storage system.

CDE    CLARiiON® Driver Extensions.

CLI    EMC Navisphere Command Line Interface.

Event Monitor    EMC Navisphere Event Monitor.

FC-series storage system    An FC4700, FC4300/4500, FC5600/5700, FC5200/5300, or FC5000 series Fibre Channel storage system.

Host Agent    The Navisphere Agent that runs on a host system.

Managed Agent    A Host Agent or SP Agent that you selected to manage.

Managed Host    See Managed Agent.

Managed storage system    A storage system managed by Manager.

Manager    Navisphere Manager.

Non-RAID Group storage system    A storage system whose SPs are running Core or Base Software without RAID Group functionality.

RAID Group storage system    A storage system whose SPs are running Core or Base Software with RAID Group functionality.

Server or managed server    A host with a managed storage system.

Shared storage system    A storage system with the EMC Access Logix™ option, which provides data access control (Storage Groups) and configuration access control. A shared storage system is always a RAID Group storage system.

SAN    Storage area network storage system.

NAS    Network-attached file server.

SP Agent    The Navisphere Agent that runs in an SP (FC4700 systems only).

Unshared storage system    A storage system without the EMC Access Logix option.

Windows    Windows NT or Windows 2000.


Navisphere Management Environments


Manager runs on a Microsoft Windows NT® or Windows® 2000 host,
which is called a management station. It can manage storage systems
connected to the management station or to servers that are accessible
to the management station over a LAN (local area network). The
storage systems can connect to the servers directly, through hubs, or
through Fibre Channel switches.
Servers with storage systems managed by Manager can run a
Windows operating system, NetWare®, or one of several UNIX®
operating systems. Servers connected to the same shared storage
system (storage system with the Access Logix option) can run
different operating systems. Servers connected to the same unshared
storage system (storage system without the Access Logix option)
must run the same operating system. A server can be connected to
both shared and unshared storage systems.
Manager communicates with a managed storage system using either
the Host Agent (non-FC4700 series) or the SP Agent (FC4700 series).
For non-FC4700 storage systems, each server with a managed storage
system must run the Host Agent. With FC4700 storage systems, the
management station uses a separate network connection to perform
management functions. In addition to the Agent running on the host,
an SP Agent runs in each SP.
The following management environment has a Windows host that is
the management station for two FC4700 shared storage systems
connected to servers with Fibre Channel switches. The management
station connects by LAN to the servers. The management station
could also be a storage-system server. It manages the two Windows
servers and two UNIX servers shown.


[Diagram: A Navisphere management station running Windows NT with Manager, the optional CLI, Event Monitor, Integrator, the optional Analyzer, and the optional Organizer connects over a LAN to Windows NT, Windows 2000, and UNIX servers, each running the Host Agent, ATF, and the optional CLI. The servers connect through Fibre Channel switch fabrics to a managed shared FC4700 storage system, which runs an SP Agent in SP A and SP B and has a management connection to the LAN, and to a managed shared non-FC4700 storage system.]
Figure 1-1 Sample Navisphere Shared Storage Configuration


The following sample management environment shows a Windows NT host that is a management station for unshared storage systems connected directly to their servers. The management station shown is not a storage-system server, but it could be. The station manages the two Windows servers and two UNIX servers shown.

[Diagram: A Navisphere management station running Windows NT with Manager, the optional CLI, Event Monitor, Integrator, the optional Analyzer, and the optional Organizer connects over a LAN to Windows NT, Windows 2000, and UNIX servers, each running the Host Agent and the optional CLI. Each server connects directly or through a hub to a managed unshared C-series or FC-series storage system.]
Figure 1-2 Sample Navisphere Unshared Storage Configuration


Configuration Management
Manager lets you configure the storage systems on local and remote
servers. Using Manager you can
• Configure the Host Agents and SP Agents.
• Set the configuration, memory, and cache properties for storage
systems.
• Combine physical disks into RAID Groups and create logical
units (LUNs) on those RAID Groups on storage systems with
RAID Group support; or combine physical disks into LUNs on
storage systems without RAID Group support.
• Mirror a LUN to a remote server using the MirrorView™ option
to provide for disaster recovery.
• Change the user-defined parameters of LUNs, such as their
rebuild time and storage processor (SP) owner.
• Update the Core Software and programmable read-only memory
(PROM) code that controls storage systems.
Using Manager on shared storage systems, you can
• Set the access control and fair access properties for a storage
system.
• Combine LUNs into Storage Groups and connect servers to
Storage Groups to provide the servers access to specific LUNs on
the storage system.
• Change the user-defined properties of a Storage Group, such as
the LUNs it contains and its name.
• Copy a LUN at an instant in time using the SnapView™ option.


Fault and Problem Monitoring


Manager lets you monitor faults and problems with storage systems
on local and remote servers. Using Manager you can
• Monitor the status of the storage system, the physical disks and other components that comprise it, and the Storage Groups, RAID Groups, and LUNs on the storage system.
• Monitor the status of failover software on the storage-system
servers.
To determine the status of managed storage systems, Manager
periodically polls them and records events. When a storage system is
operating normally, its icon is grey; when it has a failure, its icon is
orange; and when it is in a transition state, its icon is blue. You can
expand the Equipment tree or Storage trees for a storage system to
display its hardware components, managed hosts connected to it, its
RAID Groups, SPs, and Storage Groups, and to show which of these
components failed. Similarly, you can expand the Hosts tree for a
host to display the LUNs, the Storage Group, and the storage systems
connected to it, and to show which of these components failed.
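The icon color convention above can be summarized as a small lookup table. The sketch below is purely illustrative; the state names are informal labels for this example, not identifiers that Navisphere itself uses:

```python
# Illustrative mapping of storage-system state to Manager icon color,
# as described above. "normal", "failure", and "transition" are
# informal labels chosen for this sketch.
ICON_COLORS = {
    "normal": "grey",
    "failure": "orange",
    "transition": "blue",
}

def icon_color(state):
    """Return the icon color Manager displays for the given state."""
    return ICON_COLORS[state]
```

For example, a storage system with a failed component would be shown with an orange icon.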


Manager Architecture
Manager communicates with the Host Agent running on the same or
other servers on the network, and also with the SP Agent running in
any FC4700 storage systems. The management station and the remote
Agents communicate with each other over a TCP/IP network.
In a shared storage-system environment, the Host Agent on a server
communicates with ATF or CDE, which in turn communicates with
the Base or Core Software running in a storage system’s storage
processors (SPs). All shared storage systems have Fibre Channel
interfaces to the server, so the Host Agent uses a SCSI protocol over a
Fibre Channel (FC) connection to communicate with the Base or Core
Software.
With FC4700 storage systems, the management station uses a
separate network connection to perform management functions. In
addition to the Agent running on the host, an SP Agent runs in each
SP. The architectural components of a shared storage system are
shown in Figure 1-3.

[Diagram: Manager on the management station communicates over a TCP/IP LAN with the Host Agent on each server. On each server, the Host Agent communicates with ATF or CDE, which communicates with the storage system over an FC connection. A shared FC4700 storage system runs Base Software with Access Logix, optional MirrorView, and optional SnapView, plus an SP Agent reached over a separate management connection; a shared non-FC4700 storage system runs Core Software with Access Logix.]
Figure 1-3 Architectural Components of a Shared Storage Configuration


In an unshared storage configuration, the Host Agent on a server communicates with the Core Software running in a storage system’s storage processors (SPs), unless the server is running ATF or CDE. If failover software is running, the Agent communicates with the failover software, which in turn communicates with the Core Software.
If an unshared storage system has a Fibre Channel server interface,
the Host Agent uses a SCSI protocol over a Fibre Channel (FC-AL)
connection to communicate with the Core Software. If an unshared
storage system has a SCSI server interface, the Host Agent uses a
SCSI protocol over a SCSI bus to communicate with the Core
Software. The architectural components of an unshared
storage-system configuration are shown in Figure 1-4.

No storage systems with SCSI server interfaces are shared storage systems,
and only certain models of storage systems with Fibre Channel server
interfaces are shared storage systems.

[Diagram: Manager on the management station communicates over a TCP/IP LAN with the Host Agent on the server. The Host Agent communicates with ATF or CDE, which communicates over the FC and management connection with the Core Software in the unshared storage system.]
Figure 1-4 Architectural Components of an Unshared Storage System


Storage-System Configuration and Management


How you configure or manage storage systems with Manager
depends on whether the storage systems are shared or unshared.

Shared Storage-System Configuration and Management


Before you can configure or manage shared storage systems with
Manager, you need to set up the Navisphere environment.

NOTE: Until you enable data access control for a shared storage system, any
server connected to it can write to any LUN on it. To ensure that servers do
not write to LUNs that do not belong to them, the procedures below assume
that either just one server is physically connected to the shared storage system
or that just one server has been powered up since the servers were connected
to the storage system. You will use this server (called the configuration
server) to configure the storage system.

To Set Up the Navisphere Management Station


1. Install Manager on the management station (Chapter 2).
2. Start Manager and select the configuration server as a host with
storage systems to manage (Chapter 2).

To Install Optional Software on an FC4700 Storage System


Install SnapView™ or MirrorView™ software on the FC4700 storage system (Chapter 4).

To Set Up the Host or SP Agent


The Agent you set up depends on the storage-system type:
For an FC4700 storage system - Configure the SP Agent on each SP in
the storage system (Chapter 5).
For a storage system other than type FC4700 - Configure the Host
Agent on the server (Chapter 5).

To Configure a Shared Storage System with Manager


1. Set storage-system general properties, memory, and cache
properties (Chapter 6).
2. If you want fair access to the storage system, set its host
properties (Chapter 6).

1-10 EMC Navisphere Manager Version 5.X Administrator’s Guide

6864 5738-001
About EMC Navisphere Manager
1

3. Enable data access control for the storage system (Chapter 6).
4. Enable configuration access control for the storage system and
enable configuration access for the configuration server
(Chapter 6).
5. Create RAID Groups and LUNs in the RAID Groups (Chapter 7).
6. Connect other servers to the storage system or power up other
servers connected to the storage system.
7. Create Storage Groups and connect each server to its Storage
Group (Chapter 8).
8. Make the LUNs available to the server’s operating system. (See
the Navisphere Server Software Administrator’s or User Guide
for the server’s operating system.)
9. For an FC4700 storage system with the MirrorView remote mirror
option, set up and use remote mirrors (Chapter 9).
10. For an FC4700 storage system with the SnapView snapshot copy
option, set up the snapshot cache and snapshot (Chapter 10).
After you have configured all the storage systems connected to the
configuration server, you can physically connect other servers to the
storage system, or power up the other servers connected to the
storage system.

To Manage a Shared Storage System with Manager


1. Monitor storage-system operation and the failover software
operation so you can recover from any failures (Chapter 11).
2. For an FC4700 storage system with the SnapView option, if you
want to create snapshots of LUNs, set up and run a snapshot
session (Chapter 10).
3. Reconfigure Storage Groups, LUNs, or RAID Groups, if desired
(Chapter 12).
4. Add a server to or remove it from a shared storage system, if
desired (Chapter 13).
5. Reconfigure storage-system hardware, if desired (Chapter 13).

Unshared Storage-System Configuration and Management


Before you can configure or manage unshared storage systems with
Manager, you need to set up the Navisphere environment.


To Set Up the Navisphere Management Station


1. Install Manager on the management station (Chapter 2).
2. Start Manager and select Agents connected to storage systems
you want to manage (Chapter 2).

To Configure an Unshared Storage System with Manager


Manager cannot configure storage systems without SPs (FC5000 series storage systems); that is, storage systems in a JBOD (just-a-bunch-of-disks) configuration. Manager can only monitor the operation of FC5000 series storage systems.
The following procedure assumes that you have:
• Physically connected the storage systems you want to manage to
their servers
• Installed Navisphere CDE or ATF and Navisphere Agent on the
servers
To configure an unshared storage system with Manager:
1. Set storage-system general properties, memory, and cache
properties (Chapter 6).
2. Create RAID Groups and LUNs in the RAID Groups if the storage
system supports RAID Groups, or create LUNs if the storage
system does not support RAID Groups (Chapter 7).
3. Make the LUNs available to the server’s operating system. (See
the Navisphere Server Software Administrator’s or User Guide
for the server’s operating system.)

To Manage an Unshared Storage System with Manager


Only step 1 in the procedure below applies to storage systems
without SPs (FC5000 series storage systems).
1. Monitor storage-system operation and the ATF operation so you
can recover from any failure (Chapter 11).
2. Reconfigure LUNs or RAID Groups, if desired (Chapter 12).
3. Reconfigure storage-system hardware, if desired (Chapter 13).

2
Installing and Running Manager

This chapter describes:


• Removing Manager or Supervisor ..................................................2-2
• Installing Manager .............................................................................2-3
• Starting a Manager Session...............................................................2-5
• Setting User Options for Manager...................................................2-8
• Selecting Storage Systems to Manage ...........................................2-10

Subsequent chapters describe storage trees, connectivity maps, and the Main window. They describe how to use the Main window to configure and monitor storage systems.

This manual assumes that you are familiar with the Windows environment
for your management station.


Removing Manager or Supervisor


If a version of Manager or Supervisor is already installed on the
management station, you must remove it before installing a new
revision of Manager.

To Remove Manager or Supervisor

1. Log in to the Windows management station as either Administrator or someone who has administrative privileges.
2. If Manager is running, stop it by following the menu path
File →Exit
3. From the Windows taskbar, follow the path
Start →Settings →Control Panel
4. In the Control Panel dialog box, double-click Add/Remove
Programs.
5. In the Add/Remove Program Properties dialog box, click
Navisphere Manager or Navisphere Supervisor, and then click
Add/Remove (Windows NT) or Change/Remove (Windows
2000).
6. In the Confirm File Deletion dialog box, click OK to confirm the
removal of Manager or Supervisor.
7. If the Shared File Detected dialog box opens, select the Don’t display this message again check box, and then click Yes only if you are removing Manager or Supervisor to install a different version of one of them, or if you are removing all Navisphere software from the host; otherwise, click No.
8. In the InstallShield Wizard dialog box, click Finish.
9. In the Add/Remove Program Properties dialog box, click OK to
close the dialog box.
10. Close the Control Panel dialog box.

What Next?
Continue to the next section to install the new revision of Manager.


Installing Manager
The host on which you install Manager must have the following
hardware and software:
• Color graphics console with a minimum resolution of 1024 x 768
pixels.
• Windows NT 4.0 operating system with Service Pack 5 or higher
or Windows 2000.
• TCP/IP Services configured with connections to the servers with
storage systems that Manager will manage.

For the latest information on which hosts you can use and the required
software revisions and service packs, refer to the Manager Release Notes.
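One way to verify the TCP/IP requirement above is to confirm that the management station can open a connection to each server. The sketch below is a hypothetical check written in Python; the port is left as a parameter because the port a Navisphere Agent listens on is site-specific and not given here:

```python
import socket

def agent_reachable(hostname, port, timeout=5.0):
    """Return True if a TCP connection to hostname:port succeeds.

    The port is deliberately a parameter: the port your Navisphere
    Agent actually listens on is an assumption you must supply.
    """
    try:
        # create_connection resolves the hostname and attempts a
        # TCP connect, raising OSError on refusal or timeout.
        with socket.create_connection((hostname, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A False result means the server is unreachable on that port, so Manager would be unable to contact the Agent there.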

To Install Manager

1. Log in to the Windows management station as either Administrator or someone who has administrative privileges.
2. If a version of Manager or Supervisor is already installed, remove
it as described in the previous section before installing the new
version of Manager.
3. Insert the Manager CD-ROM in the management station’s
CD-ROM drive.
The installation of Manager starts automatically, and the
Navisphere Manager Install splash screen and InstallShield
Wizard dialog box appear. If you do not see this splash screen and
dialog box, follow these steps to start the installation:
a. From the Windows taskbar, follow the path
Start →Run
b. Enter the following program name, and then click OK:
drive:\setup.exe
where drive is the letter for the CD-ROM drive.
When the InstallShield preparation is complete, the Navisphere
Manager Setup window opens.
4. In the Navisphere Manager Setup window, click Next.
5. In the License Agreement dialog box, read the license agreement
and click Yes to accept the terms.


6. In the Choose Destination Location dialog box, click Next to select the default location.
The default location is C:\Program Files\EMC\Navisphere
version, where version is a number identifying the version of
Manager that you are installing.
7. In the Select Program Folder dialog box, click Next to select the
default folder (Navisphere version).
The installation starts, and the Setup Status dialog box opens to
track the progress of the installation. When the installation is
complete, the InstallShield Wizard Complete dialog box opens.
8. In the InstallShield Wizard Complete dialog box, either select
the Run Navisphere Manager check box to start Manager now
and click Finish, or just click Finish to start Manager later.

Any user who can access the management station can change or delete
the Manager files you just installed. You need to change the permissions
on these files if you want to limit access to them.

What Next?
Continue to the next section, Starting a Manager Session.


Starting a Manager Session


Important: Until you enable data access control for a shared storage system,
any server connected to it can write to any LUN on it. To ensure that servers
do not write to LUNs that do not belong to them, the procedure in this
section assumes that either just one server is physically connected to the
shared storage system or just one server has been powered up since the
servers were connected to the storage system.

You will use this server (called the configuration server) to send the storage
system the configuration commands that you issue from Manager on a
Navisphere management station. The Agent configuration file on each SP
(FC4700 Series) or on the configuration server (FC4500 Series) must be set up
to give you configuration access from the management station.

Before you use Manager, the following tasks must be completed.

Task: Install the storage systems and connect them to the servers directly or through hubs or switches.
Described in: Storage-system installation and service manual, and hub or switch documentation.

Task: Set up the servers whose storage systems you want to manage. (Setup includes installing CDE or ATF, if using, and installing the Host Agent.)
Described in: Server software manual for the server; HBA driver manual for the server.

Any user can run a Manager session from any management station
on which Manager is installed to monitor storage systems. However,
only an authorized user can use Manager to configure or reconfigure
a storage system. A user is authorized if the Agent on the SPs
(FC4700 Series) or on the server (non-FC4700 Series) is set up with
configuration access for the user, as described in Chapter 5.

! CAUTION
The Agent allows more than one Manager session to access the
same storage system at the same time. As a result, two authorized
users are able to configure or reconfigure the same storage system
at the same time, but doing this may damage the data.


To Start a Manager Session

Before starting a session, make sure that all storage systems you want to manage with this session are powered up, and that the Host Agent is running on all servers connected to these storage systems.
1. Log in to the Windows management station as either
Administrator or someone who has administrative privileges.
2. From the Windows taskbar, follow the path below to start either
Manager or all installed Navisphere management applications:
Manager
Start →Programs →Navisphere version →Navisphere Manager
All management applications
Start →Programs →Navisphere version →Navisphere Enterprise
3. Click anywhere in the Navisphere Manager splash screen or wait
for the screen to close automatically in 3 seconds.
The Main window opens using the values in the default
application configuration file. (For information on the application
configuration, see the section Window Configuration on page 3-36).
Manager first looks for the file containing the list of servers
(hosts) that were managed when you closed your last Manager
session. The default file for this list is

drive:\install_directory\Profiles\username\HostAdmin.txt

where drive is the drive and install_directory is the directory where the Windows operating system is installed, and username is your user name.


If this host file exists, Manager tries to:
• Extract hostnames from the file.
• Contact each server in the file to determine the state of the
storage systems connected to it.
• For each storage system it finds, display a storage-system icon
in the Equipment and Storage trees in each open Enterprise
Storage dialog box in its Main window.
• For each server connected to a storage system that it finds,
display a host icon in the Hosts tree in each open Enterprise
Storage dialog box in its Main window.
If the host file does not exist, the Enterprise Storage dialog boxes
remain empty.
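The host-file lookup described above amounts to reading a list of hostnames from a text file. The sketch below illustrates that step; the one-hostname-per-line layout is an assumption made for illustration, since the actual HostAdmin.txt format is not documented here:

```python
from pathlib import Path

def read_managed_hosts(path):
    """Return the hostnames stored in a Navisphere host file.

    Assumes one hostname per line (an assumption for this sketch);
    blank lines and surrounding whitespace are ignored. Returns an
    empty list when the file does not exist, corresponding to
    Manager leaving the Enterprise Storage dialog boxes empty.
    """
    host_file = Path(path)
    if not host_file.exists():
        return []
    return [line.strip()
            for line in host_file.read_text().splitlines()
            if line.strip()]
```

Manager would then contact each returned hostname to determine the state of the storage systems connected to it.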

If you start Manager while an Agent is starting up, Manager may receive a
time-out error from that Agent. If such a time-out occurs, Manager displays a
dialog box informing you of the time-out. Once the Agent is running, you can
either restart Manager or add the server to the list of managed hosts using the
Agent Selection dialog box (page 2-10).

What Next?
Continue to the next section, Setting User Options for Manager.


Setting User Options for Manager


Manager has the following options that you can set to determine how it operates:
Network time out
Sets the time interval in seconds to establish a connection to a
managed storage system and to execute the operation that Manager
requested. If the time interval is exceeded, Manager terminates the
connection.
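The connect-and-operate timeout behavior can be illustrated with an ordinary TCP socket. This is a conceptual sketch, not Navisphere's actual transport code; the `contact_agent` name and port argument are hypothetical, and the 240-second default is taken from Table 2-1:

```python
import socket

NETWORK_TIMEOUT = 240  # seconds; the Manager default from Table 2-1

def contact_agent(host, port, timeout=NETWORK_TIMEOUT):
    """Try to reach an Agent within `timeout` seconds. If the interval
    is exceeded, the connection attempt is abandoned, mirroring the
    behavior described for the Network timeout option."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)  # also bound each subsequent operation
            return True
    except OSError:                   # includes timeouts and refusals
        return False
```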
Host file path
Sets the path of the file where the list of hosts with managed storage
systems is saved between Navisphere sessions.
Save file path
Sets the path of the file where the application configuration is saved
between Navisphere sessions.
Polling interval
Sets the time interval in seconds that determines how often Manager
performs a poll operation if automatic polling for the Manager
session (background polling) is enabled, as described next.
Automatic polling
Enables or disables background polling for the session. Background
polling maintains the polling interval counter and performs poll
operations.
When background polling is enabled, Manager automatically
requests an Agent to poll a managed storage system for updated
information only if automatic polling for the storage system is
enabled.
If automatic polling is enabled for a storage system, the polling
interval and the automatic polling priority for a storage system
determine how often Manager requests Agents to poll storage
systems. If automatic polling is not enabled for a storage system
when a poll operation occurs, the information in the Manager’s
image of the storage system remains unchanged.
When background polling is disabled, no poll operations occur.
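The interaction between the session-wide setting, the polling interval, and each storage system's own automatic-polling flag can be sketched as a loop. All names here are illustrative, not Navisphere APIs:

```python
import time

def run_background_polling(storage_systems, interval,
                           session_polling_enabled, poll, ticks=None):
    """Every `interval` seconds, request a poll of only those storage
    systems whose own automatic-polling flag is set. With session
    polling disabled, no poll operations occur at all. `ticks` bounds
    the loop for demonstration; the real loop runs for the session."""
    if not session_polling_enabled:
        return
    count = 0
    while ticks is None or count < ticks:
        for system in storage_systems:
            if system.get("automatic_polling"):
                poll(system)  # ask the Agent to poll this storage system
        count += 1
        if ticks is None or count < ticks:
            time.sleep(interval)
```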


Table 2-1 Default Settings for User Options

Option Setting

Network timeout 240 seconds

Host file path drive:\install_directory\Profiles\username\HostAdmin.txt


where drive is the drive and install_directory is the directory where
you installed the Windows operating system, and username is your
username.

Save file path drive:\install_directory\Profiles\username\Save.nfx


where drive is the drive and install_directory is the directory where
you installed the Windows operating system, and username is your
username.

Polling interval 60 seconds

Automatic polling Cleared (disabled)

To Set the User Options for Manager

1. In the Main window, follow the menu path View → Options.
A User Options dialog box opens, similar to the following.

2. Change any of the options as follows:


a. In Host File Path, type or select the path to use for the
managed host file.


b. In Save File Path, type or select the path to use for the default
application configuration file.
For information on the application configuration, see the
section Window Configuration on page 3-36.
c. In Polling Interval, type or select the number of seconds for
the polling interval.
d. Select the Automatic Polling check box to enable automatic
polling for the session, or clear it to disable automatic polling
for the session.
When automatic polling is enabled for the session, an
individual storage system is polled only if automatic polling is
enabled for the storage system. You enable automatic polling
for an individual storage system from the General tab on the
Storage System Properties dialog box for the storage system
(page 6-10).

3. Click OK to apply the settings and close the dialog box.


All settings are saved for your future Manager sessions.

What Next?
Continue to the next section, Selecting Storage Systems to Manage.

Selecting Storage Systems to Manage


You can manage storage systems in three attach environments:
Direct attach (no hubs or switches), SAN (storage area network), and
NAS (network attached storage). The method you use to select
storage systems to manage is the same for Direct and SAN attach,
but different for NAS.

Selecting Storage Systems for Direct or SAN Attach


You select Direct attach and SAN storage systems to manage using
the Agent Selection dialog box. If the Host Agents and SP Agents
you want to manage were selected when you exited the previous
session, you do not need to select them again.


When managing an FC4700 storage system, you must manage both SP
Agents in the storage system and the Host Agent connected to the storage
system.

To Open the Agent Selection Dialog Box


On the File menu in the Main window, click Select Agents.
The Agent Selection dialog box opens, similar to the following. For
information on the properties in the dialog box, click Help.

If all locations (hostname or IP address) for all Host Agents or SP
Agents for the storage systems that you want to manage with this
Manager session are listed under Managed Agents, click OK. You
are now ready to do one of the following:
• Learn about storage-system trees, connectivity maps, and the
Main window, if you are not familiar with them, by reading
Chapter 3.


• Configure the Navisphere Agents for managed storage systems
with SPs, as described in Chapter 5.
• Monitor the operation of storage systems without SPs (JBOD
configurations), as described in Chapter 11.
If not all locations are listed or the list is empty, continue with this
section.
The Agent Selection dialog box lets you manage Agents for storage
systems by specifying the Host Agent or SP Agent by location or by
searching one or more subnets, as follows.

To Specify an Agent by Location


1. For each Host Agent or SP Agent for storage systems you want to
manage, type its location in the Agent to Add box, and click →.
The agent locations move to the Managed Agents box.
2. When the Managed Agents box contains all the SP Agents and
Host Agents for storage systems you want to manage, click OK.
The dialog box closes, and Manager does the following:
• Adds the Host Agents and SP Agents to the host file.
• Contacts each Agent in the file to determine the state of the
storage systems connected to it.
• For each storage system it finds, displays a storage-system
icon in the Equipment and Storage trees in each open
Enterprise Storage dialog box.
• For each Host Agent (managed or unmanaged) connected to a
storage system that it finds, displays a host icon in the Hosts
tree in each open Enterprise Storage dialog box in its Main
window.
The default host file is
drive:\install_directory\Profiles\username\HostAdmin.txt where
drive is the drive and install_directory is the directory where the
Windows operating system was installed and username is your
username. To change the name and location of this file, follow the
Main window menu path View →Options.
3. If you do not want to manage a storage system with an icon on
the Equipment and Storage trees, right-click the icon and click
Unmanage.


Manager removes the storage-system icon from the Equipment


and Storage trees in each open Enterprise Storage dialog box.

To Search a Subnet for Storage Systems to Manage


1. For each subnet you want to search, type its address in the
Subnet to Add box, and click →.
The subnet moves into the Subnets to Search box.
2. When the Subnets to Search box contains the addresses of all the
desired subnets, click Find Agents.
Manager starts searching the subnets for any Host Agents or SP
Agents. When it finds any Agents, it displays the locations for the
Agents in the Unmanaged Agents box. The Scanning subnets
status bar tracks the progress of the search.

3. When the search is complete, in the Unmanaged Agents box,


select the locations of the Agents for storage systems you want to
manage, and click →.
The agent locations move into the Managed Agents box.

Manager displays host icons for each Host Agent connected to a
managed storage system regardless of whether you selected the Agent.
The icon for an unmanaged Agent functions differently from the icon for
a managed Agent. As a result, we strongly recommend that you select all
Agents connected to storage systems that you want to manage.

4. When the Managed Agents box contains all the desired Agents,
click OK.
The dialog box closes, and Manager does the following:
• Adds new selected subnets and new Host Agents and SP
Agents to the host file.
• Contacts each managed Agent whose location is in the file to
determine the state of the storage systems connected to it.
• For each storage system that it finds, displays a
storage-system icon in the Equipment and Storage trees in
each open Enterprise Storage dialog box.


• For each Host Agent (managed or unmanaged) connected to
each storage system that it finds, displays a host icon in the
Hosts tree in each open Enterprise Storage dialog box in its
Main window. (Only locations of managed Agents are in the
host file.)
The default host file is
drive:\install_directory\Profiles\username\HostAdmin.txt
where drive is the drive and install_directory is the directory
where the Windows operating system was installed, and
username is your username. To change the name and location
of this file, follow the Main window menu path View →
Options.
5. If you do not want to manage a storage system with an icon on
the Equipment and Storage trees, right-click the icon and click
Unmanage.
Manager removes the storage-system icon from the Equipment
and Storage trees in each open Enterprise Storage dialog box.
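Conceptually, a subnet search probes every address on the subnet for a responding Agent. A simplified sketch, assuming a /24 subnet given as its first three octets (the real scan logic and the Agent's port are internal to Navisphere; the `probe` callable is a stand-in):

```python
def subnet_addresses(subnet):
    """Expand a subnet given as its first three octets ('a.b.c') into
    the 254 usable host addresses a.b.c.1 through a.b.c.254."""
    return ["%s.%d" % (subnet, host) for host in range(1, 255)]

def scan_subnet(subnet, probe):
    """Return the addresses on the subnet where `probe` reports a
    responding Agent."""
    return [addr for addr in subnet_addresses(subnet) if probe(addr)]
```

In practice the probe would attempt a network connection to each address, collecting the hits the way the Unmanaged Agents box does.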

Selecting Storage Systems for NAS Attach


You select the IP4700 NAS devices (network-attached file servers)
that you want to manage using the NAS Device Selection dialog box.
If the NAS devices you want to manage were selected when you
exited the previous session, you do not need to select them again.
On the File menu in the Main window, click Select NAS Devices.


The NAS Device Selection dialog box opens, similar to the
following. For information on the properties in the dialog box, click
Help.

To Search a Subnet for NAS Devices to Manage


1. Under Subnets, in the Subnet to Add box, type the address of
the subnet you want to search, and click →.
Enter the subnet in the form a.b.c.d, where a, b, c, and d range from
0 through 255 and a cannot be 0 (for example, 128.222.34.53).
The subnet moves into the Subnets to Search box.
2. Repeat step 1 until the Subnets to Search list contains the
addresses of all the desired subnets.
To delete subnets from Subnets to Search, select the subnets, and
click Delete.
To delete all subnets, click Clear.
3. Click Find NAS.


The application starts searching the subnets for any NAS devices.
When it finds any devices, it displays an icon and the locations for
the devices in the Unmanaged NAS Devices box. The Scanning
subnets status bar tracks the progress of the search.

4. When the search is complete, in the Unmanaged NAS Devices
box, select the locations of the NAS devices you want to manage,
and click →.
The NAS locations move into the Managed NAS Devices box.
To delete devices from Unmanaged NAS Devices, select the
devices, and click Delete.
To delete all devices, click Clear.
5. When Managed NAS Devices contains entries for all the desired
devices, click OK.
The dialog box closes, and the application places an icon, IP
address, and model number for each managed NAS device in the
Equipment and Storage trees. The icon is not expandable.
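The address rule in step 1 (four dotted fields, each 0 through 255, first field nonzero) can be checked programmatically. A hypothetical validator, not part of Navisphere:

```python
def is_valid_subnet(address):
    """Validate an address of the form a.b.c.d where each field is
    0-255 and the first field (a) is nonzero, per the rule above."""
    fields = address.split(".")
    if len(fields) != 4 or not all(f.isdigit() for f in fields):
        return False
    values = [int(f) for f in fields]
    return all(0 <= v <= 255 for v in values) and values[0] != 0
```

For example, 128.222.34.53 passes, while 0.222.34.53 fails because the first field is 0.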

To Specify a NAS Device by Name


1. For each NAS device you want to manage, enter the IP address in
the NAS Device to Add box, and click →.
The device locations move to the Managed NAS Devices box.
2. Repeat step 1 until you have selected all the NAS devices you
want to manage.
3. To move NAS devices from Managed NAS Devices to
Unmanaged NAS Devices, select the devices and then click ←.
To delete a selected NAS device from Unmanaged NAS Devices,
click Delete.
To delete all NAS devices from Unmanaged NAS Devices, click
Clear.
4. Click OK to save any changes and close the dialog box.
The dialog box closes, and the application places an icon, IP
address, and model number for each managed NAS device in the
Equipment and Storage trees. The icon is not expandable.


Managing a NAS Device


Use the Web-based NAS device management tool and its online Help
to manage your NAS devices.
1. In the Equipment or Storage tree, right-click the icon for the NAS
device, and click Manage NAS.
2. In the Enter Network Password dialog box, enter your username
and network password, and click OK.
The Web-based NAS device management tool opens.

What Next?

You are now ready to do one of the following:


• Learn about storage-system trees, connectivity maps, and the
Main window, if you are not familiar with them, by reading
Chapter 3.
• Configure the Navisphere Agents for managed storage systems
with SPs, as described in Chapter 5.
• Monitor the operation of storage systems without SPs (JBOD
configurations), as described in Chapter 11.


3
Trees, Connectivity Map, and Main Window

Manager uses tree structures and a connectivity map to show the
storage system environment it is managing. It displays the trees and
connectivity map in its Main window. If other Navisphere
applications are installed on the management station, you may see
additional menu options. For information on these options, see the
online help or the manual for the applications.
This chapter describes:
• Trees .....................................................................................................3-2
• Connectivity Map...............................................................................3-6
• Detailed View .....................................................................................3-7
• Components of Trees, Connectivity Map, and Detailed View .. 3-11
• Main Window ...................................................................................3-30

Subsequent chapters describe how to configure and monitor storage
systems.

This manual assumes that you are familiar with the Windows environment
for your management station.


Trees
Trees show the relationships between the physical and logical
components of managed storage systems. Trees are analogous to the
hierarchical folder structure of Microsoft Windows Explorer.
The Equipment tree shows icons for the physical components of the
managed storage systems and servers and their host bus adapter
(HBA) ports to which the managed storage systems are connected.
The Storage and Hosts trees show icons for the logical components of
the managed storage systems. The Storage tree shows the icons from
a storage-system viewpoint, and the Hosts tree shows them from a
host viewpoint.
A tree appears in the selected tab in the open Enterprise Storage
dialog boxes in the Main window. The Equipment tree appears in the
Equipment tab; the Storage tree appears in the Storage tab; and the
Hosts tree appears in the Hosts tab, as shown on the following pages.
The managed storage systems are the base components in the
Equipment and Storage trees. These trees display a storage-system
icon for each managed storage system. The managed servers are the
base icons for the Hosts tree. This tree displays a host icon for each
managed server. It also displays an icon for each unmanaged server
connected to a managed storage system.
You can expand and collapse the storage-system or host icons to
show icons for their components (such as SP icons, disk icons, LUN
icons, RAID Group icons) just as you can expand and collapse the
Explorer folder structure. You use the icons to perform operations on
and display the status and properties of the storage systems, their
components, and their host connections.



Figure 3-1 Sample Partially Expanded Equipment Tree



Figure 3-2 Sample Partially Expanded Storage Tree



Figure 3-3 Sample Partially Expanded Hosts Tree

You select icons on a tree in the same way that you select items in
other Microsoft Windows applications.
To select a single icon:
Click the icon.
To select multiple icons:
Do either of the following:
• Press Shift while clicking the first and last icons to select those
icons and all icons between them.
• Press Ctrl while left-clicking the icons you want to select.


Connectivity Map
The Connectivity Map shows the logical connections for each
currently managed storage system and the hosts using its storage. It
uses the same icons as the tree views to represent the storage system
and hosts. You use the icons to perform operations on and display the
status and properties of the storage systems, their components, and
their host connections.

To Display the Connectivity Map

On the Operations menu, click Connectivity Map.
The Connectivity Map opens in the open Enterprise Storage dialog
box in the Main window, similar to the following.

The managed storage systems and the hosts to which they connect
are the base components in the Connectivity Map. The map displays
a storage-system icon for each managed storage system and a host
icon for each host connected to a managed storage system. If the
hosts are connected to storage systems through switches, one switch
icon is shown between the hosts and storage systems.
You can display:
• The connectivity between hosts and storage systems.
• A detailed view of a storage system.


To display connectivity between components


Click a single icon in the map.
The map highlights all the connections relevant to the component
represented by the icon.
To display a Detailed View from the Connectivity Map
Double-click the icon for a storage system.

Detailed View
The Detailed View window provides a graphical view of the
relationships among the servers connected to the selected storage
system and the Storage Groups (shared storage systems only), SPs,
LUNs, RAID Groups, and disks in the storage system.
The Detailed View window uses the same icons as the tree views to
represent servers, storage systems, LUNs, RAID Groups, Storage
Groups, SPs, and disks. You can right-click any of these icons to
display the single-select menu for the component.

To Display a Detailed View

From any tree view, right-click the icon for the storage system and
click Detailed View, or
in the Connectivity Map, double-click the icon for a storage system.


The Detailed View opens similar to the following.



Toolbar
The buttons on the Detailed View window toolbar change the
appearance and content of the window. The toolbar buttons are
toggle buttons.

Table 3-1 Toolbar Buttons

Button Name Use To Display

Select View Wide angle view (view with small icons) or normal view
(view with large icons). The default is the wide angle
view.

LUN Ownership Yellow label on each LUN indicating whether SP A or SP
B owns it. No label appears on any unowned LUNs. The
default is not to display LUN ownership labels.

Disk IDs Yellow label on each disk with the disk ID. The default is
not to display disk ID labels.

LUN Devices File system mappings below each LUN. The default is to
display the mappings.

Host Connections Connection lines from each server to either the Storage
Group to which it can perform I/O (shared storage
systems) or the SPs to which it is connected (unshared
storage systems). The default is to hide these lines.

Help Online help.


Workspace
The workspace in the Detailed View window displays a graphical
view of the relationships between the servers connected to the
selected storage system and certain physical and logical components
within the storage system. For all types of storage systems, the
Detailed View contains icons for LUNs, disks, and servers. It also
contains the following containers, depending on the storage-system
type:

Table 3-2 Detailed View Containers

Container Storage-System Type Function

Storage Group container Shared Represents a Storage Group in the storage system, and contains an icon
for each LUN in the group. It identifies the Storage Group it represents by
name.
Right-clicking a Storage Group container displays the Storage Group
menu.

Storage Processor container Unshared Represents a storage processor (SP) in the storage system, and contains
an icon for each LUN owned by the SP. It identifies the SP it represents by
name (SP A or SP B).
Right-clicking a Storage Processor container displays the SP menu.

Unowned LUNs container All Contains an icon for each LUN in the storage system not owned by an SP.

RAID Group container Unshared, RAID-Group, or shared Represents a RAID Group in the storage system, and contains an icon for
each disk in the group. It identifies the RAID Group it represents by name
and type.
Right-clicking a RAID Group container displays the RAID Group menu.

Unassigned disk container Unshared, RAID-Group, or shared Contains icons for each disk in the storage system that is not assigned to
a RAID Group.

Enclosure container Unshared, non-RAID Group Represents an enclosure in the storage system, and contains an icon for
each disk in the enclosure. It identifies the enclosure it represents by the
enclosure name.
Right-clicking an enclosure container displays the enclosure menu.


Components of Trees, Connectivity Map, and Detailed View


This section describes accessible, inaccessible, and unsupported
storage systems and the icons that appear on the Equipment,
Storage, and Hosts trees and the Connectivity Map and Detailed
View.

Accessible, Inaccessible, and Unsupported Storage Systems


For Manager, a storage system is either accessible, inaccessible, or
unsupported.

Table 3-3 Accessible, Inaccessible, and Unsupported Storage Systems

Term Explanation

accessible Manager can communicate with the storage system.

inaccessible Manager has never been able to communicate with the storage system. A storage system can be
inaccessible for any of these reasons:
• The Agent is not running on the server. In this case, Manager displays an error message when you try
to select the server for management. Manager does not display an icon for the storage system that is
inaccessible for this reason.
• The Agent running on the server was started by a user who was not logged in as root or with
Administrative privileges. Manager displays an icon for a storage system that is inaccessible for this
reason, and the icon indicates that the storage system is inaccessible.
• The storage system’s name is wrong in the Agent configuration file on its server. Manager displays an
icon for a storage system that is inaccessible for this reason, and the icon indicates that the storage
system is inaccessible.

unsupported The storage system’s device entry in the Agent configuration file on its server is one that Manager does
not support. Examples are an internal disk on the server and a 7-slot storage system with SCSI disks.

Each managed storage system is represented by a storage-system
icon on the Equipment, Storage, and Hosts trees. This icon consists
of an image and a description.


Icons
Each icon in a tree consists of an image representing the component
and a description of the component. The color of the image and the
letter it contains reflect the condition of the component as follows:

Table 3-4 Icon Colors

Color Character Condition

Grey or green and grey None The component and all of its components are working
normally.

Faded grey or green and grey None The component is a ghost; that is, it is an FC4700 SP that is
not managed or is part of a non-FC4700 storage system that
is not managed.

Orange F Either the component or one or more of its components has
failed.

X Storage system is unsupported.

? Storage system is inaccessible.

Blue T Either the component or one or more of its components is in a
transition state.
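The letter-to-condition scheme in Table 3-4 can be summarized as a lookup (illustrative only; Manager itself renders these conditions as icon images):

```python
# Map the character shown inside an icon to the condition it reports,
# per Table 3-4. None means no letter is displayed.
ICON_CONDITIONS = {
    None: "component working normally (or a ghost, when the image is faded)",
    "F": "component or one of its components has failed",
    "X": "storage system is unsupported",
    "?": "storage system is inaccessible",
    "T": "component or one of its components is in a transition state",
}

def describe_icon(character):
    """Return the condition reported by an icon's letter."""
    return ICON_CONDITIONS.get(character, "unknown")
```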

The main components of the Equipment and Storage trees are the
icons for the managed storage systems, and the main components of
the Hosts tree are the icons for the servers connected to managed
storage system. Server icons are described below; storage-system
icons are described on page 3-14; and the icons for storage-system
components are described on page 3-14.

Icons for Servers
The icons for the managed servers (hosts) and their host bus adapters
(HBAs) connected to managed storage systems appear in the
Equipment tree. The icons for all servers (managed or unmanaged)
connected to managed storage systems appear in all trees and the
Connectivity Map.


Figure 3-4 Icon Images and Descriptions for Servers and Server HBAs

Image Description Displayed in Meaning

Hosts All trees, Connectivity Map Hosts connected to the storage
system.

hostname All trees, Connectivity Map Server with name hostname
connected to the storage system.

Port: UniqueID Equipment HBA port in the server connected to
the storage system.
UniqueID is the unique identifier of
the port.

You display the properties of a server using the menu associated with
the host icon for the server.

To Display the Server Menu

For a single server
Right-click the host icon for the server whose menu you want to
display.

Table 3-5 Menu Options for Single Servers

Icon Description Menu Option Use To

hostname Connect Storage Connect the host to a Storage Group
(shared storage systems only).

hostname Properties Display the properties of the selected
server.

For multiple servers


Select the icons for the servers whose menus you want to display, and
right-click.

Table 3-6 Menu Option for Multiple Servers

Icon Description Menu Option Use To

hostname Properties Display the properties of the selected
servers.


Storage-System Icons
Icons for individual storage systems appear in all the trees and the
Connectivity Map. In the Hosts tree, icons for individual storage
systems connected to a host appear under a multiple-storage-systems
icon.
Table 3-7 Individual Storage-System Icon Images

Image Type Meaning

FC4700 Rackmount storage-area network (SAN)
storage system with 4 Fibre Channel host ports
and Fibre Channel disks.

IP4700 (Icon is always blue) Rackmount network-attached (NAS) file server
with 4 Fibre Channel host ports and Fibre
Channel disks.

FC4300/4500, FC5600/5700 Rackmount or deskside storage system with 2
Fibre Channel host ports and Fibre Channel
disks.

FC5200/5300 Rackmount or deskside storage system with 2
Fibre Channel host ports and Fibre Channel
disks.

FC5000 Rackmount or deskside storage system with
Fibre Channel disks in a JBOD
(just-a-bunch-of-disks) configuration. This
storage system does not have SPs.

C3x00 30-slot rackmount storage system with SCSI
disks.

C3x00 30-slot deskside storage system with SCSI
disks.

C2x00 20-slot rackmount storage system with SCSI
disks.

C2x00 20-slot deskside storage system with SCSI
disks.


Table 3-7 Individual Storage-System Icon Images (cont)

Image Type Meaning

C1900 10-slot rackmount Telestor storage system with
SCSI disks.

C1000 10-slot rackmount storage system with SCSI
disks.

C1000 10-slot deskside storage system with SCSI
disks.

Table 3-8 Multiple Storage-System Icon Image

Image Description Tree Meaning

Storage Systems Host Storage systems connected to the host.

Storage-System Descriptions
A storage-system description has the following format:
storage_system_name [type]
where
storage_system_name is a name that uniquely identifies the storage
system. For a storage system connected to a
server running Agent revision 4.X or 5.X, its
format is either A-serial# or B-serial#
where
A or B identifies either SP A or SP B as the SP
used for communications with the storage
system.
serial# is the unique serial number of
enclosure 0 in an FC-series storage system or
the chassis in a C-series storage system.
You can change this name.


type is the storage-system type:


For example, A-95-2694-261 [FC4700]
Depending on the status of the storage
system, the storage system type may be
replaced with Inaccessible or
Unsupported.
For example, A-95-2694-261
[Inaccessible]
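A description such as A-95-2694-261 [FC4700] can be split back into its parts. A sketch that assumes well-formed names in the format above (the regular expression and function are hypothetical helpers, not Navisphere code):

```python
import re

# storage_system_name [type]: A- or B- prefix, serial number, bracketed type
DESCRIPTION = re.compile(r"^(?P<sp>[AB])-(?P<serial>\S+) \[(?P<type>[^\]]+)\]$")

def parse_description(text):
    """Split 'A-serial# [type]' into the communicating SP, the serial
    number, and the storage-system type (which may be Inaccessible or
    Unsupported). Returns None if the text does not match the format."""
    match = DESCRIPTION.match(text)
    if match is None:
        return None
    return match.group("sp"), match.group("serial"), match.group("type")
```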

If automatic polling for the session (background polling) is enabled, the word
“polling” appears in brackets after the description in each storage-system
icon during a poll operation.

To Assign a Custom Name to a Storage System

1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. Right-click the icon for the storage system whose name you want
to change, and then click Set Name.
3. In the Set Storage System Name dialog box, type the new name
and click OK.

Changing the name does not affect the agent configuration file.

Storage-System Menu
You can perform operations on storage systems using the menu
associated with the storage-system icon. You can display this menu
for single or multiple storage systems.

To Display the Storage System Menu


For a single storage system
Right-click the icon for the storage system whose menu you want to
display.
For multiple storage systems
Select the icons for the storage systems whose menus you want to
display, and right-click.

Not all menu options are supported by all storage systems.


Table 3-9 Storage-System Menu Options for a Single Storage System

Option Use to

Poll Poll the storage system for status changes.

Unmanage Stop managing the storage system.

Bind LUN Bind disks into a LUN.

Create RAID Group Create a RAID Group from selected disks.

Software Installation Update existing software or install new software on the storage
system.

Set Name Assign a custom name to the storage system.

Faults Display the Fault Status Report for the storage system.

Disk Summary Display a summary of the disks in the storage system.

Create Storage Groups Create Storage Groups on the storage system (shared storage
systems only).

Connect Hosts Connect servers to a Storage Group on the storage system so the
servers can perform I/O to the LUNs in the Group.

Connectivity Status Display the Connectivity Status dialog box.

Manage MirrorView Add or remove logical connections between storage systems that
Connections are physically connected, managed and have MirrorView installed.

Create Remote Mirror Create a remote mirror of a LUN in the storage system (FC4700
storage systems only).

Detailed View Display a graphical view of the relationships between the servers
connected to the storage system and storage-system
components.

Start Snapshot Start a snapshot session on the storage system (FC4700 storage
Session systems only).

SnapView Summary Display the status of any snapshots and snapshot sessions for the
selected storage system.

Properties Display or set the storage-system properties.

Report Generate a report for the storage system.

Manage NAS (IP4700 The only menu option for the IP4700 series. Opens the Web-based
series only) network-attached file server (NAS) device management tool.


Table 3-10 Storage-System Menu Options for Multiple Storage Systems

Option Use to

Poll Poll all selected storage systems for status changes.

Unmanage Stop managing all selected storage systems.

Software Installation Update existing software or install new software on all selected
storage systems if they all are the same type of storage system.

Detailed View Display a graphical representation of the relationships between the
components of all the selected storage systems.

Properties Display or set properties on all selected storage systems.

If one of the selected storage systems is inaccessible or unsupported, Software
Installation is not available (dimmed). If other Navisphere 4.X or 5.X
applications are installed on the management station, you may see additional
menu options. For information on these options, see the online help index or
the manual for the application.

Storage Component Icons

The basic storage components for a storage system are
• Persistent Storage Manager (PSM) LUN
• Storage Groups (shared storage system only)
• LUNs
• RAID Groups (RAID-Group storage system only)
• Storage processors (SPs)
• Disks
• Private LUNs
The storage components for a storage system with the MirrorView
option are
• Remote mirrors
• Remote mirror
• Remote mirror images


The storage components for a storage system with the SnapView
option are
• Snapshot caches
• Snapshots
• Snapshot sessions
These components are represented by icons on the trees,
Connectivity Map, and Detailed View.
Table 3-11 Basic Storage Component Icons: Images and Descriptions

Image Description Displayed on Meaning

Storage Groups Storage tree Storage Groups in the storage system or accessible from the
Host tree host.

StorageGroupname Storage tree Individual Storage Group in the storage system or accessible
Host tree from the host.
StorageGroupname is the name of the Storage Group.

PSM LUN Storage tree LUN in an FC4700 storage system reserved exclusively for
Host tree storage-system SPs to store critical information.
Detailed View

LUN LUNID [RAID 5; Storage tree RAID 5 LUN in RAID Group or storage system.
hostnames - Host tree LUNID is the ID assigned when you bound the LUN. It is a
devicename] Detailed View hexadecimal number. Hostnames is a list of the names of
LUN LUNID [RAID 5; each server connected to the storage system. devicename is
hostnames - the device name for the LUN on those servers.
devicename;
mirrorstatus] See Note at end of table

LUN LUNID [RAID 3] Storage tree RAID 3 LUN in RAID Group or storage system.
LUN LUNID [RAID 3; Host tree LUNID is the ID assigned when you bound the LUN. It is a
mirrorstatus] Detailed View hexadecimal number.

LUN LUNID [RAID Storage tree RAID 1/0 LUN in RAID Group or storage system.
1/0] Host tree LUNID is the ID assigned when you bound the LUN. It is a
LUN LUNID [RAID Detailed View hexadecimal number.
1/0; mirrorstatus]
See Note at end of table

LUN LUNID [RAID 1; Storage tree RAID 1 LUN in RAID Group or storage system.
mirrorstatus] Host tree LUNID is the ID assigned when you bound the LUN. It is a
Detailed View hexadecimal number.

See Note at end of table


Table 3-11 Basic Storage Component Icons: Images and Descriptions (cont)

Image Description Displayed on Meaning

LUN LUNID [RAID 0; Storage tree RAID 0 LUN in RAID Group or storage system.
mirrorstatus] Host tree LUNID is the ID assigned when you bound the LUN. It is a
Detailed View hexadecimal number.

See Note at end of table

LUN LUNID [Disk; Storage tree Individual disk LUN in RAID Group or storage system.
mirrorstatus] Host tree LUNID is the ID assigned when you bound the LUN. It is a
Detailed View hexadecimal number.

See Note at end of table

LUN LUNID [Hot Spare] Storage tree Hot spare in RAID Group or storage system.
Host tree LUNID is the ID assigned when you bound the LUN. It is a
Detailed View hexadecimal number.

Unowned LUNs Storage tree LUNs, such as hot spares, that are not owned by either SP.
Host tree
Detailed View

SPs Equipment tree SPs in the storage system.
Storage tree

SP A Equipment tree In an FC-series storage system, the SP in the SP A slot in
Storage tree enclosure 0.
In a C-series storage system, the SP in the SP A slot in the
enclosure.

SP B Equipment tree In an FC-series storage system, the SP in the SP B slot in
Storage tree enclosure 0.
In a C-series storage system, the SP in the SP B slot in the
enclosure.

RAID Groups Storage tree RAID Groups on the storage system.
Host tree


Table 3-11 Basic Storage Component Icons: Images and Descriptions (cont)

Image Description Displayed on Meaning

RAID Group Storage tree Individual RAID Group identified by RAIDGroupID in the
RAIDGroupID Host tree storage system.
[RAIDtype] RAIDGroupID is the ID assigned when you created the RAID
Group. It is a hexadecimal number between 0x00 and 0x1F.
RAIDtype is Unbound if no LUNs are bound on the Group.
Available RAID types are: RAID 5, RAID 3, RAID 1/0, RAID 1,
RAID 0, Disk, or Hot Spare.
For example, 0x03[RAID 5].

Disks Equipment tree Disks in the storage system.
Storage tree

Disk diskID Equipment tree For an FC-series storage system, the disk in the enclosure
Storage tree and slot identified by diskID, which has the format m-n where
Detailed View m is the enclosure number and n is the slot in the enclosure
containing the disk.
For a C-series storage system, the disk in the slot identified by
diskID, which has the format mn where m is the letter (A, B, C,
D, or E) of the SCSI bus for the slot and n is the position on
the bus containing the disk.

Note
If the storage system has the MirrorView option, mirrorstatus indicates the LUN’s remote mirror status, which can be any of the
following:
Mirrored - LUN is the primary image LUN of a remote mirror.
Mirrored/No Secondary Image - Remote mirror does not contain secondary image.
Secondary Copy - LUN is a secondary image LUN for a remote mirror.


Table 3-12 MirrorView and SnapView Storage Component Icons

Image Description Displayed on Meaning

Remote Mirrors Storage tree Container for all remote mirrors in the storage system. This icon
appears even when no remote mirror instances are defined on the
storage system.

Remote Mirror Storage tree Individual remote mirror.
mirrorname [state] mirrorname is the name of the remote mirror. It can have one of
these states:
Active - The remote mirror is running normally.
Inactive - The mirror is unavailable for host I/O. This occurs if the
mirror was deactivated.
Attention - Remote mirror is not running because a required
condition is not met, for example, only one secondary image is
available (when two are required).

Remote Mirror Image Storage tree Imagename is the name of the image. Imagetype identifies whether
imagename - imagetype the image is a primary or secondary image. The image can have
[state] one of these states:
In-Sync (or identical or congruent) - Secondary image is identical to
the primary. This state persists only until the next write to the
primary image, at which time the image state becomes Consistent.
Consistent - Secondary image is identical to the primary, or it was
identical in the past. If the mirror is not fractured, the software will
try to make the secondary image In-Sync after receiving no I/O for a
given period of time (the quiesce threshold).
Synchronizing - Software is applying changes to the secondary
image to mirror the primary, but the current contents of the
secondary are not known and are likely not usable.
Out-of-Sync - None of the above; the secondary image requires
synchronization with the primary image.

Snapshot cache Storage tree Container for SP A’s and SP B’s snapshot cache.

Snapshot Cache - SP A Storage tree SP A’s snapshot cache, which consists of any LUNs owned by SP A
selected to participate in snapshot sessions.

Snapshot Cache - SP B Storage tree SP B’s snapshot cache, which consists of any LUNs owned by SP B
selected to participate in snapshot sessions.

snapshot sessions Storage tree Container for all snapshot sessions running in the storage system.
This icon appears even when no snapshot sessions are active in
the storage system.

snapshot session name Storage tree Individual snapshot session running in the storage system.


Table 3-12 MirrorView and SnapView Storage Component Icons (cont)

Image Description Displayed on Meaning

Snapshots Storage tree Container for LUNs participating in snapshot sessions.

snapshotname Storage tree Individual LUN participating in a snapshot session.
snapshotname is the name of the snapshot.

Menus for Storage Components

You can perform operations on storage components using the menu
associated with the icon for the component. You can display this
menu for single or multiple storage components of the same type.

To Display Storage Component Menus

For a single storage component
Right-click the icon for the component whose menu you want to
display.
For multiple storage components of the same type
Select the icons for the storage components whose menus you want
to display, and right-click.

Not all menu options are supported by all storage systems.

Table 3-13 Menu Options for a Single Basic Storage Component

Icon Description Menu Option Use To

StorageGroupname Destroy Destroy the Storage Group.

Connect Hosts Connect servers to the Storage Group.

Properties Display the properties of the Storage Group.


Table 3-13 Menu Options for a Single Basic Storage Component (cont)

Icon Description Menu Option Use To

LUN LUNID [RAIDtype] Unbind LUN Unbind the LUN, destroying all the data on it and making its disks available
for another LUN or RAID Group.

Update Host Information Scan SCSI devices (including storage systems) connected to all servers
connected to the storage system, and update the Navisphere server
information based on the results of the scan.

Add to Storage Groups Add the LUN to one or more Storage Groups.

Mirror Start a remote mirror.

Create Secondary Create a secondary image of the LUN on another storage system.
Image LUN

Create a snapshot Create a virtual LUN that maintains a snapshot of the data on the LUN at
the moment of creation.

Properties Display the properties of the LUN.

SP A or SP B Event Log Display the event log for the storage processor (SP).

Reset Statistics Logging Set statistics for LUNs, disks, and storage-system caching to zero.

Properties Display the properties of the SP.

RAID Group Properties Display the properties of the RAID Group.
RAIDGroupID [RAIDtype]
Destroy Dissolve the RAID Group, unbinding all its LUNs.

Disk diskID Properties Display the properties of the disk.


If other Navisphere 4.X or 5.X applications are installed on the management
station, you may see additional menu options. For information on these
options, see the online help index or the manual for the application.

Table 3-14 Menu Options for Multiple Basic Storage Components

Icon Description Menu Option Use To

StorageGroupname Destroy Destroy the Storage Group.

Connect Hosts Connect servers to the Storage Group.

Properties Display the properties of the Storage Group.

LUN LUNID [RAIDtype] Unbind LUN Unbind all selected LUNs, destroying all the data on them and making their
disks available for another LUN or RAID Group.

Update Host Scan SCSI devices (including storage systems) connected to all servers
Information connected to the storage system, and update the Navisphere server
information based on the results of the scan.

Properties Display the properties of all selected LUNs.

SP A or SP B Properties Display the properties of all selected SPs.

RAID Group RAIDGroupID Properties Display the properties of all selected RAID Groups.
[RAIDtype]
Destroy Dissolve all selected RAID Groups.

Disk diskID Properties Display the properties of all selected disks.

Table 3-15 Menu Options - Single MirrorView and SnapView Components

Icon Description Menu Option Use To

Remote Mirror Active Activate or deactivate the remote mirror.

Add Secondary Image Add a new secondary image for the remote mirror.

Destroy Destroy the remote mirror.

Force Destroy Destroy the remote mirror when Destroy will not work.

Properties View or modify the properties of the remote mirror.


Table 3-15 Menu Options - Single MirrorView and SnapView Components (cont)

Remote Mirror Image Synchronize Start synchronizing an Out-of-Sync secondary mirror image.

Promote Promote a secondary image to the primary mirror image.

Remove Remove the secondary mirror image.

Fracture Fracture the secondary mirror image from the primary mirror image.

Properties View or modify the properties of the remote mirror image.

Snapshot Cache - SP A Properties Display the properties of the SP A or SP B snapshot cache.
Snapshot Cache - SP B

Snapshot Session Stop Snapshot Session Stop the selected snapshot session.

Properties Display the properties of the snapshot session.

Snapshot Start Snapshot Session Start a snapshot session on the storage system.

Destroy Snapshot Destroy the snapshot.

Properties Display the properties of the snapshot.


Hardware Component Icons

Icons for the following hardware components that are not also
storage components appear on the Equipment tree only: enclosures,
fans, link control cards (LCCs), power supplies, voltage
semi-regulated converters (VSCs), standby power supplies, and
battery backup units.

Table 3-16 Hardware Component Icon Images and Descriptions

Image Description Meaning

Enclosure 0 DPE (Disk-Array Processor Enclosure) in any FC-series storage system except
an FC5000 series.

Enclosure n DAE (Disk-Array Enclosure) with enclosure address n in an FC-series storage
system.

Fans Fans in the storage system.

Enclosure n Fan A Drive fan pack in enclosure n in an FC-series storage system.

Enclosure 0 Fan B SP fan pack in enclosure 0 in an FC-series storage system with a DPE.

FAN A Fan in fan slot A in a C-series storage system.

FAN B Fan in fan slot B in a C-series storage system.

LCCs LCCs in an enclosure of an FC-series storage system.

Enclosure n LCC A LCC in the LCC A slot in enclosure n.

Enclosure n LCC B LCC in the LCC B slot in enclosure n.

Power supplies Power supplies in the enclosure for an FC-series storage system or in the
storage system for a C-series storage system.


Table 3-16 Hardware Component Icon Images and Descriptions (cont)

Image Description Meaning

Enclosure n power Power supply in power supply slot A in enclosure n in an FC-series storage
supply A system.

Enclosure n power Power supply in power supply slot B in enclosure n in an FC-series storage
supply B system.

VSC A Voltage semi-regulated converter (power supply) in power supply slot A in a
C-series storage system.

VSC B Voltage semi-regulated converter (power supply) in power supply slot B in a
C-series storage system.

Standby Power SPSs connected to enclosure 0 of an FC-series storage system that supports
Supplies write caching.

Battery Backups BBUs in a C-series storage system that supports write caching.

Enclosure 0 SPS A SPS connected to SP A in enclosure 0.

Enclosure 0 SPS B SPS connected to SP B in enclosure 0.

BBU BBU in the storage-system enclosure.

Menus for Hardware Components

You can perform operations on hardware components using the
menu associated with the icon for the component. You can display
this menu for single or multiple hardware components of the same
type.

To Display Hardware Component Menus

For a single hardware component
Right-click the icon for the component whose menu you want to
display.
For multiple hardware components of the same type
Select the icons for the hardware components whose menus you want
to display, and right-click.


Table 3-17 Menu Options for a Single Hardware Component

Icon Description Menu Option Use To

Enclosure n Flash LEDs On Start flashing lights (LEDs) on an enclosure.

Flash LEDs Off Stop flashing lights (LEDs) on an enclosure.

Enclosure n Fan A, Enclosure 0 Fan B, State Display the state of a fan pack or fan module.
FAN A, FAN B

LCC A, LCC B State Display the state of the LCC.

Enclosure 0 SPS A, Enclosure 0 SPS B, Properties Display the state of the SPS or BBU.
BBU

If other Navisphere 4.X or 5.X applications are installed on the management
station, you may see additional menu options. For information on these
options, see the online help index or the manual for the application.

Table 3-18 Menu Options for Multiple Hardware Components

Icon Description Menu Option Use To

Enclosure n Fan A, Enclosure 0 Fan B, State Display the state of all selected fan packs or fan modules.
FAN A, FAN B

LCC A, LCC B State Display the state of all selected LCCs.

Enclosure 0 SPS A, Enclosure 0 SPS B, Properties Display the state of all selected SPSs or BBUs.
BBU


Main Window

Figure 3-5 Main Window
(Callouts: Application icon; Menu bar; Toolbar; Storage-system selection
filters; Equipment, Storage, and Host tabs; Workspace; Status bar)

The Main window is common to all Navisphere 5.X management
applications. The menu bar and toolbar icons and the menu options
available when you right-click an icon in a dialog box in the
workspace may vary with the applications installed. All other Main
window components and functions are identical for all applications.
When Manager is installed, the Main window lets you perform all the
tasks needed to set up a storage system, monitor its operation, and
display its properties.


Application Icon
The Application icon on the left side of the title bar provides overall
status of all storage systems managed by the current Manager session
as follows:

Table 3-19 Application Icon Image

Icon Color Meaning

Grey Manager has detected no failures in any managed storage
system.

Flashing blue Manager has detected one or more storage systems in a
transitional operating state.

Flashing orange Manager has detected a failure in one or more storage
systems, or one or more storage systems are inaccessible.

Menu Bar
From the menu bar in the Main window you can display these
menus: File, View, Operations, Window, and Help.

If other Navisphere 5.X applications are installed on the management station,
you may see additional menus. For information on these menus, see the
online help index or the manual for the application.

File Menu
Option Use To

New Window Open a new Enterprise Storage dialog box.

Select Agents Change the list of agents that the Manager session uses to
determine which storage systems to manage.

Select NAS Devices Select an IP4700 network-attached file server to manage.
Use the IP4700 Web Administration Interface to manage
the IP4700 NAS devices.

Save Save the application’s configuration to the most recently
opened application configuration file for use by the next
session.

Save As Save the application’s configuration to a new application
configuration file you specify for use by the next session.

Open Restore the application’s configuration to the one defined by
the application configuration file you select.

Exit Exit the Manager session and close the Main window.


View Menu
Option Use To

Toolbar Show or hide the toolbar.

Status Bar Show or hide the status bar.

Options Set the network timeout, set the name and location of the host
file, set the name and location of the save file, set the automatic
polling interval, and enable or disable automatic polling
(background polling) for the Manager session.

Operations Menu
Option Use To

Automatic Polling Enable or disable automatic polling (background polling) for the
Manager session.

Poll All Storage Systems Manually poll all managed storage systems; that is, survey
them once for status changes.

Software Installation Update the software on the managed storage systems you
select.

Faults Display a list of any faulted storage systems and their faulted
components.

Failover Status Display the status of the Application Transparent Failover (ATF)
or CLARiiON Driver Extensions (CDE) software on the servers
connected to the managed storage systems.

Connectivity Map Display a graphical representation of the logical connections for
each currently managed storage system.

Software Operation Monitor the status of the software installation operation.
Status

SnapView Summary Display the status of all storage-system snapshots and active
snapshot sessions.


Window Menu
Option Use To

Close All Close all Enterprise Storage dialog boxes.

Cascade Cascade the open Enterprise Storage dialog boxes.

Tile Horizontally Tile horizontally the open Enterprise Storage dialog boxes.

Tile Vertically Tile vertically the open Enterprise Storage dialog boxes.

Enterprise Storage Activate an open Enterprise Storage dialog box.

Help Menu
Option Use To

Contents & Index Display the online help table of contents and index.

Using Help Display information about using the online help.

About Navisphere Display the version of each Navisphere application installed
on the management station.

Toolbar
The buttons on the toolbar in the Main window let you perform
operations on all managed storage systems at once. To perform
operations on individual storage systems, use the menu associated
with the storage-system icon (page 3-16). When you position the cursor
over a toolbar button, a brief description of the button displays.

Figure 3-6 Main Window Toolbar Buttons

Button Name Use To

Poll Manually poll all managed storage systems; that is,
survey them once for status changes.

Software Update software on the storage systems you select.
Installation

Faults Display a list of any hardware faults encountered on any
managed storage system.

Help Display the online help.


Workspace
The workspace in the Main window contains the dialog boxes that
you use to perform storage-system tasks. It always contains at least
one Enterprise Storage dialog box, unless you have closed it. You can
open additional Enterprise Storage dialog boxes in the workspace. If
you have installed any additional Navisphere applications on the
management station, another type of dialog box may open in the
workspace when you start the application.

Enterprise Storage Dialog Box

An Enterprise Storage dialog box displays the Equipment, Storage,
or Hosts tree of the managed storage systems, depending on whether
the Equipment, Storage, or Hosts tab is selected. You can specify the
managed storage systems to display in the Equipment, Storage, or
Hosts tree using Filter By and Filter For as listed in the table that
follows.


Table 3-20 Filters for Displaying Managed Storage Systems

Filter By Filter For

All N/A

Fault Condition Normal
Faulted

Host Individual hostname

Storage System Type C1000 Series
C1900 Series
C2x00 Series
C3x00 Series
FC50xx Series
FC5000 Series
FC5200/5300 Series
FC5400/5500 Series
FC5600/5700 Series
FC4300/4500 Series
FC4700 Series
Unsupported
Inaccessible

Subnet Individual subnet IP address

When you open a Manager session, it displays one Enterprise
Storage dialog box with a number. During the session you can open
additional Enterprise Storage dialog boxes and close them. You
might want one dialog box displaying the Equipment tree, one
displaying the Storage tree, and one displaying the Hosts tree. Only
one dialog box is active at a time.

To Open a New Enterprise Storage Dialog Box

Follow the menu path File →New Window.
A new Enterprise Storage dialog box opens in the workspace and
becomes the active one.

To Activate a Different Enterprise Storage Dialog Box

Either click in the dialog box you want to activate, or on the Window
menu click the Enterprise Storage dialog box you want to activate.

To Close All Enterprise Storage Dialog Boxes

On the Window menu, click Close All.


The Equipment tab displays the Equipment tree; the Storage tab
displays the Storage tree; and the Hosts tab displays the Hosts tree.
You use the Equipment tree to manage the physical components of
the managed storage systems; the Storage tree to manage the logical
components of the managed storage systems; and the Hosts tree to
manage the LUNs and the storage systems to which the servers
connect.
You perform operations on
• All managed storage systems using the menu bar, or on selected
managed storage systems using the menu associated with the
storage-system icon
• Selected managed storage-system components using the menu
associated with the component’s icon
• Selected servers using the menu associated with the host icon

Status Bar
The status bar in the Main window contains information fields that
provide the following:
• Automatic Polling indicator. If Automatic Polling is highlighted,
automatic polling is enabled for the session; if it is dimmed,
automatic polling is disabled for the session.
• Feedback about application operation.
• Brief description of a toolbar button when you position the cursor
over the button.

Window Configuration
When the Main window opens, it uses the default application
configuration values for the following:
• The size and position of the Main Window and any open
Enterprise Storage dialog boxes.
• In the Enterprise Storage dialog boxes, any Filter By and Filter
For settings and the selected tab.
If you change any of these values (for example, you filter for FC4700
storage systems and select the Storage tab), you can save them to
either
• the default application configuration file so future sessions open
the Main window with these values, or


• a custom application configuration file so you can restore the
window to these values at any time during a session.

To Save the Current Configuration as the Default

Either exit the application or, on the File menu, click Save.
The current application configuration values are saved to the default
application configuration file.

To Change the Name and Location of the Default File

1. In the Main window, follow the menu path
View →Options
A User Options dialog box opens.

2. In Save File Path, type or select the path to use for the default
application configuration file.

3. Click OK to apply your change and close the dialog box.

To Save the Current Configuration to a New Custom File

We recommend that the name for a custom configuration file have the
extension .nfx.
1. If the folder that you want to contain the new custom
configuration file does not exist, create it.
2. On the File menu, click Save As.


3. In the Save As dialog box, select the folder to hold the new custom
configuration file.
4. In File name, enter the name for the new custom configuration
file.
5. Click Save.
6. In the confirmation window that opens, click Yes.
The current application configuration values are saved to the new
custom application configuration file.

To Restore the Default or Custom Configuration

You can restore the current configuration to the values specified in
either the default or a custom application configuration file.
1. On the File menu on the Main window menu bar, click Open.
2. In the Open dialog box, select the drive or folder containing the
custom application configuration file you want to use.
3. Either select the desired configuration file or enter its name in File
name.
4. Click Open.
The current configuration of the Main window is restored to the
configuration specified in the selected file.

What Next?
• To install software on an FC4700 storage system, continue to
Chapter 4.
• To configure the remote Agent, go to Chapter 5, Configuring the
Remote Agent.

4
Installing Software on an FC4700 Storage System

This chapter describes how to install or upgrade software on an
FC4700 storage system. If you do not have any optional software or
you have a non-FC4700 storage system, go to Chapter 5 to set
storage-system properties.
This chapter describes:
• FC4700 Storage-System Software ....................................................4-2
• Installing Storage-System Software.................................................4-4
• Committing (Finalizing) the Software Installation .....................4-13
• Reverting Back to the Previous Software Version .......................4-14
• Displaying Status for All Installed Software Packages ..............4-14


FC4700 Storage-System Software

On an FC4700 storage system, if all install conditions are met, you can
install (or upgrade) any storage-system software without disrupting
the system’s operations. This storage-system software includes:
• Base Software
• Navisphere SP Agent
• Access Logix
• MirrorView
• SnapView
When you install an upgrade (that is, a newer version of an installed
package), you must install all the software packages you want to use
in the same operation. For example, if you are upgrading SnapView
in a storage system that has SnapView, Access Logix, and Base
Software installed, then you must upgrade all three to a compatible
revision using one operation. When you install a new package of the
same revision as other existing packages, you can install only that
package and not the others.
Before starting a non-disruptive software installation, record the read
and write cache sizes because they will be set to zero.
Before the SP starts a non-disruptive software installation, it disables the caches and sets their sizes to zero. If the write cache is full and I/O is heavy, disabling the cache may take over an hour because the cache data must be written to disk. After the data is written, the installation starts.
When the installation is complete, restore the cache sizes if possible.
You may need to use slightly different sizes because the new software
may require more memory than the version that it replaced.
Manager lets you perform three basic software installation
operations:
• Install software (install new software, upgrade already-installed
software).
• Commit to finalize the installation.
• Revert back to the previous version of the software.
Manager bundles storage-system software into packages, which are
software components that Manager uses when it installs, commits, or
reverts software. Each package contains a name, revision, and


attributes (for example, whether the package is revertible or needs committing).
For non-disruptive upgrades, the following happens:
• The software installs on one SP, and then the SP reboots.
• In a direct attach configuration (no hub or switch; ATF required), all LUNs belonging to the first SP fail over to the second SP.
• In all other configurations (switch or hub; CDE or ATF required), all LUNs belonging to the first SP fail over to the second SP.
• When the installation and reboot complete on the first SP, and this SP is operating normally, the software installs on the second SP.
• The second SP reboots and all LUNs belonging to the second SP fail over to the other SP.
• Once the status of the storage system returns to normal, run the
atf_restore command on all servers to return the LUNs to their
original paths.
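The final restore step can be scripted across servers. The sketch below is a dry run that only prints the commands; the server names are placeholders and the use of rsh is an assumption, so substitute your own host list and remote-execution method, and remove the echo to actually run atf_restore on each server.

```shell
#!/bin/sh
# Dry-run sketch (assumption: remote execution via rsh; server names are
# placeholders). Prints the command that would return LUNs to their
# original SP paths on each attached server after the upgrade completes.
SERVERS="server1 server2"
for host in $SERVERS; do
    echo "rsh $host atf_restore"
done
```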


Installing Storage-System Software


1. In the Enterprise Storage window, click either the Equipment tab
or the Storage tab to display the storage-system tree.
2. Select the storage systems on which you want to install the
software.
A drop-down menu displays.
3. Click Software Installation.
The Software Installation dialog box displays, similar to the
following.

4. Select the files to install by doing one of the following:


• In the Filename(s) box, enter the name of the package file or
files you want to install.

If you enter multiple package file names in the Filename(s) box, be sure to enclose each file name in double quotation marks and separate all file names with a single space.

• Browse through a list of available software packages to install by clicking the Browse button to the right of the Filename(s) box.
If you click Browse, an Open File dialog box displays, which
allows you to select one or more software packages to install.


To select a single software package:


Click the software package that you want to install.
To select multiple software packages:
Do either of the following:
• Press Shift while clicking the first software package and
last software package to select the first and last package
and all packages between them.
• Press Ctrl while clicking the software packages you
want to select.

If you select multiple files in this dialog box, the system automatically
encloses all file names in double quotation marks and separates them
from the other file names with a space.
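For example, to install two packages in one operation, the Filename(s) box might contain an entry like the following (the package file names shown here are hypothetical placeholders, not actual package names):

```
"BaseSoftware.pkg" "SnapView.pkg"
```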

5. Review the Storage Systems area of the Software Installation dialog box to verify that it contains all the storage systems whose software you want to upgrade (and only those systems).
• If the systems listed are satisfactory, go to step 6.
• If you want to remove any selected storage systems from the
Storage Systems area, click Remove.
• If you want to add more storage systems to be upgraded:
a. Click Select.
The Storage System Selection dialog box displays, similar
to the following.


b. Under Available Storage Systems, select each storage system of the same type whose software you want to upgrade, and then click →.
The storage-system icon moves into Selected Storage
Systems.
c. When the Selected Storage Systems list contains all the
icons for the storage systems whose software you want to
upgrade — and only those icons — click OK.
The Software Installation dialog box displays again.
6. If you are satisfied with the selections in the Filename(s) and
Storage Systems areas of this dialog box, click Next.
The Software Installation Confirmation box displays, similar to
the following.


The columns in the Install Information area and their contents are as follows:

Storage System - Name of each storage system selected for software installation. Displays a separate listing for each package file selected for installation onto that system. (Therefore, if you select three packages to be installed on Storage System A, Storage System A will be listed three times.)

Package File Contents - Package name.

Install Type - Displays information about installation status. Install Type lists the worst-case install type for any of the selected software packages, and this will be the install type for all selected software packages. For example, if you are installing SnapView with a Non-Disruptive install type and MirrorView with a Disruptive install type, the overall install type for all software packages will be Disruptive.
• If an installation will be performed, displays the type of installation, either Disruptive or Non-Disruptive, where:
Disruptive - causes both SPs to reboot at the same time. If you have any write-cache data, set the write cache memory value to 0. This flushes the write-cache data. Once the status of the storage system returns to normal, reset the write-cache value.
Non-Disruptive - installs the software and reboots each SP separately.
Degraded mode - installs the software on one SP only. If this is the only SP receiving I/O, the install will be Disruptive.
• If an installation cannot be performed, displays one of the following:
Install Not Required - the same version of the software is already installed.
Dependency Not Met - some or all of the currently installed software does not meet the dependency requirement for the new software package.
Transfer of the file failed - an error occurred while transferring the file to the storage system.
Query failed - the software could not extract the contents of the software package.
Unknown - reason unknown.

7. Review the information in the Install Information list to verify that:
• You have correctly selected the packages you want to install
and the storage systems you want to upgrade.
• The software dependency requirements are met (that is, after
the software is installed, all software is compatible).
8. If the information in the Install Type column indicates that the
installation will be performed on all the storage systems you
selected, continue to Step 10.


9. If the information in the Install Type column indicates either of the following problems, go to the section, Identifying the Source of a Software Installation Problem on page 4-11:
• The installation cannot be performed because of software
incompatibilities between any of the selected storage systems
and any of the software to be installed.
• Some of the software you selected to install is already
installed.
10. Click Upgrade to begin the installation process.

Once you click Upgrade, you cannot cancel the installation process.

The application displays a confirmation message that indicates the success or failure of the startup operation, not the success or failure of the software installation operation.
11. In the confirmation dialog box, click OK to continue the software
operation.

The install or upgrade operation may take several minutes to complete.

The Software Operation Status dialog box opens, similar to the following.


To view status updates in the Software Operation Status dialog box, you
must enable automatic polling for all storage systems undergoing a
software operation.

The Software Operation Status dialog box displays the status of the operation for all storage systems undergoing a software operation.

To sort the Current Software Operations list by a specific column, click the column header.


The columns in the Current Software Operations area and their contents are as follows:

Storage System - Name and icon for each storage system selected for a software operation.

Operation - Type of software operation currently being performed on the storage system. Valid values are Install or Revert.

Status - Status of the software operation. The operation type in parentheses indicates the operations for which each value can appear. Valid values are:
Initializing (All) - Software operation has started.
Storing packages (Install) - Software operation is copying software package files into PSM.
In progress on secondary SP (All) - Software operation is in progress on the secondary SP.
Rebooting secondary SP (All) - Secondary SP is rebooting.
In progress on primary SP (All) - Software operation is in progress on the primary SP.
Rebooting primary SP (All) - Primary SP is rebooting.
Completing (All) - Performing clean-up.
Successful (All) - Software operation was successful.
Failed <error string> (All) - Software operation has failed; the string gives the reason for the failure.

12. Click Clear to remove a selected software operation status.
The operation must be completed in order to clear it.
13. Click Close to close the Software Operation Status dialog box.


Identifying the Source of a Software Installation Problem


You can try to identify the source of the problem by reviewing the
status of the currently-installed storage-system software as follows:
1. Click Cancel in the Software Installation Confirmation dialog
box.
The Software Installation dialog box displays again.
2. Click Cancel in the Software Installation dialog box.
The Enterprise Storage window in the Main window displays
again.
3. Right-click the icons for the storage systems whose status you
want to review.
4. Select Properties.
The Storage System Properties dialog box displays.

If you selected multiple storage systems, a separate Storage System Properties dialog box displays for each system. These dialog boxes are stacked on top of each other. Drag them apart to view each one separately.

5. Click the Software tab on the Storage System Properties dialog box.


The Storage System Properties - Software tab opens, similar to the following. For information on the properties in the dialog box, click Help.

You may want to sort the packages list by status. To do this, click the Status column header.

To determine the status of currently installed storage-system software, review the list in the Packages area of the dialog box, which displays:

• Package name
• Current version of software


• Current status of package, either:
• Active - package is installed and, if a commitment is required, is committed.
• Not Active - package is installed, but is not active, so it cannot be used.
A package becomes Not Active when:
A new revision of the same software is installed - the new revision is Active; the old revision becomes Not Active.
You revert to a previous revision of the software - the software to which you reverted becomes Active; the other version of the same software becomes Not Active.
• Active (commit required) - package is installed and running, but the installation has not been finalized. Some features may not become available until the commit operation has taken place.

What Next? Continue to the next section to commit the software you installed.

Committing (Finalizing) the Software Installation


New features may not be available until the commit is completed.
When you are ready to use the software in a production environment,
you commit the software to finalize the installation. Once you
commit a software package, you cannot revert it.
1. Open the Storage System Properties dialog box, as described in
the section Installing Storage-System Software on page 4-4, and click
the Software tab.
The Software tab displays, showing the status of the software
packages. (An example of the Software tab on the Storage
System Properties dialog box is shown on page 4-12.)
2. Select the software package you want to commit.
These packages will have an Active (commit required) status.
3. Click Commit.


Reverting Back to the Previous Software Version


There may be times when you want to install new software packages
and use them on a trial basis before you commit or finalize the
installation. If the new software does not behave as expected, Revert
lets you return to the previous revision of the software package, if one exists.
1. In the Enterprise Storage window, click either the Equipment tab
or the Storage tab to display the storage-system tree.
2. Right-click the storage systems on which the software you want
to revert is installed, click Properties, and then click the Software
tab.
The Software tab displays, showing the status of the software
packages. (An example of the Software tab on the Storage
System Properties dialog box is shown on page 4-12.)
3. Select the software package you want to revert.
The software can be reverted only if:
• A previous version of the software is installed on the system.
• The software package has not been committed.

The Revert button appears dimmed and is unavailable if the currently selected software package cannot be reverted.

4. Click Revert.

Displaying Status for All Installed Software Packages


You can view the revision and current status of all installed software
packages, as well as the dependencies of one software package on
another.
1. In the Enterprise Storage window, click either the Equipment tab
or the Storage tab to display the storage-system tree.
2. Right-click the storage systems for which you want to display
software status, click Properties, and then click the Software tab.


The Software tab opens, similar to the following. For information on the properties in the dialog box, click Help.

3. Select the software package for which you want to view dependencies and click Info.


The Package Information dialog box opens. Click Help for a description of each property in the dialog box.

4. Click Close to return to the Storage System Properties - Software tab.

What Next? Continue to Chapter 5 to do the following:


• Configure the Navisphere Host Agents on the storage-system
server connected to non-FC4700 storage systems.
• Add additional privileged users to the SP Agents on FC4700
storage systems.

5
Configuring the Remote Agent

This chapter describes how to configure remote Agents for FC4700 and non-FC4700 series storage systems. The chapter covers the following:
• Configuring SP Agents — FC4700 Series .......................................5-2
• Configuring Host Agents Remotely (Non-FC4700 Series) ..........5-4

You can use Manager to configure Navisphere 4.3 or higher remote Agents
only. How you do this depends on whether you have FC4700 or non-FC4700
series storage systems.

If your Navisphere Agent is version 4.2 or earlier, refer to the Navisphere Server Software Administrator’s Guide for your specific operating system.


Configuring SP Agents — FC4700 Series


To limit access to the storage system, you must add privileged users
to the SP Agent configuration file using the Agent tab on the SP
Properties dialog box. Privileged users can configure the storage
system, including binding and unbinding LUNs. If you do not add
privileged users to the agent configuration file, anyone who can log
in to the management station can configure the storage system. When
you add a privileged user, the system adds the user to the SP
agent.config file.
1. Select the storage system for which you want to display the SPs.
2. Right-click the icon for the SP (A or B) to which you want to add
privileged users, and then click Properties.
3. Click the SP Properties - Agent tab.
The SP Properties - Agent tab opens, similar to the following. For
information on the properties in the dialog box, click Help.


4. Under Privileged Users, click a blank line.


5. In the text box that displays, type the name of the privileged user.
6. Click Apply in the Agent tab to save your changes and continue
editing the SP Agent configuration file, or click OK to save your
changes and close the SP Properties dialog box.

What Next?
To continue editing the SP Agent configuration file, go to the next
section. If you have finished editing the file, go to Chapter 6 to set
storage-system properties.

Setting a Polling Interval
Polling Interval lets you specify the number of seconds between each poll of the storage system. Valid values are 10, 20, 30, 60, 120, 180, 240, 300, 600, 1200, and 1800.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system for which you want to set the polling interval.
3. Right-click the icon for the SP (A or B), and then click Properties.
4. In Polling Interval, select a valid polling interval value.
5. Click OK in the Agent tab to save your changes and close the SP
Properties dialog box.

What Next?
Go to Chapter 6 to set storage-system properties.


Configuring Host Agents Remotely (Non-FC4700 Series)


If you have non-FC4700 storage systems connected to a server, you
must configure the Host Agent in the server so it can communicate
with the Core Software running in the storage-system SPs. Before you
can configure a Host Agent from Manager, the Host Agent must
contain a user entry with your username and hostname.
What Next?
• If this user entry does not exist, go to the next section to add it.
• If this user entry exists and the Host Agent configuration file does
not include the line entry, device auto auto "auto", go to the
section Scanning for Devices on page 5-5.
• If this user entry exists and the Host Agent configuration file
includes the line entry, device auto auto "auto", go to the section
Updating Parameters on page 5-11.

Any user who can log in to a host that is a Navisphere management station
can monitor the status of any of the managed storage systems.

The pathname to the agent configuration file is /etc/Navisphere/.naviagent.config.xxxxxx, where xxxxxx represents the time-date stamp.

Adding a User to the Agent Configuration File

To let a user configure a server’s storage system(s) from the Navisphere Manager on different hosts, you must add a user entry for each host to the agent configuration file.

Add a name entry to the /etc/Navisphere/agent.config file:
user name


where name is the person’s username. The format of this name differs depending on whether the person will be using Manager on a local or remote host.
For a local host - the format is user, where user is the person’s user account name.
For a remote host - the format is user@hostname, where user is the person’s user account name and hostname is the name of the remote host.

For example, if you want to allow user anne to edit the Host Agent
configuration file and configure a host’s storage system using the
Navisphere Manager running on either remote host img01 or remote
host img02, you must add the following entries to the server’s agent
configuration file:
user anne@img01
user anne@img02

For these changes to take effect, you must save the agent configuration file,
and then restart the Host Agent.
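As a minimal sketch of the file edit itself, the snippet below appends the two example entries and verifies them. It works on a temporary file so it is safe to run anywhere; on a real server you would edit /etc/Navisphere/agent.config and then restart the Host Agent.

```shell
#!/bin/sh
# Sketch: append privileged-user entries (from the example above) to a
# temporary stand-in for /etc/Navisphere/agent.config and verify them.
cfg=$(mktemp)
printf 'user anne@img01\nuser anne@img02\n' >> "$cfg"
grep -c '^user anne@' "$cfg"    # prints 2
rm -f "$cfg"
```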

What Next? You can now use Manager to edit the Host Agent configuration file so the Agent can communicate with the storage system. The Agent tab in the Host Properties dialog box lets you remotely configure a Navisphere Agent—including basic settings, communication channels, and privileged users—on a supported host.

Scanning for Devices
Before the Host Agent can communicate with a storage system, you must add a communication channel (device entry) to the Host Agent configuration file.

If, when you were installing the server software, you edited the Host Agent configuration file to include the entry, device auto auto "auto", you do not need to add communication channel device entries to the Host Agent configuration file. These are created dynamically each time you start the Host Agent. Go to the section, Updating Parameters on page 5-11.


1. In the Enterprise Storage dialog box, click the Hosts tab.


2. Right-click the icon for the host for which you want to set or
display the Host Agent properties, and click Properties.
The Host Properties dialog box opens, similar to the following.
For information on the properties in the dialog box, click Help.

3. In the Host Properties dialog box, click the Agent tab.


4. Scan for device entries by doing one of the following:
a. Click Auto Detect, and then click OK.
The Host Agent adds valid connection paths to the storage
systems connected to the host.
or
a. Clear Auto Detect, and then click Advanced.


The Advanced Device Configuration dialog box opens, similar to the following. For information on the properties in the dialog box, click Help.

If the Host Agent configuration file was never edited to add device
entries, the Communications Channels list is empty.

b. In the Advanced Device Configuration dialog box, click Scan.


The Host Agent clears the Communications Channels list,
scans the SCSI bus, and adds all connected storage systems it
finds to the Communication Channels list.
c. Click Close to return to the Agent tab.
d. Click Apply in the Agent tab to save your changes and
continue editing the agent configuration file, or click OK to
save your changes and close the Host Properties dialog box.
This stops and restarts the Host Agent automatically.
or
a. Click Scan Bus.


The Host Agent adds valid connection paths to the storage systems connected to the host, and opens the Scan SCSI Buses dialog box, similar to the following. For information on the properties in the dialog box, click Help.

Scan SCSI Buses lets you view specific information for all
midrange storage devices and non-midrange storage devices.
b. Click Close to close the dialog box and return to the Host
Properties - Agent tab.

What Next?
To add privileged users, go to Adding Privileged Users on page 5-10.
To make changes to the Communications Channels list, go to
Updating the Communications Channels List on page 5-9.
To Update Host Agent parameters, go to Updating Parameters on
page 5-11.


Updating the Communications Channels List


You can add, delete, or clear device entries from the Communications Channels list.

Adding Devices
The remote Host Agent lets you add new devices to the Communication Channels list.

1. In the Advanced Device Configuration dialog box, click Add Device.
2. In the Add Device dialog box, do the following:
a. Enter the OS device name.
b. In Storage System, type the name of the storage system that
you want the new device to manage.
c. In Connection Type, select the desired connection type.
d. Optionally, in Comment, type any comments pertaining to
this device.
e. Click Close to return to the Agent tab.
3. Click Apply in the Agent tab to save your changes and continue
editing the agent configuration file, or click OK to save your
changes and close the Host Properties dialog box.

Deleting Devices
When you delete a device, you remove it from the Communication Channels list, and the device can no longer be used to manage the storage system.
1. In the Advanced Device Configuration dialog box, select the
device that you want to delete.
2. Click Delete Device.
The device is deleted from the Communication Channels list.
3. Click Close to return to the Agent tab.
4. Click Apply in the Agent tab to save your changes and continue
editing the agent configuration file, or click OK to save your
changes and close the Host Properties dialog box.


Clearing Devices
Clearing devices removes all the current devices from the Communication Channels list.

1. In the Advanced Device Configuration dialog box, click Clear.


All connected devices are removed from the Communication
Channels list.
2. Click Close to return to the Agent tab.
3. Click Apply in the Agent tab to save your changes and continue
editing the agent configuration file.

Adding Privileged Users
Privileged users can configure the storage system, including binding and unbinding LUNs. When you add a privileged user, the system adds the user to the host agent.config file.

1. In the Host Properties dialog box, click the Agent tab.


2. Under Privileged Users, click a blank line.
3. In the box that displays, type the name of the privileged user.
4. When you have finished, click OK.
5. Click Close to return to the Agent tab.
6. Click Apply in the Agent tab to save your changes and continue
editing the agent configuration file, or click OK to save your
changes and close the Host Properties dialog box.

What Next?
Go to the next section to change the polling interval, set the serial line
baud rate for the storage system, or select the size of the log to be
transferred.


Updating Parameters
Updating parameters includes setting the polling interval, the serial line baud rate, and the log entries to transfer. To update parameters, you must have privileges.
Polling Interval - Lets you specify the number of seconds between
each poll of the storage system. Valid values are 10, 15, 30, 60, and
120.
Serial Line Baud Rate - Lets you select the serial communication
baud rates. Valid values are 9600, 19200, and 38400.
Log Entries to Transfer - Lets you select the log size to be transferred.
Valid values are 100, 2048, 5000, and All.
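Although you normally set these values from the Agent tab, they are stored as lines in the agent configuration file. The fragment below is only an illustration; the exact keywords shown (poll, baud, eventlog) are assumptions and may differ in your Agent version, so set the values through the GUI rather than editing the file by hand.

```
poll 60
baud 19200
eventlog 2048
```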

1. In the Host Properties dialog box, click the Agent tab.


2. Under Update Parameters, select a valid polling interval from the
Polling Interval drop-down list.
3. Select the serial line baud rate from the Serial Line Baud Rate
drop-down list.
4. Select the log entries to transfer from the Log Entries to Transfer
drop-down list.
5. When you have finished, click OK.

What Next?
Continue to Chapter 6 to set storage-system properties.


6
Setting Storage-System Properties

When you set up a storage system with SPs, you can change its
general, memory, cache, data access, and configuration access
properties or use the default values for these properties.

In the storage system Properties dialog box, the following is true:

For all shared storage systems - The Data Access tab is visible.
For non-FC4700 storage systems - The Configuration Access tab is visible.
For FC4700 storage systems - The Storage tab is visible, and if MirrorView is
installed, the Remote Mirrors tab is visible.

If you want to use read or write caching or create RAID 3 LUNs, you
must set certain storage-system memory and cache properties. If you
are using caching, you may also want to change the time for running
the self-test of each standby power supply (SPS) or the battery
backup unit (BBU).
This chapter describes:
• Setting Storage-System Configuration Access Properties
(non-FC4700 Storage Systems).........................................................6-2
• Setting Storage-System General Configuration Properties........6-10
• Setting Storage-System Memory Properties ................................6-14
• Setting the Cache Properties...........................................................6-21
• Setting the Storage-System Hosts Property .................................6-24
• Setting the Battery Test Time ..........................................................6-30


Setting Storage-System Configuration Access Properties (non-FC4700 Storage Systems)
Storage-system configuration access properties are available for
non-FC4700 shared storage systems only.

We recommend that you use the storage system’s configuration access control to give this privilege to one or two servers only.

This section describes:


• Configuration access
• Storage-system configuration access properties
• How to set storage-system configuration access properties

Configuration Access
Non-FC4700 shared storage systems provide configuration access
control. This feature lets you restrict the server ports that can send
configuration commands to the storage system. We recommend that
you use the storage system’s configuration access control to give this
privilege to one or two servers only.
By default, any user whose username is entered in a Host Agent
configuration file can configure any non-FC4700 shared storage
system connected to the server.
Such a privileged user can perform any configuration task, such as
binding or unbinding LUNs from the management station.
Configuration access control lets you restrict the servers that can send
configuration commands from a privileged user to an attached
storage system. Without configuration access control, any server can
send configuration commands from a privileged user to any
connected storage system.
Configuration access is governed by a management login password
that you set when you set up the storage system.

6-2 EMC Navisphere Manager Version 5.X Administrator’s Guide


Storage-System Configuration Access Properties


The storage-system configuration access properties are:
• Configuration access control
• Host access status
– Enable access
– Disable access

Configuration Access Control


Configuration access control enables or disables configuration access
control for the selected storage system. Configuration access control
is available for non-FC4700 shared storage systems only. The first
person to enable configuration access control for a storage system
must define a management login password. Thereafter, anyone who
disables configuration access control for the storage system or
changes host access status for the storage system must enter this
password.
By default, configuration access control is disabled, so any server
connected to a storage system can send configuration commands to
the storage system. If you enable configuration access control, none of
the servers connected to the storage system can send configuration
commands to it, because the host access status of each server is disabled.
Use the enable access property to permit one or more servers to send
configuration commands to the storage system.

Host Access Status - Enable Access, Disable Access


Enable access enables configuration access for the servers that you
select. Enabling configuration access lets the selected servers send
configuration commands to the storage system. If configuration
access is enabled for a server, it is enabled for all server initiators
(HBA ports) connected to the storage system. You must enter the
management login password before enabling configuration access for
the servers.
Disable access disables configuration access for the server that you
select. Disabling configuration access prevents the selected servers
from sending commands to the storage system. If configuration
access is disabled for a server, it is disabled for all server initiators
(HBA ports) connected to the storage system. You do not need to
enter the management login password before disabling configuration
access for the servers.


All servers can send certain LUN configuration commands to the storage
system even when configuration access to the storage system is disabled for
them. These commands set the user-defined properties on the General,
Cache, and Prefetch tabs in the LUN Properties dialog box, which are the
properties listed below.

Tab        User-Defined Properties

General    Rebuild Priority, Verify Priority, Auto Assignment
           Enabled, Default SP Owner

Cache      Read Cache Enabled, Write Cache Enabled

Prefetch   All properties

Enabling Configuration Access Control for a Non-FC4700 Shared Storage System


Before you can enable or disable configuration access to the storage
system from a server, you must enable configuration access control
for the storage system, which involves setting the management login
password for the storage system.

If configuration access control is not enabled for a storage system, any server
connected to the storage system can send configuration commands to the
storage system.

1. In the Enterprise Storage dialog box, click the Equipment or


Storage tab.
2. Right-click the storage-system icon for which you want to enable
configuration access control.
3. Click Properties, and then click the Configuration Access tab.


The Configuration Access tab of the Storage System Properties


dialog box opens, similar to the following. For information on the
properties in the dialog box, click Help.

4. In the Configuration Access tab, select the Configuration Access


Control check box.


The Enable Management Login dialog box opens, similar to the


following.

5. In New Password, type the password you want to use.


6. In New Password Verification, type the password again.

You will need this password to enable or disable configuration access for
the storage system, and to enable configuration access for a host. If no
one can remember the current password, then you must connect a
management station to the serial port on a storage-system SP to change
the password.

7. Click OK to save the password and return to the Configuration


Access tab.

What Next?
Continue to the next section to enable configuration access for
servers.

Enabling and Disabling Configuration Access for Servers


The first time that configuration access is enabled for the storage
system, configuration access is disabled for all servers connected to it.
Thereafter, if storage-system configuration access is disabled and
then re-enabled, the configuration access for a server is the same as it
was when storage-system configuration access was disabled.
When you enable configuration access for a server, it is enabled for
each server HBA port (initiator) connected to the storage system.


To Enable Configuration Access for Servers

IMPORTANT: Before you can enable configuration access for a server to the
storage system, you must enable configuration access control for the storage
system (see page 6-4).

1. If the Configuration Access tab of the Storage System Properties


dialog box for the storage system is not open, open it as follows:
a. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
b. Right-click the storage-system icon for which you want to
enable configuration access control.
c. Click Properties, and then click the Configuration Access tab.
The Configuration Access tab of the Storage System
Properties dialog box opens with the Basic tab displayed.
2. Under Host Access Status, select the servers that should have
configuration access to the storage system.
3. Click Enable Access.
The Enable Management Login dialog box opens.

4. In the Enable Management Login dialog box, enter the


management login password, and click OK.
The host access status for the selected host changes from
Disabled to Enabled.


To Disable Configuration Access for Servers


1. Click the Configuration Access tab of the Storage System
Properties dialog box for the storage system.
2. Under Host Access Status, select the servers that should not have
configuration access to the storage system.
3. Click Disable Access, and click OK in the confirmation dialog
box.
The host access status for the selected hosts changes from
Enabled to Disabled.

To Verify the Configuration Access Privileges for the Server


1. In the Configuration Access tab, under Host Access Status, click
the Advanced tab.
The Advanced tab opens, similar to the following. For
information on the properties in the dialog box, click Help.


2. Under Host Access Status, look for an entry for each initiator
(HBA) connected to the storage system.
If the host access status for the selected host is Disabled and it
should be Enabled, click the Basic tab and repeat the procedure,
To Enable Configuration Access for Servers on page 6-7.
If the host access status for the selected host is Enabled and it
should be Disabled, click the Basic tab and repeat the procedure,
To Disable Configuration Access for Servers on page 6-8.
3. Click OK to close the Properties dialog box.
The servers with access enabled can now send configuration
commands to the storage system. The servers with access
disabled cannot send configuration commands to the storage
system.

What Next?
Continue to the next section, Setting Storage-System General
Configuration Properties.


Setting Storage-System General Configuration Properties


This section describes the general configuration properties for a
storage system and explains how to set them.

General Configuration Properties


The general configuration properties are:
• Enable automatic polling
• Automatic polling priority
• SP A statistics logging
• SP B statistics logging

Enable Automatic Polling


Enable automatic polling disables or enables automatic polling for
the storage system at set intervals.

When enable automatic polling is set (the default) for a storage system,
automatic polling of that storage system occurs only if automatic polling
(background polling) for the Manager session is enabled (that is, only when
Automatic Polling is selected on the Operations menu on the Main window
toolbar). By default, automatic polling for the session is disabled.

Automatic Polling Priority


The automatic polling priority, together with the automatic polling
interval for the Manager session, determines how often the storage
system is polled when automatic polling is enabled for both the
session and the storage system. By default, automatic polling is
disabled for the session and enabled for each managed storage
system.


The interval at which Manager automatically polls the Agent for
storage-system information equals the automatic polling interval
multiplied by the automatic polling priority for the storage system, as
follows:

Polling Priority When the Application Polls the Agent

1 Each time the polling interval elapses (on each cycle)

2 After 2 intervals (cycles) elapse

3 After 3 intervals (cycles) elapse

4 After 4 intervals (cycles) elapse

5 After 5 intervals (cycles) elapse

For example, if the automatic polling interval is 300 seconds (5


minutes) and the automatic polling priority for the storage system is
3, Manager polls the Agent for storage-system information every 900
seconds (300*3 seconds or 15 minutes).
By default, the automatic polling priority is 1, and the automatic
polling interval is 60 seconds for all managed storage systems with
automatic polling enabled.

You can change the automatic polling interval for all managed storage
systems, but not for an individual one.
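The interval arithmetic above - effective polling period equals the session polling interval multiplied by the storage-system priority - can be expressed as a short helper. The function name is ours, for illustration only:

```python
# Effective polling period = session polling interval * storage-system priority.
def effective_poll_seconds(interval_s: int, priority: int) -> int:
    if not 1 <= priority <= 5:
        raise ValueError("priority must be 1-5")
    return interval_s * priority

# A 300-second interval with priority 3 means the storage system is
# polled every 900 seconds (15 minutes), matching the example above.
print(effective_poll_seconds(300, 3))  # 900
```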

SP A Statistics Logging and SP B Statistics Logging


SP A statistics logging enables or disables logging of statistics by SP
A, and SP B statistics logging enables or disables logging of statistics
by SP B.
Each SP maintains a log of statistics for the LUNs, disks, and
storage-system caching. You can turn this log on or off.

Setting the General Configuration Properties


Use the General tab of the Storage System Properties dialog box to
set the general configuration properties for a storage system.
To set the general configuration properties for a storage system:
1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.


2. Right-click the icon for the storage system whose configuration


properties you want to set, and then click Properties.
The General tab of the Storage System Properties dialog box
opens, similar to the following, for a shared storage system.

3. If you want to change the name of the storage system, in Name


type the new name.
4. Under Configuration, change any of the user-defined properties
that you want to have new values:
a. Select the Enable Automatic Polling check box to enable
automatic polling for the storage system, or clear it to disable
automatic polling for the storage system.


When automatic polling is enabled for the storage system, an Agent


polls the storage system only if automatic polling (background
polling) is enabled for the session (that is, only when Automatic
Polling is selected on the Operations menu on the Main window
toolbar).

b. In the Automatic Polling Priority list, click the new priority.


c. Select the SP A Statistics Logging check box to start SP A
logging statistics, or clear it to stop SP A logging statistics.
d. Select the SP B Statistics Logging check box to start SP B
logging statistics, or clear it to stop SP B logging statistics.
5. Either click Apply to apply your changes and leave the
Properties dialog box open so you can change other
storage-system properties, or click OK to apply your changes and
close the Properties dialog box.

What Next?
• If you need to allocate memory for caching or for binding RAID 3
LUNs, continue to the next section, Setting Storage-System Memory
Properties.
• If you do not need to allocate memory for caching or binding
RAID 3 LUNs, go to Setting the Storage-System Hosts Property on
page 6-24.


Setting Storage-System Memory Properties


Setting memory properties consists of assigning memory to memory
partitions. You must assign memory to the appropriate partitions to
do the following:
• Use read or write caching
• Bind RAID 3 LUNs
This section describes:
• Memory requirements for read and write caching
• Memory requirements for RAID 3 LUNs
• Effects of SP memory architecture on memory assignment
• How to assign memory to partitions

! CAUTION
Before you bind a RAID 3 LUN, do the following:

For non-FC4700 storage systems


Assign at least 2 Mbytes per RAID 3 LUN to the RAID 3 memory
partition. If this partition does not have adequate memory for the
LUN, you will not be able to bind it. Changing the size of the RAID
3 memory partition reboots the storage system. Rebooting restarts
the SPs in the storage system, which terminates all outstanding I/O
to the storage system.

For FC4700 storage systems that support RAID 3 LUNs


Allocating memory to the RAID 3 memory partition is not required for
RAID 3 RAID Groups and LUNs. (The RAID 3 memory partition
appears dimmed and is unavailable.) If there will be a large
amount of sequential read access to this RAID 3 LUN, you may
want to enable read caching with prefetching for the LUN.


Assigning Memory to Partitions


Use the Memory tab on the Storage System Properties dialog box to
assign storage-system memory on each SP to these partitions:
• Read cache
• Write cache
• RAID 3 partitions
These partitions have a default size of 0.
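The memory accounting these partitions follow - each partition draws from the SP's free memory - can be sketched as below. The numbers and the function name are illustrative, not taken from any particular storage system:

```python
# Illustrative sketch: partitions assigned on an SP cannot exceed the
# SP's free memory (total memory minus the memory reserved for SP usage).
def free_after_assignment(total_mb, sp_usage_mb, read_mb, write_mb, raid3_mb):
    free = total_mb - sp_usage_mb
    assigned = read_mb + write_mb + raid3_mb
    if assigned > free:
        raise ValueError("partitions exceed free memory")
    return free - assigned

# Example numbers only: 256 Mbytes total, 31 Mbytes reserved for the SP,
# 75 Mbytes read cache, 150 Mbytes write cache, no RAID 3 partition.
print(free_after_assignment(256, 31, 75, 150, 0))  # 0
```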

To Assign Memory to Partitions

! CAUTION
Changing the RAID 3 partition size causes the application to reboot
the storage system. This terminates all outstanding I/O to the
storage system.

1. In the Enterprise Storage dialog box, click the Equipment or


Storage tab.
2. Right-click the icon for the storage system whose memory you
want to assign, and then click Properties.
3. In the Storage System Properties dialog box, click the Memory
tab.
The Memory tab of the Storage System Properties dialog box
opens, similar to the following, for a shared storage system.


The meanings of the sections under SP Memory in the Memory


tab are as follows:
Total Memory - Size in Mbytes of the SP’s memory capacity.
SP Usage - Size in Mbytes of the SP memory reserved for the SP’s
use.
Free Memory - Size in Mbytes of the SP memory available for the
read cache, write cache, and RAID 3 memory partitions.
SP Pie Charts - Graphical representation of the current SP
memory partitions.
SP A Read Cache Memory - Sets the size in Mbytes of SP A’s
read-cache memory partition.
SP B Read Cache Memory - Sets the size in Mbytes of SP B’s
read-cache memory partition.


Write Cache Memory - Sets the size in Mbytes of the write cache
on each SP.
RAID 3 Memory - Sets the size in Mbytes of the RAID 3 memory
partition on both SPs.
4. Under User Customizable Partitions, type the size or move the
slider to adjust the size of each memory partition that you want to
change.
When you do this, Manager reassigns memory in one of two
ways:
• From free memory to a partition whose size you are increasing
• To free memory from a partition whose size you are decreasing
The pie charts reflect the changes in memory assignment.
As a general guideline, we recommend that you make the
write-cache partition about twice the size of the read-cache
partition on each SP. For example, if the total memory for each SP is
256 Mbytes, you can assign 150 Mbytes to the write-cache partition
and 75 Mbytes to the read-cache partition on each SP. For precise
allocation, type the size instead of using the slider.
5. When you complete the memory assignment, do one of the
following:
• Click Apply to save your changes and leave the dialog box
open so that you can change other storage-system properties.
• Click OK to save your changes and close the dialog box.
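The 2:1 write-to-read sizing guideline in step 4 can be sketched as follows. This is an illustrative helper, not part of Manager; the function name is ours:

```python
# Split an SP's free memory (in Mbytes) so the write cache is about
# twice the size of the read cache, per the guideline above.
def suggest_cache_split(free_mb: int) -> dict:
    write_mb = (2 * free_mb) // 3          # write cache ~ twice the read cache
    read_mb = free_mb - write_mb
    return {"write_cache_mb": write_mb, "read_cache_mb": read_mb}

# 225 Mbytes of free memory yields the 150/75 split used in the example.
print(suggest_cache_split(225))  # {'write_cache_mb': 150, 'read_cache_mb': 75}
```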

What Next?
Your next action depends on whether you assigned memory to the
read-cache or write-cache memory partitions.
Memory assigned to cache partitions - Continue to the next section,
Setting Storage-System Cache Properties.
Memory not assigned to cache partitions - Go directly to Chapter 7,
which describes how to create LUNs.


Setting Storage-System Cache Properties


This section describes:
• Hardware requirements for storage-system caching
• Storage-system cache properties
• Setting the cache properties for a storage system
All storage systems support read caching. A storage system supports
write caching only if it has the required hardware, which varies with
the storage-system type as shown in the following table.

Table 6-1 Hardware Requirements for Write Caching

                 FC4400/4500,       FC4700,            C1900, C2x000,
                 FC5600/5700        FC5200/5300        C3x00               C1000

Disks            0-0 through 0-8    0-0 through 0-4    A0, B0, C0, D0, E0  A0 through A4

SPs              Two                                   Two, with at least 8 Mbytes memory

Power supplies   Two in DPE and each DAE               Two

LCCs             Two in DPE and each DAE               Not applicable

Backup power     Fully charged SPS                     Fully charged BBU

Storage-System Cache Properties


The storage-system cache properties you can set are:
• Page size
• Low watermark
• High watermark
• Enable watermark processing
• Mirrored write cache
• SP A read caching
• SP B read caching
• Write caching


Page Size

Page size sets the number of Kbytes stored in one cache page. The
storage processors (SPs) manage the read and write caches by pages
instead of sectors. The larger the page size, the more contiguous
sectors the cache stores in a single page. The default page size is
2 Kbytes.
As a general guideline, we recommend the following page sizes:
• For general file server applications: 8 Kbytes
• For database applications: 2 or 4 Kbytes
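Assuming 512-byte sectors (a common sector size, not stated in this guide), the relationship between page size and contiguous sectors per page works out as:

```python
# Contiguous sectors held by one cache page, assuming 512-byte sectors
# (an assumption for illustration; this guide does not state sector size).
def sectors_per_page(page_size_kb: int) -> int:
    return page_size_kb * 1024 // 512

print(sectors_per_page(2))  # 4  - default 2-Kbyte page
print(sectors_per_page(8))  # 16 - 8-Kbyte page for file server workloads
```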

Low Watermark, High Watermark, Enable Watermark Processing


The SPs use high and low watermarks to determine when to flush
their write caches. When an SP flushes its write cache, it writes its
dirty pages to disk. A dirty page is a write-cache page with modified
data that has not yet been written to disk.
You can make the following selections regarding watermark
processing:

• Low watermark - Sets the low watermark. In the Low Watermark
  box, select a value for the low watermark. This value determines
  when flushing stops.

• High watermark - Sets the high watermark. In the High Watermark
  box, select a value for the high watermark. This value determines
  when flushing starts.

• Enable watermark processing - Enables or disables watermark
  processing. Select the Enable Watermark Processing check box to
  enable watermark processing. If you do not enable watermark
  processing, the system sets both watermarks to 100.


Following are further details about the high and low watermarks:

• High watermark - The percentage of dirty pages in the write
  cache which, when reached, causes the SPs to begin flushing
  their write caches. A low value for the high watermark causes
  the SPs to begin flushing their write caches sooner than a high
  value. The default value is 96%.

• Low watermark - The percentage of dirty pages in the write
  cache which, when reached, causes the SPs to stop flushing
  their write caches. A high value for the low watermark causes
  the SPs to stop flushing their write caches sooner than a low
  value. The default value is 80%.
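The start/stop hysteresis that the two watermarks create can be modeled in a few lines. This is an illustrative sketch, not SP firmware; the class name is invented:

```python
# Illustrative model of watermark processing: flushing starts when the
# percentage of dirty pages reaches the high watermark and stops once
# it falls to the low watermark (defaults 96% and 80%).
class WriteCacheFlusher:
    def __init__(self, low_pct=80, high_pct=96):
        self.low, self.high = low_pct, high_pct
        self.flushing = False

    def update(self, dirty_pct: float) -> bool:
        if not self.flushing and dirty_pct >= self.high:
            self.flushing = True          # begin writing dirty pages to disk
        elif self.flushing and dirty_pct <= self.low:
            self.flushing = False         # low watermark reached; stop flushing
        return self.flushing

f = WriteCacheFlusher()
print(f.update(90))   # False - below the high watermark
print(f.update(97))   # True  - high watermark crossed, flushing starts
print(f.update(85))   # True  - still above the low watermark
print(f.update(79))   # False - low watermark reached, flushing stops
```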

Mirrored Write Cache


Mirrored write cache sets the write-cache type for both SPs in a
storage system to either mirrored or non-mirrored. If the SP with a
non-mirrored write cache fails, all write-cache data not saved to disk
is lost. To provide data integrity, the write-cache type is mirrored for
the SPs in most types of storage systems.

SP A and SP B Read Caching


You can make the following selections for SP A and SP B read
caching:

• SP A - Enables or disables storage-system read caching for SP A
  by enabling or disabling the read cache on SP A.

• SP B - Enables or disables storage-system read caching for SP B
  by enabling or disabling the read cache on SP B.

The read cache on one SP is independent of the read cache on the


other SP. On powerup, a storage system automatically enables the
storage-system read caching for an SP if the SP’s read-cache size is
non-zero.


You can enable or disable storage-system read caching for an SP


without affecting the information on the LUNs owned by the SP. You
must enable the storage-system read caching for an SP before any of
the SP’s LUNs with read caching enabled can use read caching.
Some other operations, such as setting most of the LUN caching
properties, require that the SP A or SP B read cache be disabled. If the
cache is enabled when you perform any of these operations, Manager
automatically disables the appropriate cache for you and re-enables it
after the operation is complete.

Write Caching

Write caching enables or disables storage-system write caching by
enabling or disabling the write cache on each SP. The write cache on
one SP mirrors the write cache on the other SP. As a result, both write
caches are always the same size and are always either both enabled or
both disabled. On powerup, a storage system automatically enables
the write cache on each SP if the write-cache size is non-zero.
You can enable or disable storage-system write caching without
affecting the information on the LUNs owned by the SP. You must
enable the storage-system write caching before any LUNs with write
caching enabled can use write caching.
Some other operations, such as setting most of the LUN caching
properties, require that the write cache be disabled. If the write cache
is enabled when you perform any of these operations, Manager
automatically disables the write cache for you and re-enables it after
the operation is completed.

Setting the Cache Properties


Use the Cache tab on the Storage System Properties dialog box to set
the storage-system cache properties.

The minimum write cache and read cache partition size is 1 Mbyte for
non-FC4700 storage systems, and 3 Mbytes for FC4700 storage systems.

1. In the Enterprise Storage dialog box, click the Equipment or


Storage tab.
2. Right-click the icon for the storage system whose cache properties
you want to change, and then click Properties.
3. In the Storage System Properties dialog box, click the Cache tab.


The Cache tab of the Storage System Properties dialog box


opens, similar to the following, for a shared storage system.

4. Under Configuration, change any of the user-defined properties


for which you want new values:
a. In the Page Size list, click the new page size.
b. Select the Enable Watermark Processing check box to enable
watermark processing, or clear it to disable watermark
processing.
You can change the Low Watermark or High Watermark
values only if the Enable Watermark Processing check box is
selected.
c. In the Low Watermark list, click the new low watermark.
d. In the High Watermark list, click the new high watermark.


e. Select the SP A Read Caching check box to enable


storage-system read caching for SP A, or clear it to disable
storage-system read caching for SP A.
f. Select the SP B Read Caching check box to enable
storage-system read caching for SP B, or clear it to disable
storage-system read caching for SP B.
g. Select the Write Caching check box to enable storage-system
write caching, or clear it to disable storage-system write
caching.
5. Do one of the following:
• Click Apply to apply your changes and leave the Properties
dialog box open so that you can change other storage-system
properties
• Click OK to apply your changes and close the Properties
dialog box.
If storage-system read or write caching is enabled when you
change the value of a cache property, then read or write
caching is disabled before the change is made, and then
re-enabled when the change is complete.

What Next?
Your next action depends on whether the storage system is a shared
storage system.
Shared storage system - Continue to the next section, Setting the
Storage-System Hosts Property.
Unshared storage system - If you are using caching and want to
change the battery test time, go to the section, Setting the Battery Test
Time on page 6-30; otherwise, go to Chapter 7 to create LUNs on the
storage system.


Setting the Storage-System Hosts Property


The storage-system hosts property - enforce fair access - is available
only for non-FC4700 shared storage systems.

For FC4700 storage systems Enforce Fair Access is always enabled, but
appears dimmed and is unavailable for change.

This section:
• Describes fair access to storage-system resources
• Explains how to set the enforce fair access property for a
non-FC4700 storage system

Fair Access to the Storage-System Resources


By default, the storage system processes I/O requests from servers on
a first-come, first-served basis. With multiple servers contending for
the use of a storage system, a server with disproportionate processing
demands might monopolize storage-system resources. In addition,
operating systems, such as Windows, use scheduling policies that
slow down shared storage-system access.
To provide each server connected to shared storage systems with a
fair amount of storage-system resources, shared storage systems have
an optional fairness algorithm. This algorithm tries to manage the
I/Os accepted by the storage system so that servers accessing
different LUNs with similar data access patterns will get similar I/O
throughput. Some data access patterns, however, do not work well
with the algorithm.
We strongly recommend that you try using fair access, especially if
Windows servers are accessing the storage system. Should I/O
performance be unsatisfactory, you can turn off fair access and return
to the first-come, first-served algorithm.
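The difference between the default first-come, first-served ordering and a fair per-server ordering can be illustrated with a simple round-robin sketch. The actual fairness algorithm is more sophisticated; this is only a conceptual model, and both function names are ours:

```python
# Conceptual contrast: first-come, first-served vs. a fair round-robin
# pick across servers (not the storage system's actual algorithm).
from collections import deque

def fcfs(requests):
    # requests: list of (server, io) tuples in arrival order
    return [io for _, io in requests]

def round_robin(requests):
    queues = {}
    for server, io in requests:
        queues.setdefault(server, deque()).append(io)
    order = []
    while any(queues.values()):
        for server in list(queues):       # one I/O per server per pass
            if queues[server]:
                order.append(queues[server].popleft())
    return order

# Server A floods the system with three I/Os before server B's first one.
reqs = [("A", 1), ("A", 2), ("A", 3), ("B", 4)]
print(fcfs(reqs))         # [1, 2, 3, 4] - B waits behind all of A's I/Os
print(round_robin(reqs))  # [1, 4, 2, 3] - B's I/O is serviced fairly
```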

Enabling Fair Access to a Non-FC4700 Storage System


1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. Right-click the storage-system icon for which you want to enable
fair access.


3. Click Properties, and then click the Hosts tab.


The Hosts tab of the Storage System Properties dialog box opens,
similar to the following. For information on the properties in the
dialog box, click Help.

4. Select the Enforce Fair Access check box.


5. Click OK.

What Next?
Your next action depends on which storage system you are setting
up:
For an FC4700 storage system - If you have not set the IP address for
the SPs and ALPAs, go on to the next section, Setting the SP Network
and ALPA Properties (FC4700 Series Only). If you have set them and
you plan to change the battery and use caching, go to the section


Setting the Battery Test Time on page 6-30. Otherwise, go directly to


Chapter 7 to create LUNs on the storage system.
For a non-FC4700 storage system - If you plan to change the battery
and use caching, go to the section Setting the Battery Test Time on
page 6-30; otherwise, go directly to Chapter 7 to create LUNs on the
storage system.

Setting the SP Network and ALPA Properties (FC4700 Series Only)


The SP network properties establish the network name and address
for each SP; the ALPA (Arbitrated Loop Physical Address) properties
establish the SCSI ID for each SP port. The settings of these properties
must be correct; if any is wrong, Manager cannot communicate with
the SP and its LUNs.

Setting the SP Network Properties


The SP Network properties include:
• SP hostname (used when you select the SP for management)
• SP IP address, subnet address, and network mask (required to let
the management station use the Internet connection to
communicate with the SP)
The SP Properties - Network tab lets you manage the SP Internet
connection by changing the SP network name and address.

The network properties are initially set by EMC service personnel to work at
your site. Do not change any value unless you are moving the SP to another
LAN or subnet. If you change any value, after you click OK or Apply, the SP
will restart and use the new value.

To Set the SP Network Properties

1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. Right-click the SP whose properties you want to change.
3. Click Properties, and then click the Network tab.


4. The Network tab of the SP Properties dialog box opens, similar


to the following. For information on the properties in the dialog
box, click Help.

5. After specifying the new network name and address settings you
want, click OK or Apply.
6. Click Yes to confirm the change and close the dialog box.
The SP restarts using the new values specified.

What Next?
The SP network properties are independent of other SP properties;
there is no related setting you need to change next. Depending on
your reason for changing this SP’s network properties, you may want
to change one or more network properties of the other SP in this
storage system.


Setting the SP ALPA (Arbitrated Loop Physical Address) Properties


The SP ALPA Properties dialog box lets you change the SCSI ID
(ALPA address) of each SP port.

The SCSI IDs are initially set by EMC service personnel to work at your site.
Do not change any value unless you are installing a new SP and need to
change its SCSI IDs from the SP ship values of 0 and 0.

If you change any value, after you click OK or Apply, the SP will restart and
use the new values.

We suggest you use a unique SCSI ID for each SP port in your
installation. For example, on the first storage system, you can specify
SCSI IDs 0 and 1 for ports 0 and 1, respectively. On the second
storage system, you can specify IDs 2 and 3 for the ports, and so on.
The software will not let you select a SCSI ID out of range (0-255) or a
duplicate ID on a storage system.
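As a sanity check, the unique-ID scheme suggested above can be sketched in Python. The function below is illustrative only (it is not part of Navisphere); it hands out sequential IDs per port and enforces the 0-255 range that the software checks.

```python
# Illustrative sketch, not a Navisphere API: hand out a unique SCSI ID
# (ALPA address) to every SP port across storage systems, following the
# sequential scheme suggested above.
def assign_scsi_ids(num_storage_systems, ports_per_system=2):
    """Return {(system, port): scsi_id} with sequential, unique IDs."""
    assignments = {}
    next_id = 0
    for system in range(num_storage_systems):
        for port in range(ports_per_system):
            if next_id > 255:  # the software rejects IDs outside 0-255
                raise ValueError("out of SCSI IDs (valid range is 0-255)")
            assignments[(system, port)] = next_id
            next_id += 1
    return assignments

ids = assign_scsi_ids(2)
# First storage system, ports 0 and 1 get IDs 0 and 1; the second
# storage system's ports get IDs 2 and 3, as in the example above.
```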
To set the SCSI ID associated with an SP’s port:
1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. In the Equipment or Storage tree, right-click the SP whose
properties you want to change.
3. Click Properties, and then click the ALPA tab.


The ALPA tab of the SP Properties dialog box opens, similar to
the following. For information on the properties in the dialog box,
click Help.

4. Click Change to open the Change ALPA Setting dialog box.


5. Specify a new SCSI ID for one SP port, and then click OK.
Repeat step 5 if you want to specify a new SCSI ID for the other
SP port.
6. Click OK or Apply to save the changes.
7. Click Yes to confirm the change and close the dialog box.
The SP restarts using the new values specified.


What Next?
The SP port ALPA addresses (SCSI IDs) are independent of other SP
properties; there is no related setting you need to change next.
Depending on your reason for changing this SP’s port SCSI IDs, you
may want to change the IDs of the other SP in this storage system.
If you are using caching and want to change the battery test time,
continue on to the next section, Setting the Battery Test Time; otherwise,
go directly to Chapter 7 to create LUNs on the storage system.

Setting the Battery Test Time


Each week, the SP runs a battery self-test to ensure that the
monitoring circuitry is working in each SPS in an FC-series storage
system or in the BBU of a C-series storage system.
While the test runs, storage-system write caching is disabled, but
communication with the server continues. I/O performance may
decrease during the test. When the test is finished, storage-system
write caching is re-enabled automatically.
The factory default setting has the battery test start at 1:00 a.m. on
Sunday. You can change this setting using the procedure that follows.
1. In the Enterprise Storage dialog box, click the Equipment tab.
2. Double-click the icon for the storage system with the SPS or BBU
whose properties you want to change.
3. Double-click the icon for the enclosure with the storage
processors (SPs).
In an FC-series storage system, enclosure 0 contains the SPs. A
C-series storage system has only one enclosure, and it contains
the SPs.
4. Double-click the Standby Power Supplies or Battery Backups
icon.
5. Right-click the icon for SPS A or SPS B or the BBU whose battery
test time you want to change and click Properties.


The Battery Test Time dialog box opens, similar to the following.

6. In Test Every, click the day on which you want the test to run.
7. In at, enter the time for the test to start in the format hh:mm,
where hh is the hour in 24-hour format and mm is the minutes.
For example, for 2:47 PM, enter 14:47.
8. Click OK to apply the settings and close the dialog box.
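The 24-hour conversion in step 7 can be automated. The small Python sketch below is illustrative (it is not part of Navisphere); it formats a 12-hour time as the hh:mm string the dialog expects.

```python
from datetime import datetime

def to_24h(time_12h):
    """Convert e.g. '2:47 PM' to the 24-hour 'hh:mm' form, '14:47'."""
    return datetime.strptime(time_12h, "%I:%M %p").strftime("%H:%M")

print(to_24h("2:47 PM"))  # prints 14:47, the value to enter in "at"
```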

What Next?
Continue to Chapter 7 to create LUNs on the storage system.


7
Creating LUNs and
RAID Groups

You can create LUNs on any storage system with SPs, that is, any
storage system except an FC5000 series storage system (JBOD
configuration). You can create RAID Groups on any storage system
that supports the RAID Group feature.
This chapter describes the following:
• LUNs, LUN RAID Types, and Properties.......................................7-2
• Creating LUNs in a Non-RAID Group Storage System .............7-10
• Creating RAID Groups....................................................................7-20
• Creating LUNs on RAID Groups...................................................7-27
• Verifying or Editing Device Information in the Host Agent
Configuration File (Non-FC4700 storage systems) .....................7-38


LUNs, LUN RAID Types, and Properties


This section describes LUNs, the RAID types for LUNs, and the LUN
properties.

LUNs A logical unit (LUN) is a grouping of one or more disks into one span
of disk storage space. A LUN looks like an individual disk to the
server’s operating system. It has a RAID type and properties that
define it.
You can have Manager create standard LUNs using the disks and
default property values that it selects, or you can create your own
custom LUNs with the disks and property values that you select. In a
storage system that supports RAID Groups, you create LUNs on
RAID Groups; therefore, you need to create a RAID Group before you
create a LUN.

RAID Types The RAID type of a LUN determines the type of redundancy, and
therefore, the data integrity provided by the LUN.
The following RAID types are available:
RAID 5 - An individual access array, which provides data integrity
using parity information that is stored on each disk in the LUN. This
RAID type is best suited for multiple applications that transfer
different amounts of data in most I/O operations.
RAID 3 - A parallel access array, which provides data integrity using
parity information that is stored on one disk in the LUN. This RAID
type is best suited for single-task applications, such as video storage,
that transfer large amounts of data in most I/O operations.
RAID 1 - A mirrored array, which provides data integrity by
mirroring (copying) its data onto another disk in the LUN. This RAID
type provides the greatest data integrity at the greatest cost in disk
space, and is well suited for an operating system disk.
RAID 0 - An individual access array without parity, which provides
the same individual access features as the RAID 5 type, but does not
have parity information. As a result, if a disk in the LUN fails, the
information on the LUN is lost.


RAID 1/0 - A mirrored individual access array without parity, which
provides the same individual access features as the RAID 5 type, but
with the highest data integrity. This RAID type is well suited to the
same applications as the RAID 5 type, but where data integrity is
more important than the cost of disk space.
Disk - An individual disk type, which functions just like a standard
single disk, and, as such, does not have the data integrity provided by
parity or mirrored data. This RAID type is well suited for temporary
directories that are not critically important.
Hot Spare - A single global spare disk, which serves as a temporary
replacement for a failed disk in a RAID 5, 3, 1, or 1/0 LUN. Data from
the failed disk is reconstructed automatically on the hot spare. It is
reconstructed from the parity data or mirrored data on the working
disks in the LUN; therefore, the data on the LUN is always accessible.
A hot spare LUN cannot belong to a Storage Group.

Number of Disks in a LUN


The RAID type of a LUN determines the number of disks that you
can select for the LUN, shown as follows:

Table 7-1 Number of Disks You Can Use in RAID Types

RAID Type Number of Disks You Can Use

RAID 5 3 - 16

RAID 3 5 or 9 (FC-series only)

RAID 1/0 4, 6, 8, 10, 12, 14, 16

RAID 1 2

RAID 0 3 - 16

Disk 1

Hot Spare 1
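For reference, Table 7-1 can be expressed as a lookup. The sketch below is illustrative, not a Navisphere API; the counts are transcribed from the table above.

```python
# Valid disk counts per RAID type, transcribed from Table 7-1.
VALID_DISK_COUNTS = {
    "RAID 5": set(range(3, 17)),        # 3 - 16
    "RAID 3": {5, 9},                   # FC-series only
    "RAID 1/0": set(range(4, 17, 2)),   # 4, 6, 8, ..., 16
    "RAID 1": {2},
    "RAID 0": set(range(3, 17)),        # 3 - 16
    "Disk": {1},
    "Hot Spare": {1},
}

def disk_count_ok(raid_type, n_disks):
    return n_disks in VALID_DISK_COUNTS[raid_type]

disk_count_ok("RAID 1/0", 6)  # True: RAID 1/0 needs an even count from 4 to 16
disk_count_ok("RAID 5", 2)    # False: RAID 5 needs at least 3 disks
```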


Table 7-2 Disks That Cannot Be Hot Spares

Series Disk IDs

FC4700, FC4400/4500, FC5500/5600 0:0 through 0:8

FC5200/5300 0:0 through 0:4

C3x00 A0, B0, C0, D0, E0, A3, A4

C2x00 A0, B0, C0, D0, E0, A3

C1900 A0, B0, C0, D0, E0, A1

C1000 A0, A1, A2, A3, A4, B0

LUN Properties The LUN properties determine the individual characteristics of a
LUN. You set LUN properties when you bind the LUN. You can
change some LUN properties after the LUN is bound. The LUN
properties are as follows:
• LUN ID (assigned at creation; cannot be changed)
• LUN size (RAID Group storage systems only)
• Element size
• Rebuild priority
• Verify priority
• Default owner
• Enable read cache
• Enable write cache
• Enable auto assign
• Number of LUNs to bind
• Alignment offset (not available for all FC4700 storage systems;
refer to the Manager release notes)

Element Size The element size is the number of disk sectors (512 bytes) that the
storage system can read or write to a single disk without requiring
access to another disk. (This assumes that the transfer starts at the
first sector in the stripe). The element size can affect the performance
of a RAID 3, RAID 5 or RAID 1/0 LUN. For non-FC4700 storage
systems, a RAID 3 LUN has a fixed element size of one sector. For
FC4700 storage systems, a RAID 3 LUN has a fixed element size of 16
sectors.
The smaller the element size, the more efficient the distribution of
data read or written. However, if the size is too small for a single I/O
operation, the operation requires access to two stripes. If there are
two stripes, reading and/or writing must be done from two disks
instead of one. You should use a size that is an even multiple of 16
sectors and is the smallest size that will rarely force access to another
disk. The default size, except for RAID 3 LUNs, is 128 sectors.
You set the element size for a LUN when you bind it. You cannot
change the element size without unbinding the LUN (thereby losing
its data) and rebinding it with the new size.
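The boundary effect described above can be checked numerically. This is a minimal sketch, assuming 512-byte sectors and a transfer that starts at the first sector of a stripe element; the function name is ours, not Navisphere's.

```python
SECTOR_BYTES = 512

def crosses_element(transfer_bytes, element_sectors=128):
    """True if a transfer starting at the first sector of an element
    spills past it, forcing access to a second disk."""
    return transfer_bytes > element_sectors * SECTOR_BYTES

crosses_element(64 * 1024)      # False: 64 KB fits a 128-sector element exactly
crosses_element(64 * 1024 + 1)  # True: one extra byte touches a second disk
```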

Rebuild Priority The rebuild priority is the relative importance of reconstructing data
on either a hot spare or a new disk that replaces a failed disk in a
LUN. It determines the amount of resources the SP devotes to
rebuilding instead of to normal I/O activity.

Table 7-3 Valid Rebuild Priorities

Value Target Rebuild Time in Hours

ASAP 0 (as quickly as possible)

HIGH 6

MEDIUM 12

LOW 18

The rebuild priorities correspond to the target times listed above. The
storage system attempts to rebuild the LUN in the target time or less.
The actual time to rebuild the LUN depends on the I/O workload,
the LUN size, and the LUN RAID type.
For a RAID Group with multiple LUNs, the highest priority specified
for any LUN on the group is used for all LUNs on the group. For
example, if the rebuild priority is High for some LUNs on a group
and Low for the other LUNs on the group, all LUNs on the group will
be rebuilt at High priority.
You set the rebuild priority for a LUN when you bind it, and you can
change it after the LUN is bound without affecting the data on the
LUN.
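The highest-priority-wins rule for a RAID Group can be sketched as follows; the names are illustrative, not a Navisphere API.

```python
# Rank the rebuild priorities from Table 7-3; ASAP is the most urgent.
PRIORITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "ASAP": 3}

def effective_rebuild_priority(lun_priorities):
    """All LUNs on a RAID Group rebuild at the highest priority
    specified for any LUN on that group."""
    return max(lun_priorities, key=PRIORITY_RANK.__getitem__)

effective_rebuild_priority(["HIGH", "LOW", "LOW"])  # "HIGH" for all three LUNs
```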

Verify Priority The verify priority is the relative importance of checking parity
sectors in a LUN. If an SP detects parity inconsistencies, it starts a
background process to check all the parity sectors in the LUN. Such
inconsistencies can occur after an SP fails and the LUN is taken over


by the other SP. The priority determines the amount of resources the
SP devotes to checking parity instead of to normal I/O activity.
Valid verify priorities are ASAP (as soon as possible), HIGH,
MEDIUM, and LOW. A verify operation with an ASAP or HIGH
priority checks parity faster than one with a MEDIUM or LOW
priority, but may degrade storage-system performance. The default
priority is LOW, and though a verify with this priority may take
many hours, it is adequate for most LUNs.
You set the verify priority for a LUN when you bind it, and you can
change it after the LUN is bound, without affecting the data on the
LUN.

Default Owner The default owner is the SP that assumes ownership of the LUN
when the storage system is powered up. If the storage system has two
SPs, you can choose to bind some LUNs using one SP as the default
owner and the rest using the other SP as the default owner. The
primary route to a LUN is the route through the SP that is its default
owner, and the secondary route is through the other SP.

LUNs that are not currently owned by an SP are unowned. A hot spare that is
not in use is an unowned LUN.

Valid default owner values are SP A, SP B, and Auto, which tries to
divide the LUNs equally between SP A and SP B. The default value is
SP A for a storage system with one SP, and Auto for a storage system
with two SPs.
You set the default owner for a LUN when you bind it and you can
change it after the LUN is bound, without affecting the data on the
LUN.
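One plausible model of how Auto divides new LUNs, keeping the per-SP counts equal or within one while accounting for LUNs the SPs already own. This is a sketch under those assumptions, not Navisphere's actual algorithm.

```python
def auto_assign(new_lun_count, owned_a, owned_b):
    """Assign each new LUN to whichever SP currently owns fewer LUNs."""
    owners = []
    for _ in range(new_lun_count):
        if owned_a <= owned_b:
            owners.append("SP A")
            owned_a += 1
        else:
            owners.append("SP B")
            owned_b += 1
    return owners

auto_assign(3, owned_a=2, owned_b=0)  # ["SP B", "SP B", "SP A"]: counts end 3 and 2
```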

Enable Read Cache Enable read cache enables (default) or disables read caching for a
LUN. For a LUN with read caching enabled to actually use read
caching, the read cache on the SP that owns the LUN must also be
enabled. If the read cache for the SP owning the LUN is enabled, then
the memory assigned to that read cache is shared by all LUNs that are
owned by that SP and have read caching enabled.
Generally, you should enable read caching for every RAID type that
supports it. If you want faster read performance on some LUNs than
on others, you may want to disable read caching for the lower
priority LUNs.


You enable or disable read caching for a LUN when you bind it. You
can also enable or disable read caching after the LUN is bound
without affecting its data.

Enable Write Cache Enable write cache enables (default) or disables write caching for a
LUN. For a LUN with write caching enabled to actually use write
caching, the write cache for the storage system must also be enabled.
If the storage-system write cache is enabled, then the memory
assigned to the write cache is shared by all LUNs that have write
caching enabled.
Generally, you should enable write caching for every RAID type
(especially for a RAID 5 or RAID 1/0 LUN) that supports it. If you
want faster write performance on some LUNs than on others, you
may want to disable write caching for the lower priority LUNs.
You enable or disable write caching for a LUN when you bind it. You
can also enable or disable write caching after the LUN is bound,
without affecting its data.

Enable Auto Assign Enable auto assign enables or disables (default) auto assignment for a
LUN. Auto assignment controls the ownership of the LUN when an
SP fails in a storage system with two SPs.
With auto assignment enabled, if the SP that owns the LUN fails and
the server tries to access that LUN through the second SP, the second
SP assumes ownership of the LUN so the access can occur. The
second SP continues to own the LUN until the failed SP is replaced
and the storage system is powered up. Then, ownership of the LUN
returns to its default owner.
If auto assign is disabled in the previous situation, the other SP does
not assume ownership of the LUN, so the access to the LUN does not
occur.
If you are running Application Transparent Failover (ATF) software
on a UNIX server connected to the storage system, you must disable
auto assignment for all LUNs that you want the software to fail over
to the working SP when an SP fails.
You enable or disable auto assignment for a LUN when you bind it.
You can also enable or disable it after the LUN is bound, without
affecting the data on it.

LUN properties are not available for the Hot Spare RAID type because it is
simply a replacement disk for a failed disk in a LUN.


Alignment Offset Alignment offset sets the host Logical Block Address (LBA)
alignment to a stripe boundary on the LUN, resulting in a
storage-system performance improvement. Problems can arise when
a host operating system records private information at the start of a
LUN. This can interfere with the RAID stripe alignment, so that
when data I/O crosses the RAID stripe boundary, storage-system
performance is degraded.

Alignment Offset is not available for all FC4700 storage systems.
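One way to picture the offset: if the host writes some sectors of private metadata at the start of the LUN, user I/O is shifted off the stripe boundary by that amount, and an equal alignment offset cancels the shift. The model below is our illustration of the idea, not Navisphere's implementation; the 63-sector metadata figure is a hypothetical example.

```python
def aligned(host_lba, metadata_sectors, stripe_sectors, offset_sectors):
    """True if a host block lands on a RAID stripe boundary once host
    metadata and the configured alignment offset are accounted for."""
    physical = host_lba + metadata_sectors - offset_sectors
    return physical % stripe_sectors == 0

aligned(0, 63, 128, 0)   # False: 63 sectors of metadata push I/O off the stripe
aligned(0, 63, 128, 63)  # True: a matching offset restores stripe alignment
```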

Table 7-4 LUN Properties Available for Different RAID Types

Property RAID 5 RAID 3 RAID 1/0 RAID 1 RAID 0 Disk

Element size Yes Yes Yes No Yes No

Rebuild priority Yes Yes Yes Yes No No

Verify priority Yes Yes Yes Yes No No

Default owner Yes Yes Yes Yes Yes Yes

Enable read cache Yes No Yes Yes Yes Yes

Enable write cache Yes No Yes Yes Yes Yes

Enable auto assign Yes Yes Yes Yes Yes Yes

Alignment offset Yes Yes No No No No

Table 7-5 Default LUN Property Values for Different RAID Types

Property RAID 5 RAID 3 RAID 1/0 RAID 1 RAID 0 Disk

Element size 128 See note 128 N/A 128 N/A

Rebuild priority Low Low Low Low N/A N/A

Verify priority Low Low Low Low N/A N/A

Default owner SP A for one SP; Auto for two SPs. Auto distributes the LUNs as equally as possible
between the two SPs.

Enable read cache Selected N/A Selected Selected Selected Selected

Enable write cache Selected N/A Selected Selected Selected Selected

Enable auto assign Cleared Cleared Cleared Cleared Cleared Cleared

Alignment offset 0 0 N/A N/A N/A N/A

Note: The fixed RAID 3 element size is 1 sector for non-FC4700 storage systems and 16 sectors for FC4700 storage systems.


What Next? What you do next depends on whether the storage system supports
RAID Groups.
For a non-RAID Group storage system - Continue to the next
section, Creating LUNs in a Non-RAID Group Storage System on
page 7-10.
For a RAID Group storage system - Go to the section Creating RAID
Groups on page 7-20.


Creating LUNs in a Non-RAID Group Storage System


You can create either:
• Standard LUNs with disks that Manager selects and default
property values (page 7-8)
• Custom LUNs with disks that you select and property values that
you set
This section explains how to create standard LUNs (page 7-10) or
custom LUNs (page 7-14) on a non-RAID Group storage system.

! CAUTION
Before you bind a RAID 3 LUN, you must assign memory for it to
the RAID 3 memory partition. If this partition does not have
adequate memory for the LUN, you will not be able to bind it.
Changing the size of the RAID 3 memory partition reboots the
storage system.

Creating Standard LUNs on a Non-RAID Group Storage System


A standard LUN has the following:
• Disks that Manager selects
• Default property values

To Create Standard LUNs in a Non-RAID Group Storage System

If no LUNs exist on a storage system connected to a NetWare server, refer to
the Release Notice for the NetWare Navisphere Agent for information on
how to bind the first LUN.

If you are binding LUNs on a storage system connected to a Solaris
server, and neither ATF nor CDE is installed, start with step 1.
Otherwise, start with step 2.


1. If you are binding LUNs in a storage system connected to a
Solaris server and no LUNs exist on the storage system, edit the
device information in the agent configuration file on each server
connected to the storage system in one of the following ways:
a. Open the agent configuration file.
b. Enter the following line entry:
device auto auto "auto"
c. Save the agent configuration file.
d. Stop and then start the Agent.
or
a. Open the agent configuration file.
b. Add a clspn entry for each SP in the storage system.
For information on clspn entries, see the Agent manual for
UNIX environments.
c. Comment out (insert a # before) any device entries for the SPs
in the storage system.
d. Save the agent configuration file.
e. Stop and then start the Agent.
2. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
3. Right-click the icon for the storage system on which you want to
bind LUNs, and then click Bind LUN.
The Bind LUN dialog box opens, similar to the following.

4. In the RAID Type list, click the RAID type for the new LUN.


Only supported RAID types for the storage system are available.
You cannot change the RAID type without unbinding the LUN
(losing its data), and then rebinding it with a new ID.
5. In the LUN ID list, click the ID for the new LUN.
Each LUN in a storage system has a unique LUN ID, which is a
hexadecimal number. The default ID for the LUN is the smallest
available one. You cannot change the ID without unbinding the
LUN (and thus losing its data), and then binding a new LUN with
the new ID.
6. In the Number of Disks list, click the number of disks to include
in each LUN.
Only numbers supported for the selected RAID type are
available.
7. Click Apply to bind the LUN.
8. In the dialog box that opens, click Yes to confirm the bind
operation.
A LUN icon for the new LUN appears in the Storage tree under
the icon for the SP that owns it.

Binding LUNs may take as long as two hours. Some storage systems
have disks that have been preprocessed at the factory to speed up
binding. You can determine the progress of a bind operation from
Percent Bound on the LUN Properties dialog box (page 11-24).

9. If you want to bind another LUN on the storage system, repeat
steps 3 through 8.
10. When you have bound all the LUNs you want on the storage
system, click Close.
11. Reboot each server connected to the storage system to make the
LUNs in the storage system visible to the server.
All the LUNs you created have read caching enabled; however, they
will use read caching only if the read cache is enabled for the SP that
owns them (page 6-21). If the storage system supports write caching,
all the LUNs you create have write caching enabled. These LUNs will
use write caching only if storage-system write caching is enabled
(page 6-21).
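The first configuration-file option in step 1 can be scripted. This sketch appends the catch-all device line; the commented-out path is hypothetical (use the agent configuration file location for your installation), and you still need to stop and start the Agent afterward.

```python
def add_auto_device(config_path):
    """Append the catch-all device entry to the agent configuration file."""
    with open(config_path, "a") as f:
        f.write('device auto auto "auto"\n')

# add_auto_device("/etc/Navisphere/agent.config")  # hypothetical path
```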


What Next?
Go to the section Verifying or Editing Device Information in the Host
Agent Configuration File (Non-FC4700 storage systems) on page 7-38.

Creating Custom LUNs on a Non-RAID Group Storage System


A custom LUN has disks that you select and property values that you
set.
Before you start creating custom LUNs, read the restrictions and
recommendations in the following table.

Table 7-6 Restrictions and Recommendations for Creating LUNs

RAID Type Recommendations

RAID 5 Binding five disks uses disk space efficiently. In a C-series storage system,
selecting disks on different internal SCSI buses provides the greatest data
integrity.

RAID 3 In a C-series storage system, selecting disks on different internal SCSI buses
provides the greatest data integrity.

RAID 1/0 Disks are paired into mirrored images in the order in which you select them.
The first and second disks you select are a pair of mirrored images; the third
and fourth disks you select are another pair of mirrored images; and so on.
For highest data integrity in a C-series storage system, the first disk you select
in each pair should be on a different internal SCSI bus than the second disk
you select.

RAID 0 In a C-series storage system, selecting disks on different internal SCSI buses
provides the best performance. (RAID 0 has no parity or mirroring, so it offers no data
integrity if a disk fails.)

RAID 1 None

Disk None

Hot Spare None
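The RAID 1/0 pairing rule from the table (disks are mirrored in the order you select them) can be sketched as follows; the function is illustrative, not a Navisphere API.

```python
def mirror_pairs(selected_disks):
    """Pair disks into mirrored images in selection order: 1st+2nd,
    3rd+4th, and so on. Assumes an even number of disks (4-16)."""
    return [(selected_disks[i], selected_disks[i + 1])
            for i in range(0, len(selected_disks), 2)]

mirror_pairs(["A0", "B0", "A1", "B1"])
# [("A0", "B0"), ("A1", "B1")]: for best integrity in a C-series system,
# each pair's two disks should sit on different internal SCSI buses.
```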


To Create Custom LUNs in a Non-RAID Group Storage System

If no LUNs exist on a storage system connected to a NetWare server, refer to
the Release Notice for the NetWare Navisphere Agent for information on
how to bind the first LUN.

If you are binding LUNs on a storage system connected to a Solaris
server, and neither CDE nor ATF is installed, start with step 1.
Otherwise, start with step 2.
1. If you are binding LUNs in a storage system connected to a
Solaris server and no LUNs exist on the storage system, edit the
device information in the agent configuration file on each server
connected to the storage system in one of the following ways:
a. Open the agent configuration file.
b. Enter the following line entry:
device auto auto "auto"
c. Save the agent configuration file.
d. Stop and then start the Agent.
or
a. Open the agent configuration file.
b. Add a clspn entry for each SP in the storage system.
For information on clspn entries, see the Agent manual for
UNIX environments.
c. Comment out (insert a # before) any device entries for the SPs
in the storage system.
d. Save the agent configuration file.
e. Stop and then start the Agent.
2. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
3. Right-click the icon for the storage system on which you want to
bind LUNs, and then click Bind LUN.


The Bind LUN dialog box opens, similar to the following.

4. Click Advanced.
The Advanced Bind LUN dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.


5. In the RAID Type list, click the RAID type for the new LUN.
Only supported RAID types for the storage system are available.
You cannot change the RAID type without unbinding the LUN
(thereby losing its data), and then rebinding it with a new ID.

6. From Choose Disks under Disk Selection, either click
Automatically to have Manager choose the disks for the new
LUNs, or click Manually to choose the disks for the new LUN
yourself.
Automatically lets you create multiple LUNs with the same
RAID type and same number of disks in one operation. Manually
lets you create only one LUN.
7. If Automatically is selected, follow these steps (if Manually is
selected, proceed to step 8):
a. In the Number of Disks list, click the number of disks for each
new LUN.
Only numbers supported for the selected RAID type are
available.
b. In the LUN ID list, click the ID for the first new LUN.
Each LUN in a storage system has a unique LUN ID, which is
a hexadecimal number. The default ID for the first new LUN is
the smallest available one; for the next new LUN, it is the next
smallest available; and so on.
You cannot change the ID without unbinding the LUN
(thereby losing its data), and then binding a new LUN with
the new ID.
c. In the Number of LUNs to Bind list, click the number of
LUNs you want to create with the selected RAID type and
selected number of disks.
d. Go to step 9.
8. If Manually is selected:
a. Click Select.
A Disk Selection dialog box opens that is similar to the
following. For information on the properties in the dialog box,
click Help.


b. For an FC-series storage system, if the disks you want in the
LUN are in just one enclosure, then in the Select from list,
click that enclosure.

All disks in a LUN must have the same physical capacity to fully use
the storage space on the disks. The physical capacity of a disk bound
as a hot spare must be at least as great as the physical capacity of the
largest disk module in any LUN on the storage system.

c. For each disk under Selected Disks that you do not want in
the LUN, click the disk, and then click ←.
The disk moves into Available Disks.
d. For each disk under Available Disks that you want in the
LUN, click the disk, and then click →.
The disk moves into Selected Disks.
e. When Selected Disks contains all the disks you want in the
LUN, click OK.
f. In the LUN ID list, click the ID for the new LUN.
Each LUN in a storage system has a unique LUN ID, which is
a hexadecimal number. The default ID is the smallest available
one. You cannot change the ID without unbinding it (thereby
losing its data), and then binding a new LUN with the new ID.
9. Under LUN Properties, change any of the user-defined properties
that you want to have new values:


a. In the Element Size list, click the desired element size.


b. In the Rebuild Priority list, click the desired rebuild priority.
c. In the Verify Priority list, click the desired verify priority.
d. Select the Enable Read Cache check box to enable read
caching for the new LUNs, or clear it to disable read caching
for them.
A LUN with read caching enabled uses default prefetch
property values. You can change a LUN’s prefetch property
values later using the Cache tab in its LUN Properties dialog
box.
e. Select the Enable Write Cache check box to enable write
caching for the new LUNs, or clear it to disable write caching
for the them.
f. Select the Enable Auto Assign check box to enable auto assign
for the new LUNs, or clear it to disable auto assign for them.
If ATF is installed, disable auto assign. Enable Auto Assign is
not available for storage systems with only one SP.
g. Under Default Owner, click SP A, SP B, or Auto to assign the
ownership of the new LUNs at storage-system powerup.
If the storage system has only one SP, Auto assigns SP A as the
owner of all new LUNs. If the storage system has two SPs, it
distributes the new LUNs between the two SPs. In so doing, it
takes into account any existing LUNs owned by the SPs. As a
result, either both SPs end up with the same number of LUNs,
or else one SP ends up with one more LUN than the other.
10. Click Apply to bind the single or multiple LUNs.
11. In the dialog box that opens, click Yes to confirm the bind
operation.
A LUN icon for each new LUN appears in the Storage tree under
the icon for the SP that owns it.

Binding LUNs may take as long as two hours. Some storage systems
have disks that have been preprocessed at the factory to speed up
binding. You can determine the progress of a bind operation from
Percent Bound on the LUN Properties dialog box (page 11-24).


12. If you want additional LUNs on the storage system, repeat steps 3
through 11.
13. When you have bound all the LUNs you want on the storage
system, click Close.
14. Reboot each server connected to the storage system to make the
LUNs in the storage system visible to the server.
A LUN bound with read caching enabled uses caching only if the
read cache is enabled for the SP that owns it (page 6-21). Similarly, a
LUN bound with write caching enabled uses caching only if
storage-system write caching is enabled (page 6-21).

What Next?
Go to the section Verifying or Editing Device Information in the Host
Agent Configuration File (Non-FC4700 storage systems) on page 7-38.


Creating RAID Groups


Before you create LUNs in a RAID Group storage system, you must
first create the RAID Group on which you will bind the LUN. You can
create either:
• Standard RAID Groups with disks and default property values
that Manager selects (page 7-22)
• Custom RAID Groups with disks and property values that you
select
This section describes RAID Groups, and then explains how to create
standard RAID Groups (page 7-22) or custom RAID Groups (page
7-23).

RAID Groups A RAID Group is a set of disks on which you bind one or more LUNs.
Each LUN you bind on a RAID Group is distributed equally across
the disks in the Group.
The RAID Group supports the RAID type of the first LUN you bind
on it. Any other LUNs that you bind on it have the same RAID type.
The number of disks you can have in a RAID Group is determined by
the number of disks available for the RAID type of the LUNs that you
will bind on it (page 7-3).
You can expand a RAID Group by adding one or more disks to it.
Expanding a RAID Group does not automatically increase the user
capacity of already bound LUNs. Instead, it distributes the capacity
of the LUNs equally across all the disks in the RAID Group, freeing
space for additional LUNs.
If you expand a RAID Group that has only one bound LUN with a
user capacity equal to the user capacity of the RAID Group, you can
choose to have the user capacity of the LUN equal the user capacity
of the expanded Group. Whether you can actually use the increased
user capacity of the LUN depends on the operating system running
on the servers connected to the storage system.
If you unbind and bind LUNs on a RAID Group, you may create gaps in the contiguous space across the Group’s disks. This activity, called fragmenting the RAID Group, leaves you with less space for new LUNs. You can defragment a RAID Group to compress these gaps and provide more contiguous free space across the disks.

Defragmentation may also shorten file access time, since the disk
read/write heads need to travel less distance to reach data.
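The fragmentation and defragmentation behavior described above can be illustrated with a small model (a simplified sketch, not Manager’s actual algorithm): each LUN occupies a contiguous extent of the Group’s space, unbinding a LUN leaves a gap, and defragmenting compacts the surviving extents so all free space becomes contiguous.

```python
# Simplified model of RAID Group free space (illustration only --
# not the actual Navisphere implementation).

def largest_contiguous_free(capacity, luns):
    """luns: list of (start, size) extents, sorted by start.
    Returns the largest contiguous free extent usable for a new LUN."""
    gaps, cursor = [], 0
    for start, size in sorted(luns):
        gaps.append(start - cursor)   # gap before this LUN
        cursor = start + size
    gaps.append(capacity - cursor)    # tail gap after the last LUN
    return max(gaps)

def defragment(luns):
    """Compact all LUN extents toward the start of the Group."""
    packed, cursor = [], 0
    for _, size in sorted(luns):
        packed.append((cursor, size))
        cursor += size
    return packed

# 36-Gbyte Group that held three 8-Gbyte LUNs; the middle one was
# unbound, leaving an 8-Gbyte gap between the survivors.
luns = [(0, 8), (16, 8)]
print(largest_contiguous_free(36, luns))              # 12 -- largest single gap
print(largest_contiguous_free(36, defragment(luns)))  # 20 -- all free space contiguous
```

Total free space is 20 Gbytes in both cases; defragmenting changes only how much of it is contiguous, and therefore how large a new LUN can be.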
When a disk in a RAID Group is replaced or fails, the rebuild
operation reconstructs the data on the replacement disk or hot spare
one LUN at a time, starting with the first LUN.

RAID Group Properties
A RAID Group has the following properties:


Expansion/defragmentation priority - Determines how fast expansion and defragmentation occur. Values are Low, Medium, or High.
Automatically destroy - Enables or disables (default) the automatic
dissolution of the RAID Group when the last LUN on it is unbound.

Table 7-7 Default RAID Group Property Values

Property                               Value
Expansion/Defragmentation Priority     Medium
Automatically Destroy                  Cleared

Table 7-8 Maximum Number of LUNs Per RAID Group

LUN RAID Type                          Maximum LUNs Per RAID Group
RAID 5, RAID 1/0, RAID 1, RAID 0       32
RAID 3, Disk, Hot Spare                1

What Next?
Your next action depends on whether you want to create standard or custom RAID Groups.
Standard RAID Groups - Continue on to the next section, Creating
Standard RAID Groups.
Custom RAID Groups - Go to the section Creating Custom RAID
Groups on page 7-23.

Creating Standard RAID Groups


Standard RAID Groups contain the following:
• Disks selected by Manager
• Default property values

To Create Standard RAID Groups
1. In the Enterprise Storage dialog box, click the Equipment or Storage tab.
2. Right-click the icon for the storage system on which you want to
bind LUNs, and then click Create RAID Group.
The Create RAID Group dialog box opens (similar to the
following).

3. In the RAID Group ID list, click the ID for the new RAID Group.
Each RAID Group in a storage system has a unique ID, which is a
hexadecimal number. The default ID is the smallest available
number.
If you change the ID, the RAID Group is destroyed; thus, its
LUNs are unbound and lose all their data. You then recreate the
RAID Group with the new ID.
4. In the Support RAID Type list, click the RAID type for the new
RAID Group.
Only supported RAID types for the storage system are available.
5. In the Number of Disks list, click the number for the new RAID
Group.
Only numbers supported for the RAID type are available.
6. Click Apply to create the RAID Group.

7. In the dialog box that opens, click Yes to confirm the RAID Group
creation operation.
An unbound RAID Group icon for the new RAID Group appears
in the Storage tree under the RAID Groups icon.
8. If you want another RAID Group on the storage system, repeat
steps 2 through 7.
9. When you have created all the RAID Groups you want on the
storage system, click Close.

What Next?
When you have created the RAID Groups you want, go to the section
Creating LUNs on RAID Groups on page 7-27 to create one or more
LUNs on each of them.

Creating Custom RAID Groups


Custom RAID Groups have:
• Disks that you select
• Property values that you set

To Create Custom RAID Groups
1. In the Enterprise Storage dialog box, click the Equipment or Storage tab.
2. Right-click the icon for the storage system on which you want to
create the RAID Group, and then click Create RAID Group.
The Create RAID Group dialog box opens (similar to the
following).

3. Click Advanced.

The advanced Create RAID Group dialog box opens, similar to the following. For information on the properties in the dialog box, click Help.

4. In the RAID Group ID list, click the ID for the new RAID Group.
Each RAID Group in a storage system has a unique ID, which is a
hexadecimal number. The default ID is the smallest available
number.
If you change the ID, the RAID Group is destroyed; thus, its
LUNs are unbound and lose all their data. You then recreate the
RAID Group with the new ID.
5. From Choose Disks under Disks, either click Automatically to
have Manager choose the disks for the new RAID Group or click
Manually to choose the disks for the new LUN yourself.
6. If Automatically is selected, follow these steps (if Manually is selected, proceed to step 7):
a. In the Support RAID Type list, click the RAID type for the
new RAID Group.

b. In the Number of Disks list, click the number for the new
RAID Group.
Only numbers supported for the RAID type are available.
c. Go to step 8.
7. If Manually is selected:
a. In the Support RAID Type list, click the RAID type for the
new RAID Group.
b. Under Manual Disk Selection, click Select.
A Disk Selection dialog box opens, similar to the following.

c. For an FC-series storage system, if the disks you want in the RAID Group are in just one enclosure, click that enclosure in the Select From list.

All disks in a RAID Group must have the same physical capacity to
fully use the storage space on the disks. The physical capacity of a
RAID Group that supports the Hot Spare RAID type must be at least
as great as the physical capacity of the largest disk module in any
LUN on the storage system.

d. For each disk under Selected Disks that you do not want in
the RAID Group, click the disk, and then click ←.
The disk moves into Available Disks.

e. For each disk under Available Disks that you want in the
RAID Group, click the disk, and then click →.
The disk moves into Selected Disks.
f. When Selected Disks contains all the disks you want in the
RAID Group, click OK.
8. Under RAID Group Parameters, change any of the user-defined
properties for which you want to change the values:
a. In the Expansion/Defragmentation Priority list, click the
priority for the new RAID Group.
b. Select the Automatically Destroy check box to enable
automatic dissolution of the RAID Group when the last LUN
is unbound, or clear the check box to disable automatic
dissolution.
9. Click Apply to create the RAID Groups.
10. In the dialog box that opens, click Yes to confirm the RAID Group
creation operation.
An unbound RAID Group icon for each new RAID Group
appears in the Storage tree under the RAID Groups icon.
11. If you want additional RAID Groups on the storage system,
repeat steps 2 through 10.
12. When you have created all the RAID Groups you want on the
storage system, click Close.

What Next?
When you have created the RAID Groups you want, continue to the
next section to create one or more LUNs on each of them.

Creating LUNs on RAID Groups


When you bind a LUN on a RAID Group, you specify how much of
the Group’s user space (contiguous free space) you want the LUN to
use. The LUN is distributed equally across all the disks in the RAID
Group.
For example, a RAID Group of RAID 5 type with five 9-Gbyte disks
provides 36 Gbytes of user space and 9 Gbytes of parity data. If you
bind one 2-Gbyte LUN, you will have 34 Gbytes left for additional
LUNs. You could bind 17 more 2-Gbyte LUNs using all the space in
the RAID Group, or you could bind four more 2-Gbyte LUNs and
four 5-Gbyte LUNs, leaving 6 Gbytes for future expansion.
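The capacity arithmetic in the example above can be checked with a few lines (a sketch of the general rule, assuming same-size disks: an n-disk RAID 5 Group yields n-1 disks’ worth of user space, with one disk’s worth of parity distributed across the Group):

```python
def raid5_user_space(num_disks, disk_gb):
    # RAID 5 stores one disk's worth of parity spread across the Group,
    # so user space is (num_disks - 1) * disk capacity.
    return (num_disks - 1) * disk_gb

user = raid5_user_space(5, 9)        # five 9-Gbyte disks
print(user)                          # 36 Gbytes of user space

remaining = user - 2                 # after binding one 2-Gbyte LUN
print(remaining)                     # 34 Gbytes left for additional LUNs

# four more 2-Gbyte LUNs plus four 5-Gbyte LUNs:
print(remaining - (4 * 2 + 4 * 5))   # 6 Gbytes left for future expansion
```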
The following restrictions apply to RAID Groups and the LUNs on
them:
• 32 LUNs is the maximum for the following RAID types:
• RAID 5
• RAID 1/0
• RAID 1
• RAID 0
• 1 LUN is the maximum for the following RAID types:
• RAID 3
• Disk
• Hot Spare
• All LUNs in a RAID Group have the same RAID type
• Each LUN in a RAID Group can have a different element size
where applicable
• Different SPs can own different LUNs in the same RAID Group
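The per-Group limits above (and Table 7-8) can be captured as a simple lookup; the sketch below is an illustration of the rules, not part of Manager:

```python
# Maximum LUNs per RAID Group by RAID type (from the restrictions above).
MAX_LUNS_PER_GROUP = {
    "RAID 5": 32, "RAID 1/0": 32, "RAID 1": 32, "RAID 0": 32,
    "RAID 3": 1, "Disk": 1, "Hot Spare": 1,
}

def can_bind(raid_type, existing_luns):
    """Return True if another LUN can be bound on a Group of this type."""
    return existing_luns < MAX_LUNS_PER_GROUP[raid_type]

print(can_bind("RAID 5", 31))   # True  -- one slot left out of 32
print(can_bind("RAID 3", 1))    # False -- a RAID 3 Group holds a single LUN
```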

! CAUTION
Before you bind a RAID 3 LUN do the following:

For non-FC4700 storage systems:


Assign at least 2 Mbytes per RAID 3 LUN to the RAID 3 memory
partition. If this partition does not have adequate memory for the
LUN, you cannot bind it. Changing the size of the RAID 3 memory
partition reboots the storage system. Rebooting restarts the SPs in
the storage system, which terminates all outstanding I/O to the
storage system.

For FC4700 storage systems that support RAID 3 LUNs:


Allocating the RAID 3 memory partition size is not required for RAID 3 RAID Groups and LUNs. (The RAID 3 memory partition appears dimmed and is unavailable.) If there will be a large
amount of sequential read access to this RAID 3 LUN, you may
want to enable read caching with prefetching for the LUN.

On RAID Groups, you can create standard LUNs with default property values or custom LUNs with property values you set. The rest of this section explains how to create standard LUNs (page 7-28) or custom LUNs (page 7-32) on a RAID Group storage system.

Creating Standard LUNs on a RAID Group


A standard LUN has default property values.

If no LUNs exist on a storage system connected to a NetWare server, refer to the Release Notice for the NetWare Navisphere Agent for information on how to bind the first LUN.

To Create Standard LUNs on a RAID Group


If you are binding LUNs on a storage system connected to a Solaris
server, and neither CDE nor ATF is installed, start with step 1.
Otherwise, start with step 2.

1. If you are binding LUNs in a storage system connected to a Solaris server and no LUNs exist on the storage system, edit the
device information in the agent configuration file on each server
connected to the storage system in one of the following ways:
a. Open the agent configuration file.

b. Enter the following line entry:


device auto auto "auto"
c. Save the agent configuration file.
d. Stop and then start the Agent.
or
a. Open the agent configuration file.
b. Add a clspn entry for each SP in the storage system.
For information on clspn entries, see the Agent manual for
UNIX environments.
c. Comment out (insert a # before) any device entries for the SPs
in the storage system.
d. Save the agent configuration file.
e. Stop and then start the Agent.
2. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
3. Right-click the icon for the storage system on which you want to
bind LUNs, and then click Bind LUN.
A Bind LUN (RAID Groups) dialog box opens, similar to the
following.

4. In RAID Type, select the RAID type for the new LUN.

5. In the LUN ID list, click the ID for the new LUN.


Each LUN in a storage system has a unique ID, which is a
hexadecimal number. The default ID is the smallest available
number.

If you change the ID, the LUN is unbound and loses all its data.
You then bind a new LUN with the new ID.
6. In the RAID Group list, click the ID of the RAID Group on which
you want to bind the new LUNs.
The list displays only those RAID Groups available for the
selected RAID type. The RAID Group IDs range from 0 through
243; the RAID Group ID is assigned when the RAID Group is
created.
If you change the ID, the RAID Group is destroyed; thus, its
LUNs are unbound and lose all their data. You then recreate the
RAID Group with the new ID.

If the storage system does not have the RAID Group you want, you can
create one by clicking New, which opens the Create RAID Group dialog
box (page 7-22 for a standard RAID Group; page 7-23 for a custom RAID
Group).

7. In the LUN Size list, do one of the following:


• Click either MB to select Mbytes or click GB to select Gbytes
as the unit of measure for the user space of the new LUN, and
then in LUN Size, select the number of Mbytes or Gbytes for
the new LUN.
• Click Block Count, if available, to select blocks (512 bytes) as
the unit of measure, and then in LUN Size, type the number of
blocks for the new LUN.

Block Count is not available for all FC4700 storage systems. Refer to
the Manager release notes.

The LUN size can be any number up to the entire amount of contiguous user free space available on the RAID Group. Only the sizes available for the selected RAID Group are listed.
If you first unbind LUNs, and then bind new LUNs on the RAID
Group, the group may end up with free space that is unavailable
because it is fragmented, that is, not contiguous. You can verify
the amount of free space and, if some is not contiguous,
defragment it as described in the section Defragmenting a RAID
Group on page 12-30.

The LUN size property is unavailable for a RAID Group that supports
the RAID 3, Disk, or Hot Spare RAID type because each of these LUNs
uses all the disk space on the RAID Group.

All disks in a LUN must have the same capacity to fully use the storage
space on the disks. The capacity of a disk bound as a Hot Spare must be
at least as great as the capacity of the largest disk module in any LUN on
the storage system.

8. Click Apply to create the LUN.


9. In the dialog box that opens, click Yes to confirm the bind
operation.
A LUN icon for each LUN appears in the Storage tree under the
icon for its parent RAID Group.

Binding LUNs can take as long as two hours. Some storage systems have
disks that have been preprocessed at the factory to speed up binding.
You can determine the progress of a bind operation from Percent Bound
on the LUN Properties dialog box (page 11-24).

10. If you want to create another LUN on a RAID Group, repeat steps
3 through 9.
11. When you have created all the LUNs you want on the storage
system, click Close.
All the LUNs you create have read caching enabled. However, they
can only use read caching if the read cache is enabled for the SP that
owns them (page 6-21). If the storage system supports write caching,
all LUNs that you create have write caching enabled. However, they
can only use write caching if storage-system write caching is enabled
(page 6-21).

What Next?
What you do after you have created all the LUNs you want depends
on whether the storage system is shared or unshared.
Unshared storage system - Reboot each server connected to the
storage system to make the LUNs in the storage system visible to the
server, and then go to the section Verifying or Editing Device
Information in the Host Agent Configuration File (Non-FC4700 storage
systems) on page 7-38.
Shared storage system - Go to Chapter 8, Setting Up Access Logix, to
create Storage Groups containing the LUNs you bound.

Creating Custom LUNs on a RAID Group


A custom LUN has property values that you set.

If no LUNs exist on a storage system connected to a NetWare server, refer to the Release Notice for the NetWare Navisphere Agent for information on how to bind the first LUN.

To Create Custom LUNs on a RAID Group


If you are binding LUNs on a storage system connected to a Solaris
server, and neither CDE nor ATF is installed, start with step 1.
Otherwise, start with step 2.
1. If you are binding LUNs in a storage system connected to a
Solaris server and no LUNs exist on the storage system, edit the
device information in the agent configuration file on each server
connected to the storage system in one of the following ways:
a. Open the agent configuration file.
b. Enter the following line entry:
device auto auto "auto"
c. Save the agent configuration file.
d. Stop and then start the Agent.
or
a. Open the agent configuration file.
b. Add a clspn entry for each SP in the storage system.
For information on clspn entries, see the Agent manual for
UNIX environments.
c. Comment out (insert a # before) any device entries for the SPs
in the storage system.
d. Save the agent configuration file.
e. Stop and then start the Agent.
2. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.

3. Right-click the icon for the storage system on which you want to
bind LUNs, and then click Bind LUN.
A Bind LUN (RAID Groups) dialog box opens, which is similar
to the following.

4. Click Advanced.
An Advanced Bind LUN (RAID Groups) dialog box opens,
which is similar to the following.

5. In RAID Type, select the RAID type for the new LUN.
This sets the parent RAID type for the new LUN.

6. Under RAID Group Selection, in the RAID Group for new LUN
list, click the ID of the RAID Group on which you want to bind
the new LUNs.
The list displays only those RAID Groups available for the
selected RAID type. The RAID Group IDs range from 0 through
243; the RAID Group ID is assigned when the RAID Group is
created.
Each RAID Group in a storage system has a unique ID, which is a
hexadecimal number. The default ID is the smallest available
number.
If you change the ID, the RAID Group is destroyed; thus, its
LUNs are unbound and lose all their data. You then recreate the
RAID Group with the new ID.

If the storage system does not have the RAID Group you want, you can
create one by clicking New, which opens the Create RAID Group dialog
box (page 7-22 for a standard RAID Group; page 7-23 for a custom RAID
Group).

7. Under LUN Properties, in the LUN ID list, click the ID for the
new LUN.
Each LUN in a storage system has a unique ID, which is a
hexadecimal number. The default ID is the smallest available
number.
If you change the ID, the LUN is unbound and loses all its data.
You then bind a new LUN with the new ID.
8. Under LUN Properties, change any of the user-defined properties
that you want to have new values:
a. In the Element Size list, click the new element size.
b. In the Rebuild Priority list, click the new rebuild priority.
c. In the Verify Priority list, click the new verify priority.
d. Select the Enable Read Cache check box to enable read
caching for the new LUNs, or clear it to disable read caching
for them.

A LUN with read caching enabled uses default prefetch property values. You can change a LUN’s prefetch property values using the Cache tab in its LUN Properties dialog box (page 11-24).
e. Select the Enable Write Cache check box to enable write
caching for the new LUNs, or clear it to disable write caching
for them.
f. Select the Enable Auto Assign check box to enable auto assign
for the new LUNs, or clear it to disable auto assign for them.
If ATF is installed, disable auto assign. Enable Auto Assign is
not available for storage systems with only one SP.
g. Under Default Owner, click SP A, SP B, or Auto to assign the
ownership of the new LUNs at storage system powerup.
If the storage system has only one SP, Auto assigns SP A as the
owner of all new LUNs. If the storage system has two SPs, it
distributes the new LUNs between the two SPs. In so doing, it
takes into account any existing LUNs owned by the SPs. As a
result, either both SPs end up with the same number of LUNs,
or else one SP ends up with one more LUN than the other.
h. In the Number of LUNs to Bind list, click the number of
LUNs you want to create with the selected RAID type and
properties.
9. Under LUN Properties, if the LUN Size list is available,
a. Click MB to select Mbytes, click GB to select Gbytes or click
Block Count to select blocks as the unit of measure for the
user space of the new LUN.

Block Count is not available for all FC4700 storage systems. Refer to
the Manager release notes.

b. In the LUN Size list, select the number of Mbytes, Gbytes, or Blocks of user space for the new LUN.
The LUN size can be any number up to the entire amount of contiguous user free space available on the RAID Group. Only sizes available for the RAID Group are listed.
If you first unbind LUNs, and then bind new LUNs on the RAID
Group, the group may end up with free space that is unavailable
because it is fragmented, that is, not contiguous. You can check

the amount of free space and, if some is not contiguous, defragment it as described in the section Defragmenting a RAID Group on page 12-30.

The LUN size property is unavailable for a RAID Group that supports
the RAID 3, Disk, or Hot Spare RAID type because each of these LUNs
uses all the disk space on the RAID Group.

All disks in a LUN must have the same capacity to fully use the storage
space on the disks. The capacity of a disk bound as a Hot Spare must be
at least as great as the capacity of the largest disk module in any LUN on
the storage system.

10. Click Apply to create the LUNs.


11. In the dialog box that opens, click Yes to confirm the bind
operation.
A LUN icon for each new LUN appears in the Storage tree under
the icon for its parent RAID Group.

Binding LUNs may take as long as two hours. Some storage systems
have disks that have been preprocessed at the factory to speed up
binding. You can determine the progress of a bind operation from
Percent Bound on the LUN Properties dialog box (page 11-24).

12. If you want to create additional LUNs on a RAID Group, repeat steps 3 through 11.
13. When you have created all the LUNs you want on the storage
system, click Close.

A LUN that is bound with read caching enabled uses caching only if the read
cache is enabled for the SP that owns it (page 6-21). Similarly, a LUN bound
with write caching enabled uses caching only if storage-system write caching
is enabled (page 6-21).

What Next?
What you do after you have created all the LUNs you want depends
on whether the storage system is unshared or shared.
Unshared storage systems - Reboot each server connected to the
storage system to make the LUNs in the storage system visible to the
server, and then go to the section Verifying or Editing Device
Information in the Host Agent Configuration File (Non-FC4700 storage
systems) (page 7-38).
Shared storage system - Go to Chapter 8, Setting Up Access Logix, to create Storage Groups containing the LUNs you bound.

Verifying or Editing Device Information in the Host Agent Configuration File (Non-FC4700 storage systems)
Whenever you create one or more LUNs on a storage system, you should verify or edit the device information in the Host Agent configuration file on the server. This section describes how to do this for the following servers:
• AIX server (this page)
• HP-UX server (this page)
• Linux server (this page)
• NetWare server (this page)
• Solaris server (page 7-39)
• Windows server (page 7-40).
For information on editing the Host Agent configuration file, see
Navisphere Server Software Administrator’s or User Guide for the
operating system.

Verifying or Editing Host Agent Device Information on an AIX, HP-UX, Linux, or NetWare Server
1. Open the agent configuration file.
2. If the file contains a device entry for an existing LUN on each SP
in the storage system, do not change anything.
3. If the file does not contain a device entry for an existing LUN on
each SP in the storage system, add one for each SP.
4. If you did not change anything, just close the file; if you changed
the file, save it.
5. Stop and then start the Agent.

What Next?
AIX, HP-UX, Linux, or NetWare on the server views the LUNs in a
storage system as identical to standard single disk drives. For AIX,
HP-UX, Linux, or NetWare to use the LUNs, you must make them
available to the operating system as described in the Navisphere
Server Software Administrator’s or User Guide for the operating
system.

Verifying or Editing Agent Device Information on a Solaris Server


How you edit the agent configuration file depends on whether the
storage system had any bound LUNs before you created LUNs.
Follow the appropriate procedure below for your situation.

CDE or ATF is Not Installed and There Are No Bound LUNs Before You Created LUNs
If you are binding LUNs in a storage system connected to a Solaris
server and no LUNs exist on the storage system, edit the device
information in the agent configuration file on each server connected
to the storage system in one of the following ways:
1. Open the agent configuration file.
2. Enter the following line:
device auto auto "auto"
3. Save the agent configuration file.
4. Stop and then start the Agent.
or
1. Open the agent configuration file.
2. Add a clspn entry for each SP in the storage system.
For information on clspn entries, see the Agent manual for UNIX
environments.
3. Comment out (insert a # before) any device entries for the SPs in
the storage system.
4. Save the agent configuration file.
5. Stop and then start the Agent.
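The first procedure above (adding the device auto auto "auto" entry) can be scripted. The sketch below is an illustration only: the configuration-file path is an assumption, so check where the Agent is installed on your server, and remember to stop and then start the Agent afterward.

```python
# Sketch only: the path below is a hypothetical example, not a documented
# location -- adjust it for your Agent installation.
AGENT_CONFIG = "/etc/Navisphere/agent.config"   # assumed path

def ensure_auto_device(path=AGENT_CONFIG):
    """Append the auto-detect device entry unless one is already present."""
    with open(path) as f:
        lines = f.read().splitlines()
    entry = 'device auto auto "auto"'
    if entry not in (line.strip() for line in lines):
        lines.append(entry)
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")
    # Stop and then start the Agent afterward so it rereads the file.
```

Calling the function twice is safe; the entry is appended only once.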

What Next?
Solaris on the server views the LUNs in a storage system as identical
to standard single disk drives. For Solaris to use the LUNs, you must
make them available to the operating system as described in the
Navisphere Server Software Administrator’s or User Guide for
Solaris.

Bound LUNs Before You Created LUNs


1. Open the agent configuration file.
2. Comment out (insert a # sign before) any clspn device entries for SPs in the storage system.
3. If the file contains a device entry for an existing LUN on each SP
in the storage system, do not change anything.
4. If the file does not contain a device entry for an existing LUN on
each SP in the storage system, add one for each SP.
5. If you did not change anything, just close the file; if you changed
the file, save the file.
6. Stop and then start the Agent.

What Next?
Solaris on the server views the LUNs in a storage system as identical
to standard single disk drives. For Solaris to use the LUNs, you must
make them available to the operating system as described in the
Navisphere Server Software Administrator’s or User Guide for
Solaris.

Verifying or Editing Agent Device Information on a Windows Server


1. On the server, start the Agent Configurator.
The Agent Configurator window opens.
2. On the window’s toolbar, click Clear Device List.
3. On the window’s toolbar, click Auto Detect Array.
4. Save the agent configuration file.
5. When you are asked if you want to restart the Agent, click Yes.

What Next?
Windows NT or Windows 2000 on the server views the LUNs in the
storage system as identical to standard single disk drives. For
Windows NT or Windows 2000 to use the LUNs, you must make
them available to the operating system as described in the
Navisphere Server Software Administrator’s or User Guide for
Windows.

8
Setting Up Access Logix

The Access Logix™ option lets you do the following:


• Enable configuration access on FC4300/4500 storage systems.
This feature lets you restrict the server ports that can send
configuration commands to the storage system. See Setting
Storage-System Configuration Access Properties (non-FC4700 Storage
Systems) on page 6-2.
• Enable data access on shared storage systems. This feature lets
you create Storage Groups on FC4300/4500 and FC4700 storage
systems. A Storage Group is a collection of one or more LUNs that
you select, and to which you can connect one or more servers. A
server can access only the LUNs in the Storage Group to which it
is connected. In other words, the server sees the Storage Group to
which it is connected as the entire storage system.

A storage system with the Access Logix option is a shared storage
system. A server can be connected to only one Storage Group per storage
system.

This chapter describes the following:


• Setting the Storage-System Data Access Property ........................8-2
• Storage Group Properties..................................................................8-4
• Creating Storage Groups...................................................................8-7
• Verifying Server Connections to a Storage Group ...................... 8-11
• Verifying or Editing Device Information in the Host Agent
Configuration File (Non-FC4700 storage systems) .....................8-15

Setting the Storage-System Data Access Property


The storage-system data access property - access control enabled - is
available for shared storage systems only. This section describes data
access control and Storage Groups and how to enable data access
control for a storage system.

Data Access Control and Storage Groups


A shared storage system provides a data access control feature that
lets you restrict the servers that can read and write to specific LUNs
on the storage system. This feature is implemented using Storage
Groups.
A Storage Group is a collection of one or more LUNs to which you
connect one or more servers. A server can access only those LUNs in
the Storage Groups to which it is connected. In other words, a server
sees the Storage Groups to which it is connected as the entire storage
system.
If you do not enable data access control for a storage system, the
following occurs:
• You cannot create Storage Groups.
• Each server connected to the storage system has access to all
LUNs on the storage system.
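The access-control behavior described above can be modeled in a few lines (an illustrative sketch, not the Access Logix firmware logic): with access control enabled, a server sees only the LUNs in the Storage Group to which it is connected; with it disabled, every server sees every LUN.

```python
# Illustrative model of Storage Group data access control
# (not the real Access Logix implementation).

class StorageSystem:
    def __init__(self, luns, access_control=False):
        self.luns = set(luns)
        self.access_control = access_control
        self.groups = {}       # group name -> set of LUN ids
        self.connections = {}  # server -> group name

    def create_group(self, name, luns):
        # Storage Groups exist only when data access control is enabled.
        assert self.access_control, "enable data access control first"
        self.groups[name] = set(luns)

    def connect(self, server, group):
        # A server can be connected to only one Storage Group per storage system.
        self.connections[server] = group

    def visible_luns(self, server):
        if not self.access_control:
            return self.luns            # every server sees every LUN
        group = self.connections.get(server)
        return self.groups.get(group, set())

array = StorageSystem(luns={0, 1, 2, 3}, access_control=True)
array.create_group("Group A", {0, 1})
array.connect("server1", "Group A")
print(sorted(array.visible_luns("server1")))   # [0, 1]
```

A server connected to Group A sees LUNs 0 and 1 as if they were the entire storage system; an unconnected server sees nothing.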

Enabling Data Access Control for a Storage System


If you want to create Storage Groups on a storage system, you must
enable data access control for the storage system.
1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. Right-click the storage-system icon for which you want to enable
data access control.
3. Click Properties, and then click the Data Access tab.

The Data Access tab of the Storage System Properties dialog box
opens (similar to the following). For information on the
properties in the dialog box, click Help.

4. Select the Access Control Enabled check box, and then click
Apply.
5. Click Yes to confirm that you want to enable data access control.
6. Click OK to apply your changes and close the Properties dialog
box.

Storage Group Properties


The properties of a Storage Group are as follows:
• Unique ID
• Storage Group name
• Sharing
  – Dedicated
  – Sharable
• LUNs in Storage Group
• Connected hosts
• Used host connection paths

Unique ID The unique ID is the unique identifier for the Storage Group. It is assigned automatically to the Storage Group when you create it. You cannot change this ID.

Storage Group Name By default, Storage Group Name has the format Storage Group n, where n is the total number of Storage Groups plus one. You can change the default name when you create the group or at any later time.

Sharing Sharing sets the sharing state of the Storage Group to dedicated or sharable. The sharable state lets you connect the group to multiple servers, and is used primarily for clustered environments. The dedicated state lets you connect the group to just one server. The default setting for a new Storage Group is dedicated.
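As a sketch, the sharing state acts as a guard on host connections (conceptual Python; the function and group names are illustrative, not Navisphere code):

```python
# A dedicated Storage Group accepts a single connected server; a sharable
# group (used primarily for clusters) accepts several.

def connect_host(group, host):
    """Connect host to group, enforcing the dedicated/sharable state."""
    if group["sharing"] == "dedicated" and group["hosts"] - {host}:
        raise ValueError("a dedicated Storage Group connects to only one server")
    group["hosts"].add(host)

shared = {"sharing": "sharable", "hosts": set()}
connect_host(shared, "node1")
connect_host(shared, "node2")          # allowed: the group is sharable
print(sorted(shared["hosts"]))         # ['node1', 'node2']
```

Attempting a second connection to a dedicated group would raise the error instead, which mirrors the restriction noted later in the creation procedure.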


LUNs in Storage Group LUNs in Storage Group lists the LUNs currently in the Storage Group. You cannot select the list entries. Each entry in the list consists of the following fields:

Field        Meaning
Identifier   LUN icon representing the LUN
Name         Name of the LUN
Capacity     User capacity, that is, the amount of space for user data on the LUN

Connected Hosts Connected hosts lists the servers currently connected to the Storage
Group. You cannot select the list entries. Each entry in the list
consists of the following fields:

Field        Meaning
Name         Hostname of the server
IP Address   IP address of the server
OS           Operating system on the server

Manager can determine what operating system is running on the server only
if the revision of the Agent running on the server is greater than 4.1.

When you connect a server to a Storage Group, the server is:


• Connected to the Storage Group through each server HBA port
(initiator) that is connected to the storage system.
In other words, if the server has two HBA ports and each is
connected to one storage-system SP, the server has two
connection paths to the Storage Group.
• Disconnected from any other Storage Group in the storage system
to which it was connected.
You can connect multiple servers to the same Storage Group, but you
should do this only if the following conditions are met:
• The servers are running the same operating system.


• The operating system either:
  – Supports multiple sharing of the same LUN
  – Has layered software (such as Microsoft Cluster Server) that supports multiple hosts sharing the same LUN.
In a cluster environment, you must connect each server in the cluster
to the same Storage Group.

Used Host Connection Paths Used host connection paths is an advanced property that does the following:
• Lists all paths that connect the selected server to the Storage
Group
• Tells whether that path is enabled or disabled.
If the check box for a path is selected, the path is enabled. If the
check box is cleared, the path is disabled. All paths to a host are
either enabled or disabled.
Each path consists of the following fields:

Field      Meaning
HBA        Device name for the HBA in the server connected to the storage system
HBA Port   Unique ID for the port on the HBA connected to the storage system
SP Port    Unique ID for the SP port connected to the HBA port
SP ID      SP A or SP B


Creating Storage Groups


Chapter 2 recommends that you connect and power up only one server (the
configuration server) to ensure that no other server can write to a storage
system before you connect it to the appropriate Storage Group. If you
followed these recommendations, now is the time to connect and/or power
up any server that will use the storage systems, so you can connect each to
the appropriate Storage Group as you create it.

You create Storage Groups using the Create Storage Group dialog
box. The procedure in this section tells you how to open this dialog
box from the storage-system menu. You can also open it by clicking
New in the Connect Hosts to Storage dialog box or in the Data
Access tab of the Storage System Properties dialog box.

To Create Storage Groups Before you can create Storage Groups on a storage system, you must have enabled data access control for the storage system, as described in Enabling Data Access Control for a Storage System on page 8-2.

1. In the Enterprise Storage dialog box, click the Equipment or Storage tab.
2. Right-click the icon for the storage system on which you want to
create the Storage Group, and then click Create Storage Groups.
The Create Storage Group dialog box opens, which is similar to
the one on the next page.


3. In the Storage System list, select the name of the storage system
on which you want to create a Storage Group.
4. If you want to assign the Storage Group your own name, enter
the name in Storage Group.
5. From Sharing, either click Dedicated to allow a single host to
access the new Storage Group, or click Sharable to allow multiple
hosts to access it.
6. If you want to assign LUNs from other Storage Groups to the new
group (which we do not recommend), click Show LUNs in Other
Storage Groups.
The Unassigned LUNs list is updated to include the LUNs in all
Storage Groups on the storage system.
7. Assign one or more LUNs to the new group by selecting the
LUNs from the Unassigned LUNs list and clicking →.
The LUNs move to the Selected LUNs list.


8. Click Connect Hosts.


The Connect Hosts to Storage dialog box opens, similar to the
following.

9. If you want to disconnect a server from another Storage Group and connect it to the Storage Group you are creating, click Show Hosts Connected to Other Storage Groups.
The Available Hosts list is updated to include the servers
connected to other Storage Groups on the storage system.
10. For each server you want connected to the new group, select the
server from the Available Hosts list, and click ↓ .
The selected server moves to the Hosts to be Connected list.

If the Storage Group is dedicated, you can connect it to only one server.


11. Click OK to apply your changes, and close the Connect Hosts
dialog box.
The selected hostname appears in Hosts Connected To Storage
Group in the Create Storage Group dialog box.
12. If you want to create another Storage Group, click Apply;
otherwise, click OK.
13. In the confirmation dialog box that opens, click Yes to create the
Storage Group and connect the selected servers to it.
14. If you clicked Apply, repeat steps 6 through 13 to create another
Storage Group.
15. When you have created all the Storage Groups you want on the
storage system, click OK.
Each server you selected for a Storage Group should now have a
connection to the Storage Group through each server’s HBA ports
(initiators) connected to the storage system.

What Next?
Continue to the next section to verify the connections to the Storage
Groups you just created.


Verifying Server Connections to a Storage Group


1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which you just
created Storage Groups.
3. Double-click the Storage Groups icon.
4. Right-click the Storage Group whose server connections you
want to verify, and click Properties.
The Storage Group Properties dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.


5. Click the Advanced tab.


The Advanced tab of the Storage Group Properties dialog box
opens, similar to the following.

6. On each server tab in the Used Host Connection Paths list, look
for an enabled path for each SP on the storage system with the
Storage Group.
A path is enabled if its check box is selected. If each SP has an
enabled path, then the server is correctly connected to the Storage
Group.


If an SP has a disabled path, then a working physical connection between the SP and an HBA port in the server existed at one time, but this connection is currently inactive. In this situation, make sure that:
• The HBA port is working.
• The switch is working, and it is configured to connect the HBA
port to the SP. See the switch documentation for information
on the switch and how to configure it.
• The cables between the HBA port and the switch, and between
the switch and the SP, are fastened securely.
If the Used Host Connection Paths list contains more than one
path for the same SP ID, then either the SP or the HBA to
which it was connected was replaced, and the information
about the connection between the SP and the HBA was never
removed from the storage system’s persistent memory.
Whenever a storage-system server is rebooted, the Agent
scans the network for HBA port connections to storage
systems. When it finds a connection, it sends information
about the connection to the SP.
The SP stores this information in the storage system’s
persistent memory on the database disks. This information
remains in this memory until you issue a CLI port command
to remove it. See the Agent and CLI manual for information
on the port command.
7. Repeat steps 2 through 6 for each additional Storage Group that
you created.
8. When you have verified the server connections to each Storage
Group in the storage system, reboot each server connected to the
storage system.
Rebooting makes the LUNs in the Storage Group connected to the
server visible to the server.
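The check in step 6, together with the troubleshooting hints above, can be sketched as follows (conceptual Python, not Navisphere code; the path list mirrors the Used Host Connection Paths entries, and the names are illustrative):

```python
# A server is correctly connected when each SP has at least one enabled
# path; more than one path for the same SP ID suggests a stale entry left
# in the storage system's persistent memory after an SP or HBA replacement.

def check_paths(paths, sp_ids=("SP A", "SP B")):
    """paths: list of (sp_id, enabled) tuples from the Advanced tab."""
    problems = []
    for sp in sp_ids:
        sp_paths = [enabled for pid, enabled in paths if pid == sp]
        if not any(sp_paths):
            problems.append(f"{sp}: no enabled path (check HBA, switch, cables)")
        if len(sp_paths) > 1:
            problems.append(f"{sp}: duplicate paths; stale entry may need removal")
    return problems

print(check_paths([("SP A", True), ("SP B", True)]))   # []
print(check_paths([("SP A", True)]))                   # flags SP B
```

An empty problem list corresponds to "each SP has an enabled path" in step 6; the duplicate case corresponds to the stale-connection scenario that the CLI port command resolves.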

What Next?
For Non-FC4700 storage systems - Continue to the next section
Verifying or Editing Device Information in the Host Agent Configuration
File (Non-FC4700 storage systems) on page 8-15.


For FC4700 storage systems - You must make the LUNs available to
the operating system as described in the Navisphere Server Software
Administrator’s or User Guide for the operating system.


Verifying or Editing Device Information in the Host Agent Configuration File (Non-FC4700 storage systems)
Whenever you create one or more LUNs on a storage system, you
should verify or edit the device information in the Host Agent
configuration file on the server. This section describes how to do this
for the following servers:
• AIX server (this page)
• HP-UX server (this page)
• Linux server (this page)
• NetWare server (this page)
• Solaris server (page 8-16)
• Windows server (page 8-17).
For information on editing the Host Agent configuration file, see
Navisphere Server Software Administrator’s or User Guide for the
operating system.

Verifying or Editing Host Agent Device Information on an AIX, HP-UX, Linux, or NetWare Server
1. Open the agent configuration file.
2. If the file contains a device entry for an existing LUN on each SP
in the storage system, do not change anything.
3. If the file does not contain a device entry for an existing LUN on
each SP in the storage system, add one for each SP.
4. If you did not change anything, just close the file; if you changed
the file, save it.
5. Stop and then start the Agent.

What Next?
AIX, HP-UX, Linux, or NetWare on the server views the LUNs in a
storage system as identical to standard single disk drives. For AIX,
HP-UX, Linux, or NetWare to use the LUNs, you must make them
available to the operating system as described in the Navisphere
Server Software Administrator’s or User Guide for the operating
system.


Verifying or Editing Agent Device Information on a Solaris Server


How you edit the agent configuration file depends on whether the
storage system had any bound LUNs before you created LUNs.
Follow the appropriate procedure below for your situation.

No CDE or ATF and No Bound LUNs Before You Created LUNs


If you are binding LUNs in a storage system connected to a Solaris
server and no LUNs exist on the storage system, edit the device
information in the agent configuration file on each server connected
to the storage system in one of the following ways:
1. Open the agent configuration file.
2. Enter the following line entry:
device auto auto "auto"
3. Save the agent configuration file.
4. Stop and then start the Agent.
or
1. Open the agent configuration file.
2. Add a clspn entry for each SP in the storage system.
For information on clspn entries, see the Agent manual for UNIX
environments.
3. Comment out (insert a # before) any device entries for the SPs in
the storage system.
4. Save the agent configuration file.
5. Stop and then start the Agent.

What Next?
Solaris on the server views the LUNs in a storage system as identical
to standard single disk drives. For Solaris to use the LUNs, you must
make them available to the operating system as described in the
Navisphere Server Software Administrator’s or User Guide for
Solaris.


Bound LUNs Before You Created LUNs


1. Open the agent configuration file.
2. Comment out (insert a # sign before) any clsp device entries for
SPs in the storage system.
3. If the file contains a device entry for an existing LUN on each SP
in the storage system, do not change anything.
4. If the file does not contain a device entry for an existing LUN on
each SP in the storage system, add one for each SP.
5. If you did not change anything, just close the file; if you changed
the file, save the file.
6. Stop and then start the Agent.

What Next?
Solaris on the server views the LUNs in a storage system as identical
to standard single disk drives. For Solaris to use the LUNs, you must
make them available to the operating system as described in the
Navisphere Server Software Administrator’s or User Guide for
Solaris.

Verifying or Editing Agent Device Information on a Windows Server

Auto Detect does not find FC4700 storage systems because they are managed
through their SP Agents and not through the Host Agent on the server.

1. On the server, start the Agent Configurator.


The Agent Configurator window opens.
2. On the window’s toolbar, click Clear Device List.
3. On the window’s toolbar, click Auto Detect Array.
4. Save the agent configuration file.
5. When you are asked if you want to restart the Agent, click Yes.

What Next?
Windows NT or Windows 2000 on the server views the LUNs in the
storage system as identical to standard single disk drives. For
Windows NT or Windows 2000 to use the LUNs, you must make
them available to the operating system as described in the


Navisphere Server Software Administrator’s or User Guide for Windows.

9
Setting Up and Using
MirrorView

This chapter introduces the EMC MirrorView option, which provides the ability to maintain a mirrored copy of current information at a remote location. Using remote mirroring can assist in recovery in the event of data loss.

The features in this chapter function only with a storage system that has the
optional MirrorView software installed.

This chapter describes the following:


• MirrorView Overview .......................................................................9-2
• MirrorView Terminology ..................................................................9-3
• MirrorView Features and Benefits...................................................9-6
• How MirrorView Handles Failures...............................................9-10
• MirrorView Operations Overview ................................................9-14
• Allocating the Write Intent Log .....................................................9-16
• Creating a Remote Mirror...............................................................9-20
• Activating a Remote Mirror............................................................9-26
• Identifying Remote Mirrors on a Storage System .......................9-27
• Viewing or Modifying Remote Mirrors or Images......................9-28
• Managing MirrorView Connections..............................................9-36
• Deactivating a Remote Mirror........................................................9-39
• Adding a Secondary Image to a Remote Mirror .........................9-40
• Promoting a Secondary Image to Primary ...................................9-45
• Synchronizing a Secondary Image ................................................9-46
• Fracturing a Secondary Image .......................................................9-48
• Removing a Secondary Image from a Remote Mirror................9-49
• Destroying a Remote Mirror...........................................................9-50


MirrorView Overview
MirrorView is a software application that maintains a copy image of a
logical unit (LUN) at separate locations in order to provide for
disaster recovery; that is, to let one image continue if a serious
accident or natural disaster disables the other.
The production image (the one mirrored) is called the primary image;
the copy image is called the secondary image. Each image resides on
a storage system. The primary image receives I/O from a host called
the production host; the secondary image is maintained by a separate
storage system that can be a standalone storage system or connected
to its own computer system. Both storage systems are managed by
the same management station, which can promote the secondary
image if the primary image becomes inaccessible.
The following figure shows two sites and a primary and secondary
image that includes one LUN.
[Figure: a highly available cluster of file, mail, and database servers, each with dual adapters, connected through switch fabrics and extenders to two storage systems (each with SP A and SP B) holding Storage Groups of LUNs. A remote mirror links Storage system 1 and Storage system 2 over Path 1 and Path 2.]

Figure 9-1 Remote Mirror Configuration Sample


The connections between storage systems require fibre channel cable
and GigaBit Interface Converters (GBICs) at each SP. If the
connections include extender boxes, then the distance between
storage systems can be up to 40 kilometers. Without extender boxes,
the maximum distance is 500 meters.


MirrorView Terminology
Active state Condition in which a remote mirror is running normally.

Attention state Condition in which a mirror is not operational because a required condition has not been met. For example, a mirror could be in the attention state if the number of secondary images falls below the set limit. Access to the primary LUN continues.

Auto recovery Option to have synchronization start as soon as a secondary image is determined to be reachable.

Consistent state (of image) Condition in which a secondary image is identical to either the current primary image or to some previous instance of the primary image. This means that the secondary image is potentially recoverable when it is promoted.

Fracture Condition in which a secondary image is unreachable by the primary image.

Fracture log A bit map, maintained in SP memory, that indicates which portions of the primary image might differ from the secondary image(s). It is used to shorten the synchronization process after fractures. Because the log is maintained in SP memory, if the SP that controls the primary image fails, the fracture log is lost and full synchronization of the secondary image(s) is needed.
Note: This is a double-failure scenario.
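The fracture-log idea can be sketched in a few lines (a conceptual Python model only; the region size and class names are illustrative, not Navisphere internals):

```python
# A fracture log records which regions of the primary image changed while a
# secondary was unreachable, so resynchronization copies only those regions.

REGION = 64 * 1024  # bytes tracked per fracture-log bit (illustrative)

class FractureLog:
    def __init__(self):
        self.dirty = set()              # region numbers that may differ

    def record_write(self, offset, length):
        """Mark every region touched by a write to the primary image."""
        first = offset // REGION
        last = (offset + length - 1) // REGION
        self.dirty.update(range(first, last + 1))

    def regions_to_copy(self):
        """Only these regions need copying during synchronization."""
        return sorted(self.dirty)

log = FractureLog()
log.record_write(0, 1024)               # touches region 0
log.record_write(130_000, 10)           # touches region 1
print(log.regions_to_copy())            # [0, 1]
```

Losing the in-memory set corresponds to losing the fracture log when the controlling SP fails, after which a full copy is the only safe option.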

Image state Condition of an image. The image states are in-sync, consistent,
synchronizing, and out-of-sync. See States.

Inactive state Remote mirror state in which the mirror is unavailable for host I/O. Attempts to write to or read from a mirror in the inactive state result in the error STATUS_INVALID_DEVICE_STATE.

In-sync state The state in which the data in the secondary image is identical to that in the primary. On the next I/O, the image state will change to consistent. Also, see States.

MirrorView mirroring A feature that provides the means for disaster recovery by
maintaining one or more copies (mirrors) of LUNs at distant
locations. MirrorView can work in conjunction with, but is
independent of, the other major CLARiiON® software features


known as Access Logix, Application Transparent Failover (ATF), and SnapView. Regarding SAN storage, MirrorView works with LUNs, and thus can be used to mirror one or more LUNs that compose a SAN Storage Group.

Out-of-sync state Remote mirror state in which the software does not know how the
primary and secondary images differ; therefore a full synchronization
is required to make the secondary image(s) usable. Also, see image
state.

Promote (to primary) The operation by which the administrator changes a secondary image
of a remote mirror to the primary image. As part of this operation, the
previous primary image becomes a secondary image. If the previous
primary image is unavailable when you promote the secondary
image (perhaps because the primary site suffered a disaster), the
software does not include it as a secondary image in the new mirror.

Primary image The LUN that serves as a source for the remote mirrored LUN, which
is the secondary image. There is one primary image and zero or one
secondary images. A remote mirror is ineffective for recovery unless
it has at least one secondary image.

Quiesce threshold or idle threshold The time period after which, without I/O from the host, any secondary image in the consistent state and not fractured is marked as being in the in-sync state (the default is 60 seconds). An administrator can promote an in-sync secondary image to primary image with no synchronization action required, whereas promoting a consistent image might lose the latest updates unacknowledged to the host.
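As a sketch, the threshold rule amounts to a simple state transition (conceptual Python; the function name is illustrative, and the 60-second default comes from the definition above):

```python
# A consistent, unfractured secondary image is marked in-sync once the idle
# period (no host I/O) exceeds the quiesce/idle threshold.

def image_state_after_idle(state, fractured, idle_seconds, threshold=60):
    if state == "consistent" and not fractured and idle_seconds > threshold:
        return "in-sync"
    return state

print(image_state_after_idle("consistent", False, 90))   # in-sync
print(image_state_after_idle("consistent", True, 90))    # consistent
```

The distinction matters for promotion: an in-sync image promotes with no synchronization, while a consistent image might lose the latest unacknowledged updates.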

Remote mirror A LUN that is mirrored at different sites. The LUN at one site is
designated as the primary image, and a LUN at another site is called
a secondary image. The software maintains the secondary image as a
byte-for-byte copy of the primary image. If the system at the primary
site fails, a secondary image may be promoted to take over the
primary role, thus allowing access to the data at a remote location.

Remote mirror image (image for short) The LUN at one site that participates in a remote mirror. The image can be either the primary or a secondary image.

Secondary image A LUN that contains a copy of the primary image LUN. There can be
zero or one secondary images.


States There are two types of states: remote mirror states and image states.
The remote mirror states are inactive, active, and attention. The image
states are in-sync, consistent, synchronizing, and out-of-sync. Note
that I/O can occur to the primary image only when the remote mirror
is in the active state.

Synchronize The process of updating each secondary image with changes from a
primary image. There are several levels of synchronization:
synchronization based on a fracture log, synchronization based on
the optional write intent log, and full synchronization (virtually a
copy). Synchronization based on the fracture or write intent log
requires copying only part of the primary image to the secondary
image(s).

Synchronizing state A secondary image in the process of synchronization. The data in the
image is not guaranteed to be usable until the synchronize operation
completes. Thus, an image in the synchronizing state cannot be
promoted to the primary image. Also, see States.

Write intent log (WIL) A record of changes that were made to the primary image but have
not yet been written to all secondary images. This record is stored in
persistent memory on a private LUN reserved for the mirroring
software. If the primary storage system fails (not catastrophically),
the optional write intent log can be used to quickly synchronize the
secondary image(s) when the primary storage system becomes
available. This avoids the need for full synchronization of the
secondary images, which can be a very lengthy process.


MirrorView Features and Benefits


MirrorView mirroring adds value to customer systems by offering
the following features:
• Provision for disaster recovery with minimal overhead
• Local high availability
• Cross mirroring
• Integration with EMC SnapView LUN copy software

Provision for Disaster Recovery with Minimal Overhead


Provision for disaster recovery is the major benefit of MirrorView
mirroring. Destruction of the primary data site would cripple or ruin
many organizations. MirrorView lets data processing operations
resume within a working day.
MirrorView is transparent to hosts and their applications. Host
applications do not know that a LUN is mirrored and the effect on
performance is minimal.
MirrorView uses synchronous writes, which means that host writes
are acknowledged only after all secondary storage systems commit
the data. This type of mirroring is in use by most disaster recovery
systems sold today.
MirrorView is not host-based, therefore it uses no host I/O or CPU
resources. The additional processing for mirroring is performed on
the storage system.

Local High Availability


MirrorView operates in a highly available environment. There are
two host-bus adapters (HBAs) per host, and there are two SPs per
storage system. If a single HBA or SP fails, the path in the surviving
SP can take control of (trespass) any LUNs owned by the failed HBA
or SP. The high availability features of RAID protect against disk
failure. And mirrors are resilient to an SP failure in the primary or
secondary storage system.
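The trespass behavior can be sketched as a simple ownership reassignment (conceptual Python, not storage-system code; names are illustrative):

```python
# When an SP (or the path through it) fails, the surviving SP takes control
# of -- trespasses -- the LUNs owned by the failed SP.

def trespass(owners, failed_sp, surviving_sp):
    """owners maps LUN number -> owning SP; reassign the failed SP's LUNs."""
    return {lun: (surviving_sp if sp == failed_sp else sp)
            for lun, sp in owners.items()}

owners = {0: "SP A", 1: "SP B", 2: "SP A"}
print(trespass(owners, "SP A", "SP B"))   # {0: 'SP B', 1: 'SP B', 2: 'SP B'}
```

RAID protection covers disk failure separately; trespass covers the SP and HBA path failures described above.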


Cross Mirroring
The primary or secondary role applies to an image on the storage
device. A storage system can maintain both primary and secondary
images on the same system, just not in the same mirror.

Integration with SnapView Software


SnapView software allows users to create a snapshot copy of an
active LUN at any point in time. The snapshot copy is a consistent
image that can serve for backup while I/O continues to the source
LUN. You can use SnapView in conjunction with MirrorView to
create a mirror image of a snapshot copy at a remote site.
To provide for disaster recovery, the primary and secondary storage
systems should be far apart. MirrorView ensures that data from the
primary storage system replicates to the secondary. The host (if any)
connected to the secondary might sit idle until the primary site fails.
With SnapView at the secondary site, the host at the secondary site
can take snapshot copies of the mirror images and back them up to
other media. This provides time-of-day snapshots of production data
with little impact to production host performance.


MirrorView Example The following figure (a copy of the previous one) shows a sample
remote mirror configuration:

[Figure: same configuration as Figure 9-1, showing the highly available cluster of servers connected through switch fabrics and extenders to Storage system 1 and Storage system 2, with the remote mirror linked over Path 1 and Path 2.]

Figure 9-2 Sample Remote Mirror Configuration

In the figure above, the production host executes customer applications. These applications access data on Storage-system 1.
Storage-system 2 is 40 km away and mirrors the data on LUN 2. The
mirroring is synchronous, so that Storage-system 2 always contains
all data modifications that are acknowledged by Storage-system 1 to
the production host.
Each server has two paths — one through each SP — to each storage
system. If a failure occurs in a path, then the storage-system software
may switch to the path through the other SP (transparent to any
applications).


The server sends a write request to an SP in Storage-system 1, which then writes data to its LUN. Next, the data is sent to the corresponding SP in Storage-system 2, where it is stored on its LUN before the write is acknowledged to the production host.
The standby host has no direct access to the mirrored data. (There
need not be a server at all at the standby site; if there is none, the LAN
connects to the SPs as shown.) This server runs applications that
access other data stored on Storage-system 2. If a failure occurs in
either the production host or Storage-system 1, an administrator can
use the management station to promote the image on Storage-system
2 to the primary image. Then the appropriate applications can start
on any connected server (here, the standby host) with full access to
the data. The mirror will be accessible in minutes, although the time
needed for applications to recover will vary.


How MirrorView Handles Failures


When a failure occurs during normal operations, MirrorView
implements several actions to recover.

Primary Image Fails


When the host or storage system running the primary image fails,
access to the mirror stops until you promote a secondary image to
primary.
When you promote the secondary image, the software assigns a new
mirror ID to the promoted image to distinguish it from the old mirror.
The new status of the old primary image depends on whether the old
primary image is accessible at the time of promotion.
• If the primary image is not accessible when you promote, the
software creates a new mirror with the old secondary image as
the new primary image, and no secondary image. The mirror on
the primary storage system does not change.

Mirror before promotion              Mirror after promotion
primary image = LUN xxxx             primary image = LUN yyyy
secondary image = LUN yyyy           secondary image = none

• If the old primary is accessible when you promote, then the promoted image becomes primary (that is, the images swap). The software then tests to see if the two images are in-sync. If it finds the images are in-sync, then it proceeds with mirrored I/O as usual. If there is a possibility the images are not in-sync, then the software performs a full synchronization using the promoted image as the primary image.

Mirror before promotion          Mirror after promotion
primary image = LUN xxxx         primary image = LUN yyyy
secondary image = LUN yyyy       secondary image = LUN xxxx
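The two promotion outcomes just described can be modeled as a small decision function. This is an illustrative sketch; the function and field names are hypothetical and do not come from the Navisphere software.

```python
# Sketch of the promotion logic described above. If the old primary is not
# accessible, a new mirror is created around the promoted image with no
# secondary; if it is accessible, the images swap, and a full synchronization
# runs unless the images are known to be in-sync. Illustrative only.

def promote(primary_lun, secondary_lun, primary_accessible, in_sync):
    """Return the resulting mirror configuration after a promotion."""
    if not primary_accessible:
        # New mirror (new mirror ID): promoted image, no secondary image.
        return {"primary": secondary_lun, "secondary": None, "sync": None}
    # Images swap; sync requirement depends on whether they are in-sync.
    sync = "none" if in_sync else "full"
    return {"primary": secondary_lun, "secondary": primary_lun, "sync": sync}

assert promote("xxxx", "yyyy", primary_accessible=False, in_sync=False) == {
    "primary": "yyyy", "secondary": None, "sync": None}
assert promote("xxxx", "yyyy", True, True)["sync"] == "none"
assert promote("xxxx", "yyyy", True, False)["sync"] == "full"
```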

If the Primary is Repaired Instead of Being Promoted


If the primary is repaired, then the mirror continues as before the
failure.

9-10 EMC Navisphere Manager Version 5.X Administrator’s Guide


If a primary image fails during synchronization with a secondary,
then the secondary is still in the synchronizing state, and cannot be
promoted.

Restoring the Original Mirror Configuration After Recovery of a Failed Primary Image
If the old primary image becomes accessible after a failure, and the
old mirror is repaired, the old mirror cannot communicate with the
new mirror.
To restore the mirror on the primary host to its original configuration
after the primary image is recovered, do the following:
1. If present, remove the secondary image from the new mirror.
This is the original primary image (LUN xxxx) from the original
mirror.
2. Destroy the original mirror using the Navisphere Manager Force
Destroy menu option.

Original Mirror                                 New Mirror
Old mirror is destroyed.                        primary image = LUN yyyy
LUN used for primary image remains (LUN xxxx)   secondary image = none

3. Add a secondary image to the new mirror using the LUN that
was the primary image for the original mirror (LUN xxxx).
4. Synchronize the secondary image.

New Mirror
primary image = LUN yyyy
secondary image = LUN xxxx

5. Promote the secondary image (LUN xxxx) in the new mirror to
primary.
The new mirror has the same configuration as the original mirror.

New Mirror
primary image = LUN xxxx
secondary image = LUN yyyy
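The five restore steps above can be walked through in a small model to confirm they return the mirror to its original shape. This is an illustrative sketch using the example LUN names (xxxx, yyyy); it models mirrors as simple dictionaries and is not the Navisphere software.

```python
# Sketch of the five restore steps: the new mirror ends up with the same
# primary/secondary configuration the original mirror had before the failure.
# Illustrative only; synchronization (step 4) is not modeled.

original_mirror = {"primary": "xxxx", "secondary": "yyyy"}   # failed mirror
new_mirror = {"primary": "yyyy", "secondary": "xxxx"}        # after promotion

# Step 1: remove the secondary image (the old primary LUN) from the new mirror.
new_mirror["secondary"] = None
# Step 2: force-destroy the original mirror; its primary LUN xxxx remains.
original_mirror = None
# Step 3: add LUN xxxx back as the secondary image of the new mirror.
new_mirror["secondary"] = "xxxx"
# Step 4: synchronize the secondary image (not modeled here).
# Step 5: promote the secondary; the two images swap.
new_mirror["primary"], new_mirror["secondary"] = (
    new_mirror["secondary"], new_mirror["primary"])

# The new mirror now matches the original configuration.
assert new_mirror == {"primary": "xxxx", "secondary": "yyyy"}
```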


Failure of the Secondary Image


A secondary image failure may bring the mirror below the configured
minimum number of images, which puts the mirror in the attention
state. When a primary cannot communicate with a secondary image,
it marks the secondary as unreachable and stops trying to write to it.
However, the secondary image remains a member of the mirror.
The primary also attempts to minimize the amount of work required
to synchronize the secondary after it recovers. It does this by
fracturing the mirror. This means that while the secondary is
unreachable, the primary keeps track of all write requests so that only
those blocks that were modified need to be copied to the secondary
during recovery. When the secondary is repaired, a synchronization
operation launches to bring the image up-to-date. The primary
recognizes that the secondary is alive and restarts write propagation
to that image.
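The fracture behavior just described can be sketched as a dirty-block set maintained while the secondary is unreachable. This is an illustrative model; the class name and methods are hypothetical, and the real implementation tracks writes differently.

```python
# Sketch of a fracture: while the secondary is unreachable, the primary
# records which blocks were written, so recovery copies only those blocks
# instead of performing a full synchronization. Illustrative only.

class FracturedMirror:
    def __init__(self):
        self.fractured = False
        self.dirty = set()              # blocks written while fractured

    def fracture(self):
        self.fractured = True

    def write(self, block):
        if self.fractured:
            self.dirty.add(block)       # remember for later partial resync

    def recover(self):
        """Secondary reachable again: return only the modified blocks."""
        to_copy = sorted(self.dirty)
        self.dirty.clear()
        self.fractured = False
        return to_copy

m = FracturedMirror()
m.fracture()
for b in (4, 7, 4):                     # block 4 written twice, copied once
    m.write(b)
assert m.recover() == [4, 7]
```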


The following table shows how MirrorView might help you recover
from system failure at the primary and secondary sites. It assumes
that the mirror is active and is in the in-sync or consistent state.

Event: Host or storage system running primary image fails.

Result and Recovery:

Option 1 - Catastrophic failure; repair is difficult or impossible.
On the standby host, the mirror goes to the attention state. At the secondary site, an
administrator promotes the secondary image and then takes other prearranged recovery
steps required for application startup on the standby host.
Note: Any writes in progress when the primary storage image fails may not propagate to
the secondary image. Also, if the remote image was fractured at the time of the failure,
any writes since the fracture will not have propagated.

Option 2 - Non-catastrophic failure; repair is feasible.
On the standby host and production host, if running, the mirror goes to the attention
state. If the mirror images were in the in-sync state before the failure, and the
administrator fixes the problem, no synchronization is required during recovery.
If the images were in a consistent state, and the recovery policy was set to automatic,
then when the primary system is restarted, the image will synchronize automatically. If
the recovery policy is set to manual, the administrator needs to synchronize the image.
The write intent log, if used, shortens the sync time needed. If the write intent log is
not used, or the secondary LUN was fractured at the time of the failure, then a full
synchronization is necessary.
If MirrorView detects a media failure on the primary image, secondary image, or the
write intent log, it administratively fractures the image. The administrator must correct
the problem and manually synchronize the image.

Event: Host or storage system running secondary image fails.

Result and Recovery:

The mirror goes to the attention state, yet access to the primary image continues. The
administrator has a choice: if the secondary can easily be fixed (for example, if someone
pulled out a cable), then the administrator could have it fixed and let things resume. If
the secondary can't easily be fixed, the administrator can reduce the minimum number of
required secondary images (if the mirror requires a secondary image) to let the mirror
become active. The secondary can be fixed and its image added and synchronized later.


MirrorView Operations Overview


1. Connect the same Navisphere management station to both hosts
and configure the management station to let you manage both
hosts.
2. Establish a usable, two-way connection between the MirrorView
storage systems (see Managing MirrorView Connections on
page 9-36).
3. If the primary LUN does not exist, bind it on its host’s storage
system. Wait for the LUN to finish binding. Assign it to a Storage
Group as you would any LUN.
4. If the secondary LUN does not exist, bind it on its host. The
secondary LUN must have the same number of blocks as the primary
LUN. You can assign the block size when you bind the LUN.
5. Wait for the LUN to finish binding. This LUN must be a private
LUN (not accessible until it is promoted), so do not assign it to a
Storage Group.
6. If you want to use the write intent log, then for each SP on both
hosts, designate a LUN for the remote mirroring write intent log.
The minimum size is 128 Mbytes. See Allocating the Write Intent
Log on page 9-16.
In Navisphere Manager, you set up a write intent log LUN by
specifying the RAID Group in which the write intent log LUN
should be bound. You can specify any connected LUN that can be
made a private LUN; that is, any LUN, except a Hot Spare, that is
not part of a Storage Group.
7. On the host with the primary LUN, create the mirror as follows.
With Manager, use the Create Remote Mirror option on the
primary storage system or select the Mirror option on the primary
LUN; then give a name and, optionally, a description. You can,
but need not, specify a write intent log. You can specify secondary
image information here (the GUI software displays possible
secondary images) or you can add a secondary image later.
Normally, when you add an image to a mirror, the software
synchronizes the secondary image with the primary. However, if
there is no pre-existing data on the primary LUN, you can avoid


the synchronization step. To ensure that no data is written to the
primary while the secondary is being added, make sure the mirror is
deactivated until the secondary is added.
After you activate the mirror, the software will duplicate all
writes that occur to the primary LUN to the secondary LUN.
At any time in the sequence above, you can get remote mirror
status with the Manager Remote Mirror Property dialog box.
8. If a primary failure occurs, the secondary will report the failure
the same way as Navisphere usually reports failures.
If the primary failure is catastrophic, the original management
station may be unusable and thus unable to report the failure. For
such a failure, the administrator at the secondary site must set up
a management station, and then promote the secondary to
primary and take other recovery action needed. This includes
assigning the newly promoted LUN to a Storage Group.
If the primary failure is minor, have the primary fixed and resume
mirroring.
9. If a secondary image fails, it will be system fractured. If the
secondary can easily be fixed (for example, by replacing a cable),
then the administrator can fix it and let mirroring recover and
resynchronize the image. If the secondary can't be fixed, the
administrator can use Manager or the CLI to reduce the minimum
number of required images to 0; this will activate the mirror.
Later, the secondary can be fixed and the minimum number of
required images can be changed.
10. Any time you want to stop mirroring, you can reduce the
minimum number of images to 0 and then fracture the secondary
image. To stop permanently, or to reconfigure, destroy the remote
mirror with Navisphere Manager or the CLI. You must deactivate
the mirror and remove all secondary images first.


Allocating the Write Intent Log

Write Intent Log

The write intent log keeps track of writes that have not yet been made
to the secondary mirror image. It provides for fast recovery when the
primary storage system fails. When the primary fails and is
recovered, the write intent log is used to synchronize the data on the
secondary mirror image. Otherwise, a full resynchronization would
be required for the secondary mirror image.
The write intent log consists of two private 128-Mbyte LUNs, one
assigned to each SP in the storage system.
By default, newly created remote mirrors do not use this feature and
therefore, it is not necessary to allocate space for the write intent log.
However, if you decide to use this feature for even one remote mirror,
then it becomes necessary to allocate the private disk space for the
write intent log.
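The recovery benefit of the write intent log can be sketched as a set of in-flight regions. This is an illustrative model under the assumption that regions are logged before the write is issued and cleared once both images hold the data; the class and function names are hypothetical, not the Navisphere implementation.

```python
# Sketch of write-intent-log recovery: only regions still marked as pending
# at the time of the failure must be re-copied to the secondary image, so a
# partial rather than full resynchronization suffices. Illustrative only;
# the real log lives on two private 128-Mbyte LUNs, one per SP.

class WriteIntentLog:
    def __init__(self):
        self.pending = set()

    def record(self, region):
        self.pending.add(region)        # logged before the write is issued

    def clear(self, region):
        self.pending.discard(region)    # write confirmed on both images

def regions_to_resync(log):
    """After primary recovery, only pending regions are suspect."""
    return sorted(log.pending)

log = WriteIntentLog()
log.record(10); log.record(11)
log.clear(10)                           # this write completed everywhere
assert regions_to_resync(log) == [11]   # partial, not full, resynchronization
```

Without such a log, the primary cannot know which regions were in flight when it failed, which is why a full resynchronization would otherwise be required.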

To Allocate the Write Intent Log

The Allocate Write Intent Log dialog box contains two sets of
controls to specify a RAID Group for each SP. The controls behave the
same for both SPs, except as specifically noted.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Right-click the storage system for which you want to allocate the
write intent log, click Properties, and then click the Remote
Mirrors tab.


The Storage System Properties - Remote Mirrors tab opens,
similar to the following.

3. Click Allocate Write Intent Log.

If the write intent log is already allocated, Allocate Write Intent Log
changes to Reallocate Write Intent Log.


The Allocate (Reallocate) Write Intent Log dialog box opens,
similar to the following.

4. Specify how and where the write intent log is to be allocated
using SP RAID Group Selection Method (SP A or SP B).
5. For both SPs, do one of the following:
• Click Auto Select to allocate the write intent log in an
application-selected RAID Group.

The RAID type assigned to the write intent log’s RAID Group will be
RAID 0.

• Click User Specified to specify the desired RAID Group using
Select SP RAID Group or New.
For SP B only: click Same RAID Group as SP A to allocate the
write intent log for SP B in the same RAID Group as SP A.
6. If you click User Specified, either click Select SP RAID Group to
select an existing RAID Group in which to allocate the write
intent log, or click New to create a new RAID Group in which to
allocate the write intent log.
7. Click OK to apply any changes and, if successful, close the dialog
box.


If you selected User Specified, and the RAID Group you select for the
write intent log has no LUNs (it is Unbound), the Select RAID Type
dialog box opens. Assign a RAID Type for the write intent log’s LUNs.

Clicking OK initiates the process of allocating or reallocating the
write intent log on the storage system, as follows:

No Previous Write Intent Log   The application attempts to allocate the new space for the
                               write intent log (bind new LUNs) and specify those LUNs as
                               the write intent log LUNs.

Previous Write Intent Log      The application attempts to deallocate (unbind) the current
                               LUNs assigned to the write intent log, then attempts to
                               allocate the new space for the write intent log (bind new
                               LUNs) and specify those LUNs as the write intent log LUNs.

• If the action is successful, the application informs you that the
write intent log was successfully allocated, and closes the dialog
box. Otherwise, an error message displays and the dialog box
remains open.
• If the allocation fails, but new LUNs were successfully bound in
the process, the application attempts to unbind those LUNs. If the
unbind fails, an error message is displayed.


Creating a Remote Mirror


You can create a remote mirror on a storage system if all of the
following are true:
• The storage system supports MirrorView.
• There are LUNs bound on the storage system that are not already
participating in a remote mirror but are capable of being
mirrored.
• The maximum number of remote mirrors for the storage system
has not been reached.

The maximum number of mirrors per storage system is 50.

Use either the basic or advanced Create Remote Mirror dialog box to
create a remote mirror.

Creating a Remote Mirror - Basic

The Create Remote Mirror basic dialog box lets you create a
remote mirror with the minimum amount of information. It assumes
the default values for some of the more advanced parameters.

The primary LUN and secondary LUN must have the same number of
blocks. You set the block count when you bind the LUN.

1. In the Enterprise Storage dialog box, click the Storage tab.


2. Right-click the storage system for which you want to create a
remote mirror, and then click Create Remote Mirror.
The Create Remote Mirror dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.
3. To change the primary storage system, select a primary storage
system from the Primary Storage System list.
You select the system on which you will create the remote mirror.
The list displays only those storage systems that support
MirrorView.
4. In Name, type a valid name for the remote mirror.
A valid name consists of at least one non-white space character
and must not exceed 64 characters.


5. In Description (optional), type any detailed information about
the remote mirror being created. A valid description must not
exceed 256 characters.
6. From the Primary Storage System LUN to be Mirrored list, select
the eligible LUN on the selected primary storage system to be
mirrored.
LUNs that cannot be mirrored include:
• LUNs that are already participating in a remote mirror
• Invalid LUNs, such as Hot Spare LUNs
7. Choose how to specify the secondary LUN by clicking Auto
Select, User Specified, or None.

Auto Select, User Specified, or None may appear dimmed and be
unavailable. If this happens:
Check all hardware connections to ensure that the primary storage
system is physically connected to the secondary storage system.

Ensure that MirrorView is installed on the secondary system.


Ensure that a secondary image exists on the secondary system and that it
matches the requirements of the primary image.

Verify the status of the logical connections between storage systems. See
Managing MirrorView Connections on page 9-36.

• Auto Select - The application selects the LUN on the
secondary storage system for the secondary image.
a. Click Auto Select.
b. From the Secondary Storage System list, select the storage
system to be used as the secondary storage system.
• User Specified - You specify the LUN on the secondary
storage system for the secondary image.
a. Click User Specified.
b. Click Pick Secondary LUN.
The Add Secondary Image dialog box opens. See Adding a
Secondary Image to a Remote Mirror on page 9-40.
• None - No LUN is specified for the secondary image.
8. Click OK to create the remote mirror. The application asks you
to do one of the following:
• Continue or cancel the action if there is no secondary image
for this mirror, or
• Continue creating a remote mirror with a primary and
secondary image
9. If the action is successful, the application closes the dialog box,
activates the mirror, displays a confirmation message, and places
an icon for the remote mirror, primary image, and secondary
image (if created) in the Storage tree. The word Active is
appended to the name of the remote mirror in the Storage tree.
Otherwise, the application displays an error message.


Creating a Remote Mirror - Advanced

The Advanced Create Remote Mirror dialog box lets you supply
your own values for the advanced parameters.

The primary LUN and secondary LUN must have the same number of
blocks. You set the block count when you bind the LUN.

1. In the Enterprise Storage dialog box, click the Storage tab.


2. Right-click the storage system for which you want to create a
remote mirror, and then click Create Remote Mirror.
3. In the Create Remote Mirror dialog box, click Advanced.
The Advanced Create Remote Mirror dialog box opens, similar
to the following.


4. To change the primary storage system, select a primary storage
system from the Primary Storage System list.
You select the system on which you will create the remote mirror.
The list displays only those storage systems that support
MirrorView.
5. In Name, type a valid name for the remote mirror.
A valid name consists of at least one non-white space character
and must not exceed 64 characters.
6. In Description (optional), type detailed information about the
remote mirror being created.
7. Click the Use Write Intent Log check box to use the write intent
log feature.
The write intent log keeps track of writes that have not yet been
made to the secondary mirror image for the mirror. It provides for
fast recovery when the primary storage system fails.
When the primary system fails and is recovered, the write intent
log is used to synchronize the data on the secondary mirror
image. Otherwise, a full resynchronization would be required for
the secondary mirror image.
8. In the Minimum Required Images list, click 0, 1, or All. This
parameter specifies the minimum number of secondary mirror
images that must be defined and operational for a remote mirror
to continue to operate.
9. In Quiesce Threshold, enter the quiesce threshold value for the
remote mirror. Valid values are 0 through 3600 seconds.
10. From the Primary Storage System LUN to be Mirrored list, select
the eligible LUN on the selected primary storage system to be
mirrored.
LUNs that cannot be mirrored include:
• LUNs that are already participating in a remote mirror.
• Invalid LUNs, such as Hot Spare LUNs.
11. Specify the remote LUN by selecting Auto Select, User Specified
or None.

Auto Select, User Specified, or None may appear dimmed and be
unavailable. If this happens:


Check all hardware connections to ensure that the primary storage
system is physically connected to the secondary storage system.

Ensure that MirrorView is installed on the secondary system.

Ensure that a secondary image exists on the secondary system and that it
matches the requirements of the primary image.

Verify the status of the logical connections between storage systems. See
Managing MirrorView Connections on page 9-36.

• Auto Select - The application selects the LUN on the
secondary storage system for the secondary image.
a. Click Auto Select.
The Secondary Storage System list displays.
b. From the Secondary Storage System list, select the storage
system to be used as the secondary storage system.
• User Specified - You specify the LUN on the secondary
storage system for the secondary image.
a. Click User Specified.
b. Click Pick Secondary LUN.
The Add Secondary Image dialog box opens. See Adding a
Secondary Image to a Remote Mirror on page 9-40.
• None - No LUN is specified for the secondary image.
12. Click OK to create the remote mirror. The application asks you
to do the following:
• Configure the Use Write Intent Log on the primary storage
system if you have not already done this. See Allocating the
Write Intent Log on page 9-16.
• Continue or cancel the action if there is no secondary image
for this mirror
• Continue creating a remote mirror with a primary and
secondary image
13. If the action is successful, the application closes the dialog box,
activates the mirror, displays a confirmation message and places
an icon for the remote mirror, primary image, and secondary
image (if created) in the Storage tree. The word Active is
appended to the name of the remote mirror in the Storage tree.
Otherwise, the application displays an error message.


Activating a Remote Mirror


All remote mirrors are automatically activated when they are created.
You will need to manually activate a mirror in the following cases:
• When a mirror has been deactivated and you want to activate it.
• When you promote a secondary image to primary.
To Activate a Remote Mirror
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the remote
mirror resides.
3. Double-click a Remote Mirrors container node.
Remote mirrors appear under the Remote Mirrors container
node.
4. Right-click an inactive remote mirror node, and then click
Activate.


Identifying Remote Mirrors on a Storage System


The possible states of a remote mirror are Active, Inactive, and
Attention.
• Active – The remote mirror is running normally.
• Inactive – The mirror is unavailable for host I/O, a state
produced by a deactivate operation.
• Attention – The remote mirror is not operational because the
number of secondary images has fallen below the configured
minimum. Access to the primary image continues.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system, and then
double-click the Remote Mirrors container node.
Remote mirrors appear under the Remote Mirrors container
node.

You can also identify remote mirrors on a storage system using the
Storage System Properties - Remote Mirrors tab.


Viewing or Modifying Remote Mirrors or Images


Using Navisphere Manager, you can view or modify information
about a remote mirror or any of its images — primary or secondary.

Always view and modify remote mirror properties from the primary storage
system. Information displayed from the secondary storage system may not
be accurate, especially if the primary storage system has lost contact with the
secondary storage system.

To View or Modify a Remote Mirror’s General Properties


The Remote Mirror Properties - General tab lets you view and
modify the general properties of a remote mirror, including the
remote mirror name, state and description.

Unique ID is read only.

1. In the Enterprise Storage dialog box, click the Storage tab.


2. Double-click the icon for the storage system whose remote
mirrors you want to view or modify, and then double-click the
Remote Mirrors container icon.
3. Right-click the remote mirror image icon, click Properties, and
then click the General tab.


The Remote Mirror Properties - General tab opens, similar to the
following. For information on the properties in the dialog box,
click Help.

4. View read-only information in the following fields:


• Unique ID displays the read-only unique ID (World Wide
Name) for the remote mirror.
• State displays the current state of the remote mirror, as shown
in the following table.


State       Meaning

Active      The remote mirror is running normally.

Inactive    The remote mirror was idled by a deactivate operation.

Attention   The remote mirror is not operational because a required
            condition has not been met. For example, no secondary
            image is available and a minimum of one is required. In
            this case, access to the primary image continues.

5. View or modify information in the following areas:


• In Name, type a valid name for the remote mirror.
A valid name consists of at least one non-white space
character and must not exceed 64 characters.
• In Description (optional), type detailed information about the
remote mirror being created.
• Click the Use Write Intent Log check box to use the write
intent log feature, or deselect it to stop using the write intent
log. The write intent log must be allocated before use.
• In Quiesce Threshold, type the quiesce threshold for the
remote mirror. Valid values are 0 through 3600 seconds.
• In Minimum Required Images set the minimum number of
secondary images that must be defined and operational for a
remote mirror to continue to operate.
6. To apply any changes and close the dialog box, click OK. To apply
any changes and leave the dialog box open, click Apply.
7. If the remote mirror is not active, you can activate it by clicking
Activate. If the remote mirror is active, you can deactivate it by
clicking Deactivate.
8. Click Add Secondary Image to add a new secondary image for
the remote mirror.


To View or Modify a Primary Remote Mirror Image


The Remote Mirror Properties - Primary Image tab lets you view
information about the LUN being protected by the remote mirror.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system whose remote
mirrors you want to view or modify.
3. Double-click the Remote Mirrors icon.
4. Right-click a remote mirror image icon, and then click Properties.
5. Click the Primary Image tab.
The Remote Mirror Properties - Primary Image tab opens,
similar to the following. For information on the properties in the
dialog box, click Help.


6. View the following read-only information:


Storage System displays the current name of the storage system
on which the LUN being mirrored is bound. (This is the storage
system that holds the primary image for the remote mirror.)
Image LUNs displays a list of the LUNs being protected
(mirrored) by the remote mirror. Each entry consists of the
user-specified LUN Name, the LUN Identifier in hexadecimal,
and the LUN’s user capacity in Gbytes.
7. Use these buttons to access other options:
Click System Properties to open the Storage System Properties
dialog box for the primary image storage system.
For each selected LUN in the Image LUNs list, click LUN
Properties to open the LUN Properties dialog box.

To View or Modify a Secondary Remote Mirror Image


The Remote Mirror Properties - Secondary Image tab lets you view
and modify parameters for the secondary mirror image LUN that is
protecting (mirroring) the LUN in the primary image of the remote
mirror. You can also start a synchronization process on the secondary
mirror image, remove or fracture the secondary mirror image, or
promote the secondary mirror image to the primary image.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system whose remote
mirrors you want to view or modify.
3. Double-click the Remote Mirrors icon.
4. Right-click a remote mirror image icon, and then click Properties.
5. Click the Secondary Image tab.


The Remote Mirror Properties - Secondary Image tab opens,
similar to the following. For information on the properties in the
dialog box, click Help.

6. View the following read-only information:


Storage System displays the current name of the storage system
that holds the secondary image for the remote mirror.
State displays the current state of the secondary mirror image, as
shown in the following table.


State           Meaning

In-Sync         The secondary image is identical to the primary image. This
                state persists only until the next write to the primary
                image, at which time the image state becomes Consistent.

Consistent      The secondary image is identical to the primary image, or it
                has been identical at some point in the past. If the mirror
                is not fractured, then the software will try to make the
                secondary image In-Sync after there has been no I/O for a
                given period of time (the quiesce threshold).

Synchronizing   The software is applying changes to the secondary image to
                mirror the primary image, but the current contents of the
                secondary are not known and are likely not usable.

Out-of-Sync     None of the above. The secondary image requires
                synchronization with the primary image.
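The transitions between these image states can be sketched as a small state machine. This is an illustrative model; the event names are hypothetical, while the states match the table above.

```python
# Sketch of the secondary-image state transitions described in the table:
# a write to the primary moves In-Sync to Consistent; an I/O-free quiesce
# period moves Consistent back to In-Sync; a synchronization moves
# Out-of-Sync through Synchronizing to In-Sync. Illustrative only.

TRANSITIONS = {
    ("In-Sync", "write_to_primary"): "Consistent",
    ("Consistent", "quiesce_period_elapsed"): "In-Sync",
    ("Out-of-Sync", "synchronize"): "Synchronizing",
    ("Synchronizing", "sync_complete"): "In-Sync",
}

def next_state(state, event):
    """Return the new image state; unknown events leave the state alone."""
    return TRANSITIONS.get((state, event), state)

assert next_state("In-Sync", "write_to_primary") == "Consistent"
assert next_state("Out-of-Sync", "synchronize") == "Synchronizing"
assert next_state("Synchronizing", "sync_complete") == "In-Sync"
```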

% Synchronized displays the percentage complete during a
resynchronization of the secondary mirror image. If the
secondary mirror image state is In-Sync, the value is 100.
Image LUNs displays a list of the LUNs in the secondary image.
Each entry consists of the user-specified LUN name, the LUN
identifier in hexadecimal, and the LUN’s user capacity in Gbytes.
7. View or modify the following information:
Recovery Policy specifies the policy for recovering the secondary
mirror image after a failure.
• Click Automatic to specify that recovery is automatic as soon
as the primary image determines that the secondary mirror
image is once again accessible.
• Click Manual to specify that the administrator must explicitly
start a synchronization operation to recover the secondary
mirror image.
Preferred SP specifies which SP, in the storage system where the
secondary mirror image resides, is preferred for inter-storage
system communication. Click SP A or SP B.


Synchronization Rate specifies a relative value for the
synchronization write delay parameter for the secondary mirror
image. Valid values are Low, Medium, or High, where Low
increases the delay between writes, thereby increasing the time
needed to synchronize, and High decreases the delay between
writes, thereby decreasing synchronization time.

If you are running concurrent synchronizations on more than one
secondary image, we recommend that you set the Synchronization Rate
to Low. Mirror synchronizations can significantly reduce storage-system
performance, especially on the LUNs being synchronized.

8. To apply any changes and close the dialog box, click OK. To apply
any changes and leave the dialog box open, click Apply.
9. Use these buttons to access other options:
• Click System Properties to open the Storage System
Properties dialog box for the storage system selected in
Storage System.
• For each LUN selected in the Image LUNs list, click LUN
Properties to open the LUN Properties dialog box.
• Click Promote to promote the secondary mirror image so that
it becomes the primary image for the remote mirror. The
current primary, if accessible, is demoted so that it is now a
secondary mirror image for the remote mirror.

The promoted secondary image must be added to the appropriate Storage Group, and the demoted primary image should be removed from its Storage Group.

• Click Remove to remove the secondary mirror image represented by the page from the remote mirror.
• Click Synchronize to start a synchronization operation on the
secondary mirror image.
• Click Fracture to perform an administrative fracture on the
secondary mirror image.


Managing MirrorView Connections


The Manage MirrorView Connections dialog box lets you establish a
logical connection between storage systems that are physically
connected and managed, and that have the MirrorView feature
installed. It also lets you remove logical connections between the
same storage systems. You must establish at least one logical
connection between storage systems in order for the remote mirror to
work and to add a secondary image to an existing remote mirror. You
must be managing at least one SP Agent in each storage system to
establish a logical connection.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Right-click the storage system for which you want to manage
MirrorView connections, and then click Manage MirrorView
Connections.
The Manage MirrorView Connections dialog box opens, similar
to the following. For information on the properties in the dialog
box, click Help.


3. In the Storage System box, verify that you have selected the right storage system. If not, select another from the list.

The Storage System list includes only those storage systems that
support MirrorView.

4. Establish or remove logical connections between storage systems by doing the following:
a. In the MirrorView Enabled Systems list or the
Unconnected/Unknown Systems list, select a storage system
for which you want to establish a logical connection to the
storage system you selected, or from which you want to
remove a logical connection.

MirrorView Enabled Systems lists storage systems that have at least one logical connection to the selected storage system, but these connections may not be usable.
Unconnected/Unknown lists storage systems that have a physical connection but no logical connection to the storage system, or whose connection status is unknown. This includes storage systems that are unmanaged or unavailable (the World Wide Name displays instead of the storage system name).

In MirrorView Enabled Systems, the status of each storage system connection to the selected storage system is one of the following:

Connected: Connection is usable and fully established (SP A <-> SP A and SP B <-> SP B). Action: none needed, unless you want to remove a logical connection.

Partially Connected: Connection is usable, but not fully established (SP A <-> SP A exists, but SP B <-> SP B does not). Action: establish a logical connection between SP B on both storage systems, or remove the existing connection.

Unusable (one-way): Connection is unusable because it is one-way (SP A -> SP A, or SP B <- SP B). Action: try to establish a two-way connection between one pair of SPs (Partially Connected) or both pairs (Connected), or remove any unusable connections.

Unmanaged: Connection is not verifiable because the storage system is unmanaged. Action: manage the storage system and then try to establish logical connections, or remove any connections.


In Unconnected/Unknown Storage Systems, the status of each storage system connection is one of the following:

Disconnected: No connections are established. Action: for mirroring to be possible, establish at least one usable connection (Partially Connected).

Unknown: The connection status cannot be determined because the storage system is either unmanaged or inaccessible. Action: manage the storage system or determine why the storage system is inaccessible.
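The status rules in the two tables above can be sketched as a small classification function. This is a hypothetical illustration of the logic only, not EMC code; the actual status is computed and reported by the SPs:

```python
def connection_status(sp_a_link=None, sp_b_link=None, managed=True):
    """Classify a MirrorView connection between two storage systems.

    Each link argument describes one SP-to-SP link ("two-way",
    "one-way", or None for no link). Illustrative names only; this
    merely models the status tables above.
    """
    if not managed:
        return "Unmanaged"          # status cannot be verified
    links = (sp_a_link, sp_b_link)
    if "one-way" in links:
        return "Unusable (one-way)"
    two_way = links.count("two-way")
    if two_way == 2:
        return "Connected"          # SP A <-> SP A and SP B <-> SP B
    if two_way == 1:
        return "Partially Connected"
    return "Disconnected"
```

For example, a single two-way link between the SP A pair classifies as Partially Connected, which the tables note is already usable for mirroring.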

b. Click Connect to establish logical connections.
c. Click Disconnect to remove logical connections.
Manager displays a message box confirming the operation
you are about to perform. Depending on the status of the
storage systems, Manager may display a warning message
describing how the status may impact the operation.
d. Click Yes to continue the operation and return to the Manage
MirrorView Connections dialog box.
e. Click No to cancel the operation and return to the Manage
MirrorView Connections dialog box.
5. Click Close to close the dialog box.


Deactivating a Remote Mirror


Deactivating a remote mirror stops all I/O to the primary image. You must deactivate a remote mirror in the following situations:
• Before you promote a secondary image to the primary image.
• Before you destroy the remote mirror.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the remote
mirror resides.
3. Double-click the Remote Mirrors Container node.
Remote mirrors display under the Remote Mirrors Container
node.
4. Right-click an active remote mirror node, and then click
Deactivate.
You are warned that I/O to the primary image LUN will be
rejected while the mirror is inactive.

You can also deactivate a remote mirror using the Remote Mirrors
Properties-General tab.


Adding a Secondary Image to a Remote Mirror


To add a secondary image to a remote mirror, you can use the basic
Add Secondary Image dialog box, the Advanced Add Secondary
Image dialog box, or the Create Secondary Image LUN dialog box.
The secondary LUN must be precisely the same size as the primary
LUN, and it cannot belong to a Storage Group.

Add Secondary Image


The basic Add Secondary Image dialog box specifies a secondary
mirror image for a remote mirror with a minimum of user input.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system to which you want to
add the secondary image.
3. Double-click the Remote Mirrors Container node.
4. Right-click a remote mirror node, and then click Add Secondary
Image.


The Add Secondary Image dialog box opens, similar to the following. For information on the properties in the dialog box, click Help.

5. In Remote Mirror Name, select the remote mirror to which a secondary mirror image is to be added.
6. In Secondary Storage System, select the secondary storage
system on which the secondary mirror image is to reside.
7. Click Auto Select if you want the application to determine the
LUN on the specified secondary storage system that will
comprise the secondary mirror image. (The application’s choice
will be checked in the Select Secondary Mirror Image LUN list.)
To choose the LUN yourself, deselect Auto Select. In the Select
Secondary Mirror Image LUN list, select the LUN that will
comprise the secondary mirror image.
8. Click OK to add the secondary image and close the dialog box.
The application places an icon for the secondary image under the
remote mirror image icon in the Storage tree.


Advanced Add Secondary Image

The Advanced Add Secondary Image dialog box lets you supply your own values for the advanced parameters.
1. In the basic Add Secondary Image dialog box, click Advanced.
The Advanced Add Secondary Image dialog box opens, similar to the following. For information on the properties in the dialog box, click Help.

2. In Remote Mirror Name, select the remote mirror to which a secondary mirror image is to be added.
3. In Secondary Storage System, select the secondary storage
system on which the secondary mirror image is to reside.
4. Click Auto Select to specify that the application is to determine
the LUN on the specified secondary storage system that will
comprise the secondary mirror image.


To choose the LUN yourself, deselect Auto Select. In the Select Secondary Mirror Image LUN list, select the LUN that will comprise the secondary mirror image.
5. To perform a full synchronization on the newly added secondary
mirror image, click Initial Sync Required. To prevent a full
synchronization, clear Initial Sync Required.

If there is no pre-existing data on the primary image, it is not necessary to synchronize the secondary image when it is added.

To ensure that no data is written to the primary image until the secondary image is added, deactivate the primary image (see Deactivating a Remote Mirror on page 9-39).

6. View or modify the following information:
Recovery Policy specifies the policy for recovering the secondary mirror image after a failure.
• Click Automatic to specify that recovery is automatic as soon
as the primary image determines that the secondary mirror
image is once again accessible.
• Click Manual to specify that the administrator must explicitly
start a synchronization operation to recover the secondary
mirror image.
Synchronization Rate specifies a relative value for the
synchronization write delay parameter for the secondary mirror
image. Click Low, Medium, or High.
7. Click OK to add the secondary image and close the dialog box.
The application places an icon for the secondary image under the
remote mirror image icon in the Storage tree.


Create Secondary Image LUN

The Create Secondary Image LUN dialog box lets you create a secondary image that is the same LUN size and RAID type as the primary image.
To create a secondary image LUN, there must be a RAID Group on the secondary storage system that matches the RAID type of the primary image. If one does not exist, you can create one.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the primary
image resides.
3. Double-click the RAID Groups icon, and then double-click the
icon for the RAID Group on which the primary LUN resides.
4. Right-click the icon for the LUN for which you want to create a
secondary image LUN, and then click Create Secondary Image
LUN.
The Create Secondary Image LUN dialog box opens, similar to
the following. For information on the properties in the dialog box,
click Help.

5. Verify that the entries in Primary Storage System and Primary LUN are correct.
6. In Select Secondary Storage System, select a storage system on
which to create the secondary image LUN.


7. In Select RAID Group, select the RAID Group for the secondary
image LUN.
Select RAID Group only lists valid RAID Groups for the secondary LUN. A RAID Group is valid if it has the same RAID type as the primary LUN or is unbound.

If a valid RAID Group does not exist on the secondary storage system,
click New RAID Group to open the Create RAID Group dialog box.

8. Click OK to continue and create the secondary image LUN.

If the RAID Group you select for the secondary image has no LUNs (it is
Unbound), the Select RAID Type dialog box opens. Assign a RAID Type
for the secondary mirror image LUN here.

The application places a LUN in the selected or new RAID Group on the secondary storage system.

Promoting a Secondary Image to Primary


Remote mirrors provide quick recovery in the event of a catastrophic
failure at the primary storage site. To accomplish this recovery and
restore I/O access, you must promote a secondary mirror image to
the role of primary mirror image. You can also promote a secondary
image even if there has not been a catastrophic failure.
If the secondary image is still able to communicate with the primary
image, the image currently serving in the role of primary (if available)
is demoted and becomes a secondary image for the remote mirror.
To promote a secondary image, the following conditions must be true:
• The storage system hosting the secondary mirror image must be
currently managed by the application.
• The state of the secondary image to be promoted must be either
Consistent or In-Sync.
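The two preconditions above amount to a simple check; the following is a hypothetical sketch (the function and argument names are illustrative, not part of the Navisphere API):

```python
def can_promote(secondary_state, secondary_system_managed):
    """Return True if a secondary image may be promoted: its storage
    system must be managed by the application and its image state
    must be Consistent or In-Sync. Illustrative model of the two
    bullets above only."""
    return secondary_system_managed and secondary_state in ("Consistent", "In-Sync")
```

A fractured or synchronizing image, or one on an unmanaged storage system, fails the check and cannot be promoted.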


To Promote a Secondary Image

If the existing primary image is accessible, you should deactivate the mirror and remove the primary image from any Storage Groups before promoting the secondary image.

1. In the Enterprise Storage dialog box, click the Storage tab.


2. Double-click the icon for the storage system on which the
secondary image resides.
3. Double-click the Remote Mirrors Container node.
Remote mirrors appear under the Remote Mirrors Container
node.
4. Right-click a secondary mirror image, and then click Promote.
The current primary image, if accessible, is demoted so that it is
now a secondary mirror image for the remote mirror.
The promoted secondary image must be added to the appropriate
Storage Group.
5. Activate the promoted secondary image. See Activating a Remote
Mirror on page 9-26.

You can also use the Remote Mirrors Properties - Secondary Image tab
to promote a secondary image.

Synchronizing a Secondary Image


When a secondary mirror image becomes Out-of-Sync with the primary mirror image, you must synchronize the secondary image so that the data on the secondary image LUN once again matches the data on the primary image LUN.
Synchronization is not possible if any of the following conditions is
true:
• The storage system hosting the primary mirror image is not
currently managed by the application.
• The secondary mirror image is not fractured; for example, its state is In-Sync, Synchronizing, or Consistent.


To Synchronize a Secondary Image

You can also use the Remote Mirrors Properties - Secondary Image tab to synchronize a secondary image.

1. In the Enterprise Storage dialog box, click the Storage tab.


2. Double-click the icon for the storage system on which the
secondary image resides.
3. Double-click the Remote Mirrors container node.
Remote mirrors appear under the Remote Mirrors container
node.
4. Right-click the icon for a secondary mirror image, and then click
Synchronize.


Fracturing a Secondary Image


A fracture stops the mirroring I/O from the primary image to a secondary mirror image. A fracture can occur automatically, because of a failure in the path to the secondary image’s SPs, or manually, by an administrative action (or both).
In some cases, you may want to fracture the secondary mirror image
from the remote mirror. You might administratively fracture an
image to perform preventive maintenance. You can then perform the
maintenance, which might include several system startups, without
having the primary begin synchronization each time the secondary
becomes available. After the maintenance, you can then start
synchronization.
When a secondary image is fractured, the primary image storage
system does not forward any writes to the secondary image storage
system and, therefore, the secondary image LUN may not have the
same data as the primary image LUN.
Administrative fracture is not possible if any of the following
conditions is true:
• The storage system hosting the primary mirror image is not
currently managed by the application.
• The secondary mirror image is already administratively fractured.

To Fracture a Secondary Image

1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the secondary image resides.
3. Double-click the Remote Mirrors container node.
Remote mirrors appear under the Remote Mirrors container
node.
4. Right-click a secondary mirror image, and then click Fracture.

You can also use the Remote Mirrors Properties - Secondary Image tab
to fracture a secondary image.


Removing a Secondary Image from a Remote Mirror


You may want or need to remove a secondary mirror image from a
remote mirror.
You cannot remove the secondary image if the storage system hosting
the primary mirror image is not currently managed by the
application.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the
secondary image resides.
3. Double-click the Remote Mirrors container node.
Remote mirrors appear under the Remote Mirrors container
node.
4. Right-click a secondary mirror image, and then click Remove.
If the action is successful, the application confirms that the image
was removed and removes the icon for the secondary image from
the Storage tree. Otherwise, the application displays an error
message.

You can also use the Remote Mirrors Properties - Secondary Image tab
to remove a secondary image.


Destroying a Remote Mirror


At some point you may want to stop mirroring a LUN. To do this,
you must destroy the remote mirror for the LUN. When you destroy
a remote mirror, you destroy the mirror’s data structure, thus
eliminating its mirroring capabilities. You do not destroy any data
stored on the LUN.
There are two ways to destroy a remote mirror: Destroy and Force
Destroy.

! CAUTION
Force Destroy should only be used in disaster recovery situations.
Normal safety checks are bypassed during the force destroy
operation. The force destroy operation can cause SP failures if used
incorrectly.

To use Destroy, the storage system hosting the primary image must
be managed by the application and you must do the following:
• Remove all secondary images from the remote mirror. See
Removing a Secondary Image from a Remote Mirror on page 9-49.
• Deactivate the mirror. See Deactivating a Remote Mirror on
page 9-39.
To use Force Destroy, the primary image should be removed from
any Storage Groups. See Adding or Removing LUNs from Storage
Groups on page 12-38.

Force Destroy destroys a remote mirror regardless of whether there are any
secondary images.


To Destroy a Remote Mirror Using Destroy


1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the remote
mirror resides.
3. Double-click the Remote Mirrors Container node.
Remote mirrors appear under the Remote Mirrors container
node.
4. Right-click a remote mirror node, and then click Destroy.
The application removes the icon for the remote mirror from the
Storage tree.

To Destroy a Remote Mirror Using Force Destroy

! CAUTION
Force Destroy should only be used in disaster recovery situations.
Normal safety checks are bypassed during the force destroy
operation. The force destroy operation can cause SP failures if used
incorrectly.

1. In the Enterprise Storage dialog box, click the Storage tab.


2. Double-click the icon for the storage system on which the remote
mirror resides.
3. Double-click the Remote Mirrors container node.
Remote mirrors appear under the Remote Mirrors container
node.
4. Right-click a remote mirror node, and then click Force Destroy.
The application removes the icon for the remote mirror from the
Storage tree.


10
Setting Up and Using SnapView

This chapter introduces the EMC SnapView option, which captures a snapshot image of a LUN that can be used for decision support, testing, or backup while the production work continues.

The features in this chapter function only with a storage system that has the
optional SnapView software installed.

• SnapView Overview ........................................................................10-2
• SnapView Components...................................................................10-3
• SnapView Requirements .................................................................10-4
• SnapView Operations Overview ...................................................10-4
• Setting Up SnapView.......................................................................10-9
• Adding a Snapshot to a Storage Group ......................................10-15
• Destroying a Snapshot...................................................................10-16
• Using SnapView .............................................................................10-17
• Displaying Snapshot Component Properties.............................10-21
• Displaying Status of All Snapshots and Snapshot Sessions ....10-25


SnapView Overview
SnapView is a software application that captures a snapshot image of
a LUN and retains the image independently of any subsequent
changes to the LUN. The snapshot image can serve as a base for
decision support, revision testing, backup, or in any situation where
you need a consistent, stable image of real data.
SnapView can create or destroy a snapshot in seconds, regardless of
the LUN size, since it does not actually copy data. The snapshot
image consists of the unchanged LUN blocks and, for each block that
changes from the snapshot moment, a copy of the original block. The
software stores the copies of original blocks in a private LUN called
the snapshot cache. For any block, the copy happens only once, when
the block is first modified. In summary:
snapshot = unchanged-blocks-on-source-LUN + cached-blocks
As time passes, and I/O modifies the LUN, the number of blocks
stored in the snapshot cache grows. However, the snapshot,
composed of all the unchanged blocks — some from the source LUN
and some from the snapshot cache — remains unchanged.
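The copy-on-first-write scheme just described can be modeled in a few lines. This is a simplified sketch under stated assumptions, not EMC code: real chunks are fixed-size blocks on disk, not Python dictionary entries.

```python
class Snapshot:
    """Toy copy-on-first-write model: the snapshot view of a source
    LUN is the unchanged source blocks plus cached originals."""

    def __init__(self, source):
        self.source = source   # live LUN: block index -> data
        self.cache = {}        # originals saved on first change

    def write(self, block, data):
        # Production write: save the original only the first time
        # a block changes after the snapshot moment.
        if block not in self.cache:
            self.cache[block] = self.source[block]
        self.source[block] = data

    def read(self, block):
        # Snapshot read: cached original if the block has changed,
        # otherwise the unchanged block on the source LUN.
        return self.cache.get(block, self.source[block])
```

Writes to the source LUN cache each original block exactly once, no matter how often the block changes afterward, so snapshot reads keep returning the pre-session view while production I/O continues.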
The snapshot does not reside on disk like a conventional LUN.
However, the copy appears as a conventional LUN to another host.
The snapshot is readable and writable by any other host. This host
can access the copy for data processing analysis, testing, or backup.
A snapshot is accessible for only as long as the snapshot session lasts. If the storage system loses power while the session is running, the snapshot is lost.


The following figure shows an overview of a snapshot session.


[Figure: a production host issues continuous I/O to the source LUN on the storage system while a second host accesses the snapshot copy, a composite of LUN and snapshot cache data that is accessible as long as the session lasts.]

Figure 10-1  Snapshot Session Overview

SnapView provides several important benefits:
• It allows full access to production data with modest impact on performance;
• For decision support or revision testing, it provides a coherent,
readable and writable copy of real production data at any given
point in time; and
• For backup, it practically eliminates the time that production data
spends off line or in hot backup mode. And it off-loads the
backup overhead from the production host to another host.

SnapView Components
SnapView uses three components: a production host, a second host,
and a snapshot session.
The production host
• runs the customer applications that you want to copy.
• owns the source LUN.


• allows the management software to create, start, and stop snapshot sessions.
The second (or backup) host
• reads or writes to the snapshot copy during the snapshot session.
• performs an independent analysis or backup task using the copy.
A snapshot session
• begins when you start a session (using Manager, CLI or admsnap)
and ends when you stop the session.
• makes the snapshot accessible to the second host.
• retains the name the administrator assigned at session startup as
long as the session is active (sessions for other LUNs can share a
session name).

SnapView Requirements
SnapView has the following requirements:
• The snapshot cache must be established on one or more LUNs
that do not belong to a Storage Group.
• The source and cache LUNs must be owned by the same SP.
• You can use the snapshot in only one snapshot session at a time.
• The source LUN and snapshot must be assigned to different
Storage Groups. The source LUN Storage Group must be
accessible to the production host and the snapshot Storage Group
to the other host.
• The production and second hosts must run the same operating
system.

SnapView Operations Overview


The following steps explain how to use SnapView.
1. Determine which LUNs you want to copy with SnapView. The
size of these LUNs will help you decide on an approximate
snapshot cache size.


The LUN(s) in the snapshot cache are needed to store chunks of the original data when that data is first updated on the source LUN. The cache can use any LUN that is not part of a Storage Group. (This requires forethought, since normally you assign each LUN that will store user data to a Storage Group.)
The snapshot cache is shared by all the active sessions on a
specific SP. The cache should be large enough for all concurrent
sessions on an SP.
An adequate snapshot cache size is essential. A general guideline
for cache size is 10% of the size of the LUN you want to copy;
however, you may want to use a larger cache size, such as 20%,
50%, or even 100%. SnapView has a copy simulation mode to help
you determine an ideal cache size.
2. On the production host, bind one or more LUNs for each SP to the
size you determined for the snapshot cache.
The source and snapshot cache LUNs must be owned by the same
SP. That SP manages the cache space and apportions it to all
source LUNs that are involved in a snapshot session.
3. On the production host, create the snapshot.
A snapshot occupies no disk space and is accessible only for the
duration of the snapshot session. To create a snapshot, use
Navisphere Manager or CLI.
We recommend that you assign the snapshot to a Storage Group
other than the Storage Group that holds the source LUN. Make
sure that the snapshot Storage Group is accessible by the second
host. Navisphere Manager will prompt you for this. This step is
needed only once.


4. On the production host, allocate the snapshot cache.


You allocate LUNs to the cache using the Manager Snapshot
Cache Properties dialog box.
If the snapshot cache size is too small, you can add an additional
LUN to the snapshot cache pool or, if multiple snapshot sessions
are running, terminate a session to free its cache space. If the
snapshot cache fills up, SnapView terminates the session that
encountered the error, logs an error, and releases the cache space
used by that session.
When you allocate the snapshot cache, you specify a cache chunk
size. The chunk size is the size of each cache write. With the
default chunk size of 64 Kbytes, the software divides the source
LUN into 64-Kbyte blocks and, if any part of a 64-Kbyte block is
changed during the session, the software retains the original
image of that block in the cache. The chunk size applies to both
SPs. You can change the chunk size after you allocate the cache if
there are no active snapshot sessions running on the storage
system.
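The chunk granularity matters because a write that touches any byte of a 64-Kbyte chunk causes the whole original chunk to be copied to the cache. The arithmetic can be sketched as follows (an illustrative model, not EMC code):

```python
CHUNK = 64 * 1024  # default cache chunk size in bytes

def chunks_touched(offset, length, chunk=CHUNK):
    """Return the range of chunk indexes that a write of `length`
    bytes at byte `offset` overlaps. Each such chunk's original
    contents are copied to the snapshot cache the first time it is
    modified during a session. Hypothetical helper for illustration."""
    first = offset // chunk
    last = (offset + length - 1) // chunk
    return range(first, last + 1)
```

For example, a 10,000-byte write starting at byte 60,000 straddles a 64-Kbyte boundary, so two whole chunks (128 Kbytes of original data) are preserved in the cache.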
5. On the production host, start a snapshot session.
You can start a snapshot session using the Navisphere Manager
Start Session function or the admsnap start operation.
The session name identifies the session. You create the name
when you start a snapshot session, use it to activate the snapshot
and then use it to stop the session. You can use the same name for
more than one snapshot session. A snapshot is accessible for only as long as the snapshot session lasts. If the storage system loses power while the session is running, the snapshot is lost.
6. On the second host, identify the snapshot to the operating system. The procedure depends on the operating system; for example, with Windows you need to run Disk Admin. This step is needed only once, as part of the SnapView initial setup.
7. On the second host, if you have installed the admsnap utility
(supplied with the SnapView software), then use admsnap as
follows to make the new session available.
On the second host, enter an admsnap activate command as
explained in the Admsnap Host Management Utility Administrator’s
Guide. On a Windows system, the admsnap command returns a
drive letter that you can use immediately to access the frozen


data. On a Unix system, the command returns a device name on which you must run fsck and then mount to make a file system available for use.
If you have not installed admsnap, then you must reboot the
second host or, using some other means, cause it to recognize the
new device created when you started the snapshot session.
Installing admsnap is explained in the Admsnap Host Management
Utility Administrator’s Guide.
8. On the second host, use the backup host’s frozen data as you
wish: for modeling, testing, or backup. As needed, mount any file
systems or start database software to analyze the frozen data or
back it up.
9. Before stopping the snapshot session, make the snapshot session
unavailable to the second host by doing one of the following:
• Windows server - use the admsnap deactivate command as
explained in the Admsnap Host Management Utility
Administrator’s Guide.
• Unix server - use the unmount command.
10. On the production host, stop the snapshot session.
You can stop the session using Manager. Stopping a session frees
the snapshot cache space and any SP memory used for the
session. The newly freed snapshot cache space becomes available
for other snapshot sessions. Stopping a session also makes the
snapshot appear offline.
For future snapshot sessions, you need only step 5 and steps 7
through 10.


Snapshot Session

The following figure shows how a snapshot session starts, runs, and stops.

1. Before session starts 2. At session start (2:00 pm) 3. At start of operation (2:02 pm)
Production Second Production Second Production Second
host host host host host host

LUN Snapshot Snapshot LUN cache Snapshot LUN cache Snapshot


(144 cache copy copy copy (pointers
Gbytes) to chunks)

4. At end of operation (4:15 pm) 5. At session end (4:25 pm)


Production Second Production Second
host host host host

Snapshot copy
LUN cache (pointers to chunks)

LUN cache Snapshot


copy

= Unchanged chunks on source LUN

= Changed chunks on source LUN

= Unchanged chunks in cache and snapshot

Figure 10 -2 How a Snapshot Session Starts, Runs, and Stops
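The chunk-level copy-on-write behavior illustrated in Figure 10-2 can be sketched in a few lines. This is a conceptual model only, not SnapView code; the class name and chunk granularity are illustrative:

```python
class SnapshotSession:
    """Conceptual copy-on-write snapshot: the snapshot view is the cache's
    saved original chunks plus the source LUN's unchanged chunks."""

    def __init__(self, source_chunks):
        self.source = source_chunks   # live LUN contents, keeps changing
        self.cache = {}               # original data of chunks changed during the session

    def write(self, index, data):
        # The first write to a chunk copies the original data into the cache.
        if index not in self.cache:
            self.cache[index] = self.source[index]
        self.source[index] = data

    def read_snapshot(self, index):
        # Snapshot view: cached original if the chunk changed, else the live source.
        return self.cache.get(index, self.source[index])

# Session starts at 2:00 pm; the production host then writes chunk 1.
lun = ["a0", "b0", "c0"]
sess = SnapshotSession(lun)
sess.write(1, "b1")
print(lun[1])                  # b1 - the production host sees the new data
print(sess.read_snapshot(1))   # b0 - the second host still sees the frozen data
print(sess.read_snapshot(0))   # a0 - unchanged chunk, read from the source LUN
```

Stopping the session corresponds to discarding the cache dictionary, which is why the snapshot is lost when the session ends.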


Setting Up SnapView
Before starting a snapshot session, you must complete the following
tasks.
• Make sure that you have bound LUNs available for the snapshot
cache.
• Create snapshots and, for shared storage systems, assign the
snapshots to Storage Groups.
• Configure the snapshot cache.

Binding LUNs for the Snapshot Cache


Before binding LUNs for the snapshot cache, think about which
source LUNs will participate in a snapshot session. The size of these
LUNs will help you determine an approximate snapshot cache size.
To bind LUNs, see Creating LUNs on RAID Groups on page 7-27.

The snapshot cache must be established on one or more LUNs that do not belong to a Storage Group.

Snapshot Cache Size

An adequate snapshot cache is essential. Since the snapshot cache stores only blocks of the source LUN’s original data when that data is first updated on the source LUN, a general guideline for cache size is 10% of the size of the LUN you want to copy. For example, if the LUN you want to copy belongs to SP A and is 10 Gbytes in size, you will need a snapshot cache size of at least 1 Gbyte for SP A. See Configuring an SP’s Snapshot Cache on page 10-12.

If you intend to write to the snapshot, make sure that the snapshot cache is
large enough to store these writes, since all writes to the snapshot are stored
in the snapshot cache.
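As a rough illustration of the 10% guideline, and of reserving extra room when the second host will write to the snapshot, the arithmetic might be sketched as follows. The helper name and default figures are illustrative, not part of any Navisphere tool:

```python
def suggested_cache_gb(source_lun_gb, write_gb=0.0, guideline=0.10):
    """Estimate the snapshot cache size for an SP: roughly 10% of the
    source LUN, plus room for any writes made directly to the snapshot."""
    return source_lun_gb * guideline + write_gb

# A 10-Gbyte source LUN owned by SP A needs at least 1 Gbyte of cache on SP A.
print(suggested_cache_gb(10))                # 1.0
# Add headroom if the second host will also write to the snapshot.
print(suggested_cache_gb(10, write_gb=0.5))  # 1.5
```

Remember that the cache must be configured on the same SP that owns the source LUN, so size each SP's cache separately.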

The same SP that owns the snapshot source LUNs must own the
snapshot cache LUNs. The SP manages the cache space and
apportions it to all source LUNs that are involved in a snapshot
session. Therefore, if you plan to create snapshots for LUNs owned
by SP A and LUNs owned by SP B, configure the snapshot cache for
both SPs.


Creating a Snapshot

A snapshot is a composite of the source LUN and snapshot cache data that lasts only as long as the snapshot session. Each snapshot is linked to a source LUN. SnapView creates the snapshot and, for shared storage systems, places the LUN in a Storage Group.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the source
LUN resides, and then double-click the icon for the SP to which
the source LUN belongs.
3. Right-click the icon for the source LUN, and then click Create
Snapshot.
The Create Snapshot dialog box opens, similar to the following.
For information on the properties in the dialog box, click Help.

4. In Storage System and Snapshot Source LUN, verify that you are
creating the snapshot for the correct source LUN on the correct
storage system.
5. In Snapshot Name, enter a unique name for the snapshot.

If you enter an invalid name and then click OK, the application displays
an error message.


6. In Server Accessibility, select the host that you want to have access to the new snapshot.
For shared storage systems, Server Accessibility lists all hosts
that are attached to the storage system. For unshared storage
systems, Server Accessibility appears dimmed and is
unavailable.

The default value for Server Accessibility is None - no host has access to
the new snapshot.

Before you can access a snapshot, you must add the new snapshot to a
Storage Group and connect a host to the Storage Group.

7. Click OK to create the snapshot.
The application creates the snapshot and, if the action is successful, does the following:
• Places an icon and description for the new snapshot in the
Snapshots container associated with the snapshot’s source
LUN. The format for the snapshot description is
snapshotname[Flare Lun X; Offline]
where Offline means the LUN is not participating in a
snapshot session.
• Adds the text, Snapshot Offline to the description for the
snapshot’s source LUN. For example:
Flare LUN 18 [0x12; RAID 1; Snapshot Offline]
where Snapshot Offline means the source LUN is not
participating in a snapshot session.
• For shared storage systems, Navisphere assigns the snapshot
to a Storage Group as follows:

We recommend that you assign the snapshot to a Storage Group other than the Storage Group that holds the source LUN.

If the host you select in Server Accessibility
• connects to a Storage Group, Navisphere places the snapshot into that Storage Group and asks you to confirm the action.


• does not connect to any Storage Group, Navisphere creates a new Storage Group, places the snapshot in that Storage Group, and asks you to confirm the action.

If you select None in Server Accessibility, no host has access to the new snapshot and the snapshot is not placed in any Storage Group. To add the snapshot to a Storage Group, see Adding a Snapshot to a Storage Group on page 10-15.

If the action is successful, Navisphere displays a confirmation message and closes the dialog box. If the action fails, Navisphere displays an error message and does not close the dialog box.

Configuring an SP’s Snapshot Cache


The snapshot cache consists of one or more private LUNs, and the
size of the cache is the aggregate of all these LUNs. The LUNs in the
snapshot cache are needed to store blocks of the original data when
that data is first updated on the source LUN. For any one snapshot
session, the contents of the snapshot cache and any unchanged source
LUN blocks comprise the snapshot. The SP that owns the snapshot
source LUN must own the snapshot cache LUNs. The SP manages the
cache space and apportions it to all source LUNs that are involved in
a snapshot session.
Each SP has its own snapshot cache, and before starting a snapshot
session, the cache must contain at least one LUN. You can increase or
decrease the size of the cache by adding or removing LUNs.
You can run a snapshot session in simulation mode to help you
determine a suitable cache size. If you determine that the cache size is
too small, you can add LUNs to the snapshot cache.
If multiple snapshot sessions are running, and you determine that the
cache size is running out of space, you can terminate a session to free
its cache space, or you can add LUNs to the snapshot cache while the
session is active. If the snapshot cache fills up, the software
automatically terminates the session that encountered the error, logs
an error, and releases the cache space used by that session. At this
point, the snapshot is deleted and any host that has mounted
volumes on the snapshot will lose those volumes.


To Configure an SP’s Snapshot Cache

If you plan to create snapshots for LUNs owned by SP A and LUNs owned by SP B, configure the snapshot cache for both SPs.

1. In the Enterprise Storage dialog box, click the Storage tab.


2. Double-click the icon for the storage system for which you want
to configure the snapshot cache, and double-click Snapshot
Cache.
3. Right-click either the SP A or SP B cache icon, and click
Properties.


The Snapshot Cache Properties dialog box opens, similar to the following. For information on the properties in the dialog box, click Help.

4. In Chunk Size (Blocks), select the chunk size (size of each cache
write) for both SPs.
5. In Available LUNs, select the LUNs that you want to add to the
snapshot cache for each SP.
Available LUNs lists only those LUNs that are eligible for
inclusion in the snapshot cache.


If there are no LUNs in the list or if none of the available LUNs are the necessary size, you must bind new LUNs and then add them to the snapshot cache. To bind new LUNs, see Creating LUNs on RAID Groups on page 7-27.
6. To add the selected LUNs to SP A’s snapshot cache, click Add to
SP A Cache; to add the LUNs to SP B’s cache, click Add to SP B
Cache.
The selected LUNs move to the Member LUNs list for SP A or SP
B, and the value in Modified Capacity is updated to reflect the
changes.

Modified Capacity is the current size of the SP’s snapshot cache.

7. To remove LUNs from the Member LUNs list, select the LUNs
you want to remove, and then click Remove from Cache.
8. When you have added all the LUNs you want to the snapshot
cache, click OK to apply the changes and close the dialog box.
The snapshot cache LUNs are added to either the SP A Cache or
SP B Cache container in the Storage tree.

Adding a Snapshot to a Storage Group


Before starting a snapshot session, the snapshot must belong to a
Storage Group and the Storage Group must connect to a host.

We recommend that you assign the snapshot to a Storage Group other than
the Storage Group that holds the source LUN.

To Add a Snapshot to a Storage Group

If the host that will have access to the snapshot already connects to a Storage
Group, add the snapshot to that Storage Group. If you create a new Storage
Group for the snapshot and then connect the host to the new Storage Group,
the host will be removed from the original Storage Group and will no longer
have access to the LUNs in that Storage Group.

1. In the Enterprise Storage dialog box, click the Storage tab.


2. Double-click the icon for the storage system to which the
snapshot belongs.


3. Double-click the icon for the SP that owns the snapshot source
LUN.
4. Double-click the icon for the Snapshots container.
5. Right-click the icon for the snapshot you want to add to a Storage
Group, and then click Add to Storage Groups.
6. In available Storage Groups, select the Storage Group to which
you want to add the snapshot.
The Storage Group moves to Selected Storage Groups.
7. Click OK to add the snapshot to the Storage Group.
8. To connect a host to the Storage Group, right-click the Storage
Group, and then click Connect Hosts.
The Connect Hosts to Storage dialog box opens. To connect the
host to the Storage Group, refer to page 8-9.

Destroying a Snapshot
When you destroy a snapshot, the following is true:
• If the snapshot is participating in a snapshot session, the
application stops the session prior to destroying the snapshot.
• If the snapshot belongs to one or more Storage Groups and you
destroy the snapshot, the hosts connected to the Storage Groups
will no longer have access to the destroyed snapshot.

To Destroy a Snapshot

1. Right-click the icon for the snapshot you want to destroy, and then click Destroy Snapshot.
2. In the confirmation dialog box, click Yes to destroy the snapshot.
The application removes the snapshot icon from the Snapshots
container in the Storage tree.


Using SnapView
Use SnapView to start and stop a snapshot session; to display the status and properties of the snapshot cache, snapshot sessions, and snapshots; and to verify that the snapshot cache is the necessary size.

If the MirrorView option is installed, we recommend that you do not take a snapshot of the remote mirror secondary copy while the secondary copy is being synchronized. The data may not be useful.

Starting a Snapshot Session


You can start a snapshot session in either normal or simulation mode.
A normal snapshot session stores both a copy of the unchanged LUN
data and statistical data, such as the number of writes to the cache. A
session started in simulation mode records only the statistical data.
Before starting a snapshot session, the following must be true:
• The snapshot should belong to a Storage Group and the Storage
Group must connect to a host.
• The SP cache must contain at least one LUN. See Configuring an
SP’s Snapshot Cache on page 10-12.

A snapshot is accessible for only as long as the snapshot session lasts. If the storage system loses power while the session is running, the snapshot is lost.

Normal Mode Snapshot Session

A normal snapshot session stores both a copy of the unchanged source LUN data and statistical data, such as the number of writes and reads to the cache. When a session is active, a host can read data from the snapshot.
To Start a Session in Normal Mode
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Right-click the icon for the storage system on which you want to
start the snapshot session, and click Start Snapshot Session.

If Start Snapshot Session appears dimmed and is unavailable, check that there is at least one LUN in the SP’s snapshot cache, and make sure that there is at least one available snapshot (not participating in a snapshot session).


The Start Snapshot Session dialog box opens, similar to the following. For information on the properties in the dialog box, click Help.

3. In Session Name, enter a unique name for the snapshot session.

To agree with an admsnap requirement for a valid session name, include only letters, numbers, and underscores in Session Name. Symbols and spaces are not supported.

If you do not enter a unique name and then click OK, the software
displays an error message.

4. In Snapshots Available, select the LUNs that you want to participate in the snapshot session.

If you start the session by right-clicking a storage-system icon, the list includes all available snapshots on the storage system. If you start a session by right-clicking a snapshot icon, the list includes only that snapshot.

5. Click OK to start the session, and if the action is successful, the application does the following:
• Places an icon and description for the active snapshot session
in the Snapshot Sessions container in the Storage tree.
• Places an icon and description for each snapshot participating
in the session under the icon for the active snapshot session.


• Changes the status of the participating snapshot and source LUN from Offline to Online in the Storage tree.

On the second host, if you have installed the admsnap utility (supplied
with the SnapView software), then use admsnap as follows to make the
new session available for use.

On the second host, enter an admsnap activate command as explained in the Admsnap Host Management Utility Administrator’s Guide. On a Windows system, the admsnap command returns a drive letter that you can use immediately to access the frozen data. On a Unix system, the command returns a device name on which you must run fsck and then mount to make a file system available for use.

If you have not installed admsnap, then you must reboot the second host or, using some other means, cause it to recognize the new device created when you started the snapshot session. Installing admsnap is explained in the Admsnap Host Management Utility Administrator’s Guide.

6. Stop the session (see Stopping a Snapshot Session on page 10-20) when activity on the snapshot is complete. Stopping a session frees the snapshot cache LUN space used by the session and any SP memory used to maintain the session image. The newly freed snapshot cache space becomes available for other snapshot sessions. Stopping a session also makes the snapshot appear offline.
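The session-name rule noted in step 3 (letters, numbers, and underscores only, per the admsnap requirement) can be checked before you create a session. This regex check is an illustration, not part of Navisphere:

```python
import re

def valid_session_name(name):
    """Accept only letters, digits, and underscores - no symbols or spaces."""
    return re.fullmatch(r"[A-Za-z0-9_]+", name) is not None

print(valid_session_name("nightly_backup_01"))  # True
print(valid_session_name("nightly backup"))     # False (contains a space)
```

Validating names up front avoids the error message that Manager displays when you click OK with an invalid name.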

Simulation Mode Snapshot Session

Starting a session in simulation mode helps you verify that you have correctly configured the size of the snapshot cache. Unlike a session run in normal mode, a session run in simulation mode does not store a copy of the unchanged source LUN data. It records only the statistical data, such as the number of writes to the cache that would have occurred had this not been a simulation. This data provides a reasonable approximation of how large the snapshot cache should be for this session. We recommend that you make the snapshot cache larger than required so that you do not run out of cache disk space.

While the session is running, use the Snapshot Session Properties dialog box
to monitor the snapshot cache usage for the SP. (See To Monitor the Snapshot
Cache Usage on page 10-24.)


To Start a Session in Simulation Mode


1. Set up the session as you would a normal session — assign a
name to the session, and select LUNs from the Snapshots
Available list.
2. In the Start Snapshot Session dialog box, be sure to select
Simulation Mode.
3. Click OK to start the simulated session.

What Next?
While the session is running, monitor the session to determine if the
snapshot cache is the needed size. See To Monitor the Snapshot Cache
Usage on page 10-24.

Stopping a Snapshot Session


Stopping a session frees the snapshot cache LUN space used by the
session and any SP memory used to maintain the session image. The
newly freed snapshot cache space becomes available for other
snapshot sessions.
If the snapshots participating in the session belong to one or more
Storage Groups and you stop the snapshot session, the hosts
connected to the Storage Groups will no longer have access to the
snapshots in those Storage Groups. Stopping a session makes the
snapshot appear offline.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the
snapshot session is running, and double-click Snapshot Sessions.
3. Right-click the icon for the snapshot session you want to stop,
and click Stop Snapshot Session.
4. Click Yes to stop the session, or click No to continue the session.


Displaying Snapshot Component Properties


Each snapshot, snapshot cache, and snapshot session has a Properties
dialog box associated with it that provides a variety of information
about the component. This section describes how to open the
Properties dialog box for each component and view the component’s
properties.

To Display Snapshot Properties

1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the storage system on which the snapshot resides.
One way to display the snapshot icon is the following:
a. Double-click the Storage Groups icon, and then double-click
the Storage Group on which the snapshot LUN resides.
b. Double-click Snapshot.
3. Right-click the snapshot icon for which you want to display
properties, and click Properties.


The Snapshot Properties dialog box opens, similar to the following. For information on the properties in the dialog box, click Help.

To Display Snapshot Cache Properties

1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the storage system on which the snapshot cache resides, and double-click the Snapshot Cache icon.
3. Right-click SP A or SP B, and click Properties.


4. The Snapshot Cache Properties dialog box opens, similar to the following. For information on properties in the dialog box, click Help.

To Display Snapshot Session Properties

You can view statistics, such as total reads and writes, for an active session. You can also monitor the snapshot cache usage for the SPs and determine if the cache is the necessary size.
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system on which the
snapshot session is running, and double-click Snapshot Sessions.
3. Right-click the icon for the session you want to monitor, and click
Properties.


The Snapshot Session Properties dialog box opens, similar to the following. For information on the properties in the dialog box, click Help.

4. To display the statistics for this session, click the Statistics tab.

To Monitor the Snapshot Cache Usage

1. To help determine if the snapshot cache is the necessary size, under Snapshot Cache in the dialog box, check the value for Session Usage for SP A or SP B (%).
If the SP usage registers at 80% to 90%, you may want to increase the size of the snapshot cache. (See Configuring an SP’s Snapshot Cache on page 10-12.)
2. To display a list of all LUNs participating in this session, click the
Member LUNs tab.
3. Click Close to close the dialog box.
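A script that periodically reads the Session Usage value could apply the 80% rule of thumb from step 1 like this. The function name and threshold wiring are illustrative, not a Navisphere API:

```python
def cache_needs_growth(session_usage_pct, threshold_pct=80):
    """Flag an SP's snapshot cache for expansion once session usage
    reaches the 80%-90% range mentioned above."""
    return session_usage_pct >= threshold_pct

# SP A at 85% usage should be grown before the cache fills and the
# software terminates the session; SP B at 40% is fine.
print(cache_needs_growth(85))  # True
print(cache_needs_growth(40))  # False
```

Acting at the threshold matters because a full snapshot cache causes the software to terminate the session and delete the snapshot.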


Displaying Status of All Snapshots and Snapshot Sessions


You can view the status of all snapshots on all managed storage
systems as well as the status of all snapshot sessions.

To Display Status for all Snapshots


On the Operations menu in the Main window, click SnapView
Summary.
The SnapView Summary dialog box opens, similar to the following.
For information on the properties in the Snapshots tab, click Help.

To Display Status for all Snapshot Sessions


In the SnapView Summary dialog box, click the Sessions tab. For
information on the properties in the Sessions tab, click Help.

11
Monitoring Storage-System Operation

You monitor the operation of managed storage systems using the Main window. You can also monitor their operation by checking the event messages that the Agent receives from an SP.
This chapter describes the following:
• Updating Storage-System Information......................................... 11-2
• Monitoring Storage-System Operation......................................... 11-8
• Monitoring Failover Software ...................................................... 11-21
• Displaying Storage-System Component and Server Status..... 11-22
• Displaying NAS Device Status..................................................... 11-28
• Using the SP Event Log - Non-FC4700 Series............................ 11-29
• Using the SP Event Logs - FC4700 Series ................................... 11-34
• Using the Events Timeline Window............................................ 11-38


Updating Storage-System Information


Before using Manager to monitor the operation of managed storage
systems, make sure Manager has up-to-date information on these
storage systems.
This section explains the following:
• How the Agent’s storage-system polling affects the current state
of Manager’s storage-system information.
• The two ways to poll for storage-system information.
• How to set the automatic polling properties.
• How to manually poll storage systems.

Currentness of Manager’s Storage-System Information


Manager updates its information for each storage system by polling
the Agent on the managed servers for changes in:
• the storage system's field-replaceable units (FRUs)
• LUNs
• RAID Groups
The currentness of the information depends on when the Agent last
polled the storage system.
The time when the Agent polls a storage system depends on its polling interval, which you or another user set in the Agent’s configuration file. A client application such as Manager may poll the Agent frequently.
The Agent, however, polls the storage system only after the polling
interval has elapsed. That is, no matter how frequently the client
application polls the Agent, the Agent does not poll the storage
system until the elapse of the polling interval. In this way, the Agent
polling interval prevents client applications from overwhelming the
Agent with excessive poll requests.
Each time the Agent polls a storage system, it updates its information
for the storage system.


Because the Agent does not always poll a storage system every time a
client application requests a poll, the client application represents
only the information that the Agent currently has for the storage
system. If the information for a storage system has changed, but the
polling interval has not elapsed at the time of the poll request, the
Agent does not poll the storage system. As a result, it cannot notify
the client application of the change.
For example, suppose the Agent polling interval is 60 seconds. If a
client application sends the Agent a request to poll a storage system
at 6:00:00, the Agent polls the storage system and notifies the client
application of any change in the storage system. In this situation, the
client application reflects the current state of the storage system after
the poll request.
As determined by the polling interval, the Agent does not poll the
storage system again until at least 6:01:00. If a client application
requests a poll of the storage system between 6:00:00 and 6:01:00, the
application reflects only the state of the storage system at 6:00:00.
Thus, if a disk in the storage system fails at 6:00:25, the client
applications that request a poll of the storage system between 6:00:25
and 6:01:00 are not notified of the disk failure.
The Agent does not poll the storage system again until it receives the
first client application request to poll the storage system after 6:01:00.
At this time, the Agent:
• polls the storage system
• updates its information on the storage system
• notifies the requesting client application of the disk failure
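The gating behavior described in this section (the Agent polls the storage system only after its polling interval has elapsed, no matter how often clients ask) can be sketched as follows. The class and method names are illustrative, not the actual Agent implementation:

```python
class AgentPollGate:
    """Throttle storage-system polls: a client request triggers a real
    poll only if the polling interval has elapsed since the last one."""

    def __init__(self, interval_s):
        self.interval_s = interval_s
        self.last_poll = None  # time of the last real poll, in seconds

    def request_poll(self, now_s):
        """Return True if the Agent actually polls the storage system."""
        if self.last_poll is None or now_s - self.last_poll >= self.interval_s:
            self.last_poll = now_s
            return True
        return False  # the client sees data from the previous poll

# With a 60-second interval: a request at 6:00:00 polls; a request at
# 6:00:25 is served from the 6:00:00 data (a disk failure at 6:00:25
# goes unreported); a request at 6:01:05 polls again and reports it.
gate = AgentPollGate(60)
print(gate.request_poll(0))    # True
print(gate.request_poll(25))   # False
print(gate.request_poll(65))   # True
```

This is why a client application reflects only the state of the storage system as of the Agent's most recent poll.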


Automatic and Manual Polling for Managed Storage-System Information


Manager can automatically poll one or more storage systems at set
intervals that can vary for each storage system. Manager also lets you
manually poll selected storage systems whenever you want.
Automatic and manual polling are completely independent
procedures; that is, one does not affect the other.
When automatic polling (background polling) is enabled for the
session and for a selected storage system, the frequency at which the
application automatically polls the Agent for storage-system
information equals the automatic polling interval multiplied by the
automatic polling priority for the storage system. For example, if the
automatic polling interval is 300 seconds (5 minutes), and the
automatic polling priority for the storage system is 3, the application
polls the Agent for storage-system information every 900 seconds (15
minutes).
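The multiplication described above can be expressed as a one-line helper; the function name is illustrative:

```python
def effective_poll_period_s(polling_interval_s, polling_priority):
    """Seconds between automatic polls of one storage system: the session's
    automatic polling interval multiplied by that system's polling priority."""
    return polling_interval_s * polling_priority

# A 300-second interval with priority 3 gives a poll every 900 seconds (15 minutes).
print(effective_poll_period_s(300, 3))  # 900
```

Note that a higher priority number therefore means a storage system is polled less often.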

Table 11-1 Default Automatic Polling Property Values

Property                     For                          Default Value
Automatic polling            Manager session              Disabled
Automatic polling interval   Manager session              60 seconds
Automatic polling            Individual storage system    Enabled
Automatic polling priority   Individual storage system    1

Setting Automatic Polling Properties


The automatic polling properties for the session control background
polling, which is the mechanism that maintains the polling interval
counter and sends the request to an Agent to poll a storage system
with automatic polling enabled. You set up background polling by
setting the automatic polling interval and enabling automatic polling
for the session. If you disable background polling, no storage systems
are polled. By default, background polling is disabled.


To Set the Automatic Polling Interval and Enable Automatic Polling


1. In the Main window, on the View menu, click Options.
A User Options dialog box opens, similar to the following.

2. Under Polling, change the polling interval or enable or disable automatic polling:
a. In the Polling Interval box, enter the desired number of
seconds.
b. Select the Automatic Polling check box to enable automatic
polling for the session (that is, enable background polling), or
clear it to disable it for the session (that is, disable background
polling).
3. Click OK to apply the settings and close the dialog box.
All settings are saved for your future sessions of Manager.

To Disable or Re-enable Automatic Polling or Set Polling Priority for an Individual Storage System
1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. Right-click the icon for the storage system whose automatic
polling properties you want to change, and click Properties.


The Properties dialog box for the storage system opens, similar to
the following. For information on the properties in the dialog box,
click Help.

In the storage system Properties dialog box, the following is true:

For all shared storage systems - The Data Access tab is visible.
For non-FC4700 storage systems - The Configuration Access tab is visible.
For FC4700 storage systems - The Storage tab is visible, and if MirrorView is
installed, the Remote Mirrors tab is visible.

3. Under Configuration:
a. Clear the Enable Automatic Polling check box to disable
automatic polling for the storage system, or select it to
re-enable automatic polling for the storage system.
b. In the Automatic Polling Priority box, enter the desired
priority.


4. Click OK to apply the settings and close the dialog box.

Manager will poll the storage system only if automatic polling for the
session (background polling) is enabled.

Manually Polling Storage Systems


You can manually poll all managed storage systems or individual
storage systems.

To Manually Poll All Storage Systems


1. In the Main window, do one of the following:
• Click Poll on the toolbar
• Select the Operations menu on the menu bar
2. Click Poll All Storage Systems.

To Manually Poll an Individual Storage System


1. In the Equipment, Storage or Hosts tree, right-click the icon for
the storage system.
2. Click Poll.


Monitoring Storage-System Operation


This section outlines a procedure you can use to detect and isolate
problems that may arise with managed storage systems. You can vary
the procedure to suit your particular storage-system environment.

1. If icons for any of the storage systems you want to monitor do not
appear in the Enterprise Storage dialog box, follow these steps:
a. In the Main window, select the File menu and click Select
Agents.
The Agent Selection dialog box opens.
b. For each server whose storage systems are missing from the
Main window, do one of the following:
• If the server is in the Agents list, click the icon and click →.
• If the server is not in the Agents list, enter the server
hostname in Agent to Add, and click →.
c. When Managed Agents contains just the storage systems that
you want to manage, click OK.
2. If you want to monitor only some of the storage systems on a
managed server, right-click the icon for the server in the
Enterprise Storage dialog box, and click Unmanage.
3. Either enable automatic polling for all the storage systems, or
manually poll them periodically.
4. Periodically look at the Application icon or the storage-system
icons.
If you are managing many storage systems, it is more convenient
to look at the Application icon than the storage-system icons.
You can update the information the icons reflect by clicking Poll
on the Main window toolbar. As soon as the Agent on the server
for a storage system polls a selected storage system for
information, it responds to your request for updated information.
See the following tables for descriptions of the icon states.


Table 11-2 Application Icon States

Icon Color        Meaning and What To Do Next

Grey              All managed storage systems are in a normal operating
                  state. For more information about a storage system,
                  display its Properties dialog box by right-clicking its
                  icon and clicking Properties.

Flashing blue     All managed storage systems are in a transitional
                  operating state. For more information about a storage
                  system, display its Properties dialog box by
                  right-clicking its icon and clicking Properties.

Flashing orange   One or more storage systems are faulted. Look for orange
                  storage-system icons, and go to Storage-System Faults on
                  page 11-10.
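One plausible reading of the Table 11-2 aggregation rule, expressed as code: any faulted system dominates, then any transitional system, otherwise all systems are normal. The function name is hypothetical and is not a Navisphere API:

```python
def application_icon_state(system_states):
    """Aggregate per-storage-system states into the Application icon color.

    system_states: iterable of "normal", "transitional", or "faulted".
    One interpretation of Table 11-2: faulted outranks transitional,
    which outranks normal.
    """
    states = list(system_states)
    if any(s == "faulted" for s in states):
        return "flashing orange"   # go to Storage-System Faults
    if any(s == "transitional" for s in states):
        return "flashing blue"
    return "grey"

print(application_icon_state(["normal", "transitional"]))  # flashing blue
```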

Table 11-3 Storage-System Icon States

Icon Color  Character  Meaning and What To Do Next

Grey        None       Storage system is operating normally. For more
                       information about the storage system, display its
                       Properties dialog box by right-clicking its icon
                       and clicking Properties.

Blue        T          Storage system is in a transitional operating
                       state. For more information about the storage
                       system, display its Properties dialog box by
                       right-clicking its icon and clicking Properties.

Orange      F          Storage system is faulty. Go to Storage-System
                       Faults on page 11-10. For more information about
                       the storage system, display its Properties dialog
                       box by right-clicking its icon and clicking
                       Properties.

Orange      X          Storage system is inaccessible because this session
                       of Manager has been unable to communicate with the
                       storage system. The device entry for the storage
                       system in the agent configuration file on its
                       server may be wrong.

Orange      ?          Storage system is unsupported because its device
                       entry in the agent configuration file on its server
                       is for a device that Manager does not support.


Storage-System Faults
For information about all storage systems with faults, start with step
1 in the following procedure. For information about a specific storage
system with faults, start with step 2 in the following procedure.

To Display Storage System Faults


1. For information about all storage systems with faults, on the
Main window toolbar, select the Operations menu and click
Faults.
A Fault Status Report dialog box opens, similar to the following.
This dialog box tells you which storage systems failed and why.

You can also display the Fault Status Report dialog box by right-clicking
the storage-system icon, and then clicking Faults.

2. If you want more information about the faulted components in a
storage system:
a. In the Enterprise Storage dialog box, click the Equipment tab
for a faulted hardware component or the Storage tab for a
faulted storage component.
b. In the Equipment or Storage tree, double-click the icon for the
faulted storage system to display icons for its components.
c. For each orange component icon that has a menu associated
with it, examine its properties as follows:
• Right-click the orange icon and click Properties.
The Properties dialog box for the component represented
by the icon opens.


• If the dialog box has different tabs, click a tab to view the
additional properties.
d. For each orange icon that does not have a menu associated
with it, do the following:
• Double-click the icon to display the icons for its
components.
• For each orange component icon, repeat steps c and d.
e. For more information about a FRU represented by an orange
icon, go to the selection listed below for the FRU.

For an Orange             Go to the Section

Disk icon                 Orange Disk Icon on this page.

SP icon                   Orange SP Icon on page 11-14.

LCC icon                  Orange LCC Icon on page 11-15.

Fan A or Fan B icon       Orange Fan A or Fan B Icon on page 11-16.

Power supply or VSC icon  Orange Power Supply or VSC Icon on page 11-18.

SPS icon                  Orange SPS Icon on page 11-19.

BBU icon                  Orange BBU Icon on page 11-20.

Orange Disk Icon An orange disk icon indicates that the disk it represents is in one of
the states listed below.

Table 11-4 Disk Failure States

State Meaning

Removed Removed from the enclosure; applies only to a disk that is part
of a LUN.

Off Failed and powered off by the SP.

You can determine the state of a disk from the General tab of its Disk
Properties dialog box (page 11-26).


! CAUTION
Removing the wrong disk can introduce an additional fault that
shuts down the LUN containing the failed disk. Before removing a
disk, be sure to verify that the suspected disk has actually failed by
checking its orange check or fault light or the SP event log for the
SP that owns the LUN containing the disk. In addition to checking
the log for messages about the disk, also check for any other
messages that indicate a related failure, such as a failure of a SCSI
bus or a general shutdown of an enclosure. Such a message could
mean the disk itself has not failed. A message about the disk will
contain its module ID.

The icon for a working hot spare in a RAID Group may be orange
instead of blue if you replace the failed disk that the hot spare is
replacing while the hot spare is transitioning into a group. When
this happens, the icon for a working SP is orange instead of green.
The Fault Status Report dialog box says the storage system is
normal instead of transitioning, the state property for the hot spare
is faulted instead of transitioning, and the state of the SP is normal
(the correct state).

After you confirm the failure of a disk, the system operator or service
person should replace it, as described in the storage-system
installation and service manual.

You must replace a failed disk with one of the same capacity and format.

The rest of this section discusses a failed disk in a RAID 0 or Disk
LUN, a failed disk in a RAID 5, 3, 1, or 1/0 LUN, and a failed vault
disk when storage-system write caching is enabled.

Failed Disk in a RAID 0 or Disk LUN


If a disk in a RAID 0 or Disk LUN fails, applications cannot access the
LUN.
Before you replace the failed disk, unbind the LUN.
After you replace the failed disk:
1. Rebind the LUN.
2. Create partitions or file systems on the LUN.
3. Restore data from backup files.


Failed Disk in a RAID 5, 3, 1, or 1/0 LUN


If a disk in a RAID 5, 3, 1, or 1/0 LUN fails, applications can continue
to access the LUN. If the storage system has a hot spare on standby
when a disk fails, the SP automatically rebuilds the failed disk on the
hot spare.
You should replace the failed disk while the storage system is
powered up so that applications can continue to access the LUN.
When you replace the disk, the SP:
1. Formats and checks the new disk.
2. Rebuilds the data on the new disk as described below.
While rebuilding occurs, applications have uninterrupted access
to information on the LUN.

Rebuilding a RAID 5, 3, 1, or 1/0 LUN


You can monitor the rebuilding of a new disk from the General tab of
its Disk Properties dialog box (page 11-26).
A new disk module’s state changes as follows:
1. Powering up - The disk is powering up.
2. Rebuilding - The storage system is reconstructing the data on the
new disk from the information on the other disks in the LUN.
If the disk is the replacement for a hot spare that is being
integrated into a redundant LUN, the state is Equalizing instead
of Rebuilding. In this situation, the storage system is simply
copying the data from the hot spare onto the new disk.
3. Enabled - The disk is bound and assigned to the SP being used as
the communication channel to the enclosure.
A hot spare’s state changes as follows:
1. Rebuilding - The SP is rebuilding the data on the hot spare.
2. Enabled - The hot spare is fully integrated into the LUN, or the
failed disk has been replaced with a new disk and the SP is
copying the data from the hot spare onto the new disk.
3. Ready - The copy is complete. The LUN consists of the disks in
the original slots and the hot spare is on standby.
Rebuilding occurs at the same time as user I/O. The rebuild priority
for the LUN determines the duration of the rebuild process and the
amount of SP resources dedicated to rebuilding.
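The state sequences above can be summarized as a small transition table. The checker below is a hypothetical illustration (the names are not a Navisphere API); it only encodes the sequences the text describes:

```python
# Allowed state sequences from the text: a replacement disk powers up,
# then rebuilds (or equalizes, when it replaces a hot spare that was
# integrated into the LUN), then is enabled. A hot spare rebuilds, is
# enabled, then returns to standby (Ready) once the copy completes.
NEW_DISK_TRANSITIONS = {
    "Powering up": {"Rebuilding", "Equalizing"},
    "Rebuilding": {"Enabled"},
    "Equalizing": {"Enabled"},
}
HOT_SPARE_TRANSITIONS = {
    "Rebuilding": {"Enabled"},
    "Enabled": {"Ready"},
}

def valid_sequence(transitions, states):
    """Check that each consecutive pair of states is an allowed transition."""
    return all(b in transitions.get(a, set()) for a, b in zip(states, states[1:]))

print(valid_sequence(NEW_DISK_TRANSITIONS,
                     ["Powering up", "Rebuilding", "Enabled"]))  # True
```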


A High or ASAP (as soon as possible) rebuild priority consumes
many resources and may significantly degrade performance. A Low
rebuild priority consumes fewer resources with less effect on
performance. You can determine the rebuild priority for a LUN from
the General tab of its LUN Properties dialog box (page 11-24).

Failed Vault Disk with Storage-System Write Caching Enabled


If you are using write caching, the storage system uses the disks listed
below for its cache vault. If one of these disks fails, the storage system
dumps its write cache image to the remaining disks in the vault; then
it writes all dirty (modified) pages to disk and disables write caching.
Storage-system write caching remains disabled until a replacement
disk is inserted and the storage system rebuilds the LUN with the
replacement disk in it. You can determine whether storage-system
write caching is enabled or disabled from the Cache tab of its
Properties dialog box (page 11-22).

Table 11-5 Cache Vault Disks

Storage-System Type Cache Vault Disks

FC4400/4500, FC4700, FC5600/5700 0-0 through 0-8

FC5200/5300 0-0 through 0-4

C1900, C2x00, C3x00 A0, B0, C0, D0, E0

C1000 A0 through A4
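Table 11-5 amounts to a lookup from storage-system type to vault disk IDs. The sketch below transcribes the table; the function name and dictionary keys are illustrative only:

```python
# Cache vault disks per storage-system type, transcribed from Table 11-5.
CACHE_VAULT_DISKS = {
    "FC4400/4500": [f"0-{i}" for i in range(9)],        # 0-0 through 0-8
    "FC4700":      [f"0-{i}" for i in range(9)],
    "FC5600/5700": [f"0-{i}" for i in range(9)],
    "FC5200/5300": [f"0-{i}" for i in range(5)],        # 0-0 through 0-4
    "C1900/C2x00/C3x00": ["A0", "B0", "C0", "D0", "E0"],
    "C1000":       [f"A{i}" for i in range(5)],         # A0 through A4
}

def is_vault_disk(system_type, disk_id):
    """True if a failure of this disk would dump the write cache image
    to the remaining vault disks and disable write caching."""
    return disk_id in CACHE_VAULT_DISKS.get(system_type, [])

print(is_vault_disk("FC5200/5300", "0-3"))  # True
```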

Orange SP Icon An orange SP icon indicates that the SP it represents has failed. When
an SP fails, one or more LUNs may become inaccessible and the
storage system’s performance may decrease. In addition, the SP’s
check or service light turns on, along with the check or service light
on the front of the storage system.
If the storage system has a second SP and ATF (Application
Transparent Failover) software is running on the server, the LUNs
that were owned by the failed SP may be accessible through the
working SP. If the server is not running failover software and a
number of LUNs are inaccessible, you may want to transfer control of
the LUNs to the working SP (Chapter 12).


! CAUTION
The icon for a working SP may appear orange instead of green
when you replace a failed disk in a RAID Group with a working
hot spare while it is transitioning into the group to replace the
failed disk. When this happens, the icon for the hot spare is orange
instead of blue. The Fault Status Report dialog box says the storage
system is normal instead of transitioning, the state property for the
SP is normal (the correct state), and the state property for the hot
spare is faulted instead of transitioning.

The system operator or service person can replace the SP under
power; however, doing so interrupts application access to LUNs
owned by the SP unless the server is running LUN transfer software
such as Application Transparent Failover (ATF). If an SP in a shared
storage system needs replacing, see the service provider’s installation
and service manual for switch configurations. If an SP in an unshared
storage system needs replacing, see the storage-system installation
and service manual for replacement instructions.

Orange LCC Icon An orange link control card (LCC) icon indicates that the LCC that it
represents has failed. In addition, the LCC’s fault light turns on, along
with the service light on the front of the storage system.
When an LCC fails, the SP it is connected to loses access to its LUNs,
and the storage system’s performance may decrease. If the storage
system has a second LCC and the server is running failover software,
the LUNs that were owned by the SP connected to the failed LCC
may be accessible through the SP connected to the working LCC. If
the server is not running failover software, you may want to transfer
control of the inaccessible LUNs to the SP that is connected to the
working LCC (Chapter 12).
The system operator or service person can replace the LCC under
power, without interrupting applications to accessible LUNs. The
storage-system installation and service manual describes how to
replace an LCC.


Orange Fan A or Fan B Icon


For any FC-series storage system, an orange Fan A icon indicates that
the drive fan pack has one or more faults. An orange Fan B icon
indicates that the SP fan pack has one or more faults.

Only DPEs have SP fan packs.

For any C-series storage system, an orange Fan A icon and a green
and grey Fan B icon indicate that its fan module has one fault. An
orange Fan A icon and an orange Fan B icon indicate that its fan
module has two or more faults.

Drive Fan Pack If one fan fails in a drive fan pack, the other fans speed up to
compensate so that the storage system can continue operating. If a
second fan fails and the temperature rises, the storage system shuts
down after about two minutes.
If you see an orange Fan A icon in an FC-series storage system, the
system operator or a service person should replace the entire drive
fan pack as soon as possible. The storage-system installation and
service manual describes how to replace the fan pack.

Do not remove a faulted drive fan pack until a replacement unit is available.
You can replace the drive fan pack while the DPE or DAE is powered up.

If the drive fan pack in a DPE is removed for more than two minutes,
the SPs and the disks power down. The SPs and disks power up
when you reinstall a drive fan pack.
If the drive fan pack in a DAE is removed for more than two minutes,
the Fibre Channel interconnect system continues to operate, but the
disks power down. The disks power up when you reinstall a drive
fan pack.

SP Fan Pack If one fan fails in an SP fan pack, the other fans speed up to
compensate so that the storage system can continue operating. If a
second fan fails and the temperature rises, the storage system shuts
down after about two minutes.
If you see an orange Fan B icon, the system operator or a service
person should replace the entire fan pack or module as soon as
possible. The storage-system installation and service manual
describes how to replace the fan pack or module.


Do not remove a faulted SP fan pack until a replacement unit is available. You
can replace the fan pack when the DPE is powered up. If the fan pack is
removed for more than two minutes, the SPs and the disks power down.
They power up when you reinstall an SP fan pack.

Fan Module Each C-series storage system has one fan module. The following table
shows the number of fans per number of slots in the enclosure.

Table 11-6 Number of Fans in a C-Series Storage System

Enclosure Size Number of Fans

30-slot 9

20-slot 6

10-slot 3
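The fan counts in Table 11-6, together with the failure rule in this section (the system tolerates one failed fan; a second failure shuts it down after about two minutes), can be sketched as follows. The names are hypothetical, not Navisphere logic:

```python
FANS_PER_ENCLOSURE = {30: 9, 20: 6, 10: 3}  # slots -> fans, from Table 11-6

def fan_fault_action(failed_fans):
    """Response to fan failures in a C-series fan module, per the text above."""
    if failed_fans == 0:
        return "normal"
    if failed_fans == 1:
        # The remaining fans speed up; the system keeps running.
        return "replace fan module as soon as possible"
    return "shutdown after about two minutes"

print(FANS_PER_ENCLOSURE[20], fan_fault_action(2))
```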

If any fan fails, the fault light on the back of the fan module turns on.
The storage system can run after one fan fails; however, if another fan
failure occurs, the storage system shuts down after two minutes.
If you see an orange Fan A icon in a C-series storage system, the
system operator or a service person should replace the entire fan
module as soon as possible. The storage-system installation and
service manual describes how to replace the fan module.

Swinging the fan module away from the enclosure or removing it for more
than two minutes may cause the storage system to overheat. To prevent
damage to the disks from overheating, the storage system shuts down if you
unlatch or remove the fan module for more than two minutes. You should
not leave a fan module unlatched or removed for more than the absolute
minimum amount of time that you need to replace it.


Orange Power Supply or VSC Icon


An orange power supply or voltage semi-regulated converter (VSC)
icon indicates that the power supply it represents has failed.
FC-Series Storage System
Each enclosure has one or two power supplies: A and optionally B.
An enclosure with two power supplies can recover from the failure of
one power supply and provide uninterrupted service while the
defective power supply is replaced. If a second power supply fails or
is removed, the enclosure shuts down immediately.
C3x00 Series Storage System
The system has three power supplies (VSCs): A, B, and C. It can
recover from the failure of one power supply and provide
uninterrupted service while the defective power supply is replaced. If
a second power supply fails or is removed, the enclosure shuts down
immediately.
C2x00 Series Storage System
The system has two or three power supplies: A, B, and optionally C.
If it has three power supplies, it can recover from the failure of one
power supply and provide uninterrupted service while the defective
power supply is replaced. If a second power supply fails or is
removed, the enclosure shuts down immediately.
C1900 or C1000 Series Storage System
The system has one or two power supplies: A and optionally B. If it
has two power supplies, it can recover from the failure of one power
supply and provide uninterrupted service while the defective power
supply is replaced. If a second power supply fails or is removed, the
enclosure shuts down immediately.
Storage System without an SPS or BBU
Failure of the ac distribution system (line cord, utility power, and so
on) also immediately shuts down the entire enclosure.
When a power supply fails, the system operator or service person
should replace the power supply as soon as possible. The
storage-system installation and service manual describes how to
replace a power supply.


When a C-series enclosure or the DPE in an FC-series storage system shuts
down, the operating system loses contact with the LUNs. When the enclosure
powers up, you may need to reboot the server to let the operating system
access LUNs, and you must restart the Agent on the server connected to the
storage system.

Orange SPS Icon An orange standby power supply (SPS) icon indicates that the SPS it
represents has an internal fault. When the SPS develops an internal
fault, it may still be able to run on line, but the SPs disable write
caching. The storage system can use the write cache only when a fully
charged, working SPS is present.
However, if the storage system has a second SPS, write caching can
continue when one SPS has an internal fault or is not fully charged.
The status lights on the SPS indicate when it has an internal fault,
when it is recharging, and when it needs replacing because its battery
cannot be recharged.
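The write-caching rule described above, that caching requires at least one fully charged, working SPS, can be sketched as a predicate. This is an illustration of the rule, not Navisphere code:

```python
def write_caching_allowed(sps_states):
    """True if at least one SPS is working and fully charged.

    sps_states: iterable of "ok" (working, fully charged), "faulted",
    or "charging". With a second SPS, write caching continues while
    one unit is faulted or still charging.
    """
    return any(s == "ok" for s in sps_states)

print(write_caching_allowed(["faulted", "ok"]))  # True: the second SPS covers
print(write_caching_allowed(["charging"]))       # False: not fully charged yet
```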
Each week, the SP runs a battery self-test to ensure that the
monitoring circuitry is working in each SPS. While the test runs,
storage-system write caching is disabled, but communication with
the server continues. I/O performance may decrease during the test.
When the test is finished, storage-system write caching is re-enabled
automatically. The factory default setting has the battery test start at
1:00 a.m. on Sunday, which you can change (Chapter 6).
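For example, the next occurrence of the factory-default test time (Sunday at 1:00 a.m.) could be computed like this; a standard-library sketch, with the function name chosen for illustration:

```python
from datetime import datetime, timedelta

def next_battery_test(now, weekday=6, hour=1):
    """Next scheduled SPS battery self-test.

    Defaults match the factory setting described above: Sunday
    (Python weekday 6) at 1:00 a.m.
    """
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    days_ahead = (weekday - now.weekday()) % 7
    candidate += timedelta(days=days_ahead)
    if candidate <= now:           # already past this week's test time
        candidate += timedelta(days=7)
    return candidate

print(next_battery_test(datetime(2001, 7, 4, 12, 0)))  # 2001-07-08 01:00:00
```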
When the SPS Fault light or the SPS Replace Battery light is lit, the
system operator or service person should replace the SPS as soon as
possible. The SPS installation and service manual describes how to
replace an SPS.
If the storage system has two SPSs, you can replace one of them while
the DPE is powered up, but we recommend that you disable
storage-system write caching before removing the SPSs. (Chapter 6).


Orange BBU Icon An orange battery backup unit (BBU) icon indicates that the BBU in a
C-series storage system is in one of the states listed below.

Table 11-7 BBU Failure States

BBU State Meaning

Down Failed or removed after the Agent started running

Not Present Failed or removed before the Agent started running

You can determine the state of a BBU from its Properties dialog box
(page 11-26).
When a BBU fails:
• Storage-system write caching is disabled and storage-system
performance may decrease.
You can determine whether storage-system write caching is
disabled from the Cache tab of the storage-system Properties
dialog box (page 11-22).
Storage-system write caching remains disabled until the BBU is
replaced.
• The BBU service light turns on, indicating that the BBU is either
charging or not working.
After a power outage, a BBU takes 15 minutes or less to recharge.
From total depletion, recharging takes an hour or less.
Each week, the SP runs a self-test to ensure that the BBU’s monitoring
circuitry is working. While the test runs, storage-system caching is
disabled, but communication with the server continues. I/O
performance may decrease during the test. When the test is finished,
storage-system caching is re-enabled automatically. The factory
default time for the BBU test to start is 1:00 a.m. on Sunday, which
you can change (Chapter 6).
A system operator or service person can replace a failed BBU under
power without interrupting applications. The storage-system
installation and service manual describes how to replace the BBU.


Monitoring Failover Software


Manager lets you determine whether Application Transparent
Failover (ATF) or CLARiiON Driver Extensions (CDE) software is
installed and running on all managed servers connected to the
storage system, and if running, whether the software has transferred
(trespassed) any LUNs from one SP to another SP.
In the Main window menu bar, select the Operations menu and click
Failover Status.
A Failover Status dialog box opens, similar to the following. For a
description of each field in the dialog box, click Help.


Displaying Storage-System Component and Server Status


Most hardware components, each storage-system server, each RAID
Group, each LUN, and each Storage Group represented by an icon on
the Equipment, Storage, or Hosts tree have a Properties or State
dialog box associated with them. The Properties dialog box provides
a variety of information about the component; the State dialog box
simply displays the current state of the component.
This section describes how to display the properties of the following
components:
• Storage system (this page)
• Storage-system server - FC4400/4500 only (page 11-23)
• Storage Group (page 11-24)
• LUN (page 11-24)
• SP (page 11-25)
• RAID Group (page 11-25)
• Disk (page 11-26)
• SPS or BBU (page 11-26)
It also describes how to display the state of the following
components:
• LCC (page 11-27)
• Power supply (page 11-27)
The properties of some components, such as a LUN, can also be
displayed in ways other than described in this section.

To Display Storage-System Properties


1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab, right-click the icon for the storage system, and click
Properties.
The Storage System Properties dialog box opens with the
General tab displayed. For a description of each property, click
Help in the dialog box.


2. If you want to view other storage-system properties, click one of
the following tabs. The tab opens in the Storage System
Properties dialog box. Click Help for more information.
• For storage-system cache properties, click the Cache tab.
• For storage-system memory properties, click the Memory tab.
• For storage-system server properties, click the Hosts tab.
• For storage-system data access properties, click the Data
Access tab.
• If the selected storage system is a shared non-FC4700 storage
system and you want to view its configuration access
properties, click the Configuration Access tab.
• If the selected storage system is an FC4700 storage system and
you want to view its software properties, click the Software
tab.
• If the selected storage system is an FC4700 storage system
with MirrorView installed and you want to view remote
mirror properties, click the Remote Mirrors tab.

To Display Storage-System Server Properties - FC4400/4500 only


1. In the Enterprise Storage dialog box, click the Host tab.
2. Right-click the icon for the server whose properties you want to
display, and click Properties.
The Host Properties dialog box opens with the General tab
displayed. For a description of each property, click Help in the
dialog box.
3. If you want to view other storage-system server properties, click
one of the following tabs. The tab opens in the Host Properties
dialog box. Click Help for more information.
• For server Agent properties, click the Agent tab.
• For failover software properties, click the Failover tab.
• If the server is connected to a shared storage system and you
want to view the server’s storage properties, click the Storage
tab.


To Display Storage Group Properties


1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system with the Storage
Group whose properties you want to display.
3. Double-click the Storage Groups icon.
4. Right-click the icon for the Storage Group whose properties you
want to display, and click Properties.
The Storage Group Properties dialog box opens with the General
tab displayed. For a description of each property, click Help in the
dialog box.
5. If you want to display the advanced Storage Group properties,
click the Advanced tab.
The Advanced tab is displayed in the Storage Group Properties
dialog box.

To Display LUN Properties


1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system with the LUN whose
properties you want to display.
3. Double-click the icon for the SP that owns the desired LUN or
click the icon for Unowned LUNs, if the desired LUN is a hot
spare or not owned by an SP.
4. Right-click the icon for the LUN whose properties you want to
display, and click Properties.
The LUN Properties dialog box opens with the General tab
displayed. For a description of each property, click Help in the
dialog box.
5. If you want to view other LUN properties, click one of the
following tabs. The tab opens in the LUN Properties dialog box.
Click Help for more information.
• For LUN caching information, click the Cache tab.
• For information about LUN read cache prefetching, click the
Prefetch tab.
• For LUN statistics, click the Statistics tab.


• For information about the mapping of operating system drives
on the LUN or the managed server connections to the LUN,
click the Hosts tab.
• For information about the disks in the LUN, click the Disks
tab.

To Display SP Properties
1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. Double-click the icon for the storage system with the SP whose
properties you want to display.
3. If the Equipment tab is displayed, then for any FC-series storage
system except an FC5000 series, double-click the Enclosure 0 icon.
4. Double-click the SPs icon.
5. Right-click the icon for the SP whose properties you want to
display, and click Properties.
The SP Properties dialog box opens with the General tab
displayed. For a description of each property, click Help in the
dialog box.
6. If you want to view the other SP properties, click one of the
following tabs. The tab opens in the SP Properties dialog box.
Click Help for more information.
• For SP cache information, click the Cache tab.
• For SP statistics, click the Statistics tab.
• If the selected SP is in an FC4700 storage system and you
want to view its additional properties:
• For SP network information, click the Network tab.
• For information on the SCSI IDs for the SP’s front-end
ports, click the ALPA tab.
• For information on the SP Agent, click the Agent tab.

To Display RAID Group Properties


1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system with the RAID Group
whose properties you want to display.
3. Double-click the RAID Groups icon.


4. Right-click the icon for the RAID Group whose properties you
want to display, and click Properties.
The RAID Group Properties dialog box opens with the General
tab displayed.
5. If you want to display information about the LUNs on the RAID
Group, click the Partitions tab.
The Partitions tab is displayed in the RAID Group Properties
dialog box. For a description of each property, click Help in the
dialog box.

To Display Disk Properties


1. In the Enterprise Storage dialog box, click the Equipment tab.
2. Double-click the icon for the storage system with the disk whose
properties you want to display.
3. For any FC-series storage system, double-click the icon for the
enclosure containing the disk whose properties you want to
display.
4. Double-click the disks icon.
5. Right-click the icon for the disk whose properties you want to
display, and click Properties.
The Disk Properties dialog box opens with the General tab
displayed. For a description of each property, click Help in the
dialog box.
6. If you want to view the other disk properties, click one of the
following tabs. The tab opens in the Disk Properties dialog box.
Click Help for more information.
• For disk error information, click the Errors tab.
• For disk statistics, click the Statistics tab.

To Display SPS or BBU Properties


1. In the Enterprise Storage dialog box, click the Equipment tab.
2. Double-click the icon for the storage system with the desired SPS
or BBU.
3. For any FC-series storage system except an FC5000 series,
double-click the Enclosure 0 icon.
4. Double-click the Standby Power Supplies or Battery Backups
icon.


5. Right-click the icon for the SPS or BBU whose properties you
want to display, and click Properties.
The Battery Test Time dialog box opens. For a description of each
property, click Help in the dialog box.

To Display LCC State


1. In the Enterprise Storage dialog box, click the Equipment tab.
2. Double-click the icon for the storage system with the LCC whose
state you want to display.
3. Double-click the icon for the enclosure with the LCC whose state
you want to display.
4. Double-click the LCCs icon.
5. Right-click the icon for the LCC whose state you want to display,
and click State.
The Device State dialog box for the LCC opens.

To Display Power Supply State


1. In the Enterprise Storage dialog box, click the Equipment tab.
2. Double-click the icon for the storage system with the power
supply whose state you want to display.
3. For any FC-series storage system except an FC5000 series,
double-click the icon for the enclosure containing the power
supply whose state you want to display.
4. Double-click the power supplies icon.
5. Right-click the icon for the power supply whose state you want to
display, and click State.
The Device State dialog box for the power supply opens.


Displaying NAS Device Status


The NAS System Properties dialog box provides a variety of general
information about the selected NAS device, and about each Ethernet
connection to the device. Each NAS System Properties tab is
read-only. None of the fields are editable.

To Display NAS Device Properties


1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab, right-click the icon for the NAS device and click
Properties.
The NAS System Properties dialog box opens with the General
tab displayed. For a description of each property, click Help in the
dialog box.
2. To display information about a network interface connection to
the NAS device, click one of the Network Interface tabs.
For a description of each property, click Help in the dialog box.


Using the SP Event Log - Non-FC4700 Series


Each SP in a non-FC4700 series storage system maintains a log of
event messages. These events include the following:
• Hard errors
• Startups
• Shutdowns involving:
• Disks
• Fans
• SPs
• LCCs
• Power supplies
• SPSs
• BBU
The messages are ordered by the time the event occurred, with the
most recent messages at the beginning of the log. Periodically, the SP
writes this log to disk to preserve it when SP power is off. The log can
hold 16,800 event messages.
You can display all the events in the log or only the events that
occurred from a date and time you specify to the current time. You
can also save the contents of the log to a file you specify.
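The log behavior described above — a fixed-capacity buffer ordered with the most recent message first, filterable from a cutoff time — can be sketched as follows. This is a minimal Python illustration; the EventLog class and its field names are hypothetical, not the SP's actual implementation.

```python
from collections import deque

# Hypothetical sketch of the SP event log: a fixed-capacity buffer
# (16,800 entries) ordered with the most recent event first.
LOG_CAPACITY = 16_800

class EventLog:
    def __init__(self, capacity=LOG_CAPACITY):
        # appendleft() puts the newest event at the front; when the log
        # is full, deque(maxlen=...) silently drops the oldest entry.
        self._events = deque(maxlen=capacity)

    def record(self, event):
        self._events.appendleft(event)

    def events_since(self, cutoff):
        # Events are newest-first, so stop at the first one older
        # than the cutoff time.
        result = []
        for event in self._events:
            if event["time"] < cutoff:
                break
            result.append(event)
        return result

log = EventLog(capacity=3)
for t in (1, 2, 3, 4):
    log.record({"time": t, "code": hex(0x600 + t)})

# Oldest entry (time 1) was dropped; newest (time 4) is first.
print([e["time"] for e in log.events_since(0)])
```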

Displaying the Event Log for an SP


1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system with the SP whose
event log you want to display.
3. Right-click the icon for the SP whose event log you want to
display, and click Event Log.
The Event Log window for the SP opens and displays all the
events in the SP log since the date and time specified in Show
Events As of. A sample event log follows.


The fields in the event log have the following meanings:


Date/Time - Day and time the event occurred.
CRU - Type of the CRU that the event is about.
Event Code - The hexadecimal code for the type of event that
occurred.
Description - An abbreviated description of the message. See
Appendix A, “Troubleshooting Manager Problems,” for a more
detailed description of codes.
Sense Key, Ext. Code 1, Ext. Code 2 - Extended codes for use by
service personnel.


Displaying Events

You can display all the events in the log or filter the events in the log
to display the following:
• All events as of a specified date and time.
• All events for all components (that is, all FRUs and LUNs) or a
specified component.
• All events as of a specified date and time for all components or a
specified component.
You specify all components or an individual component by selecting
an entry from the Filter by and Filter for lists. The default selection
for Filter by is All and Filter for is unavailable.
You specify the date and time using one of the following formats or
by selecting an entry from a list.

Format               Meaning

-N                   Display all events logged in the last N hours.

-N days              Display all events logged in the last N days (N multiplied
                     by 24 hours).

MM/DD/YY HH:MM:SS    Display all events logged since hour HH, minute MM,
                     second SS on month MM, day DD, year YY.

You can select these list entries:

List Entry Meaning

Now Display all events as of right now.

Yesterday Display all events logged since 00:00:00 on the previous day.

Last week Display all events logged since 00:00:00 seven days ago.

Last 24 hours Display all events logged in the last 24 hours.

Last 48 hours Display all events logged in the last 48 hours.

Last 3 days Display all events logged in the last 72 hours.
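The formats and list entries above all reduce to a single cutoff time. The following Python sketch shows one way to interpret them; the function name is hypothetical and this is not Navisphere's actual parsing code.

```python
from datetime import datetime, timedelta

def parse_show_events_as_of(value, now=None):
    """Turn a 'Show Events As of' string into a cutoff datetime.

    Hypothetical sketch of the formats and list entries described
    above; not Navisphere's own implementation.
    """
    now = now or datetime.now()
    midnight = dict(hour=0, minute=0, second=0, microsecond=0)
    presets = {
        "Now": now,
        "Yesterday": (now - timedelta(days=1)).replace(**midnight),
        "Last week": (now - timedelta(days=7)).replace(**midnight),
        "Last 24 hours": now - timedelta(hours=24),
        "Last 48 hours": now - timedelta(hours=48),
        "Last 3 days": now - timedelta(hours=72),
    }
    if value in presets:
        return presets[value]
    if value.startswith("-") and value.endswith("days"):
        n = int(value[1:].split()[0])            # "-N days" = N * 24 hours
        return now - timedelta(days=n)
    if value.startswith("-"):
        return now - timedelta(hours=int(value[1:]))   # "-N" = N hours
    return datetime.strptime(value, "%m/%d/%y %H:%M:%S")

now = datetime(2001, 7, 15, 12, 0, 0)
print(parse_show_events_as_of("-6", now))        # 6 hours before now
print(parse_show_events_as_of("-2 days", now))   # 48 hours before now
print(parse_show_events_as_of("07/01/01 08:30:00", now))
```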


To Clear the List of Displayed Events


Click Clear.
The SP deletes all events from its log so that you can no longer view
them.

! CAUTION
Clearing events permanently deletes all the events in the log file.

To Display All the Events


In the Filter by list, click All.
The Event Log dialog box shows all the events for all FRUs in the
SP log that the Agent retrieved when it was started, plus the events
that occurred since it was started.

Not all the events in the SP log may be displayed if the Agent is configured to
limit the number of entries that it retrieves on startup.

To Display Only Events for a Specific Component from a Specific Date and Time
1. In Show Events as of, enter the desired date and time.
The list of events is updated to show only events from the new
date and time.
2. In the Filter by list, click the type of component whose events you
want to display.
The Filter for list selection updates to the default component of
the specified component type. The list of events is updated to
show only events for the default component of the specified
component type.
3. In the Filter for list, click the specific component whose event you
want to display.
The list of events is updated to show only events for the specified
component.
To sort the events in the table:
Click the header for the column you want to use to sort the events in
the table.
The events are sorted by the values in the selected column. The first
time you click a column to sort the events, they are listed in ascending


order. The second time you click the same column, the events are
listed in descending order. The third time you click the same column,
the events are listed in ascending order, and so on.

To Save the Contents of the Log to a File


1. Click Save.
A Save As dialog box opens.
2. Select the drive and directory for the file.
3. Enter the name of the file in which you want to save the contents
of the log.
4. Click Save.
The log is saved to the specified file.

To Print the Contents of the Log to a File


1. Click Print.
A Print dialog box opens.
2. Select the printer and the print properties you want.
3. Click OK.
The log is printed on the specified printer.


Using the SP Event Logs - FC4700 Series


Each SP in an FC4700 series storage system maintains a variety of
event logs. The Agent running on the SP lets you display the
storage-system events in these logs.

Displaying the Event Logs for an FC4700 SP


1. In the Enterprise Storage dialog box, click the Storage tab.
2. Double-click the icon for the storage system with the SP whose
event log you want to display.
3. Right-click the icon for the SP whose event log you want to
display, and click Event Log.

You can also open the event logs from the Event Monitor Configuration
window. To do so, right-click the icon for the monitoring host (SP Agent)
for which you want to view the storage-system logs, and click View
Events.

The Events window opens, similar to the following.

The fields in the Events window have the following meanings:

Date - Day the event occurred.


Time - Time the event occurred.


Event Code - The hexadecimal code for the type of event that
occurred.
Description - An abbreviated description of the message. See the
Storage-System and Navisphere Event Messages Reference for a more
detailed description of codes.
Subsystem - The name of the storage system that the event is
about.
SP - The name of the SP that the event is about.
Host - Name of the host that the Agent is running on.

Filtering Events

If the event log contains many events, you can reduce the number of
events displayed by filtering them. Filtering lets you view only the
event types you specify.
1. In the Events window, click Filter.
The Event Filter dialog box opens.

2. In View, select one of the options from the As of list.


3. In Types, select the types of events that you want to view.
4. In View By, select the Event Code by which you want to view the
events.
5. In Description, enter a description by which you want to view
events.
6. Click OK.
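The effect of these filter settings can be sketched as a simple predicate applied to the event list. The field names below are illustrative, not the Agent's actual data model:

```python
# Hypothetical sketch of the Event Filter dialog's effect: keep only
# events matching the chosen cutoff time, types, event code, and
# description substring.
def filter_events(events, as_of=None, types=None,
                  event_code=None, description=None):
    result = []
    for e in events:
        if as_of is not None and e["time"] < as_of:
            continue
        if types is not None and e["type"] not in types:
            continue
        if event_code is not None and e["code"] != event_code:
            continue
        if description is not None and description not in e["description"]:
            continue
        result.append(e)
    return result

events = [
    {"time": 1, "type": "Error", "code": 0x906, "description": "Hard SCSI error"},
    {"time": 2, "type": "Warning", "code": 0x801, "description": "Fan fault"},
    {"time": 3, "type": "Error", "code": 0x906, "description": "Hard SCSI error"},
]
print(filter_events(events, as_of=2, types={"Error"}))
```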


Viewing Event Details


1. In the Events window, double-click the event for which you want
to view details.
The Event Detail dialog box opens.

2. When you have finished viewing the event detail, you can do any
of the following:
• Click Next to view the next event detail.
• Click Previous to view the previous event detail.
• Click Close to close the Event Detail dialog box.
3. For more information about the properties in the dialog box, click
Help.


Saving Events to a Log File


1. In the Events window, click Save.
A Save As dialog box opens.
2. In File name, enter the name of the file in which you want to save
the events displayed in the Events window.
3. Click Save.

Printing Events

You can print all the events displayed in the Events window by
clicking the Print button in the window. Because the number of
displayed events may be very large, we recommend that you save
the events to a file, and print the file using another application, such
as Microsoft Excel or Microsoft Word, as follows:
1. In the Events window, click Save.
A Save as dialog box opens.
2. In File name, enter the name of the file in which you want to save
the events displayed in the Events window.
3. In Save as type, select Text Files (*.txt) from the list.
4. Open the file in another application, such as Microsoft Excel or
Notepad.
5. Highlight only the text that you want to print, and copy the text
to the clipboard.
6. Paste the events on a fresh page in the application.
7. Print your file.

Clearing Events

! CAUTION
Clearing events permanently deletes all the events in the log file.

1. In the Events window, click Clear.


2. Click Yes to clear all the events from the Events window.


Opening the Events Timeline Window


In the Events window, click Timeline.
The Events Timeline Window opens, similar to the following. For
information on the properties in the window, click Help.

Using the Events Timeline Window


The Events Timeline window provides a graphical view of events for
selected agents. The Events Timeline window allows you to:
• Display a large number of events in one window
• Differentiate events by time and severity
• Visually identify clusters of events


Description of the Events Timeline Window


Each event is shown on the timeline as a small vertical line called an
event marker. The color of each event marker shows the severity of
the event as follows:

Blue Informational events

Yellow Warning events

Orange Error events

Red Critical error events

If multiple events occur at the same time (or close enough that the
zooming level does not allow separate pixels for each event), the
color of the event marker is that of the highest priority event. The
height of the event marker shows how many events are represented
by the event marker.
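The marker rules above — color from the highest-priority event, height from the event count — can be sketched as follows. The severity names come from the table above; the function itself is hypothetical:

```python
# Sketch of how an event marker summarizes overlapping events: its
# color is that of the highest-priority (most severe) event, and its
# height reflects how many events it represents.
SEVERITY_COLOR = {            # from the table above
    "Informational": "blue",
    "Warning": "yellow",
    "Error": "orange",
    "Critical": "red",
}
PRIORITY = {"Informational": 0, "Warning": 1, "Error": 2, "Critical": 3}

def marker_for(events):
    worst = max(events, key=lambda e: PRIORITY[e["severity"]])
    return {"color": SEVERITY_COLOR[worst["severity"]],
            "height": len(events)}

cluster = [{"severity": "Informational"},
           {"severity": "Error"},
           {"severity": "Warning"}]
print(marker_for(cluster))
```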
Use the Zoom In and Zoom Out buttons on the toolbar to change the
displayed time interval of the timeline.
The default time interval for the timeline is twelve hours. The time
intervals you can display are:
• 2 days
• 1 day
• 12 hours
• 6 hours
• 1 hour
• 10 minutes
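The zoom behavior can be sketched as stepping through this fixed interval list, clamped at both ends (hypothetical Timeline class; intervals expressed in minutes):

```python
# Sketch of Zoom In / Zoom Out: step through the fixed list of time
# intervals, starting at the twelve-hour default.
INTERVALS_MINUTES = [2 * 24 * 60, 24 * 60, 12 * 60, 6 * 60, 60, 10]
DEFAULT = 12 * 60   # twelve hours

class Timeline:
    def __init__(self):
        self._i = INTERVALS_MINUTES.index(DEFAULT)

    def zoom_in(self):
        # Next smallest interval; the button is inactive at the minimum.
        if self._i < len(INTERVALS_MINUTES) - 1:
            self._i += 1
        return INTERVALS_MINUTES[self._i]

    def zoom_out(self):
        # Next largest interval; the button is inactive at the maximum.
        if self._i > 0:
            self._i -= 1
        return INTERVALS_MINUTES[self._i]

t = Timeline()
print(t.zoom_in())    # 360 minutes (6 hours)
print(t.zoom_out())   # 720 minutes (12 hours)
print(t.zoom_out())   # 1440 minutes (1 day)
```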
The system updates the timeline as events occur unless you freeze the
timeline with the Stop button. When the system updates the timeline,
it also updates the start and end times above the timeline.
When you move the mouse over an event marker, the timeline
displays information about that marker’s event. If the event marker
represents more than one event, the timeline displays information for
the most recent event with the highest priority.
The graphic in the upper right corner of the timeline window
represents the category of event codes displayed.
When you click an event marker, information about the events in that
marker appears in a separate Event Selection window.


The Events Timeline Toolbar


The Toolbar performs the following operations:

Zoom In      Displays the timeline at the next smallest time interval. This
             button is not active when you are already displaying the
             minimum allowed time.

Zoom Out     Displays the timeline at the next largest time interval. This
             button is not active when you are already displaying the
             maximum allowed time.

Scroll Left Scrolls to the left section of the timeline.

Scroll Right Scrolls to the right section of the timeline.

Time Updates the timeline so that the end time is the current time.
This does not change the current time interval.

Stop         A toggle switch that stops and starts timeline updates. If you
             click this button, the system does not update the timeline with
             new events until you click the button again.

Viewing Events Represented by Event Markers


Place the mouse over an event marker to select the marker. When you
have selected the marker, the number of events for the selected
marker appears under the timeline. The following information
appears above the timeline for the marker’s most recent event with
the highest severity:

Subsystem Name of the storage system that generated the event. Displays N/A for
non-device event types.

Severity Severity of the event.

Date The date and time that the event occurred.

Event Code Displays the numerical code that pertains to the particular event.

SP SP to which the event belongs - SP A or SP B.

Description Brief description of the event.


Viewing Event Details from the Timeline


1. Click an event marker in the timeline.
An Event Selection dialog box opens, similar to the following.

The list of events includes all events that the selected event
marker represents.
2. In the Line column, select an event by clicking its severity icon or
line number. (In the above example, the second event in the
window is selected.)
3. Click OK.


The Event Selection dialog box closes and the Event Detail
dialog box opens, similar to the following. For more information
about the dialog box, click Help.

The Previous and Next buttons appear dimmed and are unavailable
when you open this dialog box from within Event Monitor.

4. Click Close to close the Event Detail window.

12
Reconfiguring LUNs, RAID Groups, and Storage Groups

After you create a LUN or a RAID Group, you may want to
reconfigure it by changing its properties or user capacity. You can do
some of this reconfiguration without unbinding the LUN (and thus
losing its data) or destroying the RAID Group (and thus unbinding
its LUNs).
After you create a Storage Group, you may want to reconfigure it by
changing its properties, adding LUNs to it or removing LUNs from it,
or connecting a server to it or disconnecting a server from it. You can
do this reconfiguration without destroying the Storage Group or
unbinding the LUNs it contains.
This chapter describes:
• Reconfiguring LUNs........................................................................12-2
• Reconfiguring RAID Groups........................................................12-25
• Reconfiguring Storage Groups.....................................................12-36


Reconfiguring LUNs
After you bind a LUN, you can change all of the LUN’s properties
without unbinding it (and thus losing its data), except for the
following:
• Unique ID
• Element size
• RAID type
To change any of these three properties, follow the procedures below.

To Change a LUN’s Unique ID or Element Size


1. Unbind the LUN.
2. Bind a new LUN with the desired ID or element size.

To Change the RAID Type of a LUN


1. Unbind the LUN.
2. The next step depends upon whether the LUN was on a RAID
Group storage system:
• Non-RAID Group storage system
Bind a new LUN with the desired RAID type.
• RAID Group storage system
Bind a new LUN on another RAID Group that supports the
desired RAID type.

To Change the User Capacity of a LUN


How you change the user capacity of a LUN depends on whether the
LUN is on a RAID Group and what its user capacity is.
• To change the user capacity of a LUN if the LUN is the only one
on a RAID Group and its user capacity is equal to the user
capacity of the RAID Group:
1. Expand the RAID Group.
2. Have the LUN automatically increase its user capacity to the
size of the expanded RAID Group.


• To change the user capacity of a LUN if the LUN is on a
  non-RAID Group storage system:
1. Unbind the LUN.
2. Bind a new LUN with more or fewer disks, depending on
whether you want to increase or decrease the user capacity.
• To change the user capacity of a LUN that is on a RAID Group
storage system:
1. Unbind the LUN.
2. Bind a new LUN with either more or less user capacity.
If the RAID Group does not have the additional user capacity you
want for a LUN, you must first expand the RAID Group.
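The decision logic above can be summarized in a small sketch (a hypothetical function; the returned steps paraphrase the bullets above):

```python
# Sketch of how to change a LUN's user capacity, depending on where
# the LUN lives and whether it fills its RAID Group.
def capacity_change_steps(on_raid_group, only_lun_in_group=False,
                          lun_fills_group=False):
    if on_raid_group and only_lun_in_group and lun_fills_group:
        # Only non-destructive path: grow the group under the LUN.
        return ["expand the RAID Group",
                "let the LUN grow to the expanded capacity"]
    if not on_raid_group:
        return ["unbind the LUN",
                "bind a new LUN with more or fewer disks"]
    # On a RAID Group with other LUNs, or shrinking: rebind.
    return ["unbind the LUN",
            "bind a new LUN with the desired user capacity "
            "(expanding the RAID Group first if needed)"]

print(capacity_change_steps(True, only_lun_in_group=True,
                            lun_fills_group=True))
```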

The rest of this section describes how to do the following:


• Change the LUN enable read cache or enable write cache
property (page 12-3)
• Change the LUN rebuild priority, verify priority, or auto assign
property (page 12-5)
• Change the LUN prefetch (read caching) properties (page 12-7)
• Transfer the default ownership of a LUN (page 12-11)
• Unbind a LUN (page 12-14)
• Change the user capacity of a LUN (page 12-23)

Changing the LUN Enable Read Cache or Enable Write Cache Properties
Changing the enable read cache or enable write cache properties for a
LUN does not affect the data stored on the LUN.
1. Display the icon for the LUN whose cache properties you want to
change.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN
whose properties you want to change.
c. Double-click the icon for the SP that owns the LUN.


2. Right-click the icon for the LUN whose properties you want to
change, and click Properties.
The LUN Properties dialog box for the LUN opens.

3. Click the Cache tab.


The LUN Properties - Cache tab opens, similar to the following.
For information on the properties in the dialog box, click Help.

4. Select the Read Cache Enabled check box to enable read caching
for the LUN, or clear it to disable read caching for the LUN.

5. Select the Write Cache Enabled check box to enable write caching
for the LUN, or clear it to disable write caching for the LUN.
6. Click OK to apply the settings and close the dialog box.


A LUN with read caching enabled uses default values for its
prefetching properties. The next section describes how to change
these properties.

A LUN with read caching enabled can use read caching only if the read cache
for the SP that owns it is enabled. Similarly, a LUN with write caching
enabled can use write caching only if the storage-system write cache is
enabled. To enable the read cache for an SP or the storage-system write cache,
see Chapter 6.

Changing the Rebuild Priority, Verify Priority, or Auto Assign Property for a LUN
Changing the rebuild priority, verify priority, or auto assign property
for a LUN does not affect the data stored on the LUN.
1. Display the icon for the LUN whose properties you want to
change.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN
whose properties that you want to change.
c. Double-click the icon for the SP that owns the LUN.
2. Right-click the icon for the LUN whose properties you want to
change, and click Properties.


The LUN Properties dialog box opens, similar to the following.


For information on the properties in the dialog box, click Help.

3. Set the desired property as follows:


a. In the Rebuild Priority list, click the desired priority.
b. In the Verify Priority list, click the desired priority.
c. Select the Auto Assignment Enabled check box to enable auto
assign, or clear it to disable auto assign.
4. Click OK to apply the settings and close the dialog box.


Changing LUN Prefetch (Read Caching) Properties


Prefetching is read-ahead caching. This process lets the SP anticipate
the data that an application will request so it can read the data into its
read cache before the data is needed. The SP monitors I/O requests to
each LUN that it owns for sequential reads. If it finds sequential
reads, it automatically prefetches the data for them from the LUN.
You can define a specific type of prefetch operation for any LUN,
except a RAID 3 LUN or hot spare, by setting the values of the LUN’s
prefetch properties. Changing any of these properties does not affect
the data on the LUN.
The prefetch properties are as follows:
• Prefetch type
• For constant prefetch type:
– prefetch size
– segment size
• For variable prefetch type:
– prefetch multiplier
– segment multiplier
– maximum prefetch
• Retention
• Idle count
• Disable size
Prefetch type - Determines whether to prefetch data of a variable or
constant length or disable prefetching.
Prefetch size or prefetch multiplier - Determines the amount of data
prefetched for one host read request.
For constant-length prefetching, the prefetch size is the number of
data blocks to prefetch. For variable-length prefetching, the prefetch
multiplier is the amount of data to prefetch relative to the amount of
data requested. For example, if the prefetch multiplier is 8, the
amount of data to prefetch is 8 multiplied by the amount of data
requested.


Segment size or segment multiplier - Determines the size of the
segments that make up a prefetch operation. The SP reads one
segment at a time from the LUN because smaller prefetch requests
interfere less with other host requests.
For constant-length prefetching, the segment size is the number of
data blocks to prefetch in one read operation from the LUN. For
variable-length prefetching, the segment multiplier determines the
amount of data to prefetch in one operation relative to the amount of
data requested. For example, if the segment multiplier is 4, the
segment size is 4 multiplied by the amount of data requested.
Maximum prefetch - The number of data blocks to prefetch for
variable-length prefetching.
Retention - Determines whether prefetched data has equal or favored
priority over host-requested data when the read cache becomes full.
Idle count - With prefetching enabled, specifies the maximum
number of I/Os that can be outstanding to the storage system. When
this number is exceeded, the system disables prefetching.
Disable size - Determines when a read request is so large that
prefetching data would not be beneficial; prefetching is disabled for
such a request. For example, if the amount of requested data is equal
to or greater than the size of the read cache, prefetching is a waste of
resources.
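Under these definitions, the amount prefetched for one host read can be sketched as follows. This is an illustration of the arithmetic described above, with hypothetical names, not the SP's actual algorithm; it models only the prefetch type, sizes/multipliers, maximum prefetch, and disable size:

```python
# Sketch of how the prefetch properties interact for a single host
# read request. All values are in blocks.
def prefetch_plan(request_blocks, *, ptype, prefetch=0, segment=0,
                  max_prefetch=512, disable_size=129):
    if ptype == "none" or request_blocks >= disable_size:
        return 0, 0          # prefetching disabled for this request
    if ptype == "constant":
        total, seg = prefetch, segment          # fixed block counts
    else:                                       # "variable"
        # Amounts scale with the request, capped by maximum prefetch.
        total = min(prefetch * request_blocks, max_prefetch)
        seg = segment * request_blocks
    return total, seg        # (blocks to prefetch, blocks per read)

# Variable prefetching with the default multipliers (4 and 4):
print(prefetch_plan(16, ptype="variable", prefetch=4, segment=4))
# A 200-block request exceeds the default disable size of 129 blocks:
print(prefetch_plan(200, ptype="variable", prefetch=4, segment=4))
```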

Table 12-1 Default Prefetch Properties Values

Property Default Value

Prefetch type Variable

Prefetch multiplier 4

Segment multiplier 4

Maximum prefetch 512 blocks

Retention Favor prefetch

Idle count 40

Disable size 129 blocks


We recommend that you use the default values, unless you are certain that
the applications accessing the LUN will benefit from changing the values.

Table 12-2 Available Prefetch Properties Values - General

General Prefetch Properties Valid Values

Prefetch type None, constant, variable

Retention Equal priority, favor prefetch

Idle count 0 through 100

Disable size 0 through 65534 blocks

Table 12-3 Available Prefetch Properties Values - Constant

Constant Prefetch Properties Valid Values

Prefetch size       0 through 2048 blocks, and equal to or greater than the
                    segment size

Segment size        0 if the prefetch size is 0; otherwise 1 through the
                    prefetch size, with a maximum of 254

Table 12-4 Available Prefetch Properties Values - Variable

Variable Prefetch Properties Valid Values

Prefetch multiplier   0 through 32

Segment multiplier    0 if the prefetch multiplier is 0; otherwise 1 through
                      the prefetch multiplier

Maximum prefetch      0 through 2048 blocks
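The constraints in Tables 12-2 through 12-4 can be expressed as simple validity checks. The following sketch is a convenience illustration, not Navisphere's own validation code:

```python
# Sketch of the validity rules for constant and variable prefetching.
def valid_constant(prefetch_size, segment_size):
    if not 0 <= prefetch_size <= 2048:
        return False
    if prefetch_size == 0:
        return segment_size == 0
    # 1 through the prefetch size, with a maximum of 254.
    return 1 <= segment_size <= min(prefetch_size, 254)

def valid_variable(prefetch_mult, segment_mult, max_prefetch):
    if not 0 <= prefetch_mult <= 32 or not 0 <= max_prefetch <= 2048:
        return False
    if prefetch_mult == 0:
        return segment_mult == 0
    # 1 through the prefetch multiplier.
    return 1 <= segment_mult <= prefetch_mult

print(valid_constant(512, 254))   # segment size at its maximum
print(valid_constant(512, 300))   # invalid: segment size capped at 254
print(valid_variable(4, 4, 512))  # the default values
print(valid_variable(4, 8, 512))  # invalid: segment mult > prefetch mult
```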

To Change Prefetch Properties

1. Display the icon for the LUN whose prefetch properties you want
   to change.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.


b. Double-click the icon for the storage system with the LUN
whose properties you want to change.
c. Double-click the icon for the SP that owns the LUN.
2. Right-click the icon for the LUN whose properties you want to
change, and click Properties.
The LUN Properties dialog box for the LUN opens.
3. Click the Prefetch tab.
The LUN Properties - Prefetch tab opens, similar to the
following. For information on the properties in the dialog box,
click Help.

4. If you want to use the default values, select the Use Default
Values check box, and click OK to apply the default values and
close the dialog box.


5. Under Prefetch Type, click None to disable prefetch, Constant to
   enable constant-length prefetching, or Variable to enable
   variable-length prefetching.
6. If None is selected, click OK to disable prefetching and close the
dialog box.
7. If Constant is selected, do the following:
a. In Prefetch Size, type the new prefetch size.
b. In Segment Size, type the new segment size.
8. If Variable is selected, do the following:
a. In Prefetch Multiplier, type the new prefetch multiplier.
b. In Segment Multiplier, type the new segment multiplier.
c. In Maximum Prefetch, type the new maximum number of
blocks.
9. Under Retention, click Equal Priority or Favor Prefetch.
10. In Idle Count, type the new idle count number.
11. In Disable Size, type the new number of blocks.
12. Click OK to apply the settings and close the dialog box.

Transferring the Default Ownership of a LUN


In a storage system, the default owner is the SP that assumes
ownership of a LUN after storage-system power is turned off and
then on again.
In the storage system, you can transfer the default ownership of a
LUN from the SP that is the default owner of the LUN (primary route
to the LUN) to the other SP (secondary route to the LUN).
Transferring default ownership of a LUN is one way of transferring
control of a LUN.
You should transfer the default ownership of a LUN when you want
to balance LUNs between two SPs, as you might, for example, if a
second SP is installed in a storage system with existing LUNs.


Depending on the type of server connected to the storage system, you
may want to transfer default ownership of a LUN if any of the
following failure situations occurs:
• An SP fails and the server to which it is connected is not running
Application Transparent Failover (ATF) or the equivalent
software, and you want to transfer control of a LUN to the
working SP.
• Each SP is connected to a different host bus adapter and one
adapter fails or the connection to one adapter fails, and you want
the working adapter to access the LUNs owned by the SP
connected to the other adapter.
• When one server fails in a dual-server configuration without host
failover software, and you want the working server to access the
failed server’s LUNs.

The auto assign property of a LUN and the ATF software or its
equivalent can also transfer control of a LUN from one SP to another. For
information on the auto assign property, see Chapter 6; for information
on ATF, see the ATF manual. If you have failover software on the server,
you should use it to handle the failure situations just listed, instead of the
procedure in this section.

Transferring default ownership of a LUN from one SP to another can
affect how the operating system accesses the LUN. Any change you
make in ownership does not take effect until the storage system is
powered down and up again.

To Transfer Default Ownership of a LUN

1. Display the icon for the LUN whose default SP owner you want
   to change.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN
whose properties you want to change.
c. Double-click the icon for the SP that owns the LUN.
2. Right-click the icon for the LUN whose properties you want to
change, and click Properties.


The LUN Properties dialog box for the LUN opens, similar to the
following.

If the MirrorView feature is installed, the LUN Properties dialog box has a
Mirror tab.

3. Under Default Owner, click SP A to make SP A the default
   owner of the LUN, or click SP B to make SP B its default owner.
4. Click OK to apply the setting and close the dialog box.
5. If the storage system is connected to a Solaris server, unmount all
the partitions that are associated with the LUN.
6. Have the system operator or service person power the storage
system down and then up again for the change in ownership to
take effect.

Allow at least 3 minutes for the storage system to power up and become
ready. Polling may fail while the storage system is reinitializing.


7. On each server with access to the LUN whose SP ownership you
   transferred, follow the procedure below for the operating system
   on the server:
AIX:
a. Remove the sp device for the LUN on the original SP using the
rmdev -l sp# -d command.
b. Rescan the bus to make AIX aware of the transferred LUN
using the cfgmgr command.
NetWare:
a. Scan all buses to make NetWare aware of the transferred LUN
using the scan all luns command.
b. Verify that NetWare sees the transferred LUN using the list
devices command.
HP-UX:
a. Make sure that the storage system has finished re-initializing
and that the LUN has transferred.
b. Rescan the bus and inform HP-UX about the transferred LUN
using the ioscan -fnC disk command.
Solaris:
a. Verify that the device name for the LUN identifies the SP that
is the new default owner and that this device name is in the
/kernel/drv/sd.conf file.
b. Shut down Solaris.
c. Restart Solaris using the boot -r command.
Windows:
Reboot Windows.
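The per-operating-system rescan steps above can be summarized as a small lookup. This is only an illustrative sketch, not part of Navisphere; the sp0 device name is a placeholder for the sp device of the LUN on its original SP, and the function prints the commands rather than running them:

```shell
# Print the post-transfer rescan commands for a given operating system.
# The "sp0" device name is a placeholder; substitute the actual sp device.
rescan_commands() {
  case "$1" in
    aix)     printf '%s\n' 'rmdev -l sp0 -d' 'cfgmgr' ;;
    netware) printf '%s\n' 'scan all luns' 'list devices' ;;
    hpux)    printf '%s\n' 'ioscan -fnC disk' ;;
    solaris) printf '%s\n' 'shutdown' 'boot -r' ;;
    windows) printf '%s\n' 'reboot' ;;
    *)       return 1 ;;   # unknown operating system
  esac
}

rescan_commands hpux
```

Run against a transcript of the steps above, this makes it easy to confirm that every attached server gets the rescan appropriate to its platform.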

Unbinding a LUN
Typically, you unbind a LUN only if you want to do any of the
following:
• Destroy a RAID Group on a RAID Group storage system. (You
cannot destroy a RAID Group with LUNs bound on it.)
• Add disks to the LUN. (If the LUN is the only LUN in a RAID
Group, you can add disks to it by expanding the RAID Group.)
• Use the LUN’s disks in a different LUN or RAID Group.
• Recreate the LUN with different capacity disks.


In any of these situations, you should make sure that the LUN
contains the disks you want. In addition, if the LUN is part of a
Storage Group, you must remove it from the Storage Group before
you unbind it.
This section describes how to do the following:
• Determine which disks make up a specific LUN (page 12-15).
• Remove a LUN from the Storage Groups that contain it when you
know which server uses the LUN (page 12-15) or which storage
system contains the LUN (page 12-19).
• Unbind a LUN (page 12-22).

To Determine Which Disks Make up a LUN


1. Display the icon for the LUN whose disks you want to display.
One way to display the LUN icon is
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN
whose properties you want to change.
c. Double-click the icon for the SP that owns the LUN.
2. Double-click the LUN icon.
An icon for each disk in the LUN is displayed.

To Remove LUNs from Storage Groups


There are two methods for removing LUNs from Storage Groups:
• When you know which server uses the LUN.
• When you know which storage system contains the LUN.
When you know which server uses the LUN:
1. Display the icon for the server using the LUN.
One way to display the server icon is to click the Hosts tab in the
Enterprise Storage dialog box.
2. Right-click the icon for a server using the LUN, and click
Properties.
The Host Properties dialog box opens.


3. Click the Storage tab.


The Storage tab in the Host Properties dialog box opens, similar
to the following.

4. Click the tab for the LUN you want to unbind.


The information for the selected LUN is displayed in the Storage
tab of the Host Properties dialog box.
5. Under Storage Groups, select a Storage Group, and click
Properties.


The Storage Group Properties dialog box opens, similar to the
following.

6. Click Select LUNs.


The Modify Storage Group dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.

7. In the Selected LUNs list under Select LUNs for Storage Group,
select the LUN you want to remove from the group, and click ←.
The LUN moves from Selected LUNs to Unassigned LUNs.
8. Click OK to save the change and return to the Storage System
Properties dialog box.
9. For each Storage Group containing the LUN, repeat steps 5
through 8.
10. In the Storage System Properties box, click OK to remove the
LUN from the Storage Groups and close the dialog box.


When you know which storage system contains the LUN:


1. Display the icon for the storage system with the LUN that you
want to remove from Storage Groups.
One way to display the storage-system icon is to click the
Equipment or Storage tab in the Enterprise Storage dialog box.
2. Right-click the icon for the storage system with the LUN, and
click Properties.
A Storage System Properties dialog box opens.
3. Click the Data Access tab.
The Storage System Properties dialog box displays the Data
Access tab.
The LUNs column under Storage Groups lists the LUN ID for
each LUN in the Storage Group.

4. In the LUNs column under Storage Groups, select a Storage
Group, and click Properties.


The Storage Group Properties dialog box opens, similar to the
following.

5. In the Storage Group Properties dialog box, click Select LUNs.


The Modify Storage Group dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.

6. Under Select LUNs for Storage Group in Selected LUNs, select
the LUN you want to remove from the group, and click ←.
The LUN moves from Selected LUNs to Unassigned LUNs.
7. Click OK to save the change and return to the Storage System
Properties dialog box.
8. For each Storage Group containing the LUN, repeat steps 4
through 7.
9. In the Storage System Properties box, click OK to remove the
LUN from the Storage Groups and close the dialog box.


To Unbind a LUN
You cannot unbind a LUN in a Storage Group until you remove the
LUN from the group as described in one of the two previous
procedures.

! CAUTION
Unbinding a LUN destroys any data on it. Before unbinding a
LUN, make a backup copy of any data on it that you want to retain.
Do not unbind the last LUN owned by an SP connected to a
NetWare or Solaris server unless it is absolutely necessary. If you
do unbind it, do the following:

NetWare server - Refer to the Release Notice for the NetWare
Navisphere Agent for information on how to bind the first LUN.

Solaris server - Edit the Agent configuration file on the servers
connected to the SPs that you will use to bind LUNs. For
information on editing this file, see the section Verifying or Editing
Device Information in the Host Agent Configuration File
(Non-FC4700 storage systems) on page 7-38.

1. For each server with access to the LUN that you want to unbind,
follow the step below for the operating system running on the server:
AIX or HP-UX - Unmount all file systems on the server associated
with the LUN, and deactivate and then export the volume group
associated with the LUN.
NetWare - Unmount all volumes on all partitions that are
associated with the LUN, and then delete these volumes and
partitions.
Solaris - Unmount all partitions that are associated with the LUN.
Windows - Stop all processes on the partitions associated with
the LUN and delete the partitions.
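For the HP-UX case above, the preparation can be sketched as a short command sequence. The mount point and volume group names below are hypothetical, and the function only prints the commands so you can review them before running anything:

```shell
# Print the pre-unbind preparation for an HP-UX server:
# unmount the LUN's file systems, then deactivate and export its volume group.
prep_unbind() {
  mnt=$1   # mount point of a file system on the LUN (placeholder)
  vg=$2    # volume group associated with the LUN (placeholder)
  printf '%s\n' \
    "umount $mnt" \
    "vgchange -a n $vg" \
    "vgexport $vg"
}

prep_unbind /mnt/lun5 vg_lun5
```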
2. Display the icon for the LUN you want to unbind.
One way to display the LUN icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN you
want to unbind.
c. Double-click the icon for the SP that owns the LUN.
3. Right-click the icon for the LUN to unbind, and click Unbind.


A confirmation dialog box opens warning you that unbinding
causes data loss and asking you to confirm the unbind operation.
4. Click Yes to confirm the operation.
The LUN icon disappears from the trees containing it.

Changing the User Capacity of a LUN


In most cases, you cannot change the user capacity of a LUN without
unbinding the LUN and losing all data on it. The only exception is if
the LUN is the only LUN within a RAID Group and if its user
capacity equals the user capacity of the RAID Group. For such a
LUN, you can increase its user capacity by expanding its RAID
Group as described on page 12-27.
In a RAID Group, you can rebind a LUN with an increased user
capacity only if the Group has usable space that is at least as great as
the user capacity you want to add. To determine the amount of usable
space in a RAID Group, see page 12-30.
If the RAID Group does not have enough usable space, you may be
able to free up enough space by defragmenting the RAID Group as
described on page 12-30. If defragmenting does not work, then you
must expand the RAID Group as described on page 12-27.
All disks in a LUN in a non-RAID Group storage system must have
the same physical capacity to fully use the storage space on the disks.
The physical capacity of a hot spare LUN must be at least as great as
the physical capacity of the largest disk module in any LUN or RAID
Group on the storage system.

To Change the User Capacity of a LUN
1. Back up any data you want to retain on the LUN whose user
capacity you want to change.
2. Unbind the LUN (page 12-22).
3. Bind the LUN with the new user capacity (page 7-10 for a LUN on
a non-RAID Group storage system; page 7-27 for a LUN on a
RAID-Group storage system).


4. For an unshared storage system, make the newly-created LUN
available to the operating system as described in the server setup
manual for the storage system.
5. For a shared storage system, add the newly-created LUN to its
original Storage Group, and then make it available to the
operating system as described in the server setup manual for the
storage system.
6. Restore any data you backed up from the original LUN.


Reconfiguring RAID Groups


After you create a RAID Group, you can change all its properties
except its RAID Group ID and, if the Group has a LUN bound on it,
the RAID type it supports.
You can add disks to a RAID Group, but if you want to change the
disks in a Group to disks with a different physical capacity, you must
destroy the RAID Group (thus unbinding all its LUNs), and then
recreate it with the other disks.
This section describes how to do the following:
• Change the expansion/defragmentation priority property or the
automatically destroy after last LUN unbound property of a
RAID Group (page 12-25)
• Expand a RAID Group by adding disks to it (page 12-27)
• Defragment a RAID Group (page 12-30)
• Destroy a RAID Group (page 12-33)

Changing the Expansion/Defragmentation Priority or the Automatically Destroy After Last LUN Unbound Property of a RAID Group
You can change the expansion/defragmentation priority property or
the automatically destroy after last LUN unbound property of a
RAID Group without affecting the data on any of its LUNs.
1. Display the icon for the RAID Group whose priority you want to
change.
One way to display the RAID Group icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the RAID
Group whose property you want to change.
c. Double-click the RAID Groups icon.
2. Right-click the icon for the RAID Group whose priority you want
to change, and click Properties.


The RAID Group Properties dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.

3. In Expansion/Defragmentation Priority, type or select the
desired priority.
4. Select the Automatically destroy after last LUN is unbound
check box to have the RAID Group destroyed when you unbind
its last LUN, or clear it to leave the RAID Group intact after you
unbind the last LUN.
5. Click OK to apply the settings and close the dialog box.


Expanding a RAID Group


You can expand a RAID Group by adding unbound disks to it.
Expanding a RAID Group does not automatically increase the user
capacity of already bound LUNs. Instead, it distributes the capacity
of the LUNs equally across all the disks in the Group, freeing space
for additional LUNs.
If you expand a RAID Group that has only one bound LUN with a
user capacity equal to the user capacity of the RAID Group, you can
choose to have the user capacity of the LUN equal the user capacity
of the expanded Group. Whether you can actually use the increased
user capacity of the LUN depends on the operating system running
on the servers connected to the storage system.
You cannot expand a RAID Group that supports the RAID 1, Disk, or
Hot Spare RAID type because a RAID 1 type must have exactly two
disks, and the Disk and Hot Spare RAID types must have only one
disk. The number of disks you can use for the other RAID types are as
follows:

Table 12-5 Number of Disks You Can Use in RAID Types

RAID Type    Number of Disks You Can Use
RAID 5       3 through 16
RAID 3       5 or 9 (FC series only)
RAID 1/0     4, 6, 8, 10, 12, 14, or 16
RAID 0       3 through 16
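The constraints in Table 12-5, together with the fixed sizes noted above for RAID 1, Disk, and Hot Spare, amount to a simple validity check per RAID type. This is only an illustrative sketch, not a Navisphere command; the type keywords are made up for the example:

```shell
# Check whether "count" disks is a valid size for a RAID type (Table 12-5).
# RAID 1 must have exactly two disks; Disk and Hot Spare exactly one.
valid_disk_count() {
  type=$1; count=$2
  case "$type" in
    r5|r0)   [ "$count" -ge 3 ] && [ "$count" -le 16 ] ;;
    r3)      [ "$count" -eq 5 ] || [ "$count" -eq 9 ] ;;   # FC series only
    r1_0)    [ "$count" -ge 4 ] && [ "$count" -le 16 ] \
               && [ $((count % 2)) -eq 0 ] ;;              # even counts only
    r1)      [ "$count" -eq 2 ] ;;
    disk|hs) [ "$count" -eq 1 ] ;;
    *)       return 1 ;;
  esac
}

valid_disk_count r1_0 10 && echo "valid"
```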

To Expand a RAID Group
1. Display the icon for the RAID Group that you want to expand.
One way to display the icon for a RAID Group is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the RAID
Group you want to expand.
c. Double-click the RAID Groups icon.
2. Right-click the icon for the RAID Group you want to expand, and
click Properties.


The RAID Group Properties dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.


3. Under Disks, click Expand.


The RAID Group expansion dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.

4. For an FC-series storage system, if the disks you want to add to
the RAID Group are in one enclosure, then in the Select from list,
click that enclosure.

All disks in a RAID Group must have the same capacity to fully use the
storage space on the disks. The capacity of a RAID Group that supports
the Hot Spare RAID type must be at least as great as the capacity of the
largest disk module in any LUN on the storage system.

5. Under Available Disks, for each disk that you want to add to the
RAID Group, click the icon for the disk and then click →.
The disk icon moves into Selected Disks.
6. If the RAID Group contains only one LUN with a user capacity
equal to the RAID Group’s user capacity and you want that
LUN’s user capacity to increase by the user capacity of the added
disks, select the Expand LUN with RAID Group check box.
Otherwise, clear the check box.


7. When Selected Disks contains only the icons for the disks you
want to add to the RAID Group, click OK.
The RAID Group expansion dialog box closes and the expansion
operation starts. Percent Expanded in the RAID Group
Properties dialog box displays the percentage of the operation
that is completed. When the percentage is 100, the operation is
finished.

What Next?
What you do next depends on whether you cleared or selected the
Expand LUN with RAID Group check box.
Check box cleared - You can bind additional LUNs on the RAID
Group.
Check box selected - You need to make the additional space on the
LUN available to the operating system on the server as follows:
• AIX - Change the size of the file system on the LUN using the
following command: chfs -a size=size filesystem, where size is the
capacity of the LUN in 512-byte blocks.
• Solaris - Change the size of the file system on the LUN using the
Solstice Disk Suite command growfs.
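The AIX chfs command above takes its size argument in 512-byte blocks, so a capacity expressed in megabytes must be converted first. A small helper for the arithmetic; the /lun5fs file system name is hypothetical:

```shell
# Convert a capacity in megabytes to 512-byte blocks for chfs -a size=.
# 1 MB = 1024 * 1024 bytes = 2048 blocks of 512 bytes.
mb_to_blocks() {
  echo $(( $1 * 2048 ))
}

# Example: print the command to grow a hypothetical /lun5fs file system
# to a 4096 MB LUN.
echo "chfs -a size=$(mb_to_blocks 4096) /lun5fs"
```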

Defragmenting a RAID Group


If you unbind and rebind LUNs on a RAID Group, you may create
gaps in the contiguous space across the Group’s disks (that is,
fragment the RAID Group). This leaves you with less space for new
LUNs.
You can defragment a RAID Group to compress these gaps and
provide more contiguous free space across the disks. This section
explains how to determine the amount of usable (contiguous) free
space in a RAID Group, and how to defragment a RAID Group.

To Determine the Amount of Usable Space in a RAID Group


1. Display the icon for the RAID Group you want to defragment.
One way to display the RAID Group icon is
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the RAID
Group you want to defragment.


c. Double-click the RAID Groups icon.


2. Right-click the icon for the RAID Group you want to defragment,
and click Properties.
The RAID Group Properties dialog box opens, similar to the
following. For information about the properties in the dialog box,
click Help.

The amount of unusable free space is Free Capacity minus
Largest Contiguous Free Space. If Free Capacity is at least as
large as the total user capacity of all the LUNs that you want to
bind on the RAID Group, then the Group has enough total space
for the LUNs. However, if Largest Contiguous Free Space is less
than that total, the free space is too fragmented to bind the LUNs,
and you need to defragment the RAID Group.
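The decision described above is simple arithmetic; a minimal sketch with capacities in whole gigabytes (the numbers in the example call are hypothetical):

```shell
# Decide what to do before binding new LUNs on a RAID Group:
# compare the needed capacity against total free and contiguous free space.
needs_defrag() {
  free=$1      # Free Capacity of the RAID Group
  contig=$2    # Largest Contiguous Free Space
  needed=$3    # total user capacity of the LUNs to bind
  if [ "$needed" -gt "$free" ]; then
    echo "expand"        # not enough total space: expand the RAID Group
  elif [ "$needed" -gt "$contig" ]; then
    echo "defragment"    # enough space, but it is fragmented
  else
    echo "bind"          # enough contiguous space: bind the LUNs now
  fi
}

needs_defrag 100 40 60
```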


To Defragment a RAID Group


1. If the RAID Group Properties dialog box is not open for the
RAID Group you want to defragment, open it as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the RAID
Group you want to defragment.
c. Double-click the RAID Groups icon.
d. Right-click the icon for the RAID Group you want to
defragment, and click Properties.
2. Click the Partitions tab.
The Partitions tab of the RAID Group Properties dialog box
opens, similar to the following. For information on the properties
in the dialog box, click Help.

3. Click Defragment, and then click Apply.


The defragmentation operation starts. Percent Defragmented
displays the percentage of the operation that is completed. When the
percentage is 100, the operation is finished, and you can bind
additional LUNs on the Group.

Destroying a RAID Group

Before you can destroy a RAID Group, you must unbind all the LUNs on it.
Unbinding a LUN destroys all the data on it.

Typically, you destroy a RAID Group only if you want to do either of
the following:
• Use its disks in a different RAID Group.
• Exchange its disks for disks with a different capacity.
In either situation, you should make sure that the RAID
Group contains the disks that you want.
This section describes how to:
• Determine which disks make up a RAID Group
• Destroy a RAID Group

To Determine Which Disks Make up a RAID Group


1. In the Enterprise Storage dialog box, click the Equipment or
Storage tab.
2. Double-click the icon for the storage system with the RAID Group
you want to destroy.
3. Double-click the RAID Groups icon.
4. Double-click the icon for the RAID Group you want to destroy.
5. Double-click the Disks icon.
An icon for each disk in the RAID Group is displayed.


To Destroy a RAID Group


Before you can destroy a RAID Group, you must unbind all LUNs on
it.

! CAUTION
Unbinding a LUN destroys any data on it. Before unbinding a
LUN, make a backup copy of any data on it that you want to retain.
Do not unbind the last LUN owned by an SP connected to a
NetWare or Solaris server unless it is absolutely necessary. If you
do unbind it, you will have to do the following:

NetWare server - Refer to the Release Notice for the NetWare
Navisphere Agent for information on how to bind the first LUN.

Solaris server - Edit the Agent configuration file on the servers
connected to the SPs that you will use to bind LUNs. For
information on editing this file, see the section Verifying or Editing
Device Information in the Host Agent Configuration File
(Non-FC4700 storage systems) on page 7-38.

1. For each server with access to any LUN in the RAID Group that
you want to destroy, follow the step below for the operating
system running on the server:
AIX or HP-UX:
a. Unmount all file systems on the server associated with each
LUN in the RAID Group.
b. Deactivate and then export the volume group associated with
each LUN.
NetWare:
a. Unmount all volumes on all partitions that are associated with
each LUN in the RAID Group.
b. Delete these volumes and partitions.
Solaris:
Unmount all partitions that are associated with each LUN in the
RAID Group.


Windows:
Stop all processes on the partitions associated with each LUN in
the RAID Group and delete the partitions.
2. Display the icon for the RAID Group that you want to destroy.
One way to display the RAID Group icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the RAID
Group you want to destroy.
c. Double-click the RAID Groups icon.
3. Unbind the LUNs in the RAID Group you want to destroy as
follows:
a. Double-click the icon for the RAID Group you want to destroy.
b. For each LUN in the RAID Group, right-click its icon, click
Unbind LUN, and then click Yes in the confirmation dialog
box that opens.
4. Right-click the icon for the RAID Group to destroy, and click
Destroy.
A confirmation dialog box opens warning you that destroying a
RAID Group destroys all data stored on the Group and asking
you to confirm the destroy operation.
5. Click Yes to confirm the operation.


Reconfiguring Storage Groups


After you create a Storage Group, you can change all its properties:
• Name
• Sharing
• LUNs comprising it
• Servers connected to it

Changing the Name or Sharing State of a Storage Group


You can change the name of a Storage Group at any time.
You can change the sharing state of a Storage Group from dedicated
to sharable at any time, and from sharable to dedicated only if it is
connected to just one server.
If you want to dedicate a Storage Group that is currently connected
to multiple servers to just one server, you must first disconnect it
from all servers, or from all except the server to which you want to
dedicate it, before you can change its sharing state from sharable to
dedicated.

To Change the Name or Sharing State of a Storage Group


You can change the name or sharing state of a Storage Group using its
Properties dialog box.
1. Display the icon for the Storage Group whose name or sharing
state you want to change.
One way to display the Storage Group icon is
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the Storage
Group whose name or sharing state you want to change.
c. Double-click the Storage Groups icon.
2. Right-click the icon for the Storage Group whose name or sharing
state you want to change, and click Properties.


The Storage Group Properties dialog box opens, similar to the
following. For information on the properties in the dialog box,
click Help.

3. If you want to change the group’s name, in Storage Group, type
the new name.
4. If you want to change the group’s sharing state, under Sharing,
select either Dedicated or Sharable.
5. Click OK to save your changes and close the Storage Group
Properties dialog box.


Adding or Removing LUNs from Storage Groups

Removing a LUN from a Storage Group makes the LUN inaccessible to the
servers connected to the Storage Group. Adding a LUN to a Storage Group
makes the LUN accessible to the servers connected to the Storage Group.

You can add a selected LUN to or remove a selected LUN from one or
more Storage Groups using the Select Storage Groups dialog box,
which you display from the LUN icon (page 12-38).
You can add one or more LUNs to or remove one or more LUNs from a
selected Storage Group using the Modify Storage Group dialog box,
which you display from the Storage Group icon (page 12-39).

To Add or Remove a Single LUN from One or More Storage Groups


The following procedure assumes that you have already created the
LUN that you want to add to the Storage Group.
1. Display the icon for the LUN that you want to add to or remove
from the Storage Group.
One way to display the LUN icon is
a. In the Enterprise Storage dialog box, click the Hosts tab.
b. Double-click the icon for a server connected to the Storage
Group containing the LUN you want to add or remove.
c. Double-click the LUNs icon.
2. Right-click the icon for the LUN that you want to add to or
remove from the Storage Group, and click Add to Storage
Groups.


The Add LUN to Selected Storage Groups dialog box opens,
similar to the following. For information on the properties in the
dialog box, click Help.

3. If you want to add the LUN to Storage Groups, in the Available
Storage Groups list, click the Storage Groups to which to add the
LUN, and click →.
The Storage Groups move into the Selected Storage Groups list.
4. If you want to remove the LUN from Storage Groups, in the
Selected Storage Groups list, click the Storage Groups from
which to remove the LUN, and click ← .
The Storage Groups move into the Available Storage Groups list.
5. Click OK.
The Storage and Hosts trees are updated to reflect the change.

To Add or Remove One or More LUNs from a Single Storage Group


1. Display the icon for the Storage Group to which you want to add
or remove a LUN.
One way to display the Storage Group icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for a storage system with the LUN that
you want to add to or remove from the Storage Group.
c. Double-click the Storage Groups icon.


2. Right-click the icon for the Storage Group to which you want to
add or remove the LUN, and click Properties.
A Properties dialog box for the Storage Group opens.
3. Click Select LUNs.
A Modify Storage Group dialog box opens, similar to the
following.

4. If you want to add LUNs to the selected Storage Group, do the
following:
a. If any LUN that you want to add does not exist, click New
LUN and create the LUN.
For information on creating a new LUN, see the section
Creating LUNs on RAID Groups on page 7-27.
b. Under Select LUNs for Storage Group in the Unassigned
LUNs list, click the LUNs to add to the group, and click →.
The LUNs move into the Selected LUNs list.


5. If you want to remove LUNs from the selected Storage Group,
under Select LUNs for Storage Group in the Selected LUNs list,
click the LUNs you want to remove from the group, and click ←.
The LUNs move into the Unassigned LUNs list.
6. Click OK.
The Storage and Hosts trees are updated to reflect the change.

Connecting Servers to a Storage Group or Disconnecting Servers from a Storage Group

Connecting a server to a Storage Group makes the LUNs in the Storage
Group accessible to the server. Disconnecting a server from a Storage Group
makes the LUNs in the Storage Group inaccessible to the server.

1. Open the Connect Hosts to Storage dialog box in one of the
following ways:
• Right-click the icon for the storage system with the Storage
Group you want to connect to or disconnect from servers, and
click Connect Hosts.
• Right-click the icon for the Storage Group you want to connect
to or disconnect from servers, and click Connect Hosts.
• Right-click the host icon for the server that you want to
connect to or disconnect from a Storage Group, and click
Connect Storage.


The Connect Hosts to Storage dialog box opens, similar to the
following.

2. In Storage System, click the storage system with the Storage
Group that you want to connect to or disconnect from a server.
All servers connected to the selected storage system, but not
connected to one of its Storage Groups, are in the Available Hosts
list. All servers connected to the selected Storage Group are in the
Hosts to be Connected list.

3. For the servers that you want to connect to the selected Storage
Group, do the following:
a. If a server is connected to a different Storage Group, select the
Show Hosts Connected to Other Storage Groups check box.
All servers connected to Storage Groups on the storage system
are listed in the Available Hosts list.


A server can be connected to only one Storage Group. If a server is
already connected to a Storage Group when you connect it to the
selected Storage Group, it is disconnected from the first Storage
Group and can no longer access the LUNs in that group.

b. In the Available Hosts list, click the servers to connect to the
selected Storage Group, and click ↓.
The selected servers move down to the Hosts to be Connected
list.
4. For the servers that you want to disconnect from the selected
Storage Group, do the following:
a. In the Hosts to be Connected list, click the servers to
disconnect from the selected Storage Group, and click ↑ .
The selected servers move up to the Available Hosts list.
b. Click Apply to apply your changes.
5. Verify the connection paths to the servers you want to connect to
the Storage Group as follows:
a. Open the Connect Hosts to Storage dialog box in one of the
following ways:
• Right-click the icon for the storage system with the Storage
Group you want to connect to or disconnect from servers,
and click Connect Hosts.
• Right-click the icon for the Storage Group you want to
connect to or disconnect from servers, and click Connect
Hosts.
• Right-click the host icon for the server that you want to
connect to or disconnect from a Storage Group, and click
Connect Storage.
The Connect Hosts to Storage dialog box opens.
b. In Hosts to be Connected, select all the servers, and click
Advanced.


An advanced Connect Hosts to Storage dialog box opens,
similar to the following.

c. In the Selected Host Connection Paths list, look for an
enabled path for each SP in the storage system to each selected
server.
A path is enabled if its check box is checked. If each SP has an
enabled path, then the server is connected to the Storage
Group.


If an SP has a disabled path, then a working physical
connection between the SP and an HBA port in the server
existed at one time, but this connection is currently broken. If
no path to an SP exists, then a working connection never
existed between the SP and the HBA port in the server. In
either situation, make sure that:
• The HBA port is working.
• The switch is working and it is configured to connect the
HBA port to the SP. See the switch documentation for
information on the switch and how to configure it.
• The cables between the HBA port and the switch, and
between the switch and the SP are fastened securely.
• If the Selected Host Connection Path list contains more
than one path for the same SP ID, then either the SP or the
HBA to which it was connected was replaced, and the
information about the connection between the SP and the
HBA was never removed from the storage system’s
persistent memory.
• Whenever a storage-system server is rebooted, the Agent
scans the network for HBA port connections to storage
systems. When it finds a connection, it sends information
about the connection to the SP. The SP stores this
information in the storage system’s persistent memory on
the database disks. This information remains in this
memory until you issue a CLI port command to remove it.
See the Agent and CLI manual for information on the port
command.
6. Click OK to close the dialog box.
7. In the confirmation dialog box that opens, click Yes to confirm
your changes and close the Connect Hosts to Storage dialog box.
The Storage and Hosts trees are updated to reflect the change.
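The connection records described in the note above can also be inspected from the command line. The following is a minimal sketch, assuming the Navisphere CLI is installed and that sp_a is the network name of one of the storage system's SPs (both are placeholders); verify the exact command options against the Agent and CLI manual for your revision:

```shell
# List the HBA-to-SP connection records that the storage system
# holds in persistent memory on the database disks
# ("sp_a" is a placeholder for your SP's network name)
navicli -h sp_a port -list
```

A record that appears in this listing but has no working physical path behind it is the kind of stale entry described above.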


Destroying Storage Groups

All servers connected to a Storage Group lose access to the LUNs in the
Storage Group after you destroy the group. The LUNs in a Storage Group are
not unbound when you destroy it.

To Destroy a Single Storage Group
1. Display the icon for the Storage Group you want to destroy.
One way to display the Storage Group icon is as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for a storage system with the Storage
Group you want to destroy.
c. Double-click the Storage Groups icon.
2. Right-click the icon for the Storage Group you want to destroy,
and click Destroy.
3. In the confirmation dialog box, click Yes to destroy the Storage
Group and close the Destroy Storage Groups dialog box.
The Storage and Hosts trees are updated to reflect the change.

To Destroy One or More Storage Groups
1. In the Enterprise Storage dialog box, click the Storage tab.
2. Right-click the icon for the storage system with the Storage
Groups you want to display, and click Properties.
The Storage System Properties dialog box opens.
3. Click the Data Access tab.


The Data Access tab in the Storage System Properties dialog box
opens, similar to the following.

4. In the Storage Groups list, select the Storage Groups that you
want to destroy, and click Destroy.
5. In the confirmation dialog box, click Yes to destroy the Storage
Groups and close the Destroy Storage Groups dialog box.
The Storage and Hosts trees are updated to reflect the change.

13
Reconfiguring Storage
Systems

After you set up a storage system, you may want to:


Reconfigure any type of storage system:
• Upgrading Storage-System Software ............................................13-2
• Upgrading a Storage System to Support Caching ......................13-7
• Replacing Disks with Higher Capacity Disks..............................13-9
Reconfigure a shared storage system:
• Connecting a New Server to a Shared Storage System ............ 13-11
• Disconnecting a Server from a Shared Storage System ............13-14


Upgrading Storage-System Software


The procedure you use to upgrade storage-system software depends
on whether you are installing the software on an FC4700 storage
system or a non-FC4700 storage system. On an FC4700 storage
system, you can upgrade Base Software and a variety of software
options, such as Access Logix and SnapView. On a non-FC4700
storage system, you can upgrade the Core Software. This type of
storage system does not have software options like an FC4700 storage
system does.
To upgrade software on an FC4700 storage system
Follow the procedure in Chapter 4, Installing Software on an FC4700
Storage System.
To upgrade software on a non-FC4700 storage system
Follow the procedure in this section.

Upgrading Software on a Non-FC4700 Storage System


You can upgrade the Core Software on a non-FC4700 storage system.
For a new revision of the Core Software to take effect, the SPs in the
storage system must be rebooted.
The Core Software media for all non-FC4700 storage systems may
also include an upgrade to the SP programmable read-only memory
(PROM) code. If PROM code is included, it is installed automatically
with the Core Software.
When you install Core Software, the SP tries to copy it to reserved
areas outside operating system control on several disks, which are
called the database disks. Having multiple copies of code offers
higher availability if a disk fails. The database disks for the different
storage-system types are listed in Table 13-1.


Table 13-1 Database Disks for Different Storage-System Types

Storage-System Type Database Disk IDs

FC series 00, 01, 02

C3x00 series A0, B0, C0, A3, A4

C2x00 series A0, B0, C0, A3

C1900 series A0, B0, C0, A1

C1000 series A0, A1, A3, B0

When you install Core Software, at least two of the database disks
must be online, and ideally, all of them should be online. A disk is
online if it is fully powered up and not faulted; that is, if Current
State is Normal on its Disk Properties dialog box. If you try to power
up the storage system without two of these disks in place, the
powerup fails.

The file for the new Core Software revision must be on a host that can be
reached across a network from the server connected to the storage systems
whose Core Software you want to upgrade.

To Upgrade Core Software
1. In the Enterprise Storage dialog box in the Main window, click
either the Equipment tab or the Storage tab to display the
storage-system tree.
2. Right-click the system or systems on which you want to install or
upgrade the Core Software.

All storage systems must be the same type.

3. From the drop-down menu, click Software Installation.


The Software Installation dialog box opens, similar to the
following.

4. If you need to add storage systems to the Storage Systems list,
click Select.
The Storage System Selection dialog box opens, similar to the
following.


5. Under Available Storage Systems, select any storage systems of
the same type that you want to add to the Storage Systems list,
and then click →.
The storage-system icon moves into Selected Storage Systems.
6. When the Selected Storage Systems list includes all the storage
systems that you want to add to the Storage Systems list, click
OK to close the Storage System Selection dialog box and return
to the Software Installation dialog box.
The Storage Systems list is updated with any changes.
7. To remove storage systems from the Storage Systems list, select
the storage system and click Remove.
8. Under Firmware Options, specify the location of the new Core
Software file, as follows:
If the file is accessible from the management station:
a. Select the File Accessed Locally check box.
b. Click Browse to locate and then select the new Core Software
file.
The name of the Core Software file displays in Filename.
If the file is on a server connected to a storage system:
a. Clear the File Accessed Locally check box.
b. In Filename(s), type the complete path name for the new Core
Software file.
Valid Core Software filenames end with the .bin extension.
c. Under Reboot Options, select one of the following check
boxes:

The SPs in a storage system must be rebooted for a new revision of
Core Software to take effect.

• No Reboot does not automatically reboot the storage
system after downloading the new Core Software.
• Warm Reboot, if available, suspends all outstanding I/O to
the storage system, restarts the SPs after downloading the
new software, and typically takes less than 30 seconds.


Warm Reboot is available only if all the storage systems in
the Storage Systems list support warm reboot. To perform
a warm reboot, you must be running Navisphere ATF.
• Hard Reboot terminates all outstanding I/O to the storage
system, restarts the SPs after downloading the new
software, and typically takes about 1 to 2 minutes. Before
selecting Hard Reboot, suspend all I/O to the storage
system, and do one of the following:
• For Windows 2000 hosts, stop all I/O traffic to the
storage system.
• For all other hosts, unmount all file systems on the
storage system to which the host has access.

If you are upgrading Core Software on multiple storage systems, you
may want to perform two download operations: one for the storage
systems that support warm reboot and one for the remaining storage
systems.

9. Click License to review the License Agreement, and click OK to
accept it.
10. Click Next to start the download operation.
After the SPs reboot, you may have to restart the Agent on the
servers connected to the storage system that received the new
Core Software.


Upgrading a Storage System to Support Caching


To upgrade a storage system to support caching, the system operator
or service person must install the necessary hardware components,
and then you must set up array caching. This section describes how
to perform each of these tasks.

Installing the Hardware Components for Caching


All storage systems support read caching. A storage system supports
write caching only if it has the required hardware, which varies with
the storage-system type, as follows:

Table 13-2 Hardware Requirements for Write Caching

FC4700, FC5600/5700
• Disks: 0-0 through 0-8
• SPs: Two
• Power supplies: Two
• LCCs: Two in DPE and each DAE
• Backup power: Fully charged SPS

FC4400/4500, FC5200/5300
• Disks: 0-0 through 0-4
• SPs: Two
• Power supplies: Two
• LCCs: Two in DPE and each DAE
• Backup power: Fully charged SPS

C1900, C2x00, C3x00
• Disks: A0, B0, C0, D0, E0
• SPs: Two, with at least 8 Mbytes of memory
• Power supplies: Two
• LCCs: Not applicable
• Backup power: Fully charged BBU

C1000
• Disks: A0 through A4
• SPs: Two, with at least 8 Mbytes of memory
• Power supplies: Two
• LCCs: Not applicable
• Backup power: Fully charged BBU

The system operator or service person can install memory modules,
disks, a BBU or SPS, and a second SP, LCC, or power supply without
powering down the storage system. For an FC series storage system,
the DPE, DAE, and SPS installation and service manuals describe
how to install an SP, LCC, power supply, and SPS. For a C series
storage system, the storage-system installation and service manual
describes how to install the SP, power supply, and BBU. If you add
disks, you will probably want to create new LUNs with them.
If you add a second SP, you may want it to own some of the LUNs.
You can switch the ownership of a LUN from one SP to the new SP
(page 12-11).


Setting Up Caching
1. Assign memory to the partitions for the caches you will use
(page 6-15).
2. Enable the storage-system (SP) caches that you will use, and set
the other storage-system cache properties (page 6-18).
3. Enable read or write caching for each LUN that you want to use
read or write caching, as follows:
a. In the Enterprise Storage dialog box, click the Storage tab.
b. Double-click the icon for the storage system with the LUN that
will use caching.
c. Double-click the SP icon that owns the LUN.
d. Right-click the LUN icon, and then click Properties.
e. Click the Cache tab.
f. Select the Read Cache Enabled check box to enable read
caching for the LUN.
g. Select the Write Cache Enabled check box to enable write
caching for the LUN.
h. Click OK to apply the changes and close the LUN Properties
dialog box.


Replacing Disks with Higher Capacity Disks


You can replace any disks in a storage system with higher capacity
disks, as long as you do not replace all disks that contain the Core
Software database at the same time. These disks are shown as
follows:

Table 13-3 Database Disks

Storage-System Type Database Disk IDs

FC series 0-0, 0-1, 0-2

C3x00 series A0, B0, C0, A3, A4

C2x00 series A0, B0, C0, A3

C1900 series A0, B0, C0, A1

C1000 series A0, A1, A3, B0

This section describes how to replace a group of disks that either:


• Does not include all the database disks.
• Does include all the database disks.

To Replace a Group of Disks That Does Not Include All the Database Disks

You do not need to power off the storage system during the following
procedure.

1. For a non-RAID Group storage system, unbind the LUNs whose
disks you want to replace (page 12-14).
2. For a RAID Group storage system, destroy the RAID Groups
whose disks you want to replace (page 12-33).
3. Replace each disk one at a time; that is, remove the disk and then
insert the replacement disk before removing another disk.
4. Bind the disks into the desired LUNs (page 7-10 for a non-RAID
Group storage system; page 7-27 for a RAID Group storage
system).
If you replaced one or more database disks, the SP copies the Core
Software from another database disk to the replacement disks.


5. Make the LUNs available to the operating system on the
storage-system server as described in the server setup manual for
the storage system.
6. Restore any data you backed up from the original LUNs.

To Replace a Group of Disks That Does Include All the Database Disks

! CAUTION
Do not power off the storage system during the following
procedure.

1. For a non-RAID Group storage system, unbind the LUNs whose
disks you want to replace (page 12-14).
2. For a RAID Group storage system, destroy the RAID Groups
whose disks you want to replace (page 12-33).
3. Replace each disk one at a time, except for disk 0-0 in an FC-series
storage system or disk A0 in a C-series storage system.
4. Download Core Software to the storage system (page 13-2).
5. Replace disk 0-0 in an FC-series storage system or disk A0 in a
C-series storage system.
6. Bind the disks into the desired LUNs (page 7-10 for a non-RAID
Group storage system; page 7-27 for a RAID Group storage
system).
When you bind the LUNs, the SP copies the Core Software to the
0-0 or A0 disk from the other database disks.
7. Make the LUNs available to the operating system on the
storage-system server as described in the server setup manual for
the storage system.
8. Restore any data you backed up from the original LUNs.


Connecting a New Server to a Shared Storage System


The procedure that follows assumes that the new server is physically
connected to the storage system through a switch.
1. Manage the new server as follows:
a. On the Main window menu bar, select the File menu and click
Select Agents.
An Agent Selection dialog box opens, similar to the
following:

b. Type the name of the new Host Agent in the Agent to Add
box, and click →.
The Host Agent displays in the Managed Agents box.
c. Click OK to manage the Host Agent and close the dialog box.
An icon for each Host Agent displays in the Equipment and
Hosts trees.


2. In the Enterprise Storage dialog box, click the Hosts tab.


3. Right-click the icon for the new Host Agent, and click Properties.
The Host Properties dialog box opens.
4. Click the Storage tab.
The Host Properties - Storage tab opens, similar to the following.
For information on the properties in the dialog box, click Help.

5. In the Storage Groups list, select the Storage Group that you
want to connect to the new server and click Connect Storage.


The Connect Hosts to Storage dialog box opens, similar to the
following.

6. In the Available Hosts list, select the new server and click ↓.
The new server moves into the Hosts to be Connected list.
7. Click OK to apply your changes.
8. In the confirmation dialog box that opens, click Yes to connect the
new server to the Storage Group.


Disconnecting a Server from a Shared Storage System


1. In the Enterprise Storage dialog box, click the Equipment or
Hosts tab.
2. Right-click the icon for the shared storage system you want to
disconnect from the server, and click Connect Hosts.
The Connect Hosts to Storage dialog box opens, similar to the
following.

3. In the Hosts to be Connected list, select the server to disconnect
from the storage system, and click ↑.
The selected server moves to the Available Hosts list.
4. Click OK to apply your change and close the dialog box.


If you are physically disconnecting the server from the storage
system, you should remove, from the storage system's persistent
memory, the connection paths from the server's HBA ports to the
storage system. To do so, use the CLI port command. See the
CLI manual for information on the port command.
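As a hedged illustration only, removing a stale record with the port command might look like the following; the SP name and HBA UID shown are placeholders, and the option names can differ by CLI revision, so confirm them against the CLI manual:

```shell
# Remove the persistent-memory record for an HBA port that is no
# longer physically connected to the storage system.
# The UID value is a placeholder; take the real one from
# "port -list" output.
navicli -h sp_a port -removeHBA \
    -hbauid 20:00:00:00:c9:20:d8:12:10:00:00:00:c9:20:d8:12
```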

A
Troubleshooting
Manager Problems

This appendix describes how to troubleshoot problems that may
occur when you run Manager. It lists problems and suggests actions
that may help you resolve or isolate each problem.


Unable to Connect to a Storage-System Server or Time Out Connecting
to a Storage-System Server

Make sure that the following conditions are met:
• The Agent is running (and not in the process of starting up) on
the server connected to the storage system you are trying to
manage, and the correct device name for the storage system is
entered in the agent configuration file.
• Fibre Channel cabling between the server and the storage system
is connected correctly.
• The server can perform I/O operations to the storage system.
• Ping the server. If you cannot ping the server, check the network
configuration as follows:
1. Determine how the network is configured by running
ipconfig /all.
2. If the network uses DNS or WINS for name resolution, ping the
DNS or WINS server to make sure it is reachable over the
network.
If the DNS or WINS server is not reachable, and you want to
manage locally attached storage systems only (that is, Manager
and the Agent are both on the storage-system server), you can
enter localhost instead of the hostname in the Host Selection
dialog box. You can temporarily manage storage systems on any
server that you can ping by entering its IP address in the hosts file
on the management station. If you are using DHCP, remove any
server IP addresses you added to the hosts file as soon as you
configure their storage systems. The default directory for the
hosts file is C:\winnt\system32\drivers\etc on a Windows
management station.
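The network checks above can be run from a command prompt on the management station; the hostnames and address below are placeholders for your own values:

```shell
# Inspect the network configuration, including DNS/WINS servers
ipconfig /all

# Verify that the name server is reachable over the network
ping dns1.example.com

# If name resolution is broken, a temporary entry in
# C:\winnt\system32\drivers\etc\hosts lets you manage the server
# by name until DNS/WINS is repaired, for example:
#   192.168.1.20   storage-server1
```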

Dual Board Unbind Error


Make sure that the configuration file for the Agent on the server for
the storage system has a path to the SP that currently owns the LUNs
that you are trying to unbind. For information on the Agent
configuration file, see the Agent manual.


Caller Not Privileged


Make sure that the configuration file for the Agent on the server for
the storage system has a privileged user entry for the user account on
the management station that started the Manager session. For
information on the Agent configuration file, see the Agent manual.
For a shared system with configuration access control enabled, make
sure that you are managing the storage system through a server for
which the storage system has configuration access enabled. The
Configuration Access tab on the Storage System Properties dialog
box tells you whether the storage system has configuration access
control enabled; and, if it does, the servers for which it has
configuration access enabled.

LUN Not Visible on a Solaris Server


Make sure the sd.conf file has an entry for the LUN. For information
on editing this file, see the storage-system server setup manual for
Solaris environments.
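As an illustration only, an sd.conf entry for a LUN at target 1, LUN 2 might look like the following; the target and LUN numbers are placeholders that must match your configuration, and the authoritative file format is the one shown in the Solaris server setup manual:

```
# /kernel/drv/sd.conf -- make LUN 2 on SCSI target 1 visible to Solaris
name="sd" class="scsi" target=1 lun=2;
```

After editing the file, a reconfiguration reboot (boot -r) is typically required before the new LUN appears.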

LUNs Are Unowned


Being unowned is an abnormal state for any LUN except a hot spare.
If you have an unowned LUN that is not a hot spare, go to the section
below for the appropriate LUN type.
For any unowned LUN - Do the following:
• Make sure that the SP that is the default owner of the LUN is
present in the storage system and has a host connection. If this SP
does not exist, you can use the CLI trespass command to give the
existing SP ownership of the LUN for the current Manager
session, as well as after the existing SP is rebooted.
• Make sure the Agent configuration file has the correct path to the
LUN's default SP.
• If the LUN is a RAID 5, RAID 3, RAID 1/0, or RAID 0 type, check
the storage system’s Equipment tree for failed CRUs. If it has
more than one failed CRU, follow these steps:
a. Replace the failed CRUs.


b. Unbind and then rebind the LUN.


c. Restore the data on the LUN from a backup copy.
• If the LUN is a RAID 1 or Disk type, check the storage system’s
Equipment tree for failed CRUs. If it has any failed CRUs, follow
these steps:
a. Replace the failed CRUs.
b. Unbind and then rebind the LUN.
c. Restore the data on the LUN from a backup copy.
Non-RAID 3 LUN - Make sure that mixed mode is enabled for the
storage system with the LUN.
RAID 3 - Check the SP event log for an entry stating Can't Assign.
No R3 Memory for the storage system. If you see this error, allocate 8
Mbytes of storage-system memory per RAID 3 LUN to the RAID 3
partition.
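Where the CLI is available, the trespass operation mentioned above might be issued as in this sketch; the SP name and LUN number are placeholders, and the exact trespass syntax varies by storage-system type, so confirm it in the CLI manual:

```shell
# Give the surviving SP ownership of unowned LUN 3
# ("sp_a" is a placeholder for the network name of the SP that
#  should take ownership)
navicli -h sp_a trespass lun 3
```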

Enclosure x: Bypass Error


Make sure that the DAE with enclosure address x has operational
disk modules in two adjacent slots. These disks provide the re-timing
that is required to maintain the integrity of the Fibre Channel loops.

Index

A what to do when orange 11-9


access control enabled property ATF (Application-Transparent Failover), status
defined 8-2 11-21
setting 8-2 automatic destroy property
accessible storage systems 3-11 defined 7-21
activating remote mirrors, general 9-27 setting
admsnap command 10-6, 10-19 after creating RAID Group 12-25
Agent when creating RAID Group 7-26
editing configuration file automatic polling
AIX server 7-38, 8-15 description of 11-4
HP-UX server 7-38, 8-15 properties, setting 11-4
NetWare server 7-38, 8-15 automatic polling option
Solaris server 7-39, 8-16 default value 2-9
Windows server 7-40, 8-17 defined 2-8
Remote SP Agent automatic polling priority property
configuring, FC4700 series 5-2 default value 11-4
agent configuration file, user entry 5-4 defined 6-10
Agent Selection dialog box 2-11 setting 6-11
opening 2-11
using to select storage systems for B
management 2-13, 2-15 Base Software
allocating the write intent log 9-16 committing on FC4700 storage system 4-13
ALPA property of SP 6-28 reverting to previous on FC4700 storage
application configuration system 4-14
default configuration file, changing name upgrading on FC4700 storage system 4-2, 4-4
and location of 3-37 basic storage component icons 3-19
default, using 2-6 battery
defined 3-36 self-test 6-30
saving to see also BBU (battery back up) or SPS
custom configuration file 3-37 (standby power supply)
default configuration file 3-37 Battery Test Time dialog box 6-31
application icon BBU (battery backup unit)
function 3-31 failure states 11-20


menu 3-29 configuration access control, defined 6-2


properties, displaying 11-26 configuration access for a server, enabling 6-6
self-test 6-30 configuration server, defined 1-10
Bind LUN dialog boxes configuring SP snapshot cache 10-12
Advanced Bind LUN 7-15 Connect Hosts to Storage dialog box, advanced
Advanced Bind LUN (RAID Groups) 7-33 12-44
Bind LUN 7-11 connected hosts property, defined 8-5
Bind LUN (RAID Groups) 7-29 connectivity map
binding LUNs about 3-6
custom components 3-6
non-RAID Group storage system 7-13 Detailed View window 3-7
RAID Group storage system 7-32 displaying the connections between hosts
snapshot cache 10-9 and storage systems 3-7
standard constant prefetch property, defined 12-7
non-RAID Group storage system 7-10 containers 3-10
RAID Group storage system 7-28 Core Software, upgrading on non-FC4700 storage
buttons system 13-3
disk IDs 3-9 Create RAID Group dialog box
Help advanced 7-24
Detailed View 3-9 basic 7-22
Main window 3-33 Create Snapshot dialog box 10-10
LUN devices 3-9 Create Storage Group dialog box 8-7
LUN ownership 3-9 creating a remote mirror 9-19
Poll 3-33 advanced 9-23
Select View 3-9 basic 9-20
Software Installation 3-33
D
C data access control for storage system
C1000, icon for 3-15 defined 8-2
cache enabling 8-2
snapshot 3-22 database disks 13-2
snapshot cache size 10-9 deactivating remote mirrors 9-40
caching default owner property
LUN defined 7-6
enabling or disabling setting after binding LUN 12-11
read caching 12-3 setting when binding LUN
write caching 12-3 non-RAID Group storage system 7-18
storage-system RAID Group storage system 7-35
enabling or disabling defragmenting a RAID Group 12-30
read cache 6-23 destroying
write cache 6-23 a RAID Group 12-33
caller not privileged error A-3 remote mirrors 9-51
CDE (CLARiiON Driver Extensions), status 11-21 Detailed View window
clearing events in events window 11-37 containers 3-10
configuration access control property defined 3-7
defined 6-3 workspace 3-10


device information in Agent configuration file, Storage System Selection 4-5, 13-4
editing User Options 2-9, 3-37
AIX server 7-38, 8-15 disable size property, defined 12-8
HP-UX server 7-38, 8-15 disk IDs button 3-9
NetWare server 7-38, 8-15 Disk Selection dialog box 7-16
Solaris server 7-39, 8-16 disk type, defined 7-3
Windows server 7-40, 8-17 disk-array storage system, see storage system
dialog boxes disks
Advanced Bind LUN 7-15 cache vault 11-14
Advanced Bind LUN (RAID Groups) 7-33 database 13-2
Agent Selection 2-11 failure states 11-11
Battery Test Time 6-31 faulted 11-11
Bind LUN 7-11 rebuilding on hot spare 11-13
Bind LUN (RAID Groups) 7-29 icons for 3-21
Connect Hosts to Storage, advanced 12-44 what to do when orange 11-11
Create RAID Group in LUN 12-15
advanced 7-24 in RAID Group 12-33
basic 7-22 increasing capacity of 13-9
Create Snapshot 10-10 menu 3-24, 3-25
Create Storage Group 8-7 number in RAID types 7-3
Disk Properties 13-3 properties, displaying 11-26
Disk Selection 7-16 displaying the connectivity map 3-6
Enable Management Login 6-6 drive fan pack, faulted 11-16
Enterprise Storage 3-34 dual board unbind error A-2
Failover Status 11-21
Fault Status Report 11-10
E
Host Properties 5-6
element size property
Host Properties, Storage tab 12-16
defined 7-4
LUN Properties
setting when binding LUN
Cache tab 12-4
non-RAID Group storage system 7-18
General tab 12-6
RAID Group storage system 7-34
Prefetch tab 12-10
enable auto assign property
RAID Group Properties
defined 7-7
General tab 12-26
setting when binding LUN
Partitions tab 12-32
non-RAID Group storage system 7-18
Software Installation 4-4, 13-4
RAID Group storage system 7-35
SP Properties 5-2
enable automatic polling property
Storage Group Properties
default value 11-4
Advanced tab 8-12
defined 6-10
General tab 8-11
setting 6-11
Storage System Properties 4-11
Enable Management Login dialog box 6-6
Cache tab 6-22
enable read cache property
Configuration Access tab 6-6
defined 7-6
Data Access tab 8-3
setting after binding LUN 12-3
General tab 6-12
setting when binding LUN
Hosts tab 6-25
non-RAID Group storage system 7-18
Memory tab 6-15


RAID Group storage system 7-34 expansion/defragmentation priority property


enable watermark processing property, defined defined 7-21
6-19 setting after creating RAID Group 12-25
enable write cache property setting when creating RAID Group 7-26
defined 7-7
setting after binding LUN 12-3
F
setting when binding LUN
failover software, status 11-21
non-RAID Group storage system 7-18
Failover Status dialog box 11-21
RAID Group storage system 7-35
fair access to storage system, defined 6-24
enclosure
fan module, faulted 11-17
icons for 3-27
fans
menu 3-29
faulted 11-16
enclosure x bypass error A-4
drive fan pack 11-16
enforce fair access property, defined 6-24
fan module 11-17
Enterprise Storage dialog boxes 3-34
SP fan pack 11-16
activating 3-35
icons for 3-27
closing 3-35
what do when orange 11-16
Equipment tab 3-2
menu 3-29
Hosts tab 3-2
Fault Status Report dialog box 11-10
opening 3-35
faults
Storage tab 3-2
disk 11-11
Equipment tree 3-2
fan 11-16
event log
LCC 11-15
clearing 11-32
power supply 11-18
displaying
SP 11-14
all events 11-32
SPS 11-19
events for specific component or date
storage system 11-10
and time 11-32
FC4300/4500, icon for 3-14
opening 11-29
FC4700
printing 11-33
icon for 3-14
saving 11-33
installing new software 4-4
viewing 11-34
installing or upgrading software on 4-2
events
FC5000, icon for 3-14
clearing 11-37
FC5200/5300, icon for 3-14
filtering 11-35
FC5600/5700, icon for 3-14
printing 11-37
FC-series storage system, defined 1-2
saving to log file 11-37
File menu 3-31
viewing 11-34
Filter By filter, SP Event Log dialog box 11-31
viewing details 11-36
Filter For filter, SP Event Log dialog box 11-31
Events Timeline
filtering events 11-35
overview 11-39
filters for displaying events 11-31
time intervals 11-39
filters for displaying managed storage systems
toolbar 11-39
3-35
viewing 11-39
Free Memory partition 6-16
expanding a RAID Group 12-27


H
Help
    button
        Detailed View 3-9
        Main window 3-33
    menu 3-33
high watermark property, defined 6-19
host, see server
host access status property
    defined 6-3
    setting 6-6
Host Connections button 3-9
host file path option
    default value 2-9
    defined 2-8
Host Properties dialog box
    displayed 5-6
    Storage tab 12-16
Hosts tree 3-2
hot spare
    defined 7-3
    disks that cannot be 7-4
    icon for 3-20
    rebuilding failed disk 11-13
    type, defined 7-3

I
icons 3-12, 3-14
    application 3-31
        what to do when orange 11-9
    basic storage components 3-19
    color of 3-12
    disks 3-21
        what to do when orange 11-11
    enclosure 3-27
    fans 3-27
        what to do when orange 11-16
    LCCs 3-27
        what to do when orange 11-15
    letter in 3-12
    LUNs 3-19
    MirrorView components 3-22
    power supplies 3-27
        what to do when orange 11-18
    RAID Groups 3-20
    remote mirror 3-22
    server 3-12, 3-13
    server HBAs 3-12
    snapshots 3-23
    SnapView components 3-22
    SPS 3-28
        what to do when orange 11-19
    SPs 3-20
        what to do when orange 11-14
    storage components 3-23
    Storage Group 3-19
    storage system 3-14
        what to do when orange 11-9
    VSCs 3-28
        what to do when orange 11-18
idle count property, defined 12-8
inaccessible storage system, defined 3-11
individual disk
    defined 7-3
    icon for 3-20
installing Manager 2-3

L
LCCs (link control cards)
    faulted 11-15
    icons for 3-27
        what to do when orange 11-15
    menu 3-29
link control card, see LCCs (link control cards)
log file
    opening 11-34
    printing events 11-37
    saving events 11-37
logs, event, viewing 11-34
low watermark property, defined 6-19
LUN devices button 3-9
LUN ownership button 3-9
LUN properties
    default owner
        defined 7-6
        setting after binding LUN 12-11
        setting when binding LUN
            non-RAID Group storage system 7-18
            RAID Group storage system 7-35
    default values 7-8
    defined 7-4


    displaying 11-24
    element size
        defined 7-4
        setting when binding LUN
            non-RAID Group storage system 7-18
            RAID Group storage system 7-34
    enable auto assign
        defined 7-7
        setting when binding LUN
            non-RAID Group storage system 7-18
            RAID Group storage system 7-35
    enable read cache
        defined 7-6
        setting after binding LUN 12-3
        setting when binding LUN
            non-RAID Group storage system 7-18
            RAID Group storage system 7-34
    enable write cache
        defined 7-7
        setting after binding LUN 12-3
        setting when binding LUN
            non-RAID Group storage system 7-18
            RAID Group storage system 7-35
    for different RAID types 7-8
    prefetch
        available values 12-9
        default values 12-8
        defined 12-7
        setting 12-9
    rebuild priority
        defined 7-5
        setting when binding LUN
            non-RAID Group storage system 7-18
            RAID Group storage system 7-34
    verify priority
        defined 7-5
        setting when binding LUN
            non-RAID Group storage system 7-18
            RAID Group storage system 7-34
LUN Properties dialog box
    Cache tab 12-4
    General tab 12-6
    Prefetch tab 12-10
LUNs (logical units)
    binding custom on
        non-RAID Group storage system 7-13
        RAID Group storage system 7-32
    binding standard on
        non-RAID Group storage system 7-10
        RAID Group storage system 7-28
    default ownership, transferring to other SP 12-11
    defined 7-2
    disks in 12-15
    eligibility for mirroring 9-21
    icon for snapshots 3-23
    icons for 3-19
    in Storage Group property, defined 8-5
    maximum number in RAID Group 7-21
    menu 3-24, 3-25
    not visible on Solaris server
        troubleshooting A-3
    properties, see LUN properties
    RAID types, defined 7-2
    read caching, enabling or disabling 12-3
    rebuilding failed disk on hot spare 11-13
    reconfiguring 12-2
    unbinding 12-14
    unowned
        defined 3-20
        troubleshooting A-3
    user capacity of, changing 12-23
    write caching, enabling or disabling 12-3
    see also LUN properties

M
Main window
    application configuration
        default configuration file, changing
            name and location of 3-37
        default, using 2-6
        defined 3-36
        saving to
            custom configuration file 3-37
            default configuration file 3-37
    application icon 3-31
    Enterprise Storage dialog boxes 3-34


    menu bar 3-31
        File menu 3-31
        Help menu 3-33
        Operations menu 3-32
        Window menu 3-33
    status bar 3-36
    toolbar 3-33
    workspace 3-10, 3-34
managed storage system
    defined 1-2
    filters 3-35
    see also storage system
management station, defined 1-3
Manager
    automatic polling interval property, setting 11-5
    automatic polling option
        default value 2-9
        setting 2-9
    configuration management 1-6
    fault and problem monitoring 1-7
    hardware and software requirements 2-3
    host file path option
        default value 2-9
        defined 2-8
        setting 2-9
    installing 2-3
    network time-out option
        default value 2-9
        setting 2-9
    polling for storage-system information 11-2
    polling interval option
        default value 2-9
        setting 2-9
    problems A-1
        caller not privileged error A-3
        dual board unbind error A-2
        enclosure x bypass error A-4
        LUN not visible on Solaris server A-3
        server connection timeout A-2
        storage system connection timeout A-2
        unowned LUNs A-3
    removing 2-2
    save file path option, defined 2-8
    starting a session 2-5
    updating storage-system information 11-2
managing storage systems on
    selected agents on subnets 2-13, 2-15
    servers whose names you know 2-12, 2-16
manual polling
    description of 11-4
    storage systems 11-7
maximum prefetch property, defined 12-8
memory
    partitions
        assigning memory to 6-15
        Free Memory 6-16
        RAID 3 Memory 6-17
        SP A Read Cache Memory partition 6-16
        SP B Read Cache Memory 6-16
        SP Usage 6-16
        Write Cache Memory 6-17
    total 6-16
menu bar 3-31
    File menu 3-31
    Help menu 3-33
    Operations menu 3-32
    View menu 3-32
    Window menu 3-33
menu options
    remote mirror 3-25
menus
    displaying for storage system 3-16
    for snapshot session 3-26
    for snapshots 3-26
    for storage-system components 3-23, 3-28
        BBU 3-29
        disk 3-24, 3-25
        enclosure 3-29
        fan 3-29
        LCC 3-29
        LUNs 3-24, 3-25
        RAID Group 3-25
        RAID group 3-24
        SPS 3-29
        SPs 3-24, 3-25
        storage groups 3-23, 3-25
    on menu bar
        File 3-31
        Help 3-33
        Operation 3-32
        View 3-32
        Window 3-33


    remote mirror image 3-26
    storage groups 3-25
mirrored write cache property, defined 6-20
MirrorView
    component icons 3-22
    handling failures 9-10
    installing on an FC4700 storage system 4-4
    managing connections 9-37
    overview 9-2
    status of logical connections 9-38
    storage components for 3-18
    terminology 9-3
modifying remote mirrors 9-29
multiple storage systems, menu options 3-18

N
Navisphere Agent, see Agent
Navisphere Supervisor, removing 2-2
network timeout option, default value 2-9
non-FC4700 storage system
    upgrading storage-system software with Manager 5X 13-2
non-RAID Group storage system
    binding LUNs
        custom 7-13
        standard 7-10
non-RAID Group storage system, binding LUNs 7-10

O
Operations menu 3-32

P
packages 4-2
    active status 4-13
page size property, defined 6-19
partitions on RAID Group 12-32
Poll button 3-33
polling
    Agent by Manager 11-2
    automatic 11-4
        description of 11-4
        properties, setting 11-4
        see also automatic polling
    manual 11-7
        description of 11-4
polling interval option
    default value 2-9
    defined 2-8
power supplies
    faulted 11-18
    icons for 3-27
        what to do when orange 11-18
    state, displaying 11-27
prefetch multiplier property, defined 12-7
prefetch properties
    available values 12-9
    default values 12-8
    defined 12-7
prefetch size property, defined 12-7
prefetch type property
    defined 12-7
    setting 12-9
printing events 11-37
PROM code, included with Core Software 13-2
promotion of secondary image 9-9
properties
    access control enabled
        defined 8-2
        setting 8-2
    automatic polling interval
        default value 11-4
        setting 11-5
    automatic polling priority
        defined 6-10
        setting 6-11
    automatic polling, setting 11-4
    BBU, displaying 11-26
    configuration access control
        defined 6-3
    connected hosts, defined 8-5
    disk, displaying 11-26
    enable automatic polling
        default value 11-4
        defined 6-10
        setting 6-11
    enable watermark processing, defined 6-19
    enforce fair access, defined 6-24
    high watermark, defined 6-19
    host access status
        defined 6-3
        setting 6-6


    low watermark, defined 6-19
    LUN
        defined 7-4
        displaying 11-24
        prefetch
            defined 12-7
            setting 12-9
        setting after binding LUN 12-3
        setting when binding LUN
            non-RAID Group storage system 7-17
            RAID Group storage system 7-34
        see also LUN properties
    LUNs in Storage Group, defined 8-5
    mirrored write cache, defined 6-20
    page size, defined 6-19
    RAID Group 7-21
        displaying 11-25
        setting after creating Group 12-25
    server, displaying 11-23, 11-24
    sharing
        defined 8-4
    SP A read caching, defined 6-20
    SP A statistics logging
        defined 6-11
        setting 6-11
    SP B read caching, defined 6-20
    SP B statistics logging
        defined 6-11
        setting 6-11
    SPs, displaying 11-25, 11-26
    Storage Group name, defined 8-4
    Storage Group, defined 8-4
    storage-system
        cache, defined 6-18
        configuration access
            defined 6-2, 6-3
        data access
            defined 8-2
            setting 8-2
        displaying 11-22
        general configuration
            defined 6-10
            setting 6-11
        hosts, defined 6-24
    unique ID (for Storage Group), defined 8-4
    used host connection paths, defined 8-6
    write caching, defined 6-21

R
RAID 0
    LUN, icon for 3-20
    type, defined 7-2
RAID 1
    LUN, icon for 3-19
    type, defined 7-2
RAID 1/0
    LUN, icon for 3-19
    type, defined 7-3
RAID 3
    LUN
        icon for 3-19
    LUN, icon for 3-19
    memory partition 6-17
    type, defined 7-2
RAID 5
    LUN, icon for 3-19
    type, defined 7-2
RAID Group Properties dialog box
    General tab 12-26
    Partitions tab 12-32
RAID Group storage system
    binding LUNs 7-27
        custom 7-32
        standard 7-28
    creating RAID Groups 7-20
        custom 7-23
        standard 7-22
    see also storage system
RAID Groups
    binding LUNs on
        custom 7-32
        standard 7-28
    creating 7-20
        custom 7-23
        standard 7-22
    defined 7-20
    defragmenting 12-30
    destroying 12-33
    disks in 12-33
    expanding 12-27
    icons for 3-20


    maximum number of LUNs 7-21
    menu 3-24, 3-25
    partitions 12-32
    properties 7-21
        automatic destroy 7-21
            setting when creating RAID Group 7-26
        automatically destroy
            setting after creating RAID Group 12-25
        default values 7-21
        displaying 11-25
        expansion/defragmentation priority
            defined 7-21
            setting when creating RAID Group 7-26
    reconfiguring 12-25
RAID types
    defined 7-2
    LUN properties for 7-8
    number of disks in 7-3
read caching
    storage-system
        enabling or disabling 6-23
        setting properties for 6-21
rebooting SPs 13-5
rebuild priority property
    defined 7-5
    setting when binding LUN
        non-RAID Group storage system 7-18
        RAID Group storage system 7-34
reconfiguring
    LUNs 12-2
    RAID Groups 12-25
    storage system 4-1, 13-1
Remote Host Agents
    adding devices 5-9
    adding privileged users 5-3, 5-10
    clearing devices 5-10
    deleting devices 5-9
    scanning for devices 5-5
    updating parameters 5-11
remote mirroring
    promotion 9-9
Remote Mirrors
    activating, general 9-27
    creating 9-19
        advanced 9-23
        basic 9-20
    deactivating 9-40
    destroying 9-51
    fracturing a secondary image 9-49
    icons for 3-22
    menu options for 3-25
    promoting a secondary image to primary, general 9-46
    remote mirror image, menu options for 3-26
    removing a secondary image, general 9-50
    synchronizing a secondary image, general 9-47
    viewing or modifying 9-29
Remote SP Agent
    configuring 5-2
    setting polling interval 5-3
removing
    a secondary image in remote mirrors 9-50
    Manager 2-2
    Supervisor 2-2
retention property, defined 12-8

S
save file path option, defined 2-8
saving events to log file 11-37
secondary image, promotion 9-9
secondary images in remote mirrors
    promoting to primary 9-46
segment multiplier property, defined 12-8
segment size property, defined 12-8
Select View button 3-9
server
    ATF status 11-21
    CDE status 11-21
    connecting to
        shared storage system 13-11
        Storage Groups 12-41
    disconnecting from
        shared storage system 13-14
        Storage Group 12-41
    enabling configuration access to storage system 6-6
    icon for 3-12, 3-13
    properties, displaying 11-24


    Storage Group for, verifying connection to 8-11
shared storage system
    connecting new server 13-11
    disconnecting server 13-14
sharing property, defined 8-4
single storage system, menu options 3-17
snapshot cache
    binding LUNs 10-9
    display properties 10-22
    icon for 3-22
    menus for 3-26
snapshot session
    icon for 3-22
    menus for 3-26
    monitoring 10-23
    starting 10-17
    starting in simulation mode 10-19
    stopping 10-20
snapshots
    creating 10-10
    destroying 10-16
    display properties 10-21
    icon for 3-23
    menus for 3-26
SnapView
    binding snapshot cache LUNs 10-9
    configuring SP snapshot cache 10-12
    creating snapshots 10-10
    destroying a snapshot 10-16
    display
        component status 10-21
        snapshot cache properties 10-22
        snapshot properties 10-21
    installing or upgrading 4-2
    setting up 10-9
    snapshot cache size 10-9
    starting a snapshot session, simulation mode 10-19
    stopping a snapshot session 10-20
    storage component icons 3-22
Software Installation Confirmation box
    displayed 4-6
    verifying information in 4-7
Software Installation dialog box 4-4, 13-4
Software Installation, button 3-33
software installation, identifying source of problems 4-11
software packages, see packages
SP (storage processor)
    faulted 11-14
    icons for 3-20
        what to do when orange 11-14
    menu 3-24, 3-25
    port, ALPA address (SCSI ID), setting 6-28
    properties, ALPA property 6-28
    properties, displaying 11-25
    rebooting 13-5
    snapshot cache icon 3-22
    snapshot cache menus 3-26
SP A
    event log, see SP event log
    Read Cache Memory partition 6-16
    read caching property, defined 6-20
    statistics logging property
        defined 6-11
        setting 6-11
    Write Cache Memory partition 6-17
SP Agent
    installing or upgrading 4-2
    Remote, configuring 5-2
SP B
    event log, see SP event log
    Read Cache Memory partition 6-16
    read caching property, defined 6-20
    statistics logging property
        defined 6-11
        setting 6-11
    Write Cache Memory partition 6-17
SP event log
    clearing 11-32
    displaying
        all events 11-32
        events for specific component or date and time 11-32
    opening 11-29
    printing 11-33
    saving 11-33
SP fan pack, faulted 11-16
SP Properties dialog box 5-2
SP Usage memory partition 6-16
SPS (standby power supply)
    faulted 11-19


    icons for 3-28
        what to do when orange 11-19
    menu 3-29
    properties, displaying 11-26
    self-test 6-30
state
    BBU failure 11-20
    disk failure 11-11
    power supply, displaying 11-27
status bar 3-36
    information fields 3-36
status, ATF 11-21
status, CDE 11-21
Storage 8-4
storage components, icons for 3-23
Storage Group Properties dialog box
    Advanced tab 8-12
    General tab 8-11
Storage Groups 8-1
    connecting servers 12-41
    defined 8-1, 8-2
    destroying 12-46
    disconnecting servers 12-41
    icons for 3-19
    menus 3-23, 3-25
    properties
        connected hosts, defined 8-5
        LUNs in Storage Group, defined 8-5
        sharing, defined 8-4
        Storage Group name property, defined 8-4
        unique ID, defined 8-4
        used host connection paths, defined 8-6
    server connections to, verifying 8-11
storage processor, see SP (storage processor)
storage system
    accessible 3-11
    automatic polling
        description of 11-4
        interval, defined 2-8
    automatic polling priority property
        default value 11-4
        setting 11-5
    caching 6-21
        read, enabling or disabling 6-23
        write
            enabling or disabling 6-23
            hardware requirements for 6-18
    component menus 3-28
    configuration access control
        defined 6-2
    data access control
        defined 8-2
        enabling 8-2
    description 3-15
    disks, replacing with higher capacity 13-9
    enabled automatic polling property, default value 11-4
    fair access to, defined 6-24
    faults 11-10
    filters for displaying events 11-31
    icons 3-14
        what to do when orange 11-9
    inaccessible 3-11
    information, updating 11-2
    managing on
        selected agents on subnets 2-13, 2-15
        servers whose names you know 2-12, 2-16
    menu 3-16
        for components 3-23, 3-28
    menu options
        for multiple storage systems 3-18
        for single storage system 3-17
    operation, monitoring 11-8
        with Application icon 11-9
        with storage-system icon 11-9
    overview of configuration and management
        shared storage system 1-10
        unshared storage system 1-11
    polling
        automatic 11-4
        manual 11-7
    properties
        cache, defined 6-18
        configuration access
            defined 6-2, 6-3
            setting 6-2, 6-3
        data access
            defined 8-2
            setting 8-2
        displaying 11-22


        general configuration
            defined 6-10
            setting 6-11
        hosts, defined 6-24
    read cache, setting properties for 6-21
    reconfiguring 4-1, 13-1
    selecting ones to manage 2-10
    server access to 6-3
    trees for 3-2
    unsupported 3-11
    upgrading to support caching 13-7
    write cache, setting properties for 6-21
    see also shared storage system
Storage System Properties dialog box 4-11
    Cache tab 6-22
    Configuration Access tab 6-6
    Data Access tab 8-3
    determining status of existing storage-system software 4-12
    General tab 6-12
    Hosts tab 6-25
    Memory tab 6-15
Storage System Selection dialog box 4-5, 13-4
storage systems
    accessible 3-11
    icons 3-14
    inaccessible 3-11
    unsupported 3-11
Storage tree 3-2
Supervisor, removing 2-2
synchronizing a secondary image in remote mirrors
    general 9-47

T
Timeline, see Events Timeline
timeout
    connecting to
        server A-2
        storage system A-2
trees 3-19
    defined 3-2
    Equipment 3-2
    Hosts 3-2
    icons 3-12
    selecting icons on 3-5
    Storage 3-2

U
unbinding LUNs 12-14
unique ID property, defined 8-4
unowned LUNs
    defined 3-20
    icon for 3-20
    troubleshooting A-3
unsupported storage systems 3-11
upgrading
    Core Software on non-FC4700 storage system, Manager 5X 13-2, 13-3
    storage-system software with Manager 5X, non-FC4700 storage system 4-14, 13-2
used host connection paths property, defined 8-6
user entry in agent configuration file 5-4
User Options dialog box 2-9, 3-37
user options for Manager
    default values 2-9
    defined 2-8
    setting 2-9

V
variable prefetch property
    defined 12-7
    setting 12-9
verify priority property
    defined 7-5
    setting when binding LUN
        non-RAID Group storage system 7-18
        RAID Group storage system 7-34
View menu options 3-32
viewing
    event details 11-36
    event logs 11-34
    remote mirrors 9-29
VSCs (Voltage Semi-regulated Converters)
    icons for 3-28
        what to do when orange 11-18
    see also power supplies


W
Window menu 3-33
Write Cache Memory partition 6-17
write caching
    hardware requirements for 6-18
    storage-system
        enabling or disabling 6-23
        setting properties for 6-21
write intent log, allocating 9-16
