
UNM2000

Network Convergence
Management System (Based on
Windows)

Active/Standby System
Installation Guide
Version: A

Code: MN000001819

FiberHome Telecommunication Technologies Co., Ltd.

February 2017
Thank you for choosing our products.

We appreciate your business. Your satisfaction is our goal.


We will provide you with comprehensive technical support
and after-sales service. Please contact your local sales
representative, service representative or distributor for any
help needed at the contact information shown below.

FiberHome Telecommunication Technologies Co., Ltd.

Address: No. 67, Guanggu Chuangye Jie, Wuhan, Hubei, China


Zip code: 430073
Tel: +6 03 7960 0860/0884 (for Malaysia)
+91 98 9985 5448 (for South Asia)
+593 4 501 4529 (for South America)
Fax: +86 27 8717 8521
Website: http://www.fiberhomegroup.com
Legal Notice

The FiberHome name and logo are trademarks of FiberHome Telecommunication Technologies Co., Ltd. (hereinafter referred to as FiberHome).
All brand names and product names used in this document are used for
identification purposes only and are trademarks or registered trademarks
of their respective holders.

All rights reserved

No part of this document (including the electronic version) may be reproduced or transmitted in any form or by any means without prior written permission from FiberHome.

Information in this document is subject to change without notice.
Preface

Related Documentation
u UNM2000 Network Convergence Management System Product Description: Introduces the functions, application scenarios and technical specifications of the UNM2000 Network Convergence Management System.

u UNM2000 Network Convergence Management System Installation Guide (Based on Windows): Introduces how to install the UNM2000 Network Convergence Management System on the Windows operating system.

u UNM2000 Network Convergence Management System Installation Guide (Based on SUSE Linux): Introduces how to install the UNM2000 Network Convergence Management System on the SUSE Linux operating system.

u UNM2000 Network Convergence Management System Active/Standby System Installation Guide (Based on Windows): Introduces how to install the active/standby system of the UNM2000 Network Convergence Management System on the Windows operating system.

u UNM2000 Network Convergence Management System Active/Standby System Installation Guide (Based on SUSE Linux): Introduces how to install the active/standby system of the UNM2000 Network Convergence Management System on the SUSE Linux operating system.

u UNM2000 Network Convergence Management System Operation Guide: Introduces the operation guidelines of the UNM2000 Network Convergence Management System.

Version Description

Version    Description
A          Initial version.

Intended Readers

This manual is intended for the following readers:

u Commissioning engineers

u Project commissioning engineers

u FiberHome engineers

To utilize this manual, these prerequisite skills are necessary:

u Data communication technology

u Access network technology

u Cluster technology

u Data disaster-tolerance technology

Conventions

Terminology Conventions

Terminology               Convention
UNM2000                   FiberHome UNM2000 Network Convergence Management System
ETERNUS DX60 S2           Fujitsu disk array
DELL MDSS Consolidated    Dell management tool for disk arrays
Veritas                   Cluster management software developed by Veritas

Symbol Conventions

Symbol    Convention               Description
          Note                     Important features or operation guide.
          Caution                  Possible injury to persons or systems, or possible traffic interruption or loss.
          Warning                  May cause severe bodily injuries.
➔         Jump                     Jumps to another step.
→         Cascading menu           Connects multi-level menu options.
↔         Bidirectional service    The service signal is bidirectional.
→         Unidirectional service   The service signal is unidirectional.

Operation Safety Rules

u The network management computer should be placed away from direct sunlight, electromagnetic interference, heat sources, humidity and dust, and kept at least 8 cm away from other objects to ensure good ventilation.

u Use a UPS power supply to avoid loss of network management data caused by accidental power failure.

u The computer case, the UPS power supply and the switch (or hub) should be connected to protection earth ground.

u To shut down the network management computer, first exit the operating system normally and then shut off the power supply.

u Do not exit the network management system when it is working normally. Exiting the network management system does not interrupt traffic in the network, but it precludes centralized control of the networked equipment.

u The network management computer should not be used for purposes other than network management. Prohibit the use of unidentified storage devices so as to avoid computer viruses.

u Do not delete files in the network management system arbitrarily or copy irrelevant files onto the network management computer.

u Do not access the Internet from the network management computer. Doing so may increase the data flow on the network card and hence affect normal network management data transmission, or result in other accidents.

u Do not perform service configuration or expansion via the network management system during service busy hours.

u Do not modify the network management computer's protocol settings, computer name or LAN settings. Doing so may result in abnormal operation of the network management system.
Contents

Preface...................................................................................................................I

Related Documentation ...................................................................................I

Version Description.........................................................................................II

Intended Readers ...........................................................................................II

Conventions ..................................................................................................III

Operation Safety Rules ......................................................................................... V

1 Installation Overview.......................................................................................1

1.1 Deployment Mode.............................................................................2

1.1.1 Local HA System with One Disk Array .................................2
1.1.2 Local HA System with Two Disk Arrays ...............................3
1.1.3 Remote HA System with Two Disk Arrays ...........................5

1.2 Hardware Configuration Requirements..............................................8

1.3 Software Configuration Requirements ...............................................9

2 Installation Process.......................................................................................10

2.1 Installing the Local HA System with One / Two Disk Array(s)............ 11

2.2 Remote HA System with Two Disk Arrays .......................................13

3 Local HA System with One / Two Disk Array(s)..............................................15

3.1 Preparations Before Installation ......................................................16

3.2 Checking Hardware Connections ....................................................17

3.3 Installing the Disk Array ..................................................................20

3.3.1 Installing the Fujitsu Disk Array .........................................20
3.3.2 Installing the Dell Disk Array..............................................25

3.4 Installing the Cluster Software Veritas .............................................46

3.5 Configuring the VCS .......................................................................54

3.6 Configuring the Disk Array ..............................................................67

3.7 Configuring the Cluster Resource....................................................81

3.7.1 Configuring the Resource Group .......................................82
3.7.2 Configuring the NIC Resource...........................................88
3.7.3 Configuring the IP Resource .............................................92
3.7.4 Setting the Disk Resource.................................................98
3.7.5 Configuring the Dependency Relationship Among Cluster Resources ......................................................107

3.8 Installing the Database..................................................................109

3.9 Installing the UNM2000................................................................. 112

3.9.1 Installing the UNM2000................................................... 113
3.9.2 Initializing the Database ..................................................121
3.9.3 Configuring the Service...................................................123

3.10 Configuring the EMS Resource .....................................................124

3.11 Verifying the Installation ................................................................129

4 Remote HA System with Two Disk Arrays ...................................................131

4.1 Preparations Before Installation ....................................................132

4.2 Checking Hardware Connections ..................................................134

4.3 Installing the Disk Array ................................................................136

4.3.1 Installing the Fujitsu Disk Array .......................................136
4.3.2 Installing the Dell Disk Array............................................141

4.4 Installing the Cluster Software Veritas ...........................................162

4.5 Configuring the VVR .....................................................................170

4.5.1 Configuring the VVR Security Service .............................170
4.5.2 Configuring the Dynamic Disk Groups .............................175
4.5.3 Creating the Disk Volume................................................183

4.6 Configuring the VCS .....................................................................191

4.7 Configuring the Cluster Resource..................................................205

4.7.1 Setting the Replication Resource Group ..........................205
4.7.2 Configuring the RDS .......................................................213
4.7.3 Configuring the Disk Resource........................................222
4.7.4 Configuring the FHEmsService Resource Group .............227
4.7.5 Configuring the GCO Function ........................................235

4.8 Installing the Database..................................................................243

4.9 Installing the UNM2000.................................................................247

4.9.1 Installing the UNM2000...................................................247
4.9.2 Initializing the Database ..................................................255
4.9.3 Configuring the Service...................................................257

4.10 Configuring the EMS Resource .....................................................258

4.11 Verifying the Installation ................................................................262

5 Precautions for EMS Upgrade.....................................................................264

6 Common Maintenance Operations ..............................................................266

7 Failure Processing ......................................................................................267

7.1 VCS Troubleshooting and Restoring .............................................268

7.1.1 Service Group Troubleshooting .......................................268
7.1.2 Resource Troubleshooting ..............................................270
7.1.3 Global Cluster Troubleshooting .......................................271

7.2 Disk Troubleshooting ....................................................................275

7.2.1 Disk and Volume Status Information................................275
7.2.2 Solving Common Problems.............................................280
7.2.3 Command or Procedure for Fault Elimination and Restoration .....................................................290

Appendix A Abbreviations ..........................................................................299


1 Installation Overview

With the ever-increasing scale of the communication network, operators not only require easy-to-use functions and strong management capability of the EMS, but also place higher requirements on its stability. To improve and guarantee the stability, reliability and recoverability of the UNM2000, FiberHome introduces the 1+1 cluster hot standby disaster tolerance solution for the UNM2000.

To install the UNM2000 active / standby systems, you need to understand the
installation and deployment modes, and the software and hardware configuration
requirements of the UNM2000.

Deployment Mode

Hardware Configuration Requirements

Software Configuration Requirements


1.1 Deployment Mode

The UNM2000 supports three deployment modes, namely, the local HA system with one disk array, the local HA system with two disk arrays and the remote HA system with two disk arrays.

1.1.1 Local HA System with One Disk Array

The following introduces the network and working principle of the local HA system with one disk array.

Network Diagram of Local HA System with One Disk Array

In the local HA system with one disk array, two servers are connected to the same disk array through Fibre Channel (FC) so that they can access the same storage unit. Data reliability is achieved by configuring the Redundant Array of Independent Disks (RAID) of the disk array. Figure 1-1 shows the network of the local HA system with one disk array.

Figure 1-1 Network Diagram of Local HA System with One Disk Array


Working Principle of Local HA System with One Disk Array

Figure 1-2 shows the working principle of the local HA system with one disk array.

Figure 1-2 Working Principle of Local HA System with One Disk Array

The cluster system provides services for the NMS through the floating IP address and masks the differences after an active/standby switchover, thereby implementing uninterrupted operation.

When the active server is faulty, the Veritas Cluster Server (VCS) switches the floating IP address to the standby server and the disk array is mounted on the standby server. The standby server then provides the GUI service through the floating IP address and takes over the active server's responsibilities.

1.1.2 Local HA System with Two Disk Arrays

The following introduces the network and working principle of the local HA system with two disk arrays.

Network Diagram of Local HA System with Two Disk Arrays

Figure 1-3 shows the connection relationship among the components in the network
of local HA system with two disk arrays.


Figure 1-3 Network Diagram of Local HA System with Two Disk Arrays

Working Principle of Local HA System with Two Disk Arrays

Similar in working principle to the local HA system with one disk array, the local HA system with two disk arrays has one more disk array serving as a data storage mirror, which provides disk array backup to avoid disk array hardware failure and data loss. Figure 1-4 shows the working principle of the local HA system with two disk arrays.


Figure 1-4 Working Principle of Local HA System with Two Disk Arrays

1.1.3 Remote HA System with Two Disk Arrays

This section introduces the network and working principle of the remote HA system
with two disk arrays.

Network Diagram of Remote HA System with Two Disk Arrays

Two servers are installed in different equipment rooms and connected through the Data Communication Network (DCN). Each disk array is connected only to the server in the same equipment room; data replication is implemented by the Veritas Volume Replicator (VVR) technology. Figure 1-5 shows the network diagram of the remote HA system with two disk arrays.


Figure 1-5 Network Diagram of Remote HA System with Two Disk Arrays


Figure 1-6 Network Diagram of Remote HA System with Two Disk Arrays (Cloud Deployment)

Working Principle of Remote HA System with Two Disk Arrays

Figure 1-7 shows the working principle of the remote HA system with two disk arrays.


Figure 1-7 Working Principle of Remote HA System with Two Disk Arrays

When the active server is working, it manages all the NEs through the gateway NE "NE1" and writes the configuration data to the connected disk array DB1. Meanwhile, the active and standby servers copy the actual data changed by bottom-layer I/O to the remote disk array through the VVR replication technology. The Global Cluster Option (GCO) monitors the status in real time; when an unexpected disaster occurs on the active server, it switches the EMS services to the standby server and reverses the VVR replication direction to ensure the uniqueness of the data. The UNM2000 automatically adjusts the EMS management program according to the configured server IP address and monitors / manages the network-wide NEs through the gateway NE "NE2".

1.2 Hardware Configuration Requirements

Table 1-1 shows the hardware configuration requirements of the UNM2000 active/standby disaster-tolerant system.


Table 1-1 Hardware Configuration Requirements of the Active/Standby Disaster-tolerant System

u Server: Dell PowerEdge R720 (standard configuration), quantity 2. Each server should have at least three network cards and one HBA card with two optical interfaces.

u Disk array (local HA system with one disk array): ETERNUS DX60 S2, quantity 1. Capacity of each disk ≥ 400 GB.

u Disk array (local HA system with two disk arrays): ETERNUS DX60 S2, quantity 2. Capacity of each disk ≥ 400 GB.

u Disk array (remote HA system with two disk arrays): ETERNUS DX60 S2, quantity 2. Capacity of each disk ≥ 400 GB.

u Optical fiber and network cables: see the network diagrams in Deployment Mode.

1.3 Software Configuration Requirements

To install the UNM2000 active/standby system, you need to make sure the following
software configuration requirements are met:

u The Windows Server 2012 R2 Standard operating system is installed on the EMS server.

u The Symantec cluster software and license are obtained.

2 Installation Process

There are three installation modes, namely, local HA system with one disk array,
local HA system with two disk arrays and remote HA system with two disk arrays.
You can select the mode according to the actual situation and install the active/
standby system according to the corresponding installation process.

Installing the Local HA System with One / Two Disk Array(s)

Remote HA System with Two Disk Arrays


2.1 Installing the Local HA System with One / Two Disk Array(s)

The installation process of the local HA system with one disk array is the same as that of the local HA system with two disk arrays, as shown in Figure 2-1.


Figure 2-1 Installation Flowchart of Local HA System with One / Two Disk Array(s)

2.2 Remote HA System with Two Disk Arrays

Figure 2-2 shows the installation process of remote HA system with two disk arrays.


Figure 2-2 Installation Flowchart of Remote HA System with Two Disk Arrays
3 Local HA System with One / Two
Disk Array(s)

This chapter introduces the installation method for the local HA system with one or
two disk array(s).

Preparations Before Installation

Checking Hardware Connections

Installing the Disk Array

Installing the Cluster Software Veritas

Configuring the VCS

Configuring the Disk Array

Configuring the Cluster Resource

Installing the Database

Installing the UNM2000

Configuring the EMS Resource

Verifying the Installation


3.1 Preparations Before Installation

Before installing the active/standby system (local HA system with one disk array or
two disk arrays), make sure the following preparations are done:

u The QLogic Fibre Channel HBA card is installed on the active and standby servers respectively.

u The hardware connections for the local HA system with one disk array or the local HA system with two disk arrays are completed according to Checking Hardware Connections.

u The host names of the active and standby servers are set to WIN52 and WIN53
respectively (you can customize the host names in the Properties dialog box of
the server).

u The network card names of the active and standby servers are set. It is
recommended that you set the database network card, device network card
and heartbeat network card to TCPIP, DEVICE and HEARTBEAT respectively.

u The database IP address, device IP address and heartbeat IP address of the active and standby servers are configured.

This manual takes the following IP configuration plan as an example to introduce how to configure the network cards. In this example, only two network cards are configured per server; the database IP address and the device IP address share the same network card.
u Active server

4 TCPIP network card, 10.170.1.52: database IP address and device IP address; configure them directly on the TCPIP network card.

4 TCPIP network card, 10.170.1.54: database floating IP address; set the server IP address to the database floating IP address when installing the UNM2000.

4 HEARTBEAT network card, 10.0.0.1: heartbeat IP address; configure it directly on the HEARTBEAT network card.

u Standby server

4 TCPIP network card, 10.170.1.53: database IP address and device IP address; configure them directly on the TCPIP network card.

4 TCPIP network card, 10.170.1.54: database floating IP address; set the server IP address to the database floating IP address when installing the UNM2000.

4 HEARTBEAT network card, 10.0.0.2: heartbeat IP address; configure it directly on the HEARTBEAT network card.

Note:

The heartbeat IP addresses of the active and standby servers can be set
to 10.0.0.1 and 10.0.0.2 respectively so long as they can communicate
with each other.

u The priorities of the network cards of the active and standby servers are set. Their priorities from high to low should be: database network card > device network card > heartbeat network card.

u Add the following information to the windows\system32\drivers\etc\hosts file and make sure the active and standby servers can communicate with each other by host name.

4 The relationship between the local server host name and its database
floating IP address.

4 The relationship between the remote server host name and its database IP
address.
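Based on the example IP plan above, the hosts entries on the active server WIN52 could look like the following (the host names and addresses come from the planning table; substitute your own values):

```
# windows\system32\drivers\etc\hosts on WIN52
10.170.1.54    WIN52    # local host name -> database floating IP address
10.170.1.53    WIN53    # remote host name -> remote server's database IP address
```

On the standby server WIN53, map WIN53 to 10.170.1.54 and WIN52 to 10.170.1.52 in the same way.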

u Set the same username and password for the administrators of the active and
standby servers.

u Turn off the Windows firewall on both the active and standby servers.
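On Windows Server 2012 R2, the firewall can be turned off for all profiles from an administrator command prompt; one possible command is shown below (the Windows Firewall item in Control Panel achieves the same result):

```
netsh advfirewall set allprofiles state off
```

Run the command on both the active and standby servers.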

3.2 Checking Hardware Connections

Before installing the disk array software, you need to install and connect the
hardware correctly; otherwise, errors may occur during the installation.


Connecting the Hardware for Local HA System with One Cabinet

When the device network card and database network card of the cluster server
belong to the same DCN network, connect the hardware according to Figure 3-1.

Figure 3-1 Hardware Connection (Device Network Card and Database Network Card in the
same DCN Network)

When the device network card and database network card of the cluster server
belong to different DCN networks, connect the hardware according to Figure 3-2.

Figure 3-2 Hardware Connection (Device Network Card and Database Network Card in
Different DCN Networks)


Connecting the Hardware for Local HA System with Two Cabinets

When the device network card and database network card of the cluster server
belong to the same DCN network, connect the hardware according to Figure 3-3.

Figure 3-3 Hardware Connection (Device Network Card and Database Network Card in the
Same DCN Network)

When the device network card and database network card of the cluster server
belong to different DCN networks, connect the hardware according to Figure 3-4.


Figure 3-4 Hardware Connection (Device Network Card and Database Network Card in
Different DCN Networks)

3.3 Installing the Disk Array

The FiberHome active / standby system can use the Fujitsu and Dell disk arrays.
The following introduces how to install the Fujitsu and Dell disk arrays respectively.

3.3.1 Installing the Fujitsu Disk Array

The Fujitsu disk array is installed through the webpage.

Prerequisite

u The RMT or MNT port of the Fujitsu disk array is connected to an idle port of the
active server through a network cable, and the IP address of the network card
is on the same network segment as the default IP address of the disk array.

u IE 7.0 or a later version is installed on the active server.
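The "same network segment" requirement above can be sanity-checked before cabling. The sketch below is illustrative only: the helper name `same_segment` and the server NIC address 192.168.1.100 are hypothetical choices, while 192.168.1.1 is the disk array's default address from this guide and a /24 mask is assumed.

```python
import ipaddress

def same_segment(ip_a: str, ip_b: str, mask: str = "255.255.255.0") -> bool:
    """Return True if both IPv4 addresses fall in the same network segment."""
    # strict=False lets us pass a host address rather than a network address.
    network = ipaddress.IPv4Network(f"{ip_a}/{mask}", strict=False)
    return ipaddress.IPv4Address(ip_b) in network

# 192.168.1.100 (hypothetical server NIC address) can reach the disk
# array's default address 192.168.1.1 directly:
print(same_segment("192.168.1.100", "192.168.1.1"))  # True
# The database address 10.170.1.52 is on a different segment:
print(same_segment("10.170.1.52", "192.168.1.1"))    # False
```

If the check returns False for the NIC you connected to the RMT/MNT port, reassign that NIC an address in the disk array's segment before opening the web page.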


Procedure

1. Enter the IP address (http://192.168.1.1 by default) in the IE browser of the


active server to open the following webpage.

2. Enter the username and password (both are root) and click Logon to access
the installation window.

3. Select Volume Settings→RAID Group Management→Create RAID Group


to open the following webpage.

4. Select the default option Create RAID Group (Disks are assigned
automatically) and click Next.

5. Set the RAID group name, set RAID Level to RAID5 (keep default settings for
other parameters) and click Create, as shown below.


6. In the following dialog box, click OK. The system starts to create the RAID,
which may take several minutes.

If the following information appears, the RAID group is created successfully.

7. Select Volume Settings→Volume Management→Create Volume to open the


following webpage.


8. Click 0:lv in the left navigation tree, set the volume name and capacity (adopt default settings for other parameters), and then click Create.

9. In the following dialog box, click OK. The system starts to create the volume,
which may take several minutes.

If the following information appears, the volume is created successfully.


10. Select Volume Settings→Volume Management→Configure LUN Mapping


to open the following webpage.

11. In the left navigation tree, click CM#0 Port#0 and click Edit to set the name of the bound volume, as shown below.


12. Click Set and then click OK in the following dialog box. The system starts to
establish the binding relationship, which may take several minutes.

If the following information appears, the binding relationship is established


successfully.

13. Repeat steps 11 to 12 to establish the binding relationships of CM#0 Port#1, CM#1 Port#0 and CM#1 Port#1 respectively.

3.3.2 Installing the Dell Disk Array

The Dell disk array installation includes installing the disk array management
software and configuring the disk array.

3.3.2.1 Installing the Dell Disk Array Management Software

The following introduces how to install the Dell disk array management software.

Prerequisite

The installation program of the Dell disk array management software is obtained.


Procedure

1. On the active server, double-click the installation program of the disk array
management software mdss_install.exe to open the following dialog box.

2. Select the language and click OK.

3. Click Next in the dialog box as shown below.


4. Select I accept the terms of the license Agreement and click Next.

5. Select the default install set Full (Recommended) and click Next.


6. Select Fiber Channel (MD3600f, MD3620f, MD3660f) and click Next.

7. Select No, I will manually start the event monitor service and click Next.


8. Adopt the default destination folder (recommended), and click Next.

9. View the installation summary and click Install.


10. Wait until the system completes the installation. Then select No, I will restart
my system myself later and click Done, as shown below.


3.3.2.2 Configuring the Dell Disk Array

The Dell disk array is configured by using the Dell disk array management software.

Prerequisite

u The Dell disk array management software is installed on the active server.

u The management port of the Dell disk array is connected to an idle port of the
active server through a network cable, and the IP address of the network card
is on the same network segment as the default IP address of the disk array.

Caution:

The Dell disk array has two management ports (network ports), generally
marked as 3 and 4. Their default IP addresses are 192.168.128.101 and
192.168.128.102 respectively.

Procedure

1. Select Start→All Programs→Dell→Modular Disk Storage Manager→Modular Disk Storage Manager Client.

2. In the Select Addition Method dialog box, select Manual and click OK.


3. In the Add New Storage Array - Manual dialog box, select Out-of-band
management, enter the management IP address 192.168.128.101 of the disk
array in the RAID Controller Module (DNS/Network name, IPv4 address or
IPv6 address): text box, and then click Add.


4. Confirm adding only one RAID controller module path by clicking Yes in the displayed alert box.

5. Click Yes in the displayed Storage Array Added alert box.


6. In the Device tab of the PowerVault MD Storage Manager (Enterprise Management) window, right-click the disk array and select Manage Storage Array from the shortcut menu.

7. Click Yes in the Partially Managed Notice alert box to start the disk array
management window.


8. In the Storage & Copy Services tab of the disk array management window, right-click Total Unconfigured Capacity and select Create Disk Group from the shortcut menu.

9. Click Next as shown below.


10. Enter the disk group name, select Manual (Advanced): Choose specific physical disks to obtain capacity for the new disk group, and then click Next.


11. Set the RAID level to RAID 5, add the physical disk, click Calculate Capacity
and then click Finish.


12. In the Storage & Copy Services tab of the disk array management window, right-click the created disk group and select Create Virtual Disk from the shortcut menu.

13. Set the capacity and name of the virtual disk (keep default values for other
settings), as shown below, and then click Finish.


14. Click No in the displayed alert box to complete creating the virtual disk and
wait for the initialization to complete.


15. In the Host Mappings tab of the disk array management window, right-click
the host under Default Group and select Delete from the shortcut menu to
delete existing hosts under Default Group one by one.

16. Right-click Default Group and select Define→Host from the shortcut menu.

17. Set the host name, click No, and click Next, as shown below.


18. Select Add by selecting a known unassociated host port identifier, select
the identifier from the Known unassociated host port identifier drop-down list,
set alias and click Add to add the port identifiers and alias associated with the
host to the list. Then click Next.


19. Select Windows from the Host type (operating system) drop-down list and
click Next.


20. View the definition of the current host and click Finish.


21. Click No in the following alert box.

22. In the Host Mappings tab of the disk array management window, right-click
the host and select Add LUN Mapping from the shortcut menu.


23. Set the logical unit number of the host and click Add, as shown below.


24. View the mapping result and restart the two servers.

3.4 Installing the Cluster Software Veritas

The following introduces how to install the cluster software Veritas. You need to
install the software only on the active server and the standby server will synchronize
the installation automatically.


Prerequisite

u The Veritas software installation package and license are obtained.

u The Windows Server 2012 R2 Standard operating system is installed on the server that is to run the cluster software Veritas.

Procedure

1. Double-click the installation file to open the following window.

2. Click Install or upgrade server and client components to open the following
welcome window.


3. Click Next and select I accept the terms of License Agreement, as shown
below.


4. Click Next to access the Product Updates window.


5. Click Next to access the System Selection window. Enter the hostnames of
the active and standby servers in System Name or IP respectively and click
Add. Keep default values for other parameters.

Note:
When this step is performed on the active server, the cluster software will
be automatically installed on the standby server as the hostnames of the
active and standby servers are added.

6. Click Next and then click Yes in the displayed dialog box, as shown below.


7. Click OK in the dialog box that appears.

8. In the previewed installation information shown below, deselect Automatically reboot systems after installer completes the operation and then click Next.


9. In the displayed dialog box, click OK. The Veritas software starts to be installed,
which may take several minutes.

If the following information appears, the installation is successful.


10. Click Next and the following information appears.


11. Click Next and the following information appears. Then, click Finish to
complete the installation.

12. Restart the active and standby servers manually.

3.5 Configuring the VCS

The following introduces how to configure the VCS. You need to configure the VCS
only on the active server and the standby server will synchronize the configuration
automatically.

Procedure

1. On the CLI of the active server, enter VCW /nonad and press Enter to start the VCS configuration wizard.


2. In the VCS welcome dialog box, click Next.


3. As shown below, enter the hostnames of the active and standby servers
respectively, click Add and then click Next.


4. As shown below, view the server status and click Next.


5. As shown below, select Create New Cluster and click Next to create a cluster.


6. As shown below, enter the cluster name, select all systems, and then click
Next.


7. As shown below, make sure the statuses of the active and standby servers are
Accepted and then click Next.


8. As shown below, select Configure LLT over UDP on IPv4 network and click
Edit Ports.


9. As shown below, set the port numbers of link 1 and link 2 to 50000 and 50001 respectively. Click OK.

Note:

Use link 1 port 50000 and link 2 port 50001 as an example. If the ports
conflict with the EMS ports, you can modify them to other available ones.
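Before assigning the LLT ports, you can check from the Windows CLI whether they are already occupied on each server; a minimal sketch using the standard netstat tool:

```shell
:: List UDP listeners and filter for the planned LLT ports.
:: An empty result means the port is free on this server.
netstat -ano -p UDP | findstr ":50000"
netstat -ano -p UDP | findstr ":50001"
```

Run the same check on both the active and standby servers before clicking OK.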


10. As shown below, select the heartbeat network cards (HEARTBEAT) and
database network cards (TCPIP) of the active and standby servers, and then
click Next.


11. As shown below, select Use VCS User Privileges to set the username and
password of the created cluster, and then click Next.

Note:

The default username and initial password of the VCS are admin and
password respectively, which can be modified as needed.
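If the default credentials are changed after the cluster is created, this can also be done from the VCS command line; a hedged sketch (it assumes the standard VCS commands haconf and hauser are on the PATH of the server):

```shell
:: Open the cluster configuration in read-write mode.
haconf -makerw

:: Update the password of the admin user; you are prompted to enter
:: the new password interactively.
hauser -update admin

:: Save the configuration and return it to read-only mode.
haconf -dump -makero
```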


12. As shown below, check the configuration of the cluster and click Configure.


13. As shown below, click Finish to complete all the configurations.


3.6 Configuring the Disk Array

The following introduces how to configure the disk array for the local HA system with
one or two disk array(s).

You can perform this operation on either server; the other server will automatically synchronize the configuration.

Procedure

1. On the active server, click Start→All Programs→Symantec→Veritas Storage Foundation→Veritas Enterprise Administrator to open the following dialog box and click OK.


2. As shown below, click Connect.

3. As shown below, enter the host name of the active server (WIN52 in this example) in the Host Name text box and click Connect.


4. Enter the login username and password of the active server in the following
dialog box and click OK.

5. In the following dialog box, click OK to enter the main configuration GUI.


6. Select Disks→Harddisk1 (No Signature) to view the information of the disk with no signature.

Note:

u Under Disks, Harddisk0 is the built-in disk of the server and Harddisk1 (No Signature) is the disk array to be configured.

u For the local HA system with two disk arrays, there are two disk
arrays, while for the local HA system with one disk array, there is
only one disk array, as shown below.

7. Right-click Harddisk1 (No Signature) and select Write Signature.

8. As shown below, select the GPT mode to sign the disk, select the destination
disk, and then click OK.

Note:

For the local HA system with two disk arrays, select two disk arrays; for
the HA system with one disk array, select one disk array. The following
figure uses the local HA system with one disk array as an example.


9. Click Yes in the displayed dialog box, as shown below.

10. In the main GUI, right-click the disk already signed and select New Dynamic
Disk Group to open the following dialog box and then click Next.


11. As shown below, enter the disk group name dbdg, select Create cluster
group and the disk array, and then click Next.

Note:

For the local HA system with two disk arrays, select two disk arrays; for
the HA system with one disk array, select one disk array. The following
figure uses the local HA system with one disk array as an example.


12. As shown below, click Next.

Note:

For the local HA system with two disk arrays, there are two disk arrays,
while for the local HA system with one disk array, there is only one disk
array. The following figure uses the local HA system with one disk array
as an example.


13. As shown below, confirm the basic information of the disk group and click
Finish to complete the creation.

Note:

The confirmation information of the local HA system with two disk arrays is slightly different from that with one disk array. The following figure uses the local HA system with one disk array as an example.


14. In the VEA main GUI, right-click the created disk group and select New
Volume to open the following dialog box and click Next.


15. As shown below, select the created disk group dbdg, configure according to
the following figure and click Next.

Note:

For the local HA system with two disk arrays, select two disk arrays; for
the HA system with one disk array, select one disk array. The following
figure uses the local HA system with one disk array as an example.


16. As shown below, enter the volume name and space, and click Next.

Note:

The following figure uses the local HA system with one disk array as an example; the parameters in the red rectangle do not need to be configured. To configure the disk arrays for the local HA system with two disk arrays, you need to select Mirrored and Enable logging in the red rectangle.


17. As shown below, select Do not assign a drive letter and click Next.


18. As shown below, select the NTFS file system, keep the default values for the file system attributes, and click Next.


19. As shown below, confirm the basic information of the created volume and click
Finish to complete the creation.

Note:

The confirmation information of the local HA system with two disk arrays is slightly different from that with one disk array. The following figure uses the local HA system with one disk array as an example.


20. As shown below, view the information of the volume created successfully.

3.7 Configuring the Cluster Resource

The following introduces how to configure the cluster resource.


3.7.1 Configuring the Resource Group

The following introduces how to configure resource groups.

Procedure

1. On the active / standby server, select Start→All Programs→Symantec→Veritas Cluster Server→Veritas Cluster Manager-Java Console to open the following window.
following window.

2. Click , enter the IP address or host name of the local computer (keep default
values for other parameters) and then click OK.

3. Enter the username and password for logging into the Cluster and click OK.


4. As shown below, click Yes to view the status of the cluster whose name is
UNM2000.

5. Right-click UNM2000 in the left pane and select Add Service Group....


6. As shown below, click Next.


7. As shown below, enter the resource group name hyzd, select the server to be
started first (the server being configured is recommended) and click Next.

Note:

The smaller the value of Priority is, the higher the priority is.

8. As shown below, click Finish to use the default template to configure the
resource group.


9. As shown below, in the Properties tab, click the button corresponding to the FailOverPolicy attribute to modify it to RoundRobin and click OK.


10. View the change result, as shown below.


3.7.2 Configuring the NIC Resource

The following introduces how to configure the NIC resource.

Procedure

1. In the Cluster main GUI, right-click the created resource group and select Add
Resource, as shown below.

2. In the displayed Add Resource dialog box, set the NIC resource name and
select the resource type NIC, as shown below.


Caution:

The items in bold are required attributes.

3. In the Add Resource dialog box, click the button corresponding to MACAddress to open the Edit Attribute dialog box.


4. In the Edit Attribute dialog box, select Per System, select the active and
standby servers respectively from the drop-down list, enter their corresponding
MAC addresses, and click OK to return to the Add Resource dialog box.


5. In the Add Resource dialog box, select Critical and Enabled, and click OK to
complete setting the NIC resource.

6. View the information of the created NIC resource, as shown below.


3.7.3 Configuring the IP Resource

The following introduces how to configure the IP resource.

Procedure

1. As shown below, right-click the resource group in the Cluster main GUI and
select Add Resource.


2. In the displayed Add Resource dialog box, set the IP resource name and
select the resource type IP, as shown below.


Caution:

The items in bold are required attributes.

3. In the Add Resource dialog box, click the button corresponding to Address
to open the Edit Attribute dialog box.

4. In the Edit Attribute dialog box, select Global, enter the floating IP address
and click OK to return to the Add Resource dialog box.


5. In the Add Resource dialog box, click the button corresponding to SubNetMask to open the Edit Attribute dialog box.

6. In the Edit Attribute dialog box, select Global, enter the planned subnet mask and click OK to return to the Add Resource dialog box.


7. In the Add Resource dialog box, click the button corresponding to MACAddress to open the Edit Attribute dialog box.

8. In the Edit Attribute dialog box, select Per System, select the active and
standby servers respectively from the drop-down list, enter their corresponding
MAC addresses, and click OK to return to the Add Resource dialog box.


9. In the Add Resource dialog box, select Critical and Enabled, and click OK to
complete setting the IP resource.

10. View the information of the created IP resource, as shown below.
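After the IP resource is brought online, the floating address can be verified quickly from the CLI of either server; a sketch in which 10.0.0.100 is a placeholder for the planned floating IP address:

```shell
:: The floating IP should answer as long as the resource is online on
:: one of the two servers.
ping 10.0.0.100

:: On the server currently holding the resource, the floating address
:: should also appear in the NIC configuration.
ipconfig | findstr "10.0.0.100"
```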


3.7.4 Setting the Disk Resource

The following introduces how to configure the disk resource, including the VMDg
and MountV resources.

Procedure

1. Set the VMDg resource.

1) As shown below, right-click the resource group in the Cluster main GUI
and select Add Resource.

2) In the displayed Add Resource dialog box, set the resource name and
select the resource type VMDg, as shown below.


Note:

The items in bold are required attributes.

3) In the Add Resource dialog box, click the button corresponding to DiskGroupName to open the Edit Attribute dialog box.


4) In the Edit Attribute dialog box, select Global, enter the name dbdg of the disk group created in Configuring the Disk Array and click OK to return to the Add Resource dialog box.

5) In the Add Resource dialog box, select Critical and Enabled, and click
OK to complete setting the VMDg resource.


2. Set the MountV resource.

1) As shown below, right-click the resource group in the Cluster main GUI
and select Add Resource.


2) In the displayed Add Resource dialog box, set the resource name and
select the resource type MountV, as shown below.


Note:

The items in bold are required attributes.

3) In the Add Resource dialog box, click the button corresponding to MountPath to open the Edit Attribute dialog box.

4) In the Edit Attribute dialog box, select Global, set the path to D:\MySQL
(recommended) and click OK to return to the Add Resource dialog box.

Note:

After the disk resource is configured, the system will automatically generate the corresponding folder in the root directory of disk D on the server.


5) In the Add Resource dialog box, click the button corresponding to VolumeName to open the Edit Attribute dialog box.

6) In the Edit Attribute dialog box, select Global, enter the name dbvol of the volume created in Configuring the Disk Array and click OK to return to the Add Resource dialog box.


7) In the Add Resource dialog box, click the button corresponding to VMDGResName to open the Edit Attribute dialog box.

8) In the Edit Attribute dialog box, select Global, enter the planned name
dg-res and click OK to return to the Add Resource dialog box.

9) In the Add Resource dialog box, select Critical and Enabled, and click
OK to complete setting the MountV resource.


3. View the information of the created disk resource, as shown below.


3.7.5 Configuring the Dependency Relationship Among Cluster Resources

The following introduces how to configure the dependency relationship among the
cluster resources.

Prerequisite

The NIC, IP address and disk resources are configured.

Procedure

1. Start the NIC, IP address, VMDg and MountV resources in the cluster.

Right-click the desired resource group, select Online→Server Name (using the active server WIN52 as an example) and click Yes in the displayed dialog box to start the resource.

2. Make sure the disk array is added to the server.

3. Select the created resource group in the left object tree and click the
Resources tab.


4. Right-click the parent resource in the topology view and select Link from the shortcut menu. Then select the child resource in the displayed Link Resources dialog box. See Table 3-1.

Table 3-1 Dependency Relationship Among Cluster Resources

No.  Dependency Relationship (Parent Resource → Child Resource)

1    ip-res → nic-res

2    vol-res → dg-res

5. As shown below, the dependency relationship among cluster resources is set successfully.
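The same dependencies can also be created from the VCS command line; a hedged sketch assuming the resource names used in Table 3-1:

```shell
:: Open the cluster configuration for writing.
haconf -makerw

:: Link each parent resource to its child (the parent depends on the child).
hares -link ip-res nic-res
hares -link vol-res dg-res

:: Save the configuration and return it to read-only mode.
haconf -dump -makero

:: Display the resulting resource dependency list for verification.
hares -dep
```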


3.8 Installing the Database

This section introduces how to install the MySQL database. You need to install the
database on the active and standby servers respectively.

Prerequisite

u The MySQL database installation package is obtained.

u The disk array and cluster resource are configured.

Procedure

1. Start the resource group in the cluster of the active server.

Right-click the corresponding resource group, select Online→Server Name (using WIN52 as an example) and click Yes in the displayed dialog box to start the resource group.

2. Make sure the disk array is added to the server.


Caution:

When the MountV resource on a server is offline, an empty D:\MySQL folder will be generated. In this situation, you cannot add files to this directory; otherwise, the MountV resource on the server cannot go online.

3. Decompress the MySQL installation package to a new folder of any disk and
then copy the decompressed files to the "D:\MySQL" directory, as shown
below.

Caution:
Do not decompress the MySQL package into the "D:\MySQL" directory
directly; otherwise, the database cannot be installed.

4. Set the environment variables.


1) Right-click Computer, select Properties, and then select Advanced system settings in the System window. Then select Environment Variables in the System Properties dialog box.

2) In the System variables box, select the Path variable, click Edit and append D:\MySQL\bin to the variable value, separated from the previous value by a semicolon (;), as shown below.

5. Restart the server to make the modified environment variables take effect.

6. Run the following commands on the CLI to install the MySQL database service.
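The exact commands are shown in the installation figure; a hedged sketch of registering MySQL as a Windows service, in which the service name MySQL and the my.ini path are assumptions to adapt to the actual deployment:

```shell
:: Change to the MySQL binaries directory on the mounted disk array volume.
cd /d D:\MySQL\bin

:: Register the MySQL server as a Windows service named "MySQL",
:: reading its options from the configuration file on disk D.
mysqld --install MySQL --defaults-file="D:\MySQL\my.ini"
```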

7. Set the startup type of the MySQL service.


1) View the attribute of the MySQL service in the service list and make sure
the executable file path of the MySQL service is D:\MySQL\bin, as shown
below.

2) Set the startup type of the MySQL service to Manual and click Start to check whether the MySQL service can be started normally.
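The same check can be done from the CLI; a sketch assuming the service was registered under the name MySQL:

```shell
:: Set the MySQL service to manual start (note the space after "start=").
sc config MySQL start= demand

:: Start the service once to verify that it runs.
net start MySQL

:: Confirm the service state (should report RUNNING).
sc query MySQL | findstr "STATE"
```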

8. Right-click the resource group, select Switch To→WIN53 and click Yes in the
displayed alert box to switch the resource to the standby server (WIN53).

9. When the statuses of all resources are Online on WIN53, perform steps 2 to 6
again.

3.9 Installing the UNM2000

The following introduces how to install the UNM2000.


3.9.1 Installing the UNM2000

The following introduces how to install the UNM2000. You need to install it on two
servers respectively.

Prerequisite

u The installation package of the UNM2000 is obtained.

u The MySQL database is installed on the server and the MySQL service is
started.

Procedure

1. Double-click the UNM2000 installation software on the active server, and click Next in the displayed dialog box.

Note:

The active server indicates the server whose cluster resources are online.

2. Select I accept the agreement and click Next.


3. Select the components to be installed and click Next.


4. Specify the UNM2000 installation path (the default installation path D:\unm2000 is recommended), and then click Next.

5. Select the installation language and click Next.


6. Set the Tomcat installation path (the default installation path D:\ApacheTomcat
is recommended), and then click Next.

7. Click Yes in the alert box that appears.

8. Set the Tomcat port (The default port 8080 is recommended), and then click
Next.


9. Specify the server deployment mode. Select the default Centralized and click
Next.


10. Specify the IP address of the computer running the server end, and then click
Next.

Note:

u When adopting the local HA system with one or two disk array(s), or
the remote HA system with two disk arrays (with the database
floating IP address configured), you need to set the IP address to the
floating IP address of the database upon EMS installation.

u When adopting the remote HA system with two disk arrays (with the
database floating IP address not configured), you need to set the IP
address to the database IP address of the corresponding server
upon EMS installation.

11. Select the mysql database and click Next.


12. Set the database information for the UNM2000 server end and then click
Next.


13. Confirm the installation information and then click Next.

14. The installation of the program starts. In the alert box displayed a while later,
click Finish.


15. Click No in the displayed alert box so that the computer is not restarted immediately.

16. Switch the cluster resources to the other server and repeat steps 1 to 15 to
install the UNM2000 on the other server.

3.9.2 Initializing the Database

The following introduces how to initialize the database table structure. You need to
perform this operation only on one server.

Prerequisite

u The UNM2000 software has been installed successfully.

u The following services are stopped: UnmBus, UNMCMAgent, UNMCMService, UNMCFGDataMgr, UnmNode and UnmServiceMonitor.

u The cluster resource is switched to the server whose database is to be initialized, and the MySQL service is started.
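The listed services can be stopped in a batch from the CLI; a hedged batch-file sketch using the service names given in the prerequisites (errors for services that are already stopped can be ignored):

```shell
:: Batch file: stop every UNM2000 service required to be down before
:: the database is initialized.
for %%S in (UnmBus UNMCMAgent UNMCMService UNMCFGDataMgr UnmNode UnmServiceMonitor) do net stop %%S
```

When typing the loop directly at an interactive prompt instead of in a batch file, use %S rather than %%S.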

Procedure

1. Click to open the Start window, and click . In the Apps window,
click Visual Studio 2008 Command Prompt.

2. Run the following commands on the CLI.


3. Complete the database initialization according to Table 3-2.

Table 3-2 Initialization Procedure

No. | Meaning | Description
1 | Select the database type. | Select the database type sequence number according to the current database.
2 | Enter the database administrator password. | Set this password according to the actual planning. The default password is vislecaina.
3 | Re-enter the password. | Set this password according to the actual planning. The default password is vislecaina.
4 | Enter the database instance name. | Default value.
5 | Enter the database port number. | Default value.
6 | Select the database language. | The default value is 1.
7 | Enter the name of the database file to be omitted when clearing the database (multiple files are separated by "-"). | Default value.
8 | Confirm the files to be omitted. | The default value is 1.
9 | Select the database clearing type. | The default value is 1.

4. Restart the active and standby servers.


3.9.3 Configuring the Service

After installing the UNM2000, you need to stop the relevant EMS services and set their startup type to Manual. You need to configure the service on the two servers respectively.

Procedure

1. On the active or standby server, click to open the Start window. Click Administrative Tools, then double-click Services to open the Services window.

2. Stop the relevant UNM2000 services and set the startup type to Manual.

The relevant UNM2000 services include UnmBus, UNMCMAgent, UNMCMService, UNMCFGDataMgr, UnmNode, UnmServiceMonitor, Apache Tomcat 6 and MySQL. The following uses the UNMCMService service as an example.


3. Repeat steps 1 and 2 to stop the relevant EMS services on the other server
and set their startup type to Manual.
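The same result can be achieved from an elevated command prompt instead of the Services window; the following is a sketch using the standard `sc config` (set the startup type to Manual) and `net stop` commands. The service names follow the list above, but the exact registered names of the Tomcat and MySQL services may differ in your installation, so verify them in the Services window first.

```bat
:: Set a UNM2000-related service to Manual startup, then stop it.
:: Repeat for each service: UnmBus, UNMCMAgent, UNMCMService,
:: UNMCFGDataMgr, UnmNode, UnmServiceMonitor, Tomcat6, MySQL
:: (the last two names are assumptions; check the Services window).
sc config UNMCMService start= demand
net stop UNMCMService
```

Run the commands on both servers, once per service.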

3.10 Configuring the EMS Resource

The following introduces how to configure the EMS resource. In the local HA system
with one or two disk array(s), you need to configure the EMS resource only on the
active server.

Background Information

The EMS resources include the database resources and the UNM2000 resources.

Prerequisite

On the active / standby server, stop the services related to the UNM2000 and
MySQL database and change the startup type of the services to Manual.

Procedure

1. As shown below, right-click the resource group in the Cluster main GUI and
select Add Resource.


2. In the displayed Add Resource dialog box, set the name of the database
resource and select the resource type GenericService, as shown below.


Note:

The items in bold are required attributes.

3. In the Add Resource dialog box, click the button corresponding to
ServiceName to open the Edit Attribute dialog box.

4. In the Edit Attribute dialog box, select Global, enter the database service
name and click OK to return to the Add Resource dialog box.

5. In the Add Resource dialog box, select Critical and Enabled, and click OK to
complete configuring the database resource.


6. Repeat steps 1 to 5 and configure the other EMS resources according to the
following table.

| Item | Resource Type | Resource Name | ServiceName |
| --- | --- | --- | --- |
| UNM2000 resources | GenericService | UnmbusRes | icegridregistry.UnmBusIceGrid |
| UNM2000 resources | GenericService | UNMCFGDataMgrRes | UNMCFGDataMgr |
| UNM2000 resources | GenericService | UNMCMAgentRes | UNMCMAgent |
| UNM2000 resources | GenericService | UNMCMServiceRes | UNMCMService |
| UNM2000 resources | GenericService | UnmNodeRes | icegridnode.UnmBusIceGrid.UnmRpcNode1 |
| UNM2000 resources | GenericService | UnmServiceMonitorRes | UnmServiceMonitor |
| Tomcat6 resources | GenericService | Tomcat6Res | Tomcat6 |

Subsequent Operation

After configuring the EMS resources, you need to start all relevant services
manually and create the dependency relationship between EMS resources and the
cluster resources.


1. Manually start the EMS services, database services and Tomcat services on
the active server.

2. Start all EMS resources in the cluster.

3. Click the resource group hyzd and select the Resources tab.

4. Configure the dependency relationship between EMS resources and cluster
resources according to Table 3-3 and the operation procedures in Configuring the
Dependency Relationship Among Cluster Resources, as shown below.

Table 3-3 Dependency Relationship

| No. | Dependency Relationship: Parent Resource → Child Resource |
| --- | --- |
| 1 | Tomcat6Res → ip-res |
| 2 | UnmServiceMonitorRes → UnmNodeRes |
| 3 | UNMCFGDataMgrRes → UNMCMAgentRes → UNMCMServiceRes → UnmNodeRes → UnmbusRes → Mysqlres |
| 4 | Mysqlres → ip-res |
| 5 | Mysqlres → vol-res |
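In the dependency table above, a parent resource can come online only after its child resources are online, so the effective start order runs from the bottom of each chain upward. The following is an illustrative sketch (not part of the product) that derives one valid online order from these parent→child pairs:

```python
# Illustrative only: derive a valid online (start) order from the
# parent -> child dependencies in Table 3-3. A child resource must be
# online before its parent, so children are emitted first.
from graphlib import TopologicalSorter

# parent -> children, as listed in Table 3-3
deps = {
    "Tomcat6Res": ["ip-res"],
    "UnmServiceMonitorRes": ["UnmNodeRes"],
    "UNMCFGDataMgrRes": ["UNMCMAgentRes"],
    "UNMCMAgentRes": ["UNMCMServiceRes"],
    "UNMCMServiceRes": ["UnmNodeRes"],
    "UnmNodeRes": ["UnmbusRes"],
    "UnmbusRes": ["Mysqlres"],
    "Mysqlres": ["ip-res", "vol-res"],
}

# TopologicalSorter maps each node to its predecessors; since a parent's
# predecessors are its children, children appear before their parents.
order = list(TopologicalSorter(deps).static_order())
print(order)  # children appear before their parents
```

This is only a reading aid for the table; in the cluster the VCS brings resources online in this order automatically once the links are configured.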


3.11 Verifying the Installation

The following introduces how to verify that the active / standby system is
installed correctly.

Procedure

1. Check whether the EMS client can log into the active and standby servers
respectively.

1) Start all resources on any of the active and standby servers and connect to
the EMS service end through the EMS client. If you log in successfully and
perform operations normally, the EMS is installed correctly on the server.

2) Switch all resources to another server manually and connect to the EMS
service end through the EMS client. If you log in successfully and perform
operations normally, the EMS is installed correctly on the server.

2. Check whether automatic switching can be performed between the active and
standby EMS systems.

The active and standby EMS servers will trigger automatic switching upon the
following failures.

4 A power failure occurs on the active EMS server.

4 An abnormal restart occurs on the active EMS server.

4 A disk fault occurs on the active EMS server.

4 A fault occurs on the network card (including the IP address) of the active
EMS server.

4 The EMS service on the active server stops abnormally.

4 A database fault occurs on the active EMS server.

Simulate the following faults to check whether the active and standby EMS
systems trigger automatic switching successfully.

1) Manually disconnect the power supply of the active EMS server and check
whether the switching is successful.

2) Manually restart the active EMS server and check whether the switching is
successful.


3) Manually disable the optical card of the active EMS server and check
whether the switching is successful.

4) Manually disable the network card of the active EMS server and check
whether the switching is successful.

5) Manually stop the EMS service on the active EMS server and check
whether the switching is successful.

6) Manually stop the database service on the active EMS server and check
whether the switching is successful.

4 Remote HA System with Two Disk Arrays

This chapter introduces the installation method for the remote HA system with two disk
arrays.

Preparations Before Installation

Checking Hardware Connections

Installing the Disk Array

Installing the Cluster Software Veritas

Configuring the VVR

Configuring the VCS

Configuring the Cluster Resource

Installing the Database

Installing the UNM2000

Configuring the EMS Resource

Verifying the Installation


4.1 Preparations Before Installation

Before installing the active/standby system (remote HA system with two disk arrays),
make sure the following preparations are done:

u The Qlogic optical fiber channel HBA card is installed on the active and
standby servers respectively.

u The hardware connections for the remote HA system with two disk arrays are
completed according to Checking Hardware Connections.

u The host names of the active and standby servers are set to WIN52 and WIN53
respectively (you can customize the host names in the Properties dialog box of
the server).

u The network card names of the active and standby servers are set. It is
recommended that you set the database network card, device network card
and heartbeat network card to TCPIP, DEVICE and HEARTBEAT respectively.

u The database IP address, device IP address and heartbeat IP address of the
active and standby servers are configured.

This manual takes the following IP configuration planning as an example to
introduce how to configure the network cards. In this example, only two network
cards are configured, and the database IP address and the device IP address
share the same network card.
| Server | Network Card | IP | Description |
| --- | --- | --- | --- |
| Active server | Database network card (TCPIP) | 10.170.1.52 | Database IP address and device IP address: configured directly on the TCPIP network card. |
| Active server | Database network card (TCPIP) | 10.170.1.54 | Floating IP address of the database: bound when configuring the FHEmsService resource group. |
| Active server | Heartbeat network card (HEARTBEAT) | 10.0.0.1 | Heartbeat IP address: configured directly on the HEARTBEAT network card. |
| Active server | Heartbeat network card (HEARTBEAT) | 10.0.0.11 | GCO heartbeat IP address: bound when configuring the VCS and the replication resource group. |
| Standby server | Database network card (TCPIP) | 10.170.1.53 | Database IP address and device IP address: configured directly on the TCPIP network card. |
| Standby server | Database network card (TCPIP) | 10.170.1.54 | Floating IP address of the database: bound when configuring the FHEmsService resource group. |
| Standby server | Heartbeat network card (HEARTBEAT) | 10.0.0.2 | Heartbeat IP address: configured directly on the HEARTBEAT network card. |
| Standby server | Heartbeat network card (HEARTBEAT) | 10.0.0.12 | GCO heartbeat IP address: bound when configuring the VCS and the replication resource group. |

Note:

u The network for VVR volume replication should have sufficient
bandwidth according to the number of NEs managed by the EMS.
Generally, a rate of 50 Mbit/s should be guaranteed.

u The IP address used for VVR volume replication is bound with the
heartbeat network card.

u The IP address used for GCO heartbeat detection can be bound with
the database network card or heartbeat network card. In the project,
it is usually bound with the heartbeat network card.

u As the active and standby servers are deployed in different equipment rooms,
if the configured database IP addresses are on different network
segments, the database floating IP address cannot be provided. Upon
active / standby switching, the upper-level systems, such as the NMS
and client, must manually modify the IP address they connect to.

u The priorities of the network cards of the active and standby servers are set.
Their priorities from high to low should be: database network card > device
network card > heartbeat network card.
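On Windows Server 2012 R2, one way to realize this priority order is through interface metrics, where a lower metric means a higher priority. The following is a sketch using the standard Set-NetIPInterface cmdlet in PowerShell; the adapter aliases are the recommended names from this section and may differ in your deployment, and the metric values are illustrative.

```powershell
# Lower InterfaceMetric = higher priority.
# Adapter aliases (TCPIP, DEVICE, HEARTBEAT) are assumptions based on
# the naming recommended above.
Set-NetIPInterface -InterfaceAlias "TCPIP"     -InterfaceMetric 10
Set-NetIPInterface -InterfaceAlias "DEVICE"    -InterfaceMetric 20
Set-NetIPInterface -InterfaceAlias "HEARTBEAT" -InterfaceMetric 30
```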


u Add the following information in the windows\system32\drivers\etc\hosts file and
make sure the active and standby servers can communicate with each other by
host name.

4 The relationship between the current server host name and its database
floating IP address.

4 The relationship between the remote server host name and its database IP
address.

u Set the same username and password for the administrators of the active and
standby servers.

u Turn off the Windows firewall on both the active and standby servers.
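With the example planning above (host names WIN52 / WIN53, database IP addresses 10.170.1.52 / 10.170.1.53, database floating IP address 10.170.1.54), the hosts entries on the active server WIN52 might look as follows. This is an illustration only; adjust it to your own planning.

```text
10.170.1.54   WIN52    # current server host name -> database floating IP address
10.170.1.53   WIN53    # remote server host name  -> its database IP address
```

On the standby server WIN53, mirror the entries (WIN53 mapped to 10.170.1.54, WIN52 mapped to 10.170.1.52), and verify the result by pinging each server from the other by host name.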

4.2 Checking Hardware Connections

Before installing the disk array software, you need to install and connect the
hardware correctly; otherwise, errors may occur during the installation.

Connecting the Hardware for Remote HA System with Two Disk Arrays

When the device network card and database network card of the cluster server
belong to the same DCN network, connect the hardware according to Figure 4-1.


Figure 4-1 Hardware Connection (Device Network Card and Database Network Card in the
same DCN Network)

When the device network card and database network card of the cluster server
belong to different DCN networks, connect the hardware according to Figure 4-2.


Figure 4-2 Hardware Connection (Device Network Card and Database Network Card in
Different DCN Networks)

4.3 Installing the Disk Array

The FiberHome active / standby system can use the Fujitsu and Dell disk arrays.
The following introduces how to install the Fujitsu and Dell disk arrays respectively.

4.3.1 Installing the Fujitsu Disk Array

The Fujitsu disk array is installed through the webpage.

Prerequisite

u The RMT or MNT port of the Fujitsu disk array is connected to an idle port of the
active server through a network cable, and the IP address of the network card
is on the same network segment as the default IP address of the disk array.

u IE 7.0 or a later version is installed on the active server.


Procedure

1. Enter the IP address (http://192.168.1.1 by default) in the IE browser of the
active server to open the following webpage.

2. Enter the username and password (both are root) and click Logon to access
the installation window.

3. Select Volume Settings→RAID Group Management→Create RAID Group
to open the following webpage.

4. Select the default option Create RAID Group (Disks are assigned
automatically) and click Next.

5. Set the RAID group name, set RAID Level to RAID5 (keep default settings for
other parameters) and click Create, as shown below.


6. In the following dialog box, click OK. The system starts to create the RAID,
which may take several minutes.

If the following information appears, the RAID group is created successfully.

7. Select Volume Settings→Volume Management→Create Volume to open the
following webpage.


8. Click 0:lv in the left navigation tree, set the volume name and capacity (adopt
default settings for other parameters), and then click Create.

9. In the following dialog box, click OK. The system starts to create the volume,
which may take several minutes.

If the following information appears, the volume is created successfully.


10. Select Volume Settings→Volume Management→Configure LUN Mapping
to open the following webpage.

11. In the left navigation tree, click CM#0 Port#0 and click Edit to set the name of
bound volume, as shown below.


12. Click Set and then click OK in the following dialog box. The system starts to
establish the binding relationship, which may take several minutes.

If the following information appears, the binding relationship is established
successfully.

13. Repeat steps 11 and 12 to establish the binding relationships of CM#0 Port#1,
CM#1 Port#0 and CM#1 Port#1 respectively.

4.3.2 Installing the Dell Disk Array

The Dell disk array installation includes installing the disk array management
software and configuring the disk array.

4.3.2.1 Installing the Dell Disk Array Management Software

The following introduces how to install the Dell disk array management software.

Prerequisite

The installation program of the Dell disk array management software is obtained.


Procedure

1. On the active server, double-click the installation program of the disk array
management software mdss_install.exe to open the following dialog box.

2. Select the language and click OK.

3. Click Next in the dialog box as shown below.


4. Select I accept the terms of the license Agreement and click Next.

5. Select the default install set Full (Recommended) and click Next.


6. Select Fiber Channel (MD3600f, MD3620f, MD3660f) and click Next.

7. Select No, I will manually start the event monitor service and click Next.


8. Adopt the default destination folder (recommended), and click Next.

9. View the installation summary and click Install.


10. Wait until the system completes the installation. Then select No, I will restart
my system myself later and click Done, as shown below.


4.3.2.2 Configuring the Dell Disk Array

The Dell disk array is configured using the Dell disk array management software.

Prerequisite

u The Dell disk array management software is installed on the active server.

u The management port of the Dell disk array is connected to an idle port of the
active server through a network cable, and the IP address of the network card
is on the same network segment as the default IP address of the disk array.

Caution:

The Dell disk array has two management ports (network ports), generally
marked as 3 and 4. Their default IP addresses are 192.168.128.101 and
192.168.128.102 respectively.

Procedure

1. Select Start→All Programs→Dell→Modular Disk Storage Manager→
Modular Disk Storage Manager Client.

2. In the Select Addition Method dialog box, select Manual and click OK.


3. In the Add New Storage Array - Manual dialog box, select Out-of-band
management, enter the management IP address 192.168.128.101 of the disk
array in the RAID Controller Module (DNS/Network name, IPv4 address or
IPv6 address): text box, and then click Add.


4. Confirm to add only one RAID controller module path and click Yes in the
displayed alert box.

5. Click Yes in the displayed Storage Array Added alert box.


6. In the Device tab of the PowerVault MD Storage Manager (Enterprise
Management) window, right-click the disk array and select Manage Storage
Array from the shortcut menu.

7. Click Yes in the Partially Managed Notice alert box to start the disk array
management window.


8. In the Storage & Copy Services tab of the disk array management window,
right-click Total Unconfigured Capacity and select Create Disk Array from
the shortcut menu.

9. Click Next as shown below.


10. Enter the disk group name, select Manual (Advanced): Choose specific
physical disks to obtain capacity for the new disk group. and click Next.


11. Set the RAID level to RAID 5, add the physical disk, click Calculate Capacity
and then click Finish.


12. In the Storage & Copy Services tab of the disk array management window,
right-click the created disk group and select Create Virtual Disk from the
shortcut menu.

13. Set the capacity and name of the virtual disk (keep default values for other
settings), as shown below, and then click Finish.


14. Click No in the displayed alert box to complete creating the virtual disk and
wait for the initialization to complete.


15. In the Host Mappings tab of the disk array management window, right-click
the host under Default Group and select Delete from the shortcut menu to
delete existing hosts under Default Group one by one.

16. Right-click Default Group and select Define→Host from the shortcut menu.

17. Set the host name, click No, and click Next, as shown below.


18. Select Add by selecting a known unassociated host port identifier, select
the identifier from the Known unassociated host port identifier drop-down list,
set alias and click Add to add the port identifiers and alias associated with the
host to the list. Then click Next.


19. Select Windows from the Host type (operating system) drop-down list and
click Next.


20. View the definition of the current host and click Finish.


21. Click No in the following alert box.

22. In the Host Mappings tab of the disk array management window, right-click
the host and select Add LUN Mapping from the shortcut menu.


23. Set the logical unit number of the host and click Add, as shown below.


24. View the mapping result and restart the two servers.

4.4 Installing the Cluster Software Veritas

The following introduces how to install the cluster software Veritas. You need to
install the software only on the active server and the standby server will synchronize
the installation automatically.


Prerequisite

u The Veritas software installation package and license are obtained.

u The Windows Server 2012 R2 Standard operating system is installed on the
server that is to run the cluster software Veritas.

Procedure

1. Double-click the installation file to open the following window.

2. Click Install or upgrade server and client components to open the following
welcome window.


3. Click Next and select I accept the terms of License Agreement, as shown
below.


4. Click Next to access the Product Updates window.


5. Click Next to access the System Selection window. Enter the hostnames of
the active and standby servers in System Name or IP respectively and click
Add. Keep default values for other parameters.

Note:
When this step is performed on the active server, the cluster software will
be automatically installed on the standby server as the hostnames of the
active and standby servers are added.

6. Click Next and then click Yes in the displayed dialog box, as shown below.


7. Click OK in the dialog box that appears.

8. In the following previewed installation information, deselect Automatically
reboot systems after installer completes the operation and then click Next.


9. In the displayed dialog box, click OK. The Veritas software starts to be installed,
which may take several minutes.

If the following information appears, the installation is successful.


10. Click Next and the following information appears.


11. Click Next and the following information appears. Then, click Finish to
complete the installation.

12. Restart the active and standby servers manually.

4.5 Configuring the VVR

The following introduces how to configure the related functions of VVR.

4.5.1 Configuring the VVR Security Service

The following introduces how to configure the VVR security service. You need to
configure it only on one server.

Background Information

If the VVR security service is not configured or fails to be configured, a "no
privilege" error will be prompted upon switching between the active and standby
servers.


Procedure

1. On any of the servers, click to open the Start window, and click . In
the Apps window, click VVR Security Service Configuration Wizard.

2. Click Next in the welcome window as shown below.

3. Enter the username and password of the Windows system and click Next.


4. In the Available domains pane, select the working group where the server
is located, click to add it to the Selected domains pane and click Next.

Note:

Wait for the system to automatically search for the available working
group or click Add domain to add the working group manually.


5. Click Add host to add the host names one by one and click Configure.


6. Wait until the Status column of the host shows Account update succeeded,
and then click Finish.


4.5.2 Configuring the Dynamic Disk Groups

The following introduces how to configure the dynamic disk group. You need to
configure it on two servers respectively.

Background Information

After being installed, the cluster software will take over the disk management
function of the original Windows operating system.

Procedure

1. On the active server, click to open the Start window, and click . In the
Apps window, click Veritas Enterprise Administrator to open the following
dialog box and click OK.


2. In the Veritas Enterprise Administrator window, click Connect.

3. Set Host Name and click Connect, as shown below.


4. In the displayed dialog box, enter the username and password for logging into
the Windows system of the selected host, and click OK.

5. Click OK in the dialog box that appears.

6. Under Disks of the System pane, right-click the disk used for volume
replication and select Write Signature from the shortcut menu.

Note:

u You can determine the disk to be set according to the capacity of the
disk.

u The disk on which the operation is to be performed must be online. Right-click
the disk and select Online Disk from the shortcut menu to bring the
disk online.


7. Select GUID Partition Table (GPT), select the corresponding disk and click
Select, as shown below. Then click OK.


8. Click Yes in the displayed alert box.

9. Right-click the corresponding disk and select New Dynamic Disk Group from
the shortcut menu to open the following dialog box and then click Next.


10. Set the disk group name, select Create cluster group, confirm the selected
disk and click Next.


11. Confirm the information of the disk group and click Next.


12. Click Finish in the dialog box as shown below to complete creating the
dynamic disk group.


13. Repeat steps 2 to 12 to create the dynamic disk group on the other server.
Make sure the disk group names on the two servers are the same.

4.5.3 Creating the Disk Volume

You need to create two volumes for replicating data and logs respectively. The
volumes should be configured on the two servers respectively, and the volume
names and sizes on the active server should be completely consistent with those
on the standby server.

Prerequisite

The dynamic disk group is created.


Procedure

1. Under Disk Groups of the Veritas Enterprise Administrator window of any
server, right-click the created disk group and select New Volume from the
shortcut menu.

2. Click Next in the dialog box as shown below.

3. Adopt the default settings (recommended), and click Next.


4. Set the name and capacity of the data volume, and then click Next.


5. Select Do not assign a drive letter and click Next.


6. Select NTFS to format the volume, as shown below, and click Next.


7. Confirm the information of the created volume and click Finish.


8. Repeat steps 1 to 7 to create the log volume. The log volume does not require
formatting. Therefore, deselect Format this volume when creating the log
volume in the dialog box as shown below.


9. After the creation, view the created volumes, as shown below.

10. Repeat steps 1 to 9 to create the volumes on the other server. Make sure the
volume names and capacities on the two servers are completely consistent.


4.6 Configuring the VCS

The following introduces how to configure the VCS in the remote HA system mode.
You need to configure it on two servers respectively.

Procedure

1. On the CLI of any server, enter VCW /nonad and press "Enter" to start the VCS
configuration wizard.

2. In the VCS welcome dialog box, click Next.


3. Enter the host name of the current server, click Add to add it to the Selected
systems pane and click Next.


4. View the status of the selected system and click Next.


5. Select Create New Cluster and click Next.


6. Enter the information of the cluster, select the host name of the server in the
cluster, as shown below, and then click Next.

Note:

Select the cluster ID and make sure the cluster IDs of the active and
standby servers are not the same.


7. Confirm the system information and click Next.


8. Click No in the displayed alert box to skip confirming the private heartbeat
connection.


9. Select Use VCS User Privileges, set the administrator user information of the
cluster and click Next.

Note:

The default username and initial password of the VCS are admin and
password respectively, which can be modified as needed.


10. Confirm the information in the following dialog box and click Configure.


11. Click Next in the dialog box as shown below.


12. Select GCO Option and click Next.


13. Select the network card, enter the IP address and subnet mask used for the
GCO heartbeat, as shown below, and click Next.

The GCO heartbeat IP address is usually bound with the heartbeat network
card.


14. Select Bring 'WAC' resource online and click Configure.


15. Click Finish to complete configuring the cluster service component.


16. Repeat steps 1 to 15 to configure the VCS on the other server. Make sure the
Cluster Name, Cluster ID and GCO IP are unique.

4.7 Configuring the Cluster Resource

The following introduces how to configure the cluster resource.

4.7.1 Setting the Replication Resource Group

The following introduces how to set the Replication resource group. You need to
configure it on two servers respectively.


Background Information

The Replication resource group includes the NIC, IP, VMDg and VvrRvg resources.
The VMDg and VvrRvg resources should be configured after the RDS is configured.
For specific operations, see Configuring the Disk Resource.

Procedure

1. On the active server, click to open the Start window, and click . In
the Apps window, select Veritas Cluster Manager – Java Console to open
the Cluster Monitor window.

2. Select File→New Cluster, enter the IP address or computer name of the local
computer, and then click OK.

3. Enter the username and password for logging into the Cluster and click OK.


Note:

After login, you can view the configured GCO resource group in the
cluster.

4. Add a Replication resource group.

1) Right-click the cluster name and select Add Service Group from the
shortcut menu.

2) Click Yes in the displayed alert box to switch to the Read/Write mode.


3) In the Add Service Group dialog box, enter the resource group name in
the Service Group name text box, select the corresponding system in the
Available Systems pane, and click to add it to the Systems for
Service Group pane. Select Startup, adopt the default settings for other
parameters, as shown below, and then click OK.
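
The same service group can also be created from the VCS command line instead of the Java Console. This is only a sketch; the group name Replication and the system name WIN52 follow the examples in this guide and must be adapted to your own planning:

```
rem Open the VCS configuration in read/write mode
C:\> haconf -makerw
rem Add the service group and allow it to run and auto-start on the local node
C:\> hagrp -add Replication
C:\> hagrp -modify Replication SystemList WIN52 0
C:\> hagrp -modify Replication AutoStartList WIN52
rem Save the configuration and return to read-only mode
C:\> haconf -dump -makero
```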

5. Add the NIC resource.

1) Right-click the Replication resource group and select Add Resource from
the shortcut menu.


2) In the displayed Add Resource dialog box, set the resource name and
select the resource type, as shown below.


3) In the lower pane of the Add Resource dialog box, select the
MACAddress row (in bold) and click . Then select Per System, enter
the MAC address of the network card bound with the VVR IP address and
click OK.

Note:

u The IP address used for VVR volume replication is generally bound
with the heartbeat network card.

u Enter "ipconfig /all" on the CLI to view the MAC address of the
network card.


4) In the Add Resource dialog box, select Critical and Enabled, and click
OK to complete adding the NIC resource.
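
As a sketch, the NIC resource can also be added with the hares command line instead of the dialog box. The resource name nic_res matches Table 4-1; the MAC address below is a placeholder to replace with the value reported by "ipconfig /all":

```
C:\> haconf -makerw
C:\> hares -add nic_res NIC Replication
rem MACAddress is a per-system attribute, so make it local first
C:\> hares -local nic_res MACAddress
C:\> hares -modify nic_res MACAddress 00-50-56-00-00-01 -sys WIN52
C:\> hares -modify nic_res Critical 1
C:\> hares -modify nic_res Enabled 1
C:\> haconf -dump -makero
```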

6. Add the IP resource.


1) Right-click the Replication resource group and select Add Resource from
the shortcut menu.

2) In the displayed Add Resource dialog box, set the resource name and
select the resource type, as shown below.

3) In the lower pane of the Add Resource dialog box, set Address,
SubNetMask and MACAddress.

¡ Select Global for Address and enter the planned VVR IP address.

¡ Select Global for SubNetMask and enter the planned subnet mark.

¡ Select Per System for MACAddress and enter the MAC address of
the network card bound with the VVR IP address.

4) In the Add Resource dialog box, select Critical and Enabled, and click
OK to complete adding the IP resource.


7. Repeat steps 1 to 6 to add the Replication resource group and the relevant
resources on the other server. The resource group and resource names on the
two servers must be consistent.

4.7.2 Configuring the RDS

The following introduces how to configure the RDS. You need to configure it only on
the active server, and the standby server will automatically synchronize the
configuration.

Prerequisite

The replication resource group is configured.

Procedure

1. On the active server, click to open the Start window, and click . In
the Apps window, click Veritas Enterprise Administrator to open the
following dialog box and click OK.


2. In the displayed window, click Connect.

3. Select Host Name and click Connect, as shown below.

4. In the displayed dialog box, enter the username and password for logging into
the Windows system of the corresponding host, and click OK.


5. Click OK in the dialog box that appears.

6. In the Veritas Enterprise Administrator window, select View→Connection→
Replication Network to open the following window.


7. Right-click Replication Network in the System pane, select Setup Replicated
Data Set from the shortcut menu to open the RDS installation wizard dialog box,
and then click Next.


8. Set the RDS and RVG name, set the active server as the primary node, as
shown below, and then click Next.

9. Select the disk volume (the data volume only) for replication and click Next.


10. Select the volume for replicating logs and click Next.

11. Confirm the relevant information of the primary node RDS and click Create
Primary RVG.


12. Click Yes in the displayed alert box to add the standby node.

13. Enter the host name of the standby server and click Next.

14. Click Yes in the displayed alert box to automatically create the same
configuration on the standby node.

15. Select the IP addresses of the active and standby servers (that is, the planned
VVR IP addresses) used for data replication and transmission. For other
parameters, adopt the default settings, as shown below.


16. Click Advanced and set Protocol to TCP/IP. Adopt the default settings for
other parameters, as shown below, and then click OK.


17. Click Next in the dialog box as shown below.

18. Click Next in the dialog box as shown below to automatically start
synchronization and replication.


19. Confirm the RDS configuration and click Finish.
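
The RDS created by the wizard in steps 7 to 19 can also be set up with the VVR command line. A minimal sketch, assuming placeholder names (disk group dg01, RVG rvg01, data volume datavol, SRL volume srlvol) that must be replaced with the names from your planning:

```
rem Create the primary RVG on the active server
C:\> vradmin -g dg01 createpri rvg01 datavol srlvol
rem Add the standby server as the secondary host
C:\> vradmin -g dg01 addsec rvg01 WIN52 WIN53
rem Start replication to the secondary with automatic synchronization
C:\> vradmin -g dg01 -a startrep rvg01 WIN53
```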

4.7.3 Configuring the Disk Resource

The following introduces how to configure the disk resource. You need to configure
it on two servers respectively.

Prerequisite

The RDS is configured.

Procedure

1. Add the VMDg resource.

1) In the cluster of the active or standby server, right-click the Replication
resource group and select Add Resource from the shortcut menu.

2) In the displayed Add Resource dialog box, set the resource name and
select the resource type, as shown below.


3) In the lower pane of the Add Resource dialog box, select the
DiskGroupName row (in bold) and click . In the Edit Attribute dialog
box, set the disk group name and click OK.

Note:

DiskGroupName should be set to the disk group name defined when
setting the dynamic disk group.


4) In the Add Resource dialog box, select Critical and Enabled, and click
OK to complete adding the VMDg resource.

2. Add the VvrRvg resource.


1) Right-click the Replication resource group and select Add Resource from
the shortcut menu.

2) In the displayed Add Resource dialog box, set the resource name and
select the resource type, as shown below.

3) In the lower pane of the Add Resource dialog box, set RVG,
VMDgResName and IPResName.

¡ Select Global for RVG and enter the RVG name, which should be
consistent with the RVG name set when configuring the RDS.

¡ Select Global for VMDgResName and enter the name of the VMDg
resource.

¡ Select Global for IPResName and enter any character string; this
value does not actually take effect, but the attributes in bold must be
set.

4) In the Add Resource dialog box, select Critical and Enabled, and click
OK to complete adding the VvrRvg resource.


3. Set the dependency relationship among resources.

1) Select the Replication resource group and select the Resources tab.

2) Right-click the corresponding resource and select Link from the shortcut
menu. In the displayed Link Resources dialog box, select the child
resource and establish the dependency relationship among the resources
one by one, according to Table 4-1.

Table 4-1 Dependency Relationship Among Disk Resources

No.  Dependency Relationship (Parent Resource → Child Resource)
1    VvrRvg_Res→VvrRvg_VMDg
2    VvrRvg_Res→ip_res→nic_res
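
The dependency links in Table 4-1 can equivalently be created on the command line with hares -link (parent resource first, then child):

```
C:\> haconf -makerw
C:\> hares -link VvrRvg_Res VvrRvg_VMDg
C:\> hares -link VvrRvg_Res ip_res
C:\> hares -link ip_res nic_res
C:\> haconf -dump -makero
```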


4. Repeat steps 1 to 3 to configure the disk resource and set the dependency
relationship among resources on the other server.

4.7.4 Configuring the FHEmsService Resource Group

The following introduces how to configure the FHEmsService resource group. You
need to configure it on two servers respectively.

Background Information

The FHEmsService resource group includes the RVGPrimary, MountV and EMS
resources. The relevant EMS resources should be configured after the database
and EMS are installed.

Procedure

1. Add the FHEmsService resource group in the cluster of the active or standby
server.


1) Right-click the cluster name and select Add Service Group from the
shortcut menu.

2) In the Add Service Group dialog box, enter the resource group name in
the Service Group name text box, select the corresponding system in the
Available Systems pane, and click to add it to the Systems for
Service Group pane. Select Startup, adopt the default settings for other
parameters, as shown below, and then click OK.


2. Add the RVGPrimary resource.

1) Right-click the FHEmsService resource group and select Add Resource
from the shortcut menu.


2) In the displayed Add Resource dialog box, set the resource name and
select the resource type, as shown below.


3) In the lower pane of the Add Resource dialog box, set AutoTakeover,
AutoResync and RvgResourceName.

¡ Set AutoTakeover to 1 (default value).

¡ Select Global for AutoResync and set the value to 1.

¡ Select Global for RvgResourceName and enter the RVG resource
name in the Replication resource group.

4) In the Add Resource dialog box, select Critical and Enabled, and click
OK to complete adding the RVGPrimary resource.


3. Add the MountV resource.

1) Right-click the FHEmsService resource group and select Add Resource
from the shortcut menu.

2) In the displayed Add Resource dialog box, set the resource name and
select the resource type, as shown below.


3) In the lower pane of the Add Resource dialog box, set MountPath,
VolumeName and VMDGResName.

¡ Select Global for MountPath and enter the mount path D:\MySQL.

¡ Select Global for VolumeName and enter the data volume name
defined when creating the disk volume.

¡ Select Global for VMDGResName and enter the VMDg resource


name in the Replication resource group.

4) In the Add Resource dialog box, select Critical and Enabled, and click
OK to complete adding the MountV resource.


4. (Optional) If the database IP addresses of the two servers are on the same
network segment, you need to configure the database floating IP address.
For the specific operations of adding the floating IP resource, see Configuring
the IP Resource.


5. Set the dependency relationship among resources.

1) Select the FHEmsService resource group and select the Resources tab.

2) According to Table 4-2, right-click the corresponding resource and select
Link from the shortcut menu. In the displayed Link Resources dialog box,
select the child resource and establish the dependency relationship among
the resources one by one. The following figure uses the remote HA
system with two disk arrays (with the floating IP address not configured) as
an example.

Table 4-2 Dependency Relationship Between the RVGPrimary and MountV Resources

No.  Dependency Relationship (Parent Resource → Child Resource)
1    MountV_res→RVGPrimary_res

6. Repeat steps 1 to 5 to add the FHEmsService resource group, add the relevant
resources and set the dependency relationship among the resources on the
other server. The resource groups and resource names on the two servers must
be consistent.

4.7.5 Configuring the GCO Function

The following introduces how to configure the GCO function. You need to configure
it only on one server. By configuring the GCO function, you can manage and
monitor the clusters on the active and standby servers through the GCO.

Prerequisite

The FHEmsService resource group is configured.

Procedure

1. Add the remote cluster.


1) In the cluster on the active or standby server, select Edit→Add/Delete
Remote Cluster.

2) In the displayed Remote Cluster Configuration Wizard dialog box, click
Next.

3) Select Add Cluster and click Next.


4) Set the host name / IP address, username and password of the remote
cluster, and then click Next.


5) Click Finish in the dialog box as shown below to complete adding the
remote cluster.

6) In the cluster, select the cluster name and view the status of the local
cluster and the remote cluster in the Remote Cluster Status tab. The
following figure shows that the cluster status is normal. If the status is
abnormal, check the network and configuration.


2. Configure the global resource group.

1) In the cluster on the active or standby server, select Edit→Configure
Global Groups.


2) In the displayed Global Group Configuration Wizard dialog box, click
Next.

3) In the Available Clusters pane, select the cluster of the other server and
click . In the Select the group to modify drop-down list, select
the resource group to be configured as the global resource group. In the
Select cluster fail over policy drop-down list, select Auto and click Next.

Note:

Set the resource group where the EMS services are located as the global
resource group. The relevant EMS services will be added to the
FHEmsService resource group in the subsequent procedures.


4) Click Next in the dialog box as shown below.


Note:

If Host Name/IP Address and Username in the above figure are null, click
and set the host name / IP address, cluster username and password
of the remote cluster in the Remote cluster information dialog box.

5) Click Finish in the dialog box as shown below to complete setting the
global resource group.

Note:

After the global resource group is set, the icon of the FHEmsService
resource group will be circled in blue, as shown below.


4.8 Installing the Database

The following introduces how to install the MySQL database. You need to install the
database on the active and standby servers respectively.

Prerequisite

u The MySQL database installation package is obtained.

u The disk array and cluster resource are configured.

Procedure

1. Start all resource groups in the cluster of the active server.

Right-click the corresponding resource group, select Online→Server Name
(using WIN52 as an example) and click Yes in the displayed dialog box to
start the resource group.

2. Make sure the disk array is added to the server.

Caution:

In the remote HA system with two disk arrays, you can bind the path of
the disk array by setting the path value of the MountV resource. For
specific operations, see the MountV resource configuration part in
Configuring the FHEmsService Resource Group.

Caution:

When the MountV resource on a server is offline, an empty D:\MySQL
directory will be generated. Do not add files to this directory; otherwise,
the MountV resource on the server cannot be brought online.


3. Decompress the MySQL installation package and then copy the decompressed
files to the "D:\MySQL" directory, as shown below.

4. Set the environment variables.

1) Right-click Computer, select Properties, and then select Advanced
system settings. In the System Properties dialog box, select
Environment Variables.

2) In the System variables box, select the Path variable, click Edit and
append D:\MySQL\bin to the variable value, separated from the previous
value by a semicolon (;), as shown below.

5. Run the following commands on the CLI to install the MySQL database service.
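
A typical command sequence for a MySQL package unpacked to D:\MySQL looks like the following; the service name MySQL and the my.ini location are assumptions to adapt to your package:

```
C:\> cd /d D:\MySQL\bin
rem Register MySQL as a Windows service
D:\MySQL\bin> mysqld --install MySQL --defaults-file=D:\MySQL\my.ini
rem Start the service to verify the installation
D:\MySQL\bin> net start MySQL
```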


6. Set the MySQL service.

1) View the attribute of the MySQL service in the service list and make sure
the executable file path of the MySQL service is D:\MySQL\bin, as shown
below.


2) Set the startup type of the MySQL service to Manual and click Start to
check whether the MySQL service can be started normally.

7. Switch the FHEmsService resource group to the other server.

Right-click the FHEmsService resource group, select Switch To→Remote
switch and click OK in the displayed dialog box. Make sure the FHEmsService
resource group is switched to the other server.

8. Repeat steps 4 to 6 to install the MySQL database on the other server
(WIN53).

Note:

When installing the MySQL database on another server, you need not
copy the MySQL installation program.


9. (Optional) Add the MySQL service resources on two servers respectively and
check whether the MySQL service resources can be manually switched. For
specific operations of adding resources, see Configuring the EMS Resource.

Note:

When you install the active / standby system for the first time, make sure
the MySQL service resources can be manually switched before installing
the EMS.

4.9 Installing the UNM2000

The following introduces how to install the UNM2000.

4.9.1 Installing the UNM2000

The following introduces how to install the UNM2000. You need to install it on two
servers respectively.

Prerequisite

u The installation package of the UNM2000 is obtained.

u The MySQL database is installed on the server and the MySQL service is
started.

Procedure

1. Double-click the UNM2000 installation software on the active server, and click
"Next" in the displayed dialog box.

Note:

The active server indicates the server whose cluster resources are online.


2. Select I accept the agreement and click Next.

3. Select the components to be installed and click Next.


4. Specify the UNM2000 installation path (the default installation path
D:\unm2000 is recommended), and then click Next.


5. Select the installation language and click "Next".

6. Set the Tomcat installation path (the default installation path D:\ApacheTomcat
is recommended), and then click Next.


7. Click Yes in the alert box that appears.

8. Set the Tomcat port (The default port 8080 is recommended), and then click
Next.

9. Specify the server deployment mode. Select the default Centralized and click
Next.


10. Specify the IP address of the computer running the server end, and then click
Next.

Note:

u When adopting the local HA system with one or two disk array(s), or
the remote HA system with two disk arrays (with the database
floating IP address configured), you need to set the IP address to the
floating IP address of the database upon EMS installation.

u When adopting the remote HA system with two disk arrays (with the
database floating IP address not configured), you need to set the IP
address to the database IP address of the corresponding server
upon EMS installation.


11. Select the mysql database and click Next.


12. Set the database information for the UNM2000 server end and then click
Next.

13. Confirm the installation information and then click Next.


14. The installation of the program starts. In the alert box displayed a while later,
click Finish.

15. Click No in the displayed alert box so that the computer does not restart immediately.

16. Switch the cluster resources to the other server and repeat steps 1 to 15 to
install the UNM2000 on the other server.

4.9.2 Initializing the Database

The following introduces how to initialize the database table structure. You need to
perform this operation only on one server.


Prerequisite

u The UNM2000 software has been installed successfully.

u The following services are stopped: UnmBus, UNMCMAgent,
UNMCMService, UNMCFGDataMgr, UnmNode and UnmServiceMonitor.

u The cluster resource is switched to the server whose database is to be
initialized, and the MySQL service is started.

Procedure

1. Click to open the Start window, and click . In the Apps window,
click Visual Studio 2008 Command Prompt.

2. Run the following commands on the CLI.

3. Complete the database initialization according to Table 4-3.

Table 4-3 Initialization Procedure

No.  Meaning                                      Description
1    Select the database type.                    Select the database type sequence number according to the current database.
2    Enter the database administrator password.   Set this password according to the actual planning. The default password is vislecaina.
3    Re-enter the password.                       Set this password according to the actual planning. The default password is vislecaina.
4    Enter the database instance name.            Default value.
5    Enter the database port number.              Default value.
6    Select the database language.                The default value is 1.
7    Enter the name of the database file to be omitted when clearing the database. Multiple files are separated by "-".   Default value.
8    Confirm the files to be omitted.             The default value is 1.
9    Select the database clearing type.           The default value is 1.

4. Restart the active and standby servers.

4.9.3 Configuring the Service

After installing the UNM2000, you need to stop the relevant EMS services and set
their startup type to Manual. You need to configure the service on the two servers
respectively.

Procedure

1. On the active or standby server, click to open the Start window. Click
Administrative Tool, double-click Services to open the Services window.

2. Stop the relevant UNM2000 services and set their startup type to Manual.

The relevant UNM2000 services include UnmBus, UNMCMAgent,
UNMCMService, UNMCFGDataMgr, UnmNode, UnmServiceMonitor,
Apache Tomcat 6 and MySQL. The following uses the
UNMCMService service as an example.
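
Stopping a service and setting its startup type to Manual can also be done from the command line, using UNMCMService as an example (note that sc requires a space after "start="):

```
C:\> net stop UNMCMService
C:\> sc config UNMCMService start= demand
```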


3. Repeat steps 1 and 2 to stop the relevant EMS services on the other server
and set their startup type to Manual.

4.10 Configuring the EMS Resource

The following introduces how to configure the EMS resource. You need to configure
it on two servers respectively.

Background Information

The EMS resources include the database service resources and EMS service
resources.

Procedure

1. Add the MySQL service resource.


1) In the cluster of the active or standby server, right-click the FHEmsService
resource group and select Add Resource from the shortcut menu.

2) In the displayed Add Resource dialog box, set the resource name and
select the resource type GenericService, as shown below.

3) In the lower pane of the Add Resource dialog box, set ServiceName.


Note:

ServiceName should be set to the service name in the service list.

4) In the Add Resource dialog box, select Critical and Enabled, and click
OK to complete adding the MySQL resource.

2. Add the EMS service resource as needed according to step 1.

4 Set Resource name to a resource name, which cannot contain spaces.
Refer to the service name.

4 Set Resource Type to GenericService.

4 Set ServiceName to a service name in the service list, as shown below.


4 Select Enabled for the service resource. For the key service resources
that have the automatic switching function, select Critical.
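
As a sketch, a GenericService resource can also be added from the VCS command line. The resource name MySQLRes and the service name MySQL follow the examples in this guide:

```
C:\> haconf -makerw
C:\> hares -add MySQLRes GenericService FHEmsService
C:\> hares -modify MySQLRes ServiceName MySQL
C:\> hares -modify MySQLRes Critical 1
C:\> hares -modify MySQLRes Enabled 1
C:\> haconf -dump -makero
```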

3. Set the dependency relationship among resources.

1) Select the FHEmsService resource group and select the Resources tab.

2) Right-click the corresponding resource and select Link from the shortcut
menu. In the displayed Link Resources dialog box, select the child
resource and establish the dependency relationship among the resources
one by one, according to Table 4-4.

Table 4-4 Dependency Relationship Among All Resources in the FHEmsService Resource
Group

No.  Dependency Relationship (Parent Resource → Child Resource)
1    Tomcat6→Tcp-ip
2    UnmServiceMonitorRes→UnmNodeRes
3    UNMCFGDataMgrRes→UNMCMAgentRes→UNMCMServiceRes→UnmNodeRes→UnmbusRes→MySQLRes
4    MySQLRes→Tcp-ip
5    MySQLRes→MountV_res→RVGPrimary_res


4. Repeat steps 1 to 3 to configure the service resources on the other server. The
resource names on the two servers must be consistent.

4.11 Verifying the Installation

The following introduces how to verify that the active / standby system is installed
correctly.

Procedure

1. Check whether the EMS client can log into the active and standby servers
respectively.


1) Start all resources on either the active or the standby server and connect to
the EMS service end through the EMS client. If you log in successfully and
can perform operations normally, the EMS is installed correctly on this server.

2) Manually switch all resources to the other server and connect to the EMS
service end through the EMS client. If you log in successfully and can
perform operations normally, the EMS is installed correctly on this server.

2. Check whether automatic switching can be performed between the active and
standby EMS systems.

The active and standby EMS servers will trigger automatic switching upon the
following failures.

4 A power failure occurs on the active EMS server.

4 An abnormal restart occurs on the active EMS server.

4 A disk fault occurs on the active EMS server.

4 A fault occurs on the network card (including the IP address) of the active
EMS server.

4 The EMS service on the active server stops abnormally.

4 A database fault occurs on the active EMS server.

Simulate the following faults to check whether the active and standby EMS
systems trigger automatic switching successfully.

1) Manually disconnect the power supply of the active EMS server and check
whether the switching is successful.

2) Manually restart the active EMS server and check whether the switching is
successful.

3) Manually disable the optical card of the active EMS server and check
whether the switching is successful.

4) Manually disable the network card of the active EMS server and check
whether the switching is successful.

5) Manually stop the EMS service on the active EMS server and check
whether the switching is successful.

6) Manually stop the database service on the active EMS server and check
whether the switching is successful.
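
During each simulated fault, the switchover progress can be watched from the command line of either server, for example:

```
rem Show on which system the FHEmsService group is online
C:\> hagrp -state FHEmsService
rem Show the overall status of the cluster systems
C:\> hasys -display
```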

5 Precautions for EMS Upgrade
The following introduces the precautions when upgrading the EMS version or
installing the version patches in the active / standby mode.

Procedure

1. Log into the EMS service end, and back up the EMS configuration and the
EMS installation file.

2. In the cluster, right-click the FHEmsService resource group, select Offline→All
Systems and click OK.

3. Make sure the running statuses of all resources are Offline, select the
resources corresponding to the EMS services and deselect the Critical option
for them one by one.

4. On either the active or the standby server, uninstall the EMS and install the new
version / load the patches.

5. After upgrading the EMS version or loading the patches, right-click the resource
group in the cluster and select Online→Server Name to start the resource.

6. (Optional) If the operations of creating the table space and database tables are
performed when installing the new EMS version or loading the patches, log into
the EMS server and import the EMS configuration.

7. Log into the EMS server. If you can perform operations successfully, the EMS
version is upgraded or the patches are loaded successfully.

8. Repeat steps 3 to 6 on the other server to upgrade the EMS version or load the
patches.

Note:

The EMS version or patches installed on the active and standby servers
should be the same.

9. After upgrading the two servers, select the resource corresponding to the EMS
service and select the Critical option.


Note:

Verify the EMS upgrade or patch loading correctness according to
Verifying the Installation.

6 Common Maintenance Operations
The following introduces the common maintenance operations in the active /
standby disaster-tolerant system.

Procedure

1. In the cluster, right-click the resource group to perform the following operations:

4 Online: Enables the resources in the resource group.

4 Offline: Disables the resources in the resource group.

4 Switch to: Switches the resource group between the active and standby
systems.

4 Clear Fault: Eliminates the failures of the resources in the resource group.

Perform the Online, Offline and Clear Fault operations as needed.

7 Failure Processing

The following introduces how to process the common failures of the active / standby
system.

VCS Troubleshooting and Restoring

Disk Troubleshooting

UNM2000 Network Convergence Management System (Based on Windows) Active/Standby System Installation Guide

7.1 VCS Troubleshooting and Restoring

The following introduces how to troubleshoot and restore the VCS.

7.1.1 Service Group Troubleshooting

Table 7-1 shows the common problems of making service groups offline and online
and the corresponding solutions.

Table 7-1 Service Group Troubleshooting

Problem: The cluster system is not in the RUNNING status.
Suggestion: Enter hasys -display system to validate whether the system is running.

Problem: The service group is not set to run on this system.
Analysis: The SystemList attribute of the service group may not contain the system name.
Suggestion: Use the output of the hagrp -display service_group command to validate the system name.

Problem: The service group is not set to start automatically.
Analysis: The service group is not automatically started on the system. It may be because the group is not set to start automatically, or is not set to start automatically on this specific system.
Suggestion: Use the output of the hagrp -display service_group command to validate the values of the AutoStart and AutoStartList attributes.

Problem: The service group is frozen.
Suggestion: Use the output of the hagrp -display service_group command to validate the values of the Frozen and TFrozen attributes. Use the hagrp -unfreeze command to unfreeze the group. Please note that the VCS will not get a frozen service group offline.

Problem: The service group is disabled automatically.
Analysis: When the VCS does not know the status of the service groups on a specific system, it will automatically disable the service groups on this system. A service group will be automatically disabled in the following situations:
u When the VCS engine HAD is not running on the system
u When the VCS does not detect all the resources of the service group on the system
u When the VCS can view the specific system only by disk heartbeats
In these situations, all the service groups whose SystemList attribute includes the system will be automatically disabled. This is not applicable to a powered-off system.
Suggestion: Use the output of the hagrp -display service_group command to validate the value of the AutoDisabled attribute. To get offline a group that is automatically disabled by the VCS, make sure the group is not completely or partially active on any system whose AutoDisabled attribute is set to 1 by the VCS. Specifically, validation may be disabled on the specified system because the resources would be damaged by being active on multiple systems. Then, clear the AutoDisabled attribute for each system:
C:\>hagrp -autoenable service_group -sys system

Problem: The service group whose failure is to be forwarded is online on another system.
Suggestion: Use the output of the hagrp -display service_group command to validate the value of the State attribute. Use the hagrp -offline command to get the group offline from another system.

Problem: The service group is waiting for the resource to be online / offline.
Suggestion: View the IState attributes of all resources in the service group to determine which resources are waiting to get online (or which resources are waiting to get offline). Use the hastatus command to identify these resources. For the reason why the resource cannot get online or offline, see the engine logs and proxy logs. To clear this status, make sure all the resources that are waiting to get online / offline will not get online / offline by themselves. Use the hagrp -flush command to clear the internal status of the VCS. Then, you can get the service group online or offline on another system.

Problem: A key resource is faulty.
Analysis: The output of the hagrp -display service_group command indicates that the service group is faulty.
Suggestion: Use the hares -clear command to eliminate the failure.

Problem: The service group is waiting for a dependency relationship to be satisfied.
Suggestion: To view the dependency relationship not satisfied, enter hagrp -dep service_group to view the dependency relationships of the service group, or enter hares -dep resource to view the dependency relationships of the resource.

Problem: The service group is not completely detected.
Analysis: This situation appears if the proxy process has not monitored all the resources in the service group. When the VCS engine HAD starts, it immediately probes for the initial statuses of all resources (a resource cannot be probed if its proxy has not returned a value). Before the VCS tries to get the service group online as part of AutoStart, it needs to probe this group on all systems contained in the SystemList attribute. This makes sure that the VCS will not get the service group online on another system even though the service group was online before the VCS started.
Suggestion: Use the output of the hagrp -display service_group command to view the value (which should be 0) of the ProbesPending attribute of the service group. To determine which resources are not probed, validate the local Probed attribute of each resource on the specified system. 0 indicates it is waiting for the probe result; 1 indicates it has been probed; 2 indicates the VCS is not booted. To obtain the relevant information, see the engine logs and proxy logs.
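
The checks in Table 7-1 can be chained from a command prompt. The following is a hedged sketch using the commands named in the table; service_group and system are placeholder names:

```shell
REM Confirm the system is in the RUNNING status
hasys -display system

REM Inspect SystemList, AutoStart, AutoStartList, Frozen and AutoDisabled in one pass
hagrp -display service_group

REM Unfreeze the group if Frozen or TFrozen is set
hagrp -unfreeze service_group

REM Re-enable a group that the VCS disabled automatically
hagrp -autoenable service_group -sys system

REM Flush a group stuck waiting for resources to get online / offline
hagrp -flush service_group -sys system
```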

7.1.2 Resource Troubleshooting

Table 7-2 shows the common problems of making resources offline and online and
the corresponding solutions.

Table 7-2 Resource Troubleshooting

Problem: The service group is getting online due to failure forwarding.
Analysis: The VCS tries to get online the resource that was online or getting online on the faulty system. Each parent resource can be started only after its child resource gets online.
Suggestion: Validate whether the child resource is online.

Problem: Waiting for the service group status.
Analysis: The status of the service group can prevent the VCS from getting resources online.
Suggestion: View the details of the status.

Problem: Waiting for the child resource.
Analysis: One or more child resource(s) of the parent resource is (are) offline.
Suggestion: Get the child resource online first.

Problem: Waiting for the parent resource.
Analysis: One or more parent resource(s) is (are) online.
Suggestion: Get the parent resource offline first.

Problem: Waiting for the resource to respond.
Analysis: The resource is getting online or offline as indicated. The VCS has directed the proxy to run the online entry point of the resource.
Suggestion: Validate the IState attribute of the resource. For the reason why the resource cannot get online, see the engine logs and proxy logs.
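
The resource-level checks in Table 7-2 map onto the hares utility in the same way; the following is a hedged sketch with placeholder resource and system names:

```shell
REM Show the State and IState attributes of a resource
hares -display resource

REM View the dependency relationships of the resource
hares -dep resource

REM Get a child resource online first, then clear the fault on its parent
hares -online child_resource -sys system
hares -clear parent_resource -sys system
```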

7.1.3 Global Cluster Troubleshooting

The following introduces the concepts of the disaster declaration and gives prompts
for configuration troubleshooting using the global cluster.

7.1.3.1 Disaster Declaration

When the status of a cluster in the global cluster changes to FAULTED, it cannot be
accessed any longer, and the failure will be forwarded according to the failure
reason (split brain, temporary interruption or permanent disaster).

If you take measures to process the cluster failures in the global cluster, the VCS
will prompt the type of the declared failure.

u Disaster: Indicates the primary data center will be lost forever.

u Outage: Indicates the primary data center may be restored back to its current
status at some time.

u Disconnect: Indicates split brain. Two clusters are started, but they are
disconnected.

u Replica: Indicates the data on the takeover target are consistent with the
backup source. When the service group is online, RVGPrimary will be started to
take over. This option is only applicable to the VVR environment.

Version: A 271
UNM2000 Network Convergence Management System (Based on Windows) Active/Standby System Installation Guide

You can select the groups whose failures are to be forwarded to the local cluster. In
this situation, the VCS will make these groups online on the node according to the
FailOverPolicy attribute of the selected groups. It will also mark these groups as
OFFLINE in other clusters. If no service groups are selected, the VCS will only mark
these groups as offline implicitly in the closed cluster without taking any other
measures.

7.1.3.2 Lost Heartbeat and Query Mechanism

When all the internal and external heartbeats between any two clusters are lost,
either the remote cluster is faulty or the communication links between the two
clusters are interrupted (wide area split brain).

When there are more than two clusters, these two situations should be
differentiated from each other. You can query the remaining clusters to confirm
whether the remote cluster that has lost the heartbeat transmission is closed. This
mechanism is called Query.

u In the dual-cluster configuration, if one connector loses all the heartbeats with
the other connector, the remote cluster is faulty.

u If there are more than two clusters, and one connector loses all the heartbeats
with another connector, the mechanism will query the status of the cluster
displayed in the remaining connector before declaring that the cluster is faulty.

u If the status of the cluster displayed in another connector is running (passive
query), the queried connector will change the status of the cluster to
UNKNOWN. This greatly reduces false cluster failures.

u If all connectors report that the cluster is faulty (active query), the queried
connector will regard the cluster as faulty and change the status of the
remote cluster to FAULTED.

7.1.3.3 VCS Alerts

The VCS alert is identified by the alert ID, which consists of:

u alert_type - indicates the alert type.

u cluster - indicates the cluster that generates the alert.

272 Version: A
7 Failure Processing

u system - indicates the system in which the alert is generated.

u object - indicates the name of the VCS object for which the alert is generated.
The object can be the cluster or the service group.

The alert is generated in the following format:

alert_type-cluster-system-object

For example, GNOFAILA-Cluster1-Replication indicates the alert of the GNOFAILA
type generated for the Replication service group on cluster 1.
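
Because the alert ID is a simple delimited string, its fields can be split in any shell. The following is a minimal sketch; the alert ID and the system name SysA are illustrative values, not output from a real cluster:

```shell
# Split a VCS alert ID of the form alert_type-cluster-system-object.
alert="GNOFAILA-Cluster1-SysA-Replication"
IFS='-' read -r type cluster system object <<EOF
$alert
EOF
echo "$type on $cluster/$system for $object"
# Prints: GNOFAILA on Cluster1/SysA for Replication
```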

Alert Type

The VCS can generate the alerts of the following types.

u CFAULT - Indicates that the cluster is faulty.

u GNOFAILA - Indicates that failover cannot be implemented for the global
group within the cluster where it was online. If the ClusterFailOverPolicy
attribute is set to Manual, and the wide area connector (WAC) is correctly
configured and is running upon occurrence of the failure, this alert will be
displayed.

u GNOFAIL - Indicates that failover cannot be implemented for the global group
in the cluster or remote cluster.

Note:

The possible reasons that failover to the remote cluster cannot be
implemented for the global group are as follows:
u ClusterFailOverPolicy is set to Auto or Connected, and the VCS
cannot determine the remote cluster to which the failures of the
group can be forwarded.

u ClusterFailOverPolicy is set to Connected, and the faulty cluster


cannot communicate with one or more remote cluster(s) in the
ClusterList of the group.

u The WAC is offline or is not correctly configured in the cluster where
the faulty group locates.


Management Alert

The alert should be processed manually. You can respond to the alerts according to
the following methods:

u If the reason of the alert can be ignored, you can use the Alerts dialog box or
the haalert command on the Java console or Web console to delete the alert.
You need to provide a remark on the reason for deleting the alert, and the VCS
will record the remark into the engine log.

u Take the corresponding action described in the management alert. You can
use the Java or Web console to perform the relevant operations.

u When the cancellation event of the alert occurs, the VCS will delete or cancel
some alerts.

If none of the above operations is performed and the VCS engine (HAD) is
running on at least one node in the cluster, the management alert will persist. If the
HAD is not running on any node in the cluster, the management alert will
disappear.
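
The haalert command mentioned above can replace the console dialog boxes. The exact options may vary between VCS releases, so treat the following as a hedged sketch; alert_id is a placeholder:

```shell
REM List the current alerts and their IDs
haalert -display

REM Delete an alert by its ID; the remark is recorded into the engine log
haalert -delete alert_id -notes "reason for deleting the alert"
```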

Operations Related to the Alert

The following introduces the operations performed for the following alert types
through the Java or Web console:

u CFAULT - When this type of alert occurs, click "Take Action" to guide users
through the failure forwarding of the global group (which is offline before the
failure occurs in the cluster).

u GNOFAILA - When this type of alert occurs, click "Take Action" to guide users
to forward the failure of the global group (which is in the running status in the
remote cluster) to the remote cluster.

u GNOFAIL - The console does not provide related operations for this alert.

Cancellation Event

When the faulty cluster returns to the running status, the VCS will delete the
CFAULT alerts.


The VCS will delete the GNOFAILA and GNOFAIL alerts to respond to the following
events:

u The status of the group changes from FAULTED to ONLINE.

u The failure of the group is eliminated.

u The group is deleted from the cluster that generates the alert.

7.2 Disk Troubleshooting

The following introduces how to eliminate the failures and restore from the failure
through the SFW.

Caution:

Exercise care when performing operations on the disk to avoid data loss
caused by misoperation.

7.2.1 Disk and Volume Status Information

If the disk or volume is faulty, restore the disk and volume as soon as possible to
avoid data loss.

7.2.1.1 Disk Status Description

One of the following described statuses will be displayed in the Status column of
the disk in the right pane of the console. When a disk is faulty, you can diagnose
and restore it according to the following table.

Caution:

Adopting the suggested operation may cause the volume to enter the
Imported status. However, these operations cannot ensure the integrity
of data.


Status: Imported
Meaning: The disk can be accessed and has no problems. The status of the dynamic disk is normal.
Suggestion: No measures are required to be taken.

Status: Online
Meaning: The disk can be accessed and has no problems. The status of the basic disk is normal.
Suggestion: No measures are required to be taken.

Status: No Media
Meaning: No media is inserted into the CD-ROM or the removable drive. Only CD-ROMs or other removable drives can be in the No Media status.
Suggestion: When an appropriate media is inserted into the CD-ROM or other removable drive, the status of the disk changes to Online. If the disk status does not change immediately, click Refresh to refresh the GUI.

Status: Foreign
Meaning: Only a dynamic disk can be in this status. The disk will be marked Foreign only in the following situations:
u Situation 1: The disk was created as a dynamic disk on another computer and then moved to the local computer without settings.
u Situation 2: The disk contains auxiliary disk groups (disk groups other than the one containing the computer's bootable disk or system disk) and the system is a dual-boot system. When you switch between different operating systems, the disk containing auxiliary disk groups will be marked with the Foreign status and will not be imported automatically.
u Situation 3: The disk was initially created on the local computer and then moved or deleted. Add the disk to the local computer as a member of the disk group where the disk was located when it was created.
Suggestion:
u For situation 1, use Import Dynamic Disk Group to make the disk group available. In the Import Dynamic Disk Group dialog box, click the check box to clear the host IDs of other systems.
u For situation 2, use Import Dynamic Disk Group to make the auxiliary disk group available. When you switch between operating systems, the active disk group will be automatically imported. In the Import Dynamic Disk Group dialog box, click the check box to clear the host IDs of other systems.
u For situation 3, use the Merge Foreign Disk command to restore the disk back to its original disk group.

Status: No Disk Signature
Meaning: A new disk will be in this status. A disk without a signature cannot be used.
Suggestion: Right-click the disk and select Write Signature from the shortcut menu. The disk type will change to Basic Disk, and the disk can be accessed or upgraded.

Status: Offline
Meaning: Only a dynamic disk can be in this status. The disk may be in the Offline status in the following two situations:
u Situation 1: The disk is part of the system disk configuration, but it cannot be found at present.
u Situation 2: The disk is inaccessible. The disk may be damaged or unavailable. The error icon will appear on the offline disk.
If the disk status is Offline and the disk name changes to Missing Disk (#), the disk cannot be found or identified although it was available in the system.
Suggestion: Make sure the disk is connected to the computer. Use Rescan to get the disk online.

Status: Disconnected
Meaning: A dynamic disk that cannot be found by the system will be in this status. The disk name changes to Missing Disk.
Suggestion: Re-connect this disk.

Status: Import Failed
Meaning: The dynamic disk group that contains the disk failed to be imported. All the disks in the dynamic disk group that failed to be imported will be in this status.
Suggestion: Check the configuration to locate the failure.

Status: Failing
Meaning: Failing is a supplementary message displayed in the brackets behind the disk status. This status indicates that an I/O error is detected in a certain area of the disk. All the volumes on the disk will be in the Failed, Degraded or Failing status, and new volumes cannot be created on this disk. Only a dynamic disk can be in this status.
Suggestion: Right-click the disk whose status is Failing, and select Reactivate Disk to make the disk Online and change the statuses of all volumes on the disk to Healthy.

7.2.1.2 Volume Status Description

The following table describes the volume status that appears in the Status column
in the graph view or list view of the volume. If a volume is faulty, you can diagnose
and restore it according to the following table.


Caution:

Adopting the suggested operation may cause the volume to enter the
Healthy status. However, these operations cannot ensure the integrity of
data.

Status: Healthy
Meaning: The volume can be accessed and has no problems. This status of the volume is normal. Both dynamic volumes and basic volumes can be in the Healthy status.
Suggestion: No measures are required to be taken.

Status: Resynching
Meaning: The data of the mirrored volume is being synchronized again to ensure the two mirrors contain the same data. Both dynamic and basic mirrored volumes can be in the Resynching status. After the re-synchronization is completed, the status of the mirrored volume is restored to Healthy. It may take some time to synchronize the data again, depending on the size of the mirrored volume.
Suggestion: No measures are required to be taken. Although you can still access the mirrored volume securely during re-synchronization, avoid configuration changes (such as disconnecting the mirror) during synchronization.

Status: Regenerating
Meaning: The data and parity check of the RAID-5 volume are being re-generated. Both dynamic and basic RAID-5 volumes can be in the Regenerating status. After the re-generation is completed, the status of the RAID-5 volume is restored to Healthy.
Suggestion: No measures are required to be taken. You can access the RAID-5 volume securely during re-generation of the data and parity check.

Status: Degraded
Meaning: The Degraded status is only applicable to mirrored volumes and RAID-5 volumes on basic or dynamic disks. There are three situations in which the volume is in the Degraded status:
u Situation 1: As one basic disk is not online, the data on the volume may not be fault-tolerant. If one disk is offline or faulty, the redundancy of a RAID-5 volume may be lost. If one of the disks containing sub disks is faulty, the redundancy of a mirrored volume may be lost.
u Situation 2: If a disk containing the RAID-5 or mirrored volume is physically moved, the volume will be in the Degraded status.
u Situation 3: The data on the volume is no longer fault-tolerant and an I/O error is detected on the basic disk. Once the I/O error is detected on the disk, all volumes on the disk will be in the At Risk status. Only a dynamic mirrored volume or RAID-5 volume can be in the Degraded status.
Suggestion:
u For situation 1, you can continue to access the volume using the remaining online disks. However, it is recommended that you restore the volume as soon as possible.
u For situation 2, you need to solve the problem. Move all the disks that contain this volume to a new place, or move them back to the original place.
u For situation 3, restore the basic disk to the Online status and re-activate the disk using the Reactivate Disk command. When the disk is restored to the Online status, the status of the volume changes to Degraded. Take further measures to restore the volume back to the normal status as needed.

Status: Failed
Meaning: The faulty volume will be started automatically. The error icon will appear on the faulty volume. Both dynamic volumes and basic volumes can be in the Failed status. There are two situations in which the status of the volume changes to Failed:
u Situation 1: One or more disk(s) that the volume spans is (are) faulty. If one disk is faulty, a striped volume, simple volume, spanned volume or extended partition will be faulty. If two disks are faulty, a RAID-5 volume will be faulty. When all mirrors contained in the volume are faulty, a mirrored volume or mirrored striped volume will be faulty.
u Situation 2: One or more disk(s) that the volume spans is (are) moved to another computer.
Suggestion:
u For situation 1, replace or restore the faulty disk(s).
u For situation 2, move all the disks that contain the sub disks of the volume back to the original place.

Status: Formatting
Meaning: The volume is being formatted according to the standards selected by the user.
Suggestion: No measures are required to be taken.

Status: Stopped
Meaning: The volume is located in a dynamic disk group that is not imported.
Suggestion: Import the dynamic disk group containing this volume.

Status: Missing
Meaning: If any sub disk of the volume is located on a disk whose status is Offline, the status of the volume will change to Missing.
Suggestion: Re-activate and re-scan the offline disk. If the status of the volume changes to Stopped or Failed, re-activate the volume.

Status: Failing
Meaning: Failing is a supplementary message displayed in the brackets behind the volume status. Failing indicates that Veritas Storage Foundation for Windows encountered an I/O error on at least one sub disk of the volume. However, the error causes no harm to the data on the volume. Failing sends a message indicating that the disk integrity is deteriorating. When the status of the volume is Degraded (At Risk), the status of the basic volume is usually Online (Failing).
Suggestion: Locate the disk that is faulty and then try to eliminate the failure.

7.2.2 Solving Common Problems

This section introduces how to solve the common problems.

7.2.2.1 Restoring the Off-line Dynamic Disk to the Imported Status

This section introduces how to restore the off-line dynamic disk to the Imported status.

Note:

The off-line dynamic disk may be damaged or unavailable.

Procedure

1. Repair the disks and controllers, and make sure the disk is powered on,
plugged in and connected to the PC.

2. Use the Rescan command to rescan all the devices on the SCSI bus, and
restore the disk to on-line status.

280 Version: A
7 Failure Processing

Select Rescan from the Actions menu or right-click the StorageAgent node
in the tree view, and select Rescan. If many devices exist on the SCSI bus of
the PC, it may take a long time to rescan the bus. If a disk fails and it contains a
mirrored volume or RAID-5 volume, part of the volume will be re-created in
another location during repairing.

3. If the disk is still off-line after rescanning, select this disk and execute the
Reactivate Disk command to restore the on-line status for the disk manually.

Click the disk icon in the tree view or Disk View and select the Reactivate Disk
command.

Note:

u The dynamic disks belonging to the Microsoft management group do
not support the Reactivate Disk command.

u If reactivating the disk does not change the disk status, the disk or
the disk connection may be faulty.

4. If the disk becomes on-line after the reactivating, check whether the volume is
running normally. If not, execute the Reactivate Volume command on the
volume.

Note:

The dynamic disks belonging to the Microsoft management group do not
support the Reactivate Volume command.

5. Run Chkdsk.exe to make sure the basic data on the disk is not damaged.

To run Chkdsk, open the CMD command prompt window and enter the
following command:

chkdsk x: /f

x indicates the drive letter of the volume to be checked. The /f option tells
Chkdsk to fix any faults discovered during the check. If /f is omitted, Chkdsk
will run in read-only mode.

Chkdsk will repair the file system structure, but invalid data may still exist on
the disk if you performed any operation when the disk was faulty. It is advisable
to run a utility program to check the completeness of the data. If the data is
damaged, replace the data with that in the backup.
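
One hedged way to check the completeness of the data is to compare checksums of the restored files against the backup copies. certutil is built into Windows; the file paths below are hypothetical placeholders:

```shell
REM Compute SHA-256 digests of a restored file and its backup copy
REM (D:\data\ems.cfg and E:\backup\ems.cfg are hypothetical paths).
certutil -hashfile D:\data\ems.cfg SHA256
certutil -hashfile E:\backup\ems.cfg SHA256
REM If the two digests differ, replace the restored file with the backup copy.
```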


7.2.2.2 Restoring the Basic Disk to the On-line Status

This section introduces how to restore the basic disk to the on-line status.

Note:

If the basic disk is damaged or unavailable, it will not be shown in the
VEA GUI.

Procedure

1. Repair the disks and controllers, and make sure the disk is powered on,
plugged in and connected to the PC.

2. Rescan all the devices on the SCSI bus using the Rescan command, so as to
restore the disk to the on-line status.

Select Rescan from the Actions menu or right-click the StorageAgent node
in the tree view, and select Rescan. If many devices exist on the SCSI bus of
the PC, it may take a long time to rescan the bus. If a disk fails and it contains a
mirrored volume or RAID-5 volume, part of the volume will be re-created in
another location during repairing.

3. If the disk is restored to the on-line status after rescanning, check whether the
volume is normal.

If not, restore the volume to the normal status. See Restoring the Basic Volume
to the Normal Running Status.

4. Run Chkdsk.exe to make sure the basic data on the disk is not damaged.

Even though the disk and volume are both restored to on-line status, it is
important to check whether the basic data is complete. To run Chkdsk, open
the CMD command prompt window and enter the following command:

chkdsk x: /f

x indicates the drive letter of the volume to be checked. The /f option tells
Chkdsk to fix any faults discovered during the check. If /f is omitted, Chkdsk
will run in read-only mode. If the data is damaged, replace the data with that in
the backup.


7.2.2.3 Deleting the Disk from the Computer

The following introduces how to delete the disk from the computer.

Background Information

To identify the physical disk indicated by the disk in VEA GUI, use the "Ping Disk"
command. This command will make the indicator LED built in the physical disk
enclosure flash until the command is stopped running.

The disk can be deleted only after all the volumes on it are deleted. You can retain
a mirrored volume by deleting only the mirror whose status is Missing on the disk
instead of the whole volume. Deleting a volume destroys the data on it. Therefore,
delete the disk only when the disk is permanently damaged or unavailable.

Procedure

u Delete the basic disk.

Delete the basic disk from the computer. Select Rescan from the Actions
menu and the disk and its volumes are no longer displayed on the GUI.

u Delete the dynamic disk.

1) If a dynamic disk remains in the Offline or Missing status and it incurs a
failure that cannot be restored, select the disk in the dynamic disk group
and select Remove Disk from Dynamic Disk Group from the menu.

2) Make sure the disks to be deleted are displayed in the right pane of the
window. Click OK.

After the dynamic disk is deleted from the disk group, it will change to the
basic disk.

3) Delete the basic disk from the computer. Select Rescan from the Actions
menu and the disk and its volumes are no longer displayed on the GUI.

7.2.2.4 Restoring the External Disk to the Online Status

Restoring the external disk to the online status depends on the original context of
the disk.


Warning:

As a volume may span multiple disks (for example, a mirrored or RAID-5
volume), validate the disk configuration and then move all the disks
related to the volume. If not all the disks are moved, the status of the
volume may be Degraded or Failed.

u If the external disk is created on another computer and moved to the dynamic
disk group of the current computer, use Import Dynamic Disk Group to get the
disk online.

The procedures for adding an external disk that is originally created on another
computer are as follows:

1) Right-click the disk and select Import Dynamic Disk Group. A dialog box
appears, displaying the name of the dynamic disk group.

2) Specify the name of the dynamic disk group according to the following
requirements:

¡ To remain the original name, click "OK".

¡ To set a new name, enter the name in the "New name" dialog box and
then click OK.

3) To import the dynamic disk group from other system, click the check box to
clear the host IDs of other systems. Import the disk group. All the existing
volumes on the disk will be visible and accessible.

u If the external disk has an auxiliary disk group (that is, a dynamic disk group
other than the one containing the boot disk or system disk of the computer)
and it is switched between the OSs on a dual-boot computer, use the Import
Dynamic Disk Group command.

If a disk on a dual-boot computer has one or more auxiliary dynamic disk group
(s), this disk will be marked as Foreign and its auxiliary disk groups will not be
imported when the computer is switched between OSs. In this situation, the
shared primary dynamic disk group on the disk will be imported automatically.


u If the disk is originally created on the current computer and then deleted, and
now it is re-connected to the current computer, to restore the disk back to the
member of its original dynamic disk group, use the Merge Foreign Disk
command.

7.2.2.5 Restoring the Basic Volume to the Normal Running Status

This section introduces how to restore the basic volume to the normal running
status.

Procedure

1. Repair all the disks and controllers, and make sure each disk is enabled, inserted into and connected with the PC.

2. Select Rescan from the Actions menu, or right-click the StorageAgent node in the tree view and select Rescan. This rescans all the devices on the SCSI bus and restores the disks to which the volume belongs to the online status.

Note:

If many devices exist on the SCSI bus of the PC, it may take a long time
to rescan the bus.

7.2.2.6 Restore the Dynamic Volume to the Normal Status

The following introduces how to restore the dynamic volume to the normal status.

Procedure

1. Use the Rescan and Reactivate Disk commands to bring the offline disks to which the volume belongs back online.


Note:

u The dynamic disk belonging to the Disk Group of Microsoft Disk Management does not support the Reactivate Disk command.

u If one of the disks is faulty and it contains a mirrored volume or RAID-5 volume, the restoring process includes re-creating part of the volume in another location.

2. When the disk is reactivated and online, check whether the volume is running normally. If it is not, run the Reactivate Volume command on it.

Note:

The dynamic disk belonging to the Disk Group of Microsoft Disk Management does not support the Reactivate Volume command.

3. Run Chkdsk.exe to make sure the structure of the basic file system is intact.

To run Chkdsk, open the CMD command prompt window and enter the command:

chkdsk x: /f

x indicates the drive letter of the volume to be checked. The /f option instructs Chkdsk to fix the errors it detects. If the /f option is omitted, Chkdsk runs in read-only mode.

If users were working when the errors occurred on the disk, invalid data may still remain on the disk after Chkdsk repairs the file system structure. It is recommended that you run the corresponding application to check the data integrity. If the data are damaged, restore them from backup.
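As an illustration, the following Python sketch builds the Chkdsk command line described above for a given drive letter, defaulting to the safe read-only mode. The helper name is hypothetical and is not part of Veritas Storage Foundation for Windows; on a Windows host the returned list could be passed to subprocess.run().

```python
# Hypothetical helper that assembles a Chkdsk command line.
# Only the argument list is built here; execute it with subprocess.run()
# on a Windows host to actually check the volume.

def build_chkdsk_command(drive_letter: str, fix_errors: bool = False) -> list[str]:
    """Return the chkdsk argument list for the given volume.

    fix_errors=False omits /f, so Chkdsk runs in read-only mode,
    matching the behavior described in the procedure above.
    """
    if len(drive_letter) != 1 or not drive_letter.isalpha():
        raise ValueError("drive_letter must be a single letter, e.g. 'E'")
    cmd = ["chkdsk", f"{drive_letter.upper()}:"]
    if fix_errors:
        cmd.append("/f")  # repair detected file-system errors
    return cmd

# The repair form used in the procedure above:
# build_chkdsk_command("x", fix_errors=True) -> ["chkdsk", "X:", "/f"]
```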

7.2.2.7 Restoring the Volume Containing Degraded Data After Moving the Disk Between Computers

The following introduces how to restore the volume containing degraded data after moving the disks between computers.


If the disks are moved between computers using the Deport Dynamic Disk Group and Import Dynamic Disk Group commands, and the mirrored volumes or RAID-5 dynamic volumes contained on these disks have degraded data, you can restore the volumes containing the degraded data following the steps below.

Procedure

1. Deport the disks from the computers where they are currently located and physically move them back to the computers where they were originally located.

2. Use Rescan to make sure all disks are installed correctly.

The statuses of the volumes were Degraded before the disks were moved. After the disks are moved back, their statuses are still Degraded.

3. Make sure the disks that contain the degraded mirror or parity check information are not in the Offline status.

If the statuses are Offline, check whether the hardware is faulty and then reconnect the disks if necessary.

4. Use the Reactivate Disk command to get the disks online.

Note:

u The dynamic disk belonging to the Disk Group of Microsoft Disk Management does not support the Reactivate Disk command.

u If the hardware failure is eliminated, the status of the disk will change to Healthy, all the mirrored volumes on this disk will be resynchronized and all the RAID-5 volumes will regenerate their parity.

5. (Optional) If the volumes are still in the Degraded status, use the Reactivate Volume command.

If the Veritas Storage Foundation for Windows gets the volume online
successfully, the status of the volume changes to Healthy.


Note:

The dynamic disk belonging to the Disk Group of Microsoft Disk Management does not support the Reactivate Volume command.

6. Deport the dynamic disk group and move all the disks in this dynamic disk group to the second computer.

Move the disk group and all the involved disks simultaneously to ensure the volumes are Healthy on the second computer.

7.2.2.8 Processing the Provider Error Upon Startup

The following introduces how to process the provider error upon SFW startup.

Background Information

In Veritas Storage Foundation for Windows, the provider is similar to the driver.
Each provider manages a specific hardware or software storage component. For
example, the disk provider manages all the disks that are regarded as disks by the
Windows OS. The provider detects the existing physical and logical entities and
stores the information in the distributed database of Veritas Storage Foundation for
Windows.

Procedure

1. If an error indicating that a provider cannot be loaded appears upon startup of Veritas Storage Foundation for Windows, right-click the managed server node in the tree view of Veritas Enterprise Administrator and select Properties. View the status of the providers in the displayed window.

The top of the window displays the providers that are loaded, and the bottom of the window displays the providers that are not loaded.

2. (Optional) If a provider fails to be loaded upon startup of Veritas Storage Foundation for Windows, you need to find out why the provider is not loaded and then start the application.

Access http://www.symantec.com/business/support/index.jsp and contact the Symantec technical support department for help.


7.2.2.9 Other Failures

Table 7-3 shows other failures and how to eliminate them.

Table 7-3 Eliminating Other Failures

Problem: The disk type has no signature.
Analysis: If the disk type is "No Signature", a signature must be written to the disk. When a new disk is installed, the signature must be written to it to make the disk available. The system does not write the signature automatically, because the disk may have been imported from another operating system and its configuration needs to remain unchanged.
Suggestion: To write the signature to the disk, right-click the disk under the Disks node and select Write Signature.

Problem: The RAID-5 volume cannot be created.
Analysis: A RAID-5 volume can be created only when there are at least three disks, and a RAID-5 volume containing logs can be created only when there are at least four disks.
Suggestion: Make sure there is sufficient unallocated space on three or more disks.

Problem: The mirrored volume cannot be created.
Analysis: A mirrored volume can be created only when there are two or more disks.
Suggestion: Make sure there is sufficient unallocated space on two or more disks.

Problem: An unknown group appears after a basic disk is upgraded into a dynamic disk and its dynamic disk group is immediately deported.
Analysis: After a basic disk is upgraded and its dynamic disk group is immediately deported, a dynamic disk group named "Unknown" occasionally appears. Refresh the displayed content, or try to import the deported dynamic disk group and delete the original group from the screen. If an error indicating that the disk is not found occurs after the dynamic disk group is imported, DO NOT perform any other operations on these disks in Veritas Storage Foundation for Windows; otherwise, data loss may occur.
Suggestion: 1. Delete the dynamic disk group and its content, and then restart the computer. The dynamic disk group will be correctly displayed in the "Offline, Foreign" status as the deported group. 2. Import the dynamic disk group.
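The disk-count rules in the table (three disks for RAID-5, four with logging, two for a mirror) can be expressed as a simple pre-check. The sketch below is illustrative only; the function name is hypothetical and not part of Veritas Storage Foundation for Windows:

```python
# Hypothetical eligibility check mirroring the table's rules:
# RAID-5 needs >= 3 disks (>= 4 with a log), a mirror needs >= 2.

def can_create_volume(kind: str, disk_count: int, with_log: bool = False) -> bool:
    """Return True if disk_count disks are enough for the volume kind."""
    if kind == "raid5":
        return disk_count >= (4 if with_log else 3)
    if kind == "mirror":
        return disk_count >= 2
    raise ValueError(f"unknown volume kind: {kind}")

# Examples matching the table:
# can_create_volume("raid5", 3)                 -> True
# can_create_volume("raid5", 3, with_log=True)  -> False (logs need 4 disks)
# can_create_volume("mirror", 1)                -> False
```

Note that the check only covers the disk count; each disk must additionally hold sufficient unallocated space, as the Suggestion column states.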


Table 7-3 Eliminating Other Failures (Continued)

Problem: The SFW disk group cannot be used in Disk Management after Veritas Storage Foundation for Windows is uninstalled.
Analysis: After Veritas Storage Foundation for Windows is uninstalled, Disk Management will import only the primary disk group. If there is no primary disk group in Veritas Storage Foundation for Windows because the system disk or boot disk is not encapsulated, Disk Management cannot import one or more disk groups upon the uninstallation, because Disk Management cannot import an auxiliary disk group as the primary disk group. If there is no primary disk group, the external disk group cannot be merged.
Suggestion: Create a dynamic disk group in Disk Management and merge the external disk group into that dynamic disk group.

Problem: The dedicated dynamic disk group protection is deleted after Veritas Storage Foundation for Windows is uninstalled and re-installed.
Analysis: The dedicated dynamic disk group protection is stored in the registry. When Veritas Storage Foundation for Windows is uninstalled, the corresponding registry entries are deleted simultaneously.
Suggestion: After re-installing Veritas Storage Foundation for Windows, to continue protecting a disk group that was previously protected, use the Add Dynamic Disk Group Protection command to add the function back to the disk group.

Problem: Before the SCSI reservation is released, the cluster dynamic or auxiliary disk group with dedicated dynamic disk group protection cannot be imported.
Analysis: If the SCSI reservation cannot be released when the Storage Foundation server on the shared bus is excluded from the cluster dynamic or auxiliary disk group, the cluster dynamic or auxiliary disk group with dedicated dynamic disk group protection cannot be imported.
Suggestion: Release the SCSI reservation and then import the cluster dynamic or auxiliary disk group.

Problem: During restart, a "drive is damaged" message may appear and the system may suggest a self-inspection.
Analysis: Let the self-inspection run and ignore the message; it completes automatically and then the system restarts. This may take some time, depending on the size of the system.
Suggestion: Wait for the system to complete the self-inspection.

7.2.3 Command or Procedure for Fault Elimination and Restoration

This section introduces the command or procedure for fault elimination and
restoration.


7.2.3.1 Refresh

If the disk and volume are normal but a recent modification has not been updated in the VEA GUI, you can use the Refresh command.

Function Description

The Refresh command refreshes the information on the drive letters of the PC, file systems, volumes and removable media. This command also checks whether a previously unreadable volume is now readable. If no I/O operation has been performed on a modified disk, this command cannot detect the disk modifications made since the last reboot or rescan.

Procedure

Select Refresh from the View or Actions menu of VEA, or right-click the
StorageAgent node in the tree view to select Refresh.

Note:

The Refresh command is only applicable to the StorageAgent node and all of its sub-nodes.

7.2.3.2 Rescan

It is advisable to use the Rescan command after you modify the disks (for example, delete or add a disk). Rescanning may take several minutes, depending on the number of devices on the SCSI bus.

Function Description

The Rescan command rescans the SCSI bus to discover disk modifications. It also performs the function of the Refresh command, refreshing the information on drive letters, file systems, volumes and removable media.

Procedure

Select Actions and Rescan in sequence from the tool bar.


Note:

u Click Tasks in the lower pane to bring up a progress bar showing the completed percentage of the rescan. After the Rescan command is executed, you can view the detailed information of the system. If the error mark persists, reactivate the disk or volume.

u The Rescan command only takes effect on the StorageAgent node and its sub-nodes.

7.2.3.3 Replace Disk

If the disk fails or operates abnormally, you should replace the disk.

Function Description

The Replace Disk command replaces a faulty disk with an empty basic disk. It re-creates the volumes on the new disk. The content of non-redundant volumes is not protected; redundant volumes are resynchronized automatically. This command is only applicable to a Missing disk. If the disk is replaced successfully, the new disk inherits the attributes (including the disk name) of the previous one.

After the replacement, a faulty volume stays faulty on the replacement disk because there is no valid data to copy. If the previous disk is re-connected to the system after being replaced, it is shown as a foreign disk on the VEA console and the system creates a disk group named Unknown Dg. Use the Merge Foreign Disk command to re-add the disk to its previous dynamic disk group.

Note:

The dynamic disks in the Microsoft Disk Management disk group do not support the Replace Disk command.


Procedure

1. Right-click Missing Disk and select Replace Disk. A dialog box will appear to
show the list of the empty basic disks.

2. Select a disk to replace the missing disk. Click OK to replace the disk.

7.2.3.4 Merge Foreign Disk

The Merge Foreign Disk command is used to merge the foreign disk.

Function Description

If you have removed a disk from the server and also deleted it from Veritas Storage Foundation for Windows, you can use the Merge Foreign Disk command to re-connect it to the server as a member of the same dynamic disk group. The Merge Foreign Disk command restores the disk to its original status as a member of the dynamic disk group to which it previously belonged.

You should also use this command when you re-install in the previous server a disk that you removed while the disk group was offline and then connected to another server. This command is required because the disk carries the disk group ID of the other server.

Note:

The dynamic disks in the Microsoft Disk Management disk group do not support the Merge Foreign Disk command.

Procedure

1. Re-connect the disk to the previous server.

2. On the VEA console, select Rescan from the Actions menu to rescan. The disk will be shown in the tree view with a red cross, and its dynamic disk group will be shown as Unknown Group.

3. Right-click the disk label in the tree view and select Merge Foreign Disk.

4. Click Next. In the Merge Foreign Disk Wizard window, select the disk to be merged.

Click Add to move the disk from the left pane to the right pane. Click Next.

5. If the data status on the disk is Healthy, click Next.

Note:

u If the volume is in the Failed status, the data may not be complete (though the data is preserved as far as possible). Handle the volume according to Restore the Dynamic Volume to the Normal Status.

u If the disk is in the Missing status, check whether it is connected normally.

6. Click Finish to complete merging the foreign disk to the server.

The status of the merged disk should be consistent with its status before it was removed from the server, and the disk should be in its previous dynamic disk group.

Note:

If the error mark persists, right-click the disk and select Reactivate Disk.

7.2.3.5 Reactivate Disk

The Reactivate Disk command can be used to restart the disk manually.

Function Description

The Rescan command may not be able to clear the error mark on a dynamic disk. In this case, the Reactivate Disk command can be used. The dynamic disks marked as Missing or Offline can be reactivated. After the reactivation, a connected disk that was in the Failed status should be marked as Online.

Note:

The dynamic disks belonging to the Microsoft Disk Management disk group do not support the Reactivate Disk command.


Procedure

1. Right-click the disks with the error mark and select Reactivate Disk.

2. In the dialog box that appears, click Yes to reactivate the disk.

After the reactivation, the disk should be marked as Online if no mechanical or other severe problem exists.

7.2.3.6 Reactivate Volume

The Reactivate Volume command re-synchronizes the volume to bring it to the Healthy status.

Function Description

If a dynamic volume is faulty, re-connect the disks to which this volume belongs to the server using the Rescan command (if this command does not work, use the Reactivate Disk command). After one or more disks are re-connected to the server, if the volume is not restored to the Healthy status, you can use the Reactivate Volume command.

Note:

The dynamic disks belonging to the Microsoft Disk Management disk group do not support the Reactivate Disk and Reactivate Volume commands.

Procedure

1. Right-click the volume and select Reactivate Volume.


Note:

u If the plex of any volume or mirrored volume is still abnormal, perform this operation. For a RAID-5 volume, execute the Reactivate Volume command to re-generate the volume.

u If the basic disk of the volume is normal, the volume may be restored to the Healthy status, though its data may be damaged or invalid. It is advisable to run Chkdsk.exe before using this volume. If Chkdsk fails, you should re-format the volume and restore its content from backup.

7.2.3.7 Repair Volume (for Dynamic Volume RAID-5)

This section introduces how to repair a dynamic RAID-5 volume using the Repair volume command. The Repair volume command deletes the damaged part of the volume and re-creates the deleted part at another position on a normal dynamic disk.

Function Description

If the RAID-5 volume is in the Degraded status, and there is enough unallocated space on the other dynamic disks to re-create the damaged part of the volume, you can use the Repair volume command to repair the RAID-5 volume.

Note:

After a disk fails, rescan first; the Repair volume menu option then becomes available.

Procedure

1. Right-click the damaged volume and select Repair volume.

2. The target disk is designated in the Repair volume dialog box. To select a disk
manually, click Manually assign destination disks to select the desired disk.
Select Disable Track Alignment to disable track alignment for the created
volume.

296 Version: A
7 Failure Processing

3. Confirm the selection and click OK.
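The reason a Degraded RAID-5 volume can be repaired onto a spare disk is that the lost column can be recomputed from the surviving data and parity. The following Python sketch is illustrative only (it is not SFW code) and shows the XOR reconstruction principle:

```python
# Illustrative XOR-parity reconstruction, the principle behind RAID-5 repair.
# Each "disk" holds one byte-string stripe unit; parity is the XOR of the
# data units, so any single lost unit can be recomputed from the others.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"   # data stripe units
parity = xor_blocks(d0, d1, d2)          # stored on a fourth disk

# The disk holding d1 fails; its content is rebuilt from the survivors:
rebuilt = xor_blocks(d0, d2, parity)
# rebuilt == d1
```

This is why the Repair volume command only needs sufficient unallocated space on a healthy dynamic disk: the re-created column is filled from the remaining columns, not copied from the failed disk.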

7.2.3.8 Repair volume (for Dynamic Mirrored Volume)

This section introduces how to repair a dynamic mirrored volume using the Repair volume command. The Repair volume command deletes the damaged part of the volume and re-creates the deleted part at another position on a normal dynamic disk.

Function Description

If the disk to which a mirrored volume belongs fails, the volume enters the Degraded status. The disk name is changed to Missing Disk, a cross appears over the Missing Disk icon, and the disk status becomes Offline. You can use the Repair volume command to repair the dynamic mirrored volume.

Procedure

1. Right-click the damaged volume and select Repair volume.

2. In the Repair volume dialog box, select the check box corresponding to the mirror to be repaired. Select Disable Track Alignment to disable track alignment for the created mirror.

3. Click OK to create the mirror in the available disk space of another dynamic disk.

7.2.3.9 Starting and Stopping the Veritas Storage Foundation for Windows Service

Starting and stopping the Veritas Storage Foundation for Windows service helps eliminate faults. For example, if the Veritas Storage Foundation for Windows service stops on the server, you can restart the service rather than restart the server. Restarting the service can solve some temporary problems. The Veritas Storage Foundation for Windows service is also called vxsvc.

Procedure

u Start the Veritas Storage Foundation for Windows service.

Enter the following command in the CMD window:

net start vxsvc

u Stop the Veritas Storage Foundation for Windows service.

Enter the following command in the CMD window:

net stop vxsvc
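The two commands above can be wrapped in a small script for a full restart. The sketch below is a hypothetical Python helper (not part of SFW); on a Windows host with administrative rights, run_restart() would execute the commands via subprocess:

```python
import subprocess

# Hypothetical wrapper around the 'net stop/start vxsvc' commands above.
SERVICE = "vxsvc"  # the Veritas Storage Foundation for Windows service

def service_commands(service: str = SERVICE) -> list[list[str]]:
    """Return the stop-then-start command lines for the service."""
    return [["net", "stop", service], ["net", "start", service]]

def run_restart(service: str = SERVICE) -> None:
    """Execute the restart on a Windows host (requires admin rights)."""
    for cmd in service_commands(service):
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

# service_commands() -> [["net", "stop", "vxsvc"], ["net", "start", "vxsvc"]]
```

Stopping before starting ensures the service is restarted cleanly even if it is already running; check=True aborts the sequence if either command fails.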

Appendix A Abbreviations

CORBA  Common Object Request Broker Architecture
DB     Data Base
DCN    Data Communication Network
EMS    Element Management System
GCO    Global Cluster Option
IP     Internet Protocol
NE     Network Element
NMS    Network Management System
NIC    Network Information Center
RDS    Replicated Data Set
RAID   Redundant Arrays of Independent Disks
RVG    Replicated Volume Group
VEA    Veritas Enterprise Administrator
VCS    Veritas Cluster Server
VVR    Veritas Volume Replicator
Product Documentation Customer Satisfaction Survey
Thank you for reading and using the product documentation provided by FiberHome. Please take a moment to
complete this survey. Your answers will help us to improve the documentation and better suit your needs. Your
responses will be confidential and given serious consideration. The personal information requested is used for
no other purposes than to respond to your feedback.

Name
Phone Number
Email Address
Company

To help us better understand your needs, please focus your answers on a single documentation or a complete
documentation set.

Documentation Name
Code and Version

Usage of the product documentation:


1. How often do you use the documentation?
□ Frequently □ Rarely □ Never □ Other (please specify)
2. When do you use the documentation?
□ in starting up a project □ in installing the product □ in daily maintenance □ in trouble
shooting □ Other (please specify)
3. What is the percentage of the operations on the product for which you can get instruction from the
documentation?
□ 100% □ 80% □ 50% □ 0% □ Other (please specify)
4. Are you satisfied with the promptness with which we update the documentation?
□ Satisfied □ Unsatisfied (your advice)
5. Which documentation form do you prefer?
□ Print edition □ Electronic edition □ Other (please specify)
Quality of the product documentation:
1. Is the information organized and presented clearly?
□ Very □ Somewhat □ Not at all (your advice)
2. How do you like the language style of the documentation?
□ Good □ Normal □ Poor (please specify)
3. Are any contents in the documentation inconsistent with the product?
4. Is the information complete in the documentation?
□ Yes
□ No (Please specify)
5. Are the product working principles and the relevant technologies covered in the documentation sufficient for
you to get known and use the product?
□ Yes
□ No (Please specify)
6. Can you successfully implement a task following the operation steps given in the documentation?
□ Yes (Please give an example)
□ No (Please specify the reason)
7. Which parts of the documentation are you satisfied with?

8. Which parts of the documentation are you unsatisfied with? Why?

9. What is your opinion on the Figures in the documentation?

□ Beautiful □ Unbeautiful (your advice)

□ Practical □ Unpractical (your advice)

10. What is your opinion on the layout of the documentation?


□ Beautiful □ Unbeautiful (your advice)
11. Thinking of the documentations you have ever read offered by other companies, how would you compare
our documentation to them?
Product documentations from other companies:

Satisfied (please specify)

Unsatisfied (please specify)

12. Additional comments about our documentation or suggestions on how we can improve:

Thank you for your assistance. Please fax or send the completed survey to us at the contact information included in the documentation. If you have any questions or concerns about this survey, please email edit@fiberhome.com.