
Netcool Performance Manager 1.3.

Document Revision R2E2

Tivoli Netcool Performance Manager: Administration Guide - Wireless Component

Note Before using this information and the product it supports, read the information in Notices on page 217.

This edition applies to version 1, release 3, modification 1 of Tivoli Netcool Performance Manager and to all subsequent releases and modifications until otherwise indicated in new editions.

Copyright IBM Corp. 2007, 2011

US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Contents

1 About This Documentation  1
    Audience  1
    Required Skills and Knowledge  1
    Document Conventions  1
    Blank pages  2
    Document Structure  3
    User Publications  3
    Viewing the Online Help  3
2 Architecture Overview  5
    System overview  5
    Definitions  5
    Server Architecture  6
    Mediation Services  6
    Application Framework  7
    User Management Services  7
    Database Services  8
    Overview of administrator tasks  8
    Client application tasks  8
    Server application tasks  8
    Data Flow Overview  9
    Network Elements  9
    Mediation Services  9
    Application Framework  9
3 Setup Tasks  13
    Overview  13
    Software Install Summary  13
    Datasource setup  14
    LDAP - Tivoli Directory Server setup  14
    Crontab setup  15
    Virtuo User setup  15
    Root User Setup  15
    Additional entries and scripts  16
    SAP setup  16
    SAP configuration  18
    Application and system passwords  20
    Application Users  20
    OS Users  20
    Oracle Users  20
4 Starting and Stopping the system  23
    Overview  23
    Oracle Database  23
    Starting the Oracle Database  24
    Stopping the Oracle Database  24
    Tivoli Directory Server  24
    Starting the Tivoli Directory Server  24
    Stopping the Tivoli Directory Server  25
    Process Monitor  25
    Starting the Process Monitor  25
    Stopping the Process Monitor  26
    Process Manager  26
    Starting the Process Manager  26
    Stopping the Process Manager  26
    Tivoli Netcool Performance Manager  27
    Starting Tivoli Netcool Performance Manager  27
    Stopping Tivoli Netcool Performance Manager  27
    Tivoli Netcool Performance Manager Complete startup and shutdown  27
    Complete Startup  27
    Complete Shut down  29
5 Application Administration  31
    User administration  31
    User administration basics  31
    User management  36
    Role Management  40
    User Administration Command Line Tool  41
    External Reporting administration  44
    Setting External Reporting properties  44
    Setting SMTP properties  45
    External Reporting Dictionary Mapping  45
    Report Granularity  47
    Configure support for 15-minute reporting intervals  47
    Enable Report Definition GUI  48
    Aggregation properties  49
    Excel download properties  50
    Secondary keys  50
    Maintaining property values for User Comments, Reports and MyFavorites  51
    User Comments  51
    Reports  51
    MyFavorites  52
    KPI Aliases and User Defined Groups  53
    KPI Aliases  53
    User defined groups  53
    kpia_admin tool  53
    Import aliases  54
    Import user defined groups  55
    Import aliases and groups  55
    Remove user defined groups  56
    Update and remove aliases and user defined groups  56
    Export aliases and groups  57
    File formats  57
    Log files  59
    KPI Browser configurable parameters  60
    Configurable system variables  60
    Configurable service properties  64
6 Operations Tasks  67
    Daily Loader Operations Tasks  67
    Checking Loader Status  67
    Checking for bad files  68
    Loader Housekeeping  71
    Disk Space Usage  71
    Loader Configuration  72
    Configuring multiple identical loaders  73
    Stability Settings  73
    Application directory management  73
    Directory contents  74
    Tivoli Netcool Performance Manager log files  74
    Loader log files  74
    Loader LIF file directory  75
    Crontab entries  75
7 Datasource, Agent and KPI Cache Administration  77
    Datasource Administration  77
    Usage  78
    Listing Datasources  78
    Activating a Datasource  78
    Deactivating a Datasource  79
    Agent Maintenance  79
    Overview of Agent Activities  79
    Agent activities and log files  82
    agent_admin Command Line Tool  82
    KPI Cache Management  88
    Exporting User Defined Calculations  88
    Importing User Defined Calculations  89
    Synchronize internal computation engine KPI cache  89
8 System Maintenance  91
    Schedule administration  92
    Scheduled jobs  92
    Usage  93
    Scheduling system maintenance  94
    Listing the status of all scheduled jobs  94
    Administrative options for the schedule_admin script  95
    Reporting on server status  96
    Database check  96
    Directory server check  96
    SAPMON check  97
    Tivoli Netcool Performance Manager check  97
    Log files check  97
    Database monitoring  98
    Operating system checks  98
    Managing the Oracle database  99
    Starting and stopping the Oracle database  99
    Types of Oracle backups  101
    Redo logs  102
    Archiving redo logs  102
    Performing hardware diagnostics  104
    Restoring data from backups  104
    Database space administration  105
    Usage  105
    Monitor Oracle tablespaces  106
    Add Oracle tablespaces  106
    Add Oracle datafiles  107
    Modify Oracle datafiles  108
    Drop Oracle tablespaces  109
    Resize an UNDO tablespace  110
    Partition maintenance  111
    Partition maintenance jobs  111
    Amend the partition maintenance job configuration  111
    Partition maintenance command line tool  112
    Adding partitions  115
    Deleting partitions  116
    Pinning partitions  116
    Unpinning partitions  116
    Exporting partitions  117
    Importing partitions  117
    Showing parameters  117
    Listing parameters  117
    Updating parameters  117
    Listing partitions  118
    List pinned partitions  118
    List sessions  118
    Update sessions  118
    List spaces  119
    Show logs  119
    Show errors  119
    Show status  119
    Managing disk space usage  120
    Monitoring the Oracle storage directories  120
    Monitoring the $WMCROOT/logs directories  121
    Monitoring the $WMCROOT/var/loader/spool directories  121
    Reporting the size of filesystems  121
    Working with log files  122
    Information about log files  122
    Removing log files  123
    Archiving log files  123
    Loader LIF file directory  124
    Java client processes  124
    Filesystem backups  126
9 Tools  127
    Overview  127
    Importing and Exporting User Documents and Report Results  127
    Importing definitions, templates, schedules and folders  128
    Exporting definitions, templates and schedules  129
    Importing report results  131
    Exporting report results  132
    Deleting report templates  133
    Running a report from the command line  134
    Time Zone Support for Reporting  135
    About Daylight Saving Time Rules  135
    About Time Zone Regions  138
    Holiday Maintenance  141
    List holidays  142
    Add holidays  142
    Delete holidays  142
10 LCM Administration  145
    Overview  145
    Loader Datasource  145
    NC Relations  145
    Data availability  145
    Usage  146
    List information for Datasources  148
    Listing Loader Datasources  148
    Load Datasources, NC Relations and Data Availability  149
    Loading a Datasource from XML  149
    Loading a custom Datasource from XML  149
    Loading NC Relations from XML  149
    Loading Data Availability from XML  149
    Merging of Data Availability blocks from XML  150
    Unload Datasources and NC Relations  151
    Unloading a Datasource to XML  151
    Unloading a custom Datasource to XML  151
    Unloading NC Relations to XML  151
    Unloading Data Availability to XML  152
    Delete Datasources and NC Relations  152
    Deleting NC Relations  153
    LCM port change  154
11 SBH Administration  155
    Stored Busy Hour (SBH) Administration tool  155
    Enable Busy Hour definition(s)  156
    Disable Busy Hour definition(s)  156
    Import Stored Busy Hour definition(s)  156
    Export Stored Busy Hour definition(s) or values  157
    List SBH definitions  157
    Execute SBH definition(s)  158
    Delete SBH definition(s)  158
    Prioritize SBH  159
    Enable/Disable calculation of Late Data for all Busy Hour definitions  159
    Customizing Stored Busy Hour definitions  160
    Overview  160
    Stored Busy Hour definition  160
12 Alarm Administration  163
    Alarm administration tool  163
    Overview  163
    Manage Document Contexts  165
    List Alarm Templates  167
    Alarm Definition Mib File  167
    External Alarm API  168
    Overview  168
    alarmapi_admin  168
    Generate an alarm  169
    Clear an alarm  170
    Display a list of available reports  170
    Empty alarm spool daemon  170
    Data availability alarms  171
    Generate data availability alarms  173
    Log file  174
    Parameter values - lists  175
13 The Summarizer and Summary Administration  179
    Summarizer  180
    Switching the summary process on or off  180
    Summary Log file  180
    Start day of week  182
    Summary grace period  182
    Summarize old loaded data  183
    summary_admin CLI  183
    Provision a summary  183
    Delete a summary definition  185
    Run a provisioned summary  185
    Change the number of instances  186
    Export summary metadata  187
    List summary definitions  188
    Prioritize summaries  188
    Enable a summary  188
    Disable a summary  188
    Configuring summary definitions  190
    Overview  190
    Simple summary definition  191
    Complex summary definition  194
    Ignoring Data Availability  200
14 Technology pack administration tools  201
    The techpack_admin tool  201
    Usage  201
    Applying a technology pack  202
    Memory for Java client processes  202
    Exporting lists of dependencies  203
    Patching a technology pack  203
    Listing technology pack modules  203
    Uninstalling a technology pack, and loaders  204
    Technology pack  204
    Removing associated loaders  204
    Removing the Datasource  205
    Dependent technology packs  205
    Displaying help  205
    Upgrading technology packs  206
    Introduction  206
    Effects of a technology pack upgrade  207
    Unsupported upgrade scenario  208
    Upgrading or reinstalling installed technology packs  208
    Using the migratealarms tool  211
Appendix A: Problem Resolution and Errors  213
Notices  217


1 About This Documentation

The Administration Guide provides instructions and general information on how to maintain and support IBM Tivoli Netcool Performance Manager.

1.1 Audience

This guide is intended for experienced system administrators, database administrators or other professionals who are responsible for maintaining a Tivoli Netcool Performance Manager installation.

1.2 Required Skills and Knowledge


This guide assumes you are familiar with the following:
- Oracle databases
- IBM Tivoli Directory Server
- Linux and UNIX basics (such as file structures, text editing, and permissions)
- Linux and UNIX system administration.

This guide also assumes that you are familiar with your company's network and with procedures for configuring, monitoring, and solving problems on your network.

1.3 Document Conventions
This document uses the typographical conventions shown in the following table:

Table 1: General Document Conventions

ALL UPPERCASE
    Examples: GPS, NULL, MYWEBSERVER
    Description: Acronyms, device names, logical operators, registry keys, and some data structures.

Underscore
    Example: See Document Conventions
    Description: For links within a document or to the Internet. Note that TOC and index links are not underscored. The color of the text is determined by browser settings.

Bold
    Example: Note: The busy hour determiner is...
    Description: Heading text for Notes and Warnings.

SMALL CAPS
    Examples: The STORED SQL dialog box...; ...click VIEW...; In the main GUI window, select the FILE menu, point to NEW, and then select TRAFFIC TEMPLATE.
    Description: Any text that appears on the GUI.

Italic
    Examples: A busy hour is...; A web server must be installed...; See the User Guide
    Description: New terms, emphasis, and book titles.

Monospace
    Examples: ./wminstall; $ cd /cdrom/cdrom0; /xml/dict; http://abc.com/products/; addmsc.sh
    Description: Code text, command line text, paths, scripts, and file names.

Monospace Bold
    Examples: Type OK to continue.
        [root] # pkginfo | grep -i perl
        system    Perl5    On-Line Manual Pages
        system    Perl 5.005_03 (POD Documentation)
        system    Perl 5.005_03
    Description: Text written in the body of a paragraph that the user is expected to enter. Also used for contrast in a code example to show the lines the user is expected to enter.

<Monospace italics>
    Example: # cd <oracle_setup>
    Description: Used in code examples: command-line variables that you replace with a real name or value. These are always marked with angle brackets.

[square bracket]
    Example: log-archiver.sh [-i][-w][-t]
    Description: Used in code examples: indicates options.

1.3.1 Blank pages

Blank pages are used at the end of chapters to ensure the following chapter begins on an odd numbered page. These pages are intentionally blank. If the guide is printed double-sided and bound, each chapter will begin on a right-hand page.


1.4 Document Structure

This document is organized into the following chapters and appendices:

Table 2: Document Structure

Architecture Overview: Architecture overview.
Setup Tasks: Tasks accomplished as part of the installation of Tivoli Netcool Performance Manager.
Starting and Stopping the system: Starting and stopping Tivoli Netcool Performance Manager and its processes.
Application Administration: Maintaining users, roles and privileges.
Operations Tasks: Daily loader operations tasks.
Datasource, Agent and KPI Cache Administration: Datasource and agent administration.
System Maintenance: Scheduling maintenance, server status, maintaining tablespaces, partitions, disk space usage, file system backup and log files.
Tools: Importing and exporting report definitions, time zone administration and holiday administration.
LCM Administration: Loader Configuration Manager administration.
SBH Administration: Stored busy hour definition administration.
Alarm Administration: Alarm administration tool.
The Summarizer and Summary Administration: The Summarizer component, and the data summarization process.
Technology pack administration tools: Technology pack administration.
Problem Resolution and Errors: Problem resolution.

1.5 User Publications

Tivoli Netcool Performance Manager software provides the following user publications:
- release notes
- user guides
- online help

The documentation is available for viewing and downloading on the information center at:
http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/topic/com.ibm.netcool_pm.doc/welcome_tnpm.html

1.5.1 Viewing the Online Help

You can view Online Help for the Tivoli Netcool Performance Manager Web client. Using the Tivoli Netcool Performance Manager user interface, you can select the HELP tabs or the HELP links for context-sensitive Help.

2 Architecture Overview

2.1 System overview

The Tivoli Netcool Performance Manager system comprises Tivoli Netcool Performance Manager server(s) and a client layer. The client layer is the web-based user interface to the Tivoli Netcool Performance Manager application server.

The Tivoli Netcool Performance Manager server architecture comprises several subsystems:
- Mediation services: Gateways, Data Acquisition Tool
- Tivoli Netcool Performance Manager application framework: Client access layer services, Platform Management services, Business services, Data Loading services
- Database services
- User Management Services: Tivoli Directory Server (LDAP)

The deployment model of these subsystems depends on whether the implementation is on a centralized or distributed network system. For simplicity, this overview illustrates the single server deployment model.

2.1.1 Definitions

- Tivoli Netcool Performance Manager server(s): comprises all services used by Tivoli Netcool Performance Manager, including Mediation, User Management, Database Services and the Tivoli Netcool Performance Manager application framework.
- Tivoli Netcool Performance Manager application framework: the core application, along with extensions for vendor technology packs, that provides services to users to create and generate reports.
- Database services: an Oracle database.
- Mediation services: utilities to access data (datafiles) from network elements and transform them for loading into the Tivoli Netcool Performance Manager database.


The following figure illustrates the system in a client-single server deployment model.
Figure 1: Tivoli Netcool Performance Manager - Architecture

Each of the major components important to administering the server(s) is described in the following sections.

2.2 Server Architecture

2.2.1 Mediation Services

Performance Management Mediation

Mediation services include the Gateway Framework and the Data Acquisition tool. These tools transfer, parse, manipulate and present performance data from the network elements to the application. The main output of this process is the production of a LIF (loader intermediate format) file for loading into the database.


2.2.2 Application Framework

Performance Management Data Loading services

The role of the loaders is to prepare and process the loading of the data into the database. On a system there may be any number of loaders running for any number of technologies. The loader process runs constantly, taking data from the loader spool directories and loading the performance data into the database.

Platform Management services

The platform management services comprise several process utilities that work together to set up, control and monitor the application server. SAP is the process management utility, consisting of a process manager and a process monitor. These tools are installed on the server and run from a Korn shell:
- sapmon-na: The parent utility process to the application server process. It is responsible for the startup and control of the application framework.
- sapmgr-na: The process framework manager that registers all the Tivoli Netcool Performance Manager processes, including the application server and the loaders.
- sap: The utility used to display the status of, start, and stop the registered processes.

PM Business services

The business services are the core of the Tivoli Netcool Performance Manager application. These services provide application access and maintenance capabilities. The services comprise the following:
- Application server: The application server consists of the JBoss application server, used to communicate with a Lightweight Directory Access Protocol (LDAP) server and the datasource to generate reports.
- Agent framework and agents: The agent framework provides agents that gather information about the datasource and the information necessary to define a report.
- Dynamic SQL Generator: The SQL generator creates queries to collect the performance data as per the report definition.

Client access layer

The client access layer is a subsystem of the JBoss server. It is the web/HTML page server that provides the static and dynamic content for the web client interface.

2.2.3 User Management Services

User management is supported by a Lightweight Directory Access Protocol (LDAP) server. The LDAP server provides the framework for implementing roles, groups and users through single sign-on authentication.


2.2.4 Database Services

The web client accesses data stored in the database on the applicable database server. Data is kept in an Oracle Relational Database Management System (RDBMS). This data includes:
- Performance measurements, configuration information, and database-utilization information from the infrastructure equipment.
- Configuration data for the application itself; for example, the data loading formula and report definitions.
- Timetables used for scheduling reports, summarizing data, archiving data and performing automated management tasks.

2.3 Overview of administrator tasks

2.3.1 Client application tasks

As an administrator, you use the web client interface to accomplish the following tasks:
- User management: add or delete users, and modify user access to the database and to data within the application.

2.3.2 Server application tasks

As an administrator of the application server and associated application tools, you use command line tools and UNIX or Linux commands to accomplish the following tasks:
- Monitor application processes: sap
- Start and stop Tivoli Netcool Performance Manager: sapmon-na, sapmgr-na, sap
- Maintain schedules: schedule_admin
- Monitor and maintain database partitions: schedule_admin, part_admin
- Monitor agent framework (JBoss) activities: review logs, agent_admin
- Monitor the health of the server and its subsystems: various UNIX and Linux commands
- Configure parameters for user use: holiday_admin, user_admin, tz_admin, alarm_admin
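For example, a quick check of the state of all registered processes can be made with the sap utility, which is described in later chapters of this guide:

sap disp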


2.4 Data Flow Overview

2.4.1 Network Elements

There are a large number of different elements in a telecommunications network, generating a very large amount of raw data counters used for Performance Management. Depending on the network technology, they may be as different as:
- BSCs, MSCs (GSM)
- UtranCells, Node Bs (UMTS)
- Cross-connects (transmission networks)

The counters are supplied with different formats and meanings for different vendors. This variety of file formats must be transformed into a format that is readable by the Tivoli Netcool Performance Manager loaders. This is accomplished using the Mediation Services, described below.

2.4.2 Mediation Services

The main role of the Mediation Services is to take the data files from the managed elements and present them in a specific format to the Tivoli Netcool Performance Manager loaders for population of performance data into the database. Mediation Services refers to all the software processes responsible for checking data and converting it to a common format.

Gateways

The Gateways are scripts, usually written in Perl or AWK, that have been designed to convert a specific set of performance counters from a defined equipment vendor to the standard .lif format used in Tivoli Netcool Performance Manager. Gateways are customized for this specific use and cannot be used for a data set generated by another vendor or equipment type.

2.4.3 Application Framework

The Tivoli Netcool Performance Manager application framework operates on a layer between the Tivoli Netcool Performance Manager database and the Tivoli Netcool Performance Manager web client. These components:
- input data into the database
- retrieve and cache data from the database upon user request
- serve pages to the web client interface for use by the user
- manage schedules and services to maintain the database
- manage schedules and services related to user report generation

Platform management services

The role of the platform management services is to start and maintain the running of the application processes. If the processes are not running, a user cannot access the system to run reports.

The process management framework consists of three process utilities: sapmon, sapmgr and sap. These services, along with the applicable loader(s), must be running to:
- place the managed element data into the database
- provide basic application functionality to the user.

Data loading

Once data is delivered in the correct format to the applicable spool directory, a loader loads the data into the database.
Loaders

The role of the loaders is to prepare and process the loading of the data into the database. On a system there may be any number of loaders running for any number of technologies. The loader process runs constantly, taking data from the loader spool directories and loading the performance data into the database.
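As an illustrative check only (the exact directory layout can vary between installations), you can confirm that files are arriving for a given loader by listing its spool area under $WMCROOT/var/loader/spool; the loader name shown here is a placeholder:

ls -l $WMCROOT/var/loader/spool/<loader_name>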
Loader Configuration Manager

The loader configuration manager enables Datasources, Loader Configurations and NC Relations to be loaded from XML files into the administration database, and unloaded from the administration database to XML files.

PM business services

The performance management services consist of those services which provide the user with the functionality they need to access the database and produce performance management reports. Some of the services are common or core services that the application uses to maintain the system and services. Other services provide specific application functionality.
Common and core services

These services are associated with the underlying architecture and framework implemented in Tivoli Netcool Performance Manager. These services provide the base functionality to allow:
- the user to interact with the system
- the system to perform critical jobs that monitor and maintain the database and application framework
- the system to deploy technology packs, upgrades and patches
- the system to maintain the database
- the report generation process to occur

These services are provided by the following components of the Tivoli Netcool Performance Manager application framework:
- Application server: The application server consists of the JBoss application server and the agent framework. These components are used to:
  - communicate with the LDAP server and datasource to generate reports
  - gather information about the datasource and other items to define a report.


- Legacy services: This framework consists of processes necessary for jobs to run that maintain and monitor the system. Some examples are:
  - scheduler queues
  - partition maintenance
  - summary creation
- Report Generator: The report generator is a dynamic SQL generator that allows the user to interact with the interface and produce dynamic SQL queries to the database as per the report definition or interaction request.
Application services

Alarm management: The alarm management module allows the user to view alarms.

Client access layer

The client access layer is a subsystem of the JBoss server. It is the web/HTML page server that provides the static and dynamic content for the web client interface.


3 Setup Tasks

This chapter describes a number of tasks that will have been performed as part of the installation of Tivoli Netcool Performance Manager. You do not need to perform these tasks again; they are described for information purposes.

3.1 Overview

Setup tasks include:
- Software install summary
- Datasources
- LDAP setup
- Crontab setup
- SAP

3.2 Software Install Summary

The Tivoli Netcool Performance Manager architecture is typically made up of four components, each with specific functions:
- The Application Component consists of the Tivoli Netcool Performance Manager software which is used to run an application framework.
- The Gateway Component deals with the processing of data which is downloaded from datasources.
- The Database Component consists of an Oracle Database which the system uses to store data.
- The Client PC is used to run the Tivoli Netcool Performance Manager GUI.

Components can be installed on a single server or distributed over several machines. For example, a single server could be used for the Application, Gateway and Database Components, or these three components could be split over three servers.


Installation includes a number of tasks. The following table lists the main installation tasks.

Table 3: Tivoli Netcool Performance Manager Installation

User and Group Accounts Creation: User and group accounts creation.
Software Installation: Installing Tivoli Netcool Performance Manager, and required third party products.
Gateway Installation: Deploying gateway packages.
Technology Pack Installation: Installing technology packs.
Cronjob Installation: Installing cronjobs.
Configuring and Starting Tivoli Netcool Performance Manager: Configuring and starting Tivoli Netcool Performance Manager.

3.3 Datasource setup

Datasources provide the system with the necessary performance data for reports. Datasources are typically servers that contain entity and performance data information. For more information on Datasources see Datasource, Agent and KPI Cache Administration on page 77.

3.4 LDAP - Tivoli Directory Server setup

The Tivoli Directory Server is an LDAP directory service used to manage users, roles, and privileges. The directory server is installed as a prerequisite to the installation of Tivoli Netcool Performance Manager. For information on starting and stopping the directory server, see Tivoli Directory Server on page 24. The directory server must be running to allow users to log in through the Tivoli Netcool Performance Manager GUI. Tivoli Netcool Performance Manager users, privileges, roles and groups can be created and altered through the GUI. See Application Administration on page 31.
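As a simple illustration only (not a documented procedure from this guide), you can usually confirm that the directory server process is running before users attempt to log in. The daemon is assumed here to be ibmslapd, the standard IBM Tivoli Directory Server process name; verify the process name on your installation:

ps -ef | grep ibmslapd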


3.5 Crontab setup

The crontab setup is installed and set up as part of the main installation. The crontab is installed using the following script:
$WMCROOT/admin/common/install/scripts/cron_install

Note: See the Tivoli Netcool Performance Manager: Installation Guide - Wireless Component for more information on installing Cron.

The installation sets up the virtuo and root user scheduled Cron tasks. There are two different Cron setups installed for the two different users:
- virtuo user cron setup
- root user cron setup

The installation uses the following two crontab files to set up the virtuo and root user crontab lists:
$WMCROOT/admin/common/cron/core_root_crontab
$WMCROOT/admin/common/cron/core_virtuo_crontab

3.5.1 Virtuo User setup

The following is a sample default crontab list for a basic installation. As user virtuo, enter:
crontab -l
0 1 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 31 /data/trace_archive1 \*.log.\*
0 1 * * * /appl/virtuo/admin/common/cron/cron_script -a -d 0 /appl/virtuo/logs \*.log.\* /data/trace_archive1
14,29,44,59 * * * * /appl/virtuo/bin/alarmapi_admin -da > /dev/null 2>&1
30 * * * * /appl/virtuo/bin/run_loader_cleanup 3600 > /dev/null 2>&1
0 3 * * * /appl/virtuo/admin/common/cron/cron_script -a -d 5 /appl/virtuo/logs/nc_archiver \*log.\* /data/trace_archive1
0 3 * * * /appl/virtuo/admin/common/cron/cron_script -p -d 3 /appl/virtuo/logs/loader \*.log.\* /data/trace_archive1
2,17,32,47 * * * * /appl/virtuo/bin/run_itm_rawcoverage_logger > /dev/null 2>&1
1,16,31,46 * * * * /appl/virtuo/bin/run_itm_usage_logger 15 > /dev/null 2>&1
0 1 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 1 /appl/virtuo/var/rg/spool/export/reports \*.csv
0 1 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 1 /appl/virtuo/var/rg/spool/export/reports \*.xml
0 1 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 1 /appl/virtuo/var/rg/spool/export/reports \*.xls

3.5.2 Root User Setup

The following is a sample default crontab list for a basic installation.


Note: Some environments will have additional entries. As user root:


crontab -l
0 23 * * * /appl/virtuo/admin/oracle/cron/roll_listener_log
0 23 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 2 CROND_LOG log.\*
0 23 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 2 CROND_OLOG olog.\*
0 23 * * * /appl/virtuo/admin/common/cron/roll_cron_log
0 22 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 2 /tmp crout\*
0 23 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 2 /appl/oracle/product/10.2.0/db_1/network/log listener.log.\*
0 23 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 5 /oradump/vtdb vtdb_arch_\*
0 23 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 2 /appl/ldap/idsslapd-idsinst/logs \*.log

3.5.3 Additional entries and scripts

The following script is also available:


0 2 * * * /appl/virtuo/admin/common/cron/archive_loader_data -wmcr /appl/virtuo

This script archives .lif data files (files produced by the gateways and processed by the loaders in large volumes). The entry shown above is not added to the crontab by default; it must be added by the administrator. Crontab entries can be added if more scripts are written or more log files are generated. Cron entries are added by editing the cron list using crontab -e. The following files are the generated cron files; these are the files that are changed using crontab -e for the root and virtuo users:
/var/spool/cron/crontabs/virtuo
/var/spool/cron/crontabs/root
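For example, to add the archive_loader_data entry shown above, edit the virtuo user's crontab and append the entry (the paths assume the default /appl/virtuo installation root used in the earlier samples):

crontab -e
0 2 * * * /appl/virtuo/admin/common/cron/archive_loader_data -wmcr /appl/virtuo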

3.6 SAP setup

SAP is a process management utility consisting of a process manager and a process monitor. The Process Monitor manages the restart capability of Tivoli Netcool Performance Manager. The Process Manager registers all the Tivoli Netcool Performance Manager processes. SAP scripts are installed under $WMCROOT/bin as part of the core installation. The SAP manager and framework are started using the following commands, as user root:
Solaris
svcadm enable sapmon-na
svcadm enable sapmgr-na


Linux
service sapmonvirtuo start
service sapmgrvirtuo start
AIX
/etc/rc.d/init.d/sapmonvirtuo start
/etc/rc.d/init.d/sapmgrvirtuo start

For more information on starting and stopping SAP utilities see Process Monitor on page 25 and Process Manager on page 26. Processes are started using the following command:
Solaris
svcadm enable sap-na
Linux
service sapvirtuo start
AIX
/etc/rc.d/init.d/sapvirtuo start

Individual processes can be started using:


Solaris
svcadm enable <process_name>-na
Linux
service <process_name> start
AIX
sap start <process_name>

Information on processes can be displayed using:


sap disp

or:
sap disp -l

for verbose output

Producing the following example output for sap disp:


NAME                    STATE    SINCE
as                      STARTED  Feb 13, 2009
asd                     STARTED  Feb 13, 2009
nc_cache                STARTED  Feb 13, 2009
alarm_cache             STARTED  Feb 13, 2009
load_nokiabss_oss31ed3  stopped  -

Producing the following example output for sap disp -l:


NAME               STATE    SINCE         HOST           GROUP             STIME         PID
as                 STARTED  Oct 23, 2008  <core_host>    asgroup           Oct 23, 2008  17277
nc_cache           STARTED  Oct 29, 2008  <core_host>    loadercache       Oct 29, 2008  6716
alarm_cache        STARTED  Oct 29, 2008  <core_host>    loadercache       Oct 29, 2008  6726
load_<loadername>  stopped  -             <target_host>  Ericsson GSM BSS  -             -

3.6.1 SAP configuration

The processes are automatically configured in SAP following core installation. The SAP tool uses property files to start the application server and the configured loaders. These files are stored in the following location:
$WMCROOT/conf/processes/*.properties

The following is a sample application server property file:


#
# application server
#
com.comp.process.as.exec=@{WMCROOT}/bin/run_as
com.comp.process.as.params=
com.comp.process.as.group=asgroup
com.comp.process.as.start.pmgtprovider=false
com.comp.process.as.host=${WMCHOST}
com.comp.process.as.start.sequence=1

Table 4: Application Server Property File - Variable Descriptions

com.comp.process.as.exec=@{WMCROOT}/bin/run_as
    Describes the command that is run when the user enters sap start.
com.comp.process.as.params=
    The space-separated command line arguments for the process. The variable is optional.
com.comp.process.as.group=asgroup
    Describes the group of processes that this process belongs to. As well as using sap start <process name>, it is possible to start a group by using sap start <group name>. The variable is optional.
com.comp.process.as.start.pmgtprovider=false
    Defines whether or not the process makes callbacks to inform the framework of its init states. The variable is optional and defaults to false.
com.comp.process.as.host=${WMCHOST}
    Defines the name of the server that this process runs on.
com.comp.process.as.start.sequence=1
    Defines the order in which processes are started. The numbers must be sequential. If more than one process is given the same sequence number, the user will not know which process started first; this also applies to the default value of 0. The variable is optional.

The following is a sample loader property file:


com.comp.process.load_nokiabss_oss31ed3.exec=\@{WMCROOT}/bin/run_njloader
com.comp.process.load_nokiabss_oss31ed3.params=nokiabss_oss31ed3
com.comp.process.load_nokiabss_oss31ed3.group=Nokia GSM BSS
com.comp.process.load_nokiabss_oss31ed3.host=\${WMCHOST}
com.comp.process.load_nokiabss_oss31ed3.start.pmgtprovider=false
com.comp.process.load_nokiabss_oss31ed3.start.sequence=401
com.comp.process.load_nokiabss_oss31ed3.start.timeout=5000

Table 5: Loader Property File - Variable Descriptions

com.comp.process.load_nokiabss_oss31ed3.exec=\@{WMCROOT}/bin/run_njloader
    The command that is run when users start the loader, run_njloader.
com.comp.process.load_nokiabss_oss31ed3.params=nokiabss_oss31ed3
    The name of the loader, nokiabss_oss31ed3.
com.comp.process.load_nokiabss_oss31ed3.group=Nokia GSM BSS
    The group of processes to which the loader belongs.
com.comp.process.load_nokiabss_oss31ed3.host=\${WMCHOST}
    The name of the server that this process runs on.
com.comp.process.load_nokiabss_oss31ed3.start.pmgtprovider=false
    Defines whether or not the process makes callbacks to inform the framework of its init states. The variable is optional and defaults to false.
com.comp.process.load_nokiabss_oss31ed3.start.sequence=401
    Defines the order in which processes are started. The actual numbers do not have to be sequential. If more than one process is given the same sequence number, the user will not know which process started first. The default value is 0 and this variable is optional.
com.comp.process.load_nokiabss_oss31ed3.start.timeout=5000
    Defines the timeout period, that is, the length of time to wait to restart the loader if the loader fails to start.

Manual configuration of processes through SAP is not necessary.


3.7 Application and system passwords

This section provides advice on changing default passwords for Tivoli Netcool Performance Manager.

3.7.1 Application Users

For configured users, refer to User administration. The following Tivoli Netcool Performance Manager application users may not be modified:
- USERADM
- VIRTUO
- SYSADM

3.7.2 OS Users

For the following UNIX users, customer system administration security rules may apply:
- VIRTUO
- ORACLE

3.7.3 Oracle Users

Database users

The passwords for the following database users should be changed:
- SYS: to be changed by the database administrator
- SYSTEM: to be changed by the database administrator
- VIRTUO: to be changed by the database administrator

Note: The new passwords may need to be given to IBM Support if support is required to make database changes.

The following user is not used by the application but is used by Oracle Enterprise Manager. It is advised that this password is also changed:
- DBSNMP: to be changed by the database administrator

Oracle users

The following Oracle users are not used; these passwords can be changed without affecting the system.
- DIP: "LOCKED and EXPIRED"
- MGMT_VIEW: "LOCKED and EXPIRED"
- OUTLN: "LOCKED and EXPIRED"
- SYSMAN: "LOCKED and EXPIRED"
- TSMSYS: "LOCKED and EXPIRED"
- WMSYS: "LOCKED and EXPIRED"

For further information on Oracle security, see the Oracle white paper:
http://www.oracle.com/technology/deploy/security/database-security/pdf/twp_security_checklist_database.pdf
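As an illustration only (the exact procedure is at the discretion of the database administrator and is not prescribed by this guide), Oracle account passwords are typically changed with standard SQL*Plus statements such as the following, run as a suitably privileged user; the password value is a placeholder:

sqlplus / as sysdba
SQL> ALTER USER dbsnmp IDENTIFIED BY <new_password>;
SQL> ALTER USER dip ACCOUNT LOCK PASSWORD EXPIRE;
SQL> exit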


4 Starting and Stopping the system

This chapter describes starting and stopping the Tivoli Netcool Performance Manager system.

4.1 Overview

The following applications must be running before Tivoli Netcool Performance Manager can be started properly:
- Oracle Database
- (LDAP) Tivoli Directory Server
- (SAP) Process Monitor
- (SAP) Process Manager

When these applications are running, Tivoli Netcool Performance Manager can be started. All the processes are started automatically at bootup. The initial startup takes place as part of the main installation; see the Tivoli Netcool Performance Manager: Installation Guide - Wireless Component for more information. For instructions on the complete startup and shutdown of Tivoli Netcool Performance Manager and its processes, see Tivoli Netcool Performance Manager Complete startup and shutdown on page 27.

Note: A number of status checks can be performed on Tivoli Netcool Performance Manager applications and processes; see Reporting on server status on page 96.

4.2 Oracle Database

Note: For additional details on manually starting and stopping Oracle using SQL*Plus and the OS user as oracle see Starting and stopping the Oracle database on page 99.


4.2.1 Starting the Oracle Database

To start the Oracle Database: 1. Enter the following command as user root:
Solaris
svcadm enable database-na
Linux
service dboravirtuo start
AIX
/etc/rc.d/init.d/dboravirtuo start

4.2.2 Stopping the Oracle Database

To stop the Oracle Database: 1. Enter the following command as user root:
Solaris
svcadm disable database-na
Linux
service dboravirtuo stop
AIX
/etc/rc.d/init.d/dboravirtuo stop

4.3 Tivoli Directory Server

4.3.1 Starting the Tivoli Directory Server

To start the Tivoli directory server: 1. Enter the following command as user root:
Solaris
svcadm enable tds-na
Linux
service tdsna start
AIX
/etc/rc.d/init.d/tdsna start


4.3.2 Stopping the Tivoli Directory Server

To stop the Tivoli directory server: Enter the following command as user root:
Solaris
svcadm disable tds-na
Linux
service tdsna stop
AIX
/etc/rc.d/init.d/tdsna stop

4.4 Process Monitor

The Process Monitor manages the restart capability of the application.

Note: The Process Monitor must be started before the Process Manager. The Process Manager cannot be started until the Process Monitor is started.

Note: Distributed systems only. In a distributed environment the Process Monitor is only started on the Server containing the Application component.

4.4.1 Starting the Process Monitor

To start the Process Monitor: 1. Enter the following command as user root:
Solaris
svcadm enable sapmon-na
Linux
service sapmonvirtuo start
AIX
/etc/rc.d/init.d/sapmonvirtuo start


4.4.2 Stopping the Process Monitor

To stop the Process Monitor: 1. Enter the following command as user root:
Solaris
svcadm disable sapmon-na
Linux
service sapmonvirtuo stop
AIX
/etc/rc.d/init.d/sapmonvirtuo stop

4.5 Process Manager

The Process Manager registers all the Tivoli Netcool Performance Manager processes. Note: Distributed systems only. In a distributed environment the Process Manager is only started on the Server containing the Application component.

4.5.1 Starting the Process Manager

To start the Process Manager, complete the following: 1. Enter the following command as user root:
Solaris
svcadm enable sapmgr-na
Linux
service sapmgrvirtuo start
AIX
/etc/rc.d/init.d/sapmgrvirtuo start

4.5.2 Stopping the Process Manager

To stop the Process Manager, complete the following: 1. Enter the following command as user root:
Solaris
svcadm disable sapmgr-na
Linux
service sapmgrvirtuo stop
AIX
/etc/rc.d/init.d/sapmgrvirtuo stop

This command does not stop processes.



4.6 Tivoli Netcool Performance Manager

Tivoli Netcool Performance Manager is started and stopped using the sap command. The sap command starts and stops all registered processes. For instructions on the complete startup and shutdown of Tivoli Netcool Performance Manager and processes, see Tivoli Netcool Performance Manager Complete startup and shutdown on page 27.

4.6.1 Starting Tivoli Netcool Performance Manager

To start the Tivoli Netcool Performance Manager application: 1. Enter the following command as user root:
Solaris
svcadm enable sap-na
Linux
service sapvirtuo start
AIX
/etc/rc.d/init.d/sapvirtuo start

It may take a few minutes to start all the processes. You can check the loader logs for startup issues:
$WMCROOT/logs/loader/

Log information from the application server is written to:


$WMCROOT/logs/as/default
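For example, to inspect the most recent loader and application server log activity during startup (log file names vary by loader and installation, so the file names below are placeholders):

ls -lrt $WMCROOT/logs/loader/
tail -f $WMCROOT/logs/loader/<loader_log_file>
tail -f $WMCROOT/logs/as/default/<as_log_file>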

4.6.2 Stopping Tivoli Netcool Performance Manager

To stop the Tivoli Netcool Performance Manager application: 1. Enter the following command as user root:
Solaris
svcadm disable sap-na
Linux
service sapvirtuo stop
AIX
/etc/rc.d/init.d/sapvirtuo stop

4.7 Tivoli Netcool Performance Manager Complete startup and shutdown

4.7.1 Complete Startup

The following procedure starts the Oracle database, the directory server, the Process Monitor, the Process Manager, and all Tivoli Netcool Performance Manager processes.


Note: If you do not need to start up the Oracle database or the directory server, ignore instructions relating to starting the Oracle database and Directory server.

Note: On Solaris only, it is possible to start up all Tivoli Netcool Performance Manager applications, the directory server and the Oracle database using a single command:
svcadm enable database-na tds-na sapmon-na sapmgr-na sap-na

Oracle Database

Start the Oracle Database:
1. Enter the following command as user root on the Tivoli Netcool Performance Manager server(s):
Solaris
svcadm enable database-na
Linux
service dboravirtuo start
AIX
/etc/rc.d/init.d/dboravirtuo start

Directory Server

Start the directory server:
1. Enter the following command as user root on the Tivoli Netcool Performance Manager server(s):
Solaris
svcadm enable tds-na
Linux
service tdsna start
AIX
/etc/rc.d/init.d/tdsna start

Tivoli Netcool Performance Manager

Important: Distributed systems only. In a distributed system, this section should be performed only on the server hosting the Application component.

1. Check which processes are currently running:
Solaris
svcs "*-na*"
Linux
service sapmonvirtuo status
service sapmgrvirtuo status
service sapvirtuo status
AIX
/etc/rc.d/init.d/sapmonvirtuo status
/etc/rc.d/init.d/sapmgrvirtuo status
/etc/rc.d/init.d/sapvirtuo status

2. Enter the following commands as user root to ensure that SAP process management is running, and start the remaining processes.
Solaris
svcadm enable sapmon-na
svcadm enable sapmgr-na
svcadm enable sap-na
Linux
service sapmonvirtuo start
service sapmgrvirtuo start
service sapvirtuo start
AIX
/etc/rc.d/init.d/sapmonvirtuo start
/etc/rc.d/init.d/sapmgrvirtuo start
/etc/rc.d/init.d/sapvirtuo start

It may take a few minutes to start all the processes. You can check the loader logs for startup issues:
$WMCROOT/logs/loader/

Log information from the application server is written to:


$WMCROOT/logs/as/default

4.7.2 Complete Shut down

The following procedure shuts down Tivoli Netcool Performance Manager, all processes, the Process Manager, the Process Monitor, the Directory Server and Oracle. Note: If you do not need to shut down the Oracle database or the Directory server, ignore instructions relating to shutting down the Oracle database and Directory server.

Important: Distributed systems only. In a distributed system, this section should be performed only on the server hosting the Application component.

Shut down the system as follows:

Note: When disabling services, disable the services one at a time and in the given sequence.

1. Enter the following commands as user root on the Tivoli Netcool Performance Manager server(s):
Solaris:
svcadm disable sap-na

Before continuing, check that the sap-na service is disabled by running the svcs sap-na command to check the status.

svcadm disable sapmgr-na
svcadm disable sapmon-na

Linux:
service sapvirtuo stop
service sapmgrvirtuo stop
service sapmonvirtuo stop

AIX:
/etc/rc.d/init.d/sapvirtuo stop
/etc/rc.d/init.d/sapmgrvirtuo stop
/etc/rc.d/init.d/sapmonvirtuo stop
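For example, on Solaris the check between disabling sap-na and the remaining services can look like this (illustrative only; the exact output columns depend on your Solaris release):

svcs sap-na
# Proceed only when the STATE column for sap-na reads "disabled".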

Check the appropriate log files and processes to ensure a graceful shutdown has occurred, see Tivoli Netcool Performance Manager check on page 97 and Log files check on page 97 for more information.

Directory Server
Shut down the Tivoli directory server:
1. Enter the following commands as user root on the Tivoli Netcool Performance Manager server(s):
Solaris: svcadm disable tds-na
Linux:   service tdsna stop
AIX:     /etc/rc.d/init.d/tdsna stop

Oracle Database Shut down the Oracle Database: 1. Enter the following commands as user root on the Tivoli Netcool Performance Manager server(s):
Solaris: svcadm disable database-na
Linux:   service dboravirtuo stop
AIX:     /etc/rc.d/init.d/dboravirtuo stop


Application Administration

This chapter describes Tivoli Netcool Performance Manager application administration. This includes:
User administration
External Reporting administration
Report Granularity
Aggregation properties
Excel download properties
Secondary keys
Maintaining property values for User Comments, Reports and MyFavorites
KPI Aliases and User Defined Groups
KPI Browser configurable parameters

5.1

User administration

The User Administration tool allows you to configure a wide range of ways for users to access the system using:
users
groups
roles
privileges

5.1.1

User administration basics

The User Administration tool is accessed from the GUI using the TOOLS tab, by selecting USER ADMINISTRATION from the drop-down list box.


Figure 2: User Administration

Users
Tivoli Netcool Performance Manager users are those users in the LDAP repository that have been configured to use the application.

Groups
Groups are collections of users. Permission to access user documents such as reports is given to groups. Users can belong to more than one group. The system includes a number of predefined groups, shown in the following table, which cannot be edited. You can also create your own groups.
Table 6: Predefined Groups

Name       Description
Admin      Used to group administrators together.
Everybody  A group that automatically contains all of the users defined by the system.

Roles
Roles are collections of privileges. Roles can contain other roles. Roles are assigned to users, not to groups. The system includes a number of predefined roles, shown in the following table, which cannot be edited. You can also create your own roles. The total set of privileges that a user has is determined by the roles assigned to that user, and the privileges associated with those roles.

You can view the privileges associated with a role. See Assigning/De-assigning Privileges to a Role on page 41 for information on how to determine the privileges in a role.
Table 7: Predefined Roles

Basic Web User - A limited user who can only read standard report definitions, read schedule definitions, read Vault documents, show users from the Everybody group, show folders from the Everybody group, and access the Alarm Viewer.
Normal Web User - A typical user who has all the privileges of a Basic Web User and who can also edit/delete standard report definitions, run standard reports, edit/delete schedule definitions, has a personal documents area, can read/edit MyFavorites pages, and view UDC definitions.
Power Web User - An advanced user who has all the privileges of a Normal Web User and who can also publish and edit Vault documents, view and edit UDCs, access the Alarm Exporter and Alarm Manager, and import and export data with the Admin tool.
System Administrator - The top-level administrator having all available privileges. System administrators have full control over the application.

Privileges Privileges are a list of tasks and features available for users. Privileges are grouped into roles, which are then applied to a particular user. The following table describes the privileges available on the system.
Table 8: Privilege Descriptions

Privilege | Applies To | Description
Admin: Edit Datasource definitions | Administration software | Allows the user to perform actions associated with agents and agent activities.
Admin: edit users and user data | Administration software, Web Client | Allows the user to edit users, groups, and roles.
Allow access to Alarm Exporter | Web Client | Allows the user to create and modify alarm targets.
Allow access to Alarm Manager | Web Client | Allows the user to activate and deactivate alarms, and to modify alarm definitions.
Allow access to Alarm Viewer | Web Client | Allows the user to view and acknowledge alarms.
Configure Jboss | Administration software | Allows the user to configure Jboss.
Create entity and field mappings (equivalencies) | Administration software | Allows the user to model entity and field equivalencies. This privilege does not apply.
Edit Agent settings | Administration software | Allows the user to modify properties and perform actions associated with agents and agent activities.
Edit/delete any existing reports | Web client | Allows the user to modify all report results, regardless of the assigned permissions.
Edit/delete any existing folders or documents | Web client | Allows the user to modify all saved documents, regardless of the assigned permissions.
Edit any remote UDC definitions | Administration software | Allows the user to create/edit/delete any remote UDC owned by any user.
Edit/delete enterprise report definitions | Web client | Allows the user to open and edit enterprise report definitions. This privilege does not apply.
Edit/delete schedule definitions | Web client | Allows the user to schedule reports to be run.
Edit/delete standard Web report definitions | Web client | Allows the user to open and edit local report definitions.
Edit entity data | Administration software | Allows the user to enter information about an entity instance using the Entity Data Editor. This privilege does not apply.
Edit Holidays | Administration software | Allows the user to edit holiday definitions using the Holiday Administration tool.
Edit MyFavorite pages | Web client | Allows the user to create, edit, and delete customized pages in the Web client.
Edit remote UDC definitions | Administration software | Allows the user to create/edit/delete a remote UDC owned by the current user.
Export Data with the Admin Tool | Administration software | Allows the user to use the export tool.
Import Data with the Admin Tool | Administration software | Allows the user to use the import tool.
Manage AutoDownload entries | Web client | Allows the user to manage the auto downloading of scheduled reports that exist on remote servers. This privilege does not apply.
Promote UDCs | Administration software | Allows the user to promote a UDC.
Publish and edit Vault documents | Web client | Allows the user to organize folders and save documents to the vault page, assuming they have the appropriate file permissions.
Read enterprise report definitions | Web client | Allows the user to read enterprise report definitions. This privilege does not apply.
Read MyFavorite pages | Web client | Allows the user to view customized pages in the Web client.
Read reports from remote servers | Web client | Allows the user to see remote server report status from the Monitor tab. This privilege does not apply.
Read schedule definitions | Web client | Allows the user to view report schedules.
Read standard Web report definitions | Web client | Allows the user to view local report definitions.
Read Vault documents | Web client | Allows the user to browse through documents that have been published to the vault page.
Rename remote UDC definitions | Administration software | Allows the user to rename a remote user-defined calculation (UDC).
Run classic reports on remote servers | Web client | Allows the user to run reports residing on a remote server. This privilege does not apply.
Run enterprise report definitions | Web client | Allows the user to run enterprise reports. This privilege does not apply.
Run standard Web reports | Web client | Allows the user to run report definitions. This privilege implies read and edit privileges.
Runtime Accessor | Administration software | Allows a user to run an external user-managed executable after a scheduled report job has been completed.
Show folders from Everybody group | Web client | Allows the user to see documents in a folder in which the user does not have read or write permissions. If this privilege is not granted, the user can only see folders available to the group(s) they belong to.
Show users from Everybody group | Web client | Allows the user to see all users, including those in groups the user does not belong to. If this privilege is not granted, the user can only see other users in the group(s) they belong to.
User has a Personal Documents area | Web client | Allows the user to organize folders in a private area and save documents there.
View Agent settings | Administration software | Allows the user to view agent properties and agent activities, but not modify any associated properties or perform any associated actions.
View data availability | Web client | Allows the user to view data availability. This privilege does not apply.
View Datasource definitions | Administration software | Allows the user to view agent properties and agent activities, but not perform any actions.
View documents on remote servers | Web client | Allows the user to browse through documents on remote servers.
View enterprise data availability | Web client | Allows the user to view enterprise data availability. This privilege does not apply.
View entity and field mappings (equivalencies) | Administration software | Allows the user to view entity equivalencies, but not edit them. This privilege does not apply.
View entity data | Administration software | Allows the user to view entity information in the Entity Data Editor, but not edit it. This privilege does not apply.
View promoted UDCs | Administration software | View promoted user-defined calculations (UDCs).
View remote UDC definitions | Administration software | Allows the user to view the definition for a remote user-defined calculation.
View users and user data | Administration software | Allows the user to view users, groups, and roles, but not edit them.

5.1.2

User management

User Management covers the tasks of adding and maintaining users and groups, and associating users with roles.
Figure 3: User Management

Adding users
You must have the appropriate privileges to add a user. The Add operation adds the user to the LDAP repository and to the database, creating both inet_user (anonymous user) and user entries. A user is added by default to the Everybody group, and assigned the Normal Web User role. To add a user:
1. In the MANAGE USERS tab, click the VIRTUO USERS tab.
2. Click the ADD USER button. The ADD USER dialog is displayed.


Figure 4: Adding a user

3. Enter the user's details in the appropriate fields.
4. Click OK. The user is listed in the LIST OF USERS.

Editing users
You must have the appropriate privileges to edit a user. To edit a user:
1. In the MANAGE USERS tab, click the VIRTUO USERS tab.
2. Select the user from the LIST OF USERS.
3. Click the EDIT USER button.
4. Edit the user's details as required. You cannot alter a user's login ID.
5. Click OK.

Deleting users
Deleting a user removes the user from the server. You must have the appropriate privileges to delete a user. The following users cannot be deleted: useradm, virtuo and sysadm. To delete a user:
1. In the MANAGE USERS tab, click the VIRTUO USERS tab.
2. Select the user from the LIST OF USERS. Tip: Select more than one user using the Shift and Ctrl keys.
3. Click the DELETE SELECTED USER(S) button. A message is displayed asking you to confirm the deletion.
4. Click YES.

Creating and deleting groups
Groups are primarily for users to determine who can access their reports. You must have the appropriate privileges to create or delete a group. You cannot delete the system predefined groups: everybody and admin. To create a group:
1. In the MANAGE USERS tab, click the USERS BY GROUP tab.
2. Click the ADD USER GROUP button. The ADD GROUP dialog is displayed.
Figure 5: Adding user groups

3. Enter a name for the group.
4. Click OK. The group is listed in the LIST OF AVAILABLE USER GROUPS.
To delete a group:
1. In the MANAGE USERS tab, click the USERS BY GROUP tab.
2. Select the group you want to delete. Tip: Select more than one group using the Shift and Ctrl keys. You cannot delete a group that has one or more users associated with it.
3. Click the DELETE SELECTED USER GROUP(S) button. A message is displayed asking you to confirm the deletion.
4. Click YES.

Adding and removing users to/from groups
Adding users to groups is an easy way to allow users access to certain folders and reports. You must have the appropriate privileges to add a user to a group. To add/remove a user to a group:
1. In the MANAGE USERS tab, click the USERS BY GROUP tab.
2. Select the group that you want to add/remove users to/from, in the LIST OF AVAILABLE USER GROUPS.

3. Add/Remove users as required. Users are added by dragging the user from the AVAILABLE USERS box to the ASSOCIATED USERS box. Users are removed by dragging the user from the ASSOCIATED USERS box to the AVAILABLE USERS box. Tip: Select more than one user using the Shift and Ctrl keys.

Assigning and de-assigning users to/from a role
Assigning users to a role allows you to determine how they interact with the system. You can restrict access to folders and systems, or you can grant special privileges to certain classes of users. You must have the appropriate privilege to assign users to a role. To assign/de-assign users to a role:
1. In the MANAGE USERS tab, click the USERS BY ROLE tab.
2. Select the role that you want to assign/de-assign users to/from, in the LIST OF AVAILABLE ROLES.
3. Assign/de-assign users as required. Users are assigned by dragging the user from the AVAILABLE USERS box to the ASSOCIATED USERS box. Users are de-assigned by dragging the user from the ASSOCIATED USERS box to the AVAILABLE USERS box. Tip: Select more than one user using the Shift and Ctrl keys.


5.1.3

Role Management

Role Management covers the tasks necessary for setting up and maintaining roles.

Creating and Deleting Roles
Creating a role allows you to group custom privileges that can then be assigned to users. You must have the appropriate privileges to create or delete a role. You cannot delete the system predefined roles: basic web user, normal web user, power web user and system administrator. To create a role:
1. Click the MANAGE ROLES tab.
2. Click the ADD ROLE button. The ADD ROLE dialog is displayed.
Figure 6: Adding roles

3. Enter details for the role.
4. Click OK. The role is listed in the LIST OF AVAILABLE ROLES.
To delete a role:
1. Click the MANAGE ROLES tab.
2. Select the role you want to delete. Tip: Select more than one role using the Shift and Ctrl keys. You cannot delete a role that has one or more users associated with it.
3. Click the DELETE SELECTED ROLE(S) button. A message is displayed asking you to confirm the deletion.
4. Click YES.


Assigning/De-assigning Privileges to a Role
Assigning privileges to a role allows you to group the privileges you want to assign to users. Creating roles is convenient when several people share common privileges. See Privileges on page 33 for information on access rights for different privileges. You must have the appropriate privileges to assign privileges to a role. To assign/de-assign privileges to a role:
1. Click the MANAGE ROLES tab.
2. Select the role that you want to assign/de-assign privileges to/from, in the LIST OF AVAILABLE ROLES.
3. Assign/De-assign privileges as required.
Privileges are assigned by dragging the privilege from the PRIVILEGES AVAILABLE FOR SELECTED ROLE box to the PRIVILEGES ASSOCIATED WITH SELECTED ROLE box.
Privileges are de-assigned by dragging the privilege from the PRIVILEGES ASSOCIATED WITH SELECTED ROLE box to the PRIVILEGES AVAILABLE FOR SELECTED ROLE box.
Tip: Select more than one privilege using the Shift and Ctrl keys.

5.1.4

User Administration Command Line Tool

The user_admin tool provides a means of creating and deleting users, as well as updating user passwords. This tool is intended to support bulk provisioning of users. It can be used in parallel with the User Administration GUI. This tool cannot be used off-line; it requires a virtuo administration login to the server hosting the Tivoli Netcool Performance Manager application.
Usage
user_admin [-asconf conf_name] parameters

-u <admin_user> -p <admin_password> -listusers simple
-u <admin_user> -p <admin_password> -listusers detail
-u <admin_user> -p <admin_password> -listroles
-u <admin_user> -p <admin_password> -add -f <firstname> -ln <lastname> -uid <user_id> -up <user_password> -rf <role_filename> [-e <email_addr>]
-u <admin_user> -p <admin_password> -modify -uid <user_id> -up <new_user_password>
-u <admin_user> -p <admin_password> -delete -uid <user_id>

Table 9: Options for user_admin Script

-u
  Administration user name.
-p
  Administration password.
-listusers simple
  List user identifiers only.
-listusers detail
  List all user details for all users.
-listroles
  List roles.
-add -f <firstname> -ln <lastname> -uid <user_id> -up <user_password> -rf <role_filename> [-e <email_addr>]
  Add user. <firstname> is the user's first name, <lastname> is the user's last name, <user_id> is the user's login ID, <user_password> is the user's password, <role_filename> is the name of the role file (a role file specifies a number of roles), and <email_addr> is the user's email address; an email address is optional.
-modify -uid <user_id> -up <new_user_password>
  Modify user. <user_id> is the user's login ID. <new_user_password> is the user's new password.
-delete -uid <user_id>
  Delete user. <user_id> is the user's login ID.

Listing Users The List operations are threefold. The two user listings consist of the user identifiers only, and the detailed listing of all user details. The role listing consists of listing all roles in the system. To list user ids:
user_admin -u <admin_user> -p <admin_password> -listusers simple
user_admin -u <admin_user> -p <admin_password> -listusers detail
user_admin -u <admin_user> -p <admin_password> -listroles

where:
<admin_user> is the administrator's login ID
<admin_password> is the administrator's login password

Adding Users The Add operation adds a user to LDAP and to the database, creating both inet_user and user entries. A user is added by default to the Everybody group in LDAP. A user is assigned to the roles specified in the role file. A user is assigned to all datasources in the system. To add a user:
user_admin -u <admin_user> -p <admin_password> -add -f <firstname> -ln <lastname> -uid <user_id> -up <user_password> -rf <role_filename> [-e <email_addr>]

where:
<admin_user> is the administrator's login ID
<admin_password> is the administrator's login password
<firstname> is the user's first name
<lastname> is the user's last name
<user_id> is the user's login ID
<user_password> is the user's password
<role_filename> is the name of the role file; a role file specifies a number of roles
<email_addr> is the user's email address; an email address is optional
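For example, a hypothetical invocation might look like the following; the user details, password and role file path are placeholders only, not values from your system:

user_admin -u sysadm -p <admin_password> -add -f Jane -ln Doe -uid jdoe -up ChangeMe123 -rf /tmp/webuser.roles -e jdoe@example.com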

Role Files A role file specifies a number of roles. A user is added to the Normal Web User role by default. Each role file is a text file with a role per line. The role name is the name of the role in LDAP, and not the user friendly role name as specified in the user interface. When adding a user, incorrect roles will be ignored and a warning message will be displayed. The user will be added to correctly named roles only. If all roles in the role file are incorrect, the user will still be added to the Normal Web User role. Example of a Role File:
WebUserNormal
WebUserPower

Modifying a User's password
The Modify operation allows a user password to be modified. The following users may not be modified: useradm, virtuo and sysadm.

To modify a user:
user_admin -u <admin_user> -p <admin_password> -modify -uid <user_id> -up <new_user_password>

where:
<admin_user> is the administrator's login ID
<admin_password> is the administrator's login password
<user_id> is the user's login ID
<new_user_password> is the user's new password

Deleting Users The Delete operation removes a user completely from the system. This includes all references to the user in the database and in LDAP. The following users may not be deleted: useradm, virtuo and sysadm. To delete a user:
user_admin -u <admin_user> -p <admin_password> -delete -uid <user_id>

where:
<admin_user> is the administrator's login ID
<admin_password> is the administrator's login password
<user_id> is the login ID of the user to delete


5.2

External Reporting administration

Note: The External Reporting feature does not handle multiple reports/schedules created with identical names. Duplicate report results can be exported to the same locations via database, local and FTP export. If multiple users happen to use the same report name and the same schedule name, then previously exported data will be overwritten. This can occur in a single-user scenario where the user has reports of the same name in different folders. It can also occur in a multiple-user scenario where users happen to use the same names. Note that this issue is caused only by the name of the report/schedule being the same. It is also important to note that this issue applies to both ad hoc and scheduled external exporting.

5.2.1

Setting External Reporting properties

The external reporting properties file is used to set properties and values for users exporting reports using the EXPORT OPTIONS dialog.

See Exporting Reports in the Tivoli Netcool Performance Manager: User Guide - Wireless Component, for more information on using the Export Options dialog. The external reporting properties file is located in $WMCROOT/conf/externalreporting/ default.properties. An example external reporting properties file is shown below:
########### db connection path ########
vallent.ds.path=dbconnection
vallent.ds.file=vtdb

#XRT folders
external.reporting.destination.folder=/appl/virtuo/var/rg/spool/export/reports/local
xrt.scripts.folder=/appl/virtuo/admin/scheduler/scripts

#Database
external.reporting.batch.size=1000
external.reporting.tablespace=TRAFFIC_LARGE

#FTP - User may uncomment and change values for the keys below
#external.reporting.ftp.default.server=localhost
#external.reporting.ftp.default.folder=/tmp
#external.reporting.ftp.default.port=21
#external.reporting.ftp.default.user=default
#external.reporting.ftp.default.password=@ENC@:142A97452514B49Fdefault

#SMTP
external.reporting.smtp.subject=IBM Tivoli Netcool Performance Manager for Wireless Report


The following list describes each property:
vallent.ds.path - database connection setting. Must not be changed.
vallent.ds.file - database connection setting. Must not be changed.
external.reporting.destination.folder - default location local exports are written to.
xrt.scripts.folder - default location where scripts are stored so that they appear in the drop-down list box for the Auto Run External Application feature when creating/editing a schedule. See Scheduling a report in the Tivoli Netcool Performance Manager: User Guide - Wireless Component, for information on using this feature.
external.reporting.batch.size - database setting. Must not be changed.
external.reporting.tablespace - database setting. Must not be changed.
external.reporting.ftp.default.server - default FTP host to be used for exporting reports using FTP. To set a default value, uncomment the property and edit the property value if required.
external.reporting.ftp.default.folder - default FTP server folder location to export reports to using FTP. To set a default value, uncomment the property and edit the property value if required.
external.reporting.ftp.default.port - default FTP port number to be used for exporting reports using FTP. To set a default value, uncomment the property and edit the property value if required.
external.reporting.ftp.default.user - default FTP user to be used for exporting reports using FTP. To set a default value, uncomment the property and edit the property value if required.
external.reporting.ftp.default.password - default FTP password to be used for exporting reports using FTP. To set a default value, uncomment the property and edit the property value if required.
external.reporting.smtp.subject - default Subject entry for all emails sent using external reporting.

5.2.2

Setting SMTP properties

A number of properties must be set correctly in order for SMTP (email) export to work. These properties are found in the as-default.properties file located in /appl/virtuo/conf/as/. The following properties must be set:
smtp.host=<full DNS name of the SMTP host>
smtp.port=25

5.2.3

External Reporting Dictionary Mapping

The XRT_DICT table contains the information that allows a user to map a given report name to its corresponding export table name. Each row in this table consists of the report name, the corresponding export table name and the time stamp for when the table was last updated with data.

To retrieve the most current table to which a report exports, run the following SQL query, where <report_name> is the name of the report:
SELECT aliasname FROM XRT_DICT WHERE reportname = '<report_name>' AND tstamp = (SELECT MAX(tstamp) FROM XRT_DICT WHERE reportname = '<report_name>');


To map the report's KPI names to their corresponding column names in the export table, query the MANGLER table with the following SQL query, where '<table_name>' is the table name returned by the query above:
SELECT name, mangling FROM mangler WHERE ctx = UPPER('<table_name>');

Note: This query will return all KPI names exported to the respective table, even KPIs that are no longer contained in the report definition.
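As a worked example, both lookups can be run from SQL*Plus as the virtuo database user. The report name 'Daily Cell Traffic' below is purely illustrative; substitute your own report name, and in the second query use the table name returned by the first:

sqlplus virtuo/<password>
SQL> SELECT aliasname FROM XRT_DICT
     WHERE reportname = 'Daily Cell Traffic'
     AND tstamp = (SELECT MAX(tstamp) FROM XRT_DICT WHERE reportname = 'Daily Cell Traffic');
SQL> SELECT name, mangling FROM mangler WHERE ctx = UPPER('<table_name>');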


5.3

Report Granularity

The granularity of a report relates to the minimum time intervals at which data can be reported on. The default granularity for report generation is 30 minutes. The granularity limit is set in the datasource record. To achieve greater report granularity, the time interval can be set to 15 minutes. There are two issues concerned with setting report granularity at 15 minutes:
1. Configuring support for 15-minute intervals, so that the system is able to recognize and compute 15-minute intervals.
2. Enabling the Report Definition GUI to group report results in 15-minute intervals, by enabling the USE THE GROUP BY 15 MINUTES checkbox.

5.3.1

Configure support for 15-minute reporting intervals

A shell script file can be used to configure the system to support 15-minute reporting intervals. Note: This script will solve both issues if it is run before system initialization (sys_init). If this script is run after system initialization, you must follow the instructions in Enable Report Definition GUI after completing this step. This file should be copied into /appl along with all of the other packages. To configure 15-minute reporting: 1. Call report_configure_15m.sh without arguments:
virtuo$ ksh report_configure_15m.sh

File Content
#!/bin/ksh
# Script to add reporting intervals as per prospect
# should be run as user virtuo before sys_init
export ORACLE_SID=vtdb
echo "UPDATE pm_product_info SET value = 96 WHERE property = 'TimeSlices';" \
  | sqlplus "virtuo/<password>"
echo "UPDATE wm_system_value SET value = 5 WHERE variable_id = (SELECT id FROM wm_system_variable WHERE name = 'MinReportPeriod');" \
  | sqlplus "virtuo/<password>"

This file does not need to be modified and can be used as is. Note: A reporting interval shorter than 15 minutes is currently not supported.


5.3.2

Enable Report Definition GUI

Note: You do not need to follow these instructions if the report_configure_15m.sh script was run before system initialization: sys_init. The following script does not solve issue number 1. This script would usually need to be run after the instructions in section 5.3.1, and after system initialization: sys_init. To Enable the Report Definition GUI: 1. Execute the following command as user virtuo:
update pe_datasource set report_timeslices=96 where report_timeslices=48;
commit;
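A minimal sketch of running this update from the command line, assuming the statement is executed in SQL*Plus against the vtdb instance as the virtuo schema owner (the same pattern used by report_configure_15m.sh above):

export ORACLE_SID=vtdb
{ echo "update pe_datasource set report_timeslices=96 where report_timeslices=48;"; echo "commit;"; } | sqlplus "virtuo/<password>"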

2. Restart the application server:


sap stop as
sap start as


5.4

Aggregation properties
The behavior of time and entity aggregators may change depending on the values of the following properties:
ForceSumSummarisation
ForceOverrideCounterTimeAggregator

By default these properties are set to False. The following applies if these properties are set to True:
ForceSumSummarisation - for complex KPIs with more than one counter or an expression, all aggregators are ignored and the aggregator SumSum is used instead.
ForceOverrideCounterTimeAggregator - overrides a counter's aggregation. For complex KPIs with only one counter and no expression, the counter aggregators are ignored and the KPI's aggregator is used.
See the Tivoli Netcool Performance Manager: User Guide - Wireless Component, for information on aggregation types.

Aggregation properties and complex KPI computation
The following rules apply by default to the computation of complex KPIs:
If ForceSumSummarisation = false and the complex KPI's effective time aggregator is min or max, the complex KPI is computed before time aggregation.
If ForceSumSummarisation = false and the complex KPI's effective entity aggregator is min or max, the complex KPI is computed after time aggregation.
The following table outlines the relationship between these properties, Min/Max aggregation, and whether complex KPI calculation occurs before or after time aggregation.
Table 10: Relationship between properties, aggregation types and complex KPI calculation

ForceSumSummarisation | ForceOverrideCounterTimeAggregator | Entity aggregation | Time aggregation | Before time aggregation | After time aggregation
False | False | Min or Max | Any AGGR but not Min or Max | N | Y
False | True  | Min or Max | Any AGGR but not Min or Max | N | Y
False | True  | All AGGR   | Min or Max                  | Y | N
False | False | All AGGR   | Min or Max                  | Y | N
True  | False | Min or Max | Min or Max                  | N | N
True  | True  | Min or Max | Min or Max                  | N | N
True  | True  | All AGGR   | Min or Max                  | N | N

For example, the first row of the table above should be read as: if ForceSumSummarisation is false and ForceOverrideCounterTimeAggregator is false, computation of a complex KPI will occur after time aggregation if the time aggregator is any aggregator except Min or Max and the entity aggregator is Min or Max.


Use the dbsysval_admin tool to change the values for ForceSumSummarisation and ForceOverrideCounterTimeAggregator. Use the -values option to view a current value. For example:
dbsysval_admin -values ForceSumSummarisation

Use the -update option to change a value. For example:
dbsysval_admin -update ForceOverrideCounterTimeAggregator False

After each successful value change of the data dictionary, kpicache_admin and summary_sync are run:
agent_admin -u sysadm -p <password> -run <activity_id> kpicache_admin -u sysadm -p <password> dsname summary_sync

5.5

Excel download properties

A property can be used to enable or disable the use of quotes for the object id field when a report is downloaded to Excel format; for example, whether the object id field (normally CELL_ID) is surrounded by quotes, "1-1-1-1", or not, 1-1-1-1. The property is com.ibm.tnpmw.reporting.export.quoteStrings; the default value is true. The property is located in the following file:
/appl/virtuo/conf/reporting/default.properties

If quoteStrings=false, then quote marks are not used when a report is downloaded to Excel format. If quoteStrings=true, or the property is set to some other value, then quote marks are used when a report is downloaded to Excel format. See Excel Downloads in the Tivoli Netcool Performance Manager: User Guide - Wireless Component, for information on downloading a report to Excel.
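For example, to turn off quoting you would set the property in that file as follows (a sketch of the single relevant line only):

com.ibm.tnpmw.reporting.export.quoteStrings=false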

5.6

Secondary keys

A primary key is the field that is used to uniquely identify a record in a database. A secondary key is an additional key, or alternate key, which can be used in addition to the primary key to locate specific data. The following is not supported for secondary keys:
provisioning of busy hours with secondary keys
complex summaries
reports with KPIs based on both traffic tables containing secondary keys and traffic tables not containing secondary keys
For a report with KPIs based on traffic tables having secondary key tables, KPIs must be associated with the same KPI group.


5.7

Maintaining property values for User Comments, Reports and MyFavorites

The dbsysval_admin tool can be used to view and change the values of a number of properties for the following: User Comments Reports MyFavorites To display the current value for a property use the -values option:
dbsysval_admin -values <property>

For example:
dbsysval_admin -values MyFavoritesMaxChartsDisplayed

To change the current value for a property use the -update option:
dbsysval_admin -update <property> <value>

For example:
dbsysval_admin -update MyFavoritesMaxChartsDisplayed 8

5.7.1

User Comments

The following values can be viewed and changed for user comments: ReportCommentMaxLength - the maximum number of characters for a user comment. The default is 250. See the Tivoli Netcool Performance Manager: User Guide - Wireless Component for more information on User Comments.

5.7.2

Reports

The following values can be viewed and changed for reports:
QuickReportMaxKPICharsShown - The maximum number of characters of a KPI name that are displayed in a table view of a report. The default is 20.
QuickReportTruncCharsReverse - The truncation direction of KPI names. The default is False; truncation occurs from the end of the KPI name. If the value is set to True, truncation occurs from the beginning of the KPI name, so the last n characters are displayed.
See the Tivoli Netcool Performance Manager: User Guide - Wireless Component for more information on viewing reports.


5.7.3

MyFavorites

The following values can be viewed and changed for a MyFavorites page:
MyFavoritesMaxTableRowsDisplayed - The maximum number of table rows displayed for a MyFavorites page. The default is 300.
MyFavoritesMaxChartsDisplayed - The maximum number of charts displayed for a MyFavorites page. The default is 5.
MyFavoritesMaxElementsDisplayed - The maximum number of elements displayed in a MyFavorites page. An element can be a graph, a table or a comment. The default is 30.
See the Tivoli Netcool Performance Manager: User Guide - Wireless Component, for more information on the MyFavorites page.


5.8
5.8.1

KPI Aliases and User Defined Groups


KPI Aliases

A KPI alias is a name that better identifies a KPI to a user. KPI aliases are used in report results instead of standard technology pack KPI names. KPI aliases are also used in report definitions. Only KPIs of the following field types can have an alias:
Peg
PCalc
Attribute
UDC
Where a Peg or PCalc KPI has an alias, all summaries and stored busy hours based on that KPI will also use the alias.
Table 11: Aliasing of summary and busy hour KPIs

KPI type          Technology Pack Name                  Alias
Raw               Nok.Traffic.tch_request               C1234 (aliased to the counter number)
Summary           Nok.Traffic.daily.tch_request         daily.C1234
Busy Hour         Nok.Traffic.sbhv.daily.tch_request    sbhv.daily.C1234
Complex Summary   Nok.Traffic.daily.max_tch_request     daily.max_C1234

See the Tivoli Netcool Performance Manager User Guide for more information on field types. Aliases should be used to map KPIs from technology packs to user defined names. Aliases should be used in conjunction with user defined groups to map the most relevant subset of KPIs from a technology pack. Note: Care should be taken not to alias very large numbers of KPIs, for example an entire technology pack. This will have a performance overhead during dictionary synchronization.

5.8.2

User defined groups

A user defined group is a group of KPIs and UDCs that are grouped together for some purpose. For example, grouping may reflect different network functions such as planning, optimization and management.

5.8.3

kpia_admin tool

The kpia_admin tool uses a file to map existing technology pack KPI names to alias names and user defined groups. A number of different file formats can be used to import the aliases and user defined groups into the system. See File formats for supported file types.


Usage
kpia_admin [ -i | -e ] -f <filename> { -d <base dir> } { -t [ alias | group | all ] } { -a [ true | false ] }

kpia_admin -r <group name>

The following table lists the kpia_admin tool parameters:


Table 12: kpia_admin parameters

-i -f <filename> (Mandatory)
  Import file. Specifies the file containing the KPI aliases and/or user defined group definitions that are to be imported into the system.
-e -f <filename> (Mandatory)
  Export file. Specifies the file to export all KPI aliases and user defined group definitions to.
-r <group name> (Mandatory)
  Remove group. Specifies the user defined group to be removed.
-d <base dir> (Optional)
  Specifies the base directory for the <filename> parameter. The base directory is the location from which all other (relative) pathnames are taken. If not provided the current directory is used.
-t [ alias | group | all ] (Optional)
  Type. Specifies whether to import only aliases, only user defined groups or all (both aliases and groups) in a file. By default all is used.
-a [ true | false ] (Optional)
  Append. Specifies whether to append pre-defined groups in a file. If not specified the default value is false. If append is set to false then existing associations to pre-defined groups for KPIs are removed. If append is set to true then associations to pre-defined groups are retained. A pre-defined group is a group defined by a KPI's technology pack. Note: This option is also available in the xml file format as an attribute of groups. The CLI append value overrides the xml option value. See XML file format.

5.8.4

Import aliases

To create an alias or a number of aliases, you import a file which maps the aliases. If no directory is specified, the file's location is assumed to be the current directory. To import an alias file:
kpia_admin -i -f <filename> -t alias kpia_admin -i -f <filename> -d <directory> -t alias

For example:
kpia_admin -i -f aliases.csv -t alias kpia_admin -i -f aliases.alias -d /appl/virtuo/admin/aliases -t alias


5.8.5

Import user defined groups

To create a user defined group or groups you import a file which maps the groups to KPIs. To import a user defined groups file:
kpia_admin -i -f <filename> -t group

For example:
kpia_admin -i -f userdefinedgroups.csv -t group kpia_admin -i -f userdefinedgroups.groups -t group

Append to existing user defined groups Append is used to specify whether to append user defined groups to a KPI's existing user defined groups. If append is set to false then existing associations to user defined groups for KPIs are removed, and replaced with the new user defined groups. If append is set to true then associations to existing groups are retained, and the new user defined groups are "appended" to the existing groups. If not specified the default value is false. Note: This option is also available in the xml file format as an attribute of groups. The CLI append value overrides the xml option value. See XML file format. To import a user defined groups file and append to existing groups:
kpia_admin -i -f <filename> -t group -a true

5.8.6

Import aliases and groups

To create both aliases and user defined groups, you import a file which maps the aliases and groups. To import a file containing both aliases and groups:
kpia_admin -i -f <filename> -t all

For example:
kpia_admin -i -f userdefinedgroups.csv -t all


5.8.7

Remove user defined groups

To remove a user defined group you specify the group to be removed. When a user defined group is removed using the -r option, it is removed for all KPIs. To remove a user defined group:
kpia_admin -r <user defined group name>

For example:
kpia_admin -r userdefinedgroupname1

5.8.8

Update and remove aliases and user defined groups

Aliases and user defined groups can be removed or changed by re-importing a file. Where a file has been imported to create aliases and groups, the same file can be used to update or remove aliases and groups by amending and re-importing the file. Note: If the -append option is set to true, existing user defined groups are retained. When unspecified, the -append option is set to false by default. The following example assumes the -append option is set to false. In this example the following csv file has been imported.
KPI name 1, KPI alias 1, KPI user defined group 1 KPI name 2, KPI alias 2, KPI user defined group 1

The file is then amended:


KPI name 1, KPI alias A, KPI user defined group 1 KPI name 2, KPI alias 2, KPI user defined group 2

When the file is imported again, KPI name 1's alias has been removed and replaced with KPI alias A. KPI name 2 is no longer a member of KPI user defined group 1, but is now a member of KPI user defined group 2. You can remove or update multiple aliases and user defined groups in a file, for all file types.


5.8.9

Export aliases and groups

To export all KPI aliases and user defined groups currently defined in the system, you specify the name and (optionally) the location of the file to export to. If you do not specify a location the current directory is used. Use the following file extensions: csv, xml, or kpia. It is these file extensions that contain both alias and group data. You cannot export KPI aliases and user defined groups separately. To export all KPI aliases and user defined groups:
kpia_admin -e -f <filename> kpia_admin -e -f <filename> -d <directory>

For example:
kpia_admin -e -f kpialiases1.csv kpia_admin -e -f kpialiases1.csv -d /appl/virtuo/admin/aliases/export

5.8.10

File formats

The kpia_admin tool supports a number of file formats that can be used to import and export KPI aliases and user defined groups. These include:
CSV
XML
Customized files: kpia; alias (import only); group (import only)

Valid KPIs
Only KPIs of the following field types can have an alias: Peg, PCalc, Attribute, UDC. See the Tivoli Netcool Performance Manager User Guide for more information on field types.

Restrictions
An alias does not have to be unique. Only one alias per KPI is permitted. An alias can be up to 50 characters long. An alias can only use alphanumeric characters and the underscore character. Alphabetic characters are case insensitive. An alias cannot begin with a numeric character. A user defined group name can be up to 50 characters long. A user defined group name can only use alphanumeric characters and the underscore character. Alphabetic characters are case insensitive. A user defined group name cannot begin with a numeric character. A KPI can be a member of multiple groups.


Note: If you specify a KPI more than once in a file the last entry is used to map aliases and groups. For example:
KPI name 1, KPI alias A, KPI user defined group 1 KPI name 1, KPI alias B, KPI user defined group 2

In this example, KPI name 1 will be mapped with KPI alias B and KPI user defined group 2. The first entry is ignored.

CSV The format for a csv file is KPI name, KPI alias and user defined group. For each KPI it is optional whether a KPI alias and a user defined group are specified. For example:
KPI name 1, KPI alias A, KPI user defined group 1 KPI name 2, KPI alias 2, KPI user defined group 1, KPI user defined group 2 KPI name 3, KPI alias 3, KPI name 4, , KPI user defined group 1, KPI user defined group 2

XML The format for an xml file is shown below. KPI alias, and user defined group are both optional. You can specify whether a KPI has an alias or is a member of a user defined group, or both. Append is also optional. If append is set to false then existing associations to pre-defined groups for KPIs are removed. If append is set to true then associations to pre-defined groups are retained. A pre-defined group is a group defined by a KPIs technology pack.
<fields>
  <field>
    <kpiname>kpi_name0</kpiname>
    <alias>alias0</alias>
    <groups append="true">
      <group>group01</group>
      <group>group02</group>
    </groups>
  </field>
  <field>
    <kpiname>kpi_name1</kpiname>
    <alias>alias1</alias>
    <groups>
      <group>group01</group>
      <group>group02</group>
    </groups>
  </field>
  <field>
    <kpiname>kpi_name1</kpiname>
    <alias>alias1</alias>
    <groups />
  </field>
  <field>
    <kpiname>kpi_name2</kpiname>
    <alias />
    <groups>
      <group>group01</group>
    </groups>
  </field>
</fields>

Customized file types
A number of customized file types can be used for:
aliases and groups, using the file extension kpia
aliases, using the file extension alias
groups, using the file extension group
For example,
userdefinedgroups1.group

A file using the appropriate format and extension must be used for customized file types. kpia This file format contains both alias and user defined groups in the following format:
kpi_name0 = alias0 | group01, group02
kpi_name1 = alias1 | group11, group12, group13
# This is a comment line

alias Used for import only. This file format contains only aliases in the following format:
kpi_name0 = alias0
kpi_name1 = alias1

group Used for import only. This file format contains only user defined groups in the following format:
kpi_name0 = group01, group02, group03

5.8.11

Log files

The kpia_admin log file location is: /appl/virtuo/logs/kpia/kpia.log


5.9

KPI Browser configurable parameters

Configuration of the KPI Browser is achieved using a number of system parameters and properties files. The following are used and can be configured:
System variables stored in WM_SYSTEM_VALUES_V
Service properties:
  Scheduler properties, in <deploy_area>/conf/simplereporting/default.properties
  Back-end cache properties, in <deploy_area>/conf/as/cache-default.properties

5.9.1

Configurable system variables

A number of parameters are available as configurable system variables, and are used to control the following aspects of the KPI Browser: Default aspects of graph rendering. Maximum and minimum limits for resource, KPI and graph selections. User interface caches. Report execution and queuing. The following table describes parameters that can be set for the KPI Browser.
Table 13: Configurable parameters

KPIB_Enable3d (Boolean, allowed values: NA, default: False)
  Default state of 3D mode for Graphs. When true the graphs are displayed in 3D.
KPIB_EnableToolTip (Boolean, allowed values: NA, default: False)
  Default state of tooltip mode for Graphs. When true the graphs have tooltips associated with them showing the value of each member in the graph.
KPIB_EnableLegend (Boolean, allowed values: NA, default: True)
  Default state of Legend mode for Graphs. When true the graphs are displayed with their legends.
KPIB_ModeStandard (Boolean, allowed values: NA, default: True)
  When true, KPI Selection mode is standard. When false, the user defined mode is selected. In alias mode the user defined alias is used for KPI selection if available. If not available, the long name is used.
KPIB_MaxResourceSelection (Int, allowed values: 1+, default: 20)
  Max number of resources that can be selected.
KPIB_MaxKpiSelection (Int, allowed values: 1+, default: 20)
  Max number of KPIs that can be selected.
KPIB_IncrementRawHourMin (Int, allowed values: 1+, default: 1)
  Min Increment value allowed when Field Interval is Raw.
KPIB_IncrementRawHourMax (Int, allowed values: 1+, default: 336)
  Max Increment value allowed when Field Interval is Raw. Must be equal to or greater than KPIB_IncrementRawHourMin.
KPIB_IncrementRawDayMin (Int, allowed values: 1+, default: 1)
  Min Increment value allowed when Field Interval is Raw.
KPIB_IncrementRawDayMax (Int, allowed values: 1+, default: 14)
  Max Increment value allowed when Field Interval is Raw. Must be equal to or greater than KPIB_IncrementRawDayMin.
KPIB_IncrementDailyDayMin (Int, allowed values: 1+, default: 1)
  Min Increment value allowed when Field Interval is Daily and Time is Day.
KPIB_IncrementDailyDayMax (Int, allowed values: 1+, default: 28)
  Max Increment value allowed when Field Interval is Daily and Time is Day. Must be equal to or greater than KPIB_IncrementDailyDayMin.
KPIB_IncrementDailyWeekMin (Int, allowed values: 1+, default: 1)
  Min Increment value allowed when Field Interval is Daily and Time is Week.
KPIB_IncrementDailyWeekMax (Int, allowed values: 1+, default: 4)
  Max Increment value allowed when Field Interval is Daily and Time is Week. Must be equal to or greater than KPIB_IncrementDailyWeekMin.
KPIB_IncrementWeeklyMin (Int, allowed values: 1+, default: 1)
  Min Increment value allowed when Field Interval is Weekly and Time is Week.
KPIB_IncrementWeeklyMax (Int, allowed values: 1+, default: 8)
  Max Increment value allowed when Field Interval is Weekly and Time is Week. Must be equal to or greater than KPIB_IncrementWeeklyMin.
KPIB_IncrementMonthlyMin (Int, allowed values: 1+, default: 1)
  Min Increment value allowed when Field Interval is Monthly.
KPIB_IncrementMonthlyMax (Int, allowed values: 1+, default: 12)
  Max Increment value allowed when Field Interval is Monthly. Must be equal to or greater than KPIB_IncrementMonthlyMin.
KPIB_GraphGridLayoutDefaultXaxis (Int, allowed values: 1+)
  Default number of graphs available in the X axis that you see on screen. This can be changed dynamically from within the screen.
KPIB_GraphGridLayoutMaxXaxis (Int, allowed values: 1+, default: 10)
  Max number of graphs available in the X axis. Must be equal to or greater than KPIB_GraphGridLayoutDefaultXaxis.
KPIB_GraphGridLayoutDefaultYaxis (Int, allowed values: 1+)
  Default number of graphs available in the Y axis that you see on screen. This can be changed dynamically from within the screen.
KPIB_GraphGridLayoutMaxYaxis (Int, allowed values: 1+, default: 10)
  Max number of graphs available in the Y axis. Must be equal to or greater than KPIB_GraphGridLayoutDefaultYaxis.
KPIB_ReportPollingInitialFrequency (Int, allowed values: 1000+, default: 2000)
  The number of milliseconds between polls on the client to check if the KPI Browser report has finished. Once finished the report is displayed on screen.
KPIB_ReportPollingTimeToFrequencyShift (Int, allowed values: 1000+, default: 5000)
  The number of milliseconds before the client moves to a different time interval for polling the server to see if the KPI Browser report has finished.
KPIB_ReportPollingSecondaryFrequency (Int, allowed values: 1000+, default: 20000)
  The time interval in milliseconds between polls on the client to check if the KPI Browser report has finished. This value is used only when the shift from initial to secondary frequency has occurred.
KPIB_retryTimes (Int, allowed values: 0+)
  Number of times to retry the running of a report when a polling of a report has timed out.
KPIB_ReportPollingTimeOut (Int, allowed values: 1+, default: 30000)
  The number of milliseconds before a polling for a report is considered to have timed out.
KPIB_SupportedGraphTypes (String, allowed values: comma delimited integers, default: 5,0,1,2,4,6,7,11,14,15,17,18,19,20)
  A list of comma delimited integers that represent graph types that are available for the KPI Browser report. The CSV order represents the order in which the graph names are displayed in the UI. See Valid graph values.
KPIB_GlobalFilterEntityUICacheTimeout (Int, allowed values: 0+, default: 300)
  Time in seconds before unique selections held in cache for the Entity Selection expire.
KPIB_FieldTypeUICacheTimeout (Int, allowed values: 0+, default: 300)
  Time in seconds before unique selections held in cache for the Field Type expire.
KPIB_SBHListUICacheTimeout (Int, allowed values: 0+, default: 300)
  Time in seconds before unique selections held in cache for the SBH expire.
KPIB_KpiGroupListUICacheTimeout (Int, allowed values: 0+, default: 300)
  Time in seconds before unique selections held in cache for KPI Group expire.
KPIB_ResourceSearchUICacheTimeout (Int, allowed values: 0+, default: 300)
  Time in seconds before unique selections held in cache for Resource Search expire.

Change a value The dbsysval_admin tool can be used to view and change values. To display the current value for a parameter use the -values option, as user virtuo:
dbsysval_admin -values <property>

For example:
dbsysval_admin -values KPIB_Enable3d

To change the current value for a property use the -update option, as user virtuo:
dbsysval_admin -update <property> <value>

For example:
dbsysval_admin -update KPIB_Enable3d true

Valid graph values
The following table lists each graph and its corresponding value.

Table 14: Allowed Graphs

Value  Graph
0      Bar
1      Stacked Bar
2      Superimposed Bar
3      Area
4      Stacked Area
5      Line Plot
6      Scatter
7      Stair
11     Stacked Polyline
14     Stacked Stair
15     Summed Stair
17     Stacked 100 Bar
18     Stacked 100 Line Plot
19     Stacked 100 Area
20     Stacked 100 Stair


5.9.2

Configurable service properties

These are properties of the service used to run reports and cache data on the server. You can update values by amending the relevant service properties file:
Scheduler properties, in <deploy_area>/conf/simplereporting/default.properties
Back-end cache properties, in <deploy_area>/conf/as/cache-default.properties

default.properties
This properties file contains the property values for the simple report service.
Table 15: default.properties

executorthreads (Service: NA, Default: 10)
  Number of reports that can be executed concurrently. This is the number of threads in the simple report executor pool.
queue.length (Service: NA, Default: 1000)
  Maximum number of reports that can be queued, waiting for a thread to execute. Further report requests are cancelled automatically until there is space in the queue.
file.maxage (Service: simplereport_old, Default: 60 minutes)
  Maximum age of the report on disk before the service deletes it.
exceution.maxage (Service: simplereport_exec, Default: 10 minutes)
  Length of time in minutes before the service cancels a KPI Report that is running.
storedef (Service: NA, Default: False)
  It is possible to view the status of executing and executed reports using loadSimpleReportQueueDump.do from the web client. Setting this to true enables the display of the report definition in this web page. By default it is set to false to reduce memory footprint.

cache-default.properties
This properties file contains the property values for the server caches used by the KPI Browser.

Table 16: cache-default properties

Property group: Entity dimension field values
Description: Entities are cached as they are requested and have associated vendor and technology properties. The field value cache and the index cache are then built up over time as requests are made.

Property group: Entity dimension field indexes
Description: Requests are made based on vendor and technology. Matching entity indexes are retrieved from the Entity dimension field value cache, and cached here with a hash of the unique request. The field value cache and the index cache are then built up over time as requests are made.

Property group: KPI fields
Description: All KPIs are cached and have associated vendor, technology, entity and KPI Groups. Uses the same caching mechanism as the entity cache but entities are replaced by KPIs.

Property group: KPI indexes
Description: Requests are made based on vendor, technology, entity and KPI Group. Matching KPI indexes are retrieved from the KPI fields cache, and cached here with a hash of the unique request.

Property group: simple report results
Description: Cache of each requested report. If a request for the same report is received it will be fetched from cache based on staleness.limit and exceution.maxage.

Property group: reportCBResults
Description: Reads the serialized report result file into memory for rapid access. MaxObjects limits the number of report result files held in memory at a given time (default 5). MaxMemoryIdleTimeSeconds is the period of time to hold the reports in memory in seconds (default 5 minutes).


Operations Tasks

This chapter describes a number of operations tasks and housekeeping activities that are essential for the operation of the system. These include:
Daily Loader Operations Tasks
Loader Housekeeping
Application directory management
Although not regular tasks, the following is also described:
Stability Settings

6.1

Daily Loader Operations Tasks

In a Tivoli Netcool Performance Manager system there may be a number of loaders running for a number of technologies. The loader process runs constantly taking data from the loader spool directories and loading it into the performance database. The loader process is critical to the functioning of the system. The following should be done on a daily basis:
Check Loader status
Check for bad files

6.1.1

Checking Loader Status

To check the status of the loaders, complete the following as user virtuo: 1. Enter the command:
sap disp

Output will be displayed listing the loaders that are started and loaders that are stopped. For example:
[virtuo]sap disp
NAME                        STATE    SINCE
as                          STARTED  Mar 14, 2007
nc_cache                    STARTED  Mar 14, 2007
alarm_cache                 STARTED  Mar 14, 2007
load_ericssongsmbssneutral  STARTED  Mar 14, 2007
load_ericssongsmbss         STARTED  Mar 14, 2007
load_nokiagsmbssneutral     STARTED  Mar 14, 2007
load_nokiagsmbss            STARTED  Mar 14, 2007
load_motorolagsmbss         stopped  -
load_motorolagsmbssneutral  stopped  -
load_motorolaumtsutran      stopped  -
load_ericssonumtsutran      stopped  -

2. If the loaders have not started correctly, check the loader logs for startup issues. Loader log files are stored in the $WMCROOT/logs/loader directory. Note: nc_cache and alarm_cache are processes with which all loaders communicate. They must be started prior to starting the loaders. nc_cache provides a common view of all network configuration (nc) information to the loader processes. This ensures that the nc data is synchronized between all loaders. For example, if loader A rehomes a cell then loader B is made aware of this rehoming event. alarm_cache provides a common view of all alarms raised by the loaders.

6.1.2

Checking for bad files

The following directories should be checked for large amounts of bad files:
$WMCROOT/var/loader/spool/<datasourcename>/<datasourceversion>/bad
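For example, to get a quick count of the bad files for a single datasource (substitute your own datasource name and version), run the following as user virtuo:

ls $WMCROOT/var/loader/spool/<datasourcename>/<datasourceversion>/bad | wc -l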

If there is a large number of bad files in any of these directories, the loader log files should be checked. Loader log files are stored in the $WMCROOT/logs/loader directory. The following is an example output from a loader log file.
12:23:51,653 INFO [FileDispatcher] New files found in the datasource. Refresh completed.
12:23:51,756 INFO [DataLoader] TaskId[2]. File processing started for [INUSE.MSC_1-#CELL_DATA-#-BSS9.20081016.12.00.1-2-1.lif],
12:23:53,537 INFO [LoadMapBlockPool] TaskID[2], [CELL_DATA] LoadMap block loaded successfully
12:23:54,175 INFO [ExpressionMapPool] TaskID[2], [CELL_DATA] LoadMap block compiled successfully
12:23:55,182 INFO [MasterCacheRequest] TaskID[2], Master Request. No Blocks [100 Diffs 1].
12:23:55,191 INFO [MasterCacheRequest] TaskID[2], Table [NC_BSC] version[0]. Number of lookups [2]. Number of diff [1]. Times [,Wed Feb 25 10:40:00 GMT 2009,Wed Feb 25 13:56:00 GMT 2009]
12:23:55,193 INFO [MasterCacheRequest] TaskID[2], Table [NC_CELL] version[0]. Number of lookups [100]. Number of diff [1]. Times [,Wed Feb 25 10:40:00 GMT 2009,Wed Feb 25 13:56:00 GMT 2009]
12:23:55,318 INFO [MasterCacheResponse] TaskID[2], Master Response. Return Code [RELOAD]
12:23:55,320 INFO [MasterCacheResponse] TaskID[2], Table[NC_CELL] version[32]. Number of new/updated rows[1].
12:23:55,377 INFO [MasterCacheRequest] TaskID[2], Master Request. No Blocks [100 Diffs 1].
12:23:55,379 INFO [MasterCacheRequest] TaskID[2], Table [NC_CELL] version[32]. Number of lookups [100]. Number of diff [1]. Times [,Wed Feb 25 10:40:00 GMT 2009,Wed Feb 25 13:56:00 GMT 2009]
12:23:55,575 INFO [MasterCacheResponse] TaskID[2], Master Response. Return Code [RELOAD]
12:23:55,576 INFO [MasterCacheResponse] TaskID[2], Table[NC_CELL] version[null]. Number of new/updated rows[100].
12:23:55,599 INFO [MasterCacheRequest] TaskID[2], Master Request. No Blocks [100 Diffs 0].
12:23:55,601 INFO [MasterCacheRequest] TaskID[2], Table [NC_CELL] version[32]. Number of lookups [0]. Number of diff [0]. Times []
12:23:55,613 INFO [MasterCacheResponse] TaskID[2], Master Response. Return Code [OK]
12:23:55,859 INFO [TableDAO] TaskID[2], Table [VNL_CELL_SDCCH_TAB], Evaluated [100] rows with null data for one or more of the columns [[V165GLHAHL26SEC5R00HW01QK4]]
12:23:55,921 INFO [TableDAO] TaskID[2], Merged [100] rows into traffic table [VNL_CELL_SDCCH_TAB]
12:23:56,038 INFO [TableDAO] TaskID[2], Merged [100] rows into traffic table [VNL_CELL_INTERFERENCE_TAB]
12:23:56,287 INFO [TableDAO] TaskID[2], Merged [100] rows into traffic table [VNL_CELL_HANDVR_RSLT_TAB]
12:23:56,358 INFO [TableDAO] TaskID[2], Table [VNL_CELL_TCH_TAB], Evaluated [100] rows with null data for one or more of the columns [[V1LL006AHL26SEC5R00HW01QK4]]
12:23:56,428 INFO [TableDAO] TaskID[2], Merged [100] rows into traffic table [VNL_CELL_TCH_TAB]
12:23:56,560 INFO [DAHandler] TaskID[2], Calculated DA information da_loaded_tables[4] wml_loaded_table_interval[2] da_loaded_blocks[0]
12:23:56,560 INFO [DAHandler] TaskID[2], DA:Not writing data to tables yet. Cache size[6], Max Size[1000]
12:23:56,562 INFO [AlarmHandler] TaskID[2], Sending [0] alarm objects to the alarm cache from file [/appl/virtuo/var/loader/spool/nokiabss_oss31ed3/OSS3.1_ED3/INUSE.MSC_1-#CELL_DATA-#-BSS9.20081016.12.00.1-2-1.lif]
12:23:56,563 INFO [FinalHandler] TaskID[2], File processing finished for [/appl/virtuo/var/loader/spool/nokiabss_oss31ed3/OSS3.1_ED3/INUSE.MSC_1-#-CELL_DATA-#-BSS9.20081016.12.00.1-2-1.lif] with result [true] loading time [4] sec
12:23:56,563 INFO [FinalHandler] TaskID[2], No of processed blocks [100]; No of bad blocks: [0]; No of unprocessed blocks: [0]
12:23:57,629 INFO [DAHandler] TaskID[2], DA:Writing data to tables. da_loaded_tables[4] wml_loaded_table_interval[2] da_loaded_blocks[0]
12:23:57,689 INFO [DAAbstractDAO] TaskID[2], Merged [4] rows into table [wml_loaded_table_interval] for period [Start : Wed Feb 25 10:40:00 GMT 2009 End Wed Feb 25 11:10:00 GMT 2009]
12:23:57,720 INFO [DAAbstractDAO] TaskID[2], Merged [4] rows into table [wml_loaded_table_interval] for period [Start : Wed Feb 25 13:56:00 GMT 2009 End Wed Feb 25 14:26:00 GMT 2009]
12:23:57,747 INFO [DAAbstractDAO] TaskID[2], Merged [4] rows into table [da_loaded_tables] for period [Start : Wed Feb 25 10:40:00 GMT 2009 End Wed Feb 25 14:26:00 GMT 2009]

If the loader is sending lif files to the bad directory a number of SQL error messages may be generated. The example below shows an error in a loader log file.
12:29:08,764 ERROR [TrafficHandler] SQL_INSERT Could not insert [1] rows into table [NC_CELL]
SQL_INSERT Could not insert [1] rows into table [NC_CELL]
    at com.ibm.tivoli.tnpmw.loader.dao.NcDAO.insert(NcDAO.java:140)
    at com.ibm.tivoli.tnpmw.loader.cache.nc.master.NcMasterCache.persistData(NcMasterCache.java:692)
    at com.ibm.tivoli.tnpmw.loader.cache.nc.master.NcMasterCache.applyFileDiff(NcMasterCache.java:412)
    at com.ibm.tivoli.tnpmw.loader.cache.nc.master.NcMasterCache.processRequest(NcMasterCache.java:319)
    at com.ibm.tivoli.tnpmw.loader.cache.nc.master.rmi.NCCacheRemoteImpl.processRequest(NCCacheRemoteImpl.java:205)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at sun.rmi.server.UnicastServerRef.dispatch(Unknown Source)
    at sun.rmi.transport.Transport$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Unknown Source)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.sql.BatchUpdateException: ORA-12899: value too large for column "VIRTUO"."NC_CELL"."CELL_ID" (actual: 98, maximum: 50)
    at oracle.jdbc.driver.DatabaseError.throwBatchUpdateException(DatabaseError.java:343)
    at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:10720)
    at com.ibm.tivoli.tnpmw.loader.dao.NcDAO.insert(NcDAO.java:131)
    ... 15 more

The Oracle oerr ora command can be used to provide additional information about the problem. For example:
oerr ora 1400
01400, 00000, "cannot insert NULL into (%s)"

In some cases LIF files may go to the bad directory and there may be very little information in the error logs to track the problem. In such cases the log level can be set to a higher level to produce logs with more information. Note: It is advisable to run the loader at log level DEBUG only for short periods of time because the logs generated can be very large. Log level INFO is the normal logging requirement.

Changing the Loader Log Level
To change the loader log level of any loader while it is running, use the loader_admin command. 1. Enter the following command, as user virtuo:
loader_admin -loglevel <loglevel> -instance <instance_name>

There are three settings for <loglevel>: DEBUG, INFO and ERROR. Note: You can also set the loader log level along with other loader properties using loader_admin -load <properties_xml> -instance <instance_name>.
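For example, to switch a single loader instance to DEBUG logging while investigating a problem (the instance name shown is illustrative), run the following as user virtuo and set the level back to INFO when you are finished:

loader_admin -loglevel DEBUG -instance load_nokiagsmbss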


Log Levels
The following table lists the three logging levels.

Table 17: Logging Levels

Logging Level   Description
DEBUG           This level indicates low level messages that can be used to analyze component processing data.
INFO            This level indicates activity in the system at certain points in the operation, such as starting up or shutting down. It will show whether a lif file was successfully processed or not, and which traffic tables were updated.
ERROR           This indicates errors that can be recovered from. This will be true for almost all error handling.

DEBUG Level Logging - Impacts on Performance
Log levels can be set for various parts of the system. In normal operation the log level for all of the subsystems should be set to INFO. DEBUG level can adversely impact performance.

6.2

Loader Housekeeping
The following housekeeping tasks should be done on a daily basis:
Disk space usage
Loader configuration
Configuring multiple identical loaders

6.2.1

Disk Space Usage

LIF files are constantly being parsed, loaded and moved to either good or bad directories. If a problem occurs LIF files may be parsed but not loaded. This may cause the $WMCROOT/var/loader/spool/ <datasourcename>/<datasourceversion> filesystem to fill up, preventing any new files from being created. After LIF files are processed they should be removed from the system, otherwise the $WMCROOT/var/ loader/spool/<datasourcename>/<datasourceversion> filesystem may fill up. 1. To check filesystem allocation and usage (in particular for /spool), enter the following command as user virtuo:
$df -k


6.2.2

Loader Configuration

The loader_admin -load and -unload commands can be used to change the configuration of a loader. The unload command exports a loader configuration to an xml file:
loader_admin -unload <properties_xml> -instance <instance_name>

These properties can be edited and then reloaded with:


loader_admin -load <properties_xml> -instance <instance_name>

Properties operate on a per-loader instance basis. Changes to properties of one instance will not affect the operation of another instance, even an instance of the same technology pack. The following is a list of editable properties:
alarmcache.aged - retention in minutes before alarms are cleared automatically.
alarms.enabled - can be true or false. Indicates whether alarm generation is turned on or off.
datasource.bad - path to the folder for failed files.
datasource.folder - source folder for lif files to be parsed.
datasource.good - path to the folder for successful files.
error.handling.mode - describes whether files are processed in file mode or block mode. In file mode the whole file fails if one block is bad. In block mode only the failed block is lost. 0 indicates file mode, 1 indicates block mode.
log.folder - destination folder for log files.
log.level - log level. Can be DEBUG, INFO or ERROR.
reread.interval - how often to check for configuration changes.
thread.pool.LifStreamProcessor - sets the number of threads for each section of the loader.
thread.pool.TrafficHandler - sets the number of threads for each section of the loader.
thread.pool.groups - defines the handlers that are to be used in loading.
log.rehome.folder - destination folder for the rehoming log.
log.rehome.filename - destination file for the rehoming log.
log.filename - destination file for the standard log.
timezone - the timezone that the LIFs being loaded were generated for.
itm.interval - how often to log IBM Tivoli Monitoring (ITM) metrics.
itm.folder - destination folder for the ITM log.
itm.file - destination file for the ITM log.
itm.enabled - whether ITM logging is enabled. See the Tivoli Netcool Performance Manager ITM Installation and Configuration Guide - Wireless Component, for more information.
When new properties are loaded, a running loader will check for changes to properties that start with log. and rehoming. These properties will be reloaded without a restart. All other properties require a restart of the loader instance in order to take effect. A typical edit cycle is shown in the example after this list.
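For example, a typical edit cycle for one loader instance might look like the following; the instance name and temporary file path are illustrative:

loader_admin -unload /tmp/loader_props.xml -instance load_nokiagsmbss
(edit /tmp/loader_props.xml, for example to change log.level or a thread.pool value)
loader_admin -load /tmp/loader_props.xml -instance load_nokiagsmbss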


Note: Threading properties. Threading is used to improve the overall performance of the loaders. The optimal setting is:
thread.pool.TrafficHandler = 4
thread.pool.LifStreamProcessor = 2

That is, 2 threads parsing the lif file and 4 threads writing the data to the traffic tables. Writing data to traffic tables takes twice as long as lif parsing. While it is possible to increase the number of threads beyond this, this results in a degradation in performance due to thread swapping.

6.2.3

Configuring multiple identical loaders

Under certain circumstances more than one loader of a particular type may need to be configured and started. This is especially true when a single loader is unable to cope with the number of incoming LIF files. Multiple loaders need to be configured in order to handle the incoming data.
See the Tivoli Netcool Performance Manager: Installation Guide - Wireless Component, for information on configuring multiple loaders.

6.3

Stability Settings

Flapping, otherwise known as oscillating Network Configuration data, occurs when LIF files are loaded which have slightly different Network Configuration parentage information for the same node id. This scenario may occur if two loaders are loading from different data sources and, for some reason, there are differences in the Network Configuration data uploaded from the different data sources. The stability period is the time during which it is permissible for re-parenting of an access key to take place. The default value of 2 hours, i.e. 7200 seconds, is set in the loader properties in the database. The property name is rehoming.allow.all. It can be changed using the loader_admin -load <properties_xml> -instance <instance_name> command.

6.4

Application directory management

Management of space in the /appl folder to prevent log files filling up the disk
It is important for the efficient running of the system to monitor and manage the disk space of the /appl directory. To check the /appl directory's usage, execute the following command as user virtuo:
df -k /appl


6.4.1

Directory contents

The appl directory contains the following:
Oracle 10g Software install /appl/oracle
Tivoli Netcool Performance Manager software /appl/virtuo
TDS (Tivoli Directory Server) ldap instance directories /appl/
Archive directory /appl/archive - this is not mandatory but may have been created during an upgrade
Install packages and artifacts - not mandatory
Tivoli Netcool Performance Manager log files.

6.4.2

Tivoli Netcool Performance Manager log files

The majority of logs in /appl are generated to the /appl/virtuo/logs directory. Each part of the application has a log file under the directories:
alarmapi as conf_read database external_reporting generic itm lcm loader part_maint sap_cli sapmgr sapmgr_cli sapmon sapmon_cli sbh_ext_tab_upgrader sbh_sk_remover storedbusyhour summariser vmm web

6.4.3

Loader log files

Loaders can generate a large amount of logging information, especially if the data loading is generating error messages. The /appl directory can increase in size quickly due to this. Review the appropriate cron entry to assess whether loader log files are archived with adequate frequency.


Loader errors should be resolved quickly to avoid constant generation of large log files.

6.4.4

Loader LIF file directory

The default loader LIF directory is located under /appl/virtuo/var/loader/spool. If LIF files are not removed from the system with adequate frequency, the /appl directory can increase in size quickly. It is recommended that the loader LIF directory be moved to a location other than /appl. This can be achieved by creating a soft link. It is also possible to change the following loader properties: datasource.folder, datasource.good, datasource.bad, log.folder, log.rehome.folder and itm.folder. See Loader Configuration.
Oracle log files
By default the Oracle vtdb instance runtime logs are generated to the following directory:
/appl/oracle/admin/vtdb

It is recommended to have the Oracle logs generated to a location other than /appl. This can be achieved by creating a soft link.
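As an illustration only, the directory could be relocated and replaced with a soft link as follows; /data/oracle_logs is a hypothetical target on a filesystem with sufficient space, and the database should be stopped and file ownership preserved when doing this:

mv /appl/oracle/admin/vtdb /data/oracle_logs/vtdb
ln -s /data/oracle_logs/vtdb /appl/oracle/admin/vtdb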

6.4.5

Crontab entries

Crontab entries are used to manage log files and report generation. Cron entries should be periodically reviewed to assess whether log files are archived with adequate frequency. Altering the frequency of the cron jobs may be required if very large logs are seen to be produced. Note: If the system is distributed over multiple servers, the log files will be on the server where the service is running. For example, the Oracle logs will be on the database server and application server logs will be on the application server in the locations specified. See Crontab setup for default root user and virtuo user crontab entries.
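To review the current entries, list the crontab as user virtuo, and again as user root:

crontab -l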


Datasource, Agent and KPI Cache Administration


This chapter describes the following:
Datasource Administration. See Datasource Administration on page 77.
Agent Maintenance. See Agent Maintenance on page 79.
KPI Cache Management. See KPI Cache Management on page 88.

7.1

Datasource Administration

Datasources provide the system with the necessary performance data for reports. A Datasource is typically a server that contains entity and performance data information. After a software installation users and datasources need to be created. Users and datasources are created as part of the installation process. Note: See the Tivoli Netcool Performance Manager: Installation Guide - Wireless Component, for more information on the installation process. The following command is used to create default users and datasources after installation:
sys_init -u <admin_user> -p <admin_pass> -h <host>

Alternatively, use a web browser and complete the following: 1. Insert the following in a web browser:
http://<hostname>:8080/sysinit

where <hostname> is the hostname for the machine. 2. Log in with the administration userid <admin_user> and the administration password <admin_pass>. After completion of the initialization of the default users, users are maintained using GUI User Administration tool. See User management on page 36. Datasources are maintained by the ds_admin script. The ds_admin script can perform the following task:

Display existing datasources. See Listing Datasources on page 78. The ds_admin script is located in:
$WMCROOT/bin

The ds_admin script can be run from any directory as user virtuo.

7.1.1

Usage
ds_admin [-asconf conf_name] parameters
 - List Data Sources: -u user -p password -list
 - Activate Data Source: -u user -p password -activate dsname
 - Deactivate Data Source: -u user -p password -deactivate dsname

Note: The -asconf option will be used for multiple instances of the application server. It is reserved for use or removal of future implementations of the product.

7.1.2

Listing Datasources
1. Execute the ds_admin script using the following syntax:

ds_admin -u <user> -p <password> -list

The parameters to be included with the ds_admin script to list datasources are described in Table 18.
Table 18: Parameters for Listing Datasources

Option          Description
-u <user>       Username.
-p <password>   Password.

Example: The following is sample list output.

[hostname:virtuo] ds_admin -u <user> -p <password> -list
Name   Host       Active   Enterprise   Local   JDBC
name   hostname   true     true         true    jdbc:oracle:thin:@tralee:1521:vtdb
name   hostname   true     false        false   jdbc:oracle:thin:@tralee:1521:vtdb

7.1.3

Activating a Datasource
Note: The -activate option is used during the installation of the product. This option is currently reserved for use only by an application installation administrator.


7.1.4

Deactivating a Datasource
Note: The -deactivate option is reserved for future use or removal of future implementations of the product.

7.2

Agent Maintenance

When the system is installed, the scheduler process is configured to run certain standard administrative jobs/activities on a regular schedule. The administrative jobs/activities are maintained and monitored by two command line tools: schedule_admin
agent_admin

Refer to Schedule administration on page 92 for more information on the schedule_admin tool. The agent_admin tool is used to monitor and manage agent activities executed by the agent framework. These agent activities usually relate to supporting Tivoli Netcool Performance Manager activities such as creating summaries, synchronizing the LDAP database and cleaning up agent/file activities.

7.2.1

Overview of Agent Activities

Agents collect information from datasources and perform maintenance on the product. It is the responsibility of the agents to collect information from the datasources to:
Populate a set of tables (data dictionary) which defines the entities and fields available from the datasource.
Keep the source system database information synchronized with the local web tables.
Once the information is populated a user can run a report via the web using the information to query the datasource via a dynamic SQL query. The query results are then available to the user via a web browser.


The types of tasks the agents perform fall into four categories:
Database
Procedural
Sweeper
Summary
Table 19 below describes the agent activities currently running for a typical installation. Note: Some of the agent activities in Table 19 are marked reserved. This indicates the agent activity is reserved for use or removal in future implementations of the product. Do not stop these activities from running as it may result in system consequences. These jobs have minimal impact on your system resources.
Table 19: Agent Types and Descriptions

Procedure - Agent Activity Cleanup: The procedure agent runs every hour to remove old agent activity information. The procedure agent runs first on initial use of the administration software.

Procedure - Temporary Report and Schedule Cleanup: The procedure agent runs every hour to remove temporary reports and schedules. The procedure agent runs first on initial use of the administration software.

Procedure - Datasource cleanup: This procedure agent runs daily and cleans up deleted datasources (reserved).

Sweeper - Unused file deletion: The sweeper agent runs every 20 minutes to remove report results that are no longer active from the database and the file system. A consequence of this agent is the removal of report results from the Monitor tab.

LDAP Synchronization - LDAP Synchronization: The LDAP synchronization agent updates the database with information about the configured datasource; changes to the datasource properties may occur if another Tivoli Netcool Performance Manager server uses the same datasource. The LDAP agent also runs periodically, normally once every hour, to ensure the Tivoli Netcool Performance Manager server and Tivoli directory server are synchronized.

Procedure - Update Mapping of Rehomed Instances: Updates mapping of rehomed instances (reserved).

Instance Data - System Retrieve Entity Data: The instance data agent gathers information from the remote datasource that is used to track the network elements that are available for reports. The information is stored in tables maintained by the instance schema agent. The instance data agent runs automatically if the instance schema agent adds a new table or column. The instance data agent also runs periodically, usually every two hours, to update the information on the Tivoli Netcool Performance Manager server (reserved).

Data Dictionary - Data Dictionary Import: The data dictionary agent maintains the field and entity information on the Tivoli Netcool Performance Manager server. The information includes the following: version information for the datasource, the list of available entities, and the list of available fields. Data dictionary runs once when a new datasource is added to the system. This agent also runs periodically, usually every 2 hours, to ensure the Tivoli Netcool Performance Manager server is up to date. In an effort to avoid database errors, only one data dictionary agent is active at a time (reserved).

Summary - Summary Computations: Tivoli Netcool Performance Manager is configured to provide summary computations for all traffic counters gathered from a managed network element. The role of this agent is to assess if summaries need to be completed and to populate the database with the computed summaries. This agent runs every two hours and assesses what summaries are ready for creation. The Summary Computation agent ensures daily, weekly and monthly summary computations are performed.

Busy Hour - Busy Hour Calculation: Tivoli Netcool Performance Manager is configured to provide busy hour computations for traffic counters specified in busy hour definitions. The role of this agent is to assess if busy hours need to be completed and to populate the database with the computed busy hours. This agent runs every two hours and assesses what busy hours are ready for calculation. The Busy Hour Calculation agent ensures daily, weekly and monthly busy hour calculations are performed.


7.2.2

Agent activities and log files

Each of the agent activities produces an entry into the as-server.log file that details the activity, run time of the activity and status of the activity. The log file rolls over by size and then by date so you may notice that there are multiple log files each day. The log files are located in:
$WMCROOT/logs/as/default

7.2.3

agent_admin Command Line Tool

The agent_admin tool is used to monitor and manage agent activities executed by the agent framework. These agent activities typically relate to supporting system activities such as creating summaries, synchronizing the LDAP database and cleaning up agent/file activities. The agent_admin script can perform the following tasks: Display current activities. See Listing Current Activities on page 82. Listing past activities. See Listing Past Activities on page 84. Display activity logs. See Activity Logs on page 86. Running activities. See Running Activities on page 87. Cancelling activites. See Cancelling Activities on page 87. The agent_admin script is located in:
$WMCROOT/bin

This tool can be run from any directory as user virtuo. Usage
agent_admin [-asconf conf_name] parameters
 - Current activities: -u user -p password -list current
 - Past activities: -u user -p password -list past
 - Activity log: -u user -p password -logs id run_id
 - Run activity: -u user -p password -run id
 - Disable activity: -u user -p password -disable id
 - Enable activity: -u user -p password -enable id
 - Cancel activity: -u user -p password -cancel id

Note: The -asconf option will be used for multiple instances of the application server. It is reserved for future implementations of the product.

Listing Current Activities
Current activities are agent activities that have not been completed; this includes activities that are running and activities that are waiting to run. Complete the following to display the list of current activities running for the system.

1. Execute the agent_admin script using the following syntax:


agent_admin -u <user> -p <password> -list current

The parameters to be included with the agent_admin script to list current activities are described in Table 20.
Table 20: Parameters for Listing Current Activities

Option          Description
-u <user>       Username.
-p <password>   Password.
-list current   Lists current activities.

Example: The following command is sample list output of current activities.


[hostname:virtuo] agent_admin -u <user> -p <password> -list current Activity Data Start 1 :00 4 null 2 :00 12 51 null 570 null 97 null 48 Run Type Data Source Attempt Retry tralee null 1 tralee null State Entity Cancel Label N null 2006-10-02 17:25 Agent activity cleanup Active Start End

Data End PROCEDURE null null null INSTANCE null null null null null SWEEPER tralee PROCEDURE

SCHEDULED 1 N 1 N N null N

SCHEDULED

2006-10-04 12:34:00 Unused file deletion N null 2006-10-04 12:55 Temporary report N System 2006-10Retrieve

SCHEDULED

and schedule cleanup tralee-rs null tralee null SCHEDULED 1 N N 1 04 12:59:00 entity data 5 :00 13 14 :00 8 :00 3 :00 85 null 44 45 null 4 null 4 null LDAP_SYNC null SCHEDULED null 2006-10-04 13:04 N N null N null N null N null null 2006-10Data dictionary import 2006-09-13 14:34:16 tralee-rs null null tralee null tralee null null 1 1 1 N N

Ldap synchronization DICTIONARY null null null null null null SCHEDULED N 04 13:51:00

SUMMARY tralee-rs PROCEDURE

SCHEDULED SCHEDULED

2006-10-04 14:03 Summary computations 2006-10-04 14:55 Update Mapping o 2006-10-05 02:00 Datasource clean=up

f Rehomed Instances. PROCEDURE null null SCHEDULED 1 N

Current Activity Properties
Table 21 describes the information available for each current activity. This information is useful when analyzing a list of current activities.


Table 21: Current Activity Properties

Property     Description
Activity     The unique ID of the agent activity, type and label.
Run          The Run ID of the unique agent activity type currently running.
Type         See Table 19 for a description of agent types.
Datasource   The datasource affected by the activity.
State        Shows whether the activity is running or in the queue.
Cancel       Determines whether the activity was cancelled. If an activity was cancelled it is shown on the current list until the activity ends on the server.
Active       The next time an activity is expected to run.
Start        Date and time when the activity started (if available).
End          The time a failed activity ended.
Data Start   The start time of gathering traffic data from a datasource.
Data End     The end time of gathering traffic data from a datasource.
Attempt      The number of times a failed activity is retried.
Retry        Shows whether the activity has previously failed.
Entity       The entity of the datasource affected by the activity.
Label        See Table 19, for agent activity labels and descriptions.

Note: The Activity property ID uniquely identifies an agent activity based on a combination of the agent label and type.

Listing Past Activities
A past activity can be one of the following agent activities:
Completed
Failed
Cancelled
The number of days of past activities displayed depends on the setup of data retention and partitions during installation. All past agent activities are provided, based on the Enterprise agent retention setting chosen during installation.


Complete the following to list past activities: 1. Execute the agent_admin script using the following syntax:
agent_admin -u <user> -p <password> -list past

The parameters to be included with the agent_admin script to list past activities are described in Table 22.
Table 22: Parameters for Listing Past Activities

Option          Description
-u <user>       Username.
-p <password>   Password.
-list past      Lists past activities.

Example: The following is sample list output of past activities.


[hostname:virtuo] agent_admin -u <user> -p <password> -list past Activity Type Data source state Cancel Active Start End Entity Label Message 5 12 1 2 5 1 14 2 13 271 136 271 271 270 270 133 270 136 LDAP_SYNC null System null null null null null null null INSTANCE PROCEDURE PROCEDURE LDAP_SYNC PROCEDURE tralee COMPLETED FAILED 2006-11-14 18:15:09 null 2006-11-14 17:42:09 Entity System not found. 2006-11-14 17:33:09 null 2006-11-14 17:30:09 null 2006-11-14 17:15:29 null 2006-11-14 16:33:09 null 2006-11-14 16:31:09 null 2006-11-14 16:30:09 null 2006-11-14 16:26:19 No entity data exists fo 2006-11-14 16:15:19 null 2006-11-14 15:42:29 Entity System not found. 2006-11-14 15:33:19 null 2006-11-14 15:30:19 null 2006-11-14 15:15:09 null 2006-11-14 14:33:29 2006-11-14 18:15:10 2006-11-14 17:42:11 2006-11-14 17:33:09 2006-11-14 17:30:10 2006-11-14 17:15:30 2006-11-14 16:33:10 2006-11-14 16:31:09 2006-11-14 16:30:10 2006-11-14 16:26:20 r datasource tralee-rs. 5 12 1 2 5 1 269 135 269 269 268 268 LDAP_SYNC null System null null null INSTANCE PROCEDURE PROCEDURE LDAP_SYNC PROCEDURE tralee COMPLETED FAILED 2006-11-14 16:15:20 2006-11-14 15:42:31 2006-11-14 15:33:20 2006-11-14 15:30:20 2006-11-14 15:15:10 Ldap synchronization tralee-rs tralee tralee tralee tralee Retrieve entity data COMPLETED COMPLETED COMPLETED COMPLETED Agent activity cleanup Ldap synchronization tralee-rs tralee tralee tralee tralee Retrieve entity data COMPLETED COMPLETED COMPLETED COMPLETED COMPLETED COMPLETED FAILED Agent activity cleanup

Temporary report and schedule cleanup Ldap synchronization Agent activity cleanup Summary computations tralee

SUMMARY tralee-rs PROCEDURE DICTIONARY

Temporary report and schedule cleanup tralee-rs Data dictionary import

Temporary report and schedule cleanup Ldap synchronization


2006-11-14 14:33:29 14 132 2006-11-14 14:31:29

null null

Agent activity cleanup COMPLETED Summary computations

null 2006-11-14 14:31:29 null

SUMMARY tralee-rs

Past Activity Properties
Table 23 describes the information available for each past activity. This information is useful when analyzing a list of past activities.
Table 23: Past Activity Properties

Property     Description
Activity     The unique ID of the agent activity, type and label.
Type         See Table 19 for a description of agent types.
Datasource   The datasource affected by the activity.
State        Shows whether the activity completed or failed.
Cancel       Determines whether the activity was cancelled.
Active       The next time an activity is expected to run.
Start        Date and time when the activity started (if available).
End          The time the activity ended.
Entity       The network entity of interest for the specific activity.
Label        See Table 19, for agent activity labels and descriptions.
Message      Displays server messages related to the activity. Messages are typically received for failed activities.

Activity Logs
The agent activity log contains a complete view of all the agent activities, which includes current and past agent activities. The agent_admin script allows you to view an activity for a specific run of an agent type. 1. Execute the agent_admin script using the following syntax:
agent_admin -u <user> -p <password> -logs <id> <run_id>

The parameters to be included with the agent_admin script to display activity logs are described in Table 24.


Table 24: Parameters for Listing Activity Logs

Option                Description
-u <user>             Username.
-p <password>         Password.
-logs <id> <run_id>   Lists activity logs, for the agent activity type id and the agent activity run_id.

Note: You can use the -list current option to determine the agent activity type.
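For example, to display the log of one particular run (the id and run_id values shown are illustrative; take real values from the -list current or -list past output):

agent_admin -u sysadm -p <password> -logs 5 271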

Running Activities
1. Execute the agent_admin script using the following syntax:
agent_admin -u <user> -p <password> -run <id>

The parameters to be included with the agent_admin script to run an activity are described in Table 25.
Table 25: Parameters for Running Activities

Option          Description
-u <user>       Username.
-p <password>   Password.
-run <id>       Runs the activity, the id is the identity of the agent activity type to run.

Note: You can use the -list current option to determine the agent activity type.
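For example, to run the activity with id 8 immediately (use -list current to find the id of the activity you want to run):

agent_admin -u sysadm -p <password> -run 8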

Cancelling Activities
You should stop a current activity if the activity continues to fail and is not expected to succeed. Cancelling an activity will only cancel the current activity. It will not stop the agent from performing future scheduled activities. Caution: Stopping an agent activity may stop an important aspect of the data acquisition process or server maintenance. 1. Execute the agent_admin script using the following syntax:
agent_admin -u <user> -p <password> -cancel <id>

The parameters to be included with the agent_admin script to cancel an activity are described in Table 26.


Table 26: Parameters for Cancelling an Activity

Option          Description
-u <user>       The administration user name.
-p <password>   The administration password.
-cancel <id>    Cancels the activity, the id is the identity of the activity to cancel.

Note: You can use the -list current option to determine the agent activity type.
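For example, to cancel the current run of the activity with id 5 (illustrative; confirm the id with -list current first):

agent_admin -u sysadm -p <password> -cancel 5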

7.3

KPI Cache Management

The kpicache_admin tool can be used to export/import UDCs from/into the system in a format compatible with previous releases. In particular, it applies to UDCs that are not attached to a report or template, as is the case in previous releases. When used to export UDCs, it exports all UDCs in the system. The kpicache_admin tool is also for internal synchronization processes and for debugging. Note: Do not run more than one of the following tools, or more than one instance of any of these individual tools, at the same time: techpack_admin, sbh_admin, summary_admin, kpicache_admin or report_impexp. For example, do not run summary_admin and sbh_admin, or two instances of summary_admin, at the same time.

Usage
kpicache_admin
INFO  conf set to default
ERROR USAGE: kpicache_admin parameters
 - Synchronize the KPI cache: -u user -p password dsname
 - Dump UDCs: -u user -p password -d filename dsname
 - Load UDCs: -u user -p password -l filename dsname

Note: The -asconf option will be used for multiple instances of the application server. It is reserved for use or removal in future implementations of the product.

7.3.1

Exporting User Defined Calculations

To export UDCs: 1. Execute the kpicache_admin script using the following syntax:
kpicache_admin -u <user> -p <password> -d <filename> <dsname>


The parameters to be included with kpicache_admin script for dumping the UDC are described in Table 27. Example:
kpicache_admin -u sysadm -p <password> -d /tmp/UDC_export.xml crosshavenz2-rs

This will result in all provisioned UDCs being exported to a file.


Table 27:
Option
-u <user> -p <password> -d <filename> <dsname>

Parameters for Dumping the UDC

Description

Username. Password. Dumps the file <filename> for the datasource <dsname>. The path to where the file is exported must be specified or the script must be executed where the file is located. The name of the server must be used when specifying the <dsname>, and not the IP address.

7.3.2

Importing User Defined Calculations


1. Run the kpicache_admin script using the following syntax:

kpicache_admin -u <user> -p <password> -l <filename> <dsname>

The parameters to be included with the kpicache_admin script for loading the UDC dump are described in Table 28.

Example:

kpicache_admin -u sysadm -p <password> -l /tmp/UDC_import.xml crosshavenz2-rs

Table 28: Parameters for Loading the UDC

Option                   Description
-u <user>                Username.
-p <password>            Password.
-l <filename> <dsname>   Loads the file <filename> for the datasource <dsname>. The path to where the file is located must be specified or the script must be executed where the file is located. The name of the server must be used when specifying the <dsname>, and not the IP address.

7.3.3

Synchronize internal computation engine KPI cache

After importing UDCs, the user must re-build the internal computation engine KPI cache with the kpicache_admin synchronisation option. Complete the following procedure to synchronize the cache: 1. Run the kpicache_admin script using the following syntax:


kpicache_admin -u <user> -p <password> <dsname>

The parameters to be included with the kpicache_admin script for synchronizing the KPI cache are described in Table 29. Example:
kpicache_admin -u sysadm -p <password> crosshavenz2-rs

Table 29: Parameters for Synchronizing KPI Cache

Option          Description
-u <user>       Username.
-p <password>   Password.
<dsname>        Name of the datasource to be synchronized.

The UDCs will not be visible to the user from the UI until the GUI cache has also been re-built using the DICTIONARY service. This can be done manually using the agent_admin CLI tool. Example:
agent_admin -u sysadm -p <password> -list current | grep DICTIONARY
agent_admin -u sysadm -p <password> -run 8

In this example '8' is the job id of the DICTIONARY service. The DICTIONARY service is also scheduled to run every two hours. Note: Re-synchronisation of the KPI cache for a datasource option is for internal use to ensure the datasource synchronisation process occurs correctly and allows for debugging. It does not perform any cache function, it only synchronizes current datasource tables. To update tables so the UDCs are visible to the user from the GUI, the dictionary synchronisation process should be executed by schedule or manually using agent_admin script.


System Maintenance

Maintenance of the system includes using scripts, processes, and programs that enable the Tivoli Netcool Performance Manager server to run at optimal performance levels. System maintenance tasks include the following tasks:
Scheduling system maintenance
Reporting on server status
Managing the Oracle database
Database space administration
Partition maintenance
Managing disk space usage
Working with log files
Loader LIF file directory
Java client processes
Filesystem backups


8.1

Schedule administration

The scheduler process is configured to run certain standard administrative jobs on scheduled dates and times. You can change the date and time a job is run. The administrative jobs and activities are maintained and monitored by two command line tools: schedule_admin
agent_admin

Refer to agent_admin Command Line Tool on page 82 for more information on the agent_admin tool. To ensure reliable handling of data access and storage, the system uses several maintenance jobs and agent activities. The jobs invoked by the system perform the following functions:
Maintain database partition, creation and cleanup.
Perform job task cleanup.
Synchronize information between various subsystems and tables.
Perform summary computations.
The schedule_admin tool is used to monitor and manage the jobs relating to core activities such as partitioning the database and miscellaneous task/job cleanup chores.

8.1.1

Scheduled jobs

Important: These schedule jobs: aggregator, file_missing, scenario_activation, bhupdate, smupdate, bh_clean, bh_summary currently have no functional impact on the system. Do not stop these jobs from running as it may result in system consequences. These jobs have minimal impact on your system resources. Table 30 below provides a description of the current scheduled jobs controlled by the schedule_admin tool. These jobs are currently in use.
Table 30: Scheduled Job Descriptions

misc_clean (Cleanup): Task-status, active-task, active-job, schedule, pm-schedule logs are purged every two hours.

pm_daily (Partition Maintenance): Purges loaded traffic data and partitions that are older than a specified number of days. The number of days specified is set by the administrator during installation. Compresses data on the disk. Compressing a disk reorganizes daily data and reclaims as much as 40% of the disk. It also improves query time by 25% to 70%. Creates new daily future partitions to store data.

pm_weekly (Partition Maintenance): Purges summary data and busy hour partitions that are older than the date specified during installation, for summary and busy hour determinations. Compresses data on the disk. Compressing a disk reorganizes daily data and reclaims as much as 40% of the disk. It also improves query time by 25% to 70%. Creates new weekly partitions to store data. The default scheduled time for the pm_weekly job to run is 20:00:00.

pm_monthly (Partition Maintenance): Purges summary and busy hour data and partitions that are older than the date specified during installation, for summary and busy hour determinations. Creates new monthly partitions to store data related to date-time scope and holiday definitions. Compresses data on the disk. Compressing a disk reorganizes daily data and reclaims as much as 40% of the disk. It also improves query time by 25% to 70%. The default scheduled time for the pm_monthly job to run is 20:01:00.

event_clean (Cleanup): Removes handled-events from the database that are older than the 30-day retention period on a daily basis. Default scheduled time is 22:00:00.

rgfp (Cleanup): Report group file purge and log files cleanup occurs three times per hour.

The schedule_admin script is used to schedule all administrative tasks. It is located in:
$WMCROOT/bin

The schedule_admin script can be run from any directory as user virtuo.

8.1.2

Usage
schedule_admin [-dbconf conf_name] parameters
 - List job types: -list types
 - List all jobs: -list all
 - List set job type cleanup times: -list cleanup
 - List jobs to be executed: -list next
 - Change job type limit: -limit job_type max
 - Enable job type: -enable job_type
 - Disable job type: -disable job_type
 - Set job type cleanup period: -setcleanup job_type period
   - period expressed in minutes
 - Schedule a job: -schedule job_name date time
   - date format: yyyymmdd
   - time format: hhmm
 - Schedule a job immediately: -schedule job_name immediate
 - Turn off a job: -schedule job_name off
 - Turn on a job: -schedule job_name on


Note: The -dbconf option is reserved for future use or removal in future implementations of the product. The list of administrative options for the schedule_admin script are described in Administrative options for the schedule_admin script on page 95.

8.1.3

Scheduling system maintenance

Schedule administration tasks by executing the following command using the required syntax:
schedule_admin -schedule <job_name> <administrative option>

Example: The command below schedules the job pm_daily to run at 11:30 on the 27/08/2007.
schedule_admin -schedule pm_daily 20070827 1130

8.1.4

Listing the status of all scheduled jobs

The following command lists the status of all scheduled jobs in the system:
schedule_admin -list all

Example output:
JOB FAILURE
|==============================================================================|
| Job                   | Last Run            | Failed              | Duration |
|                       | Time                | Time                | HH:MM:SS |
|=======================|=====================|=====================|==========|
JOB SUCCESS
|==============================================================================|
| Job                   | Last Run            | Completed           | Duration |
|                       | Time                | Time                | HH:MM:SS |
|=======================|=====================|=====================|==========|
| misc_clean            | 2006/09/01 16:00:13 | 2006/09/01 16:00:15 | 00:00:02 |
| bh_clean              | 2006/09/01 17:00:12 | 2006/09/01 17:00:13 | 00:00:01 |
| pm_daily              | 2006/09/01 10:13:46 | 2006/09/01 10:14:28 | 00:00:42 |
| pm_weekly             | 2006/09/01 10:13:16 | 2006/09/01 10:13:22 | 00:00:06 |
| pm_monthly            | 2006/08/25 20:02:39 | 2006/08/25 20:05:25 | 00:02:46 |
| event_clean           | 2006/09/01 10:15:59 | 2006/09/01 10:16:00 | 00:00:01 |
|==============================================================================|
JOBS RUNNING
|========================================================|
| Job                   | Last Run            | Duration |
|                       | Time                | HH:MM:SS |
|=======================|=====================|==========|
| bh_summary            | 2006/09/01 17:00:40 | 48:44:00 |
|========================================================|
NEXT RUN JOBS
|=============================================|
| Job                   | Next Run            |
|                       | Time                |
|=======================|=====================|
| pm_weekly             | job not scheduled   |
| rgfp                  | job not scheduled   |
| bh_clean              | job not scheduled   |
| pm_monthly            | job not scheduled   |
| pm_daily              | job not scheduled   |
| scenario_activation   | job not scheduled   |
| misc_clean            | job not scheduled   |
| event_clean           | job not scheduled   |
|=============================================|

8.1.5

Administrative options for the schedule_admin script


Table 31: schedule_admin script

Option                               Description
-list types                          Lists the available job types.
-list all                            Lists the status of all the scheduled jobs on the Tivoli Netcool Performance Manager platform. Job names are: misc_clean, pm_monthly, pm_weekly, pm_daily, event_clean, rgfp, analyze_user_data.
-list cleanup                        Lists set job type cleanup times.
-list next                           Lists jobs to be executed.
-limit <job_type> [max]              Changes the job type limit.
-enable <job_type>                   Enables a particular job type.
-disable <job_type>                  Disables a particular job type.
-setcleanup <job_type> <period>      Set a cleanup period for a set job type. The period must be entered in minutes.
-schedule <job_name> <date> <time>   Schedule a particular job. Date must be entered in YYYYMMDD format and time must be entered in HHMM format.
-schedule <job_name> immediate       Schedule a particular job to run at the current time.
-schedule <job_name> off             Turn off a job.
-schedule <job_name> on              Turn on a job.
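For example, to take the event_clean job out of the schedule temporarily, put it back, and then run it straight away:

schedule_admin -schedule event_clean off
schedule_admin -schedule event_clean on
schedule_admin -schedule event_clean immediate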


8.2

Reporting on server status

Some healthchecks can be performed on the system to verify it is stable. If the Tivoli Netcool Performance Manager server is rebooted, the following process checks can be performed to check all processes started successfully:
Database check
Directory server check
SAPMON check
Tivoli Netcool Performance Manager check
Log files check
Database monitoring

8.2.1

Database check

Important: In a distributed system this check should only be performed on the server hosting the Database component. 1. As user root enter the following:
Solaris: svcs database-na
Linux: service dboravirtuo status
AIX: /etc/rc.d/init.d/dboravirtuo status

8.2.2

Directory server check

Important: In a distributed system this check should only be performed on the server hosting the directory server. 1. As user root enter the following:
Solaris: svcs tds-na
Linux: service tdsna status
AIX: /etc/rc.d/init.d/tdsna status


8.2.3

SAPMON check

Important: In a distributed system this check should only be performed on the servers hosting the Application and Gateway components. 1. As user root enter the following:
Solaris: svcs sapmon-na
Linux: service sapmonvirtuo status
AIX: /etc/rc.d/init.d/sapmonvirtuo status

One process should be returned.

8.2.4

Tivoli Netcool Performance Manager check

Important: In a distributed system this check should only be performed on the server hosting the Application component. The application framework processes are started automatically when the server is rebooted. 1. As user virtuo enter the following command to check the Tivoli Netcool Performance Manager processes:
sap disp -l

A sample of the output is shown below.


[hostname:virtuo] sap disp -l
NAME         STATE    SINCE         HOST         GROUP        STIME         PID
as           STARTED  Oct 23, 2008  <core_host>  asgroup      Oct 23, 2008  17277
nc_cache     STARTED  Oct 29, 2008  <core_host>  loadercache  Oct 29, 2008  6716
alarm_cache  STARTED  Oct 29, 2008  <core_host>  loadercache  Oct 29, 2008  6726
...........

2. Ensure there are no exceptions in the processes log files.

The location of the processes log files is set by the WMCLOGDIR variable in the $WMCROOT/conf/environment/default.properties file. These log files are usually located in $WMCROOT/logs.

8.2.5

Log files check

The majority of the Tivoli Netcool Performance Manager server log files are stored in $WMCROOT/logs. The main log files to check after restarting any of the processes are:
$WMCROOT/logs/as/default/*
$WMCROOT/logs/loader/*
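For example, a quick way to spot recently logged exceptions after a restart (the exact file names depend on how your logs roll over):

grep -il exception $WMCROOT/logs/as/default/* $WMCROOT/logs/loader/*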


8.2.6

Database monitoring

Note: The following checks should be completed by an Oracle Database Administrator. The following are some quick status checks that can be completed.
1. Alert Log Monitoring - Check the $ORACLE_BASE/admin/vtdb/bdump/alert_vtdb.log file for Oracle errors or warnings.
2. Oracle processes trace files - Check the $ORACLE_BASE/admin/vtdb/bdump/*.trc files.
3. Oracle listener status - Check the $ORACLE_HOME/network/log/listener.log file.
4. System Global Area (SGA) Memory Monitoring. Run the following command to provide free memory information for the main pools in the Oracle SGA:
export ORACLE_SID=vtdb; echo "select * from v\$sgastat;" | sqlplus virtuo/<password> | grep "free memory"

For information on tablespaces, see Database space administration on page 105.

8.2.7

Operating system checks

Complete the following operating system checks from your operating system:
System resource utilization
Virtual memory status
Available memory status
Processor sanity check


8.3

Managing the Oracle database


The following sections describe in detail the following database management functions:
Starting and stopping the Oracle database
Types of Oracle backups
Redo logs
Performing hardware diagnostics
Restoring data from backups

8.3.1

Starting and stopping the Oracle database

By default when the Tivoli Netcool Performance Manager server is started or shut down, the system is automatically configured to start and shut down the Oracle database.
Disabling automatic startup and shutdown of the Oracle database
To disable the automatic settings and allow manual startup and shutdown of the Oracle database, complete the following:
1. Log in to the database as OS user oracle.
2. Open the oratab file located in:
Solaris /var/opt/oracle Linux /etc/ AIX /etc/

Database entries in the oratab file appear in the following format:


$ORACLE_SID:$ORACLE_HOME:<N|Y> Y signifies

that you want the system configured so that the database is automatically started upon system bootup and automatically shutdown when the system is shutdown. that you want to manually startup and shutdown the database.

N signifies

3. Find the entries for all the databases that you want to change. They are identified by the sid in the first field. Change the last field for each entry to N. Example:
vtdb:/appl/oracle/product/10.2.0/db_1:N

where /appl/oracle/product/10.2.0/ is the $ORACLE_HOME variable. Usually you should only have one entry for the SID vtdb.
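If you prefer to script this change, the following is a minimal sketch only, assuming the Solaris oratab location shown above and the default SID vtdb; it is not part of the product tooling:

# Back up oratab, then switch the vtdb entry from automatic (Y) to manual (N)
cp /var/opt/oracle/oratab /var/opt/oracle/oratab.bak
sed 's|^\(vtdb:.*\):Y$|\1:N|' /var/opt/oracle/oratab.bak > /var/opt/oracle/oratab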


Manually shutting down the Oracle database
1. As user virtuo display the processes currently running:

sap disp

2. Stop all SAP processes, as user root:

Solaris  svcadm disable sap-na
Linux    service sapvirtuo stop
AIX      /etc/rc.d/init.d/sapvirtuo stop

3. Monitor the process status and wait for all processes to stop, as user virtuo:

sap disp

4. Switch to OS user oracle:

su - oracle

5. Start SQL*Plus, and connect to the database as the DBA administrator:

$ export ORACLE_SID=vtdb
$ sqlplus /nolog
SQL> connect / as sysdba

6. Shut down the database and exit:

SQL> shutdown immediate
SQL> exit

Manually starting the Oracle database
1. Switch to the OS user oracle:

su - oracle

2. Start SQL*Plus, and connect to the database as the DBA administrator:

$ export ORACLE_SID=vtdb
$ sqlplus /nolog
SQL> connect / as sysdba

3. Start the database:

SQL> startup
SQL> exit

4. Start all SAP processes, as user root:

Solaris  svcadm enable sap-na
Linux    service sapvirtuo start
AIX      /etc/rc.d/init.d/sapvirtuo start

5. Monitor the process status and wait for all processes to start, as user virtuo:

sap disp


8.3.2 Types of Oracle backups

Database backup is an important process that needs to be set up and carried out on the Tivoli Netcool Performance Manager platform on a regular basis. Data is constantly being read from and written to the Oracle database, so care must be taken to archive the data correctly.

Important: Back up the Oracle database regularly. Backups of the database should be taken more often than backups of the Tivoli Netcool Performance Manager system.

Oracle software offers two methods for backing up the database:
offline
online

Use of Oracle Recovery Manager (RMAN) is the preferred method for both online and offline database backups. For manual user-driven or cron-driven backups, only offline backups are recommended for a Tivoli Netcool Performance Manager database; online user-driven backup is not recommended. For an offline backup, you must shut down the database before archiving data.

Before you can determine which method is appropriate for your installation, you need to consider several factors:
Size of available auxiliary storage.
Cost of auxiliary storage required by the backup scheme.
Level of server availability to users during backup.
Ease of data restoration.

For any of these backup modes, the auxiliary storage should be large enough to hold the backup image without operator intervention.

Online backup using RMAN, with the database running in archive log mode, is the preferred backup method. Online backup has the following advantages:
The database remains available to users and data loading continues at all times.
You do not need to back up the entire database at the same time.
It allows the quickest and most flexible recovery.

For complete information on performing online and offline backups, see your Oracle documentation:
Oracle Database Backup and Recovery Basics, 10g Release 2 (10.2). Part Number B14192-03.
Oracle Database Backup and Recovery Advanced User's Guide, 10g Release 2 (10.2). Part Number B14191-02.
Oracle Database Backup and Recovery Reference, 10g Release 2 (10.2). Part Number B14192-03.


8.3.3 Redo logs

Oracle software creates log files called redo logs, each of which contains a sequential log of actions applied to the database. These redo logs accumulate daily and contain up-to-the-minute details on database operations. By default, the Tivoli Netcool Performance Manager server installs the Oracle database in archive log mode. Although archive log mode requires more disk space for daily operations, it provides point-in-time recovery in case of data loss. You can restore data from the last online backup, then apply all archive logs since that date to return your database to its most current state.

Note: Archive logs can take up large quantities of disk space if you do not regularly back them up. If your backup schedule includes only one or two full backups per week, check the archive logs regularly if you are not backing them up in between.

8.3.4 Archiving redo logs

ARCHIVELOG mode
Each database should have ARCHIVELOG mode enabled. This allows the redo logs to be archived instead of being overwritten. Store the archive logs in a separate place from the rest of the database files and ensure that they are periodically backed up. The archive logs can be used if and when a restore of the database is required, to restore the database to the point in time when the database went down.

This section describes how to check if the database is in ARCHIVELOG mode and how to enable and disable ARCHIVELOG mode.

Checking ARCHIVELOG mode status
To check the ARCHIVELOG mode status, complete the following:
1. Log in as OS user oracle and enter the following commands:

$ export ORACLE_SID=<MYDB>

where <MYDB> is the name of the database

$ sqlplus /nolog
SQL> connect / as sysdba

2. To check the ARCHIVELOG mode status, enter the following SQL command:

SQL> archive log list;

If the database is in ARCHIVELOG mode the following output is returned:

Database log mode       Archive Mode
Automatic archival      Enabled

If the database is not in ARCHIVELOG mode the following is returned:

Database log mode       No Archive Mode
Automatic archival      Disabled


Updating ARCHIVELOG settings
Update the log destination where the archive log files are stored from the default to a location which has sufficient space to store the archive logs. The recommended location is /oradump/<MYDB>. Ensure the log destination exists prior to setting it; create the directory as OS user oracle if it does not already exist.

Update the target archive location by completing the following:
1. Log in as OS user oracle and enter the following commands:

$ export ORACLE_SID=<MYDB>

where <MYDB> is the name of the database

$ sqlplus /nolog
SQL> connect / as sysdba

2. To change the archive log destination, enter the following SQL command:

SQL> alter system set log_archive_dest='/oradump/<MYDB>' scope=both;
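For example, a sketch of the complete sequence, run as OS user oracle and assuming the SID vtdb used elsewhere in this guide (the directory name simply follows the /oradump/<MYDB> recommendation):

$ mkdir -p /oradump/vtdb
$ export ORACLE_SID=vtdb
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> alter system set log_archive_dest='/oradump/vtdb' scope=both;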

Enabling ARCHIVELOG mode
To enable ARCHIVELOG mode, complete the following:
1. Log in as OS user oracle and enter the following commands:

$ export ORACLE_SID=<MYDB>

where <MYDB> is the name of the database

$ sqlplus /nolog
SQL> connect / as sysdba

2. To enable ARCHIVELOG mode, enter the following SQL commands:

SQL> shutdown
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;

3. To check the ARCHIVELOG mode status, enter the following SQL command:

SQL> archive log list;
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oradump/<MYDB>
Oldest online log sequence     7
Next log sequence to archive   7
Current log sequence           9

The database is now in ARCHIVELOG mode.

Disabling ARCHIVELOG mode
To disable ARCHIVELOG mode, complete the following:
1. Log in as OS user oracle and enter the following commands:

$ export ORACLE_SID=<MYDB>

where <MYDB> is the name of the database

$ sqlplus /nolog
SQL> connect / as sysdba

2. To disable ARCHIVELOG mode, enter the following SQL commands:

SQL> shutdown
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;

3. To check the ARCHIVELOG mode status, enter the following SQL command:

SQL> archive log list;
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            /oradump/<MYDB>
Oldest online log sequence     7
Current log sequence           9

The database is now in NOARCHIVELOG mode.

8.3.5 Performing hardware diagnostics

If you encounter a problem such as a corrupt database, diagnostic checks on your hardware are recommended before restoring data. Other database errors can surface when you restore a missing or corrupt file. For example, when mounting a database, Oracle software stops when it encounters the first error, even though more than one error might exist. If you perform hardware diagnostics before restoring the missing file, you might find other corrupt files as well.

8.3.6 Restoring data from backups

How you restore data depends on the procedures you used to back it up, as well as what you need to restore. For complete information on data restoration procedures, see your Oracle documentation.


8.4 Database space administration

The dbspace_admin tool allows you to monitor database space usage, add tablespaces, drop tablespaces, add datafiles and modify datafiles.

Note: Only experienced Oracle database administrators should perform tablespace operations.

The tool is located in:
$WMCROOT/bin/dbspace_admin

The dbspace_admin script can be run from any directory as user virtuo.

8.4.1 Usage

Usage: [-stats] [-addfile] [-addtbs] [-droptbs] [-modifyfile]

Database space stats:
  -stats
    Display the tablespace and datafile usage statistics.

Add a data file:
  -addfile -tbsname <tablespace_name> -dirpath <dir_path> -extsize <size_in_kb> -totalsize <size_in_mb>
    Add a datafile to an existing tablespace.
    Recommended values: -totalsize=20000M (20GB) or multiples of it. This is the max autoextend size of each datafile.
    Recommended values: -extsize=1024K for data tablespaces, -extsize=256K for index tablespaces.

Add a tablespace:
  -addtbs -tbsname <tablespace_name> -dirpath <dir_path> -extsize <size_in_kb> -totalsize <size_in_mb>
    Add a tablespace.
    Recommended values: -totalsize default value should be 20000MB (20GB). This is the max size it will autoextend to.
    Recommended values: -extsize=1024K for data tablespaces, -extsize=256K for index tablespaces.

Drop a tablespace:
  -droptbs -tbsname <tablespace_name>
    Drop an existing tablespace.
    This should only be used if the tablespace can be safely dropped without affecting the system.

Modify a data file:
  -modifyfile -filepath <file_path> -totalsize <size_in_mb>
    Modify the total/max size of an existing datafile.

The following sections detail how to complete the following:
Monitor Oracle tablespaces - list the space usage statistics by datafile and by tablespace
Add Oracle tablespaces - add a tablespace to the database
Add Oracle datafiles - add a datafile to an existing tablespace
Modify Oracle datafiles - modify the max size of a datafile
Drop Oracle tablespaces - drop an existing tablespace


8.4.2 Monitor Oracle tablespaces

Oracle tablespaces should be monitored regularly to ensure sufficient space is available for new data to load.

To monitor free space in the database:
1. Execute the following command:
dbspace_admin -stats

The -stats option is described in Table 32.

Table 32: Parameters for Monitoring Tablespaces

Option    Description
-stats    Lists the space usage statistics by datafile and by tablespace. Includes the amount of space used, space free and percentage space free. Also includes a flag to show when a tablespace has less than 10% free space.
          Datafile statistics: Tablespace Name, Tablespace file ID, Free (%), Actual (Mb), Free (Mb), Used (Mb), Max (Mb), File name.
          Tablespace statistics: Tablespace Name, Over, Free (%), Actual (Mb), Used (Mb), Free (Mb), Max (Mb).

8.4.3 Add Oracle tablespaces

Oracle database tablespaces can be added to the database. You should ensure that there is sufficient space in the target location for the amount of space you are allocating before attempting to add a tablespace. Database system privileges are needed to add tablespaces; you will be prompted for the system user password when executing the -addtbs option.

To add a tablespace:
1. Execute the following command:

dbspace_admin -addtbs -tbsname <tablespace_name> -dirpath <dir_path> -extsize <size_in_kb> -totalsize <size_in_mb>

The parameters used with the -addtbs option are described in Table 33.

Table 33: Parameters for Adding a Tablespace

Option       Description
-tbsname     The name of the tablespace to be created.
-dirpath     The path to the directory where the database file(s) will be created. If a total size of over 20Gb is specified, multiple datafiles will be added, each with a max size of 20Gb.
-extsize     The extent size for the new tablespace. This should be based on the projected growth rate of the tablespace.
-totalsize   The total size in megabytes for the new tablespace. If a total size of over 20Gb is specified, multiple datafiles will be added, each with a max size of 20Gb. The total size specified is the max size each datafile can autoextend to. The recommended default value is: -totalsize=20000MB (20Gb).

Example:
dbspace_admin -addtbs -tbsname VT_TEST_DATA -dirpath /oradata02/vtdb -extsize 1024 -totalsize 80000

This example shows the addition of an 80Gb tablespace, VT_TEST_DATA, to the directory /oradata02/vtdb. This will result in 4 x 20Gb datafiles in the target directory, each with an extent size of 1024Kb.

8.4.4 Add Oracle datafiles

Oracle datafiles can be added to existing tablespaces using the dbspace_admin tool. You should ensure that there is sufficient space in the target location for the amount of space you are allocating before attempting to add datafiles. Database system privileges are needed to add datafiles; you will be prompted for the system user password on executing the -addfile option.

To add a datafile:
1. Execute the following command:
dbspace_admin -addfile -tbsname <tablespace_name> -dirpath <dir_path> -extsize <size_in_kb> -totalsize <size_in_mb>

The parameters used with the -addfile option are described in Table 34.


Table 34: Parameters for Adding a Datafile

Option       Description
-tbsname     The name of the tablespace that the datafile(s) will be added to.
-dirpath     The path to the directory where the database file(s) will be created. If a total size of over 20Gb is specified, multiple datafiles will be added, each with a max size of 20Gb.
-extsize     The increment size for the new datafile(s). This should match the increment size of the other datafiles associated with the specific tablespace. Differs from extent size, which is set only on tablespace creation.
-totalsize   The total size in megabytes for the new datafile(s). If a total size of over 20Gb is specified, multiple datafiles will be added, each with a max size of 20Gb. The total size specified is the max size each datafile can autoextend to. Recommended default value: -totalsize=20000MB (20Gb).

Example:
dbspace_admin -addfile -tbsname VT_TEST_DATA -dirpath /oradata02/vtdb -extsize 1024 -totalsize 25000

This example shows the adding of 25Gb worth of datafiles to the tablespace, VT_TEST_DATA, in the directory /oradata02/vtdb. This will result in one 20Gb datafile and one 5Gb datafile in the target directory, each with an extent size of 1024Kb.

8.4.5 Modify Oracle datafiles

Oracle datafiles can be modified, or resized. You should ensure that there is sufficient space in the target location for the amount of space you are allocating before attempting to modify a datafile. Database system privileges are needed to modify datafiles; you will be prompted for the system user password on executing the -modifyfile option.

To modify a database file:
1. Execute the following command:
dbspace_admin -modifyfile -filepath <file_path> -totalsize <size_in_mb>

The parameters used with the -modifyfile option are described in Table 35.

Table 35: Parameters for Modifying a Datafile

Option       Description
-filepath    The file path to the database file which is being modified.
-totalsize   The total size in megabytes that the existing datafile will be resized to. It is not recommended to set the total max autoextend size to greater than 20Gb.


Example:
dbspace_admin -modifyfile -filepath /oradata02/vtdb/vt_test_data06.dbf -totalsize 20000

This example shows the modifying of a datafile, /oradata02/vtdb/vt_test_data06.dbf, with a new max size of 20Gb. This will result in the max size of the existing datafile being changed so that it can autoextend to a max of 20Gb. The actual size of the datafile will not be changed.

8.4.6 Drop Oracle tablespaces

Oracle tablespaces can be dropped from the database. Only non-system tablespaces can be dropped using the dbspace_admin tool. System tablespaces such as SYSTEM, SYSAUX, TEMP and UNDOTBS cannot be dropped.

Important: A tablespace must not be dropped from the system unless you are sure that the tablespace is no longer needed. When a tablespace is dropped, all associated tables and data will also be dropped.

Database system privileges are needed to drop tablespaces. You will be prompted for the system user password on executing the -droptbs option.

To drop a tablespace:
1. Execute the following command:
dbspace_admin -droptbs -tbsname <tablespace_name>

After a tablespace is dropped, the associated datafiles need to be manually removed from the system as the oracle user. The dbspace_admin tool will list the files that need to be removed. The parameters used with the -droptbs option are described in Table 36.

Table 36: Parameters for Dropping a Tablespace

Option       Description
-tbsname     The name of the tablespace to be dropped.

Example:
dbspace_admin -droptbs -tbsname VT_TEST_DATA

This example shows the dropping of a tablespace, VT_TEST_DATA, from the database. This will result in the tablespace and all associated datafiles being dropped from the database. The associated datafiles listed in the resulting output must be manually dropped from the system as the oracle user.


8.4.7 Resize an UNDO tablespace

An UNDO tablespace is used for implementing automatic undo management in the Oracle database. The Oracle database uses an UNDO tablespace for maintaining undo records, which are used to undo or roll back uncommitted changes made to the database.

To resize an UNDO tablespace, run the commands below on the database server as user virtuo:

$ export ORACLE_SID=<MYDB>
$ sqlplus SYS AS SYSDBA
SQL> SELECT file_name, tablespace_name, bytes/1024/1024 UNDO_SIZE_MB,
       SUM(bytes/1024/1024) OVER() TOTAL_UNDO_SIZE_MB
     FROM dba_data_files d
     WHERE EXISTS (SELECT 1 FROM v$parameter p
                   WHERE LOWER(p.name)='undo_tablespace'
                   AND p.value=d.tablespace_name);

The resulting size of the UNDO tablespace datafile can be calculated from the current size returned by the query above:

NEW_UNDOTBS_FILE_SIZE_MB = CURR_UNDOTBS_FILE_SIZE_MB + CURR_UNDOTBS_FILE_SIZE_MB * 0.2

Note: It is not recommended to grow any datafile above 20Gb in size. If a tablespace needs to grow over 20Gb, a new datafile has to be added.

Check if enough free disk space exists on the disk where the UNDOTBS file(s) are placed. If free disk space is available and the resulting size of the datafile is less than 20Gb, resize the UNDO tablespace datafile with the command:
SQL> ALTER DATABASE DATAFILE '<undotbs_file_name>' RESIZE <new_undotbs_size>M ;
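As a worked example of the formula above: if the current UNDO datafile is 10000 MB, the new size is 10000 + 10000 * 0.2 = 12000 MB. The datafile path below is illustrative only:

SQL> ALTER DATABASE DATAFILE '/oradata01/vtdb/undotbs01.dbf' RESIZE 12000M;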

If new files have to be added to the UNDO tablespace run the command:
SQL> ALTER TABLESPACE <undotbs_name> ADD DATAFILE 'new_file_name' SIZE <20%_of_current_undotbs_size>M ;

Check the current size of the UNDO tablespace:


SQL> SELECT file_name, tablespace_name, bytes/1024/1024 UNDO_SIZE_MB,
       SUM(bytes/1024/1024) OVER() TOTAL_UNDO_SIZE_MB
     FROM dba_data_files d
     WHERE EXISTS (SELECT 1 FROM v$parameter p
                   WHERE LOWER(p.name)='undo_tablespace'
                   AND p.value=d.tablespace_name);


8.5 Partition maintenance

Partition maintenance is a system maintenance job which is responsible for the creation, deletion and optimization of date range partitioned tables. It enables a seamless process of creating, deleting, moving and analyzing partitions without impacting the performance of the system.

Tivoli Netcool Performance Manager uses partitioned, index organized tables (IOTs) for storing traffic data. This ensures that large data tables are stored in a structured format, with the aim of minimizing the usage of space and optimizing the performance of report generation.

The partition maintenance job deals with the adding of partitions, to ensure that sufficient partitions exist in the future, and with the purging of old data from the system. The adding and deleting of data is based on configurable data retention settings within the partition maintenance process.

As well as creating and deleting data, the partition maintenance process helps optimize database performance by:
analyzing tables and gathering partition level statistics.
moving data in IOT partitions to compress and optimize the data.

8.5.1 Partition maintenance jobs

The partition maintenance process is automated through several jobs that are scheduled to run automatically. These jobs are used to automatically create partitions for future dates and also to remove partitions where old unwanted data is stored. The scheduler automatically sets partition maintenance to run at a specified time nightly. The partition maintenance jobs that are executed are described in Table 30. To change the default time when partition maintenance jobs are executed, refer to Schedule administration on page 92.

8.5.2 Amend the partition maintenance job configuration

If the partition maintenance job is always cancelled by the timeout (by default 6 hours), resulting in some missing table partitions, amend the partition maintenance job configuration to increase the speed of the partition maintenance job. This will ensure that all partitions are created during the partition creation window.

Tuning the Oracle database parameters
As database user SYS AS SYSDBA from SQL*Plus, verify the current setting for the job_queue_processes parameter as follows:
1. Log in as OS user oracle and enter the following commands:

$ export ORACLE_SID=<MYDB>

where <MYDB> is the name of the database

$ sqlplus /nolog
SQL> connect sys/<system_password> as sysdba

2. Verify the current setting for the job_queue_processes parameter:

SQL> SHOW PARAMETER job_queue_processes

3. The returned value should be >= 10. If it is not, enter the command:

SQL> ALTER SYSTEM SET job_queue_processes=10 SCOPE=BOTH;

Increase the number of partition maintenance sessions
To increase the number of partition maintenance sessions, enter the following commands as database user virtuo from SQL*Plus:

SQL> UPDATE wm_system_values_v SET value=X WHERE name='PartMaintSessions';
SQL> COMMIT;

where X = 2. If this does not solve the problem, set X = 4. Do not exceed a value of 4. If having 4 sessions does not solve the problem, increase the partition maintenance job running time.

Increase the partition maintenance job running time
To increase the partition maintenance job running time, enter the following commands as database user virtuo from SQL*Plus:

SQL> UPDATE wm_system_values_v SET value=60*60*N WHERE name='PartMaintMaxRunTime';
SQL> COMMIT;

where N = the number of hours to run, maximum 8.

Note: Do not change the number of licenses for the partition maintenance job from the default value 1. You can verify the number of licences by entering the following command as database user virtuo:

SQL> SELECT maximum FROM sched_license WHERE job_type=20;

The resulting value should be 1.

8.5.3 Partition maintenance command line tool

Partition maintenance uses a command line tool, called part_admin, to allow users to manually run partition maintenance tasks and update partition maintenance settings. The part_admin script is located at the following location and can be executed from any directory as user virtuo:
$WMCROOT/bin/part_admin

Partition maintenance CLI parameters
The scope of the partition maintenance CLI can be limited by the following parameters:

[-type] [-subtype] [-tabtype] [-sdate] [-edate] [-filter] [-param] [-value]

Partition maintenance CLI parameters are described in the following table:


Table 37: Partition maintenance CLI parameters

Parameter    Description
-type        Type of table, examples include TECHPACK, ENTITY and TABLE.
-subtype     Subtype associated with a specific type. Subtypes are associated with the main types, examples include:
             TECHPACK: for example, "Neutral GSM Core", "Ericsson GSM BSS R10"
             ENTITY: for example, "CELL", "MSC", "OSI_Channel"
             TABLE: for example, "traffic", "softalarm", "sSUMDaily"
-tabtype     Table type, examples include "traffic", "softalarm", "sSUMDaily".
-filter      Filter the output by table name, examples include "ERI_BSC_TBF_GSL_TAB", "ERI_%_TAB".
-sdate       Start time, sdate format: yyyymmddhh24.
-edate       End time, edate format: yyyymmddhh24.
-param       Parameter for updating partition maintenance parameters, examples include data_retention, tablespace_name.
-value       Value for updating partition maintenance parameters and settings.

Partition maintenance CLI tasks
The following table lists tasks that can be completed using the part_admin tool:

Table 38: Partition maintenance CLI tasks

Option           Description
-add             Add partitions. Scope can be limited using:
                 [-type] [-subtype] [-tabtype] [-filter] [-sdate] [-edate]
-delete          Delete partitions. Scope can be limited using:
                 [-type] [-subtype] [-tabtype] [-filter] [-sdate] [-edate]
-pin             Pin partitions into the database to allow them to be maintained outside of the defined data retention periods. Scope can be limited using:
                 [-type] [-subtype] [-tabtype] [-filter] [-sdate] [-edate]
-unpin           Unpin partitions from the database to allow them to be dropped from the system when they are outside of the defined data retention periods. Scope can be limited using:
                 [-type] [-subtype] [-tabtype] [-filter] [-sdate] [-edate]
-export          Export partition data from the database. Scope can be limited using:
                 -tname <TABLE_NAME> -sdate <DATE> -edate <DATE>
                 or,
                 -pname <TABLE_NAME>:<PART_NAME>
-import          Import data into the database which has been previously exported using the part_admin tool. Scope can be limited using:
                 -pname <TABLE_NAME>:<PART_NAME>
-showparams      Display all configurable partition maintenance parameters.
-listparams      Display all parameter settings. Scope can be limited using:
                 [-type] [-subtype] [-tabtype]
-updateparams    Update specific partition maintenance settings. Scope can be limited using:
                 [-type] [-subtype] [-tabtype] [-filter] [-param] [-value]
-listsessions    Lists the current number of configurable parallel partition maintenance sessions.
-updatesessions  Update the number of configurable parallel partition maintenance sessions. Scope can be limited using:
                 [-value]
-listtypes       The scope of partition maintenance tasks can be limited by type. The -listtypes option can be used to list the available types on the system. Valid types are TECHPACK, ENTITY and TABLE. Subtypes are associated with the main types, examples include:
                 TECHPACK: for example, "Neutral GSM Core", "Ericsson GSM BSS R10"
                 ENTITY: for example, "CELL", "MSC", "OSI_Channel"
                 TABLE: for example, "traffic", "softalarm", "sSUMDaily"
                 Scope can be limited using [-type]
-listpart        Lists partitions by table name to allow the user to see what partitions currently exist on the system. The list can be limited using:
                 [-type] [-subtype] [-tabtype] [-filter] [-sdate] [-edate]
-listpinned      Lists partitions by table name to allow the user to see what partitions are currently pinned on the system. The list can be limited using:
                 [-type] [-subtype] [-tabtype] [-filter]
-listspace       Display the current space settings per tablespace.
-logs            Display the partition maintenance logs per job id. Scope can be limited using:
                 [-id]
-errors          Display the partition maintenance error logs per job id.
-status          Display the status of the active partition maintenance job.
-help            Display help.


8.5.4 Adding partitions

Adding a partition adds a partition to an individual table or a group of tables. Partitions are generally added to allow backloading of data into a table or if specific data is being added to a particular table. Adding partitions outside of the retention period will also pin them into the database.

The syntax of the add partitions option is as follows:
part_admin -add [-type] [-subtype] [-tabtype] [-filter] -sdate -edate

The following example will add partitions to all traffic tables in the Neutral GSM Core technology pack from the 1st January 2007 to the 10th January 2007:
part_admin -add -type TECHPACK -subtype "Neutral GSM Core" -tabtype "traffic" -sdate 2007010100 -edate 2007011000

The following example will add partitions to the table ERI_BSC_TBF_GSL_TAB from the 1st January 2007 to the 10th January 2007:
part_admin -add -filter "ERI_BSC_TBF_GSL_TAB" -sdate 2007010100 -edate 2007011000

Backloading data
When you backload old data, you must ensure partitions exist for the time period that you are backloading for. If these partitions do not exist, you must add the partitions before commencing backloading. Existing partitions can be shown using the following command:
part_admin -listpart [-type] [-subtype] [-tabtype] [-filter]

Partitions are created based on the retention settings defined in the /appl/virtuo/conf/vmm/default.properties file and in the PART_TABLES table. The following command will show the current retention settings on the system:
part_admin -listparams [-type] [-subtype] [-tabtype] [-filter]

where:

DATA RETENTION    is the number of partitions which will be kept in the past.
PAST RETENTION    is the minimum number of partitions which are automatically created in the past.
FUTURE RETENTION  is the minimum number of partitions which are automatically created in the future.

If the necessary partitions do not exist for the data being backloaded, run the part_admin -add command to add the necessary partitions, as shown below. This will ensure that the data is loaded correctly and will be managed correctly by partition maintenance.
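For example, a minimal sketch of the check-then-add sequence for a hypothetical table XYZ_TRAFFIC_TAB and a backload window covering the first week of January 2007 (table name and dates are illustrative):

part_admin -listpart -filter "XYZ_TRAFFIC_TAB" -sdate 2007010100 -edate 2007010700
part_admin -add -filter "XYZ_TRAFFIC_TAB" -sdate 2007010100 -edate 2007010700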


8.5.5 Deleting partitions

Deleting a partition drops a partition from an individual table or a group of tables. The syntax of the delete partitions option is as follows:
part_admin -delete [-type] [-subtype] [-tabtype] [-filter] -sdate -edate

The following example will delete partitions from all daily summary tables of entity type X25 from the 1st January 2007 to the 10th January 2007:
part_admin -delete -type ENTITY -subtype "X25" -tabtype "sSUMDaily" -sdate 2007010100 -edate 2007011000

The following example will delete partitions from all tables with a name like %_DSM from the 1st January 2007 to the 10th January 2007:
part_admin -delete -filter "%_DSM" -sdate 2007010100 -edate 2007011000

Important: Deleting partitions on a regular basis is not usually required. Scheduled partition maintenance jobs usually control this task. If you need to delete partitions using this procedure ensure you are deleting the correct partitions to prevent loss of a large amount of important data.

8.5.6 Pinning partitions

Partitions are pinned into the database to allow them to be maintained outside the defined data retention periods. The syntax of the pin partitions option is as follows:
part_admin -pin [-type] [-subtype] [-tabtype] [-filter] -sdate -edate

The following example will pin partitions from all softalarm tables of entity type X25 from the 1st January 2007 to the 10th January 2007:
part_admin -pin -type ENTITY -subtype "X25" -tabtype "softalarm" -sdate 2007010100 -edate 2007011000

8.5.7 Unpinning partitions

Partitions are unpinned from the database to allow them to be dropped when they are outside the defined data retention periods. The syntax of the unpin partitions option is as follows:
part_admin -unpin [-type] [-subtype] [-tabtype] [-filter] -sdate -edate

The following example will unpin partitions from all softalarm tables from the 1st January 2007 to the 10th January 2007:
part_admin -unpin -type TABLE -subtype "softalarm" -sdate 2007010100 -edate 2007011000


8.5.8 Exporting partitions

Partitions can be exported to allow them to be maintained outside of the database. Partitions can be exported for a particular date range or for a particular partition. The syntax of the export partitions options is as follows:
part_admin -export -tname <TABLE_NAME> -sdate <DATE> -edate <DATE>

or,
part_admin -export -pname <TABLE_NAME>:<PART_NAME>

The following example will export partitions in table ERI_BSC_TBF_GSL_TAB from the 1st January 2007 to the 10th January 2007:
part_admin -export -tname ERI_BSC_TBF_GSL_TAB -sdate 2007010100 -edate 2007011000

The following example will export the partition P2007011000 in table ERI_BSC_TBF_GSL_TAB:
part_admin -export -pname ERI_BSC_TBF_GSL_TAB:P2007011000

8.5.9 Importing partitions

Partitions which have been exported through the part_admin CLI can be re-imported using the import option. The syntax of the import partitions option is as follows:
part_admin -import -pname <TAB_NAME>:<PART_NAME>

The following example will import the partition P2007011000 in table ERI_BSC_TBF_GSL_DSM:
part_admin -import -pname ERI_BSC_TBF_GSL_DSM:P2007011000

8.5.10 Showing parameters

The show parameters option allows the user to see all the configurable partition maintenance parameters. The syntax of the show parameters option is as follows:
part_admin -showparams

8.5.11 Listing parameters

The list parameters option allows the user to view the current parameter settings, limited by type, subtype, table type or table name (filter). The syntax of the list parameters option is as follows:
part_admin -listparams [-type] [-subtype] [-tabtype]

8.5.12 Updating parameters

The update parameters option allows the user to update the current parameter settings. The user can update the retention periods, the partitioning period, tablespace, etc.


Important: The impact of these changes on the system should be considered before implementation.

Updates to a parameter setting can be limited by type, subtype, table type or table name. The syntax of the update parameters option is as follows:
part_admin -updateparams [-type] [-subtype] [-tabtype] [-filter] [-param] [-value]

The following example will update the data_retention to 90 days for all softalarm tables:
part_admin -updateparams -type TABLE -subtype "softalarm" -param data_retention -value 90

8.5.13 Listing partitions

Partitions can be listed by table name to allow the user to see what partitions currently exist on the system. The list can be limited by type, subtype, table type, table name or time period. The syntax of the list partitions option is as follows:
part_admin -listpart [-type] [-subtype] [-tabtype] [-filter] [-sdate] [-edate]

8.5.14 List pinned partitions

Pinned partitions can be listed by table name to allow the user to see what partitions are currently pinned on the system. The list can be limited by type, subtype, table type and table name. The syntax of the list pinned partitions option is as follows:
part_admin -listpinned [-type] [-subtype] [-tabtype] [-filter]

8.5.15 List sessions

The list sessions option allows the user to view the current number of configurable parallel partition maintenance sessions. The syntax of the list sessions option is as follows:
part_admin -listsessions

8.5.16 Update sessions

The update sessions option allows the user to update the number of configurable parallel partition maintenance sessions. The number of parallel partition maintenance sessions is the number of parallel, slave sessions which can run to create partitions.

Important: The number of sessions is dependent on the number of CPUs on the system, and the impact of changing this setting should be considered before implementation.


The syntax of the update sessions option is as follows:


part_admin -updatesessions -value

8.5.17 List spaces

The list space option allows the user to view the current space settings per tablespace. The syntax of the list space option is as follows:
part_admin -listspace

8.5.18 Show logs

The logs option allows the user to view the partition maintenance logs. The syntax of the logs option is as follows:
part_admin -logs [-id <job_id>/latest]

The -logs option gives a summary of all partition maintenance logs by job id, along with a start time, end time and total duration. The -logs -id <job_id> option gives detailed logs of all tasks that ran for a particular job. The -logs -id latest option gives detailed logs of all tasks that ran for the latest job.

8.5.19 Show errors

The errors option allows the user to view the partition maintenance error logs. This shows all errors that have occurred in partition maintenance in the last 30 days, ordered by time. The syntax of the errors option is as follows:
part_admin -errors

8.5.20 Show status

The show status option allows the user to view the current status of the active partition maintenance job. The syntax of the show status option is as follows:
part_admin -status


8.6 Managing disk space usage

Before attempting to manage disk space, you must meet the following requirements:
Ensure enough disk space is available for the requirements of the database.
Ensure enough disk space is available for the software installation.
Archiving for log files is set up.

Whenever disk utilization is near 100%, the Tivoli Netcool Performance Manager processes may be blocked. It is important to monitor disk space and remove old files because recovery from a disk overflow might require rebooting the system. You should conduct regular checks on the amount of disk space available in order to ensure that:
Oracle is running at peak performance.
Server processes continue to run.
Data continues to load.

The main files to monitor for disk space usage are:
Data files
Log files
Tablespace files

All other files should be a static file size.

8.6.1 Monitoring the Oracle storage directories

1. As user virtuo enter the following command to report the total disk space usage in the Oracle storage directories:

du -ks /ora*

Example output is shown below.

4730154   /oradata01
18675938  /oradata02
17844018  /oradata03
24129314  /oradata04
2         /oradump
393458    /oralogs1
393458    /oralogs2

2. As user virtuo enter the following command to report the available capacity in the Oracle storage directories:

df -k /ora*
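As a convenience, the following hedged one-liner (standard df and awk; the 90% threshold is illustrative) flags any Oracle storage filesystem whose capacity exceeds the threshold:

df -k /ora* | awk 'NR>1 && ($5+0) > 90 {print $6, "is", $5, "full"}'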


8.6.2 Monitoring the $WMCROOT/logs directories

1. As user virtuo enter the following command to report on the disk space usage in each of the $WMCROOT/logs directories:

du -k $WMCROOT/logs

Example output is shown below.

53      /appl/virtuo/logs/sapmgr
33      /appl/virtuo/logs/sapmon
18      /appl/virtuo/logs/sap_cli
29      /appl/virtuo/logs/conf_read
1564    /appl/virtuo/logs/web/default
1565    /appl/virtuo/logs/web
41094   /appl/virtuo/logs/as/default
41095   /appl/virtuo/logs/as
9679    /appl/virtuo/logs/vmm
2       /appl/virtuo/logs/sapmgr_cli
1       /appl/virtuo/logs/loader/CHOPPED
4066    /appl/virtuo/logs/loader
911     /appl/virtuo/logs/gways
57452   /appl/virtuo/logs
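To identify the largest log directories quickly, a simple variant of the command above (standard du and sort) can be used:

du -k $WMCROOT/logs | sort -rn | head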

8.6.3 Monitoring the $WMCROOT/var/loader/spool directories

1. As user virtuo enter the following command to report the total disk space usage in the $WMCROOT/var/loader/spool directories:

du -ks $WMCROOT/var/loader/spool

Example output is shown below.

367832  /appl/virtuo/var/loader/spool

8.6.4 Reporting the size of filesystems

The following command reports on the size of the filesystems and the space in use:
1. As user virtuo enter the following command:

df -k /ora* $WMCROOT

Example output is shown below.

Filesystem          kbytes    used      avail     capacity  Mounted on
/dev/dsk/c0t2d0s0   25822151  4755778   20808152  19%       /oradata01
/dev/dsk/c0t2d0s1   25822151  18701562  6862368   74%       /oradata02
/dev/dsk/c0t3d0s0   25822151  17869642  7694288   70%       /oradata03
/dev/dsk/c0t3d0s1   25822151  24154938  1408992   95%       /oradata04
/dev/dsk/c0t2d0s3   10327372  10258     10213841  1%        /oradump
/dev/dsk/c0t0d0s6   10327372  403714    9820385   4%        /oralogs1
/dev/dsk/c0t1d0s1   10327372  403714    9820385   4%        /oralogs2
/dev/dsk/c0t1d0s0   15493995  9810710   5528346   64%       /appl


8.7 Working with log files

Crontab entries are used to manage log files and report generation. Cron entries should be periodically reviewed to assess whether log files are archived with adequate frequency. Altering the frequency of the cron jobs may be required if very large logs are seen to be produced.

Note: If the system is distributed over multiple servers, the log files will be on the server where the service is running. For example, the Oracle logs will be on the database server and application server logs will be on the application server, in the locations specified.

The cron installation uses the following two crontab files to set up the virtuo and root user crontabs:

$WMCROOT/admin/common/cron/core_root_crontrab
$WMCROOT/admin/common/cron/core_virtuo_crontrab

See Crontab setup for default root user and virtuo user crontab entries.

8.7.1 Information about log files

Tivoli Netcool Performance Manager can perform the following functions for log files:
Remove old log files. See Removing log files on page 123.
Archive old log files. See Archiving log files on page 123.

The cron_script script is used to perform the functions described above. It is located in:

[hostname:virtuo] $WMCROOT/admin/common/cron

For help when using the cron_script script, enter the following:

./cron_script -help

Tivoli Netcool Performance Manager saves the following types of logs on the server:
Appserver logs - Application Server logs in $WMCROOT/logs/as/default/
VMM tool - Technology Pack Activation logs in $WMCROOT/logs/vmm
SAP Manager logs - in $WMCROOT/logs/sap*
Loader logs - in $WMCROOT/logs/loader
Install logs - in $WMCROOT/admin/oracle/schema/core/install_schema.log, $WMCROOT/admin/ds/schema/install_schema.log and $WMCROOT/admin/logs/pmw_install.log
Web access logs - in $WMCROOT/logs/web/default

The cron entries below are set up for the following log files within the Tivoli Netcool Performance Manager installation. Tivoli Netcool Performance Manager supports the removal, rollover and archiving of the files below:

[hostname:virtuo] crontab -l
0 1 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 31 /data/trace_archive1 \*.log.\*


0 1 * * * /appl/virtuo/admin/common/cron/cron_script -a -d 0 /appl/virtuo/logs \*.log.\* /data/trace_archive1
14,29,44,59 * * * * /appl/virtuo/bin/alarmapi_admin -da
30 * * * * /appl/virtuo/bin/run_loader_cleanup 3600
0 3 * * * /appl/virtuo/admin/common/cron/cron_script -a -d 5 /appl/virtuo/logs/nc_archiver \*log.\* /data/trace_archive1
0 3 * * * /appl/virtuo/admin/common/cron/cron_script -p -d 3 /appl/virtuo/logs/loader \*.log.\* /data/trace_archive1
0 1 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 1 /appl/virtuo/var/rg/spool/export/reports \*.csv
0 1 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 1 /appl/virtuo/var/rg/spool/export/reports \*.xml
0 1 * * * /appl/virtuo/admin/common/cron/cron_script -r -d 1 /appl/virtuo/var/rg/spool/export/reports \*.xls

8.7.2 Removing log files

As user virtuo complete the following steps to remove a log file:
1. Move to the directory where the cron_script is located:

cd $WMCROOT/admin/common/cron

2. Run the following command:

./cron_script -r [-d <days>] <dirname> <filename>

The parameters to be included with the -r option to remove old log files are described in Table 39.

Table 39: Parameters for Removing Log Files

Option         Description
-d <days>      Optional. Use the next argument to specify the age of the file in days.
<dirname>      The directory the log files are currently stored in.
<filename>     The filename of the file you want to remove.

Note: Multiple files can be removed at the same time using the (*) wildcard. For example:

/appl/virtuo/admin/common/cron/cron_script -r -d 31 /data/trace_archive1 \*.log.\*

8.7.3 Archiving log files

Complete the following steps as user virtuo to archive a log file:
1. Move to the directory where the cron_script is located:

cd $WMCROOT/admin/common/cron

2. Run the following command:

./cron_script -a [-d <days>] <dirname> <filename> <archive_dir>

The parameters to be included with the -a option to archive old log files are described in Table 40.

Table 40: Parameters for Archiving Log Files

Option          Description
-d <days>       Optional. Use the next argument to specify the age of the file in days.
<dirname>       The directory the log files are currently stored in.
<filename>      The filename of the file you want to archive.
<archive_dir>   The directory to archive the files to.

Note: Multiple files can be archived at the same time using the (*) wildcard. Example:

/appl/virtuo/admin/common/cron/cron_script -a -d 0 /data/trace_log1 \*.log.\* /data/trace_archive1

8.8 Loader LIF file directory

The archive_loader_data script is used to archive .lif files. These are files produced by the gateways and processed by the loaders in large volumes. The archive_loader_data script is located in:

[hostname:virtuo] $WMCROOT/admin/common/cron

The default loader LIF directory is located under /appl/virtuo/var/loader/spool. If LIF files are not removed from the system with adequate frequency, the /appl directory can increase in size quickly.
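As a quick hedged spot check (standard du and find; the .lif extension and the 7-day age are assumptions for illustration), the following can be run as user virtuo:

# Report the space used by the loader spool directory
du -ks $WMCROOT/var/loader/spool

# Count LIF files older than 7 days that may be candidates for archiving
find $WMCROOT/var/loader/spool -name "*.lif" -mtime +7 | wc -l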

8.9 Java client processes

If a Java client process requires extensive use of memory and the default Java heap size is insufficient, the process can fail with an OutOfMemoryError Java exception:

java.lang.OutOfMemoryError: Java heap space

This error can occur when installing or upgrading large technology packs and when using the sbh_sk_remover tool. This problem is resolved by increasing the memory available to Java client processes. Available memory is increased by amending the ANT_OPTS environment variable.

Important: If you are installing large technology packs, it is strongly recommended that you set the ANT_OPTS variable to a value of 1G prior to installation. Do not allow the installation to fail before increasing the ANT_OPTS value.

To increase available memory for Java client processes:
1. Execute the following command:

export ANT_OPTS="-Xmx1G"

2. Run the client process. For example, run the technology pack installation:

techpack_admin -a

3. After the client process completes successfully, reset the ANT_OPTS variable to its original value, or unset it:

unset ANT_OPTS

Note: You do not need to stop and re-start the application server after you reset the ANT_OPTS value.

Confirming the correct setting is being used
To confirm the correct memory setting is being used, you run a Java client tool and then check how much memory the process has been assigned. In the example below the ANT_OPTS variable has been set to 512m.

To confirm the correct memory setting is being used:
1. Run the following commands:

techpack_admin -l installed &
ps -ef | grep java

Example output:

virtuo 1492 25435 2 09:12:55 pts/2 0:03 /appl/virtuo/jre/bin/java -Xmx512m -classpath /appl/virtuo/ant/lib/ant-launcher

In this example the correct setting being used is 512m.


8.10 Filesystem backups

To protect the system from loss of data, regular file system backups must be made in addition to backing up the Oracle database. The following filesystems should be included in the backup:

/
/var
/appl
/export/home
/data/trace_log1
/data/trace_archive1

A backup schedule should include full and incremental backups of these filesystems. Ensure that the backup media are correctly labelled and stored in a secure location, and that the integrity of the backups is checked periodically.
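As a minimal illustrative sketch only (assuming a locally mounted backup area /backup with sufficient space; most sites drive these backups from their enterprise backup tooling), a dated full backup of one filesystem could look like:

# Full backup of /appl, named with the current date
tar -cvf /backup/appl_full_`date +%Y%m%d`.tar /appl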


9 Tools

This chapter covers a number of additional tools available to administer and support the system.

9.1 Overview

This chapter covers the following:
Importing and Exporting User Documents and Report Results
Running a report from the command line
Time Zone Support for Reporting
Holiday Maintenance

9.2 Importing and Exporting User Documents and Report Results

Note: Do not run more than one of the following tools, or more than one instance of any of these individual tools, at the same time: techpack_admin, sbh_admin, summary_admin, kpicache_admin or report_impexp. For example, do not run summary_admin and sbh_admin, or two instances of summary_admin, at the same time.

The report_impexp tool provides functionality for:
importing and exporting report definitions
importing and exporting report templates
importing and exporting report schedules
importing and exporting report results
deleting report templates

Note: Caution is required when transferring exported files to and from a Microsoft Windows computer. Some Windows text editors automatically insert carriage returns in files. You can remove the carriage returns by using the dos2unix program.


The report_impexp tool is located in:


$WMCROOT/bin

The report_impexp tool can be executed from any directory as user virtuo. For help when using the report_impexp script, enter the following:

report_impexp -h [1|2|3|4|5]

where
1 = help for importing artifacts
2 = help for exporting artifacts
3 = help for importing report results
4 = help for exporting report results
5 = help for deleting templates

9.2.1 Importing definitions, templates, schedules and folders

The import operation imports all user documents from an import file or directory to a destination directory.

Usage
report_impexp -i -u <user_id> -p <password> -f <file|dir> -l <server_directory> -dup <failonerror|overwrite|ignore>

Options
The options and parameters to be used with the report_impexp tool for importing report definitions, templates, schedules and folders are described in the following table.

Table 41: Options for report_impexp tool - import user documents

Option            Description
-i                Initiates the import process.
-u <user>         The username of the user invoking the import tool.
-p <password>     The user's password.
-f <file|dir>     Name of the xml file from which to import user documents. Can be a reference to a file or a folder. If a reference to a folder is given, a recursive search is carried out for all files with an .xml extension. Each file is then imported.


-l <server_directory>   Destination on server that is the root location for imported documents. This must be a valid user folder location. For example:
                        /Users/sysadm
                        /Users/peter/folder1
                        /Users/mary/backup/daily
                        If the -l option is not used, the location and associations are taken from the imported document(s).
-dup [failonerror|overwrite|ignore]   Import process behavior. This option is specified with one of the following three parameters:
                        failonerror - fail when an existing document with the same name is found
                        overwrite - overwrite when existing documents with the same names are found
                        ignore - ignore documents that already exist

Examples
report_impexp -i -u <user> -p <password> -f import.xml -l /Users/mark -dup ignore
report_impexp -i -u <user> -p <password> -f import.xml -dup ignore

9.2.2 Exporting definitions, templates and schedules

The export operation exports all specified user documents to an output file.

Usage
report_impexp -e -c <a|d|t|s> -u <user_id> -p <password> -m <pattern_match> -f <file> -l <server_directory> [-r <true|false>]

Options
The options and parameters to be used with the report_impexp tool for exporting report definitions, templates, schedules and folders are described in the following table.

Table 42: Options for report_impexp tool - export user documents

Option            Description
-e                Initiates the export process.


-c                Content or type of object to export. The command line options for -c are as follows:
                  [all|definition|template|schedule]
                  all - selects report definitions, report templates, schedules and folders
                  definition - selects report definitions and folders
                  template - selects report templates and folders
                  schedule - selects schedules and folders
                  Shortcuts [a|def|t|s]:
                  a - all of the types below
                  def - report definitions
                  t - report templates
                  s - report schedules
                  The all, definition and template selections will include any dependent folders.
-u <user>         The username of the user invoking the export tool.
-p <password>     The user's password.
-m <pattern_match>  Pattern matches for user documents using standard wildcards. Quotes must be placed around the wildcards. For example, for all report definitions use "*".
-f <file>         File to export documents to. This must be an xml file.
-l <server_directory>  Location on server to export the file to. This must be a user folder location or path that has been defined during report definition. For example, /Users/sysadm. User folder locations can be checked in the GUI. Locations can be found in the BROWSE tab; see the Tivoli Netcool Performance Manager: User Guide - Wireless Component. The user should have permission to read the document contents of the selected location.
-r                Optional. Recursive search (sub-directories) of source root location for other files that match the search pattern specified by the -m option. The -r option is specified by one of the following parameters:
                  true
                  false

Examples
report_impexp -e -c all -u <user> -p <password> -m "*" -f export.xml -l /Users/username -r true


9.2.3 Importing report results

Note: Report results do not contain any result data; they contain metadata about the location of the results file. Results files are stored locally on the application server where the request was initiated, in $WMCROOT/var/rg/spool/. To provide the data it is important for the user to externally manage the copying and pasting of the results files. The reason for this external handling of result data is that the size of the results files can be large.

The import operation imports report results from an import file or directory to a destination directory.

Usage
report_impexp -i -c <r> -u <user_id> -p <password> -f <file|dir> -l <server_directory> -dup <failonerror|overwrite|ignore>

Options
The options and parameters used with the report_impexp tool for importing report results are described in the following table.

Table 43: Options for report_impexp - import report results

Option            Description
-i                Import.
-c <r>            Content or type of object to import. Use the following parameter:
                  r - report results
-u <user>         Username.
-p <password>     The user's password.


-f <file|dir>     The file or directory to import. This option is specified by one of the following parameters:
                  file - the file to import.
                  dir - the directory of files to import.
                  This must be a valid user folder location. For example:
                  /Users/sysadm
                  /Users/peter/folder1
                  /Users/mary/backup/daily
                  If the -l option (below) is not used, the location and associations are taken from the imported document(s). The name specified by the <data-source-pk> field in the file(s) must match the name of the destination machine. In the example below, server.one.locale1.abc.com is the name of the destination machine:
                  report results fields
                  =====================
                  <data-source-pk>
                    <name>server.one.locale1.abc.com</name>
                  </data-source-pk>
-l <server_directory>   Destination on server that is the root location for imported documents.
-dup [failonerror|overwrite|ignore]   Import process behavior. Note: While this option must be set, it is not used on the server side. Because no relationship is maintained on the server side between report results and definitions, the server can load multiple copies. This option is specified with one of the following three parameters:
                  failonerror - fail when an existing report with the same name is found
                  overwrite - overwrite when existing files with the same names are found
                  ignore - ignore files that already exist

Examples
report_impexp -i -c results -u <user> -p <password> -f /appl/virtuo/tmp/exportResults.xml -l /Users/sysadm -dup ignore
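A whole directory of previously exported result files can also be imported in one call; a sketch, assuming the exported files were copied to the hypothetical directory /appl/virtuo/tmp/results_export:

report_impexp -i -c results -u <user> -p <password> -f /appl/virtuo/tmp/results_export -l /Users/sysadm -dup overwrite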

9.2.4 Exporting report results

The export operation exports report results to an output file.

Usage


report_impexp -e -c <r> -u <user_id> -p <password> -m <pattern_match> -f <file> -l <server_directory> [-r]

Options

The options and parameters to be used with the report_impexp tool for exporting report results are described in the following table.

Table 44: Options for report_impexp tool - export report results

-e
    Initiates the export process.
-c
    Content or type of object to export. This option uses the following parameter:
    r - report results
-u <user>
    The username of the user invoking the export tool.
-p <password>
    The user's password.
-m <pattern_match>
    Pattern matches for document names using standard wildcards. Quotes must be placed around the wildcards. For example, for all report results use "*".
-f <file>
    The file to export documents to. This option is specified by the following parameter:
    file - the file to export to.
-l <location>
    Destination on server that is the root location for exported documents.
-r
    Optional. Recursive search (sub-directories) of source root location for other files that match the search pattern specified by the -m option. The -r option is specified by one of the following parameters: true, false.

Note: If the report results are not on the file system where the report_impexp tool is being used then the report result content will not appear in the exported dataset.

Examples
report_impexp -e -c results -u <user> -p <password> -m 'Report_27*' -f exportResults.xml -l /Users/sysadm -r true

9.2.5 Deleting report templates

The delete operation deletes report templates.

Usage


report_impexp -d -u <user> -p <password> -m <pattern_match> -l <server_directory> [-r]

Options

The options and parameters to be used with the report_impexp tool for deleting report templates are described in the following table.


Table 45: Options for report_impexp tool - delete report templates

-d
    Initiates the delete process.
-u <user>
    The username of the user initiating the operation.
-p <password>
    The user's password.
-m <pattern_match>
    Pattern matches for document names using standard wildcards. Quotes must be placed around the wildcards. For example, for all template names use "*".
-l <location>
    Location on server of report templates.
-r
    Optional. Recursive search (sub-directories) of source root location for other files that match the search pattern specified by the -m option. The -r option is specified by one of the following parameters: true, false.

Examples
report_impexp -d -u <user_id> -p <password> -m "CDMA*" -l /Users/sysadm/
report_impexp -d -u <user_id> -p <password> -m "DAILY_TCH" -l /Users/sysadm -r true

9.3 Running a report from the command line

You run a report using the report_run command line tool. To run a report from the command line:
report_run -u <username> -p <password> -r <report_name> [-f folder_path]

Table 46: Options for the report_run tool

-u <username>
    The username of the user running the report.
-p <password>
    The user's password.
-r <report_name>
    Report name. You can use a pattern match for report names using the % wildcard.
-f <folder_path>
    Optional. Location of the report definition to run.
-l
    Lists all report names.


Examples
report_run -u <username> -p <password> -r MSCHandover%
report_run -u <username> -p <password> -r "MSCHandover Daily" -f Users/sysadm
report_run -u <username> -p <password> -l

Report results are available from the PERSONAL DOCUMENTS page of the BROWSE tab. Results are displayed in the location set in the report's definition.

9.4 Time Zone Support for Reporting

The report scheduler offers users the option of running reports according to a different timezone than the one where the server is located. A number of timezone regions and Daylight Saving Time (DST) rules are already defined in the system. It is possible to define additional timezone regions and DST rules.

The administrator can:
Set up Daylight Saving Time (DST) rules
Define timezone regions, and assign DST rules to them if required

The following sections explain:
About Daylight Saving Time Rules
About Time Zone Regions

9.4.1 About Daylight Saving Time Rules

A DST rule defines the boundaries (starting and ending point) and the amount of DST savings, in minutes. A DST boundary is a date time reference. A single DST rule can be referenced by more than one timezone region.

You can define several different types of rules. Usually the entries follow a day-of-the-week-in-month form, for example, "first Sunday in April." You can also define a rule by specifying any of the following forms:
Exact day of the month
The day of the week occurring on or after an exact day of the month
The day of the week occurring on or before an exact day of the month

The interface assigns numeric values to the days of the week. Sunday is defined as 1, Monday is defined as 2, and so on.

The following table shows an example of a DST rule for Seattle. It defines daylight saving time as starting the first Sunday in April at 2:00 a.m., and ending the last Sunday in October at 2:00 a.m.


Table 47: Example of DST rule for the Seattle region

Starting
    Month: April
    Day of week: first Sunday
    Time: 2:00 AM
Ending
    Month: October
    Day of week: last Sunday
    Time: 2:00 AM
Savings
    60 Minutes

DST Rule ID

Each DST rule has an ID. When a new DST rule is created, the system automatically assigns a number to the rule, and displays it after the rule is created. The identifier is always an integer, and cannot be changed. You can list DST rules and their IDs, see List DST rules.

Use the tz_admin script to manage rules for timezone regions. This script allows you to perform the following tasks:
Create a DST rule
List DST rules
Delete a DST rule

Create a DST rule

To create a DST rule:
tz_admin -adddstr start <start_format> end <end_format> mins <minutes>

where <start_format> and <end_format> can take any of the following forms:
after MM dayofweek DD HH:mm
before MM dayofweek DD HH:mm
first MM dayofweek DD HH:mm
last MM dayofweek DD HH:mm
exact MMDD HH:mm

and where:
Table 48: DST rule

after
    First (1=Sun, ...) on or after
before
    Last (1=Sun, ...) or before
first
    First (1=Sun, ...)
last
    Last (1=Sun, ...)
exact
    Exact day, specify month (MM) and day (DD). For example, 0415 represents April 15.
dayofweek
    The numerical values for Sunday to Saturday (1 to 7)
D
    Day of the month (DD)
M
    Month, 01-12
H
    Hours 00-24
mm
    Minutes 0-60
mins
    Minutes of DST time: 1 to 1440.

For example, the DST rule for Europe/London is:


tz_admin -adddstr start last 03 dayofweek 01 01:00 end last 10 dayofweek 01 01:00 mins 60

which translates as: start on the last Sunday in March at 01:00, end on the last Sunday in October at 01:00, with 60 minutes of DST.
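Following the same syntax, the Seattle rule shown in Table 47 (first Sunday in April to last Sunday in October, at 02:00, with 60 minutes of savings) could be created as shown below; this is a sketch, and the rule ID is assigned automatically by the system:

tz_admin -adddstr start first 04 dayofweek 01 02:00 end last 10 dayofweek 01 02:00 mins 60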
List DST rules

To list a DST rule:
tz_admin -listdstr <dst_rule_id>

where:
Table 49: Options for listing a DST rule

dst_rule_id
    DST Rule ID

To list all DST rules:


tz_admin -listdstr all

Delete a DST rule

To delete a DST rule:


tz_admin -deletedstr <dst_rule_id>

where:
Table 50: Options for deleting a DST rule

dst_rule_id
    DST Rule ID


9.4.2 About Time Zone Regions

A timezone region is different from a timezone. In the Mountain timezone of the United States, for example, the state of Arizona does not observe Daylight Saving Time, whereas the majority of the rest of the timezone does. The administrator defines timezone regions, which the user can then select when scheduling reports.

The following are the default regions available at installation time; you can add others at any time:

America/Anchorage
America/Buenos Aires
America/Caracas
America/Chicago
America/Denver
America/Honolulu
America/Indianapolis
America/Lima Peru
America/Mexico City
America/New York
America/Noronha
America/Phoenix
America/Puerto Rico
America/San Francisco
America/Santiago
America/Sao Paulo
America/Seattle
Asia/Bangkok
Asia/Calcutta
Asia/Dubai
Asia/Hong Kong
Asia/Jerusalem
Asia/Riyadh
Asia/Tokyo
Australia/Adelaide
Australia/Brisbane
Australia/Perth
Australia/Sydney
Europe/Athens
Europe/Berlin
Europe/Brussels
Europe/Helsinki
Europe/London
Europe/Madrid
Europe/Moscow
Europe/Paris
Europe/Rome
Europe/Vienna
Europe/Warsaw
Europe/Zurich
Greenwich Mean Time
Pacific/Auckland

Use the tz_admin script to manage timezone regions. This script allows you to perform the following tasks:
Create a timezone region
List timezone regions
Change the timezone region
Delete a timezone region
Assign a timezone region a DST rule

Create a timezone region

You can set the current time zone region for your computer as well as define new regions to fit your business needs. You create a timezone region by providing a name, GMT offset in minutes, and optionally assigning a DST rule to it.

To create a timezone region:
tz_admin -addtzr <timezone_name> <gmt_offset> [<dst_rule_id>]

where:
Table 51: Options for creating a timezone region

timezone_name
    Name of the timezone region (64 characters maximum)
gmt_offset
    GMT offset in minutes (-720 to 720). Longitudes west of GMT are negative, and longitudes east of GMT are positive.
dst_rule_id
    DST Rule ID
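For example, a hypothetical region eight hours behind GMT that uses DST rule 1 could be created as follows (both the region name and the rule ID are illustrative):

tz_admin -addtzr Pacific/Example -480 1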


List timezone regions

To display the current timezone region:


tz_admin -listcurtzr

To list all timezone regions:


tz_admin -listtzr all

To list a specific timezone region:


tz_admin -listtzr <timezone_name>

where:
Table 52: Options for listing a timezone region

timezone_name
    Name of the timezone region (64 characters maximum)

Change the timezone region

You can change the current timezone region.

Warning! Existing schedules that deliver reports based on timezone regions could be impacted when you change the region on a computer. The schedules are based on the number of hours difference between two computers and are not linked dynamically to a region's name. When you change a timezone region for a computer, the hour relationship will not change.

To change the current timezone:
tz_admin -addcurtzr <timezone_name>

where:
Table 53: Options for changing the timezone region

timezone_name
    Name of the timezone region (64 characters maximum)

Delete a timezone region

To delete a timezone region:


tz_admin -deletetzr <timezone_name>

where:
Table 54: Options for deleting a timezone region

timezone_name
    Name of the timezone region (64 characters maximum)


Assign a timezone region a DST rule

To assign a timezone region a DST rule:
tz_admin -linktzr <timezone_name> <dst_rule_id>

where:
Table 55: Options for assigning a DST rule to a timezone region

timezone_name
    Name of the timezone region (64 characters maximum)
dst_rule_id
    DST Rule ID
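For example, to assign DST rule 2 to the Europe/London region (the rule ID here is illustrative; use the ID reported by tz_admin -listdstr):

tz_admin -linktzr Europe/London 2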

9.5 Holiday Maintenance

Holidays are definable dates you can include in or exclude from reports. Including or excluding holidays can provide for more accurate data depending on the monitoring that is conducted. For example, a weekly report that monitors common traffic patterns can become skewed if it includes data from a holiday with abnormal traffic. Conversely, you might want to create a report that only includes data from holidays.

The holiday_admin tool provides the means to list and alter the holiday definitions in the system. This tool cannot be used off-line. It requires a virtuo administrative login. This tool is available on Windows as holiday_admin.js. It can be invoked either directly from the command line or by choosing either the cscript or wscript command processor.

Using holiday_admin you can:
list holidays
add holidays
delete holidays

Parameters for holiday_admin are as follows:
Table 56: Options for holiday_admin

-u
    Administrator's user name.
-p
    Administrator's password.
-list
    Lists all holidays.
-add <date>
    Adds a holiday, the date is specified as MMDDYY. For example, 041506 represents April 15. You can add more than one date by separating each date with a space, for example 041506 041606.
-remove <date>
    Deletes a holiday, the date is specified as MMDDYY. For example, 041506 represents April 15. You can remove more than one date by separating each date with a space, for example 041506 041606.


9.5.1 List holidays

To list current holidays:


holiday_admin -u <admin_user> -p <admin_password> -list

where:
<admin_user> is the administrator's login ID
<admin_password> is the administrator's login password

9.5.2 Add holidays

To add a holiday:
holiday_admin -u <admin_user> -p <admin_password> -add <date> <date>

where:
<admin_user> is the administrator's login ID
<admin_password> is the administrator's login password
<date> is the date of the holiday to be added, specified as YYYY-MM-DD. You can add more than one date by separating each date with a space, for example 2009-05-19 2009-05-20.
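For example, assuming the sysadm account and two illustrative holiday dates:

holiday_admin -u sysadm -p <password> -add 2009-12-25 2009-12-26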

9.5.3 Delete holidays

To delete a holiday:
holiday_admin -u <admin_user> -p <admin_password> -remove <date> <date>

where:
<admin_user> is the administrator's login ID
<admin_password> is the administrator's login password
<date> is the date of the holiday to be deleted, specified as YYYY-MM-DD. You can remove more than one date by separating each date with a space, for example 2009-05-19 2009-05-20.

Setting UDC constants

The parameter_admin command line tool is used to set constants that can be included in UDC expressions. For example, for UDCs using the crit traffic function a parameter can be defined as the DGOS. For example:
parameter_admin -s DEFAULT_DGOS -t float -v 0.6

Then in crit UDC expressions the parameter can be used. For example:
crit( [Cell]![{Neutral.tch.available_ch}] , parameterFloat("DEFAULT_DGOS"), "B")

Where parameterFloat returns the float value of the configuration parameter. If the DGOS needs to be changed, the parameter value can be changed once and the change takes effect in all the expressions in which it is used.
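For example, the DGOS could be lowered to an illustrative value of 0.5 by re-running the same command with the new value; all crit expressions that reference the parameter then pick up the change:

parameter_admin -s DEFAULT_DGOS -t float -v 0.5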

10 LCM Administration

The lcm_admin (Loader Configuration Manager) tool enables Datasources, Loader Configurations and NC Relations to be loaded from XML files into the administration database, and unloaded from the administration database to XML files. This tool also enables the addition, updating and deletion of Loader Datasources, Loader Configurations and NC Relations.

This chapter covers the following:
Overview
List information for Datasources
Load Datasources, NC Relations and Data Availability
Unload Datasources and NC Relations
Delete Datasources and NC Relations
LCM port change

10.1 Overview

10.1.1 Loader Datasource

A Loader Datasource contains a set of mappings from input data to database tables. A Datasource is uniquely identified through its Datasource name, Datasource version, Technology Pack name and Technology Pack version.

10.1.2 NC Relations

Each NC Relations entry provides a relationship between the network configuration data used by a network element and the performance data it generates. For example, a relations entry can show a relationship between an NC cell table, which contains information on the location of a network element in the network, and a cell traffic table, which is a performance table that contains performance data generated by the network element.

10.1.3 Data availability

If a technology pack is provided with Data Availability definitions, these definitions are contained in an .xml file. This file contains information about the technology pack, and a list of blocks of data that are loaded for that technology pack.


Note: Data availability is not supported when a data block has been loaded with different time intervals within the same period of time. For example: a block B1 has been loaded from different lif files relating to the same time period T1 to T2, but the lif files use different time intervals - 30 minutes, 60 minutes and 15 minutes. In such a case a data availability report on such a block would provide inaccurate results.

10.1.4 Usage

Usage: lcm_admin
{ -h | -help | --help | -v | -version | --version } |
{ -listdatasources | -list } |
{ -load <xml_file> } |
{ -loadcustom <datasource_xml_file> } |
{ -merge <data_availability_xml_file>
      Adds data availability blocks to any already loaded. } |
{ -unload [<datasource_xml_file>] -datasource <datasource_name> -dsversion <datasource_version>
      -techpack <techpack_name> -tpversion <techpack_version> [-type <datasource_type>]
      Type must be specified if using merged loadmaps
  -unload [<relations_xml_file>] -relations [-nctable <nc_table>] [-sourcetable <source_table>]
  -unload [<dataavailability_xml_file>] -dataavailability -techpack <techpack_name> -tpversion <techpack_version>
  -unloadcustom [<datasource_xml_file>] -datasource <datasource_name> -dsversion <datasource_version>
      -techpack <techpack_name> -tpversion <techpack_version> [-type <datasource_type>]
      Type must be specified if using merged loadmaps } |
{ -delete <datasource_xml_file> | -datasource <datasource_name> -dsversion <datasource_version>
      -techpack <techpack_name> -tpversion <techpack_version>
      Delete of a datasource also deletes any loader configuration related to it.
  -delete <relations_xml_file> | -relations -nctable <nc_table> [-sourcetable <source_table>]
      If the nctable does not have a source table, pass -sourcetable followed by " " or "" or "*" } |
{ -reread -datasource <datasource_name> -dsversion <datasource_version> }

NOTE: If a command is given with a name option, but the name parameter contains spaces, then double-quotes should be used, for example:
lcm_admin -delete -datasource "Nokia T13" -dsversion 2.0 -techpack "GPRS CN" -tpversion 1.0

The options for the lcm_admin script are described in Table 57.
Table 57: lcm_admin options

-listdatasources
    Lists a summary of all Datasources in the administration database.
-list
    Lists a summary of all Loader Configurations and all Datasources in the administration database.
-load <xml_file>
    Loads a Loader Datasource into the administration database from an XML file <xml_file>. Can also load all files in the loadmaps directory, and load data availability blocks for a technology pack.
-loadcustom <datasource_xml_file>
    Loads a custom Loader Datasource into the administration database from an XML file <datasource_xml_file>.
-merge <data_availability_xml_file>
    Adds data availability blocks to any already loaded.
-unload [<datasource_xml_file>] | -datasource <datasource_name> -dsversion <datasource_version> -techpack <techpack_name> -tpversion <techpack_version>
    Unloads a Datasource by specifying either an XML file, or the Datasource's identifying parameters.
-unload [<dataavailability_xml_file>] -dataavailability -techpack <techpack_name> -tpversion <techpack_version>
    Unloads a data availability xml file by specifying an XML file, technology pack and technology pack version.
-unload [<relations_xml_file>] -relations [-nctable <nc_table>] [-sourcetable <source_table>]
    Unloads a relations xml file by specifying either an XML file, or a combination of NC table and source table.
-unloadcustom [<datasource_xml_file>] | -datasource <datasource_name> -dsversion <datasource_version> -techpack <techpack_name> -tpversion <techpack_version>
    Unloads a custom Datasource by specifying either an XML file, or the Datasource's identifying parameters.
-delete <datasource_xml_file> | -datasource <datasource_name> -dsversion <datasource_version> -techpack <techpack_name> -tpversion <techpack_version>
    Deletes a Datasource by specifying either an XML file, or the Datasource's identifying parameters.
-delete [<relations_xml_file>] -relations [-nctable <nc_table>] [-sourcetable <source_table>]
    Deletes NC Relations entry(s) by specifying either an XML file, or a combination of the NC Relations entry's NC table and source table.

10.2 List information for Datasources

10.2.1 Listing Loader Datasources

The -list or -listdatasources option displays a summary of all Loader Datasources in the database, in the following format:

Teckpack Name     Version  DataSource Name  Version     Technology  Vendor
-------------------------------------------------------------------------
Ericsson GSM BSS  1.0      Ericsson BSS     R10         GSM         Ericsson
Nokia GSM BSS     1.0      Nokia BSS        OSS3.1 ED3  GSM         Nokia

The list is sorted in ascending alphabetical order by Technology Pack name, then by ascending Technology Pack version, then in ascending alphabetical order by Datasource name and ascending Datasource version. To list Datasources, as user virtuo:
lcm_admin -listdatasources

or:
lcm_admin -list


10.3 Load Datasources, NC Relations and Data Availability

10.3.1 Loading a Datasource from XML

The -load option is used to create a new Datasource or update an existing Datasource from a specified Loader Datasource XML file. If the Datasource specified in the XML file already exists, then this Datasource is updated with the configuration in the loaded XML file. If the Datasource does not exist an attempt is made to create a new Datasource in the database. To load a Datasource, as user virtuo:
lcm_admin -load <xml_file>
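For example, assuming a Datasource file named nokia_gsm_bss_datasource.xml in the current directory (the file name is illustrative), the load can be followed by -list to confirm the result:

lcm_admin -load nokia_gsm_bss_datasource.xml
lcm_admin -list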

10.3.2 Loading a custom Datasource from XML

The -loadcustom option is used to create a new custom Datasource or update an existing custom Datasource from a specified Loader Datasource XML file. If the Datasource specified in the XML file already exists, then this Datasource is updated with the configuration in the loaded XML file. If the Datasource does not exist an attempt is made to create a new Datasource in the database. To load a custom Datasource, as user virtuo:
lcm_admin -loadcustom <datasource_xml_file>

10.3.3 Loading NC Relations from XML

The -load option is used to create or update an NC Relations entry or set of NC Relations entries in the database from a specified NC Relations XML file. If the NC Relations specified in the XML file already exist, then these NC Relations entries are updated with the configuration in the loaded XML file. If the NC Relations specified in the XML file do not exist then an attempt is made to create new NC Relations entries in the database.

Note: If an attempt to load a new set of NC Relations is made, and any of these relations already exist in the database, then none of the NC Relations will be entered into the database. If an attempt is made to update a set of NC Relations entries in the database and any of the NC Relations entries do not exist in the database then none of the NC Relations entries are updated.

To load NC Relations, as user virtuo:
lcm_admin -load <relations_xml_file>

10.3.4 Loading Data Availability from XML

The -load option is used to load data availability for a technology pack.


If a technology pack is provided with Data Availability definitions, these definitions are contained in an .xml file. This file contains information about the technology pack, and a list of blocks of data that are loaded for that technology pack. If the value of the block is set to true, i.e. <block name="ATMUSAGE">true</block>, then after the file is loaded the loader will start producing Data Availability (DA) statistics every time it loads a block of ATMUSAGE data. Note: This file is supposed to be user-customized: blocks can be switched on or off by the user. Blocks that are tracked take much longer to load than blocks that are not. Therefore, for performance reasons, enable Data Availability only on entities with a low number of elements (for example - BSC, BS, MSC) but not on cells, processors and other entities with a large number of elements.

Note: Data availability is not supported when a data block has been loaded with different time intervals within the same period of time. For example: a block B1 has been loaded from different lif files relating to the same time period T1 to T2, but the lif files use different time intervals - 30 minutes, 60 minutes and 15 minutes. In such a case a data availability report on such a block would provide inaccurate results.

To load data availability for a technology pack:
1. Check which loaders are running by using the following command as user virtuo:
sap disp

2. Stop the appropriate loader by using the following command as user virtuo:
sap stop <loader name>

3. Load the data availability xml by using the following command as user virtuo:
lcm_admin -load <data_availability.xml>

For example:
lcm_admin -load dataavailability_umts_siemens_utran_umr050_1.0.0.0.xml

4. Restart the loader as user virtuo:


sap start <loader_name>

10.3.5 Merging of Data Availability blocks from XML

The -merge option is used to add data availability blocks to any already loaded. The -load option overwrites any previously loaded blocks and enables the loader to start producing DA statistics for the blocks in the file passed to it. The -merge option adds any blocks that do not already have DA statistics loaded to the list of existing blocks, so that DA statistics are also produced for them. To merge data availability blocks for a technology pack:
lcm_admin -merge <data_availability_xml_file>


10.4 Unload Datasources and NC Relations

10.4.1 Unloading a Datasource to XML

The -unload option is used to save a Loader Datasource to an XML file in its original format so that it can be modified and saved back to the database using the -load command. All Datasource parameters must be specified (apart from the XML file name) to unload a Datasource, including Datasource name, Datasource version, the Technology Pack name and the Technology Pack version. To unload a Datasource, as user virtuo:
lcm_admin -unload [<datasource_xml_file>] -datasource <datasource_name> -dsversion <datasource_version> -techpack <techpack_name> -tpversion <techpack_version>

where:
[<datasource_xml_file>] is the name of the file to be created. The file can be created with a relative or absolute file path. If a relative path is specified, it is relative to the current working directory. Any existing file using the same file name shall be overwritten. If the optional parameter [<datasource_xml_file>] is omitted then file naming will use a standard output.
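For example, using the Nokia Datasource shown in the -list output in Listing Loader Datasources (the output file name is illustrative; note the double-quotes around names that contain spaces):

lcm_admin -unload nokia_bss.xml -datasource "Nokia BSS" -dsversion "OSS3.1 ED3" -techpack "Nokia GSM BSS" -tpversion 1.0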

10.4.2 Unloading a custom Datasource to XML

The -unloadcustom option is used to save a custom Loader Datasource to an XML file in its original format so that it can be modified and saved back to the database using the -loadcustom command. All Datasource parameters must be specified (apart from the XML file name) to unload a Datasource, including Datasource name, Datasource version, the Technology Pack name and the Technology Pack version. To unload a custom Datasource, as user virtuo:
lcm_admin -unloadcustom [<datasource_xml_file>] -datasource <datasource_name> -dsversion <datasource_version> -techpack <techpack_name> -tpversion <techpack_version>

where:
[<datasource_xml_file>] is the name of the file to be created. The file can be created with a relative or absolute file path. If a relative path is specified, it is relative to the current working directory. Any existing file using the same file name shall be overwritten. If the optional parameter [<datasource_xml_file>] is omitted then file naming will use a standard output.

10.4.3 Unloading NC Relations to XML

The -unload option is used to save an NC Relations entry (or entries) in its original format to an XML file so that it can be modified and saved back to the database using the -load command. The [<relations_xml_file>] parameter is optional. It is used to specify the name of the file to be created. The file can be created with a relative or absolute file path. If a relative path is specified, it is relative to the current working directory. Any existing file using the same file name shall be overwritten. If the optional parameter [<relations_xml_file>] is omitted then file naming will use a standard output.


A combination of NC table and Source table can be used to retrieve NC Relations entries. If the sourcetable option is not specified:
lcm_admin -unload [<relations_xml_file>] -relations -nctable <nc_tablename>

then all NC relations entries in the lc_relations table with the specified NC table name will be unloaded. To unload any NC Relations entry for a specified NC table and Source table the following command is used:
lcm_admin -unload [<relations_xml_file>] -relations -nctable <nc_tablename> -sourcetable <source_tablename>

To unload any NC Relations entry that has a specified NC table and a blank Source table the following command is used:
lcm_admin -unload [<relations_xml_file>] -relations -nctable <nc_tablename> -sourcetable " "

10.4.4 Unloading Data Availability to XML

The -unload option is used to unload a data availability file for a technology pack, in its original format so that the file can be modified and saved back to the database using the -load command. All parameters must be specified to unload the file including the Technology Pack name and the Technology Pack version. To unload a data availability xml file, as user virtuo:
lcm_admin -unload [<dataavailability_xml_file>] -dataavailability -techpack <techpack_name> -tpversion <techpack_version>

where:
<dataavailability_xml_file> is the name of the file to be created. The file can be created with a relative or absolute file path. If a relative path is specified, it is relative to the current working directory. Any existing file using the same file name shall be overwritten. If the optional parameter [<dataavailability_xml_file>] is omitted then file naming will use a standard output.

10.5 Delete Datasources and NC Relations

The -delete option is used to delete a Datasource. You cannot delete a Datasource that is associated with a Loader Configuration. A Datasource can be deleted using two methods:
by specifying the Datasource name, Datasource version, Technology Pack name and Technology Pack version of the Datasource to be deleted.
by specifying a Datasource XML file which will be parsed to obtain the Datasource name, Datasource version, Technology Pack name and Technology Pack version of the Datasource to be deleted.

To delete a Datasource by specifying Datasource parameters:

lcm_admin -delete -datasource <datasource_name> -dsversion <datasource_version> -techpack <techpack_name> -tpversion <techpack_version>

To delete a Datasource by specifying a Datasource XML file:


lcm_admin -delete <datasource_xml_file>

10.5.1 Deleting NC Relations

The -delete option is used to delete an NC Relations entry. An NC Relations entry can be deleted using two methods:
by specifying a combination of the NC Relations NC table and Source table.
by specifying an NC Relations XML file which will be parsed to obtain the NC Relations to be deleted.

To delete any NC Relations entry that has a specified NC table and Source table the following command is used:
lcm_admin -delete -relations -nctable <nc_tablename> -sourcetable <source_tablename>

This will delete any NC Relations entry with the specified nctable and sourcetable combination. To delete any NC Relations entry that has an NC table and a blank Source table the following command is used:
lcm_admin -delete -relations -nctable <nc_tablename> -sourcetable " "

To delete a set of NC Relations entries by specifying an NC Relations XML file the following command is used:
lcm_admin -delete -relations <relations_xml_file>

This will attempt to delete all NC Relations entries from the database that are specified in the NC Relations XML file. If an NC Relations entry specified in the NC Relations XML file does not exist in the database an error will be logged, and the lcm_admin tool will continue to delete the remaining NC Relations entries specified in the NC Relations XML file.


10.6 LCM port change

The PROPERTYSERVICE_PROPERTY table contains a property name and value that is used by the lcm_admin tool.
PROPERTY_NAME: domain
VALUE: localhost:8080

If the application server port ASHTTPPORT=8080 changes, then the new port value must be changed in the PROPERTYSERVICE_PROPERTY table to allow the Loader Configuration Manager to work correctly. Use the following SQL to update the port value.
export ORACLE_SID=vtdb
sqlplus virtuo/<password>
update PROPERTYSERVICE_PROPERTY set VALUE = 'localhost:<new port value>' where PROPERTY_NAME='domain';
commit;
exit


11 SBH Administration

The Busy Hour feature provides a way of calculating the busiest hour of the day for a performance metric. It reflects the peak demands on the network, for that metric, for each day.

This section describes:
Stored Busy Hour (SBH) Administration tool
Customizing Stored Busy Hour definitions

11.1 Stored Busy Hour (SBH) Administration tool

Note: Do not run more than one of the following tools, or more than one instance of any of these individual tools, at the same time: techpack_admin, sbh_admin, summary_admin, kpicache_admin or report_impexp. (For example, do not run summary_admin and sbh_admin, or two instances of summary_admin, at the same time.)

The sbh_admin (Stored Busy Hour Administration) tool enables you to:
Enable and disable SBH definitions
Import SBH definitions
Export SBH definitions and values
List SBH definitions
Execute SBH definitions
Delete SBH definitions
Prioritize SBH processing
Turn on/off late data recalculation


11.1.1 Enable Busy Hour definition(s)

sbh_admin -enable -n <name>

-n
    Specify the <name> of the Busy Hour(s) to enable. Wildcards are possible using the % character.

Example:
sbh_admin -enable -n SBH1

Enables the Stored Busy Hour definition SBH1.

11.1.2 Disable Busy Hour definition(s)

sbh_admin -disable -n <name>

-n
    Specify the <name> of the Busy Hour(s) to disable. Wildcards are possible using the % character.

Example:
sbh_admin -disable -n SBH1

Disables the Stored Busy Hour definition SBH1.

11.1.3 Import Stored Busy Hour definition(s)

Stored busy hour definitions are imported using xml files. See Customizing Stored Busy Hour definitions for information on the content of these files.

sbh_admin -i (-f <file> | -p <directory>) -m (overwrite|ignore|fail)

-f
    Import the Stored Busy Hour Definition from <file>.
-p
    Import all busy hour definitions stored in <directory>. All files with an .xml suffix will be imported.
-m
    Import mode:
    overwrite - If the definition already exists, update it.
    ignore - If the definition already exists, do not update it and do not fail the import.
    fail - If the definition already exists, fail the import.

Examples:
sbh_admin -i -f SBH1.xml -m overwrite

Import the Busy Hour defined in SBH1.xml, overwriting the definition if it already exists.
sbh_admin -i -p indir -m overwrite

If two files exist in indir (SBH1.xml, SBH2.xml), the definitions in SBH1.xml and SBH2.xml will be imported.

The data dictionary must be run in order to be able to access the stored busy hour KPI from the UI. This can be done manually using the agent_admin CLI tool. For example:
agent_admin -u sysadm -p <password> -run <DataDictionaryid>


11.1.4 Export Stored Busy Hour definition(s) or values

sbh_admin -e -n <name> -t (definition|value) (-f <file> | -p <directory>) -scope <scope> -start <startDate> -end <endDate>

-n
    Specify the <name> of the Busy Hour to export. Wildcards are possible using the % character.
-t
    Export the definition or the value.
-f
    Export the Stored Busy Hour Definition(s) to <file>. Note, it is possible to export more than one definition to a single file.
-p
    Export all busy hour definitions that match <name> to <directory>. The file names will be the name of the SBH.
-scope
    Export either daily, weekly or monthly values. Only applicable when exporting values.
-start, -end
    Export values that were calculated between <startDate> and <endDate>. The dates must be in the format dd/MM/yy. If these options are omitted the latest values will be calculated.

Examples:
sbh_admin -e -n SBH1 -t definition -f SBH1.xml

Export the definition of SBH1 to the file SBH1.xml.


sbh_admin -e -n SBH% -t definition -p outdir

Export all definitions that match SBH% to the directory outdir. If two definitions exist (SBH1 SBH2), the files outdir/SBH1.xml and outdir/SBH2.xml will be created.
sbh_admin -e -n SBH1 -t value -f SBH1.xml -scope daily -start 12/02/07 -end 15/02/07

Export the daily values of SBH1 calculated between 12/02/07 and 15/02/07 to the file SBH1.xml.

11.1.5 List SBH definitions

sbh_admin -l (-n name) (-detail) (-f <file>)

-n
    Specify the <name> of the Busy Hour to list. Wildcards are possible using the % character. If this option is omitted then all definitions will be listed.
-detail
    Print detailed information on the definition. If this option is omitted then only the name and enabled/disabled status will be printed.
-f
    Print the information to a <file>. If omitted the information will be printed to the screen.

Example:
sbh_admin -l -n SBH1 -detail -f SBH1.log

Print detailed information on SBH1 to the file SBH1.log.


11.1.6 Execute SBH definition(s)

sbh_admin -r -n <name> -f <file> -start <startDate> -end <endDate>

-n
    Specify the <name> of the Busy Hour(s) to execute. Wildcards are possible using the % character.
-f
    Print the results of the execution to <file>. If this option is omitted then the results will be printed to the screen.
-start, -end
    Execute calculations between <startDate> and <endDate>. The dates must be in the format dd/MM/yy. If these options are omitted the latest values will be calculated.

Example:
sbh_admin -r -n SBH1

Execute the Stored Busy Hour definition SBH1.

11.1.7 Delete SBH definition(s)

sbh_admin -d -n <name>

-n
    Specify the <name> of the Busy Hour(s) to delete. Wildcards are possible using the % character.

Example:
sbh_admin -d -n SBH1

Delete the SBH definition SBH1. Note: When a Stored Busy Hour definition is deleted the data for the SBH is retained in the system. If a deleted Stored Busy Hour definition is (re-)provisioned, the data calculated for the SBH will be available again. See the Tivoli Netcool Performance Manager: Installation Guide - Wireless Component, for information on provisioning an SBH definition.


11.1.8 Prioritize SBH

It is possible to prioritize the execution order of SBHs to ensure the most important busy hours can be processed first when scheduled jobs are run. The default priority is 99, this is the lowest priority. The highest priority is 1.

sbh_admin -s -n <name> -t priority -v <priority_value>

-n
    Specify the <name> of the Stored Busy Hour to prioritize. Wildcards are possible using the % character.
-v
    Priority value, 1-99.

Example:
sbh_admin -s -n SBH1 -t priority -v 1

Sets the priority of the SBH definition SBH1 to 1. Example:


sbh_admin -s -n Cell% -t priority -v 1

Sets the priority of all SBH definitions beginning with Cell to 1. Where a number of busy hours have the same priority they are executed according to busy hour name, in ascending order. For example, of two busy hours SBHA1 and SBHB2 with the same priority, SBHA1 will be executed first.

11.1.9 Enable/Disable calculation of Late Data for all Busy Hour definitions

sbh_admin -ld <enable|disable|individual>

-ld
    Late data calculation may be enabled/disabled on a per Busy Hour basis by setting the calculate-late-data attribute.
    enable/disable - The calculate-late-data attribute in all definitions is ignored.
    individual - The calculation or not of late data is specified by the definition itself.

Example:
sbh_admin -ld enable

Enable the calculation of Late Data for all Busy Hour definitions.
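To return control to the individual definitions, so that each definition's own calculate-late-data attribute is honoured again:

sbh_admin -ld individual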


11.2 Customizing Stored Busy Hour definitions

11.2.1 Overview

This section describes customizing Stored Busy Hour (SBH) definitions. You can customize an existing SBH definition by exporting and then re-importing the definition; see Import Stored Busy Hour definition(s) on page 156, and Export Stored Busy Hour definition(s) or values on page 157 for more information. Alternatively, the SBH's xml file can be edited before being provisioned or imported.

11.2.2 Stored Busy Hour definition

An example SBH xml file is shown below:


<busy-hour-definition-list>
  <busy-hour-definition>
    <name>Complex_Determiner_Complex_Values</name>
    <focal-entity>Cell</focal-entity>
    <determiner-type>max</determiner-type>
    <busy-hour-determiner>
      <entity>Cell</entity>
      <field-name>Nokia.Packet_Control_Unit.sum_rlc_ul_traffic</field-name>
      <tp-field-id>W1G2O3XAHK26SEC6000HW01QK4</tp-field-id>
    </busy-hour-determiner>
    <busy-hour-values>
      <busy-hour-value>
        <entity>Cell</entity>
        <field-name>Nokia.Packet_Control_Unit.sum_rlc_ul_traffic</field-name>
        <tp-field-id></tp-field-id>
      </busy-hour-value>
      <busy-hour-value>
        <entity>Cell</entity>
        <field-name>Nokia.Packet_Control_Unit.total_sum_rlc_traffic</field-name>
        <tp-field-id></tp-field-id>
      </busy-hour-value>
      <busy-hour-value>
        <entity>Cell</entity>
        <field-name>ratio2_rl_traffic</field-name>
        <tp-field-id></tp-field-id>
      </busy-hour-value>
    </busy-hour-values>
    <rollup>true</rollup>
    <busy-hour-attributes>
      <rank-count>1</rank-count>
      <first-required>2007-11-01</first-required>
      <busy-hour-calculation-type>nonsliding</busy-hour-calculation-type>
      <disable>false</disable>
      <calculate-late-data>true</calculate-late-data>
    </busy-hour-attributes>
  </busy-hour-definition>
</busy-hour-definition-list>


The following list describes the content of an SBH xml file:

<busy-hour-definition-list> - All stored busy hour definition files must start with this tag, indicating that the contents are a set of stored busy hour definitions. This tag must occur only once.

<busy-hour-definition> - Indicates the start of an individual definition. Multiple definitions can be included in a <busy-hour-definition-list>.

<name>Complex_Determiner_Complex_Values</name> - The name of the stored busy hour. Maximum of 60 characters.

<focal-entity>Cell</focal-entity> - This is the entity level at which the determiner will be calculated. It must be the same or higher in the hierarchy as the determiner KPI's entity.

<determiner-type>max</determiner-type> - Smallest or largest value to be used for the busy hour: min or max. If a value is omitted the default max is used. max uses the greatest value for the busy hour. min uses the smallest value for the busy hour. For example, min is used where a KPI represents the percentage of a channel's availability - the busier the equipment the less percentage availability there is.

<busy-hour-determiner>
  <entity>Cell</entity>
  <field-name>Nokia.Packet_Control_Unit.sum_rlc_ul_traffic</field-name>
  <tp-field-id>W1G2O3XAHK26SEC6000HW01QK4</tp-field-id>
</busy-hour-determiner>

This set of tags specifies which KPI is used to decide which hour of the day is the busiest. The <entity> tag contains the entity of the KPI being used. The <field-name> tag contains the name of the KPI. The <tp-field-id> tag contains the Universal Unique Identifier (UUID) of the KPI. The UUID is case insensitive, but the case of the field name must match that in the technology pack. Either field-name or tp-field-id can be left blank provided the other is specified. The tags cannot be omitted, but either can be empty. When the busy hour is calculated, the value and timestamp of this field are stored in the sbh_busy_hour_daily, sbh_busy_hour_weekly and sbh_busy_hour_monthly tables.

<busy-hour-values> - This tag contains the list of KPIs to use as the associated values of the busy hour. When the busy hour is executed, the determiner is calculated first to get the busiest hour and then all the KPIs in the values list are calculated at that hour.

<busy-hour-value>
  <entity>Cell</entity>
  <field-name>ratio2_rl_traffic</field-name>
  <tp-field-id></tp-field-id>
</busy-hour-value>

This set of tags is identical to those for the busy-hour-determiner. The values are stored in one of two ways: in TRAFFIC_xBH tables or in ENTITY_xBH tables. Peg KPIs that are not being rolled up are stored in the busy hour equivalent of their traffic table. For each traffic table there are three busy hour tables, with the _TAB of the traffic table replaced with _DBH, _WBH, and _MBH (for daily, weekly and monthly values). These tables are created by the system when the technology packs are installed. PCalcs and UDCs are stored in entity busy hour tables. These tables are named with the entity name followed by _DBH, _WBH and _MBH. These tables are created when technology packs are installed, but only contain the instance_id and entity_id columns, timestamp and measurement_seconds. Columns are added to these tables when busy hours that contain PCalcs or UDCs are installed. Columns to store PCalcs are named with the UUID of the KPI. Columns to store UDCs are named with a conflated form of the UDC name that has been altered to suit column naming requirements. The mapping of field name to column name is stored in the MANGLER table. Columns are added to the tables until the configured limit is reached and a new table is created with the number of the table added to the name. For example, CELL_DBH1, CELL_WBH1, CELL_MBH1, then _DBH2 and so on.

<rollup>true</rollup> - The rollup tag specifies whether or not the associated values are rolled up to the focal entity. If false, KPIs are calculated and stored at their own entity. If true, all associated values are rolled up to the focal entity and stored in the entity xBH tables of the focal entity. This includes Pegs, PCalcs and UDCs. When Pegs are being rolled up, the columns in the entity busy hour tables are named using the UUID of the KPI. This tag is optional and must contain true or false. If not present, it defaults to false.

<busy-hour-attributes> - This set of tags configures the busy hour. It is an optional tag, but if it is included then all the attributes must be specified.

<rank-count>1</rank-count> - The number of hours to calculate. For example, 3 will calculate the busiest three hours of the day and also all the associated values at each of the three hours. This must be a number from 1 to 24.

<first-required>2007-11-01</first-required> - The earliest date that the busy hour will attempt to be calculated. Any dates before this, whether they are specified manually on the command line or automatically during an agent-invoked run, will be ignored. The date format is YYYY-MM-DD.

<disable>false</disable> - Whether or not the busy hour is active. Must be true or false.

<busy-hour-calculation-type>nonsliding</busy-hour-calculation-type> - This defines whether the busy hour is sliding or not. It can be either sliding or nonsliding. Nonsliding means the busy hour will always be aligned to hours, for example 14:00 to 15:00 or 18:00 to 19:00. Sliding busy hours are calculated down to the interval of the data, for example 14:15 to 15:15.

<calculate-late-data>true</calculate-late-data> - Should the system recalculate a busy hour for a date on or before the most recently calculated busy hour. For example, if yesterday has already been calculated, and more data for yesterday is received, that is considered late data.


12 Alarm Administration

Alarm administration consists of:
The Alarm administration tool
The External Alarm API

12.1 Alarm administration tool

12.1.1 Overview

The alarm_admin tool enables Alarm Template xml files to be loaded into the database, unloaded to a file, updated or deleted. Using the alarm_admin tool you can:
List an Alarm Template
Manage contexts

The Alarm Manager application requires that Alarm Templates be present in the database in order for alarm definitions to be created; alarm definitions are based on Alarm Templates. It is also possible to create Alarm Templates in the Alarm Manager application. Before an Alarm Template can be created, it must have a context. For more information on the Alarm Manager see the Tivoli Netcool Performance Manager: User Guide - Wireless Component.

Document contexts

All Alarm Templates (and alarm definitions) are organized into a tree-like hierarchical structure consisting of document contexts. A context is a particular node in the tree, for example:

+GSM LAYER
+----- GSM ALARM TEMPLATE LAYER 1
+---------------------------- CONGESTION ALARMS

All the elements shown above are nodes; the top-level node GSM Layer is referred to as the root node. The same document context tree is used to logically organize Alarm Templates and Alarm Definitions.


Note: Before an Alarm Template can be created, it must have a context.

Alarm Template XML Documents

All Alarm Templates created are identified uniquely by a context name, an alarm name and a version ID. These fields are represented by xml tags in Alarm Template documents.

The <AlarmContext> tag specifies the full context path which the document should be assigned to, e.g. "GSM.GSM Layer 1".
The <AlarmName> tag specifies the name of the alarm which corresponds to the X.733 Specific Problem field, e.g. "btsCongestion".
The <VersionID> tag specifies the numerical version of the Alarm Template xml document, for example "1.0", "1.1", "1.2" and so on.

Alarm Template xml documents loaded into the database must conform to the Alarm Template DTD.

Version Numbering

Alarm Template xml documents must contain a version number that must contain a fractional part represented as a whole number. For example, 1.0, 2.0 and 3.0 are all valid version numbers whereas 1.1 is not. This is to keep the versioning scheme in line with Alarm Definitions, which use the same version numbering system.

Usage

Usage for alarm_admin:
{ -h | -help | --help } |
{ -load <alarm_template_xml_file> |
  -drop -context <context> -name <name> -version <version> |
  -unload -context <context> -name <name> -version <version> -file <file> |
  -list |
  -createcontext <document_context_path> |
  -removecontext <document_context_path> [-r]
}

The options for the alarm_admin script are described in Table 58.


Table 58: alarm_admin options

-list
    Shows a summary of all loaded Alarm Templates.
-load <alarm_template_xml_file>
    Not currently supported. Relative or absolute file path and name of the Alarm Template xml file to load. If a relative path is given, it must be relative to the current working directory.
-drop -context <context> -name <name> -version <version>
    Not currently supported. Requires the context, template name and version.
-unload -context <context> -name <name> -version <version> -file <file>
    Not currently supported. Requires the context, template name, version and file.
-createcontext <document_context_path>
    Must be one or more path element names separated by "." e.g. "IP Layer.IP Layer 1".
-removecontext <document_context_path> [-r]
    Must be one or more path element names separated by "." e.g. "IP Layer.IP Layer 1".

12.1.2 Manage Document Contexts

The -createcontext option is used to create a new document context path where each node in the given path will be created as necessary. The -removecontext option is used to remove either a single node of a context path, or the context path node and all its child nodes recursively (using the optional -r qualifier). Because document contexts are created automatically as needed when loading Alarm Templates into the database, this option should not be needed very often. However, document contexts may need to be set up to provide a particular tree structure, even though some nodes in the tree may not yet contain Alarm Templates. Note: If an alarm template is being created using the Alarm Manager tool, a context must exist before the template can be created. See the Tivoli Netcool Performance Manager: User Guide - Wireless Component, for information on the Alarm Manager.

Creating a Document Context

Note: This option is not supported in this release of the product.

The -createcontext option creates all nodes in the document context path specified. For example:
alarm_admin -createcontext One.Two.Three


Creates the nodes:
One
One.Two
One.Two.Three

Nodes are created on an as required basis, so that any nodes in the context path that currently exist are effectively ignored.

Removing a Document Context

The -removecontext option removes one or more context nodes in the given context path. This function may be needed for maintenance when certain context paths are not referred to by any Alarm Templates and are therefore no longer needed.

Removing a Single Context Node

It is not possible to remove a context node that has children, so for a given context path only a childless node can be removed. For example, given the context path One.Two.Three.Four.Five:
alarm_admin -removecontext One.Two.Three.Four.Five

will remove the last node (Five).

Removing a subcontext tree

To remove a context node and its children, you use the -removecontext option together with the -r flag. This will remove the context node of the given context path and all of its children, its children's children and so on recursively down the context tree. For example, for a context One.Two.Three.Four.Five:
alarm_admin -removecontext One.Two.Three -r

will remove the node One.Two.Three and its child contexts:
One.Two.Three.Four
One.Two.Three.Four.Five

with the following nodes remaining:
One
One.Two


Note: When using the -r option the parent node must be specified, you cannot specify a subset of the context. In the example above, specifying alarm_admin -removecontext Two.Three.Four.Five -r will return an error. It is not possible to remove the specified document context path when there are Alarm Templates or Alarm Definitions that are stored under that context node or any of its children. This is true when removing a single context and also a complete context subtree. In the latter case the specified context cannot be removed if it, or any of its child context nodes, contains any Alarm Templates. Note: Alarm Definitions are organized in the same document context hierarchy as Alarm Templates.

12.1.3 List Alarm Templates

The -list option displays a list of all Alarm Templates in the database. To list all Alarm Templates:
alarm_admin -list

The following is a -list output example:


Alarm Template Summary
[IP Tech Layer.IP Tech Layer 1.Congestion Alarms.btsCongestion]
    Version 3.0
    Version 2.0
    Version 1.0
[IP Tech Layer.IP Tech Layer 1.Dropped Call Alarms.tchAvailability]
    Version 1.0

12.1.4 Alarm Definition Mib File

The metricaAlarmTrap.mib file can be extracted from the Netcool_feature.zip file, which is available in the following location:
/appl/virtuo/conf/netcool_rulesfiles


12.2 External Alarm API

12.2.1 Overview

The External Alarm API enables administrators to raise and clear alarms using a command line tool and using PL-SQL. It is also possible to configure and generate data availability alarms. Alarms are viewed in the Alarm Viewer; see the Tivoli Netcool Performance Manager: User Guide - Wireless Component for information on using the Alarm Viewer.

This section describes the following topics:
alarmapi_admin
Generate an alarm
Clear an alarm
Display a list of available reports
Empty alarm spool daemon
Generate data availability alarms

12.2.2
Usage

alarmapi_admin

alarmapi_admin -d
    Alarm Spool Daemon (usually run by sap).
alarmapi_admin -e
    Empty Alarm Spool.
alarmapi_admin -da
    Generate Data Availability alarms (usually run by cron).
alarmapi_admin -g notification_id ev_source ev_type monitored_attribute \
    managed_object_class managed_object_instance alarm_predicate probable_cause \
    specific_problem additional_text trend_indication report_id alarm_severity [event_time]
    Generate a specific alarm.
alarmapi_admin -r
    Display available report definitions with their report IDs.


12.2.3

Generate an alarm

To generate an alarm using the command line:


alarmapi_admin -g <notification_id> <ev_source> <ev_type> <monitored_attribute> <managed_object_class> <managed_object_instance> <alarm_predicate> <probable_cause> <specific_problem> <additional_text> <trend_indication> <report_id> <alarm_severity> [event_time]

To generate an alarm using PL-SQL:


exec ALARMAPI.alarm(notification_id, ev_source,ev_type, monitored_attribute, managed_object_class, managed_object_instance, alarm_predicate, probable_cause, specific_problem, additional_text, trend_indication,report_id, alarm_severity,SYSDATE);

Where:
Table 59: Alarm API parameters

notification_id
    CLI format: Integer. PL-SQL format: INTEGER. Alarm notification Id. Default: None.
ev_source
    CLI format: String. PL-SQL format: VARCHAR2(255). Event/Alarm source. Default: None.
ev_type
    CLI format: String. PL-SQL format: VARCHAR2(255). Event type, see Event types. Default: None.
monitored_attribute
    CLI format: String. PL-SQL format: NUMBER. Attribute value that triggered the alarm, for example 51.3. Default: None.
managed_object_class
    CLI format: String. PL-SQL format: VARCHAR2(255). Entity name, for example cell. Default: None.
managed_object_instance
    CLI format: String. PL-SQL format: VARCHAR2(255). Entity instance name/local key, for example 12345. Default: None.
alarm_predicate
    CLI format: String. PL-SQL format: VARCHAR2(255). Alarm predicate. Default: None.
probable_cause
    CLI format: String. PL-SQL format: VARCHAR2(255). Probable alarm cause, see Probable causes. Default: None.
specific_problem
    CLI format: String. PL-SQL format: VARCHAR2(255). Specific alarm problem. Default: None.
additional_text
    CLI format: String. PL-SQL format: VARCHAR2(255). Any additional text. Default: None.
trend_indication
    CLI format: String. PL-SQL format: VARCHAR2(255). 'more Severe' (raised) or 'less Severe' (cleared), see Trend Indications. Default: None.
report_id
    CLI format: Integer. PL-SQL format: NUMBER. Associated report Id, given using alarmapi_admin -r. Default: None.
alarm_severity
    CLI format: String. PL-SQL format: VARCHAR2(255). Alarm severity, see Severity. Default: None.
event_time
    CLI format: String, yyyyMMddHHmmss. PL-SQL format: DATE. Optional. Alarm time. When the alarm reflects an event that happened in the past, this should be set to the event time. If not provided, it is assumed to be the current time. Default: Current time.

Command line example:


alarmapi_admin -g 3 "Seizure Attempts" Environmental 0 Cell 10002 seizure_attempts informationMissing "Threshold Crossed" additional_text "more Severe" 1401 Major 20071231133153


PL-SQL example:
1. Ensure the alarm spool daemon is running; see Empty alarm spool daemon, for instructions on starting/stopping the daemon.
2. As user virtuo:
exec ALARMAPI.alarm(3,'Test','Environmental',123.456,'Cell','12346', 'seizure attempts','informationMissing','Threshold crossed', 'Some Additional Text','more Severe',1401,'Major', SYSDATE);

Note: Generating alarms from the command line can be slow. If many alarms need to be generated, it is recommended that they are batched using the PL-SQL alarming feature.
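For example, a minimal sketch of batching alarms in a single PL-SQL block. The loop bounds and parameter values are illustrative only, and whether a COMMIT is needed depends on how ALARMAPI spools alarms in your environment:
sqlplus virtuo/<PASSWORD>@VTDB << EOF
BEGIN
  -- generate ten illustrative test alarms in one database session
  FOR i IN 1..10 LOOP
    ALARMAPI.alarm(i, 'Test', 'Environmental', 123.456, 'Cell', '12346',
      'seizure attempts', 'informationMissing', 'Threshold crossed',
      'Batch generated alarm', 'more Severe', 1401, 'Major', SYSDATE);
  END LOOP;
END;
/
EOF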

12.2.4

Clear an alarm

Alarms can be cleared by using the CLEARED event type. For example, to clear an alarm using the command line:
alarmapi_admin -g 3 "Seizure Attempts" Environmental 0 Cell 10002 seizure_attempts informationMissing "Threshold Crossed" additional_text "less Severe" 1223 Cleared 20071231133153

12.2.5

Display a list of available reports

Reports available for association with Tivoli Netcool Performance Manager alarms can be listed. To display a list of available reports:
alarmapi_admin -r

The values listed are used to define the report_id parameter when generating alarms from the command line and PL-SQL, and when configuring data availability alarms. Note: Linking an alarm to a report Id makes it possible for users to open in-context reports directly from other supported applications.

12.2.6

Empty alarm spool daemon

The Alarm Spool Daemon is a component of the PL-SQL alarm functionality. It runs as a sap process and continuously polls the database for PL-SQL generated alarms, forwarding them to the Tivoli Netcool Performance Manager software. Interaction with the component is usually unnecessary.

If many alarms have been queued (in the case of a backlog, for example), it is possible to empty the alarm spool of all its alarms. To empty the alarm spool:
1. Stop the daemon, as user virtuo:
sap stop asd

2. Empty the spool:


alarmapi_admin -e

3. Re-start the daemon:


sap start asd

12.2.7

Data availability alarms

Data availability alarms are raised when data availability falls below a defined level of availability. A raised alarm is cleared when the level of availability returns to, or rises above, the defined level.

Data availability alarms are configured using the following tables:
DA_MONITOR
DA_MONITOR_ENTITY

Several alarm monitors can be configured using the DA_MONITOR table. Each alarm monitor can be configured to monitor a set of block/entity combinations using the DA_MONITOR_ENTITY table.

DA_MONITOR table

The following lists the table's parameters. The system uses these parameters to fill in the generated alarm if the computed availability is lower than the required threshold.
Table 60: DA_MONITOR table

MONITOR_ID (NOT NULL, INTEGER)
    Unique Monitor ID.
MIN_PERCENT (NOT NULL, NUMBER)
    The minimum required data availability percentage (0=0%, 100=100%). An alarm will be raised if data availability is lower than the minimum required data availability percentage value, and cleared if the availability is equal to, or greater than, this value.
OBJECT_CLASS (NOT NULL, VARCHAR2(255))
    Entity name, for example cell.
OBJECT_INSTANCE (NOT NULL, VARCHAR2(255))
    Entity instance name/local key, for example 12345.
PROBABLE_CAUSE (NOT NULL, VARCHAR2(255))
    Probable alarm cause, see Probable causes.
SEVERITY (NOT NULL, VARCHAR2(255))
    Alarm severity, see Severity.
ADDITIONAL_TEXT (NOT NULL, VARCHAR2(255))
    Any additional text.
NOTIFICATION_ID (NOT NULL, INTEGER)
    Alarm notification ID.
REPORT_ID (NOT NULL, INTEGER)
    Associated report ID.
DELAY_MINUTES (NOT NULL, INTEGER)
    Delay in minutes for data latency. For example, if data is usually loaded into the database 1 hour after the actual data timestamp, 60 needs to be entered.
ENABLED (NOT NULL, INTEGER)
    1 enables the monitor. Any other value will disable the monitor.

One row should be created for each alarm monitor.

Configure monitor

The following example uses SQL*Plus.
1. Connect to the database and configure the monitor, for example:
sqlplus virtuo/<PASSWORD>@VTDB << END
DELETE FROM DA_MONITOR_ENTITY;
DELETE FROM DA_MONITOR;
INSERT INTO DA_MONITOR(
  MONITOR_ID, MIN_PERCENT, OBJECT_CLASS,
  OBJECT_INSTANCE, PROBABLE_CAUSE, SEVERITY,
  ADDITIONAL_TEXT, NOTIFICATION_ID,
  REPORT_ID, DELAY_MINUTES, ENABLED)
VALUES(1, 50, 'BS', 'Dublin',
  'informationMissing', 'Major',
  'Base Station Dublin is probably down', 3, 1223, 0, 1);
COMMIT;
END
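To check the resulting configuration, a query along the following lines can be used (a sketch; adjust the column list as required):
sqlplus virtuo/<PASSWORD>@VTDB << EOF
SELECT monitor_id, min_percent, object_class, object_instance, enabled
FROM da_monitor;
EOF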

DA_MONITOR_ENTITY table

The following lists the table's parameters.


Table 61: DA_MONITOR_ENTITY table

MONITOR_ID (NOT NULL, INTEGER)
    Unique monitor ID defined for the monitor.
BLOCK_ID (NOT NULL, INTEGER)
    The block id as defined in the DA_BLOCKNAMES table.
ENTITY_NAME (NOT NULL, VARCHAR2(255))
    The entity name (e.g. 'Cell') of the entity to monitor, as defined in the WMN_ENTITY table.
LOCAL_KEY (NOT NULL, VARCHAR2(255))
    The unique identifier (e.g. CELL_ID) of the entity to monitor. For a given entity, the unique identifier column can be found under the FDN_ATTR column of the WMN_ENTITY table (it is usually <entity>_ID).

One row must be configured for each entity/block to monitor, for each alarm monitor. The entity name and local key parameters do not have to be the same as the object class and object instance. Usually, a high-level object (such as a BSC) will be configured for the alarm content, and a set of lower-level objects (such as cells) will be configured for the monitored blocks. Several rows with combinations of block Id, entity name, and local key are supported, provided each combination is unique.

Configure a monitor block

The following example uses SQL*Plus.
1. Connect to the database and configure the monitor block, or set of monitor blocks, for example:
sqlplus virtuo/<PASSWORD>@VTDB << END
DELETE FROM DA_MONITOR_ENTITY;
INSERT INTO DA_MONITOR_ENTITY(
  MONITOR_ID, BLOCK_ID, ENTITY_NAME, LOCAL_KEY)
VALUES (1, 2, 'Cell', '12345');
INSERT INTO DA_MONITOR_ENTITY(
  MONITOR_ID, BLOCK_ID, ENTITY_NAME, LOCAL_KEY)
VALUES (1, 3, 'Cell', '12345');
INSERT INTO DA_MONITOR_ENTITY(
  MONITOR_ID, BLOCK_ID, ENTITY_NAME, LOCAL_KEY)
VALUES (1, 2, 'BS', '12345');
INSERT INTO DA_MONITOR_ENTITY(
  MONITOR_ID, BLOCK_ID, ENTITY_NAME, LOCAL_KEY)
VALUES (1, 2, 'Cell', '12346');
COMMIT;
END

12.2.8

Generate data availability alarms

Data availability alarms are raised and cleared automatically on an hourly basis. Data availability alarms are cleared when data availability returns to a value equal to or greater than that set by the MIN_PERCENT parameter. Data availability alarms are normally executed using a cron job. By default this cron job processes data availability alarms every 15 minutes, from 14 minutes past the hour. The following is the default cron entry:
14,29,44,59 * * * * /appl/virtuo/bin/alarmapi_admin -da

The entry can be changed. Data availability alarms can also be generated manually. To manually generate a data availability alarm:
alarmapi_admin -da


Monitor interval

For each alarm monitor, the data availability alarm period is defined as:

[trunc_offset(now) - 1 hour - delay, trunc_offset(now) - delay[

Where:
now - the time the data availability alarm process starts (defined in a cron entry, by default at 14, 29, 44 and 59 minutes past the hour, each hour).
delay - the DELAY_MINUTES value specified in the DA_MONITOR table.
offset - the value of ibm.tivoli.tnpmw.alarms.alarmapi.da.intervalMinutes; trunc_offset(now) truncates now down to the previous offset boundary.

It is possible to define a different offset by changing the value of the property ibm.tivoli.tnpmw.alarms.alarmapi.da.intervalMinutes found in /appl/virtuo/conf/alarm_external_api/alarmapi.properties (see the example after the following table).

The following table gives example monitoring intervals for different offset settings, per-monitor delays, and Data Availability alarm processor wake-up times:
Table 62: Monitoring intervals

Offset (minutes)  Delay (minutes)  Wake-up  Monitoring Interval
60                0                14:14    [13:00-14:00[
60                30               14:14    [12:30-13:30[
60                180              14:14    [10:00-11:00[
60                0                14:29    [13:00-14:00[
60                30               14:29    [12:30-13:30[
60                180              14:29    [10:00-11:00[
15                0                14:14    [13:00-14:00[
15                30               14:14    [12:30-13:30[
15                180              14:14    [10:00-11:00[
15                0                14:29    [13:15-14:15[
15                30               14:29    [12:45-13:45[
15                180              14:29    [10:15-11:15[
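For example, to use a 15-minute offset, the property in /appl/virtuo/conf/alarm_external_api/alarmapi.properties would be set as follows (the value 15 is only an illustration, and standard Java properties syntax is assumed):
ibm.tivoli.tnpmw.alarms.alarmapi.da.intervalMinutes=15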

12.2.9

Log file

The log file for the Alarm API feature can be found at:
/appl/virtuo/logs/alarmapi/alarmapi*.log


12.2.10

Parameter values - lists

A number of parameters have defined values, including:
Event types
Probable causes
Trend Indications
Severity

Event types

Note: Tivoli Netcool Performance Manager may accept non-standard values, but alarms with non-standard values cannot be forwarded to SNMP recipients.

The following table lists all the possible values for event types.
Table 63: Event Types

Communications
QualityOfService
ProcessingError
Equipment
Environmental

Probable causes

The following table lists all the possible values for probable_cause.
Table 64:
Probable cause
adapterError airCompressorFailure airConditioningFailure airDryerFailure aIS antennaFailure applicationSubsystemFailture applicationSubsystemFailure authenticationFailure backplaneFailure bandwidthReduced bandwidthReducedX733 batteryChargingFailure batteryDischarging batteryFailure breachOfConfidentiality broadcastChannelFailure

Probable Causes
Probable cause
lossOfRedundancy lossOfSignal lossOfSignalX733 lossOfSynchronisation lowBatteryThreshold lowCablePressure lowFuel lowHumidity lowTemperatue lowWater materialSupplyExhausted memoryMismatch modulationFailure multiplexerProblem multiplexerProblemX733 nEIdentifierDuplication nonRepudiationFailure


Table 64:
Probable cause
cableTamper callEstablishmentError callSetUpFailure commercialPowerFailure communicationsProtocolError communicationsSubsystemFailure configurationOrCustomisationError configurationOrCustomizationError congestion congestionX733 connectionEstablishmentError coolingFanFailure coolingSystemFailure corruptData coruptData cpuCyclesLimitExceeded databaseInconsistency dataSetOrModemError dataSetProblem degradedSignal degradedSignalX733 delayedInformation demodulationFailure denialOfService diskFailure dteDceInterfaceError duplicateInformation enclosureDoorOpen enclosureDoorOpenX733 engineFailure equipmentIdentifierDuplication equipmentMalfunction excessiveBER excessiveErrorRate excessiveResponseTime excessiveRetransmissionRate excessiveVibration explosiveGas externalEquipmentFailure externalIFDeviceProblem externalPointFailure farEndReceiverFailure fileError fileErrorX733 fire fireDetected fireDetectorFailure flood framingError

Probable Causes
Probable cause
other outOfCPUCycles outOfHoursActivity outOfMemory outOfMemoryX733 outOfService outputDeviceError pathTraceMismatch payloadTypeMismatch performanceDegraded powerProblem powerProblems powerSupplyFailure pressureUnacceptable proceduralError processorProblem processorProblems protectingResourceFailure protectionMechanismFailure protectionPathFailure pumpFailure pumpFailureX733 queueSizeExceeded realTimeClockFailure receiveFailure receiveFailureX733 receiverFailure receiverFailureX733 rectifierFailure rectifierHighVoltage rectifierLowFVoltage reducedLoggingCapability remoteAlarmInterface remoteNodeTransmissionError remoteNodeTransmissionErrorX733 replaceableUnitMissing replaceableUnitProblem replaceableUnitTypeMismatch resourceAtOrNearingCapacity responseTimeExcessive retransmissionRateExcessive routingFailure sfwrDownloadFailure sfwrEnvironmentProblem signalLabelMismatch signalQualityEvaluationFailure smoke softwareError softwareErrorX733


Table 64:
Probable cause
framingErrorX733 frequencyHoppingFailure fuseFailure generatorFailure heatingVentCoolingSystemProblem highHumidity highTemperature highWind humidityUnacceptable iceBuildUp informationMissing informationModificationDetected informationOutOfSequence inputDeviceError inputOutputDeviceError intrusionDetection invalidMessageReceived iODeviceError keyExpired lanError leakDetected lineCardProblem localNodeTransmissionError localNodeTransmissionErrorX733 lossOfFrame lossOfFrameX733 lossOfMultiFrame lossOfPointer lossOfRealTimel

Probable Causes
Probable cause
softwareProgramAbnormallyTerminated softwareProgramError storageCapacityProblem storageCapacityProblemX733 synchronizationSourceMismatch systemResourcesOverload temperatureUnacceptable terminalProblem thresholdCrossed timeoutExpired timingProblem timingProblemX733 toxicGas toxicLeakDetected tranceiverFailure transmissionError transmiterFailure transmitFailure transmitFailureX733 transmitterFailure trunkCardProblem unauthorizedAccessAttempt unavailable underlayingResourceUnavailable underlyingResourceUnavailable unexpectedInformation ventilationsSystemFailure versionMismatch versionMismatchX733

Trend Indications

The following table lists all the possible values for trend_indication.

Table 65: Trend Indications

More Severe
Less Severe
No Change


Severity

The following table lists all the possible values for alarm_severity.

Table 66: Severity values

Cleared
Indeterminate
Critical
Major
Minor
Warning


13

The Summarizer and Summary Administration

Often there is a need to reuse data collected for a particular period multiple times, for example, a whole day's, week's, or month's worth of data. Traffic data aggregated over predefined time intervals (raw, hour, day, week, month) can be stored in the database for later use as normal KPIs. This aggregated data is called summary data and the process of collecting this data is called summarization. The summarization process is run once a day.

The summary_admin tool is used for creating, deleting, running, and exporting summary definitions, as well as setting the number of summary engine instances that can run concurrently.

Note: Do not run more than one of the following tools, or more than one instance of any of these individual tools, at the same time: techpack_admin, sbh_admin, summary_admin, kpicache_admin or report_impexp. For example, do not run summary_admin and sbh_admin, or two instances of summary_admin, at the same time.

This chapter describes the following:
The Summarizer
The summary_admin CLI tool
Configuring summary definitions


13.1

Summarizer
The Summarizer component and the summarization process support the following:

Processing of summaries can be switched on and off. If summaries are switched on they will run; if they are switched off they will not run.

A Summary log file is written to the appserver log file directory in $WMCROOT/logs/as/default. This is a log for summaries run by the scheduler.

The start day of the week can be set. The start day of the week can vary depending on the geographical location. The summary process allows you to set the start day of the week to the day the user requires.

The summarizer will summarize old loaded data automatically. When the summarizer process is started it detects whether old loaded data is loaded into the system. Data is defined as old loaded data if it is older than 1 day. If the old loaded data has already been summarized then it will be re-summarized: the old summary data is deleted and the new summary data is populated in the summary table. If the old loaded data is older than 1 week or 1 month it will cause weekly and monthly summaries to be re-calculated respectively.

13.1.1

Switching the summary process on or off

Switching the summaries on or off is done by setting a value in the wm_system_values_v view. The agent_admin tool runs the summary process but the summary process detects whether it is to process summaries or exit. See Agent Maintenance on page 79, for information on the agent_admin tool. Processing of summaries can be switched on or off by setting the ProcessSummary value in the wm_system_values_v view to either true or false. To view what the ProcessSummary value is set to, logon to sqlplus, and run the following sql statement.
SELECT * FROM wm_system_values_v WHERE name = 'ProcessSummary';

If the value is set to true, summaries will be run, if set to false summaries will not be run. To change the value for ProcessSummary run the following SQL statement.
UPDATE wm_system_values_v set value = 'TRUE' WHERE name = 'ProcessSummary'; Commit;

13.1.2

Summary Log file

A Summary log file is written to the appserver log file directory in $WMCROOT/logs/as/default. This is a log for summaries run by the scheduler; the name of the log file is as-summary.log.
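For example, to follow the scheduler summary log as it is written:
tail -f $WMCROOT/logs/as/default/as-summary.log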


Note: The log file for summaries run by the summary_admin CLI tool is located in: $WMCROOT/logs/summariser/summariser-server.log.

The information contained in the log file depends on the DEBUG level set for the application. Information can include:
Details on whether the summary process is switched on or off.
The start time for the summary process.
Details on summaries that have run in the past, including how many rows were processed by the summary.
The end time for the summary process.

The following example information is typical of information contained in the as-summary.log file.
10:09:17,431 INFO [summary.NewDataHandler] (Thread-123) Summary execution is switched on.

10:09:17,529 INFO [summary.NewDataHandler] (Thread-123) Summary engine execution has started at Thu Feb 15 10:09:17 GMT 2007 10:09:29,436 INFO [summary.SummaryQueryBuilder] (Thread-123) 0 rows were stored in table VNL_CELL_SDCCH_DSM for dates between Tue Feb 13 00:00:00 GMT 2007 and Wed Feb 14 23:59:59 GMT 2007 10:09:37,595 INFO [summary.SummaryQueryBuilder] (Thread-123) 100 rows were stored in table VNL_CELL_HO_CAUSE_DSM for dates between Tue Feb 13 00:00:00 GMT 2007 and Wed Feb 14 23:59:59 GMT 2007 10:10:11,326 INFO [summary.SummaryQueryBuilder] (Thread-123) 200 rows were stored in table VNL_CELL_HANDVR_RSLT_DSM for dates between Tue Feb 13 00:00:00 GMT 2007 and Wed etc .. 10:32:28,194 INFO [summary.NewDataHandler] (Thread-39) Summary engine execution has completed at Thu Feb 13 11:32:28 GMT 2007

Information for summaries can also be found in the summary_history table:


describe summary_history;
Name                            Null?      Type
------------------------------  ---------  --------------
SUMMARY_HISTORY_ID              NOT NULL   NUMBER
SUMMARY_ID                      NOT NULL   NUMBER
SUMMARY_INTERVAL                           CHAR(1)
FIRST_AVAIL                                DATE
LAST_AVAIL                                 DATE
LAST_RUN_START                             DATE
LAST_RUN_END                               DATE
LAST_RUN_RESULT                            NUMBER(6)
LAST_RUN_TEXT                              VARCHAR2(200)
LAST_SUCCESSFUL_RUN                        DATE
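For example, a sketch of a query to review recent summary runs, run as the virtuo database user (adjust the columns and ordering as needed):
SELECT summary_id, summary_interval, last_run_start, last_run_end, last_run_result
FROM summary_history
ORDER BY last_run_start DESC;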

13.1.3

Start day of week

The start day of the week can vary depending on where in the world you are located. The summary process allows you to set what day of the week the user wants to use as the start of the week. To view what the start day of the week is set to, log on to sqlplus and run the following SQL statement:
SELECT * FROM wm_system_values_v WHERE name = 'StartOfWeek';

The following list details which value corresponds to which day of the week:
MONDAY    1
TUESDAY   2
WEDNESDAY 3
THURSDAY  4
FRIDAY    5
SATURDAY  6
SUNDAY    7

The start of the week should be set when the system is installed and should not be altered afterwards. To set the value for StartOfWeek, issue the following command by using SQL*Plus:
UPDATE wm_system_values_v SET value = '1' WHERE name = 'StartOfWeek';
COMMIT;

13.1.4

Summary grace period

The summary service is normally scheduled to run every night, so when the day has finished (rolled over) the service will wait a certain amount of time (the 'grace period') before running all the summaries. This is to give the system time to collect all the necessary data from its data feeds. For example, if the summary service is scheduled to run at midnight (00:00), it would actually run at 02:00 if the grace period is set to 2. This value can be found in:
/appl/virtuo/conf/summaryservice/default.properties

and can be changed by setting the attribute:


com.vallent.pm.summaryservice.core.engine.

The default is 2. The values supported are 0 to 23.



13.1.5

Summarize old loaded data

The summarizer will summarize or re-summarize old loaded data automatically for Daily, Weekly, or Monthly data only. Raw or Hourly data is not summarized by the summarizer. When the summarizer process is started, it detects if old loaded data has been loaded onto the system. Daily, Weekly, or Monthly data is determined to be old loaded data if it is older than 1 day. If the old loaded data has already been summarized then it will be re-summarized: the old summary data is deleted and the new summary data is populated in the summary table. If the old loaded data is older than 1 week or 1 month it will cause weekly and monthly summaries to be re-calculated respectively. Summarizing old loaded data can be turned off, see Calculate-late-data tag.

13.2

summary_admin CLI

The summary_admin CLI tool is a ksh shell script located in the $WMCROOT/bin directory, and is used primarily to run one-off summaries. You use the summary_admin CLI tool to create (provision), export, delete, run, disable, or enable summary definitions. In addition, you can set the number of summary engine instances that can run concurrently.

13.2.1

Provision a summary

The create options allow the user to create a summary definition in one of two ways:
using parameters
using an XML file

Only simple summary definitions can be created using the parameters option, whereas both simple and complex definitions can be created using the XML option.

Note: Both options are mutually exclusive.

Create a summary using parameters


Usage: summary_admin -c -source <source_table|UDC> ( [-ir] [-sir] [-fr] [-ts] [-ignoreaggr] [-entity] [-udc] )

-c
    Mandatory. Use this switch to create a summary.
-source <source_table_name|UDC>
    Mandatory. Specify the source for the summary to be a traffic table, or all UDCs on the system.
[-ir <raw|hourly|daily|weekly|monthly>]
    Specify what intervals to create for the summary: raw, hourly, daily, weekly, or monthly. If the source is a traffic table then daily, weekly and monthly intervals are created by default. If the source is UDC then raw and hourly intervals are created by default.
[-sir <daily|raw>]
    Optional. The interval of an existing source summary table.
[-fr <DD-MM-YYYY>]
    Optional. The date that the summary is to run from.
[-ts <tablespace name>]
    Optional. An existing tablespace in which the summary will be created.
[-ignoreaggr <yes|no>]
    Optional. Ignore the average of average check.
[-entity <entity_level>]
    Optional. The entity level from which to create a UDC summary.
[-udc <udc_name>]
    Optional. A UDC to add to an existing summary.

Note: The ir option is used to identify the type of summary to create, while the sir option is used to identify the type of summary that will be used as a source for the summary creation. By default, the source (-sir) is raw performance tables.

Note: The sir option is to be used in cases where the user wants to create a summary based on an existing summary, i.e. create a weekly or monthly summary based on a daily summary. The sir and ir options are used in conjunction.

Note: The use of the ir option does not require the use of the sir option, as this option defaults to raw.

Note: It is not permissible to create a monthly summary based on a weekly summary.
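For example, a minimal sketch of provisioning summaries from a traffic table; the table name VNL_CELL_TCH is only an illustration, so use a table that exists on your system:
summary_admin -c -source VNL_CELL_TCH
summary_admin -c -source VNL_CELL_TCH -ir daily -fr 01-01-2011
The first command creates the default daily, weekly, and monthly intervals; the second creates only the daily interval, to be run from the given date.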

Create a summary by using an XML file


Usage: summary_admin -c -f <filename> [-ignoreaggr <yes|no>]

-c
    Mandatory. Use this switch to create a summary.
-f <filename>
    Mandatory. The filename and path of an XML file to use for creating the summary.
[-ignoreaggr <yes|no>]
    Ignore the average of average check.

Note: The following is an example of how to provision a summary using an XML file:
summary_admin -c -f /appl/virtuo/import/filename.xml


Note: By default summary definition provisioning during technology pack installation is switched off. If summary definition provisioning during technology pack installation is required then the following property must be set to true:
vallent.vmm.techpack.provision.summaries

This property is found in:


/appl/virtuo/conf/vmm/default.properties
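For example, the line in default.properties would then read (assuming standard Java properties syntax):
vallent.vmm.techpack.provision.summaries=true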

See the Tivoli Netcool Performance Manager: Installation Guide - Wireless Component, for information on technology pack installation and provisioning summary definitions.

13.2.2

Delete a summary definition


-delete <summary_name>
    Mandatory. The name of a summary to delete.
[-ir <raw|hourly|daily|weekly|monthly>]
    Optional. The interval of a summary to delete. If no interval is specified then all intervals are deleted.
[-udc <udc_name>]
    Optional. A UDC to delete from an existing summary.

Usage: summary_admin -delete <summary_name> [-ir <raw|hourly|daily|weekly|monthly>] [-udc <udc_name>]

Note: If the optional interval switch is omitted, then all three summaries (daily, weekly, monthly) will be deleted from the system.
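For example, to delete only the weekly interval of a summary (the summary name is illustrative):
summary_admin -delete VNL_CELL_TCH -ir weekly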

13.2.3

Run a provisioned summary


-r <summary_name>
    Mandatory. Use this switch and the summary name to run a summary.
[-ir <raw|hourly|daily|weekly|monthly>]
    The summary interval. Mandatory for raw and hourly intervals, optional otherwise. If no interval is specified, then daily, weekly, and monthly intervals are run.
-fr <DD-MM-YYYY>
    Start date for the summary.
-er <DD-MM-YYYY>
    End date for the summary.
-previous
    Optional. Runs the summary for the previous complete period.

Usage: summary_admin -r <summary_name> [-ir] -fr -er | -previous


To run a raw or hourly summary, you must specify the start and end date in the format dd-mm-yyyy hh:mm, for example 31-07-2011 00:00. To run a daily, weekly, or monthly summary, you can specify the start and end times in the format dd-mm-yyyy, for example 31-07-2011.

If the optional interval switch is omitted then the default response of the summary service is to execute the daily, weekly, and monthly summaries of the specified summary. To run a particular summary for a particular time period, all switch options must be used: interval, start date, and end date. However, to run the daily, weekly, and monthly summaries for a particular summary definition, omit the -ir (interval required) option, and specify the start -fr and end -er dates.

If a weekly summary is run that uses start and end dates that span two weeks (for example, assuming today's date is 10-Sept-07 and -fr = 9-AUG-07, -er = 16-AUG-07), then the default action of the summary service will be to run two summaries, one for the full week containing the start date (5-AUG-07 to 11-AUG-07), and another for the full week containing the end date (12-AUG-07 to 18-AUG-07). The same applies for the monthly option.

The -previous option can be used instead of specifying dates when running a summary. When you use the -previous option, the summary runs for the last complete period, depending on its interval. For example, if the current time and date is 01-07-2011 13:35, and you run the -previous option for a summary which is at an hourly interval, the date and time run in the summary is between 01-07-2011 12:00 and 01-07-2011 12:59. This also applies to raw summaries. In addition, daily summaries run for the previous day, weekly summaries run for the previous week, and monthly summaries run for the previous month.

Note: A summary will run even if the data set is incomplete. That is, if today's date is 23-Aug-07 and the start of the week is Sunday the 19-Aug-07, and the user runs:
summary_admin -r -ir weekly -fr 19-08-2007 -er 23-08-2007

then the summary will still run even though the week has not rolled over and the data set is incomplete for the week. A date in the future cannot be specified, and will return an error.
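For example, a sketch of running the hourly interval of a summary for a single hour; the summary name is illustrative, and the quotes are an assumption to keep each date-time argument as one shell argument:
summary_admin -r NOK_CELL_HANDOVERS -ir hourly -fr "01-07-2011 12:00" -er "01-07-2011 12:59"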

Note: On completion, the command will display a BUILD SUCCESSFUL message. The BUILD SUCCESSFUL message indicates that the summary run has started and returned successfully. It does not indicate if the summary run was successful. The summary run outcome is listed in the output.

13.2.4

Change the number of instances


-set
    Mandatory. Use this switch to set summary parameters.
-n
    The name of the parameter, for example, noin - the number of summary instances.
-v
    The new value of the parameter.

Usage: summary_admin -set -n noin -v <value>


To change the number of summary engine instances that can run concurrently use:
summary_admin -set -n noin -v 3

13.2.5

Export summary metadata


-e [<summary_name>]
    Mandatory. Use this switch to export summaries. If no summary name is specified then all summaries are exported.
-t <user|standard|techpack>
    Optional. The creational summary type. It cannot be used if the summary name is specified, or with the temporal type -ir.
-ir <daily|weekly|monthly>
    Optional. The temporal summary type. It cannot be used if the summary name is specified, or with the creational type -t.
-f <filename>
    Mandatory. The name of an XML file into which the metadata is exported.

Usage: summary_admin -e [ summary_name | -t | -ir ] -f <filename>

Note: For example: To export all the nok_trx_rx_quality metadata, that is, daily, weekly, monthly:
summary_admin -e nok_trx_rx_quality -f /appl/virtuo/export/nok_trx_rx_quality.xml

Note: To export the metadata of all daily provisioned summaries:


summary_admin -e -ir daily -f /appl/virtuo/export/daily_summaries.xml

The summary definition XML contains the XML for daily, weekly, and monthly summaries. When exporting 'daily' alone, the daily XML data is not extracted from the summary definition XML, the whole XML is exported. In this way, the daily part of the export command relates to the summary history. The example above would export all the provisioned summaries and would include summary history information related to daily summaries.

Note: To export the metadata of all provisioned summaries:


summary_admin -e -f /appl/virtuo/export/all_summaries.xml

To export the metadata of all standard provisioned summaries:


summary_admin -e -t standard -f /appl/virtuo/export/all_summaries.xml

Note: If the summary name is specified after the -e option, then the user cannot use either the -t or -ir option. The following is not valid:
summary_admin -e vnl_cell_tch -ir daily -f /tmp/vnl_cell_tch_export.xml

or
summary_admin -e vnl_cell_tch -t standard -f /tmp/vnl_cell_tch_export.xml


Note: If summaries are provisioned during technology pack installation they can be exported using the creational type techpack.

13.2.6

List summary definitions


-l
    Lists all provisioned summary definitions.
-l <summary_name>
    Lists the specified summary definition.

Usage: summary_admin -l [summary_name]
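For example (the summary name is illustrative):
summary_admin -l
summary_admin -l VNL_CELL_TCH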

13.2.7

Prioritize summaries

It is possible to prioritize the execution order of summaries to ensure the most important summaries can be processed first when scheduled jobs are run. The default priority is 99; this is the lowest priority. The highest priority is 1.
summary_admin -set -n <name> -t priority -v <priority_value>

-n
    Specify the <name> of the summary to prioritize.
-v
    Priority value, 1-99.

Examples:
summary_admin -set -n VNL_CELL_TCH -t priority -v 1

Sets the priority of the VNL_CELL_TCH to 1. Where a number of summaries have the same priority they are executed according to the summary type, simple before complex, and then by summary name sorted in ascending order. For example, if a number of summaries have the same priority, then the simple summaries are executed first, followed by the complex summaries. Within the simple and complex summaries, summaries are executed according to summary name in ascending order. For example, summary SummaryA is executed before SummaryB.

13.2.8

Enable a summary

By default, scheduled summaries are enabled so that they run as part of the scheduled summary run. You can run the following command to re-enable a disabled summary:
summary_admin -set -n <summary_name> -t enable -v true

-n
    The name of the summary to enable.

Example:
summary_admin -set -n NOK_CELL_RES_AVAIL# -t enable -v true

Enables the summary NOK_CELL_RES_AVAIL#.

13.2.9

Disable a summary

You can disable a scheduled summary so that it is not run as part of the scheduled summary run.
summary_admin -set -n <summary_name> -t enable -v false


-n

The name of the summary to disable.

Example:
summary_admin -set -n NOK_CELL_RES_AVAIL# -t enable -v false

Disables the summary NOK_CELL_RES_AVAIL#.


13.3 Configuring summary definitions

13.3.1 Overview

This section describes customizing summary definitions. You can only customize an existing summary definition. A summary definition will need to have been provisioned or created before it can be customized, see Provision a summary on page 183 for more information. To customize an existing summary you must first export the summary. For example: To export all the SIE_CELL_CCCH_CH summary metadata:
summary_admin -e SIE_CELL_CCCH_CH -f /appl/virtuo/export/SIE_CELL_CCCH_CH.xml

KPI naming conventions

The following convention is used to name KPIs:


Vendor.field_branch.group.field_name

For example, the raw counter Neutral.tch.blocks has:
Vendor=Neutral
field_branch=tch
no group, meaning it is a technology pack raw KPI
field_name=blocks

For example, the summary KPI Neutral.tch.daily.blocks has:
Vendor=Neutral
field_branch=tch
group=daily
field_name=blocks


13.3.2

Simple summary definition

A simple summary definition is essentially a mapping between a traffic table and a summary table. The following is an example summary definition XML file for a simple summary definition.
<Summaries>
  <summary>
    <name>VNL_CELL_TCH</name>
    <source>
      <table>VNL_CELL_TCH</table>
    </source>
    <summary-attributes>
      <intervals-required>
        <interval type="daily" source="raw"/>
        <interval type="weekly" source="daily"/>
        <interval type="monthly" source="daily"/>
      </intervals-required>
      <calculate-late-data>true</calculate-late-data>
      <summary-type>standard</summary-type>
    </summary-attributes>
    <enabled>true</enabled>
  </summary>
</Summaries>

Name tag

The summary name is specified by the <name> field:


<name>VNL_CELL_TCH</name>

Table tag

The related source table used to calculate the summary is specified by the <table> field, for example:
<source>
  <table>VNL_CELL_TCH</table>
</source>

For each raw KPI associated with each column of a source traffic table, a summary KPI will be created. If, for example, a raw KPI is named Neutral.tch.blocks, then the associated summary KPI will be named Neutral.tch.daily.blocks.

Intervals tag

A summary can be defined for three intervals and their source intervals:
<intervals-required>
  <interval type="daily" source="raw"/>
  <interval type="weekly" source="daily"/>
  <interval type="monthly" source="daily"/>
</intervals-required>

In our simple summary definition example, the above configuration will create the following summary tables:
VNL_CELL_TCH_DSM based on VNL_CELL_TCH_TAB
VNL_CELL_TCH_WSM based on VNL_CELL_TCH_DSM
VNL_CELL_TCH_MSM based on VNL_CELL_TCH_DSM

Note: Daily summaries can only be based on raw data. Deriving weekly and monthly summaries from raw data will impair performance. If weekly and/or monthly intervals are specified and they depend on daily, then daily needs to be specified first.

It is possible to specify what intervals should be created, by removing <interval> tags.

Calculate-late-data tag

The following tag is used to control late data calculation for daily, weekly and monthly data.
<calculate-late-data>true</calculate-late-data>

If the tag is set to true, late data calculation is enabled; if it is set to false, late data calculation is disabled. This setting is used only when summarization is run as scheduled summarization; it is ignored when running summarization using the summary_admin command line tool.

The summarizer will re-summarize or summarize old loaded data automatically. When the summarizer process is started it detects if old loaded data has been loaded onto the system. Data is determined to be old loaded data if it is older than 1 day. If the old loaded data has already been summarized then it will be re-summarized: the old summary data is deleted and the new summary data is populated in the summary table. If the old loaded data is older than 1 week or 1 month it will cause weekly and monthly summaries to be re-calculated respectively.

The following SQL command can be used to update the setting in a summary that is already in the system, without the need to export, delete, and re-provision the summary:
update summary_definition set definition = updateXML(definition, '//calculate-late-data/text()', 'false') where summary_definition.summary_name = 'VNL_CELL_TCH';
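To verify the change, a query along these lines can be used (a sketch; it assumes the definition column is stored as XMLType, as the updateXML call above implies):
SELECT extractValue(definition, '//calculate-late-data') FROM summary_definition WHERE summary_name = 'VNL_CELL_TCH';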

Status tag

If the summary definition is an exported (unloaded) definition it also contains the summary execution status for each of the interval types defined for the summary, for example:
<status>
  <interval>weekly</interval>
  <first-available>2008-09-07</first-available>
  <last-available>2008-09-14</last-available>
  <last-run>
    <start-time>2008-10-20T16:16:38.000+01:00</start-time>
    <end-time>2008-10-20T16:16:38.000+01:00</end-time>
    <last-run-success>2008-10-20T16:16:38.000+01:00</last-run-success>
    <result>1</result>
    <reason>Successful run 64 row(s) updated</reason>
  </last-run>
</status>

In the summary status it lists the last results, numbers of rows updated and the start and end time of the last occasion the summary was run.


13.3.3

Complex summary definition

A complex summary is essentially a mapping between individual KPIs and summary KPIs. Complex summaries based on raw or hourly data are available from Tivoli Common Reporting or directly from the database, they are not available in the Tivoli Netcool Performance Manager UI. You must manually create a complex summary definition by creating an XML file with content that conforms to the structure of the following example complex summary definition XML file.
<Summaries>
  <summary>
    <name>NOK_CELL_HANDOVERS</name>
    <source>
      <entity>Cell</entity>
      <field-list>
        <field>
          <source-field>
            <entity>Cell</entity>
            <field-name>Nokia.Handovers.bsc_i_att_hscsd</field-name>
            <tp-field-id>vydglg6ahk26sec6000hw01qk4</tp-field-id>
          </source-field>
          <dest-field/>
          <aggregators>
            <field-aggregator>A</field-aggregator>
            <field-aggregator>Z</field-aggregator>
            <field-aggregator>M</field-aggregator>
          </aggregators>
        </field>
      </field-list>
    </source>
    <summary-attributes>
      <intervals-required>
        <interval type="raw" source="raw"/>
        <interval type="hour" source="raw"/>
        <interval type="daily" source="raw"/>
        <interval type="weekly" source="daily"/>
        <interval type="monthly" source="daily"/>
      </intervals-required>
      <calculate-late-data>true</calculate-late-data>
      <summary-type>tech pack</summary-type>
    </summary-attributes>
    <enabled>true</enabled>
  </summary>
</Summaries>


Entity tag

This specifies the focal entity of the summary.


<entity>BSC</entity>

In the previous example summary definition XML file, the first instance of <entity> can be set to BSC (as in the snippet above) to enable entity rollup to BSC level.

Source-field tag

This tag specifies information about an existing source KPI, in most cases a technology pack KPI such as a PEG or PCALC, but UDCs can also be used. For summaries with raw or hourly intervals, PEGs cannot be used. Nested elements are given in the following table:
Table 67: Source-field tag nested elements

entity
    The entity of the KPI. Example: Cell
field-name
    The full name of the KPI (including vendor, branch, group (optional) and name). Example: Neutral.tch.blocks
tp-field-id
    Optional. The UUID of the KPI as provided by the technology pack. Example: vydglg6ahk26sec6000hw01qk4

UDCs do not have a UUID, so the tp-field-id can be omitted for UDCs.

Aggregators tag

This tag is used to create one summary KPI per aggregator, per source KPI. This will configure the system to aggregate the same raw value (defined by the source-field tag) using different aggregation functions (avg, sum, min, and so on). The following table lists all aggregators.
Table 68: Aggregators

Code  Name     Entity aggregation  Time aggregation
T     AvgNull  avg                 nil
G     AvgMax   avg                 max
R     AvgMin   avg                 min
F     AvgSum   avg                 sum
A     Average  avg                 avg
c     Count    count               count
C     Max      max                 max
L     MaxNull  max                 nil
V     MaxAvg   max                 avg
Q     MaxSum   max                 sum
H     MaxMin   max                 min
E     MinAvg   min                 avg
O     MinMax   min                 max
D     MinSum   min                 sum
M     Min      min                 min
P     MinNull  min                 nil
Y     NullAvg  nil                 avg
X     NullMin  nil                 min
W     NullSum  nil                 sum
N     NULL     nil                 nil
Z     NullMax  nil                 max
B     SumNull  sum                 nil
K     SumMax   sum                 max
J     SumAvg   sum                 avg
I     SumMin   sum                 min
S     Sum      sum                 sum
Configuring specific aggregators

In the complex summary definition given here:


<field>
  <source-field>
    <entity>Cell</entity>
    <field-name>Nokia.Handovers.bsc_i_att_hscsd</field-name>
    <tp-field-id>vydglg6ahk26sec6000hw01qk4</tp-field-id>
  </source-field>
  <dest-field/>
  <aggregators>
    <field-aggregator>A</field-aggregator>
    <field-aggregator>Z</field-aggregator>
    <field-aggregator>M</field-aggregator>
  </aggregators>
</field>

The three summary KPIs created will be the following:


Table 69: Example Summary KPIs - using specific aggregators

Summary KPI full name                          uuid                                Time Agg function  Entity Agg function
Nokia.Handovers.daily.avgavg_bsc_i_att_hscsd   AVGAVG_vydglg6ahk26sec6000hw01qk4   average            average
Nokia.Handovers.daily.nilmax_bsc_i_att_hscsd   NILMAX_vydglg6ahk26sec6000hw01qk4   max                nil
Nokia.Handovers.daily.minmin_bsc_i_att_hscsd   MINMIN_vydglg6ahk26sec6000hw01qk4   min                min


Configuring the default aggregator


<field-list>
  <field>
    <source-field>
      <entity>Cell</entity>
      <field-name>Nokia.Handovers.bsc_i_att_hscsd</field-name>
      <tp-field-id>vydglg6ahk26sec6000hw01qk4</tp-field-id>
    </source-field>
    <dest-field/>
    <aggregators/> <!-- empty means use the default aggregator -->
  </field>
</field-list>

If the aggregators tag is empty, then the summary KPI uses the default aggregator of the raw KPI. Assuming S is the default aggregator, the following properties would be applied.
Table 70: Example Summary KPI - using default aggregator

Summary KPI full name                  uuid                         Time Agg function  Entity Agg function
Nokia.Handovers.daily.bsc_i_att_hscsd  vydglg6ahk26sec6000hw01qk4   sum                sum

This is especially useful for summaries based on PCalcs. For PEGs the default aggregator is not specified in SummaryInstance.xml files. This is to avoid possible duplication with the default PEG summaries that are created when a technology pack is automatically installed; see the Tivoli Netcool Performance Manager: Installation Guide - Wireless Component, for more information.

Note: Users who manually modify a SummaryInstance*.xml file should ensure no duplication of the default aggregators takes place. This would cause summary KPI naming clashes, unless the <dest-field> tag is overridden in one of the duplicates.

Configuring specific aggregators and a default aggregator

If both a default aggregator and specific aggregators are needed, then two <field> tags with the same <source-field> can be used, for example:
<field-list>
  <!-- specific aggregators here -->
  <field>
    <source-field>
      <entity>Cell</entity>
      <field-name>Nokia.Handovers.bsc_i_att_hscsd</field-name>
      <tp-field-id>vydglg6ahk26sec6000hw01qk4</tp-field-id>
    </source-field>
    <dest-field/>
    <aggregators>
      <field-aggregator>A</field-aggregator>
      <field-aggregator>Z</field-aggregator>
      <field-aggregator>M</field-aggregator>
    </aggregators>
  </field>
  <!-- default aggregator here -->
  <field>
    <source-field>
      <entity>Cell</entity>
      <field-name>Nokia.Handovers.bsc_i_att_hscsd</field-name>
      <tp-field-id>vydglg6ahk26sec6000hw01qk4</tp-field-id>
    </source-field>
    <dest-field/>
    <aggregators/>
  </field>
</field-list>

dest-field tag

The <dest-field> tag is used to avoid name clashes that can occur between simple and complex summary KPIs. The following elements can be nested inside a <dest-field> tag; they are all optional:
<dest-field>
  <vendor>MyVendor</vendor>
  <field-branch>myBranch</field-branch>
  <field-name>myNewKpiName</field-name>
</dest-field>

In this example, the summary KPI would be named:


MyVendor.myBranch.daily.myNewKpiName

A custom uuid would be generated in the form VMMxxxxxxxxxx, since at least one component of the destination field was customized. Customization of the dest-field can be combined with customization of aggregators. The example in the previous section would now give the following summary KPIs:
Table 71: Summary KPIs - using destination and aggregator customization

Summary KPI full name                        uuid                      Time Agg function  Entity Agg function
MyVendor.myBranch.daily.avgavg_myNewKpiName  AVGAVG_VMM1234567890123   average            average
MyVendor.myBranch.daily.nilmax_myNewKpiName  NILMAX_VMM1234567890123   max                nil
MyVendor.myBranch.daily.minmin_myNewKpiName  MINMIN_VMM1234567890123   min                min


Intervals-required tag

A summary can be defined for five intervals and their source intervals:
<intervals-required>
  <interval type="raw" source="raw"/>
  <interval type="hour" source="raw"/>
  <interval type="daily" source="raw"/>
  <interval type="weekly" source="daily"/>
  <interval type="monthly" source="daily"/>
</intervals-required>

It is possible to specify what intervals should be created, by removing <interval> tags. Note: Raw, Hourly, and Daily summaries can only be based on raw data. Deriving weekly and monthly summaries from raw data will impair performance. If weekly or monthly intervals are specified and they depend on a daily interval, then the daily interval needs to be specified first.

13.3.4

Ignoring Data Availability

Summaries are executed each day based on Data Availability. The summary will run if there is data loaded for a given period, regardless of the amount of data. No percentage threshold of data availability is used. If there is no data present for the period being summarized, then the summary will not be initiated. However, summaries can be run to ignore Data Availability calculations, by executing the summaries on the command line.


14 Technology pack administration tools

14.1 The techpack_admin tool

The techpack_admin tool applies standard technology packs to a Tivoli Netcool Performance Manager system. This tool is found in $WMCROOT/bin.

Note: Do not run more than one of the following tools, or more than one instance of any of these individual tools, at the same time: techpack_admin, sbh_admin, summary_admin, kpicache_admin or report_impexp. For example, do not run summary_admin and sbh_admin, or two instances of summary_admin, at the same time.

14.1.1

Usage
Usage: techpack_admin -parameters

-a
    Mandatory: Apply new TechPack modules.
-e dependencies
    Mandatory: Export the UDC's and reports that are dependent on a tech pack.
    -d DirName        Mandatory: The Dir where the files will be exported to.
    -n TechPackName   Mandatory: The name of the TP in "" quotes.
-p
    Mandatory: Patch an already installed TP.
    -d DirName        Mandatory: The root dir where the tech pack patch files are located under, e.g. using -d $WMCROOT/admin/techpacks/<techpack_dir> applies the patch from the following dir: $WMCROOT/admin/techpacks/<techpack_dir>/<version>/patches/metalayer.
    -n TechPackName   Mandatory: The name of the TP in "" quotes.
    -v Version        Mandatory: The version of the TP to be patched.
-l installed|uninstalled|audit [-n TechPackName]
    Mandatory: List TechPack modules.
    [-n TechPackName] Optional: The name of the TP in "" quotes, for use with audit.
-u techpack
    Mandatory: Uninstall a named tech pack module.
    -n TechPackName   Mandatory: The name of the TP in "" quotes.
-h
    Help option.


Note: Before running the techpack_admin tool, ensure that the as process is running by running the command sap disp as The status of the as process should be STARTED. If it is not, run the sap start as command.

14.2

Applying a technology pack

The -a option applies technology pack modules to a system. The technology pack modules to apply are read from the $WCMROOT/admin/techpacks/new_techpacks file. The new_techpacks file is created automatically by the installation of technology pack modules on the system. Technology pack modules are installed in $WMCROOT/admin/techpacks. The new_techpacks file lists the name and version of each technology pack to be applied. An example of a new_techpacks file is:
Neutral_GSM_BSS_NSS_GOM 1.0 Neutral_GSM_Core 1.0 Ericsson_GSM_BSS_R10 1.0

The technology packs are applied in the order in which they appear in the new_techpacks file. If all technology packs are successfully applied, the new_techpacks file will be deleted. If the application of any technology pack fails, the techpack_admin tool will stop applying any further technology packs and will roll back the DML application of any previous technology packs that have been successfully applied.
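For example, once the technology pack modules are installed and the new_techpacks file is in place, the listed packs are applied with (a sketch; no additional arguments are assumed):
techpack_admin -a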

14.2.1

Memory for Java client processes

If you are installing large technology packs, it is strongly recommended that you set the ANT_OPTS environment variable to a value of 1G prior to installation. To set the ANT_OPTS environment variable: 1. Execute the following command:
export ANT_OPTS="-Xmx1G"

2. After technology pack installation completes successfully, reset the ANT_OPTS variable to its original value:
unset ANT_OPTS

Note: You do not need to stop and re-start the application server after you reset the ANT_OPTS value.


14.3

Exporting lists of dependencies

The -e option is used to export lists in .csv format of all UDCs, reports and report templates that are dependent on a technology pack. This information is useful when uninstalling a technology pack. For example:
techpack_admin -e dependencies -d <DirName> -n "Nokia GSM BSS"

where <DirName> is the directory to write the .csv files to. In this example, all dependent UDCs, reports and report templates, will be listed in the files Nokia_GSM_BSS_udcs.csv and Nokia_GSM_BSS_reports.csv in the directory specified in the -d option.

14.4

Patching a technology pack

The -p option is used to apply a patch to an existing technology pack. For example:
techpack_admin -p -d <DirName> -n <TechPackName> -v <Version>

where:
<DirName> is the root directory under which the technology pack patches are located. For example, using -d $WMCROOT/admin/techpacks/<techpack_dir> applies the patch from the directory $WMCROOT/admin/techpacks/<techpack_dir>/<version>/patches/metalayer.

<TechPackName> is the name of the technology pack to be patched, in quotes.

<Version> is the version number of the technology pack to be patched.

For example:

techpack_admin -p -d $WMCROOT/admin/techpacks/Nokia GSM BSS -n "Nokia GSM BSS" -v 3.1

14.5

Listing technology pack modules


The -l option lists the technology pack modules which are:

installed - provides a list of technology packs applied to the system, UDCs, and all data related to the applied technology packs.

uninstalled - provides a list of technology packs that have been uninstalled from the system.

audit [-n TechPackName] - returns an audit log with information on a technology pack, including information on any customizations and entities that were customized. The technology pack name is optional and must be in quotes.

For example:
techpack_admin -l audit -n "Nokia GSM BSS NetAct OSS3.1 ED3"


14.6 Uninstalling a technology pack, and loaders

14.6.1 Technology pack

The -u option uninstalls a technology pack. Technology pack modules are installed in $WMCROOT/admin/techpacks.

To uninstall a technology pack, as user virtuo:
1. List the technology packs that are installed on the system:
techpack_admin -l installed

Make sure the technology pack you are going to uninstall has no dependencies. All dependent technology packs must be uninstalled beforehand. See Dependent technology packs on page 205, for information on dependent technology packs.
2. Uninstall the technology pack:
techpack_admin -u techpack -n "<TechPackName>"

For example:
techpack_admin -u techpack -n "Ericsson UMTS UTRAN R3.0"

3. You will be asked to confirm the uninstallation. The uninstall may take several minutes; do not interrupt the uninstall.

Data dictionary

When a technology pack is uninstalled the data dictionary is disabled. When a technology pack is reinstalled, as is the case when upgrading a technology pack, the data dictionary is automatically enabled and run. If it is necessary to permanently delete a technology pack and not reinstall it, then after the technology pack has been uninstalled, the data dictionary must be re-enabled and run. To do this:
agent_admin -u sysadm -p <password> -enable <DataDictionaryid> agent_admin -u sysadm -p <password> -run <DataDictionaryid>

14.6.2 Removing associated loaders

To remove loaders for a technology pack, enter the following command:


loader_admin -delete -techpack <TechPackName> -tpversion <TechPack Version>

where:
<TechPackName> is the name of the technology pack in "" quotes.
<TechPack Version> is the version of the technology pack in "" quotes.

For example:
loader_admin -delete -techpack "Ericsson UMTS UTRAN" -tpversion "1.0"


Note: The loader_admin -delete command replaces the techpack_admin -u loader command used in previous versions of Tivoli Netcool Performance Manager.

14.6.3 Removing the Datasource

Delete the datasources associated with the technology pack using lcm_admin -delete. For details, see Delete Datasources and NC Relations on page 152.
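For illustration, a sketch of the delete command, where the file name shown is hypothetical and should be replaced with the datasource*.xml file supplied in the technology pack loadmaps directory:

lcm_admin -delete datasource_nokia_bss.xml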

14.6.4 Dependent technology packs

A number of technology packs define top-level concepts that are not specific to any equipment vendor implementation. For example, concepts like 'Wireless CELL' or 'Access Point'. These technology packs are GOM (Global object model) technology packs, and they are used by other dependent technology packs. You cannot delete a technology pack used by another technology pack. You can use the techpack_admin -l installed command to display dependencies between technology packs. The following example output illustrates this:
[moduleInventory] TechPack:    UMTS (1.0.20)
[moduleInventory] Module:      Ericsson UMTS UTRAN R3.0
[moduleInventory] Release:     1.0
[moduleInventory] Technology:  UMTS
[moduleInventory] Subsystem:   RAN
[moduleInventory] Vendor:      Ericsson (ERI)
[moduleInventory] Installed:   6 Mar 2007 16:57:41
[moduleInventory] Requires:    Neutral GSM BSS/NSS GOM (1.0)
[moduleInventory]              Neutral GPRS BSS GOM (1.0)
[moduleInventory]              Neutral UMTS UTRAN GOM (1.0)
[moduleInventory]              Neutral GPRS/UMTS CN GOM (1.0)
[moduleInventory]              Neutral Core GOM (1.0)

In the example above you cannot uninstall Neutral GSM BSS/NSS GOM without first uninstalling Ericsson UMTS UTRAN R3.0.
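A quick way to scan for dependencies from the command line is to filter the inventory output. Treat this as a sketch: the number of context lines (-A5) is arbitrary, and the exact output format may vary between versions:

techpack_admin -l installed | grep -A5 "Requires:"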

14.7 Displaying help

The -h option displays the usage of the tool.


14.8 Upgrading technology packs

14.8.1 Introduction

Be aware of the potential effects of installing a technology pack: see Effects of a technology pack upgrade.
Be aware of the following unsupported scenario: see Unsupported upgrade scenario.
Follow the instructions in the following section: Upgrading or reinstalling installed technology packs.

Before starting a technology pack upgrade:
Test all technology pack upgrades intended for production environments on a mirrored environment.
Read all the following sections.

Note: When upgrading a technology pack, shut down any associated loaders for the period of the upgrade.

The technology pack upgrade requires a two-step uninstall/install process as detailed below. This will not remove any data loaded up to the time of the upgrade.

Technology packs are identified by:
The technology pack name and release.
The datasource name and version.

To decide which of the following upgrade scenarios is required, find the name and release of the installed technology packs (including the loadmap datasource name and version), and the same for the new technology packs to be installed.

For an installed technology pack:
1. Enter the following command as user virtuo to find the technology pack name and release of an installed technology pack:
techpack_admin -l installed

2. Enter the following command as user virtuo to find the associated datasource name and version of an installed technology pack:
lcm_admin -listdatasources

For a new technology pack deployed to the filesystem:
The name and release of a new technology pack deployed to the filesystem in $WMCROOT/admin/techpacks can be found in the main.xml file included in the metalayer directory of the technology pack.


For example:
<Module name="Nokia GSM BSS" release="1.0" compatible_release="1.0" vendor="Nokia" technology="GSM" subsystem="RAN" vendor_version="OSS3.1 ED3" techpack="GSM" techpack_version="1.1.2"/>

The loadmap datasource name and version can be found in the datasource*.xml file in the loadmaps directory in the technology pack directory under $WMCROOT/admin/techpacks. For example:
<DataSource Name="Nokia BSS" Technology="GSM" Type="Vendor" Vendor="Nokia" Version="OSS3.1 ED3"/>
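One way to pull these values from all deployed technology packs at once is to search the deployed files with standard shell tools. This is a sketch only; adjust the search root if your packs are deployed elsewhere:

find $WMCROOT/admin/techpacks -name main.xml -exec grep -H "Module name" {} \;
find $WMCROOT/admin/techpacks -name "datasource*.xml" -exec grep -H "DataSource Name" {} \;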

14.8.2 Effects of a technology pack upgrade

You should be aware of the potential effects on your system of upgrading a technology pack:
Data Availability configuration
Technology pack definitions
Complex summary definitions that depend on daily computations
Migrating Alarms

Data Availability configuration
If you want to retain your existing Data Availability configuration, export your Data Availability configuration before the upgrade and import it after the upgrade.

Technology pack definitions
A technology pack upgrade can contain additional or modified busy hours and summary definitions, but these default definitions are not automatically installed or upgraded. Check the technology pack readme file for any information that could affect reports, UDCs, or busy hour and summary definitions, for example, renamed counters. If the default technology pack definitions included with the technology pack have changed, or if you have made changes to your definitions, such as adding counters or changing aggregators, you must reconcile the definitions after the upgrade and manually merge any differences. You must also handle all custom-defined reports, UDCs, busy hours and summary definitions in the same way as the default definitions defined in the technology pack.

Complex summary definitions that depend on daily computations
You may have configured weekly or monthly complex summary definitions that depend on daily computations. If the original KPI type is average aggregation, the weekly values would be re-averaged after the upgrade, with unexpected results. You must use the -ignoreaggr option when provisioning this type of summary. The normal procedure of backing up all summaries to one .XML file before upgrade and re-using the same .XML file will not work unless you use the -ignoreaggr option.


Migrating Alarms
The sequence of steps when upgrading a technology pack is different if you are migrating alarms data. To determine the ruleset_ids for the old technology pack, you must run the migratealarms tool, see Using the migratealarms tool on page 211, after the new version of the techpack is installed, and before the loaders are deleted.

14.8.3 Unsupported upgrade scenario

If the technology pack being upgraded contains loadmaps where the datasource name and version match the old datasource name and version but the technology pack name and release version are different, an error will occur. This is because you are attempting to associate a loader with more than one technology pack, or with a different version of the same technology pack, which is not supported.

14.8.4 Upgrading or reinstalling installed technology packs

This section covers the following scenarios:
The technology pack release and datasource version are different from those of the installed technology packs.
The technology pack release is different but the datasource version is the same as the installed technology packs.
Reinstalling existing technology packs.

To upgrade or reinstall installed technology packs:
1. Check which loaders are running by using the following command as user virtuo:
sap disp

2. Stop all loaders by using the following command for each loader process as user virtuo:
sap stop <loader name>

Note: If all loaders are not stopped when you try to upgrade the technology pack, the NC tables are not updated.
3. If you are reinstalling existing technology packs, back up existing loadmaps to allow the new technology packs to be deployed correctly. Back up and move the technology pack directories in $WMCROOT/admin/techpacks to $WMCROOT/admin/techpacks/backup.
4. Unload existing loadmaps and loadmap customizations. Execute a full loadmap dump by running the following command:
lcm_admin -unload backup_loadmaps.xml -datasource <datasource_name> -dsversion <datasource_version> -techpack <techpack_name> -tpversion <techpack_version> -type <Neutral or Vendor>


Back up any loadmap customizations by running the following command:


lcm_admin -unloadcustom custom_loadmaps.xml -datasource <datasource_name> -dsversion <datasource_version> -techpack <techpack_name> -tpversion <techpack_version> -type <Neutral or Vendor>
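For illustration, here are the same commands fully substituted with the Nokia GSM BSS example values shown earlier in this chapter (your datasource and technology pack names and versions will differ):

lcm_admin -unload backup_loadmaps.xml -datasource "Nokia BSS" -dsversion "OSS3.1 ED3" -techpack "Nokia GSM BSS" -tpversion "1.0" -type Vendor
lcm_admin -unloadcustom custom_loadmaps.xml -datasource "Nokia BSS" -dsversion "OSS3.1 ED3" -techpack "Nokia GSM BSS" -tpversion "1.0" -type Vendor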

5. To avoid conflicts, remove or rename the technology pack .remove scripts in $WMCROOT/admin/software that have the same name as the technology packs you are about to install.
6. Read the new technology pack readme file for specific instructions for the new technology pack.
7. Export and remove Stored Busy Hours (SBH) and Complex Summary definitions. For more information on how to do this, see Stored Busy Hour (SBH) Administration tool on page 155 and summary_admin CLI on page 183. Ensure that all exports and removals are complete before proceeding to uninstall the technology packs.
8. Uninstall the technology packs in reverse order of installation or according to their dependencies.
9. Run the following command to retrieve the list of technology packs installed. The dependencies between technology packs are also listed.
techpack_admin -l installed

Note: A technology pack cannot be uninstalled if it has dependent technology packs installed.
10. Run the following command to verify that BUSYHOUR and SUMMARY tasks are not running before you disable them. If tasks are running, allow them to complete before proceeding.
agent_admin -u sysadm -p <sysadm_password> -list current ...
12:48:55,055 INFO [AgentTool] Login...
Activity Run Type Data Source State Cancel Active Start End Data Start Data End Attempt Retry Entity Label
8 182 DICTIONARY <server>.com-rs DISABLED N 2009-05-15 12:26:00 null null null null 1 N null Data dictionary import
10 177 BUSYHOUR <server>.com-rs DISABLED N 2009-05-15 12:41:00 null null null null 1 N null Busy Hour calculation
2 353 PROCEDURE <server>.com SCHEDULED N 2009-05-15 12:50:00 null null null null 1 N null Temporary report and schedule cleanup
4 2112 SWEEPER <server>.com SCHEDULED N 2009-05-15 12:55:00 null null null null 1 N null Unused file deletion
5 352 LDAP_SYNC <server>.com SCHEDULED N 2009-05-15 12:59:00 null null 2009-04-30 19:26:52 null 1 N null Ldap synchronization
9 178 SUMMARY <server>.com-rs DISABLED N 2009-05-15 13:05:00 null null null null 1 N null Summary computations
1 354 PROCEDURE <server>.com SCHEDULED N 2009-05-15 13:35:00 null null null null 1 N null Agent activity cleanup
3 15 PROCEDURE <server>.com SCHEDULED N 2009-05-16 02:00:00 null null null null 1 N null Datasource cleanup

11. Note the activity ID of the BUSYHOUR and SUMMARY agents. In the above example, the BUSY HOUR ID is 10 and the SUMMARY ID is 9. Disable them as follows:
agent_admin -u sysadm -p <sysadm_password> -disable <BUSYHOUR ID>
agent_admin -u sysadm -p <sysadm_password> -disable <SUMMARY ID>
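For the example output shown in step 10, where the BUSYHOUR activity ID is 10 and the SUMMARY activity ID is 9, the commands would be:

agent_admin -u sysadm -p <sysadm_password> -disable 10
agent_admin -u sysadm -p <sysadm_password> -disable 9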


12. To uninstall the list of technology packs, run the following command for each technology pack installed:
techpack_admin -u techpack -n "<old_techpack_name>"

13. Deploy the new set of technology packs using the procedure in section 7.4, Installing the Technology Pack Step 1, of the Tivoli Netcool Performance Manager: Installation Guide - Wireless Component.
14. If you are upgrading a set of installed technology packs with a new technology pack version but the same datasource name and version, complete this step if you want to keep your existing loadmaps. Otherwise, skip to Step 15. To keep your existing loadmaps, move the newly deployed loadmaps out of the $WMCROOT/admin/techpacks/<techpack>/<release>/loadmaps directory so that they are not automatically installed when you apply the technology packs in Step 15.
15. Apply the technology packs by running the following command as user virtuo:
techpack_admin -a

16. Check the following logs for errors or failure messages:


$WMCROOT/logs/vmm/vmm-server.log*

17. Check the unloaded loadmap customizations for reapplying, if necessary. This may not be required as customizations may be included in the new loadmap by default.
18. If alarms are provisioned on the old technology pack, run the migratealarms tool to migrate all alarms definitions and states from the old technology pack; otherwise continue to the next step.
19. If you are upgrading technology packs with a different datasource version, list the loaders and, if required, delete any unwanted older versions of the loader:
sap disp
loader_admin -delete -techpack <old_techpack_name> -tpversion <old_techpack_version>
lcm_admin -delete <datasource_xml>

Note: The loader_admin -delete command replaces the techpack_admin -u loader command used in previous versions of Tivoli Netcool Performance Manager.
20. If you unloaded Summary Definitions when uninstalling the old version of the technology pack, reload them using the -ignoreaggr option. Reload the exported SBH definitions also.
Note: Any import failures due to technology pack upgrade changes must be corrected in the definitions and the definitions re-imported.
21. As user root, run the following commands:
Solaris
svcs sapmgr-na
svcadm disable sapmgr-na
svcs sapmgr-na


svcadm enable sapmgr-na
svcs sapmgr-na

Linux
service sapmgrvirtuo status
service sapmgrvirtuo stop
service sapmgrvirtuo status
service sapmgrvirtuo start
service sapmgrvirtuo status

AIX
/etc/rc.d/init.d/sapmgrvirtuo status
/etc/rc.d/init.d/sapmgrvirtuo stop
/etc/rc.d/init.d/sapmgrvirtuo status
/etc/rc.d/init.d/sapmgrvirtuo start
/etc/rc.d/init.d/sapmgrvirtuo status

22. If multiple instances of a loader are required, refer to Configuring multiple identical loaders in the Tivoli Netcool Performance Manager: Installation Guide - Wireless Component.
23. Start all relevant loaders:
sap disp
sap start <sap loader name>

24. Stop and re-start the NC cache to apply any changes to NC attributes and mappings:
sap stop nc_cache
sap start nc_cache

25. Import any newly delivered SBH and Summary Definitions that are supplied with the upgraded techpacks if they are required. Decide if you need the new definitions based on a comparison of the existing summaries and SBH definitions that were supplied with the old technology pack, and any changes supplied with the new definitions. Enable the agents for running the summaries and SBHs as follows:
agent_admin -u sysadm -p <sysadm_password> -enable <SUMMARY ID>
agent_admin -u sysadm -p <sysadm_password> -enable <BUSYHOUR ID>

26. Check the technology pack readme files for potential effects on your system as discussed in Effects of a technology pack upgrade on page 207. The next time you run SBH or Summary Definitions, check the completion status and error logs for potential errors due to technology pack changes, such as renamed or deleted KPIs.

14.8.5 Using the migratealarms tool

The migratealarms tool migrates all alarms definitions and states from an old technology pack version to a new version. Run the migratealarms tool after the technology pack is upgraded and before the old loader definitions are removed. Run the migratealarms tool for every technology pack upgrade.
Usage: -type migratealarms -otn <"old techpack name"> -otv <old techpack version> -ntn <"new techpack name"> -ntv <new techpack version>

-type migratealarms    Mandatory. Specifies what function the tool is to perform.
-otn                   Mandatory. Old techpack name. Double quotes must be used if the techpack name contains spaces.
-otv                   Mandatory. Old techpack version.
-ntn                   Mandatory. New techpack name. Double quotes must be used if the techpack name contains spaces.
-ntv                   Mandatory. New techpack version.

Example
Running this command replaces the ruleset_ids for the old technology pack alarms with the ruleset_ids for the new technology pack alarms.
$WMCROOT/admin/techpackupgrade/techpack_upgrade -type migratealarms -otn UMTS_Ericsson_UTRAN_P5 -otv 2.0.0.0 -ntn "Ericsson UMTS UTRAN" -ntv 3.0.0

You are prompted to turn off the nc_cache and alarm_cache processes, and all loaders.
$WMCROOT/admin/techpackupgrade/techpack_upgrade -type migratealarms
The nc_cache, alarm_cache and all loaders should not be running if the migratealarms command is to be ran.
Check sap to determine if these processes are still running. If they are then stop them.
Are all the above processes stopped <y or n> ?

Answer y to enable all alarms to be migrated correctly.


Appendix A: Problem Resolution and Errors


A.1 Problem Resolution

A.1.1 Running Multiple Instances on the Same Server

If there are multiple Tivoli Netcool Performance Manager instances on the same server, the names used for each performance database must be different; otherwise, each loader tries to attach to the same database.

A.1.2 Duplicate lc_relations Entries

Problem
Because there is no unique index on the lc_relations table, it is possible to create duplicate entries in this table. If this recurs on an ongoing basis, the number of records in this table can grow exponentially. This in turn can result in slow loader startup.

Resolution
This can be avoided by deleting the duplicate entries from the table. The following query can be used to identify duplicate entries:
select source_tabname, nc_tabname, access_key, master_tabname, count(*)
from lc_relations
group by source_tabname, nc_tabname, access_key, master_tabname
having count(*) > 1;
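One common way to remove the duplicates, keeping a single row from each duplicate group, is a ROWID-based delete. The following is a sketch only: it assumes the four grouping columns are not null for the affected rows, and you should back up the table and stop the loaders before running it.

delete from lc_relations a
where a.rowid > (select min(b.rowid)
                   from lc_relations b
                  where b.source_tabname = a.source_tabname
                    and b.nc_tabname = a.nc_tabname
                    and b.access_key = a.access_key
                    and b.master_tabname = a.master_tabname);
commit;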

A.1.3 Unresponsive script error

When a very large report is run, an unresponsive script error may be seen. If the user clicks the CONTINUE option in the Warning dialog, the report completes as normal. This error occurs when using the Mozilla Firefox browser. There is a browser setting to increase the time given to a JavaScript. To change the setting in a Mozilla Firefox browser:
1. Open the page about:config.


Do this by typing about:config into the Location bar address field.
2. Change the setting dom.max_script_run_time to 20 seconds. This should be sufficient for all JavaScript. This problem has not been seen on Internet Explorer.

A.1.4 Adobe Flash Player

When the user interface is accessed for the first time and Adobe Flash player is not installed on the system, it should install automatically. However, on a computer that has been upgraded from Windows XP to Windows Vista, Flash Player fails to install automatically and must be manually installed. Once installed, Tivoli Netcool Performance Manager works correctly.

A.2 Errors

A.2.1 Installation errors

SQLFatalErrorException: ORA-28000: the account is locked
This error is expected to appear in log files under /appl/oracle/product/10.2.0/db_1/cfgtoollogs/emca/vtdb/ during application setup and can be safely ignored.

OutOfMemoryError: Java heap space
This error is seen when running Tivoli Netcool Performance Manager Java client processes. This error can occur when installing or upgrading large technology packs and when using the sbh_sk_remover tool.
java.lang.OutOfMemoryError: Java heap space

This problem is resolved by increasing the memory available to Java client processes. Available memory is increased by amending the ANT_OPTS variable.
To increase available memory for Java client processes:
1. Execute the following command:
export ANT_OPTS="-Xmx1G"

2. Re-run the client process. For example, re-run the technology pack installation:
techpack_admin -a

3. After the client process completes successfully, reset the ANT_OPTS variable to its original value:
unset ANT_OPTS

If you are installing or upgrading a large technology pack, it is recommended that you set the ANT_OPTS variable to a value of 1G before the installation rather than waiting for the installation to fail before increasing the value.

Note: You do not need to stop and re-start the application server after you reset the ANT_OPTS value.
Confirming the correct setting is being used

To confirm the correct memory setting is being used, run a Java client tool and then check how much memory the process has been assigned. In the example below, the ANT_OPTS variable has been set to 512m.
1. Run the following commands:
techpack_admin -l installed &
ps -ef | grep java

Example output:
virtuo 1492 25435 2 09:12:55 pts/2 0:03 /appl/virtuo/jre/bin/java -Xmx512m -classpath /appl/virtuo/ant/lib/ant-launcher

In this example the correct setting being used is 512m.


Notices
This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785, U.S.A.

For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan, Ltd.
3-2-12, Roppongi, Minato-ku, Tokyo 106-8711 Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose of enabling: (i) the exchange of information between independently created programs and other programs (including this one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
5300 Cork Airport Business Park
Kinsale Road
Cork
Ireland.

Such information may be available, subject to appropriate terms and conditions, including in some cases, payment of a fee.

The licensed program described in this document and all licensed material available for it are provided by IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any equivalent agreement between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.


This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

If you are viewing this information in softcopy format, the photographs and color illustrations may not appear.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Adobe, the Adobe logo, PostScript, and the PostScript logo are either registered trademarks or trademarks of Adobe Systems Incorporated in the United States, and/or other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.

Other company, product, or service names may be trademarks or service marks of others.


Index
A
activate datasource . . . . . . . . . . . . . . . . . . . .78 add database file . . . . . . . . . . . . . .106, 107 partition . . . . . . . . . . . . . . . . . . . . .115 adding users . . . . . . . . . . . . . . . . . . . . . . . .36 Admin group . . . . . . . . . . . . . . . . . . . . .32 agent activities . . . . . . . . . . . . . . . . . . . . .79 activities log files . . . . . . . . . . . . . . .86 activity properties . . . . . . . . . . . . . . .83 activity properties, past . . . . . . . . . . .86 administration . . . . . . . . . . . . . . . . .79 cancel activity . . . . . . . . . . . . . . . . .87 list activities . . . . . . . . . . . . . . . . . . .82 list activities, past . . . . . . . . . . . . . . .84 log files . . . . . . . . . . . . . . . . . . . . . .82 run activity . . . . . . . . . . . . . . . . . . . .87 types . . . . . . . . . . . . . . . . . . . . . . . .80 agent_admin see agent agent_admin tool . . . . . . . . . . . . . . .82, 92 see also agent alarm administration . . . . . . . . . . . . . . . .163 alarm definition mib file . . . . . . . . .167 context . . . . . . . . . . . . . . . . . . . . . .165 template . . . . . . . . . . . . . . . . . . . . .164 template version . . . . . . . . . . . . . . .163 template, list . . . . . . . . . . . . . . . . . .167 alarm_admin tool see alarm application server property file . . . . . . . . . . . . . . . . . . .18 archive redo logs . . . . . . . . . . . . . . . . . . . .102 archive logs . . . . . . . . . . . . . . . . . . . . .102 archiving log files . . . . . . . . . . . . . . . . .123

redo logs . . . . . . . . . . . . . . . . . . . . 102

C
cancel agent activity . . . . . . . . . . . . . . . . . . 87 Checking . . . . . . . . . . . . . . . . . . . . . . . 67 commands lar . . . . . . . . . . . . . . . . . . . . . . . . . 123 configuring archive logging . . . . . . . . . . . . . . . 102 context alarm . . . . . . . . . . . . . . . . . . . . . . 165 corrupt database checking for . . . . . . . . . . . . . . . . . 104 creating groups . . . . . . . . . . . . . . . . . . . . . . 38 roles . . . . . . . . . . . . . . . . . . . . . . . . 40 crontab setup . . . . . . . . . . . . . . . . . . . . . . . . 15

D
data summarize old . . . . . . . . . . . . 182, 183 database add file . . . . . . . . . . . . . . . . . 106, 107 backup . . . . . . . . . . . . . . . . . . . . . 101 check . . . . . . . . . . . . . . . . . . . . 96, 98 disable automatic startup/shutdown . . 99 drop tablespace . . . . . . . . . . . . . . . 109 manage . . . . . . . . . . . . . . . . . . . . . . 99 modify datafile . . . . . . . . . . . . . . . 108 monitor . . . . . . . . . . . . . . . . . . . . . . 98 restoring from backup . . . . . . . . . . 104 space management . . . . . . . . . . . . . 105 start . . . . . . . . . . . . . . . . . . . . . 24, 99 start, manual . . . . . . . . . . . . . . . . . 100 status . . . . . . . . . . . . . . . . . . . . . . . 96 stop . . . . . . . . . . . . . . . . . . . . . 24, 99 stop, manual . . . . . . . . . . . . . . . . . 100 datafile add file . . . . . . . . . . . . . . . . . . . . . 107 modify . . . . . . . . . . . . . . . . . . . . . 108 datasource activate . . . . . . . . . . . . . . . . . . . . . . 78 administration . . . . . . . . . . . . . . . . . 77 deactivate . . . . . . . . . . . . . . . . . . . . 79 list . . . . . . . . . . . . . . . . . . . . . 78, 148 load from xml . . . . . . . . . . . . . . . . 149

B
backup database . . . . . . . . . . . . . . . . . . . . .101 backups

loader configuration . . . . . . . . . . . .145 setup . . . . . . . . . . . . . . . . . . . . . . . .14 unload from xml . . . . . . . . . . .151, 152 day of week summarizer . . . . . . . . . . . . . . . . . .182 Daylight Saving Time rules . . . . . . . . . . . . . . . . . . . . . . .135 daylight savings time rule . . . . . . . . . . .135 dbspace management . . . . . . . . . . . . . . . . . .105 dbspace admin tool See tablespace deactivate datasource . . . . . . . . . . . . . . . . . . . .79 delete groups . . . . . . . . . . . . . . . . . . . . . . .38 partition . . . . . . . . . . . . . . . . . . . . .116 roles . . . . . . . . . . . . . . . . . . . . . . . .40 users . . . . . . . . . . . . . . . . . . . . . . . .37 directory server see LDAP disk space usage . . . . . . . . . . . . . . . . . . . .71, 120 documentation font usage . . . . . . . . . . . . . . . . . . . . .1 structure . . . . . . . . . . . . . . . . . . . . . . .3 typographical conventions . . . . . . . . . .1 user . . . . . . . . . . . . . . . . . . . . . . . . . .3 viewing Web Help . . . . . . . . . . . . . . .3 drop tablespace . . . . . . . . . . . . . . . . . . .109 ds_admin tool see datasource DST rules . . . . . . . . . . . . . . . . . . . . . . .135 identifier . . . . . . . . . . . . . . . . . . . .136

flapping . . . . . . . . . . . . . . . . . . . . . . . . 73 font usage documentation . . . . . . . . . . . . . . . . . . 1

G
groups . . . . . . . . . . . . . . . . . . . . . . . . . 32
  associating with a user . . . . . . . . . . . 38
  creating . . . . . . . . . . . . . . . . . . . . . . 38
  delete . . . . . . . . . . . . . . . . . . . . . . . 38

H
hardware performing diagnostic checks . . . . . 104 healthcheck . . . . . . . . . . . . . . . . . . . . . . 96 holiday add . . . . . . . . . . . . . . . . . . . . . . . . 142 dates maintenance . . . . . . . . . . . . . 141 delete . . . . . . . . . . . . . . . . . . . . . . 142 list dates . . . . . . . . . . . . . . . . . . . . 142 holiday_admin tool see holiday

I
import partition . . . . . . . . . . . . . . . . . . . . 117 UDC . . . . . . . . . . . . . . . . . . . . . . . . 89 init.ora . . . . . . . . . . . . . . . . . . . . . . . . 102

J
job
  event_clean . . . . . . . . . . . . . . . . . . . 93
  pm_daily . . . . . . . . . . . . . . . . . . . . . 92
  pm_monthly . . . . . . . . . . . . . . . . . . . 93
  pm_weekly . . . . . . . . . . . . . . . . . . . . 93
  rgfp . . . . . . . . . . . . . . . . . . . . . . . . 93
  schedule . . . . . . . . . . . . . . . . . . . . . 92

E
errors partition maintenance . . . . . . . . . . .119 event_clean . . . . . . . . . . . . . . . . . . . . . .93 Everybody group . . . . . . . . . . . . . . . . . .32 export partition . . . . . . . . . . . . . . . . . . . . .117 UDC . . . . . . . . . . . . . . . . . . . . . . . .88

K
kpi cache administration . . . . . . . . . . . . . . . . . 88 synchronize . . . . . . . . . . . . . . . . 88, 89 kpicache_admin tool . . . . . . . . . . . . . . . 88 see also kpi cache

L
lar . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 lcm_admin tool see loader configuration LDAP . . . . . . . . . . . . . . . . . . . . . . . . . . 32

F
file system size of . . . . . . . . . . . . . . . . . . . . . .121

check . . . . . . . . . . . . . . . . . . . . . . . .96 setup . . . . . . . . . . . . . . . . . . . . . . . .14 start . . . . . . . . . . . . . . . . . . . . . . . . .24 status . . . . . . . . . . . . . . . . . . . . . . . .96 stop . . . . . . . . . . . . . . . . . . . . . . . . .24 list agent activities . . . . . . . . . . . . . . . . .82 agent activities, past . . . . . . . . . . . . .84 datasource . . . . . . . . . . . . . . . .78, 148 partition . . . . . . . . . . . . . . . . . . . . .118 pinned partition . . . . . . . . . . . . . . .118 scheduled jobs . . . . . . . . . . . . . . . . .94 sessions, partition maintenance . . . .118 spaces, partition maintenance . . . . .119 load datasource from xml . . . . . . . . . . . .149 loader configuration from xml .149, 150 nc relations from xml . . . . . . . . . . .149 loader check bad files . . . . . . . . . . . . . . . . .68 check status . . . . . . . . . . . . . . . . . . .67 configuration manager . . . . . . . . . .145 disk space usage . . . . . . . . . . . . . . . .71 log level, change . . . . . . . . . . . . . . . .70 operations tasks . . . . . . . . . . . . . . . .67 property file . . . . . . . . . . . . . . . . . . .18 loader configuration . . . . . . . . . . . . . . .145 load from xml . . . . . . . . . . . .149, 150 log files . . . . . . . . . . . . . . . . . . . . .97, 122 agent activities . . . . . . . . . . . . . .82, 86 archiving . . . . . . . . . . . . . . . . . . . .123 check . . . . . . . . . . . . . . . . . . . . . . . .97 summary . . . . . . . . . . . . . . . . . . . .180 log-archiver.sh . . . . . . . . . . . . . . . . . . .123 logs archive . . . . . . . . . . . . . . . . . . . . . .102 loader, change level . . . . . . . . . . . . .70 partition maintenance . . . . . . . . . . .119 redo . . . . . . . . . . . . . . . . . . . . . . . .102

N
nc relations . . . . . . . . . . . . . . . . . . . . 145
  delete . . . . . . . . . . . . . . . . . . . . . . 153
  load from xml . . . . . . . . . . . . . . . . . 149
  unload from xml . . . . . . . . . . . . . . . . 151

O
Overview . . . . . . . . . . . . . . . . . . . . . . . 13

P
parameters partition maintenance . . . . . . . . . . . 117 partition add . . . . . . . . . . . . . . . . . . . . . . . . 115 delete . . . . . . . . . . . . . . . . . . . . . . 116 errors . . . . . . . . . . . . . . . . . . . . . . 119 export . . . . . . . . . . . . . . . . . . . . . . 117 import . . . . . . . . . . . . . . . . . . . . . . 117 list . . . . . . . . . . . . . . . . . . . . . . . . 118 list pinned . . . . . . . . . . . . . . . . . . . 118 logs . . . . . . . . . . . . . . . . . . . . . . . 119 pin . . . . . . . . . . . . . . . . . . . . . . . . 116 sessions, list . . . . . . . . . . . . . . . . . 118 sessions, update . . . . . . . . . . . . . . . 118 spaces, list . . . . . . . . . . . . . . . . . . . 119 status . . . . . . . . . . . . . . . . . . . . . . 119 partition maintenance . . . . . . . . . . . . . . 111 add partition . . . . . . . . . . . . . . . . . 115 delete partition . . . . . . . . . . . . . . . 116 errors . . . . . . . . . . . . . . . . . . . . . . 119 export partition . . . . . . . . . . . . . . . 117 import partition . . . . . . . . . . . . . . . 117 list partition . . . . . . . . . . . . . . . . . . 118 list pinned partition . . . . . . . . . . . . 118 logs . . . . . . . . . . . . . . . . . . . . . . . 119 parameters . . . . . . . . . . . . . . . . . . 117 pin partition . . . . . . . . . . . . . . . . . 116 sessions, list . . . . . . . . . . . . . . . . . 118 sessions, update . . . . . . . . . . . . . . . 118 spaces, list . . . . . . . . . . . . . . . . . . . 119 status . . . . . . . . . . . . . . . . . . . . . . 119 pin partition . . . . . . . . . . . . . . . . . . . . 116 pm_daily . . . . . . . . . . . . . . . . . . . . . . . . 92 pm_monthly . . . . . . . . . . . . . . . . . . . . . 93 pm_weekly . . . . . . . . . . . . . . . . . . . . . . 93 privileges . . . . . . . . . . . . . . . . . . . . . . . 32 about . . . . . . . . . . . . . . . . . . . . . . . 33

M
maintaining tablespaces . . . . . . . . . . . . . . . . . . .105 misc_clean . . . . . . . . . . . . . . . . . . . . . . .92 modify datafile . . . . . . . . . . . . . . . . . . . . .108

assigning to roles . . . . . . . . . . . . . . .41 process scheduler . . . . . . . . . . . . . . . . . . . . .92 process manager. See sapmgr process monitor. See sapmon properties agent activities . . . . . . . . . . . . . . . . .83 agent activities, past . . . . . . . . . . . . .86 property file application server . . . . . . . . . . . . . . .18 loader . . . . . . . . . . . . . . . . . . . . . . .18 publications user . . . . . . . . . . . . . . . . . . . . . . . . . .3

R
redo logs . . . . . . . . . . . . . . . . . . . . . . .102 archive . . . . . . . . . . . . . . . . . . . . . .102 archiving . . . . . . . . . . . . . . . . . . . .102 log files redo . . . . . . . . . . . . . . . . . . . . .102 report time zones . . . . . . . . . . . . . . . . . . .135 restoring databases . . . . . . . . . . . . . . . . . . . .104 missing files . . . . . . . . . . . . . . . . . .104 rgfp . . . . . . . . . . . . . . . . . . . . . . . . . . . .93 roles . . . . . . . . . . . . . . . . . . . . . . . . . . .32 assigning privileges to . . . . . . . . . . . .41 assigning users to . . . . . . . . . . . . . . .39 creating . . . . . . . . . . . . . . . . . . . . . .40 delete . . . . . . . . . . . . . . . . . . . . . . . .40 managing . . . . . . . . . . . . . . . . . . . . .40 run agent activity . . . . . . . . . . . . . . . . . .87

S
SAP setup . . . . . . . . . . . . . . . . . . . . . . . .16 sapmgr start . . . . . . . . . . . . . . . . . . . . . . . . .26 stop . . . . . . . . . . . . . . . . . . . . . . . . .26 sapmon check . . . . . . . . . . . . . . . . . . . . . . . .97 start . . . . . . . . . . . . . . . . . . . . . . . . .25 status . . . . . . . . . . . . . . . . . . . . . . . .97 stop . . . . . . . . . . . . . . . . . . . . . . . . .25 schedule list jobs . . . . . . . . . . . . . . . . . . . . . .94

maintenance . . . . . . . . . . . . . . . . . . 94 schedule_admin tool . . . . . . . . . . . . 79, 92 scheduler options . . . . . . . . . . . . . . . . . . . . . . 95 process . . . . . . . . . . . . . . . . . . . . . . 92 See also schedule server status reports for . . . . . . . . . . . . . . . 96 session partition maintenance . . . . . . . . . . 118 sessions partition maintenance . . . . . . . . . . 118 setup crontab . . . . . . . . . . . . . . . . . . . . . . 15 datasource . . . . . . . . . . . . . . . . . . . . 14 LDAP . . . . . . . . . . . . . . . . . . . . . . . 14 root user . . . . . . . . . . . . . . . . . . . . . 15 SAP . . . . . . . . . . . . . . . . . . . . . . . . 16 tasks . . . . . . . . . . . . . . . . . . . . . . . . 13 virtuo user . . . . . . . . . . . . . . . . . . . . 15 spaces partition maintenance . . . . . . . . . . 119 stability settings . . . . . . . . . . . . . . . . . . 73 start database . . . . . . . . . . . . . . . . . . 24, 99 database, manual . . . . . . . . . . . . . . 100 LDAP . . . . . . . . . . . . . . . . . . . . . . . 24 sapmgr . . . . . . . . . . . . . . . . . . . . . . 26 sapmon . . . . . . . . . . . . . . . . . . . . . . 25 TNPM . . . . . . . . . . . . . . . . . . . 23, 27 status database . . . . . . . . . . . . . . . . . . 96, 98 LDAP . . . . . . . . . . . . . . . . . . . . . . . 96 loader . . . . . . . . . . . . . . . . . . . . . . . 67 partition maintenance . . . . . . . . . . 119 sapmon . . . . . . . . . . . . . . . . . . . . . . 97 TNPM . . . . . . . . . . . . . . . . . . . . . . 97 status of server . . . . . . . . . . . . . . . . . . . 96 stop database . . . . . . . . . . . . . . . . . . 24, 99 database, manual . . . . . . . . . . . . . . 100 LDAP . . . . . . . . . . . . . . . . . . . . . . . 24 sapmgr . . . . . . . . . . . . . . . . . . . . . . 26 sapmon . . . . . . . . . . . . . . . . . . . . . . 25 TNPM . . . . . . . . . . . . . . . . . . . 23, 27 summarizer day of week . . . . . . . . . . . . . . . . . 182


old data . . . . . . . . . . . . . . . . .182, 183 start day of week . . . . . . . . . . . . . .182 switch on/off . . . . . . . . . . . . . . . . .180 summary log file . . . . . . . . . . . . . . . . . . . . . .180 switch summarizer on/off . . . . . . . . . . . . .180 synchronize kpi cache . . . . . . . . . . . . . . . . . .88, 89

T
tablespace add datafile . . . . . . . . . . . . . . . . . .107 increase . . . . . . . . . . . . .106, 108, 109 monitor . . . . . . . . . . . . . . . . . . . . .106 tablespaces maintaining . . . . . . . . . . . . . . . . . .105 technology packs administration tools . . . . . . . . . . . .201 upgrading . . . . . . . . . . . . . . . . . . . .206 template alarm . . . . . . . . . . . . . . . . . . . . . . .164 list, alarm . . . . . . . . . . . . . . . . . . . .167 version, alarm . . . . . . . . . . . . . . . .163 time zone regions . . . . . . . . . . . . . . . . .138 DST rules . . . . . . . . . . . . . . . . . . .135 time zones DST . . . . . . . . . . . . . . . . . . . . . . . .135 TNPM architecture . . . . . . . . . . . . . . . . . . . .5 check . . . . . . . . . . . . . . . . . . . . . . . .97 install summary . . . . . . . . . . . . . . . .13 start . . . . . . . . . . . . . . . . . . . . . .23, 27 status . . . . . . . . . . . . . . . . . . . . . . . .97 stop . . . . . . . . . . . . . . . . . . . . . .23, 27 system maintenance . . . . . . . . . . . . .91 typographical conventions . . . . . . . . . . . . .1 tz_admin tool see time zones

sessions, partition maintenance . . . . 118 user setup root . . . . . . . . . . . . . . . . . . . . 15 setup virtuo . . . . . . . . . . . . . . . . . . . 15 user management . . . . . . . . . . . . . . . . . . 36 user publications . . . . . . . . . . . . . . . . . . . 3 UserEdit privilege . . . . . . . . . . . 38, 39, 40 users adding . . . . . . . . . . . . . . . . . . . . . . 36 delete . . . . . . . . . . . . . . . . . . . . . . . 37 permissions . . . . . . . . . . . . . . . . . . . 32 roles associated with . . . . . . . . . . . . 39 Web client . . . . . . . . . . . . . . . . . . . 32

W
Web client users . . . . . . . . . . . . . . . . . . 32 Web Help format . . . . . . . . . . . . . . . . . . . 3

U
UDC export . . . . . . . . . . . . . . . . . . . . . . .88 import . . . . . . . . . . . . . . . . . . . . . . .89 unload datasource from xml . . . . . . . .151, 152 nc relations from xml . . . . . . . . . . .151 update


IBM

Printed in the Republic of Ireland.

Copyright IBM Corp. 2007, 2011

