Charlotte Brooks
Michel Baus
Michael Benanti
Ivo Gomilsek
Urs Moser
ibm.com/redbooks
International Technical Support Organization
August 2003
SG24-6886-01
Note: Before using this information and the product it supports, read the information in “Notices” on
page xxiii.
This edition applies to IBM Tivoli Storage Resource Manager (product number 5698-SRM), IBM Tivoli Storage
Resource Manager for Databases (product number 5698-SRD), IBM Tivoli Storage Resource Manager for
Chargeback (product number 5698-SRC), and IBM Tivoli Storage Resource Manager Express Edition
(product number 5698-SRX).
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxi
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xxiv
Part 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Contents v
6.2 Using the standard reporting functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
6.2.1 Asset Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
6.2.2 Storage Subsystems Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
6.2.3 Availability Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
6.2.4 Capacity Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
6.2.5 Usage Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
6.2.6 Usage Violation Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
6.2.7 Backup Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
6.3 Tivoli Storage Resource Manager ESS Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
6.3.1 ESS Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
6.4 IBM Tivoli Storage Resource Manager top 10 reports . . . . . . . . . . . . . . . . . . . . . . . . 316
6.4.1 ESS used and free storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
6.4.2 ESS attached hosts report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6.4.3 Computer Uptime reporting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6.4.4 Growth in storage used and number of files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
6.4.5 Incremental backup trends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324
6.4.6 Database reports against DBMS size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
6.4.7 Database instance storage report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
6.4.8 Database reports size by instance and by computer . . . . . . . . . . . . . . . . . . . . . 329
6.4.9 Locate the LUN on which a database is allocated . . . . . . . . . . . . . . . . . . . . . . . 331
6.4.10 Finding important files on your systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
6.5 Creating customized reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
6.5.1 System Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
6.5.2 Reports owned by a specific username . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
6.5.3 Batch Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
6.6 Setting up a schedule for daily reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
6.7 Setting up a reports Web site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
6.8 Charging for storage usage. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
Figures xi
4-113 ESS CIM/OM startup screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
4-114 Installation directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4-115 Installation size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4-116 Welcome screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4-117 Current version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4-118 Install size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
4-119 Installation finished . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
4-120 CIM/OM Logins in navigation tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4-121 Defining CIM/OM login. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
4-122 Running discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
4-123 Finding CIM/OM discovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
4-124 Discovery job output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
4-125 Storage Subsystem Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
5-1 Tivoli Storage Resource Manager Monitoring features . . . . . . . . . . . . . . . . . . . . . . 160
5-2 OS Monitoring tree. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
5-3 New Scan job creation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
5-4 OS Monitoring - Jobs list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5-5 Tivoli Storage Resource Manager Groups. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5-6 Computer Group definition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5-7 Save a new Computer Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
5-8 Final Computers Group definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
5-9 Filesystem Group definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5-10 Directory group definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5-11 Computers by directory definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5-12 Directories by computer configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
5-13 Final Directories Group definition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
5-14 List of available users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
5-15 List of available users after Scan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
5-16 Discovery process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
5-17 Discovery When to Run options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5-18 Discovery job options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5-19 Ping process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5-20 Ping job configuration - Computers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5-21 Ping job configuration - When to Ping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
5-22 Ping job configuration - Alert . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
5-23 Ping failed popup for GALLIUM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5-24 Mail message for GALLIUM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5-25 Probe process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5-26 New Probe configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
5-27 Probe alert - mail configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
5-28 Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
5-29 New Profile - Statistics tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
5-30 New Profile - File filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5-31 New Condition Group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5-32 New Profile - Conditions Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5-33 New Profile - New condition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
5-34 New Profile - Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5-35 Profile save . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
5-36 Scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5-37 New Scan configuration - Filesystem tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5-38 New Scan configuration - Profiles tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5-39 New Scan - Save . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5-40 Alerts mechanisms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
5-94 Database Probe definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
5-95 Database profile definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
5-96 Database Scan definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
5-97 Instance Alert definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
5-98 Instance Alert output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
5-99 Database alert definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
5-100 Database Quota - Users tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6-1 Reporting capabilities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
6-2 IBM Tivoli Storage Resource Manager main screen showing reporting options . . . 249
6-3 Tivoli Storage Resource Manager standard reporting . . . . . . . . . . . . . . . . . . . . . . . 251
6-4 Tivoli Storage Resource Manager Lab Environment . . . . . . . . . . . . . . . . . . . . . . . . 252
6-5 Reporting - Asset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
6-6 Reporting - Asset - By Computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
6-7 Report - GALLIUM assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
6-8 Reporting - Assets - System-wide view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
6-9 Monitored directories report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6-10 Northwind database asset details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6-11 System-wide view of database assets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
6-12 Create a new database table group. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
6-13 Add SQL Server tables to table group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
6-14 Add Oracle tables to table group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
6-15 Tables added to table group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
6-16 Table group added to scan job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
6-17 Displaying Scan job logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
6-18 Tables by total size asset report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
6-19 Reports - Availability - Ping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
6-20 Reports - Availability - Computer Uptime . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
6-21 Disk capacity report selection window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
6-22 Capacity report - A23BLTZM Disk 0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
6-23 Database Capacity report by Computer Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
6-24 Largest tables by RDBMS type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
6-25 Monitored tables by RDBMS type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 268
6-26 Create a Constraint - Filesystems tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
6-27 Create a Constraint - file types tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
6-28 Edit a Constraint file filter - before change. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
6-29 Edit a Constraint file filter - after change . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
6-30 Create a Constraint - Options tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
6-31 Create a Constraint - Alert tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
6-32 Create a Constraint - save . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
6-33 Constraint violation report selection screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
6-34 Constraint violations by computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
6-35 Graph of capacity used by Constraint violating files . . . . . . . . . . . . . . . . . . . . . . . . 275
6-36 Alert log showing Constraint violations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
6-37 Create Quota - Users tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
6-38 Create Quota - Computers tab. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
6-39 Create Quota - When to Check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
6-40 Create Quota - Alert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6-41 Create Quota - save. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
6-42 Run new Quota job . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6-43 Alert Log - Quota violations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
6-44 Alert Log - Quota violation detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
6-45 Quota violations by computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
6-46 Quota violation graphical breakdown by file size . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
6-100 Report for Filesystem/Logical Volumes Part 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318
6-101 Computer view to the filesystem with capacity and free space . . . . . . . . . . . . . . . . 318
6-102 ESS selection per computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6-103 ESS connections to computer report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
6-104 Computer Uptime report selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
6-105 Computer Uptime report part 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
6-106 Computer Uptime report graphical combined (stacked bar) . . . . . . . . . . . . . . . . . . 321
6-107 Computer Uptime report graphical (bar chart) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
6-108 Generate Full Backup Size report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6-109 Select History chart for File count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
6-110 History chart space used by a computer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
6-111 History chart: File count . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
6-112 Incremental Range selection based on filespace . . . . . . . . . . . . . . . . . . . . . . . . . . 324
6-113 Summary of all filespaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
6-114 Selection for Filesystem and computer to generate a graphic. . . . . . . . . . . . . . . . . 325
6-115 Bar chart for Incremental Range Size by Filesystem. . . . . . . . . . . . . . . . . . . . . . . . 326
6-116 Pie chart showing the number of files that have been modified . . . . . . . . . . . . . . . 326
6-117 Total Instance storage used network wide. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
6-118 DBMS drill down to the computer reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
6-119 DBMS drill down to the computer result. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
6-120 DBMS report Total Instance Storage by Instance . . . . . . . . . . . . . . . . . . . . . . . . . . 329
6-121 Instance report RDBMS overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
6-122 Instance running on computer TONGA first part . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
6-123 Instance running on computer TONGA second part . . . . . . . . . . . . . . . . . . . . . . . . 330
6-124 LUN report selection for a database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
6-125 Database select File and Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
6-126 Report for DB2 file in a pie chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
6-127 LUN information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
6-128 Create Profile for own File search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6-129 Create new Condition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
6-130 Create Condition add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
6-131 Saved Condition in new Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
6-132 Listed Profiles containing Search for Tivoli Storage Manager Option File. . . . . . . . 337
6-133 Add Profile to Scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
6-134 Add Profiles to Scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
6-135 Report with number of found Tivoli Storage Manager Option Files . . . . . . . . . . . . . 339
6-136 Create Orphaned File search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
6-137 Update the Orphaned selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
6-138 Update the selection with own data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6-139 Enter the file search criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
6-140 File Filter selection reconfirm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
6-141 Bind the Orphan search into Profiles to apply to Filesystems column . . . . . . . . . . 342
6-142 Scan log check. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
6-143 Summary report of all Tivoli Storage Manager option files . . . . . . . . . . . . . . . . . . . 343
6-144 File selection for computer BONNIE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
6-145 Report for Tivoli Storage Manager Option file searched . . . . . . . . . . . . . . . . . . . . . 344
6-146 File detail information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
6-147 My Reports - System Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346
6-148 My Reports - Storage Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
6-149 Available System Reports for databases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
6-150 Create My Storage Capacity report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
6-151 My Storage Report saved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
6-152 Monitored Tables by RDBMS Types customized report . . . . . . . . . . . . . . . . . . . . . 351
8-15 Event Group Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
8-16 Assign Event Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 422
8-17 Assigned Event Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
8-18 Configured Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423
8-19 TEC Console main screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
8-20 TEC console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
8-21 General tab of event . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425
8-22 Event attribute list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
8-23 Setting the TEC server properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
8-24 Enabling TEC events for the default scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
8-25 Enable TEC events for discovery of new computers . . . . . . . . . . . . . . . . . . . . . . . . 430
9-1 Tivoli Data Warehouse data flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
9-2 Warehouse pack structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 434
9-3 Application installation only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
9-4 Verify the fully qualified hostname . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
9-5 Enter username and password of the data warehouse database . . . . . . . . . . . . . . 437
9-6 Enter path to the Warehouse Pack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
9-7 Additional products installation dialog . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
9-8 Start actual installation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
9-9 Successfully finished installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
9-10 DB2 Client Configuration Assistant . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
9-11 Choose how to make a connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
9-12 Choose communication protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
9-13 Enter hostname and DB2 instance port . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
9-14 Name the database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
9-15 Register database with ODBC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
9-16 Test connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
9-17 Enter UID and password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
9-18 Test successful . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
9-19 DB2 Control Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
9-20 Data Warehouse Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
9-21 Warehouse Sources for IBM Tivoli Storage Resource Manager . . . . . . . . . . . . . . . 446
9-22 Data Source Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 446
9-23 BTM_ITSRM_Source Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
9-24 Target Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
9-25 Enter password for DB2 CDW target . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
9-26 Subject Areas. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 449
9-27 Open the Work in Progress window. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
9-28 Run New Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
9-29 Selecting the steps to run . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
9-30 Work in Progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 451
9-31 Schedule Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
9-32 Schedule a Process times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
9-33 Task Flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
9-34 E-mail alert. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
9-35 Change mode to production. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
9-36 Scheduled process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454
9-37 Run process manually . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
9-38 Manually run steps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455
9-39 COMP table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
9-40 CDW entries from Warehouse Pack. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
10-1 Tivoli Desktop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 459
10-2 Policy Region tonga-region . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
IBM Tivoli Storage Resource Manager: A Practical Introduction
Tables
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785, U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such provisions are
inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED,
INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. You may copy, modify, and
distribute these sample programs in any form without payment to IBM for the purposes of developing, using,
marketing, or distributing application programs conforming to IBM's application programming interfaces.
ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United
States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems,
Inc. in the United States, other countries, or both.
C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic
Transaction LLC.
Other company, product, and service names may be trademarks or service marks of others.
Storage growth continues to accelerate, and the cost of disk can approach 80% of total
system hardware costs. Yet, the storage in most businesses is typically only about 50% used.
How can you take control of your storage assets to render utilization more efficient and make
the most of your storage dollars?
IBM® Tivoli® Storage Resource Manager helps you discover, monitor, and create enterprise
policies for your filesystems and databases. You will find out where all your storage is going,
and be able to act intelligently on this information. Application availability is improved because
you will have early warnings when filesystems are running out of space. If you are thinking
about server consolidation, you can use IBM Tivoli Storage Resource Manager to help
efficiently utilize your accumulated storage resources.
This IBM Redbook shows how to install, configure, and protect the IBM Tivoli Storage
Resource Manager environment; how to create policies; how to define automated actions like
scripts or SNMP events when policies are violated; and how to produce detailed, meaningful
storage reports. This book is intended for those who want to learn more about IBM Tivoli
Storage Resource Manager and those who are about to implement it.
The second edition of this redbook is updated for IBM Tivoli Storage Resource Manager
Version 1.2 and includes information on IBM TotalStorage® Enterprise Storage System
reporting using CIM/OM, filesystem extension, as well as on how to integrate IBM Tivoli
Storage Resource Manager with other Tivoli products.
Charlotte Brooks is an IBM Certified IT Specialist and Project Leader for Tivoli Storage
Management and Open Tape Solutions at the International Technical Support Organization,
San Jose Center. She has 12 years of experience with IBM in the fields of pSeries™, AIX®,
and storage. She has written ten redbooks, and has developed and taught IBM classes on all
areas of storage management. Before joining the ITSO in 2000, she was the Technical
Support Manager for Tivoli Storage Manager in the Asia Pacific Region.
Michel Baus is an IT Architect for @sys GmbH, an IBM Business Partner in Germany. He
has eight years of experience in the areas of UNIX, Linux, Windows and Tivoli Storage and
System Management. He holds several technical and sales certifications, and is an IBM
Tivoli Certified Instructor. He has developed and taught several storage classes for IBM
Learning Services, Germany. He was a member of the team that wrote the redbook
Managing Storage Management, SG24-6117.
Michael Benanti is an IBM Certified IT Specialist in Tivoli Software, IBM Software Group. In
his six years with IBM, he has focused on architecture, deployment, and project management
in large SAN implementations. Mike also works with the Tivoli World Wide Services Planning
Organization, developing services offerings for IBM Tivoli SAN Manager and IBM Tivoli Storage Resource Manager.
Ivo Gomilsek is an IT Specialist for IBM Global Services, Slovenia, supporting the Central
and Eastern European Region in architecting, deploying, and supporting SAN/storage/DR
solutions. His areas of expertise include SAN, storage, HA systems, xSeries® servers,
network operating systems (Linux, MS Windows, OS/2®), and Lotus® Domino™ servers. He
holds several certifications from various vendors (IBM, Red Hat, Microsoft). Ivo was a
member of the team that wrote the redbook Designing and Optimizing an IBM Storage Area
Network, SG24-6419, and contributed to various other redbooks on SAN, Linux/390, xSeries,
and Linux. Ivo has been with IBM for five years and was an author of the first edition of this
redbook.
Urs Moser is an Advisory IT Specialist with IBM Global Services in Switzerland. He has more
than 25 years of IT experience, including more than 13 years experience with Tivoli Storage
Manager and other storage management products. His areas of expertise include Tivoli
Storage Manager implementation projects and education at customer sites, including
mainframe environments (OS/390®, VSE, and VM) and databases. Urs was a member of the
team that wrote the redbook Using Tivoli Storage Manager to Back Up Lotus Notes,
SG24-4534.
The authors of the first edition of this redbook were Michael Benanti, Hamedo Bouchmal, John
Duffy, Trevor Foley, and Ivo Gomilsek.
Brian Delaire, Doug Dunham, Barry Eberly, Nancy Hobbs, Sumant Padbidri, Jason Perkins
IBM Tivoli Storage Resource Manager Development, San Jose
Your efforts will help increase product acceptance and customer satisfaction. As a bonus,
you'll develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Preface xxvii
Comments welcome
Your comments are important to us!
We want our Redbooks™ to be as helpful as possible. Send us your comments about this or
other Redbooks in one of the following ways:
Use the online Contact us review redbook form found at:
ibm.com/redbooks
Send your comments in an Internet note to:
redbook@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. QXXE Building 80-E2
650 Harry Road
San Jose, California 95120-6099
This section describes the technical changes made in this edition of the book and in previous
editions. This edition may also include minor corrections and editorial changes that are not
identified.
Summary of changes
for SG24-6886-01
for IBM Tivoli Storage Resource Manager: A Practical Introduction
as created or updated on August 19, 2003.
New information
Release of Version 1, Release 2 of IBM Tivoli Storage Resource Manager:
– Automatic file system extension
– Enterprise Storage Server® (ESS) Subsystem Reporting
– LUN Provisioning for ESS Subsystem
– Tivoli Enterprise™ Console (TEC) and other Tivoli products Integration
IBM Tivoli Storage Resource Manager Express Edition
Part 1 Introduction
In this part we introduce the concepts of Storage Resource Management and the benefits it
can bring to an organization. Then we give an overview of IBM Tivoli Storage Resource Manager.
SRM tools help companies lower the cost of storage and of storage management.
Subsequent chapters introduce a solution for SRM - IBM Tivoli Storage Resource Manager,
and discuss deployment architectures, installation and design considerations, operations, and
maintenance.
SRM Definitions
"SRM is a collection of automated tools that enable
administrators to visualize a distributed collection of storage
resources, to make intelligent, informed decisions about the
usage of those resources"
Enterprise Storage, Storage Resource Management Update, Sep 2001
Today computers are typically the only vehicle for storing a company's business data.
Computers and storage are now mission-critical.
Open environments today are larger and the systems are much more heterogeneous than in
the last century. Table 1-1 summarizes some of the other major differences.
1.2.1 Growth
The single biggest issue is growth. Growth is being driven by three general trends:
Business transaction volumes are growing
Businesses are now storing more information, from different formats and sources, than
ever before. These include audio, graphical, and other scanned data that previously was
stored only on film, paper, or other traditional media.
These new data types (like music, video clips, images, graphical files, etc.) require more
storage per file than older data types like flat files.
The data and storage infrastructure that supports this growth is itself growing dramatically.
Storage growth rate is estimated to range from 50-125% annually, depending on the industry
or consultant report of your choice.
Rapid infrastructure growth creates a number of technology and management issues, shown
in Figure 1-3.
Server growth
Major companies have hundreds of large UNIX servers, and sometimes thousands of
Microsoft Windows servers. They are deploying more servers every quarter, and most large
companies have a large variety of different hardware and software platforms (often not by
design) rather than standardizing on particular configurations.
Staffing growth
While we know that storage and data are growing rapidly, support staff numbers are not. This
only exacerbates the problem. An average corporate server may be supporting on the order of
3 TB of data in the coming years, yet it is estimated that a typical systems administrator can
manage only 1 TB. Because businesses are looking to cut costs in today's economic climate,
most are shrinking rather than increasing their IT departments. Clearly, more intelligent and
powerful applications will be required to support this environment.
Costs
Storage is a large portion of IT budgets. Even with disk prices dropping at 30% per year (on
average), if storage requirements grow at 100% per year, total costs spent on storage will
grow 40% year-over-year. Storage has to be managed.
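The arithmetic behind this claim can be checked directly: doubling capacity while the unit price falls 30% still multiplies spend by 2.0 x 0.7 = 1.4. A minimal sketch:

```python
# Storage spend grows even as unit prices fall:
# next year's spend = (capacity growth factor) x (price factor).
capacity_growth = 2.00   # storage requirements grow 100% per year
price_factor = 0.70      # disk unit price drops 30% per year

spend_multiplier = capacity_growth * price_factor
print(f"Year-over-year spend growth: {spend_multiplier - 1:.0%}")
# -> 40%
```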
The fact that storage is inefficiently used is doubly critical in today’s environment of tight
budgets:
1. Storage administrators do not have the tools to answer questions like:
– How much storage will I need next year, given my current growth rates?
– How fast are my databases growing?
– What servers are running out of storage today?
– Can I compare the forecast on demand versus capacity from last year to the actual rate
of growth that occurred this year?
2. Because they do not have the answers to these and other questions, they wind up:
– Buying storage at the last minute (paying too much money for it)
– Buying too much (better to spend too much money on storage than to not have enough
when it is urgently needed)
Storage Resource Management tools would help the storage administrator answer these
questions, and allow corporations to buy the right amount of storage at the right time.
Utilization inefficiencies
Data protection schemes (RAID, mirroring, replication, etc.) are used to protect data from disk
failures and other hardware errors. Allocating and using additional disk for data protection is a
good business decision, and is not an inefficient use of storage.
However, there are many other ways that disk is used inefficiently. Here are a few examples,
and note that if the data is mirrored or RAIDed, then the problem is accordingly multiplied.
1. With direct-attached storage (whether internal or attached to a SAN) in some cases, a
very small percentage of available storage is actually used for application data.
2. Applications are installed, but then are not used. No one tries to locate these unused files.
Application upgrades can also leave unneeded files.
3. Many files are created once, used once, and never accessed or used again; for example,
for testing purposes. This is an example of a stale or obsolete file.
4. Some files are duplicated to other directories or systems, and later the need for the
duplicate file goes away. The duplicate file is no longer needed, but it is cheaper to leave
the duplicate file where it is rather than spend the time to try to find it.
5. It is increasingly common to find music files (often illegally copied), video clips, and other
personal data items placed onto expensive corporate storage.
Current open systems storage utilization rates can range from as low as 25% (direct-attached
Windows servers) to 50-60% (SAN-attached storage). What this means is that on average, if
a company has 100 GB of storage in a filesystem, there is about 25 to 50 GB of actual
important data on that 100 GB of storage. The rest of the disk space is being wasted.
Typical utilization rates: NT direct-attached, 25%; FC SAN-attached, 70%.
Example 1-1 (for a low-end NT environment) and Example 1-2 (for a high-end UNIX
environment) show how the numbers can add up.
The client spent 6 x 150 x $640 = $576,000 for 32 TB of raw disk, to get 1 x 150 x $640 = $96,000 for the 3.75 TB of disk used for storing data, or 15.4 cents per MB usable.
Vendors argue that disk costs 1.8 cents per MB ($576,000/(32.4*1000) = 1.77 cents). While
true, it is misleading. Companies buy usable disk, not raw storage.
Two comments:
1. The difference is partly the cost of unmanaged storage (and partly the cost of
protection).
2. 15 cents per MB is close enough to the cost of enterprise disk to justify investigating
storage consolidation.
Typical efficiency (space used/space available) in enterprise FC SAN storage is less than
50%. (It is higher than the rate for internal storage because more attention is paid to
expensive Fibre Channel storage.) For the purposes of this example, we are assuming a 50%
‘best case’ scenario.
To get 3.75 TB of usable disk, the customer would have to buy 7.5TB of disk from a vendor.
Using 72 GB mirrored disks, which cost over $15,000 each**, the customer would buy 14 disks/TB x 3.75 TB x 2 (efficiency factor) x $15,000/disk = $1,575,000, to get 14 x 3.75 x $15,000 = $787,500 of usable disk (3.75 TB), or 42 cents per MB list price usable.
** 90% of the current list price from one well-known storage vendor for a 72 GB disk
If you extend the two examples above to 150 TB of data, then customers would spend either
$23,000,000 (for the NT example) or $63,000,000 (for the enterprise example) for storage.
These costs are the price for not managing storage well.
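The figures in both examples can be reproduced directly; the factors below are taken from the text as given (for instance, 6 x 150 x $640 for the NT raw-disk spend). A quick check:

```python
# NT example: raw-disk spend vs. spend attributable to data actually stored.
nt_raw_cost = 6 * 150 * 640          # $576,000 for ~32 TB of raw disk
nt_used_cost = 1 * 150 * 640         # $96,000 for the 3.75 TB actually used
nt_cents_per_mb = 100 * nt_raw_cost / (3.75 * 1_000_000)
print(f"NT: ${nt_raw_cost:,} raw, {nt_cents_per_mb:.1f} cents/MB usable")

# Enterprise FC SAN example: 50% efficiency doubles the disk to be bought.
ent_buy_cost = 14 * 3.75 * 2 * 15_000    # $1,575,000 list price
ent_usable_cost = 14 * 3.75 * 15_000     # $787,500 of usable disk
ent_cents_per_mb = 100 * ent_buy_cost / (3.75 * 1_000_000)
print(f"Enterprise: ${ent_buy_cost:,.0f} bought, {ent_cents_per_mb:.0f} cents/MB usable")
```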
How much of this could be re-captured by using Storage Resource Management software?
Storage Resource Management can help storage administrators improve the efficiency of
disk utilization. It is hard to quantify exactly the efficiency rates in the UNIX/Windows space,
since use of such tools is relatively new. However, in the mainframe world with DFSMS,
efficiency rates of over 95% disk utilization are common. If we conservatively assume that,
in the UNIX/Windows space, we could achieve rates of 80%, then Figure 1-4 shows the
cost savings that might be possible in our examples above.
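To see why an 80% target matters, consider the raw disk needed to hold a fixed amount of data at different utilization rates. A sketch, reusing the 3.75 TB figure from the earlier example:

```python
def raw_disk_needed(data_tb: float, utilization: float) -> float:
    """Raw disk (TB) required to hold data_tb at a given utilization rate."""
    return data_tb / utilization

data = 3.75  # TB of real data, as in the earlier enterprise example
for rate in (0.25, 0.50, 0.80):
    print(f"at {rate:.0%} utilization: {raw_disk_needed(data, rate):.1f} TB raw")
# 15.0 TB at 25%, 7.5 TB at 50%, ~4.7 TB at 80%
```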
Figure 1-4 SRM helps you recapture dollars already spent on storage
(Figure panels: NT Example; Enterprise Storage)
Figure 1-5 Predicted savings from managed storage versus unmanaged storage
One key piece of information is shown in Figure 1-4. By using SRM software to improve our
utilization, we can absorb 27 months of growth in the Windows example, or seven months of
growth in the enterprise storage example, using existing storage - this represents a
significant cost benefit.
Today, when the user calls and says “my application ran out of disk space and just stopped!”
administrators (storage administrators, network administrators, application administrators, or
platform administrators) have to scramble to get the application running again.
Meantime, the application is down, the company is losing money, and user satisfaction is very
low. Not being able to track the space used against space available is very expensive.
To find the stale files, duplicate files, or inappropriate files, the storage administrator would
have to get write access to all the servers in the environment, write the custom scripts, debug
them, run them regularly and review the resulting information manually, and then try to act on
it, while trying to perform his normal duties. The scripts also have to be maintained so that
they cater for new servers, new LUNs or volumes, new filesystems, new applications, new
policies, and so on. Doing all this manually is very difficult, if not impossible.
Figure 1-7 Scope of the problem - total storage, total number of filesystems
In this projection we used 100 GB for the size of the average UNIX host today, 25 GB for the
average Windows host, and 150 TB of storage as a target for the total storage in the average
large company. We also made some assumptions as to the number of filesystems per
UNIX/Windows host. We believe that this is a quite conservative projection. If you use larger
numbers, then the numbers are even more daunting. Nonetheless, the projection illustrates
the point: by 2004, an average large company will be managing:
7,500 filesystems
150 TB of storage
3,750 servers
We have already described the tools that today’s administrators typically use:
Custom-written scripts for different operating systems
Some individual point solutions
Spreadsheets and PC databases
Visio diagrams
Manual update processes
And good memories
In trying to project the staffing cost for storage administration (and only for administering disk)
we started with Figure 1-7, made some assumptions, and looked at the numbers. We made
two different projections - one based on the number of Gigabytes of storage that an
administrator would administer with today’s tools, and one based on the number of servers
that an administrator could manage. The assumptions were conservative.
For storage, we assumed that UNIX administrators could handle 3 TB, and Windows
administrators could handle 1 TB, and that the weighted average cost of an administrator was
$100,000 per year. Adjust your own model according to your own situation, since salary costs
vary greatly among different countries and cities, as well as within industry.
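Those assumptions translate into a simple cost model. As a sketch (the UNIX/Windows split of the 150 TB estate below is a hypothetical figure, not from the text):

```python
# Staffing cost projection from the text's assumptions:
# UNIX admins manage ~3 TB each, Windows admins ~1 TB each,
# weighted average cost of $100,000 per administrator per year.
TB_PER_UNIX_ADMIN = 3.0
TB_PER_WINDOWS_ADMIN = 1.0
COST_PER_ADMIN = 100_000

unix_tb, windows_tb = 100.0, 50.0   # hypothetical split of a 150 TB estate
admins = unix_tb / TB_PER_UNIX_ADMIN + windows_tb / TB_PER_WINDOWS_ADMIN
print(f"~{admins:.0f} administrators, ~${admins * COST_PER_ADMIN:,.0f} per year")
```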
Even with conservative assumptions, administering disk will cost a lot of money. These
numbers are significant, and parallel the situation facing the IT service industry in 1985,
before the introduction of storage management tools on the mainframe. After DFSMS was
introduced to the mainframe, storage administration labor costs dropped by 90%.
Few studies have yet been performed in the UNIX/Windows world on the impact of storage
management tools on storage administration costs. If we use 45% (half the savings
achieved in the mainframe world) as a working guideline for the savings we could achieve in
the UNIX/Windows world, then, given the large numbers involved, the figure would be substantial.
Platform administration
A company with hundreds of UNIX and thousands of Windows servers across different
business units has thousands of separate filesystems to administer. Managing that many of
anything is difficult. A growing percentage of companies have started consolidating storage
into SANs, but they still have the same number of storage entities to manage. Filesystems are
still assigned to individual application servers, and storage on the FC storage frame is
logically segregated.
Some companies have FC storage pools, NAS storage pools, and direct-attached storage
environments. Each FC storage pool is managed by its own storage manager. Each NAS pool
has its own manager. Each small group of direct-attached servers has its own platform
administrator. These labor costs can be at the user department level, at the division IT level,
or at the corporate IT level. The costs are hard to aggregate, but are large.
In either case, the corporation is paying for IT professionals to manage the backup and
recovery function. Dollars are either hidden in parts of individual’s salaries across the many
different departmental budgets, or prominently displayed (i.e. a large figure) in a centralized
budget.
The current set of Tivoli solutions already provide many of the functions in the Business
Management section (that is, Systems Management, Storage Management, and Security
Management).
As the Storage Resource Management tools start to implement reporting based on the
storage devices themselves, not just reporting from the operating systems view, the tools
need to know how to get this data from various storage devices. In the past and often still
today, such information was only accessible through vendor APIs as there still is no
standardized way to extract data from the storage device. For example, if the Storage
Resource Management tool wants to report where in the storage array particular data is
located, it will need to communicate to the storage device through a custom API to get this
information. This approach has several drawbacks:
CIM/WBEM technology uses a powerful human- and machine-readable language called the
managed object format (MOF) to precisely specify object models. Compilers can be
developed to read MOF files and automatically generate data type definitions, interface stubs,
and GUI constructs to be inserted into management applications.
SMIS object models are extensible, as explained in “SMI Specification” on page 18, enabling
easy addition of new devices and functionality to the model, and allowing vendor-unique
extensions for added-value functionality.
Figure: a management application uses a standard object model per device type (switch, array, tape library, and many other device types), each precisely specified in MOF, with vendor-unique functions available as extensions.
SMI Specification
SNIA has fully adopted and enhanced the CIM standard for storage management in its SMI
Specification. The SMI Specification was launched in mid-2002 to create and develop a
universal open interface for managing storage devices including storage networks.
The idea behind SMIS is to standardize the management interfaces so that management
applications can utilize these and provide cross device management. This means that a
newly introduced device can be immediately managed as it will conform to the standards.
The models and protocols in the SMIS implementation are platform-independent, enabling
applications to be developed for, and run on, any platform. The SNIA will also provide
interoperability tests, which will help vendors verify that their applications and devices
conform to the standard.
Figure: the SMIS communication stack - device discovery via SLP over TCP/IP, and management via CIM operations (CIM-XML) over HTTP and TCP/IP.
The CIM Agent or CIM Object Manager (CIM/OM) will translate a proprietary management
interface to the CIM interface. An example of a CIM/OM is the IBM CIM Object Manager for
the IBM TotalStorage Enterprise Storage Server (ESS).
In the future, more and more devices will be native CIM compliant, and will therefore have a
built-in Agent as shown in the “Embedded Model” in Figure 1-13.
When widely adopted, SMIS will streamline the way that the entire storage industry deals with
management. Management application developers will no longer have to integrate
incompatible feature-poor interfaces into their products. Component developers will no longer
have to “push” their unique interface functionality to applications developers. Instead, both will
be better able to concentrate on developing features and functions that have value to customers.
For more information on SMIS/CIM/WBEM, see the SNIA and DMTF Web sites:
http://www.snia.org
http://www.dmtf.org
IBM Tivoli Storage Resource Manager is enabled for CIM/WBEM based storage management
and as more and more devices become CIM enabled, it will be ready to manage them,
enabling a single point of management control for different storage devices.
IBM Tivoli Storage Resource Manager addresses the goals identified above, and offers
storage administrators the reporting tools needed to understand:
How much space is allocated to each application server, and how much is being used?
How fast data is growing (for a server, a filesystem, a type of data, etc.)?
How much space is being wasted?
How much space is available across a business unit or the enterprise?
How the data is distributed inside the storage device (at the time of writing, this was only
available for the IBM ESS)?
Forecast requirements
And many other issues
Tivoli Storage Resource Manager monitors storage assets, capacity, and usage across an
enterprise. Tivoli Storage Resource Manager can look at:
Storage from a host perspective: Manage all the host-attached storage, capacity and
consumption attributed to filesystems, users, directories, and files
Storage from an application perspective: Monitor and manage the storage activity inside
different database entities including instance, tablespace, and table
Storage utilization: Provide chargeback information.
Tivoli Storage Resource Manager provides over 300 standardized reports (and the ability to
customize your own reports) about filesystems, databases, and storage infrastructure. These
reports provide the storage administrator with information about:
Assets
Availability
Capacity
Usage
Usage violation
Backup
Through monitoring and reporting, Tivoli Storage Resource Manager helps the storage
administrator prevent outages in the storage infrastructure. Armed with timely information, the
storage administrator can take action to keep storage and data available to the application.
Tivoli Storage Resource Manager also helps to make the most efficient use of storage
budgets by allowing administrators to use their existing storage more efficiently, and more
accurately predict future storage growth.
2.1.2 Architecture
Tivoli Storage Resource Manager architecture is shown in Figure 2-1.
Figure 2-1 Tivoli Storage Resource Manager architecture (Server, Web Server, database repository, managed storage Agents, and browser/GUI clients)
The Server system manages a number of Agents, which can be servers with storage
attached, NAS systems or database application servers. Information is collected from the
Agents and stored in a database repository. The stored information can then be displayed
from a native GUI client or browser interface anywhere in the network.
Figure: Direct-connect clients attach to the SRM Server directly; Web-connect clients go through the WWW Server. The SRM Server manages the managed servers (Agents) and stores collected data in the SRM database repository.
An RDBMS (either locally or remote) manages the repository of data collected from the
Agents, and the reporting and monitoring capabilities defined by the users.
WWW Server
The Web Server is optional, and handles communications to allow remote Web access to the
Server. The WWW Server can run on the same physical server as the SRM Server.
Novell NetWare and NAS devices do not currently support locally installed Agents - they are
managed through an Agent installed on a machine that uses (accesses) the NetWare or NAS
device. The Agent will discover information on the volumes or filesystems that are accessible
to the Agent’s host.
The Agents are quite lightweight. Agents listen for commands from the host, and then perform
a Probe (against the operating system), and/or a Scan (against selected filesystems). Normal
operations might see one scheduled Scan per day or week, plus various ad hoc Scans.
Chapter 5, “Operations: Policy, Quotas, and Alerts” on page 159 provides details of Scans
and Probes.
Web-connect clients use the WWW Server to access the user interface through a Web
browser. The Java administrative applet is downloaded to the Web Client machine and
presents the same user interface that Direct-connect Clients see.
Server
The following platforms are supported for Tivoli Storage Resource Manager Server at the
time of writing:
Windows NT 4.0 or higher with SP4.0 or above
Windows 2000
Windows XP
Windows Server 2003
AIX 4.3.3, 5.1
HP-UX 11.0
Solaris 2.6, 7, 8, or 9
Red Hat Linux 6.2, 7.1, 7.2 (64-bit is not supported)
The database repository on the server can be local for all the supported databases, or
remote for IBM DB2 UDB, MS SQL Server, Sybase, and Oracle.
2.2.4 Cloudscape
Interbase (formerly shipped for a database repository with IBM Tivoli Storage Resource
Manager) has been replaced with IBM’s Cloudscape database for use as an IBM Tivoli
Storage Resource Manager repository. You can easily install this lightweight database and
use it for demonstration purposes, trial licenses, test environments, and so on. See the IBM
Tivoli Storage Resource Manager Installation Guide, GC32-9066, for more information about
Cloudscape support.
When you first run Tivoli Storage Resource Manager (and Tivoli Storage Resource Manager
for Databases) against your servers and disks, filesystems and databases, you find out:
What space is used on what servers and storage
What files are using that space
Which database applications have sufficient space, and which do not
Customers typically find that utilization percentage across the enterprise is low - typically less
than 50%. Therefore, generally the initial focus is on housecleaning - deleting stale, old, or
inappropriate files. After housecleaning, storage utilization will have dropped to even lower
levels - maybe 40% this time. After completing this step, you can continue with more long-term measures.
Enhancing revenue
Before using Tivoli Storage Resource Manager to manage your storage, it was difficult to get
advance warning of out-of-space conditions on critical application servers. If an application
ran out of storage on a server, it would typically just stop. Revenue generated by that
application, or the service it provided, also stopped, and fixing such unplanned outages
quickly is usually expensive.
With Tivoli Storage Resource Manager, applications need not run out of storage. You will know when they need more storage, and can provide it at a reasonable cost before an outage occurs, avoiding the loss of revenue and services as well as the additional costs associated with unplanned outages.
Quotas
Chargeback
Figure 2-6 shows the Tivoli Storage Resource Manager dashboard. This is the default
right-hand pane display when you start Tivoli Storage Resource Manager and shows a quick
summary of the overall health of the storage environment. It can quickly show you potential
problem areas for further investigation.
The dashboard contains four viewable areas, which cycle among seven pre-defined panels.
To cycle, use the Cycle Panels button. Use the Refresh button to update the display.
Enterprise-wide summary
The Enterprise-wide Summary panel shows statistics accumulated from all the Agents. The
statistics are:
Total filesystem capacity available
Total filesystem capacity used
Total filesystem free capacity
Total allocated and unallocated disk space
Total disk space unallocated to filesystems
Total number of monitored servers
Total number of unmonitored servers
Total number of users
Total number of disks
Total number of filesystems
Total number of directories
Total number of files
Alerts Pending
This panel shows active Alerts that have been triggered but are still pending.
Pings
A Ping is a standard ICMP Ping that checks registered Agents for availability. If an Agent does not respond to a Ping (or to a pre-defined number of Pings), you can set up an Alert to take some action. The actions can be one, any, or all of:
SNMP trap
Notification at login
Entry in the Windows event log
Run a script
Send e-mail to one or more specified users
Pings are used to generate Availability Reports, which list the percentage of times a computer has responded to the Ping. An example of an Availability Report for Ping is shown in Figure 2-7. Availability Reports are discussed in detail in 6.2.3, “Availability Reporting” on page 262.
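The figure an Availability Report presents is just the share of answered Pings. A small sketch of that arithmetic (the function name and sample counts are invented):

```shell
#!/bin/sh
# Hypothetical sketch: availability as the percentage of Pings a computer
# answered out of those attempted, as listed in an Availability Report.
availability_pct() {
  answered=$1
  attempted=$2
  awk -v a="$answered" -v t="$attempted" 'BEGIN { printf "%.1f", (a / t) * 100 }'
}

# A computer that answered 287 of 300 Pings is 95.7% available
availability_pct 287 300
```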
Probes
Probes are used to gather information about the assets and system resources of monitored servers, such as processor count and speed, memory size, disk count and size, filesystems, etc. If Tivoli Storage Resource Manager for Databases is licensed, then Probes also gather information about the files, instances, logs, and objects that make up the monitored databases. The data collected by the Probe process is used in the Assets Reports described in 6.2.1, “Asset Reporting” on page 252.
Scans
The Scan process is used to gather statistics about usage and trends of the server storage. If Tivoli Storage Resource Manager for Databases is licensed, then Scans also gather information about the storage usage and trends within the monitored databases. The data collected by Scan jobs is tailored by Profiles, and the results are stored in the enterprise repository, where they supply the Capacity, Usage, Usage Violations, and Backup Reporting functions. These reports can be scheduled to run regularly, or they can be run ad hoc by the administrator.
Profiles limit the scanning according to the parameters specified in the Profile. Profiles are
used in Scan jobs to specify what file patterns will be scanned, what attributes will be
gathered, what summary view will be available in reports and the retention period for the
statistics. Tivoli Storage Resource Manager supplies a number of default Profiles which can
be used, or additional Profiles can be defined. Table 5-1 on page 180 shows the default
Profiles provided. Some of these include:
Largest files - Gathers statistics on the largest files
Largest directories - Gathers statistics on the largest directories
Most at risk - Gathers statistics on the files that have been modified the longest time ago
and have not been backed up since modified (Windows Agents only)
Figure 2-10 shows a sample of a report produced from data collected in Scans.
This report shows a list of the filesystems on each Agent, the amount of space used in each,
expressed in bytes and as a percentage, the amount of free space, and the total capacity
available in the filesystem.
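A rough per-filesystem view of the same kind - mount point, space used, free space, and percent used - can be approximated on any UNIX Agent with standard tools; a sketch using POSIX df output:

```shell
#!/bin/sh
# Sketch: list each filesystem with used and free space (in KB) and the
# percentage used, similar in spirit to the Scan-based report above.
# df -P guarantees the portable, one-line-per-filesystem output format.
df -P -k | awk 'NR > 1 { printf "%-24s usedKB=%s freeKB=%s pct=%s\n", $6, $3, $4, $5 }'
```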
2.4.3 Reporting
Reporting in Tivoli Storage Resource Manager is very rich, with over 300 pre-defined views,
and the capability to customize those standard views, save the custom report, and add it to
your menu for scheduled or ad hoc reports. You can also create your own individual reports
according to particular needs and set them to run as needed, or in batch (regularly). Reports
can be produced in table format or a variety of charting (graph) views. You can export reports
to CSV or HTML formats for external usage.
Reports are generated against data already in the repository. A common practice is to
schedule Scans and Probes just before running reports.
Reporting can be done at almost any level in the system, from the enterprise down to a
specific entity and any level in between. Figure 2-6 on page 34 shows a high-level summary
report. Or, you can drill down to something very specific. Figure 2-11 is an example of a
Reports can be produced either system-wide or grouped into views, such as by computer, or
OS type.
Restriction: Currently, there is a maximum of 32,767 (2^15 - 1) rows per report. Therefore, you cannot use Tivoli Storage Resource Manager to produce a report listing all the .HTM files in a directory containing a million files. However, you can (and it would be more productive to do so) produce a report of the 20 largest files in the directory, or the 20 oldest files, for example.
Tivoli Storage Resource Manager allows you to group information about similar entities (disk,
filesystems, etc.) from different servers or business units into a summary report, so that
business and technology administrators can manage an enterprise infrastructure. Or, you can
summarize information from a specific server - the flexibility and choice of configuration is
entirely up to the administrator.
You can report as of a point in time, or produce a historical report showing storage growth trends over time. Tivoli Storage Resource Manager reporting lets you track actual demand for disk over time, and then use this information to forecast future demand for the next quarter, two quarters, year, etc. Figure 2-12 is an example of a historical report, showing a graph of the number of files on the C drive on the Agent WISLA.
Reporting categories
Major reporting categories for filesystems and databases are:
Assets Reporting uses the data collected by Probes to build a hardware inventory of the storage assets. You can then navigate through a hierarchical view of the assets by drilling down through computers, controllers, disks, filesystems, directories, and exports. For database reporting, information on instances, databases, tables, and data files is presented for reporting.
Storage Subsystems Reporting shows storage capacity at a computer, filesystem, storage subsystem, LUN, and disk level. These reports also enable you to view the relationships among the components of a storage subsystem. At the time of writing, Storage Subsystem Reporting is available for the IBM TotalStorage Enterprise Storage Server (ESS).
Availability Reporting shows responses to Ping jobs, as well as computer uptime.
Capacity Reporting shows how much storage capacity is installed, how much of the
installed capacity is being used, and how much is available for future growth. Reporting is
done by disk and filesystem, and for databases, by database.
Usage Reporting shows the usage and growth of storage consumption, grouped by filesystems and computers, by individual users, or enterprise-wide.
Usage Violation Reporting shows violations of the corporate storage usage policies defined through Tivoli Storage Resource Manager. Violations are of either Quota or Constraint policies.
2.4.4 Alerts
An Alert defines an action to be performed if a particular event occurs or condition is found.
Alerts can be set on physical objects (computers and disks) or logical objects (filesystems,
directories, users, databases, and OS user groups). Alerts can tell you, for instance, if a disk
has a lot of recent defects, or if a filesystem or database is approaching capacity.
Alerts on computers and disks come from the output of Probe jobs and are generated for
each object that meets the triggering condition. If you have specified a triggered action
(running a script, sending an e-mail, etc.) then that action will happen if the condition is met.
Alerts on filesystems, directories, users, and OS user groups come from the combined output
of a Probe and a Scan. Again, if you have specified an action, that action will be performed if
the condition is met.
Figure 2-14 shows the Alert Log. The entries Alert Log, All, Computer, and Filesystem are in
red, signifying that an Alert threshold has been reached. Drilling down on Computer, you can
see the details of the Alert. We can see it was caused by the system VMWARE2KSRV1 being
unreachable.
Tivoli Storage Resource Manager can directly produce an invoice or create a file in CIMS
format. CIMS is a set of resource accounting tools that allow you to track, manage, allocate,
and charge for IT resources and costs. For more information on CIMS see the Web site:
http://www.cims.com.
Chargeback is a very powerful tool for raising the awareness within the organization of the
cost of storage, and the need to have the appropriate tools and processes in place to manage
storage effectively and efficiently.
Refer to 6.8, “Charging for storage usage” on page 364 for more details on Chargebacks.
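The arithmetic behind a flat-rate invoice is simple: cost equals the amount of storage used times a rate. A sketch of that calculation (the user name, usage figure, and per-GB rate below are invented for illustration):

```shell
#!/bin/sh
# Hypothetical flat-rate chargeback sketch: cost = gigabytes used x rate.
# All values below are illustrative, not taken from the product.
charge() {
  user=$1
  gb_used=$2
  rate_per_gb=$3
  awk -v u="$user" -v g="$gb_used" -v r="$rate_per_gb" \
    'BEGIN { printf "%s %.2f", u, g * r }'
}

charge admin 100 0.52   # 100 GB at 0.52 per GB
```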
Part 2 Design
considerations
In this part we present some things to consider when designing an IBM Tivoli Storage
Resource Manager solution, specifically covering some deployment scenarios. We present
the basic architecture and describe how higher availability can be implemented.
[Figure: Tivoli Storage Resource Manager architecture - Direct-connect clients and Web-connect clients (through a WWW server) communicate with the SRM Server, which monitors the managed servers, stores its data in the SRM database repository, and produces scheduled batch reports.]
Server roles
Monitoring
– Discovery
– Probes
– Pings
– Scans
Policy Management
– Quotas
– Constraints
– Scheduled Actions (scripts)
– Alerts
– Alerts (scripts)
Monitoring
– Discovery - The Server searches the network to discover machines which do not have
Agent code installed (that is, not yet being monitored by IBM Tivoli Storage Resource
Manager). It will add them to the Unmanaged list (shown in Figure 3-3 on page 51) so
they can be potentially managed later. Only Windows systems in the same domain as
the IBM Tivoli Storage Resource Manager Server will be discovered.
– Probes - The Server will collect the inventory of storage assets of Managed Systems
(computers, controllers, disk drives, filesystems, logical units, etc.) and store it in the
database repository.
– Pings - The Server checks the availability of the Managed Systems by issuing TCP/IP
ping commands to the system. This function is not available for NAS devices and
NetWare servers.
– Scans - The Server Scans the Managed Systems to gather information on usage and
consumption.
The Server roles described above are covered in more detail in Chapter 5, “Operations:
Policy, Quotas, and Alerts” on page 159.
All Storage Resource Management operations are controlled from the Server side. The
Server communicates with the Agents (Managed Systems) when it is performing those tasks.
No management tasks are initiated on the Agent itself; the Agent only performs Scans and
script execution on behalf of the Server. Also, all the communication with the database is
done on the Server side for performance reasons. The data is transmitted from the Agent to
the Server and the Server then stores it in the database repository. With such an approach,
there is no need for any database connectivity software on the Agents. Also, since the
Direct-connect Clients and Web Connect Clients for reporting request data through the
Server, rather than directly from the database, they also do not require any database
connectivity software installed.
As everything is controlled and run from the Server side, reliability and availability are key considerations for the system which runs the IBM Tivoli Storage Resource Manager Server.
When you install the Tivoli Storage Resource Manager Server in a new environment, an
Agent is automatically installed on the same system as the Server. In this case after the initial
discovery job, all the Windows systems from the domain or workgroup of the Server system
will be displayed under Unmanaged Computers.
3.2.2 Scripts
Scripts are executed as a result of either of the following events:
Scheduled actions - Batch Reports
Alerts - An Alert can trigger an action, which can be a script
The following steps explain how scripts are run when they are triggered:
1. The Server looks in its local \scripts directory.
2. If a script with the required name is in that directory, the Server loads the script and sends it to the Agent where it is designated to run.
3. The Agent receives the script, saves it into a temporary file, and runs it.
4. After the script finishes, the temporary file on the Agent is deleted.
There are two scenarios in which the script may not be run from the Server:
The script already exists on the Agent. In this case, the Agent runs the local script directly instead. The Agent is always checked first for a local copy before the copy from the Server is run.
You did not check the Agent may run scripts sent by server option during the installation process, as described in 4.3.3, “Installation of the Server code” on page 71. Without this option set, Agents will not accept scripts from the Server for execution.
Note: The advantage of setting the policy that Agents may run scripts from the Server is
that you can then install and maintain only one repository for all scripts. This can ease
the management of the scripts and it will also give you consistency.
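The lookup order described above - a local copy on the Agent wins over the copy dispatched from the Server's \scripts directory - can be sketched as follows (the function and directory names are invented; this is an illustration of the behavior, not product code):

```shell
#!/bin/sh
# Hypothetical sketch of the script lookup order: run the Agent's own local
# copy if one exists, otherwise run the copy received from the Server.
run_agent_script() {
  name=$1        # script name requested by the job
  local_dir=$2   # the Agent's local script directory (checked first)
  server_copy=$3 # temporary file holding the copy sent by the Server
  if [ -x "$local_dir/$name" ]; then
    "$local_dir/$name"   # local copy takes precedence
  else
    sh "$server_copy"    # fall back to the Server's copy
  fi
}
```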
Agent types
StorageAgent for OS (includes NAS)
StorageAgent for Databases
StorageAgent for Chargeback
Agent roles
Executing Probes and Scans on behalf of the SRM server
Executing scripts for Scheduled Actions and Alerts
The Agent code is required on every system you want to manage. As the Agents
communicate through TCP/IP, the Managed System needs IP connectivity to the Server.
Tip: It is recommended that you divide NAS exported filesystems among the Managed
Systems, which access the NAS device. This means that the workload of scanning and
probing is shared among the Agents.
Novell NetWare servers - To retrieve storage information from the servers and volumes within NDS trees, you must install the Agent code on a Windows system where a Novell NetWare client is already installed. The Agent code uses native NetWare calls from these systems. The requirements for a Windows Agent to scan NetWare systems are:
– It runs Windows 2000, or Windows NT 4 SP4 and above
– It has a NetWare Client installed
– It has access to the Novell NetWare servers and volumes within your environment. This means that you must have a user ID with the correct access level to be able to perform queries into the NDS trees.
A single Server instance can theoretically support more than 1000 Agents. Of course, the load on the Server increases with the number of jobs defined, and the load a job places on the Server and Agents depends on its definition. For example, a Scan that looks for all files runs much longer, and is more CPU-intensive, than a Scan that looks only for particular file types.
The Agent should be installed on every system you want to manage. For managing NAS
devices and Novell NetWare servers, you need to install Agents on the systems using the
NAS and NetWare filesystems, as described in 3.3, “IBM Tivoli Storage Resource Manager
Agent” on page 52.
Tip: If possible, it is recommended that you use a separate system for the database
repository.
Filesystem extension uses the ESS Common Information Model/Object Manager (CIM/OM)
to interact with ESS subsystems. See “SMI Specification” on page 18 for more information on
CIM/OM. The IBM Tivoli Storage Resource Manager server communicates with the CIM/OM
server over an IP network using the HTTPS protocol. CIM/OMs installed on the same network
subnet as the IBM Tivoli Storage Resource Manager server can be automatically discovered.
The Service Location Protocol (SLP) is used to discover CIM/OMs.
For information about supported versions of the CIM/OM, see the IBM Tivoli Storage
Resource Manager Support Website at:
http://www-3.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageResourceManager.html
Restriction: Automatic discovery is not supported for CIM/OMs installed on Sun Solaris or
HP-UX.
In our lab setup (Figure 3-5), the CIM/OM server is installed on a host called W2KADVTSM,
which talks to the ESS (ESSF20) through Ethernet. The IBM Tivoli Storage Resource
Manager server (W2KADVTSRM) makes an HTTPS connection over the network directly to
the CIM/OM server. Neither the IBM Tivoli Storage Resource Manager server, nor the
CIM/OM server need to be connected through Fibre Channel to the ESS.
[Figure: Lab setup - a 43P server running AIX 5.1 ML 4 with the ITSRM Agent (tsmsrv43p, 172.31.1.155), the ESS ESSF20 (172.31.1.1), and a 2109 switch, connected over the intranet.]
If you just want ESS LUN reporting, then you do not need agents on the machines connected
to the ESS through Fibre Channel. For additional information (filesystems, devices, etc.) and
filesystem-extension and LUN provisioning, there must be an agent on the hosts connected to
the ESS.
[Figure: NAS monitoring - the Tivoli SRM Server communicates over IP with Agents on a UNIX system (scanning NFS-exported network drives) and a Windows system (scanning CIFS-imported network drives) that access the NAS device.]
In this example we also divided the workload of scanning the NAS device over the two
systems. Depending on the size of the NAS filesystems, it is recommended to spread the
scanning workload among the systems running the Agent code.
NAS discovery
After you complete the installation of the Agents for the systems accessing the NAS devices,
initial discovery will be performed. The discovery job is sent to every managed UNIX Agent
and to one managed Windows Agent in each Windows domain:
Windows - The Agent responsible for the discovery issues an SNMP query to all the Windows systems and NAS devices in the domain. If the Vendor Identification Number matches a number defined in the file config\nas.config in the installation directory, the system is considered a NAS device. Example 3-1 shows the nas.config file from our lab installation.
After the initial Agent installation the entry for Microsoft is not present. We added the entry
to recognize the IBM NAS 200 device in our lab.
The 311 entry is the generic identification number for Windows systems, so all Windows machines will be discovered. You can later limit the login to the NAS devices (as shown in Figure 3-7) by selecting only the NAS device(s) you want to manage and leaving all the others unselected. After discovering the NAS devices, the Agent performs a login to each device. By default, the password supplied during installation is used. If a NAS device requires a different password, you can supply it for each filesystem separately, as shown in “Configuration: General settings” on page 108.
Attention: If you put the 311 entry in the nas.config file, all Windows-based systems with SNMP enabled will be recognized as Other NAS devices, as shown in Figure 3-7. This means that Windows systems without installed Agents will no longer show up under unmanaged devices, which can be confusing: you may think that all Windows-based systems are managed because they do not appear in the unmanaged list.
Tip: For these reasons, if your Windows-powered NAS device allows installation of third-party products, we recommend that you install the Agent on the device itself.
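The vendor check described above rests on the SNMP sysObjectID: values under .1.3.6.1.4.1.&lt;enterprise&gt; identify the vendor, and discovery compares that enterprise number against the entries in nas.config. A sketch of extracting the number (the function name and sample OID are illustrative; 311 is Microsoft's registered enterprise number):

```shell
#!/bin/sh
# Hypothetical sketch: pull the enterprise number out of a sysObjectID
# string as discovery would compare it against nas.config entries.
enterprise_number() {
  # e.g. .1.3.6.1.4.1.311.1.1.3.1.2 -> 311 (Microsoft)
  echo "$1" | awk -F. '{ print $8 }'
}

enterprise_number .1.3.6.1.4.1.311.1.1.3.1.2
```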
UNIX - All the Managed Systems that have filesystems mounted from other machines are used for discovering NAS devices. The Agent reads the mount table (on Solaris, automount configuration files are also used) to find the imported mounts. It then performs an SNMP query, and if the identification number returned is listed in the file nas.config, the system is considered a NAS device.
Note: If the NAS device does not report back on the SNMP query, it will appear in the
Unmanaged Computers Report.
[Figure: NetWare monitoring - the Tivoli SRM Server communicates over IP with a Windows system that has the Tivoli SRM Agent installed, a Novell NetWare client, and access to the NDS data; that system reaches the Novell NetWare servers (running version 4.0 or above) over IPX/SPX.]
In this example, the data for the Novell NetWare server is extracted using Novell NDS
information. More than one NetWare server can be monitored from a single Managed System
with the Agent installed.
Attention: The system which will manage Novell Servers should have a user ID with
sufficient rights to perform queries to the NDS trees.
[Figure: Local repository installation - Direct-connect clients and Web-connect clients (through a WWW server) access the SRM Server, which monitors the managed servers, holds the repository locally, and produces scheduled batch reports.]
This type of installation can have certain scalability limitations, as you need to take care of database growth and maintenance yourself. It is available on all supported Tivoli Storage Resource Manager Server platforms, provided the database product itself is supported on that operating system; for example, Microsoft SQL Server is only available on Windows systems.
In our lab we performed this type of installation using a Windows 2000 Server system with
IBM DB2 Version 7.2 as the underlying database. The details of the installation are covered in
Chapter 4, “IBM Tivoli Storage Resource Manager installation” on page 67.
[Figure: Remote database installation - the SRM Server stores the SRM database repository on a remote database server; Direct-connect clients, Web-connect clients (through a WWW server), managed servers, and scheduled batch reports are otherwise as in the local installation.]
[Figure: Standby Server installation - primary and standby Tivoli SRM Servers share a database server holding the Tivoli SRM database.]
The standby Server has to be installed with the same settings as the primary one, and it
needs to have access to the same database. Also, whenever you make changes to the
primary Server you need to make the same changes to the secondary Server.
In the event of a primary Server failure, you would only need to change the DNS record so
that the standby Server IP address will be resolved when Agents perform queries to the Tivoli
Storage Resource Manager Server.
In our lab environment we performed an installation using Oracle 8.1.7 on Windows 2000
Server to use as a database repository. We installed the Tivoli Storage Resource Manager
Server on another two Windows 2000 server systems. The details of installation are covered
in 4.8, “Manager HA install using remote Oracle database” on page 142.
3.5.4 Windows cluster install of IBM Tivoli Storage Resource Manager Server
In this case, the Tivoli Storage Resource Manager Server is installed on two Microsoft
Windows Server systems set up in a Microsoft Cluster Services (MSCS) environment. The
systems will use SAN attached storage for the shared disk resources. The database
repository will reside on a separate server. The setup is shown in Figure 3-12.
[Figure 3-12: Windows cluster installation - two clustered Tivoli SRM Server systems share FAStT 700 storage over Fibre Channel in the SAN, and reach the database server holding the Tivoli SRM database over IP.]
In this installation, the IBM Tivoli Storage Resource Manager program files are installed in a directory on the shared storage so that they are reachable from both servers. Doing this automatically maintains the consistency of the configuration.
In our lab environment we performed this installation using a remote database repository on Oracle 8.1.7 on Windows 2000. We installed IBM Tivoli Storage Resource Manager Server on a cluster of two Windows 2000 Advanced Server systems. The details of the installation are given in 4.7, “Microsoft Cluster installation” on page 123.
3.5.5 AIX cluster installation of IBM Tivoli Storage Resource Manager Server
In this case the installation of the IBM Tivoli Storage Resource Manager server will be
performed on two AIX server systems set up in an IBM HACMP environment. Both systems
will have the Tivoli Storage Resource Manager Server installed and they will use SAN
attached storage for the shared disk resources. The database repository will reside on a
separate server. The setup is shown in Figure 3-13.
[Figure 3-13: AIX cluster installation - two HACMP-clustered AIX systems running the IBM Tivoli Storage Resource Manager Server share FAStT 700 storage over Fibre Channel in the SAN, and reach the database server holding the ITSRM database over IP.]
In this installation, the IBM Tivoli Storage Resource Manager program files are installed on the shared storage, so they are accessible from both servers. Doing this automatically maintains the consistency of the configuration. The database repository is installed on a Windows 2000 server running IBM DB2 UDB Version 7.2.
Installation
Database creation
Manager and Agent install
Configure the Web access for the Manager application
Start the Manager application
A DB2 database can be created using DB2 Control Center or by using command line tools.
We used the DB2 Control Center wizard to create the database, and accepted defaults for the
configuration settings. In our case we created a database called ITSRMDB for this
environment.
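For reference, the equivalent of the Control Center wizard step can be done from the DB2 command line processor; a sketch, assuming a local DB2 UDB V7.2 instance and accepting the configuration defaults (the database name ITSRMDB matches our lab environment):

```shell
# Sketch only: create the repository database with the DB2 command line
# processor instead of the Control Center, accepting default settings.
db2 "CREATE DATABASE ITSRMDB"
# Verify that the database is reachable before pointing the installer at it
db2 "CONNECT TO ITSRMDB"
db2 "CONNECT RESET"
```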
For the database that will be used as the repository, you also need to provide the JDBC driver, because Tivoli Storage Resource Manager uses JDBC to access the database.
As this is a new install, the only possible selection is to install the IBM Tivoli SRM code. Click
Next to continue and the license agreement displays. Accept the license agreement and click
Next to continue. On the next window click Yes to confirm. You then select the components to
install, as shown in Figure 4-5.
As we are installing the Server code, we selected The Tivoli SRM Server and an Agent on
this machine.
Note: Whenever you install the Server code, the Agent code is also installed.
Enter in the supplied licenses, depending on what you have bought for your organization.
Click Next to continue, and the database selection screen in Figure 4-7 displays.
Select the database server which is available. In our setup, we used DB2 UDB as the
database repository.
After selecting the database repository click Next; you will see the service account screen
shown in Figure 4-8.
The installation program will query the DB2 installation for existing databases and display them. If the database you created for the Tivoli Storage Resource Manager repository is listed, you can select it by clicking its name. Otherwise, you can type the name into the Database Alias field. You also need to provide the database user name and password under Connection information. Because the Manager accesses the database using JDBC, you need to specify the path to the JDBC driver in the JDBC driver field.
The JDBC driver for IBM DB2 is installed automatically with the database product itself.
Note: The setup for the other database engines will be slightly different, but you will still be
asked for the same type of information - that is, database name, user ID, and JDBC driver.
After providing all the necessary information, click Next and you will see the Repository
Creation Parameters screen shown in Figure 4-10.
On this screen you can specify the database schema and tablespace name.
Tip: We recommend that you accept the defaults for these two fields. Alternatively, you can
also use the naming convention that is used in your enterprise.
If you are using DB2 as the repository, you can also choose how you will manage the
database space:
System Managed (SMS)
This option indicates that the space is managed by the OS. In this case you specify the
Container Directory, which is then managed by the system, and can grow as large as the
free space on the filesystem.
Tip: If you do not have in-house database skills, the System Managed approach is recommended.
Tip: We recommend that you use meaningful names for the Container Directory and Container File at installation. This can help if you later need to find the container file.
The setup for other types of databases is similar. An example using Oracle is in step 9.,
“Install the Tivoli Storage Resource Manager Server on the primary server using the same
parameters as on the standby server.’’ on page 145. An example using MS SQL-Server is in
4.3.4, “Microsoft SQL-Server as repository’’ on page 78.
Note: The ports 2078 and 2077 are registered with IANA (Internet Assigned Numbers Authority), so we recommend you use them; however, if they are already in use in your network, you can change them. If you change them on the Manager installation, you also need to change them on each host Agent installation.
The Agent port defined here is used for the local Agent which is installed along with the
Server installation. The port which is defined is registered in the database, and because of
that, each individual Agent could possibly use a different port (however, this is not
recommended).
Agent should perform a SCAN when first brought up: With this option on, the host
Agent will perform an initial scan after installation.
Agent may run scripts sent by server (in addition to local scripts): If this option is
selected, host Agents will accept scripts sent from the Manager, otherwise, they will only
run locally stored scripts. You can get more information about scripts in 3.2.2, “Scripts’’ on
page 51.
Administrators Group: This is the name of the administrators users group. The default
value is Administrators, and can be changed if required for your organization. The
security roles are described in 4.6.1, “Security’’ on page 98.
After supplying all the needed information, click Next. You will see the NAS Discovery screen
shown in Figure 4-12.
In this screen you define parameters which are used for NAS discovery:
User Name - User name to login to Windows NAS devices
Password - Password for Windows NAS devices
Tip: If you use different user names on different NAS devices you can later specify a
different user name and password combination for each device.
SNMP Communities - The manager uses SNMP communities to query and identify NAS
filers (for example Network Appliance NAS devices). If you do not specify the community
name, the public community is used.
After specifying the required parameters click Next - you will see the Space Requirements
screen as in Figure 4-13.
In this screen you can choose the installation path for the Server code. You can also see the space required for the installation, which can help you select a directory location. If the destination directory does not exist, you will be prompted to create it after you click Next. Finally, you will see the installation summary screen in Figure 4-14.
At this stage you can still decide to go back and change settings if necessary. Click Next to
start copying files.
If you installed the Tivoli Storage Resource Manager repository in a DB2 UDB database, the
Create Service Account window is shown in Figure 4-15. The Tivoli Storage Resource
Manager creates a new Service account and the Agent will use it when running probes and
scans against DB2 databases on the current machine.
Click Yes to create the Service account and to continue with copying the files.
After the copying is complete, you should see the messages shown in Figure 4-16.
In this case, after installation a Probe was executed. This happened because we enabled the
installation option Agent should perform a SCAN when first brought up.
If you select MS SQL-Server as the database repository during the installation process, you
will see a screen like Figure 4-17.
Select MS SQL Server and click Next to continue. Figure 4-18 displays.
Click Next and the Repository Creation Parameters screen displays (Figure 4-19).
Here you specify the name and location of the database components. Click Next and the
installation process continues as in Figure 4-11 on page 75.
If you select Cloudscape as the database repository when installing, you will see a screen
like Figure 4-20.
Click Next and the installation process will continue. The pop-up warning in Figure 4-21
advises you not to use the database in a production environment.
Click OK and the installation process continues as from Figure 4-11 on page 75.
You can also access the Server system over the network and perform administration tasks
from a remote workstation. The remote administration console is a Java-based applet, which
can be run locally or remotely by downloading it from the Web server.
In our example we set up remote Web access using Microsoft IIS (Internet Information
Server) which is built into Windows 2000. We did the following:
1. Select Start -> Administrative Tools -> Internet Information Services.
2. Right-click Default Web Site and select New -> Virtual Directory (Figure 4-22).
3. The Virtual Directory Creation Wizard displays. Click Next to display the Virtual
Directory Alias screen (Figure 4-23).
4. Enter an alias name which will be used as the access point for Web access (tsrm in our
example). Next, the Web Site Content Directory screen displays (Figure 4-24).
5. Specify the directory where the Web access files for Tivoli Storage Resource Manager are
located. They will be in the GUI directory under the installation directory, C:\Program
Files\Tivoli\TSRM\gui in our example. Click Next and the Access Permissions screen
(shown in Figure 4-25) displays.
6. In this dialog you can set up access permissions for the files in the virtual Web directory.
Click Add and add the TivoliSRM.html document. Click OK to save the changes.
Now you can access the Tivoli Storage Resource Manager Server over the Web simply by typing
in the address of the Web directory: http://lochness/tsrm/
Figure 4-26 IBM Tivoli Storage Resource Manager main Web window
The applet is downloaded to your system and you need to grant it access (Figure 4-27).
After granting access, you will see the administrator GUI main screen, as in Figure 4-28.
Congratulations! You have just installed, configured, and started Tivoli Storage Resource
Manager.
Click Next to continue. Accept the license terms, click Next to continue, and you will see the
installation selection screen in Figure 4-32.
As we are installing the GUI, select The GUI for reporting and click Next. The Parameters
screen displays, as shown in Figure 4-33.
Enter the Server hostname or IP address and the Server port (as shown in Figure 4-11 on
page 75). Click Next; you will see the Space Requirements screen, as shown in Figure 4-34.
Here you can see the size of the installed code and the selected installation directory. We
recommend you keep the default settings. Click Next to complete the installation.
Windows Agent
To install the Agent code for Windows, run SETUP.EXE from the Windows directory on the Tivoli
Storage Resource Manager CD. The initial screen displays, as in Figure 4-4 on page 71.
Click Next and accept the license. You will see the installation selection screen shown in
Figure 4-35.
Select An Agent on this machine and click Next. You will see the Parameters screen, as
shown in Figure 4-36.
The Server Port should match the entry from the Server installation - 2078 in our case, as
shown in Figure 4-11 on page 75, or the Agent will not be able to connect to the Server. The
Server Name should be the hostname (or IP address) of the Tivoli Storage Resource Manager
Server. The Agent Port can be any free port on the Agent system. You should use the same
port for all Agents as this helps simplify management.
If you do not want to automatically perform a Scan after the Agent is installed, deselect the
option Agent should perform a SCAN when first brought up (gathers default
statistics).
If for some reason you do not want to allow Agents to accept scripts from the server, deselect
Agent may run scripts sent by server (in addition to local scripts).
After supplying all the parameters click Next. The installation program will check the
connection to the Server. The Space Requirements screen will display, as shown in
Figure 4-37.
Here you can see the required space for the installation and specify the installation directory.
If the directory does not exist, you will need to confirm its creation. Click Next, then confirm
the settings on the next screen. Select Next to start copying files. After the installation is
complete, the Agent will start automatically.
If you are installing on an Agent with a NetWare client, you will be prompted to create a local
account for the Agent (as shown in Figure 4-38) before the Agent is started after installation.
This account can only be created if you are logged into the Novell NDS with sufficient
privilege.
UNIX Agent
Install the UNIX Agent by running ./setup.sh from the appropriate directory. Our example is
a Linux Agent. If you execute the script without parameters, the help is displayed as shown in
Example 4-1.
To install the Agent using the quick method, you need to supply the following parameters:
-s servername - The Server name or IP address
-d directory - The installation directory. The usual installation places are in /opt and /usr.
Specify the full path, for example /opt/tivoli/ITSRM.
-p serverport - The Server port
-q agentport - The Agent port
Note: The -d, -p, and -q parameters can be omitted; if so, these defaults will be used:
-d - /opt/tivoli/TSRM or /usr/tivoli/TSRM, depending on the platform
-p - 2078
-q - 2077
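Putting the documented flags together, a typical quick-method invocation looks like the following sketch. The server name is an assumption; the command itself must be run from the appropriate directory on the product CD:

```shell
# Assemble the quick-method install command from the documented flags.
SERVER=srmserver.example.com   # -s: Server name or IP address (assumption)
INSTDIR=/opt/tivoli/ITSRM      # -d: full installation path
SRVPORT=2078                   # -p: Server port (default)
AGTPORT=2077                   # -q: Agent port (default)
CMD="./setup.sh -s $SERVER -d $INSTDIR -p $SRVPORT -q $AGTPORT"
echo "$CMD"
```

Only -s is strictly required; the other three flags fall back to the defaults listed above.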
During installation you will see messages similar to those shown in Example 4-2.
For a Linux Agent, the installation process will create an auto-start entry in /etc/init.d and link
to this entry in runlevel 3 and 5. Other UNIX variants will create a similar entry in the
appropriate file to enable automatic start of the Agent on system start.
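The auto-start wiring created by the installer can be illustrated with this sketch. The script and link names are assumptions (the installer creates the real entries), and a throwaway directory stands in for the root filesystem so the demo is safe to run:

```shell
# Recreate the shape of the installer's auto-start entries in a
# scratch directory instead of the real /etc.
ROOT=/tmp/initd-demo
mkdir -p "$ROOT/etc/init.d" "$ROOT/etc/rc3.d" "$ROOT/etc/rc5.d"
printf '#!/bin/sh\n# start/stop the TSRM Agent\n' > "$ROOT/etc/init.d/TSRMagent"
chmod +x "$ROOT/etc/init.d/TSRMagent"
# Link the script into runlevels 3 and 5 so the Agent starts at boot:
ln -sf ../init.d/TSRMagent "$ROOT/etc/rc3.d/S99TSRMagent"
ln -sf ../init.d/TSRMagent "$ROOT/etc/rc5.d/S99TSRMagent"
ls "$ROOT/etc/rc3.d" "$ROOT/etc/rc5.d"
```

On a real Linux system the links live in /etc/rc3.d and /etc/rc5.d and point back into /etc/init.d.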
If, for some reason, the Agent no longer appears in the Agent list, or it is marked as
Unreachable while the network connection is working, you can force the registration process
by creating an empty file called PROBE_ME in the Agent installation directory. For example, on
Windows use C:\Program Files\Tivoli\TSRM\PROBE_ME. If the Agent shows as
Unreachable, you should first delete it from the Agent list.
Note: If you delete or reregister an Agent you will lose all historical data for this Agent.
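The forced registration described above is just an empty marker file. A sketch on UNIX follows; the installation path is an assumption, so use your real Agent directory (on Windows, C:\Program Files\Tivoli\TSRM):

```shell
# Create an empty PROBE_ME file in the Agent installation directory;
# the Agent notices it and re-registers with the Server.
AGENT_DIR=/tmp/tsrm-agent-demo        # stand-in for the real install dir
mkdir -p "$AGENT_DIR"
touch "$AGENT_DIR/PROBE_ME"
ls -l "$AGENT_DIR/PROBE_ME"
```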
2. Click Next, you will see the maintenance selection screen similar to Figure 4-40.
3. In this case, we are upgrading the Server. Select The IBM Tivoli SRM Server and all of
its Agents and click Next.
If the Server being upgraded uses IBM DB2 as the repository database, you will see the
screen shown in Figure 4-41; otherwise, you will go directly to the confirmation screen in
the next step.
4. Here you have to enter the DB2 administrator user ID and password. Click Next and you
will see the confirmation screen. Click Next to start the maintenance.
5. After all the upgraded files are copied, the summary screen (Figure 4-42) displays.
Check for errors and click Done to finish the maintenance process.
After performing maintenance on the Server system, the Server will automatically upgrade all
the Agents.
If for any reason you need to force an upgrade of Agents that are already at the currently
available version, you need to create an empty file with the name UPGRADE_AGENTS in the
Server installation directory, for example, on Windows, C:\Program Files\Tivoli\TSRM. This
will force an upgrade.
You can also choose a time to perform the upgrade (When to Upgrade tab in Figure 4-44). We
chose to perform the upgrade immediately.
Under Options (Figure 4-45), you can force a reinstall even if the Agent is already at this level.
After saving the job, the scheduler will run it at the selected time (immediately in this case).
Each Agent will be stopped, upgraded, and restarted.
4.6.1 Security
To log in to the Tivoli Storage Resource Manager Server, you can use any local user ID on the
Server system. During installation you can specify the administration group (shown in
Figure 4-11 on page 75). The members of this group will be able to perform all tasks using the
GUI interface. We recommend creating a special group for Tivoli Storage Resource Manager
administrators. The group name can be changed after installation by editing the server.config
in the config directory and restarting the Server services. An example of the file is shown in
Example 4-3.
All other local users on the system can log in to the Server, but only with read-only access to
administrative tasks.
Windows domain users can also access the Server, provided they are members of local
groups.
4.6.2 Administration
When you start the Tivoli Storage Resource Manager GUI either locally or using the Web
browser, you will see the logon window as shown in Figure 4-47.
Enter the user ID and password and click OK. You will see the main screen (Figure 4-48).
As shown on the left side, the interface uses a tree-oriented navigation. Under the IBM Tivoli
SRM entry are four main sections:
Administrative Services - Here you can administer the Tivoli Storage Resource Manager
Server. We will cover these operations in this section.
IBM Tivoli SRM - Here you can manage and report on Agent systems. More information
on reporting is in Chapter 6, “Reporting” on page 247.
IBM Tivoli SRM for Databases - Here you can manage and report on database
applications on Agent systems. More information on database reporting is in Chapter 6,
“Reporting” on page 247.
IBM Tivoli SRM for Chargeback - Here you perform chargeback functions. More
information on chargeback is given in Chapter 6, “Reporting” on page 247.
Tip: You can expand or collapse a tree or sub tree by clicking on the circle on the left
side of the tree name as highlighted in red in Figure 4-48.
In the following sections we will explain the functions found under Administrative Services.
More detailed information is in the manual IBM Tivoli Storage Resource Manager
Configuration and Getting Started Guide, SC32-9067.
Menus
The menus are at the top of the screen, as in Figure 4-49.
Tool Bar
The Tivoli Storage Resource Manager Tool Bar is shown in Figure 4-50.
Tip: Do not forget to save changes made to an object. The interface will warn you if you
try to close a window with unsaved data.
If you right-click a service component you will get the menu shown in Figure 4-51.
Except for Broadcast, which is only available on the Server node, all other options are
available on all nodes:
View Log - View log of the component
Broadcast - Inform Agents on Server location
Shutdown - Shut down the component:
– Normal - Clean shutdown, allowing all processing to finish
– Immediate - Quick shutdown
– Abort - Shut down and stop whatever is in process
When you click on a particular Agent, you will see a screen similar to Figure 4-52.
This shows general information about the Agent (status, port, address, last update, time zone,
connection errors). The Details screen is shown in Figure 4-53.
Here you can see details about the Agent (name of the Agent and Host, date and time when it
was started, uptime, disk space allocated to virtual memory size - VM, manufacturer and OS
of the Agent system, number of jobs scheduled to run on the Agent). The Jobs screen is
shown in Figure 4-54.
This view shows information about any jobs currently running on the Agent. The example
shows that the Scan job is running.
If you right-click the Agent you will get the menu shown in Figure 4-55.
Attention: By deleting the Agent you will lose all (including historical) data about it from
the repository.
License Keys
This option is for administering Tivoli Storage Resource Manager license keys. Clicking
License Keys shows a screen like Figure 4-57. In Tivoli Storage Resource Manager V1.2,
the license model has been simplified to only three license types.
Licensing requirements are explained in the manual IBM Tivoli Storage Resource Manager
Configuration and Getting Started Guide, SC32-9067. Here we will focus on the operations
around the licenses. A new license can be added by clicking Add, and entering the
appropriate license key as shown in Figure 4-58.
To change a license, select the product name and click Edit; Figure 4-58 displays for you to
enter the new license.
Click the icon on the left side of a particular product name (as circled in Figure 4-57) to
perform other specific licensing actions:
IBM Tivoli SRM
The licensing screen for Tivoli Storage Resource Manager is shown in Figure 4-59.
You can see the systems with installed Agents, which are licensed to use the product. To
select an Agent, click in the square in the Licensed column as shown in Figure 4-59.
If you will scan Novell NetWare servers, they have to be licensed as shown in Figure 4-60.
Figure 4-61 Licenses for Tivoli Storage Resource Manager for NAS
All the NAS devices are displayed, and you can select those which are to be licensed. The
Filer Logins tab is shown in Figure 4-62.
Here you can update the default login and password for NAS devices, which were defined
during installation (Figure 4-12 on page 76). Also, you can define a specific login for each
NAS device by selecting the row or rows, and clicking Set login per row or Set login for
all selected rows. The window for entering the login and password looks similar to
Figure 4-63.
Here you enter the specific login ID and password for the NAS appliance.
IBM Tivoli SRM for Databases
The licensing process for all database components (MS SQL-Server, Oracle, Sybase,
UDB) is similar. Our example shows the setup for MS SQL-Server. After opening it, you
will see a screen similar to Figure 4-64.
From the list of Agents, select those with SQL-Server databases installed, which you want
to monitor, as shown for CLYDE in Figure 4-64. To successfully scan the database, you
have to provide a login name and password for each instance. This can be done in the
RDBMS Logins tab as in Figure 4-65.
You can define a new RDBMS login by clicking Add New as shown in Figure 4-66.
Commercial drivers are also available for SQL-Server. An example is from Atinav Inc.,
available at:
http://www.atinav.com/products/aveconnect/MSSQLserver/aveconnect2.htm
A free JDBC driver for Oracle is provided with the installation package, or it can be
downloaded from:
http://otn.oracle.com/software/tech/java/sqlj_jdbc/content.html
Alert Disposition
This option defines how the Alerts are generated when a corresponding event is discovered.
This screen is shown in Figure 4-67.
Log-File Retention
This option defines how long to keep the log files, as shown in Figure 4-68.
Depending on the OS, Tivoli Storage Resource Manager obtains the user names from:
Windows 2000 - Full name field, from LDAP
Windows NT - Full name field, from domain-level security database
NetWare - Surname and Given name fields, from LDAP
UNIX - User description from the password file
The name is stored in the repository database and then specific algorithms are used to
extract the names for building e-mail address rules. In the example shown in Figure 4-69, the
last name plus the first character of the first name will be used to create the name. When
e-mail is sent the default domain defined in Alert Disposition (see “Alert Disposition” on
page 114) will be appended.
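The naming rule in Figure 4-69 can be sketched as follows; the full name and default domain are assumptions chosen for illustration:

```shell
# Build an e-mail address from "last name + first initial",
# then append the default domain (values are assumptions).
full_name="John Smith"
default_domain="example.com"
first=${full_name%% *}                 # "John"
last=${full_name##* }                  # "Smith"
initial=$(printf '%.1s' "$first")      # "J"
addr="$(printf '%s%s' "$last" "$initial" | tr '[:upper:]' '[:lower:]')@${default_domain}"
echo "$addr"                           # smithj@example.com
```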
More explanation on setting up the rules is in the manual IBM Tivoli Storage Resource
Manager Configuration and Getting Started Guide, SC32-9067.
This information is gathered during the discovery process on the Agents accessing NAS
devices and Novell NetWare servers.
The Agent systems with access to the NAS or NetWare volumes and filesystems will be
displayed here along with information on which volume or filesystem(s) they are using.
Important: If the discovery jobs are not run against NDS trees and NAS devices, the
volumes and filesystems will not be displayed.
To change the Agent that will scan the volume and filesystem, select the desired row(s) and
click Set agent per row or Set agents for all selected rows. You will see the window shown
in Figure 4-71 for NAS Agents, or the window shown in Figure 4-72 for the Novell NetWare
Agents.
In this window you specify which Agent will scan the selected volume or filesystem.
History Aggregator
This option specifies when to run the History Aggregation job as shown in Figure 4-73. This
job runs within the Tivoli Storage Resource Manager Server and aggregates information in
the repository for reporting purposes.
Select the desired Tree Name and click Edit, as shown in Figure 4-75.
Specify the login ID and the password for the NDS tree.
Tip: The login ID must be specified with the full context name.
Tivoli Storage Resource Manager uses this login ID to access the NDS trees and gather
information about the NetWare servers and volumes in those trees.
Important: The assigned login ID must have permission to enumerate the volumes within
the NetWare servers on that tree.
You can define how long to retain information for these removed entities:
Directories
Filesystems
Disks
Here you define the retention period for the following data:
Database-Tablespaces
Tables
You can define how long to retain information on these removed entities:
Databases-Tablespaces
Tables
Important: The clustered systems must be members of the same domain. They can
also be domain controllers.
One Fibre Channel HBA in each of the systems attached to the SAN
IBM FAStT Storage system with two 10GB LUNs FC-attached to the hosts. The LUNs
were configured to be seen by both systems. The first LUN was used as the quorum disk
and the second LUN was used as the data disk for DB2 and Tivoli Storage Resource
Manager.
IBM DB2 UDB Version 7.2 Service Pack 7
Figure: cluster topology - the two nodes are connected by a heartbeat IP network and
FC-attached to a FAStT 700; LUN0 (Disk E:) holds the Quorum data and LUN1 (Disk F:)
holds the DB2 and ITSRM data.
Tip: We recommend using private subnet addresses for the heartbeat adapters.
10.Accept the HCL (Hardware Compatibility List) requirements. Figure 4-83 displays.
11.As we are installing on the first node, select The first node in the cluster. You will see the
Cluster Name screen, shown in Figure 4-84.
12.Enter the cluster name, in our example ITSOSRMCL, and click Next. The Account Selection
screen displays (Figure 4-85).
13.Enter the user ID and password that will be used by the Cluster service. This account must
be a domain account. Click Next. The Managed Disks screen displays (Figure 4-86).
14.Select the shared disks to be used for the cluster. You need to select at least one for the
Quorum disk. You can add more shared disks later. In our example, we chose Disk E: for
the Quorum disk. Next, the Cluster File Storage window displays, as shown in Figure 4-87.
15.Select which shared disk will be used for Quorum, Disk E: in our example. Click Next to
display the Configure Cluster Networks screen, as in Figure 4-88.
16.The next screens define the networks to be used in the cluster setup. First is the Network
Connections screen, shown in Figure 4-89.
17.In this panel you select roles for each network defined on the systems. At least two
networks are required; they can have the following roles:
– Client access - The network will be used for client access.
– Internal cluster communication only - The network will be used for cluster heartbeat.
– All communications - The network will be used for both communication methods
mentioned above.
In our example we selected our Local Area Connection network for Internal cluster
communication only.
Figure 4-90 shows our second Network configuration.
Tip: We recommend defining the All communications mode for the second adapter if
you have only two network adapters in the system.
After completing the network connection setup, click Next to continue to the Internal
Cluster Communication screen shown in Figure 4-91.
19.If more than one network was defined for cluster communication, the priority order for
them must be specified. In our example, we specified one network for private
communication and another network for all communications, therefore, we will define the
private network as the top priority network used for inter-cluster communication. If this
network fails, the all communications network will be used for inter-cluster communication
as well as client access.
The Cluster IP address screen comes next, shown in Figure 4-92.
20.Here you define the Cluster IP Address to be used by clients to access cluster resources.
If additional networks were defined for public or all communications access, you need to
also specify the network to which this address will be bound. In our example we used the
Local Area Connection 2 network.
After defining the address click Next to continue, and Finish to end the installation and
configuration on the first node.
21.Start the cluster installation and configuration on the second node, (SENEGAL in our
example) by accessing Add/Remove Windows Components in Control Panel ->
Add/Remove Programs, and selecting Cluster Service. The first windows shown are
identical to those for the primary cluster node (Figure 4-81 on page 125, and Figure 4-82
on page 125). Continue to the Create or Join a Cluster screen, shown in Figure 4-93.
23.Enter the name of the cluster you created on the first node (in step 12 on page 126) and
supply the same user ID, password, and domain of the account you will use to connect to
the cluster (in step 13 on page 127). Click Next. Figure 4-95 displays.
24.Specify the password for the domain account which will be used to run the cluster service
on this node. Click Next and then Finish to complete the installation and configuration of
the cluster.
Tip: If you installed the cluster from media at a lower Service Pack level than the installed
one, you should reapply the latest Service Pack on both nodes before continuing.
4.7.2 Adding shared disk resource for DB2 instance and SRM installation
In our setup, we use a local DB2 database for the Tivoli Storage Resource Manager
repository. To enable this for clustering, we need to provide a clustered instance for this
database, which requires definition of an additional shared disk resource. We have already
defined Disk F: to our cluster as shown in Figure 4-96 on page 133, as a member of the
Cluster group, but it will be later moved to a new cluster group used for the Tivoli Storage
Resource Manager Server cluster.
Before installing, create a user ID that will be used to install DB2 (db2admin in our example).
This user ID should be a member of the Windows Domain Admins group. To start the
installation, log on using this newly created user ID.
When installing DB2, you only need to select the DB2 Enterprise Edition component. You
can then accept all defaults - the only thing you need to change is to select Do not install the
OLAP Starter Kit. After installation, restart the system and apply the appropriate Fix Pack.
In our installation we used IBM DB2 Enterprise Edition 7.2 with Fix Pack 7.
4. Run the following command to cluster the instance you created in step 2:
db2mscs -f:DB2MSCS.CFG
The command will define all the necessary cluster objects and copy the database instance
files to the clustered disk.
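The DB2MSCS.CFG file passed to the command above follows the keyword format of the db2mscs utility. The following is a hypothetical sketch: the cluster name, group name, NETNAME_VALUE, and disk match our example, while the instance name and IP values are assumptions; see the db2mscs documentation and Example 4-4 for the real file:

```text
# Hypothetical DB2MSCS.CFG sketch (instance name and IP values are
# assumptions; keyword names follow the db2mscs utility)
DB2_INSTANCE = DB2CL
CLUSTER_NAME = ITSOSRMCL
GROUP_NAME = SRMCluster
IP_NAME = SRMCluster IP Address
IP_ADDRESS = 192.168.1.50
IP_SUBNET = 255.255.255.0
IP_NETWORK = Local Area Connection 2
NETNAME_NAME = SRMCluster Network Name
NETNAME_VALUE = cluster2
DISK_NAME = Disk F:
INSTPROF_DISK = Disk F:
```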
6. Verify that all resources in the new cluster group, in our example SRMCluster, are online.
You can verify the database instance by accessing it in the DB2 Control Center and
creating a sample database. You can also try to fail over the resource group and check
that the instance is available.
When you have verified that the clustered instance is working and is capable of failover,
continue with the next installation steps.
4.7.5 Installing IBM Tivoli Storage Resource Manager Server on both nodes
In our example we installed the Tivoli Storage Resource Manager Server on the same disk as
the DB2 clustered instance, Disk F:.
Follow these steps to install Tivoli Storage Resource Manager Server on both nodes:
1. Log on to the first node (DIOMEDE) as the Domain Administrator.
2. If required, fail over the DB2 instance cluster group, in our example SRMCluster, to the
first node in the cluster. This is necessary for our configuration as the DB2 instance is
installed on Disk F: in this group and this disk is required to install the Tivoli Storage
Resource Manager Server on it.
3. Create the database in a non-clustered local instance. We created ITSRMDBD in the DB2
instance as shown in Figure 4-98.
4. Install Tivoli Storage Resource Manager Server following the instructions in 4.3.3,
“Installation of the Server code’’ on page 71, using the database created in step 3 as the
repository, in our example ITSRMDBD. Use the cluster NETNAME_VALUE, in our
example cluster2, for the server name (Example 4-4 on page 134). We installed in the
directory F:\Tivoli\TSRM.
5. After installation, stop the services for Server and Agent, and change them to manual
startup mode as shown in Figure 4-99.
11.Install Tivoli Storage Resource Manager Server following the instructions in 4.3.3,
“Installation of the Server code’’ on page 71, using the database created in step 10 as the
repository, in our example ITSRMDBS. Use the cluster NETNAME_VALUE, in our
example cluster2, for server name. In our example we installed in the directory
F:\Tivoli\TSRM.
12.After installation, stop the services for Server and Agent, and change them to manual
startup mode as shown in Figure 4-99.
Continue with the setup when you have verified that the database in the clustered instance
can be accessed from both cluster nodes.
4.7.7 Editing the Server config file to reflect the database change
As we will be using a database in a clustered instance, the Tivoli Storage Resource Manager
Server configuration file (server.config in the config directory) needs to be changed to point to
this database. Example 4-5 shows the config file we used.
As you can see we changed the database URL to url=”jdbc:db2:SRMDBCL” to reflect that the
repository database was moved to the clustered instance.
Follow these steps to define the resources for operating in a clustered environment:
1. Change the password of the TSRMsrv1 domain account to a new value. The password is
randomly generated by the initial install program, and it is used to run the Tivoli Storage
Resource Manager service. Since we need to synchronize this password on both systems,
we must manually reset it.
2. Edit the logon properties for the Tivoli Storage Resource Manager Server service on both
cluster nodes to reflect the password changes. Right-click on the service entry in the
Attention: If you do not change the password on both nodes, the service will fail to
start.
3. Using Cluster Administration, define a new Generic Service resource for the Tivoli
Storage Resource Manager Server in the clustered instance group, in our example
SRMCluster group. When creating the resource you should define it to be dependent on
the following resources:
– The disk where you installed the Server
– The clustered database instance
– The clustered IP address
– The clustered Network Name
You can see these values in our example in Figure 4-102.
The service name used for this resource is TrelliSrv1 as shown in Figure 4-103.
Also check Use Network Name for computer name, so that the Network Name defined
for this cluster group will be associated with this resource.
4. Using Cluster Administration define a new Generic Service resource for the Tivoli
Storage Resource Manager Agent in the clustered instance group, in our example
SRMCluster group. When creating the resource you should define it to be dependent on
the following resources:
– The disk where you installed the Server
– The clustered database instance
– The clustered IP address
– The clustered Network Name
You can see these values in our example in Figure 4-102.
If all resources are online your Tivoli Storage Resource Manager Server cluster
implementation is ready to use.
Note: When installing the Agents point them to the name which resolves into the cluster IP
address, in our example SRMCluster IP Address as shown in Figure 4-105.
For this installation we used Oracle 8.1.7 running on a Windows 2000 server as the
repository. Before installing, you need to install a JDBC driver for the database. This driver
can be downloaded from the following Web site:
http://otn.oracle.com/software/tech/java/sqlj_jdbc/content.html
Put the JDBC drivers on the local drive of the systems where the Tivoli Storage Resource
Manager Server will be installed.
Our configuration will use the environment shown in Figure 3-11 on page 61. To set this up:
1. Create the repository database on the Oracle database server.
Our database server was installed in the system GALLIUM, and created using the Oracle
Database Configuration Assistant (Figure 4-106).
2. Select Create a database and click Next. On the next screen select Typical and click
Next. Select Create new database files and click Next. For the primary type of database
usage, select Multipurpose and click Next. You can accept the default value for
Concurrently connected users, (in our example, 15) and click Next. On the screen
where you can select options to use with the database, you should deselect all options
and then click Next. A screen similar to Figure 4-107 will display.
3. Here you define the database name, in our example ITSRMREM. The Assistant will
automatically define the SID for the database, and in our example we accepted the default
value ITSRMREM. After specifying the name click Next. In the next window, select No
don’t register the database and click Next. In the window asking when to create the
database select Create Database Now and click Finish. The assistant will create the
database.
4. Install the Tivoli Storage Resource Manager Server on the standby server using this
database, following the instructions in 4.3.3, “Installation of the Server code’’ on page 71.
In the step for database selection, choose Oracle. The screen shown in Figure 4-108
displays.
5. Complete the connection information as shown, and click Next. Figure 4-109 displays.
6. Click Next and continue the installation process as described in 4.3.3, “Installation of the
Server code’’ on page 71.
7. Stop the Tivoli Storage Resource Manager Server service and set the startup type to
manual, using the Services applet under Administrative tools (Figure 4-110).
8. Clear the repository database, using the Oracle database tools. Delete and recreate the
database with the same name as when you created it (ITSRMREM in our example). This
is required because the installation program tries to create the repository in the database
and if the repository already exists, the installation will fail.
Note: If you are using this scenario for HA, you need to maintain two directories inside
the Tivoli Storage Resource Manager installation directory in a consistent state. These
directories are:
config - for the configuration files. After installation or changes on the primary server,
you need to copy those two files to the standby server:
– repository
– nas.config
scripts - for scripts. If you use server distribution to the Agents for the scripts, all
scripts must be copied on both servers.
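On a UNIX-style setup, keeping those directories synchronized could be sketched like this; the standby hostname and install path are assumptions, and on Windows you would use your own copy mechanism instead of scp:

```shell
# Copy the two config files and the scripts directory to the standby
# server after any change on the primary (echoed rather than run,
# since the hosts in this sketch are placeholders).
TSRM_DIR=/opt/tivoli/TSRM                      # install path (assumption)
STANDBY=standby.example.com                    # standby hostname (assumption)
echo "scp $TSRM_DIR/config/repository $TSRM_DIR/config/nas.config $STANDBY:$TSRM_DIR/config/"
echo "scp -r $TSRM_DIR/scripts $STANDBY:$TSRM_DIR/"
```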
After starting the services, the standby server connects to the remote database repository
using the same settings as the primary server. As all the information except scripts and basic
configuration options is in the database, operations can resume.
Tip: For the best results you should keep the clocks of the primary and standby servers
synchronized.
Note: The local Agent installed on the primary server will not appear. Also, all tasks related
to that Agent will fail as the name of the standby server Agent is not the same as for the
primary server.
4.9 CIM/OM
This section describes how CIM/OM works, how to install and configure the CIM/OM server,
and how to configure Tivoli Storage Resource Manager to log in to the CIM/OM server.
Figure: CIM/OM architecture - ITSRM discovers the CIM/OM through SLP and exchanges
CIM messages encoded within XML; the CIM/OM uses a Device Provider to communicate
with the managed device (for example, an ESS).
IBM Tivoli Storage Resource Manager supports reporting from CIM compliant devices. At the
present time, the only tested device is the ESS using its CIM/OM server. IBM Tivoli Storage
Resource Manager gathers and reports on ESS devices defined in the CIM/OM server. It
uses Probe jobs to collect information about the defined ESS devices and uses the reporting
facilities to view that information.
Figure: supported platforms - Windows 2000, AIX, and Linux.
In our example we installed the CIM/OM server V1.1.0.1 on Windows 2000 Advanced Server.
The CIM/OM software can be downloaded from the Web site:
http://www-1.ibm.com/support/search.wss?rs=586&q=ssg1*&tc=STHUUM&dc=D400
Pre-installation task
Before installing the CIM/OM server, the ESS CLI has to be installed and configured correctly.
In our example we used ESS CLI Version 2.1.1.8. Verify the ESS CLI is correctly installed
using the command shown in Example 4-6.
You should see your ESS listed, as in the example. If not, reinstall the CLI package.
2. Click on Installation wizard - you will see a Welcome screen. Click Next to display the
License agreement. Click Next to accept it, and the directory selection screen
(Figure 4-114) displays.
3. Choose the installation directory and click Next. The installation summary screen
(Figure 4-115) displays.
4. Click Install to start copying files. After this is complete you will see a successful
completion message. Click Finish to end the installation process.
After installation, verify the two services are running, as they are essential to provide the
CIM/OM interface to the managed ESS devices.
CIM/OM configuration
Now you need to configure the CIM/OM to access the ESS and start providing this
information to CIM-enabled management applications.
1. Define the users who will access the CIM/OM interface to gather data. Open a command
prompt with Start -> Programs -> IBM TotalStorage CIM Agent for ESS -> Configure
CIMOM Users. Use the adduser command as in Example 4-7.
In our example we defined user itsrm, with password itsrm. The exit command closes
the window.
2. Define the ESSs which will be controlled by the CIM/OM server. Open a command prompt
with Start -> Programs -> IBM TotalStorage CIM Agent for ESS -> Enable ESS
Communication. Use the address command (Example 4-8) to define a managed ESS.
Tip: If the verification still fails, try restarting both the CIM/OM services before re-verifying.
Upgrading CIM/OM
At the time of writing, fix Version 1.1.0.2 was available for the CIM/OM. We recommend
installing this fix, which can be downloaded from:
http://www-1.ibm.com/support/search.wss?rs=586&q=ssg1*&tc=STHUUM&dc=D400
3. Click Next to start the installation; it will check the current and new version, as shown in
Figure 4-117.
4. Click Next to continue; the installation confirmation screen displays (Figure 4-118),
including the location and file size.
5. Click Install to begin copying files. When done, you will see the screen in Figure 4-119.
After the upgrade, check if the CIM/OM related services are running, and verify the
configuration as shown in Example 4-9 on page 150.
CIM/OM security
By default, the CIM/OM server uses secure communication with certificates. The certificate
created during installation is stored in the truststore file in the installation directory. You can
create new certificates with the mkcertificate command; new certificates are also stored in
the truststore file.
IBM Tivoli Storage Resource Manager supports secure communication with CIM/OM. If you
are using an application which does not support the secure protocol, the CIM/OM server can
be configured to run in insecure mode. Follow the instructions in Common Information Model
Agent Installation and Configuration Guide for the IBM Enterprise Storage Server,
GC35-0485.
Your CIM/OM server for IBM ESS is now ready to do some serious reporting.
2. To create a new CIM/OM login definition, click Create. Figure 4-121 displays.
– Port - the CIM/OM TCP/IP port. The CIM/OM server for ESS uses port 5989 for secure
communication and port 5988 for insecure communication. In our example we used
port 5989.
Tip: The truststore file has to be copied from the CIM/OM server to the machine
where IBM Tivoli Storage Resource Manager server is installed. If both are running
on the same machine, you can use the original location.
After entering all the required data, click Save to store the information into the repository
database. The defined CIM/OM login will appear similar to Figure 4-120.
Once you have defined the CIM/OM login(s) you can edit or delete them using the Edit
and Delete buttons.
3. Before you can start collecting data for CIM/OM managed ESSs, you need to discover
them. The discovery is done by the Agent on the IBM Tivoli Storage Resource Manager
Server, using the CIM/OM login information. Select Discovery under Monitoring in the IBM
Tivoli SRM Tree. Right-click the Discovery tree and select Run Now as shown in
Figure 4-122.
4. Once discovery is complete, you should see two entries from the Agent installed on the
IBM Tivoli Storage Resource Manager Server. By scrolling the status window you can
identify which one was the CIM/OM discovery, as shown in Figure 4-123.
The Log File Name for the CIM/OM will include cimom_discovery in the name, thus
identifying it as the discovered CIM/OM. To see if the discovery was successful, display
the job output information by double-clicking the spyglass symbol circled in Figure 4-123.
The output is shown in Figure 4-124.
Our output shows that the ESS subsystem (2105.18921, where 18921 is the ESS serial
number) was discovered and configured. You can also see that CIM/OM data was queried
from the host w2kadvtsm which is the CIM/OM server.
5. Once the ESS is discovered, it can be configured for monitoring. Navigate to CIM/OM
Storage Subsystem Administration in the Navigation Tree as shown in Figure 4-125.
All discovered ESSs will be displayed. To enable reporting on a particular ESS, check its
Monitored box as shown in Figure 4-125.
The Monitoring features of Tivoli Storage Resource Manager enable you to run regularly
scheduled or on-the-fly data collection jobs. These jobs gather statistics about the storage
assets, their availability, and their usage within your enterprise, and make the collected
data available for reporting.
We will now give a quick overview of the monitoring jobs, and explain how they work through
practical examples.
Except for Discovery, you can create multiple definitions for each of those monitoring features
of Tivoli Storage Resource Manager. To create a new definition, right-click on the feature and
select New <feature>. Figure 5-3 shows how to create a new Scan job.
Once saved, any definition within Tivoli Storage Resource Manager can be updated by
right-clicking on the object and selecting Edit. This will put you in Edit mode. Save your
changes by clicking the floppy disk icon in the top menu bar.
Groups and Profiles are definitions that may be used by other jobs - they do not produce
output themselves.
As shown in Figure 5-4, all objects created within Tivoli Storage Resource Manager are
prefixed with the user ID of the creator. Default definitions, created during product installation,
are prefixed with Tivoli.Default.
Groups, Discovery, Probes, Scans, and Profiles are explained in the following sections.
5.1.2 Groups
Before defining monitoring and management jobs, it may be useful to group your resources
so that you can limit the scope of monitoring or data collection.
Figure 5-5 shows the groups you can create with Tivoli Storage Resource Manager:
Computer, Filesystem, Directory, User ID, and OS user group.
Computer Groups
Computer Groups allow you to target management jobs on specific computers based on your
own criteria. Some criteria you might consider for grouping computers are platform type,
application type, database type, and environment type (for example, test or production).
In order to target specific servers for monitoring based on OS and/or database type, we will
define these four groups:
Windows Systems
UNIX Systems
Windows DB Systems
NAS Devices
To create the first group, expand Groups -> Computer, right-click Computer and select New
Computer Group. Our first group will contain all UNIX systems as shown in Figure 5-6. To
add or remove a host from the group, highlight it in either the Available or Current Selections
panel and use the arrow buttons. You can also enter a meaningful description in the field.
To save the new Group, click the floppy disk icon in the menu bar, and enter the Group name
in the confirmation box shown in Figure 5-7.
We created the other groups using the same process, and named them Windows Systems,
Windows DB Systems, and NAS Devices.
Important: To avoid redundant data collection, a computer can belong to only one Group
at a time. If you add a system which is already in a Group, to a second Group, it will
automatically be removed from the first Group.
Figure 5-8 shows the final Group configuration, with the members of the Windows Systems
group.
Note: The default group Tivoli.DefaultComputerGroup contains all servers that have been
discovered, but not yet assigned to a Group.
Filesystem Groups
Filesystem Groups are used to associate together filesystems from different computers that
have some commonality. You can then use this group definition to focus the Scan and the
Alert processes to those filesystems.
To create a Filesystem Group, you have to select explicitly each filesystem for each computer
you want to include in the group. There is no way to do a grouped selection, e.g. / (root)
filesystem for all UNIX servers or C:\ for all Windows platforms. Figure 5-9 shows the
Filesystem Group definition screen.
Directory Groups
Use Directory Groups to group together directories to which you want to apply the same
storage management rules.
The Directory Group definition has two views for directory selection:
Use directories by computer to specify several directories for one computer.
Use computers by directory to specify one directory for several computers.
The button on the bottom of the screen toggles between New computer and New directory
depending on the view you select.
We will define one Directory Group with /tmp for all computers, and another with the Oracle
log directory for a specific computer (DIOMEDE). To define the first Group:
1. Select computers by directory.
2. Click on New directory.
3. Enter /tmp in the Directory field and select All computers (see Figure 5-11).
Figure 5-13 shows our final Groups configuration and details of the OracleArchive Group.
User Groups
You can define Groups made up of selected user IDs. These groupings will enable you to
easily define and focus storage management rules such as scanning and Constraints on the
defined IDs.
Note: You can include in a User Group only user IDs that are defined on the discovered
hosts and that have files belonging to them.
Figure 5-14 shows the list of available users at a specific point in time.
As shown in Example 5-1, we added a new user on the Agent DIOMEDE and created some
files for the user. We then ran a new Scan.
Now, Figure 5-15 shows that this user ID (itso_usr) is listed in the Available user’s list.
Note: As for users, an OS User Group will be added to the list of available Groups only
when a Scan job finds at least one file owned by a user belonging to that Group.
Note: As for users, an OS User Group can belong to only one Group at a time.
More details of NAS and NetWare discovery are given in “NAS discovery” on page 56, and in
“Novell NetWare discovery” on page 58.
Use IBM Tivoli SRM -> Monitoring -> Discovery to change the settings of the Discovery
job. The following options are available.
Alert tab
The second tab, Alert, enables you to be notified when a new computer is discovered. See
5.2, “OS Alerts” on page 189 for more details on the Alerting process.
Options tab
The third tab, Options (Figure 5-18), sets the Discovery runtime properties.
Uncheck the Skip Workstations field if you want to discover the Windows workstations
reported by the Windows Domain Controller.
Pings gather statistics about the availability of monitored servers. The scheduled job will Ping
your servers and consider them active if it gets an answer. This is purely ICMP-protocol
based - there is no measurement of individual application availability. When you create a new
Ping job, you can set the following options.
Computers tab
Figure 5-20 shows the Computers tab, which is used to limit the scope of the computers that
are to be Pinged.
Options tab
On the Options tab, you specify how often the Ping statistics are saved in the database
repository. By default, Tivoli Storage Resource Manager keeps its Ping statistics in memory
for one hour before flushing them to the database and calculating an average availability. You
can change the flushing interval to another time amount, or a number of Pings (for example,
to calculate availability after every 10 Pings). The system availability is calculated as:
(Count of successful pings) / (Count of pings)
We chose to save to the database at each Ping, which means each saved value will be either
100% or 0%, but gives us a more granular view of the availability of our servers.
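The availability formula above can be reproduced with a small shell function. This is a sketch; the ping result strings (1 = answered, 0 = timed out) are invented data for illustration.

```shell
#!/bin/sh
# Sketch: compute availability as (successful pings) / (count of pings),
# expressed as an integer percentage, the way a flushing interval would.
availability() {
    ok=0; total=0
    for r in $1; do
        total=$((total + 1))
        if [ "$r" -eq 1 ]; then
            ok=$((ok + 1))
        fi
    done
    # integer percentage: 100 * successful / total
    echo $((100 * ok / total))
}

availability "1 1 1 0 1 1 1 1 1 1"   # 9 of 10 pings answered: prints 90
```

With the per-Ping saving we chose, each sample passed to the formula contains a single result, so the computed value is always 100 or 0.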
Alerts tab
The Alerts tab (shown in Figure 5-22) is used to generate Alerts for each host that is
unavailable. Alert mechanisms are explained in more detail in 5.2, “OS Alerts” on page 189.
We selected to:
Send e-mail to two users
Run a script that will send popup messages to selected administrators. The script is listed
in Example 5-2. Optimally, you would send an event to a central console such as the Tivoli
Enterprise Console. Note that certain parameters are passed to the script - more
information is given in “Alerts tab” on page 195.
More details about the related reporting features of Tivoli Storage Resource Manager are in
6.2.3, “Availability Reporting” on page 262.
5.1.5 Probes
Figure 5-25 summarizes the Probe process.
The Probe process gathers data about the assets and system resources of Agents such as:
Memory size
Processor count and speed
Hard disks
Filesystems
The data collected by the Probe process is used by the Assets Reports described in 6.2.1,
“Asset Reporting” on page 252.
Computers tab
Figure 5-26 shows that we included the Tivoli.DefaultComputerGroup in the Probe so that all
computers, including those not yet assigned to an existing Group, will be Probed. We saved
the Probe as ProbeHosts.
Important: Only the filesystems that have been returned by a Probe job will be available
for further use by Scan, Alerts, and policy management within Tivoli Storage Resource
Manager.
We set up a weekly Probe to run on Sunday for all computers. We recommend running the
Probe job at a time when all the production data you want to monitor is available to the
system.
Alert tab
As this is not a business-critical process, we asked to be alerted by mail for any failed Probe.
Figure 5-27 shows the default mail text configuration for a Probe failure.
Specifying correct Profiles avoids gathering unneeded information that may lead to space
problems within the Tivoli Storage Resource Manager repository. However, you will not be
able to report on, or check Quotas for, files that are not covered by the Profile.
Tivoli Storage Resource Manager comes with several default profiles, (shown in Table 5-1)
prefixed with Tivoli.Default, which can be reused in any Scan jobs you define.
BY_MOD_NOT_BACKED_UP - Gathers statistics by length of time since last modification,
only for files not backed up since modification. Windows only.
LARGEST_FILES - Gathers statistics on the n largest files (20 is the default amount).
LARGEST_ORPHANS - Gathers statistics on the n largest orphan files (20 is the default
amount).
MOST_AT_RISK - Gathers statistics on the n files that were modified the longest time ago
and have not yet been backed up since they were modified. Windows only (20 is the
default amount).
OLDEST_ORPHANS - Gathers statistics on the n oldest orphan files (20 is the default
amount).
MOST_OBSOLETE_FILES - Gathers statistics on the n “most obsolete” files, that is, files
that have not been accessed or modified for the longest period of time (20 is the default
amount).
Those default profiles, when set in a Scan job, gather data needed for all the default Tivoli
Storage Resource Manager reports.
As an example, we will define an additional Profile to limit a Scan job to the 500 largest
Postscript or PDF files unused in the last six months. We also want to keep weekly statistics
at a filesystem and directory level for two weeks.
Statistics tab
On the Statistics tab (shown in Figure 5-29), we specified:
Retain filesystem summary for two weeks
Gather data based on creation date
Select the 500 largest files
The Statistics tab is used to specify the type of data that is gathered, and has a direct
impact on the type of reports that will be available. In our specific case, the Scan associated
with this profile will not create data for reports based on user IDs and users groups. Neither
will it create data for reports on directory size.
The Summarize space usage by section of the Statistics tab specifies how the space usage
data must be summarized. If no summary level is checked, the data will not be summarized,
and therefore will not be available for reporting at the corresponding level of the Usage
Reporting section of Tivoli Storage Resource Manager.
In our particular case, because we selected to summarize by filesystem and directory, we will
see space used by PDF and Postscript files at those levels, provided we set up the Scan
profile correctly. See 5.1.7, “Scans” on page 185 for information on this. We will not see which
users or groups have allocated those PDF and Postscript files.
Restriction: For Windows servers, users and groups statistics will not be created for FAT
filesystems.
The Accumulate history section sets the retention period of the collected data. In this case,
we will see a weekly summary for the last two weeks.
The Gather statistics by length of time since section sets the base date used to calculate the
file load. It determines if data will be gathered and summarized for the IBM Tivoli SRM ->
Reporting -> Usage -> Files reporting view.
The Gather information on the section sets the number of files to retrieve for each of the
report views available under IBM Tivoli SRM -> Reporting -> Usage -> Access Load.
With the New Condition menu, you can create a single filter on the files while the New
Group enables you to combine several conditions with:
All of - The file is selected if all conditions are met (AND)
Any of - The file is selected if at least one condition is met (OR)
None of - The file is selected only if none of the conditions are met (NOT OR)
Not all of - The file is selected if at least one condition is not met (NOT AND)
The Condition Group can contain individual conditions or other condition groups.
Each individual condition will filter files based on one of the listed items:
Name
Last access time
Last modified
Creation time
Owner user ID
Owner group
Windows files attributes
Size
Type
Length
We want to select files that meet our conditions: (name is *.ps or name is *.pdf) and
unused for six months. The AND between our two conditions translates to All of,
while the OR within our first condition translates to Any of.
On the screen shown in Figure 5-30, we selected New Group. From the popup screen,
Figure 5-31, we selected All of and clicked OK.
Now, within our All of group we create one nested Any of group using the same
sequence. The result is shown in Figure 5-32.
Now, we create the individual conditions by right-clicking the group where each condition
must be created and selecting New Condition. Figure 5-33 shows the creation of our first
condition, in the Any of group, where we enter our file specifications (*.ps and *.pdf).
We repeated the operation for the second condition (All of). The final result is shown in
Figure 5-34.
The bottom of the right pane shows the textual form of the created condition. You can see that
it corresponds to our initial condition. We saved the profile as PS_PDF_FILES (Figure 5-35).
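Outside Tivoli Storage Resource Manager, the same selection logic can be expressed with a standard find command. This is a sketch only: the function name is ours, and the six-month cutoff is approximated as 180 days.

```shell
#!/bin/sh
# Sketch: select files whose name matches *.ps OR *.pdf (the "Any of"
# group) AND which have not been accessed in roughly 180 days (the
# condition ANDed in by the enclosing "All of" group).
# $1 is the directory to search.
find_stale_print_files() {
    find "$1" \( -name '*.ps' -o -name '*.pdf' \) -atime +180
}
```

For example, find_stale_print_files /data would list the candidate files under an assumed /data directory; the parenthesized group mirrors the nested Any of group in the Profile.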
5.1.7 Scans
We explain in Figure 5-36 the main objectives of the Scan jobs.
The Scan process gathers statistics about the usage and trends of the server storage. Scan
job results are stored in the repository and supply the data necessary for the Capacity,
Usage, Usage Violations, and Backup Reporting facilities. To create a new Scan job, navigate
to IBM Tivoli SRM -> Monitoring -> Scans, right-click, and select New scan. The scope of
each Scan job is set by five different tabs on the right pane.
Filesystems tab
You can specify a specific filesystem for one computer, a filesystem Group (see “Filesystem
Groups” on page 165) or all filesystems for a specific computer. Only the filesystems you
have selected will be scanned. Figure 5-37 shows how to configure the Scan to gather data
on all our servers.
Note: Only filesystems found by the Probe process will be available for Scan.
Profiles tab
As explained in 5.1.6, “Profiles” on page 180, the Profiles are used to select the files that are
scanned for information gathering. A Scan job scans and gathers data only for files that are
scoped by selected Profiles. You can specify Profiles at two levels:
Filesystems: All selected filesystems will be scanned and data summarized for each
filesystem.
Directory: All selected directories (if included in the filesystem) will be scanned and data
summarized for each directory.
Figure 5-38 shows how to configure a Scan to have data summarized at both the filesystem
and directory level.
Alert tab
You can be alerted through mail, script, Windows Event Log, SNMP trap, or Login notification
if the Scan job fails. The Scan job may fail if an Agent is unreachable.
Click on the floppy icon to save your new Scan job, shown in Figure 5-39.
Table: the possible combinations of the Filesystems, Profiles, and Directories selections in a
Scan job, showing for each combination that the selected filesystems are scanned (and
directories only if they are in a specified filesystem), and at which levels the data is
summarized.
5.2 OS Alerts
Tivoli Storage Resource Manager enables you to define Alerts on computers, filesystems,
and directories. Once the Alerts are defined, it will monitor the results of the Probe and Scan
jobs, and will trigger an Alert when the threshold or the condition is met.
Tivoli Storage Resource Manager provides a number of Alert mechanisms from which you
can choose, depending on the severity you assign to the Alert.
Figure 5-40 shows the Alert mechanisms provided by Tivoli Storage Resource Manager:
SNMP traps, TEC events, Tivoli SRM GUI alerts, Windows Event Logger entries, scripts,
and e-mail.
Depending on the severity of the triggered event or the functions available in your
environment, you may want to be alerted with:
An SNMP trap to an event manager. Figure 5-41 shows a Filesystem space low Alert as
displayed in our SNMP application, IBM Tivoli NetView. Defining the event manager is
explained in “Alert Disposition” on page 114.
A TEC event. See Chapter 5, “Operations: Policy, Quotas, and Alerts” on page 159.
An entry in the Windows Event log, as shown in Figure 5-44. This is useful for lower
severity alerts or when you are monitoring your Windows event logs with an automated
tool such as IBM Tivoli Distributed Monitoring.
Running a specified script - The script runs on the specified computer with the authority of
the Agent (root or Administrator). See 5.3.5, “Scheduled actions” on page 229 for special
considerations with script execution.
An e-mail - Tivoli Storage Resource Manager must be configured with a valid SMTP
server and port as explained in “Alert Disposition” on page 114. Figure 5-45 shows an
example of e-mail notification.
Except for the Alert Log, you can create multiple definitions for each of those Alert features of
Tivoli Storage Resource Manager. To create a new definition, right-click on the feature and
select New <feature>. Figure 5-47 shows how to create a new Filesystem Alert.
Alerts tab
The Alerts tab contains two parts:
Triggering condition to specify the computer component you want to be monitored. You
can monitor a computer for:
– RAM increased
– RAM decreased
– Virtual Memory increased
– Virtual Memory decreased
– New disk detected
– Disk not found
– New disk defect found
– Total disk defects exceed a threshold. You will have to specify the threshold.
– Disk failure predicted
– New filesystem detected
Information about disk failures is gathered through commands against the disks, with the
following exceptions:
– IDE disks support only Disk failure predicted queries
– AIX SCSI disks support neither failure nor predicted-failure queries
Triggered action where you specify the action that must be executed. Available actions are
described in Figure 5-40. If you choose to run a script, it will receive several positional
parameters that depend on the triggering condition. The parameters display on the
Specify Script panel, which is accessed by checking Run Script and clicking the Define
button.
Figure 5-49 shows the parameters passed to the script for a RAM decreased condition.
Figure 5-50 shows the parameters passed to the script for a Disk not found condition.
Computers tab
This limits the Alert process to specific computers or computer Groups (Figure 5-51).
Alerts tab
As for Computer Alerts, the Alerts tab contains two parts. In the Triggering condition section
you can specify to be alerted if a:
Filesystem is not found, which means the filesystem was not mounted during the most
recent Probe or Scan.
Filesystem is reconfigured.
Filesystem free space is less than a threshold specified in percent, KB, MB, or GB.
Free UNIX filesystem inode count is less than a threshold (either percent or inodes count).
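The free-space trigger amounts to the same test a hand-rolled monitor would make. The following is a sketch under the assumption of a percent-based threshold; the function name is ours.

```shell
#!/bin/sh
# Sketch: succeed (exit 0) when a filesystem's free space falls below
# a percentage threshold - the condition a Filesystem Alert monitors.
# $1 = mount point, $2 = threshold in percent.
freespace_below() {
    # df -P prints the used percentage in column 5, e.g. "85%"
    used=$(df -P "$1" | awk 'NR==2 {sub(/%/, "", $5); print $5}')
    free=$((100 - used))
    [ "$free" -lt "$2" ]
}
```

For example, freespace_below /var 10 && echo "trigger alert" would fire only when /var has less than 10% free space, just as the Alert would.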
You can choose to run a script (click the Define button next to Run Script), or you can also
change the content of the default generated mail by clicking on Edit Email. You will see a
popup with the default mail skeleton which is editable. Figure 5-53 shows the default e-mail
message.
Alerts tab
Directory Alerts configuration is similar to Filesystem alerts. The supported triggers are:
Directory not found
Directory consumes more than the specified threshold set in percent, KB, MB or GB.
Directories tab
Probe jobs do not report on directories, and Scan jobs report on directories only if a
directory Profile has been assigned (see “Putting it all together” on page 188). You can
therefore only choose to be alerted for a directory that has already been included in a
Scan and actually scanned.
There are eight different views. Each of them will show only the Alerts related to the selected
view except:
All view - Shows all Alerts
Alerts Directed to <logged user> - Shows all Alerts where the currently logged-in user has
been specified in the Login notification field
When you click on the icon on the left of a listed Alert, you will see detailed information on the
selected Alert as shown in Figure 5-55.
Supported platforms: AIX using JFS, and Sun using VxFS.
To set up a filesystem extension policy, select IBM Tivoli SRM -> Policy Management ->
Filesystem Extension. Right-click Filesystem Extension and select Create Filesystem
Extension Rules. The screen in Figure 5-57 displays.
In the Filesystems tab, select the filesystems which will use filesystem extension policy by
moving them to the Current Selections panel. In Figure 5-57 we selected the /opt filesystem.
Note the Enabled checkbox - the default is to check it, meaning the rule will be active. If you
uncheck the box, it will toggle to Disabled - you can still save the rule, but the job will not run.
To specify the extension parameters, select the Extension tab (Figure 5-58).
This tab specifies how a filesystem will be extended. Here are the fields.
Amount to Extend
We have the following options:
Note: If you select Make Capacity under Amount to Extend, the Extend filesystems
when freespace is less than option is not available.
In the Provisioning tab (Figure 5-59) we define LUN provision parameters. Note that LUN
provisioning is available at the time of writing for filesystems on an ESS only.
LUN Provisioning is an optional feature for filesystem extension. When Enable
Automatic LUN Provisioning is selected, LUN provisioning is enabled.
In the Create LUNs that are at least field, you can specify a minimum size for new LUNs. If
you select this option, LUNs of at least the size specified will be created. If no size is
specified, then the Amount to Extend option specified for the filesystem (in “Amount to
Extend” on page 202) will be used. For more information on LUN provisioning, see IBM Tivoli
Storage Resource Manager 1.2 User’s Guide.
The Model for New LUNs feature means that new LUNs will be created similar to existing
LUNs in your setup. At least one ESS LUN must be currently assigned to the Tivoli Storage
Resource Manager Agent associated with the filesystem you want to extend. There are two
options for LUN modeling:
Model new LUNs on others in the volume group of the filesystem being extended -
provisioned LUNs are modeled on existing LUNs in the extended filesystem’s volume
group.
Model new LUNs on others on the same host as the filesystem being extended -
provisioned LUNs are modeled on existing LUNs in the extended filesystem’s volume
group; if the corresponding LUN model cannot satisfy the requirements, it will look for
other LUNs on the same host.
The LUN Source option defines the location of the new LUN in the ESS, and has two options:
Same Storage Pool - provisioned LUNs will be created using space in an existing Storage
Pool. In ESS terminology this is called the Logical Sub System or LSS.
Same Storage Subsystem - provisioned LUNs can be created in any Storage Pool or
ESS LSS.
The When to Enforce Policy tab (Figure 5-60) specifies when to apply the filesystem
extension policy to the selected filesystems.
Enforce Policy after every Probe or Scan automatically enforces the policy after every
Probe or Scan job. The policy will stay in effect until you either change this setting or disable
the policy.
Enforce Policy Now enforces the policy immediately, for a single instance.
Enforce Policy Once at enforces the policy once at the specified time, specified as month,
day, year, hour, minute, and AM/PM.
The Alert tab (Figure 5-61) can define an Alert that will be triggered by the filesystem
extension job.
Important: After making configuration changes to any of the above filesystem extension
options, you must save the policy, as shown in Figure 5-62. If you selected Enforce Policy
Now, the policy will be executed after saving.
/opt has 64 MB and 15% used space. We created a new Filesystem Extension rule - IBM
Tivoli SRM -> Policy Management -> Filesystem Extension. Right-click Filesystem
Extension and select Create Filesystem Extension Rules. We selected the /opt filesystem
as shown in Figure 5-63.
In the Extension tab we specified the following values as shown in Figure 5-64:
Extend the filesystem by 64MB
Extend filesystem regardless of remaining freespace
We do not need to define anything in the Provisioning tab, as the rootvg is not on an ESS. In
When to Enforce Policy we specified Enforce policy: Now; this means that the policy will be
executed only once.
Under Alert, we chose to send an SNMP trap and TEC event when a filesystem extension
action was triggered as shown in Figure 5-65.
After all the data is entered we save the rule, calling it opt extension. The new definition is
now shown in the menu tree as in Figure 5-66.
We now execute the rule by right-clicking it and selecting Run Now. In Figure 5-67 you can
see the successful extension of the /opt filesystem.
By clicking on the spyglass, you can examine the log of the action, as shown in Figure 5-68.
As you can see from Figure 5-68, the policy was executed three times, so the new filesystem
size should be 64 MB (original size) + 3 x 64 MB (the increment defined in the extension
policy) = 256 MB, and this is the size displayed in Example 5-4.
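The arithmetic is easy to check with shell arithmetic, using the sizes from this example:

```shell
#!/bin/sh
# Sketch: original /opt size plus three 64 MB extensions, matching the
# values used in this chapter's example.
ORIGINAL=64      # MB, size of /opt before the policy ran
INCREMENT=64     # MB, the Amount to Extend in the rule
RUNS=3           # number of times the policy was executed
echo $((ORIGINAL + RUNS * INCREMENT))   # prints 256
```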
The volume group is defined on the vpath0 device, which represents an ESS LUN with serial
number 30918921. The vpath device is used because we have two paths to the physical LUN.
See the Subsystem Device Driver documentation for an explanation of vpath device
functionality.
The filesystem is mounted on /essfs1 and is defined on logical volume /dev/lv00 as shown in
Example 5-4. The command lslv lv00 shows the information about the logical volume,
including its containing volume group. See Example 5-6.
In Example 5-4 you can see the current /essfs1 filesystem size which is 2.56GB.
We will now define the Filesystem Expansion Rule following the steps in “Expanding the
filesystem in rootvg (no LUN provisioning)” on page 207.
We defined to add 2GB on each expansion, which will trigger when the filesystem has less
than 75% free space.
We defined to model the LUNs on LUNs which are already in the volume group, and to create
them anywhere in the ESS.
In When to Enforce Policy we specified Enforce policy: Now; this means that the policy will
be executed only once, or whenever we run it manually.
In the Alert tab we defined to send an SNMP trap and TEC event when a filesystem extension
action was triggered.
Now we create some data to fill the disk. Example 5-7 shows /essfs1 at 80% utilization.
Now we run the filesystem extension policy. Figure 5-72 shows the filesystem extension was
successfully completed, extending /essfs1 by 2GB.
The df -k output also shows the difference as in Example 5-8. The new size is 4.56GB.
As the /essfs1 free space is still below 75%, we ran the rule again and the filesystem was
expanded again. The result can be seen in Example 5-9.
The new size is 6.56GB. Until now, the filesystem expansion did not require a new LUN as the
existing LUN for the essvg1 volume group was 8GB, as shown with the command lspv
vpath0 in Example 5-10.
As the /essfs1 free space is still below 75% we ran the rule again and the filesystem was
expanded again. The result can be seen in Example 5-11.
The partial log file for the third expansion is shown in Figure 5-73.
As shown in the log, a new LUN of 2GB was required to accommodate another filesystem
expansion. After provisioning the ESS LUN, it was added to the essvg1 volume group and
the filesystem was expanded, as shown in Example 5-11 on page 215.
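The provisioning decision can be reproduced with a quick check; a sketch, with the sizes in
GB taken from the preceding examples:

```shell
# Does the next 2 GB expansion still fit on the existing 8 GB LUN?
# fs is the current filesystem size, inc the expansion increment.
lun=8; fs=6.56; inc=2
verdict=$(awk -v l="$lun" -v f="$fs" -v i="$inc" \
    'BEGIN { print ((f + i > l) ? "new LUN required" : "fits on existing LUN") }')
echo "$verdict"   # prints "new LUN required"
```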
The lsvpcfg command shows the new LUN in the essvg1 volume group (Example 5-12).
The lspv vpath1 command shows the physical attributes of the new LUN (Example 5-13).
Tip: If you wish to maintain the same LUN size in the volume group, we recommend
matching the filesystem expansion size to the size of the LUNs used in the volume group.
From the new LUN serial number 20018921 shown in Example 5-12, we can see that it was
created in a different Storage Pool, or LSS, inside the ESS. The original LUN was in LSS 0x13
(identified by its serial number starting with 3xx) and the new one is in LSS 0x12 (identified
by its serial number starting with 2xx). The new LUN was created in another LSS because the
original LSS is full, so there is no space there for new LUNs. We
selected the option to create new LUNs anywhere in the ESS in our expansion rule. You can
see the physical representation of LUNs from the ESS Specialist in Figure 5-74.
On this screen, the selected icon with label 43P_0 represents the host definition in the ESS
for the server which was used in the LUN provisioning example in this section.
5.3.2 Quotas
The main functionality of Quotas is shown in Figure 5-75.
Quotas can be set at either a user or at an OS User Group level. For the OS User Group level,
this could be either an OS User Group, (see “OS User Group Groups” on page 171), or a
standard OS group (such as system on UNIX, or Administrators on Windows). User
Quotas trigger an action when one of the monitored users has reached the limit, while OS
User Group Quotas trigger the action when the sum of space used by all users of the
monitored groups has reached the limit. The Quota definition mechanism is the same for
both, except for:
The menu tree to use:
– IBM Tivoli SRM -> Policy Management -> Quotas -> User
– IBM Tivoli SRM -> Policy Management -> Quotas -> OS User group
The monitored elements you can specify:
– User and user groups for User Quotas
– OS User Group and OS User Group Groups for OS User Group Quota
We will show how to configure User Quotas. User Group Quotas are configured similarly.
Note that Quota enforcement is soft; that is, users are not automatically prevented from
exceeding their defined Quota, but the defined actions will trigger if that happens. There are
three sub-entries for Quotas: Network Quotas, Computer Quotas, and Filesystem Quotas.
Network Quotas
A Network Quota defines the maximum cumulative space a user can occupy on all the
scanned servers. An Alert will be triggered for each user that exceeds the limit specified in the
Quota definition.
Users tab
Figure 5-76 shows the Users tab for Network Quotas.
From the Available column, select any user ID or OS User Group you want to monitor for
space usage.
The Profile pull-down menu is used to specify the file types that will be subject to the Quota.
The list will display all Profiles that create summaries by user (by file owner). Select the Profile
you want to use from the pull-down. The default Profile Summary by Owner collects
information about all files and summarizes them on the user level. The ALLGIFFILES profile
collects information about GIF files and creates a summary at a user level as displayed in
Figure 5-77. This (non-default) profile was created using the process shown in 5.1.6,
“Profiles” on page 180.
Using this profile option, we can define general Quotas for all files and more restrictive
Quotas for some multimedia files such as GIF and MP3.
Filesystem tab
On the Filesystem tab shown Figure 5-78, select the filesystems or computers you want to be
included in the space usage for Quota management.
In this configuration, each user's cumulative space usage on all servers will be calculated
and checked against the Quota limit.
The When to CHECK tab is standard, and allows you to define a one-off or a recurring job.
Alert tab
On the Alert tab, specify the Quota limit in: KB, MB or GB, and the action to run when the
Quota is exceeded.
You can choose from the standard Alert types available with Tivoli Storage Resource
Manager. Each Alert will fire once for each user exceeding their Quota. We chose to run a
script that we wrote, QUOTAUSERNET.BAT, listed in Example 5-14.
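The listing itself appears in Example 5-14. As a minimal sketch of what such an Alert script
can do (the parameters and their order below are assumptions for illustration, not the
documented Tivoli Storage Resource Manager interface), a shell equivalent might simply log
the offender:

```shell
#!/bin/sh
# Hypothetical quota-alert handler; the parameter meanings are assumed
# for this sketch, not taken from the product documentation.
log_quota_alert() {
    user="$1"    # (assumed) user ID that exceeded the Quota
    used="$2"    # (assumed) space consumed by that user
    printf 'quota exceeded by %s (%s)\n' "$user" "$used"
}
```

In a real deployment, the printf line would be replaced by a mail or paging command.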
The Alert fired for the users root and Administrators. This clearly shows that administrative
users such as root and Administrators should not normally be included in standard Quota
monitoring.
Computer Quotas
Here, we received an Alert that the root user exceeded the Quota on the computers CRETE,
SOL-E, BRAZIL, and EASTER. Another Alert was generated for user itso_hb, who
exceeded the Quota on the system BRAZIL.
Filesystem Quotas
A Filesystem Quota defines a space usage limit at the filesystem level. An Alert will be fired
for each filesystem where a user exceeds the limit specified in the Quota definition.
Use IBM Tivoli SRM -> Policy Management -> Quotas -> User -> Filesystem, right-click,
and select New quota to create a new Quota. After setting up and running a Quota for
selected filesystems, we received the following entries in the Alert History, shown in
Figure 5-81.
We can also drill down to the filesystem level on BRAZIL for the user itso_hb, who generated
an Alert in “Computer Quotas” on page 222.
When you run a Network Appliance Quota job, the NetApp Quota definitions will be imported
into Tivoli Storage Resource Manager for read-only purposes.
Note: Network Appliance Quotas jobs must be scheduled after the Scan jobs, since they
use the statistics gathered by the latest Scan to trigger any NetApp Quota violation.
With IBM Tivoli SRM -> Policy Management -> Network Appliance Quotas -> Imported
User Quotas and Imported OS User Group Quotas, you can view the definitions of the
Quotas defined on your NetApp filers.
Constraints are used to generate Alerts when files matching specified criteria are consuming
too much space on the monitored servers.
Constraints provide a deeper level of Storage Resource Management. Quotas allow
reporting on users who have exceeded their space limitations. With Constraints, we can go
further and specify limits on particular file types or other attributes, such as owner, age, and
so on. The output of a Constraint applied to a Scan is a list of the files that are consuming
too much space.
Note: Unlike Quotas, Constraints are automatically checked during Scan jobs and do not
need to be scheduled. Also, the Scan does not need to be associated with Profiles that will
cause data to be stored for reporting.
Filesystems tab
The Filesystems tab lets you select the computers and filesystems to be checked by the
current Constraint. The selection method for computers and filesystems is the same as for
Scan jobs (see 5.1.7, “Scans” on page 185).
File Types tab
Use the buttons at the top of the screen to allow or forbid files depending on their names. The
left column shows some default file patterns, or you can use the bottom field to create your
own pattern. Click >> to add your pattern to the allowed/forbidden files.
Users tab
The Users tab (Figure 5-84) is used to allow or restrict the selected users in the
Constraint.
Options tab
The Options tab provides additional conditions for file selection, and limits the number of
selected files to store in the central repository.
Once again, the conditions added in this tab will be logically ORed with those previously set
in the File Types and Users tabs.
The bottom part of the tab, shown in Figure 5-85, contains the textual form of the Condition,
taking into account all the entries made in the Filesystems, File Types, Users and Options
tabs.
You can change this condition or add additional conditions by using the Edit Filter button. It
displays the file filter popup (Figure 5-86) to change, add, and remove conditions or
condition groups, as previously explained in 5.1.6, “Profiles” on page 180.
We changed the file filter to a more appropriate one by changing the OR operator to AND.
Alert tab
After selecting the files, you may want to generate an Alert only if the total space used by
files meeting the Constraint conditions exceeds a predefined limit. Use the Alert tab to
specify the triggering condition and action.
In our Constraint definition, a script is triggered for each filesystem where the selected files
exceed one Gigabyte. We select the script by checking the Run Script option and selecting
Define ... as shown in Figure 5-89. The script will be passed several parameters including a
path to a file that contains the list of files meeting the Constraint. You can use this list to
execute any action including delete or archive commands.
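As a hedged sketch (assuming, for illustration only, that the file-list path arrives as the first
parameter), such a script could walk the list like this:

```shell
#!/bin/sh
# Sketch of a Constraint action script. It reads the file containing the
# list of violating files (path assumed to be passed as $1) and acts on
# each entry; the echo stands in for a real archive or delete command.
handle_violations() {
    filelist="$1"
    while IFS= read -r f; do
        echo "would archive: $f"
    done < "$filelist"
}
```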
Tivoli Storage Resource Manager comes with an integrated tool to schedule script execution
on any of the Agents. If a script fails due to an unreachable Agent, the standard Alert
processes can be used. To create a Scheduled Action, select Scheduled Actions -> Scripts,
right-click, and select New Script.
Computers tab
On the Computers tab, select the computers or computer groups to execute the script.
The Script Name pull-down field lists all files (including non-script files) in the servers’ script
directory.
Attention: For Windows Agents, the script must have an extension that has an associated
script engine on the computer running the script (for example: .BAT, .CMD, or .VBS).
Alerts tab
With the Alert tab you can choose to be notified when a script fails due to an unreachable
Agent or a script not found condition. The standard Alert Mechanism described in 5.2, “OS
Alerts” on page 189 is used.
Figure 5-92 shows the navigation tree for Tivoli Storage Resource Manager for Databases.
5.4.1 Groups
To get targeted monitoring of your database assets, you can create Groups consisting of:
Computers
Databases-Tablespaces
Tables
Users
Computer Groups
All databases residing on the selected computers will be probed, scanned, and managed for
Quotas.
The groups you have created using Tivoli Storage Resource Manager remain available for
Tivoli Storage Resource Manager for Databases. If you create a new Group, the computers
you put in it will be removed from the Group they currently belong to.
To create a Computer Group, use IBM Tivoli SRM for Databases -> Monitoring -> Groups
-> Computer, right-click, and select New Group.
“Computer Groups” on page 163 gives more information on creating Computer Groups.
Databases-Tablespaces Groups
Creating Groups with specific databases and tablespaces may be useful for applying identical
management rules for databases with the same functional role within your enterprise.
Table Groups
You can use Table Groups to create Groups of the same set of tables for selected or all
database instances.
You can combine both views as each entry you add will be added to the group.
User Groups
As for core Tivoli Storage Resource Manager, you can put user IDs in groups. The user
groups you create will be available for the whole Tivoli Storage Resource Manager product
set.
Tip: The Oracle and MS SQL-Server user IDs (SYSTEM, sa, ...) are also included in the
available users list after the first database Probe.
5.4.2 Probes
The Probe process is used to gather data about the files, instances, logs, and objects that
make up monitored databases. The results of Probe jobs are stored in the repository and are
used to supply the data necessary for Asset Reporting.
Use IBM Tivoli SRM for Databases -> Monitoring -> Probe, right-click, and select New
probe to define a new Probe job. In the Instance tab of the Probe configuration, you can
select specific instances, computers, and computer groups.
The Computers list contains only computers that have been licensed for Tivoli Storage
Resource Manager for Databases. The product licensing procedure is described in “License
Keys” on page 108.
5.4.3 Profiles
As for Tivoli Storage Resource Manager, Profiles in Tivoli Storage Resource Manager for
Databases are used to determine the database attributes that are to be scanned. They also
determine the summary level and the retention time for data kept in the repository.
Use IBM Tivoli SRM for Databases -> Monitoring -> Profile, right-click, and select New
profile to define a new profile. Figure 5-95 shows the Profile definition screen.
5.4.4 Scans
Scan jobs in Tivoli Storage Resource Manager for Databases collect statistics about the
storage usage and trends within your databases. The gathered data is used as input to the
usage reporting and Quota analysis.
All this information is set through the Scan definition screen that contains one tab for each
previously listed item. To define a new Scan, select IBM Tivoli SRM for Databases ->
Monitoring -> Scan, right-click and select New scan as in Figure 5-96.
Note: If you request detailed scanning of tables, the tables will only be scanned if their
respective databases have also been selected for scanning.
Tivoli Storage Resource Manager for Databases uses the standard Alert mechanisms
described in 5.2, “OS Alerts” on page 189.
An interesting Alert for Oracle is Archive log contains more than, since the Oracle
application can hang if there is no more space available for its archive log. This Alert can be
used to monitor the space used in this specific directory and trigger a script that will
archive the files to an external manager, such as Tivoli Storage Manager, once the predefined
threshold is reached. Here is the procedure:
1. We defined an Instance Alert and selected the Archive log contains more than
condition. We also specified that the script ARCHORA.BAT must be executed when the
Alert is fired. Note the parameters passed to the script.
2. As the archive command must run on the server where Oracle resides, we set Triggering
Computer in the Where to run pull-down field. This does not mean that the script must be
physically copied to the monitored server.
3. On the Instance tab, we selected our Oracle server (GALLIUM) and we saved the Alert as
ArchiveOracleLog.
Example 5-16 shows a sample script which we have written, ARCHORA.BAT, which will
archive the Oracle logs to a Tivoli Storage Manager server, and then delete them after
archive. It assumes you already have a Tivoli Storage Manager Server and client defined and
configured for your environment. Note this is a sample only, and should be customized and
tested for your environment.
@echo on
rem ARCHORA.BAT - archive the Oracle archive logs to Tivoli Storage Manager,
rem then delete them. The parameter meanings are assumed for this sketch:
rem %1 = archive log directory, %2 = database type.
if not "%2"=="ORACLE" goto NOTORACLE
if not exist %1 goto DIRNOTEXIST
dir %1\ARC*.*
dsmc archive "%1\ARC*.*" -deletefiles
if errorlevel 1 goto DSMCERROR
echo ARCHORA.BAT ended successfully ...
exit 0
:NOTORACLE
echo Error - Not Oracle database
exit 4
:DIRNOTEXIST
echo Error - Directory does not exist
exit 4
:DSMCERROR
echo Error while running DSMC command
dir %1\ARC*.*
type dsmerror.log
exit 4
When the Probe job is run against the GALLIUM server, an Alert is fired. You can see its
output in Figure 5-98.
To avoid a Log Full condition, we will define an Alert to monitor log usage on our MS
SQL-Server database. When the log reaches 70% utilization, the Alert will trigger and
perform a backup of the transaction log.
@echo off
rem Sketch: the SQLMAINT call and the parameter layout are assumptions.
if not "%1"=="MSSQL" goto NOTSQL
sqlmaint -D %2 -BkUpLog -BkUpMedia DISK
if errorlevel 1 goto SQLERROR
exit 0
:NOTSQL
echo Error - Not MSSQL database
exit 4
:SQLERROR
echo Error while running SQLMAINT command
exit 4
Tip: Please refer to 5.2.5, “Alert logs” on page 198 for more information about using the
Alert log tree.
We used IBM Tivoli SRM for Databases -> Policy Management -> Quotas -> Network,
right-clicked, and selected New quota to create a new Quota. The right pane will switch to a
Quota configuration screen with four tabs.
Users tab
On the Users tab, specify the database users you want to be monitored for Quotas. You can
also select a profile in the Profile pull-down field on the top right of the tab. In this field, you
can select any Profile that stores summary data on a user level. The Quota will only be fired
for databases that have been scanned using this Profile.
Database-Tablespace tab
Use this tab to restrict Quota checking to certain databases. You can choose several
databases or computers. If you choose a computer, all the databases running on it will be
included for Quota management.
Alert tab
On the Alert tab you can specify the space limit allowed for each user and the action to run. If
no action is selected, the Quota violation will only be logged in the Alert log.
5.7.1 Database up
Tivoli Storage Resource Manager for Databases can be used to test for database availability
using Probe and Scan jobs since they will fail and trigger an Alert if either the database or the
listener is not available. Since those jobs use system resources to execute, you may instead
choose scheduled scripts to test for database availability.
Due to limited scheduling options and the need for user-written scripts, we recommend using
dedicated monitoring products such as Tivoli Monitoring for Databases.
Freelist count
You cannot monitor the count of freelists in an Oracle table using Tivoli Storage Resource
Manager for Databases.
This part of the book gives detailed procedures for using the reporting facilities of IBM Tivoli
Storage Resource Manager, plus information on backing up, restoring, and maintaining your
IBM Tivoli Storage Resource Manager environment.
Chapter 6. Reporting
This chapter discusses the following:
An overview of IBM Tivoli Storage Resource Manager’s reporting options
Using the supplied report definitions
Enterprise Storage Subsystem (ESS) reporting
– Prerequisite checking
– Creating a Probe
– Asset Reports
• By Storage Subsystems
– Storage Subsystem Reports
• Computer Views
• Storage Subsystem Views
Backup Reporting
Suggested list of Top 10 Reports
Customizing standard reports and saving the changes for later use
Setting up processes for generating daily reports
Reporting categories
Asset
Storage Subsystems
Availability
Capacity
Usage
Usage Violations
Backup
Chargeback
The reporting capabilities of Tivoli Storage Resource Manager are very rich, with over 300
predefined views. You can view the data at a very high level (for example, the total amount
of free space available across the enterprise) or at a low level (for example, the amount of
free space available on a particular volume or in a table in a database).
The data can be displayed in tabular or graphical format, or can be exported as HTML,
Comma Separated Value (CSV), or formatted report files.
The reporting function uses the data stored in the Tivoli Storage Resource Manager
repository. Therefore, in order for reporting to be accurate in terms of using current data,
regular discovery, Ping, Probe, and Scan jobs must be scheduled. These jobs are discussed
in 5.1, “OS Monitoring” on page 160.
Figure 6-2 shows the Tivoli Storage Resource Manager main screen with the reporting
options highlighted.
The Reporting sections are used for interactive reporting. They can be used to answer ad hoc
questions such as, “How much free space is available on my UNIX systems?” Typically, you
will start looking at data at a high-level and drill down to find specific detail. Much of the
information can also be displayed in graphical form as well as in the default table form.
The My Reports sections give you access to predefined reports. Some of these reports are
pre-defined by Tivoli Storage Resource Manager; others can be created by individual users.
My Reports will be covered in more detail in 6.5, “Creating customized reports” on page 345,
and 6.6, “Setting up a schedule for daily reports” on page 360.
The additional product, Tivoli Storage Resource Manager for Chargeback, produces storage
usage Chargeback data, as described in 6.8, “Charging for storage usage” on page 364.
Figure 6-2 IBM Tivoli Storage Resource Manager main screen showing reporting options
Most categories are available for both operating system level reporting and database
reporting. However, a few are for operating system reporting only. The description of each
category specifies which applies, and in the more detailed sections that follow, we present
the capabilities separately for Tivoli Storage Resource Manager and Tivoli Storage
Resource Manager for Databases, as appropriate.
Availability Reporting
Availability data is collected by Ping processes and allows you to report on the availability of
your storage resources and computer systems. Availability Reporting is provided for operating
system reporting only.
Capacity Reporting
Capacity Reporting shows how much storage you have and how much of it is being used. You
can report at anywhere from an entire network level down to an individual filesystem.
Capacity Reporting is provided for both operating system and database reporting.
Usage Reporting
Usage Reporting goes down a level from Capacity Reporting. It is concerned not so much
with how much space is in use, but rather with what the space is actually being used for. For
example, you can create a report that shows usage by user, or a wasted space report. You
define what wasted space means, but it could be, for example, files of a particular type, or
files within a certain directory that are more than 30 days old. Usage Reporting is provided for
both operating system and database reporting.
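The "30 days old" example maps directly onto find(1); a sketch, with a hypothetical directory
argument:

```shell
#!/bin/sh
# One possible "wasted space" rule from the text: files under a given
# directory that have not been modified for more than 30 days.
wasted_space_candidates() {
    find "$1" -type f -mtime +30 -print
}
```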
Backup Reporting
Backup Reporting identifies files that have not been backed up. Backup Reporting is provided
for operating system reporting only.
This section discusses Tivoli Storage Resource Manager’s standard reporting capabilities.
Customized reporting is covered in 6.5, “Creating customized reports” on page 345.
This section is not intended to cover all of the reporting options exhaustively, as they are
very numerous and are covered in detail in the Reporting section of the manual IBM Tivoli
Storage Resource Manager V1.1 Reference Guide, SC32-9069. Instead, this section provides
a basic overview of Tivoli Storage Resource Manager reporting, with some examples of the
types of reports that can be produced, and additional information on some of the less
straightforward reporting options.
The host GALLIUM has both Microsoft SQL-Server and Oracle databases installed to
demonstrate database reporting. The Agent on LOCHNESS also provides data for a NAS
device called NAS200. The Agent on VMWAREW2KSRV1 also provides data for a NetWare
server called ITSOSJNW6.
[Figure 6-4 is a diagram of the lab environment: the ITSRM Server and its database on
A23BLTZM (Windows NT), the ITSRM Agent and GUI Server on LOCHNESS (Windows
2000), and an ITSRM Agent on VMWAREW2KSRV1 (Windows 2000 under VMware), all
connected via Ethernet.]
Figure 6-4 Tivoli Storage Resource Manager Lab Environment
By Computer view
Click By Computer to see a list of all of the monitored systems (Figure 6-6).
From there we can drill down on the assets associated with each system. We will take a look
at node GALLIUM. In Figure 6-7 we have shown most of the items for GALLIUM expanded,
with the details for Disk 0 displayed in the right-hand bottom pane.
By OS Type view
This view of the Asset data provides the same information as the By Computer view, with the
difference that the Agent systems are displayed sorted by operating system platform.
System-wide view
The System-wide view however does provide additional capability, as it can give a
System-wide view rather than a node-by-node view of some of the data. A graphical view of
some of the data is also available. Figure 6-8 shows the options available from the
System-wide view and in the main panel, the report of all exports/shares available.
Each of the options available under the System-wide view is self-explanatory, with the
possible exception of Monitored Directories. Tivoli Storage Resource Manager can monitor
utilization at a directory level as well as at a device or filesystem level. However, by default,
directory-level monitoring is disabled.
By setting up a monitored directory you will get additional information for that directory. Note
that the information collected includes any subdirectories. Information collected about the
directory tree includes the number of files, number of subdirectories, total space used, and
average file size. This can be graphed over time to determine space usage patterns.
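For comparison, similar statistics can be gathered for a directory tree with standard tools; a
sketch, not the Agent's actual implementation:

```shell
#!/bin/sh
# Per-directory statistics similar to what a monitored directory yields:
# number of files and number of subdirectories across the whole subtree.
dir_stats() {
    files=$(( $(find "$1" -type f | wc -l) ))
    subdirs=$(( $(find "$1" -type d | wc -l) - 1 ))   # exclude "$1" itself
    echo "files=$files subdirs=$subdirs"
}
```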
All of the database Asset Reporting options are quite straightforward, with one exception:
in order to receive table-level asset information, one or more Table Groups needs to be
created. You would not typically include all database tables within Table Groups, but perhaps
either critical or rapidly growing tables. We will set up two groups, one for each database type.
To set up a Table Group, use Tivoli Storage Resource Manager for Databases -> Monitoring
-> Groups -> Table, right-click Table, and choose New Table Group (Figure 6-12).
We have entered a description of GALLIUM Table Group. Now we click New Instance to
enter the details of the database and tables that we want to monitor. From the drop down box,
we select the database instance, in this case the SQL-Server instance on GALLIUM. We then
enter three tables in turn. For each table, we entered the database name (Northwind ), the
creator name (dbo) and a table name. After entering the values, click Add to enter more
tables or finish. We entered the table names of Customers, Employees and Suppliers, as
shown in Figure 6-13. Once all of the tables have been entered click OK.
Now we return to the Create Table Group panel, and we see in Figure 6-15 the information
about the newly entered tables.
Now we chose File -> Save and when prompted, we entered the Table Group name of
GALLIUM Table Group.
In order for the information for our tables to be collected, the Table Group needs to be
assigned to a Scan job. We will assign it to the default database scan job called Tivoli.Default
DB Scan by choosing IBM Tivoli SRM for Databases -> Monitoring -> Scans ->
Tivoli.Default DB Scan. The definition for this scan job is shown in Figure 6-16 and in
particular we see the Table Groups tab. Our new Table Group is shown initially in the left hand
pane. We moved it to the right hand pane by selecting it and clicking >>. We then saved the
updates to the Scan job by choosing File -> Save (or with the floppy disk icon from the
toolbar).
Example 6-1 on page 261 is an extract from the Scan job log showing that the table
information is now being collected. You can view the Scan job log through the Tivoli Storage
Resource Manager GUI by first expanding the particular Scan job definition. A list of Scan
execution reports will be shown; select the one of interest. You may need to right-click on the
Scan job definition and choose Refresh. The list of Scan executions for the Tivoli.Default DB
Scan is shown in Figure 6-17.
Once you have chosen the actual job, you can click the detail icon for the system that you are
interested in to display the job log. The actual file specification of the log file on the Agent
Finally, we can produce table level asset reports by choosing for example, IBM Tivoli SRM
for Databases -> Reporting -> Asset-> System-wide-> All DBMSs -> Tables -> By Total
Size. This is shown in Figure 6-18.
Computer Uptime detects whether or not the Tivoli Storage Resource Manager Agent is
running. Computer Uptime statistics are gathered by a Probe job so this must be scheduled to
run on a regular basis. See 5.1.5, “Probes” on page 177.
Figure 6-19 shows the Ping report for our Tivoli Storage Resource Manager environment, and
Figure 6-20 shows the Computer Uptime report. To generate these reports, we had to select
the computers of interest and select Generate Report.
However, in reality there are only two views, or perhaps three. The Filesystem Capacity
and Filesystem Used Space views are nearly identical; the only differences are in the
order of the columns and the row sort order.
And there is relatively little difference between these two views and the Filesystem Free
Space view. The Filesystem Capacity and Filesystem Used Space views report on used
space, so they include columns like percent used space, whereas Filesystem Free Space
includes columns like percent free space. All other data is identical.
Therefore, there are really only two views: a Disk Capacity view and a Filesystem Capacity
view.
The Disk Capacity view provides information about physical or logical disk devices and what
proportion of them has been allocated. Figure 6-21 shows the Disk Capacity by Disk selection
window.
Figure 6-23 shows a Capacity Report by Computer Group. We actually have databases in
just one Computer Group, WindowsDBServers. We then drilled down to see all systems within
the WindowsDBServers group, then specifically to node GALLIUM, so that we could see all
databases on GALLIUM.
From a usage perspective there are two types of table report available:
Largest tables
Monitored tables
A Monitored Tables by RDBMS Type report is shown in Figure 6-25. In this case, only tables
that are part of a Table Group included in a Scan job will be reported on.
First navigate Tivoli Storage Resource Manager -> Policy Management -> Constraints.
Existing Constraints will be listed. Right-click Constraints and choose New Constraint. On
the Filesystems tab we entered a description of forbidden files, chose Computer Groups,
then selected db2admin.Windows Systems and clicked >>. The completed Filesystems tab
is shown in Figure 6-26.
We then need to specify, in the File Types tab, what a forbidden file is. You can define the
criteria as either inclusive or exclusive; that is, you can specify just those file types that will
violate the Constraint, or you can specify that all files will violate the Constraint except those
specified. There are a number of predefined file types included; you can also choose
additional files by entering appropriate values in the “Or enter a pattern” field at the bottom
of the form. We have chosen MP3 and AVI files. The completed File Types tab is shown in
Figure 6-27.
The Users tab is very similar to the File Types tab - you can specify which users should be
included or excluded from the selection criteria. We have taken the default, which is to include
all users.
In the Options tab, we nominate a maximum number of rows to be returned. We can also
apply some more specific selection criteria here such as only including files that are larger
than a defined size. Note, however, that these criteria are ORed with the file type criteria. For
example, if we specified here that we only wanted to include files greater than 1 MB, the
search criteria would become ((NAME matches any of ('*.AVI', '*.mp3') AND TYPE <>
DIRECTORY) OR SIZE > 1 MB). So the returned list of files would be any file greater than
1 MB in size plus any *.MP3 or *.AVI files.
If you wish to change the selection criteria so that instead you select any *.MP3 or *.AVI files
that are larger than 1 MB, you can enter 1 MB against the bigger than option, and then click
the Edit Filter button shown in Figure 6-30. You will then see the file filter as shown in
Figure 6-28. To add the size criteria to the file type criteria, click on the Size > 1MB entry and
drag it up to the All of tag. The changed filter is shown in Figure 6-29. You can also see the
Boolean expression for the filter has changed to reflect this condition.
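As a sketch, the two forms of the filter can be expressed as predicates (Python is used here purely for illustration; the pattern list and 1 MB limit are the values from our example, and the product evaluates the filter internally):

```python
from fnmatch import fnmatch

MB = 1024 * 1024
PATTERNS = ("*.mp3", "*.avi")  # forbidden file types from our example

def matches_type(name, is_dir):
    # (NAME matches any of ('*.AVI', '*.mp3') AND TYPE <> DIRECTORY)
    return not is_dir and any(fnmatch(name.lower(), p) for p in PATTERNS)

def default_filter(name, size, is_dir, limit=1 * MB):
    # Options tab criteria are ORed in: forbidden types OR any large file
    return matches_type(name, is_dir) or size > limit

def edited_filter(name, size, is_dir, limit=1 * MB):
    # After dragging "Size > 1MB" under the "All of" tag: ANDed instead
    return matches_type(name, is_dir) and size > limit
```

With the default form, a 2 MB spreadsheet violates the Constraint on size alone; with the edited form, only MP3 or AVI files larger than 1 MB violate it.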
In this case we did not want to apply a size criterion, so we left the Options tab entries at their
defaults as shown in Figure 6-30.
Finally, we can specify that we want an Alert generated if a triggering condition is met. The
only choice here is to specify a maximum amount of space consumed by the files that meet
our selection criteria. We left all of the Alert tab options at their defaults other than specifying
an upper limit of 100 MB for files that have met our selection criteria. The Alert tab is shown in
Figure 6-31. Alerting is covered in more detail in 5.2, “OS Alerts” on page 189.
We then clicked the Save button and entered a name of Forbidden Files as shown in
Figure 6-32.
Once the Scan has completed successfully, you can go ahead and produce Constraint
Violation Reports. Note that you cannot produce a report of violations of a particular
Constraint - the report will include entries for any Constraint violation. However, once the
report is generated, you can drill down into specific Constraint violations.
We produced the report by choosing Tivoli Storage Resource Manager -> Reporting ->
Usage Violations -> Constraint Violation -> By Computer. You will see a screen like
Figure 6-33 where you can select a subset of the clients if appropriate - after selecting, click
Generate Report.
You will then see a list of all of those instances of Constraint violations as shown in
Figure 6-34.
The report shows multiple types of Constraints. Some of these Constraints were predefined
(Orphaned File Constraint and Obsolete File Constraint) and others (ALLFILES and forbidden
files) we defined. An orphaned file is any file that does not have an owner. This allows you to
easily identify files that belonged to users who have left your organization, or that have had
incorrect ownership set.
From there you can drill down on a specific Constraint, then filesystems within the Constraint,
and finally to a list of files that violated the Constraint on that filesystem by selecting the
magnifying glass icon next to the entry of interest. Or, as shown in Figure 6-35, by clicking the
pie chart icon next to the entry for forbidden files, you can produce a graph indicating what
proportion of capacity is being utilized by files violating the Constraint. Position the cursor
over any segment of the pie chart to show the percentage and number of bytes consumed by
that segment. We can see that 13%, or 7.7 GB, of capacity is being consumed by files
violating the forbidden files Constraint on this filesystem.
One difference between Quotas and Constraints is the process of collecting data. For
Constraints, the data is collected as part of a standard Scan job in a similar way to adding an
additional Profile to a Scan. Quota data collections are performed in a separately scheduled
job. So, when you set up a Quota you need to specify scheduling parameters.
We set up a Quota rule called Big Windows Users by choosing Tivoli Storage Resource
Manager -> Policy Management -> Quotas -> Users -> Computer, right-clicking Computer
and selecting New Quota. On the Users screen we entered a description of Big Windows
Users and then selected User Groups and then Tivoli.Default User Group as shown in
Figure 6-37.
On the Alert tab, shown in Figure 6-40, we accepted all of the defaults other than to specify
the limit under User Consumes More Than, in this case, 1 GB.
No Alerts will be generated other than to log any exceptions in the Tivoli Storage Resource
Manager Alert Log.
Finally, we save the Quota definition, calling it Big Windows Users as shown in Figure 6-41.
This job will collect data related to the Quota, and add any Quota Violations to the Alert Log
as shown in Figure 6-43.
And finally we can create a Quota Violation report by choosing IBM Tivoli SRM -> Reporting
-> Usage Violations -> Quota Violations -> Computer Quotas -> By Computer. The
high-level report is shown in Figure 6-45.
We can then drill down further for additional detail or to produce a graphical representation of
the data behind the violation. The graph in Figure 6-46 shows a breakdown of the users’ data
by file size.
You can place a Quota on users, user groups, or all users and you can limit the Quota by
computer, computer group, database instance, database tablespace group or tablespace.
We will set up an Instance Quota that limits any individual user to 100 MB of space per
instance for any database on any server in the db2admin.WindowsDBServers computer
group.
To do this, navigate to IBM Tivoli SRM for Databases -> Policy Management -> Quotas ->
Instance. Right-click Instance and choose New Quota. Figure 6-47 shows the Quota
definition screen. We entered a description of Big DB Users and selected the Tivoli.Default
User Group by expanding User Groups, clicking Tivoli.Default User Group, and then
clicking >>.
On the When to Run tab shown in Figure 6-49, we chose to run the Quota job weekly and
nominated a time of day for the job to run. Other values were left at the defaults.
On the Alert tab (shown in Figure 6-50) we specified the actual Quota that we wanted
enforced, which was a 100 MB per user Quota. Other values were left as defaults.
We saved the new Quota definition with a name of Big DB Users as shown in Figure 6-51.
We now run the Quota by right-clicking it and choosing Run Now as seen in Figure 6-52.
To check if any user has violated the Quota, navigate to IBM Tivoli SRM for Databases ->
Alerting -> Alert Log -> All DBMSs -> All. We see one violation as shown in Figure 6-53.
We can also now run a database Quota violation report by choosing IBM Tivoli SRM for
Databases -> Reporting -> Usage Violations -> Quota Violations -> All Quotas -> By
User Quota. This report can be seen in Figure 6-54.
By default, information on only 20 files will be returned. Figure 6-56 shows the selection
screen for the report. You will notice that the report uses the Profile Tivoli.Most at Risk. It is in
this Profile that the 20 file limit is set, although the value can be changed. You can override
the value on the selection screen, but you can only reduce the value here, not increase it.
By updating the Profile you can also exclude files from the report. By default, any file in the
\WINNT\system* directory tree on any device will be excluded. You can add entries to the
exclusion list if appropriate. Ideally, the exclusion list should be the same as that in your
backup product.
To view the report, click Generate report. We chose to view it as a graphic by then clicking
the pie icon and selecting Chart: Space Distribution for All. This is shown in Figure 6-58.
This chart tells you the amount of space consumed by files that have not been backed up
since the last backup was run for this server.
The charts can be viewed in different ways. To select another type of chart, right-click in the
chart area and select another type, for example, a bar chart, as shown in Figure 6-60.
The Incremental Backup Size option makes use of the archive bit, so it can only be used on
Windows systems, and if Tivoli Storage Manager is the backup application, the
resetarchiveattribute option must be used (for Version 5.2). A sample report is shown in
Figure 6-63.
The third report type here is Incremental Range Sizes Reporting. This does not rely on the
archive bit (instead, it uses the modification date) so is more generically applicable. It is
possible to show through the use of this report the actual difference between a traditional
weekly full/daily incremental backup process versus Tivoli Storage Manager’s progressive
incremental approach. To generate this report, select Backup -> Backup Storage
Requirements -> Incremental Range Size -> By Computer as shown in Figure 6-64.
After you select the Computers of interest, click Generate Report. Figure 6-65 shows the
output from this report, with the amount of data changed for different time ranges. Note that
the values are cumulative, so for each time range, the values shown include the smaller time
periods.
If we take the results for system BONNIE as an example, it shows that 390 files (1.02% of all
files) and 41.66 MB (1.82% of total storage) changed within the previous 24 hours and 2831
files (7.45% of all files) and 2.33 GB (53.42% of total storage) changed within the last week.
A typical metric when doing Tivoli Storage Manager planning is to estimate the amount of
data that changes each day in a file server environment as typically about 5-10%. With Tivoli
Storage Resource Manager, we can replace this estimate with actual numbers.
The 1.02% change rate here is below the typical range because the system is in a lab
environment, and is not performing production work. But, to demonstrate the calculations, we
will use those figures.
With Tivoli Storage Manager’s progressive incremental approach in this example, we will only
back up approximately 291.62 MB (7 * 41.66 MB) per week, compared to approximately
4735 MB (4.38 GB * 1024 = 4485 MB + (6 * 41.66 MB)) for a traditional weekly full plus daily
incremental approach.
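The comparison can be reproduced with a few lines of arithmetic (a sketch using the BONNIE figures above):

```python
# Weekly backup volume for system BONNIE, using the report figures above
daily_change_mb = 41.66           # data modified within the last 24 hours
weekly_full_mb = 4.38 * 1024      # a full backup is roughly total storage

# Progressive incremental: only the changed data, every day of the week
progressive_mb = 7 * daily_change_mb

# Traditional: one weekly full plus six daily incrementals
traditional_mb = weekly_full_mb + 6 * daily_change_mb

print(f"progressive: {progressive_mb:.2f} MB")   # 291.62 MB
print(f"traditional: {traditional_mb:.0f} MB")   # 4735 MB
```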
To set this up we use a new option, resetarchiveattribute, in the DSM.OPT file for Windows
clients, as shown in Example 6-2. The use of this option determines whether Tivoli Storage
Manager resets the Windows archive attribute on files that have been successfully backed up
to a Tivoli Storage Manager server. Tivoli Storage Manager will also reset the archive attribute
during incremental backups if it is determined that there is already an active object on the
Tivoli Storage Manager server. The resetarchiveattribute option is useful in conjunction
with applications, such as IBM Tivoli Storage Resource Manager, as a simple way to report
on the backup status of files.
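In its simplest form, the options file entry is a single line. The fragment below is a sketch, not a complete options file; the surrounding options in your DSM.OPT will differ:

```
* DSM.OPT fragment for a Windows Tivoli Storage Manager V5.2 client
RESETARCHIVEATTRIBUTE YES
```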
The Windows archive attribute is used to indicate that a file has changed since the last
backup. If it has been set to OFF, the Windows operating system will turn the attribute back to
ON after the file has been modified. Tivoli Storage Manager itself does not use the Windows
archive attribute to determine which files to back up.
You can also use the Tivoli Storage Manager Preferences editor, as shown in Figure 6-66, to
set the Reset archive attribute option. In either case, you need to restart the Tivoli Storage
Manager client (including the Windows Scheduler Service) to make the change active.
Figure 6-66 Tivoli Storage Manager preference settings for archive attribute
The next Tivoli Storage Resource Manager Scan will then be able to identify files backed up
with Tivoli Storage Manager, and include them in reporting functions.
[Figure 6-67 content: Reporting categories. Asset Reporting: By Storage Subsystem (Disk
Groups, Volume Spaces, Disks, LUNs). Storage Subsystem Computer Views: By Computer,
By Filesystems/Logical Volumes. Storage Subsystem Views: By Storage Subsystem, By
LUN, By Disk.]
The reporting capabilities in Tivoli Storage Resource Manager are expanded in Version 1.2 to
include IBM Enterprise Storage Server (ESS) reporting. IBM Tivoli Storage Resource
Manager uses Probe jobs to collect information about the ESS. We can then use the
reporting facility to view that information. The new subsystem reports show the capacity,
controllers, disks, and LUNs of an ESS and their relationships to computers and filesystems
within a network. Figure 6-67 summarizes the functionality.
[Figure: lab environment. Host tsmsrv43p (pSeries 43p, AIX 5.1 ML 4, ITSRM Agent,
172.31.1.1), ESS F20, 2109 switch (172.31.1.155), intranet.]
Important: Refer to 4.9, “CIM/OM” on page 145 for additional details on confirming
these prerequisites.
The IBM Tivoli Storage Resource Manager will run a discovery to locate the CIM/OM server
in our environment, which in turn discovers the ESSs. See 4.9.3, “CIM/OM configuration in
IBM Tivoli Storage Resource Manager” on page 153.
Next, we show how to create a Probe for an ESS-F20. Select Probe -> Select new probe,
then under the Computers tab, choose Storage Subsystems. See Figure 6-69.
On the When to PROBE tab, we selected PROBE Now because we need to populate the
backend repository. See Figure 6-70.
After all parameters are defined, save the Probe definition. At this point the Probe is
submitted and will run immediately.
Note: For additional information on creating Probes, see 5.1.5, “Probes” on page 177.
There are several ways to check the status of the Probe job. First, we can check the color of
the Probe job entry in the navigation tree, then in the content panel. There are two colors that
represent job status. They are:
GREEN - Job completed successfully with no errors
RED - Job completed with errors
The status of the Probe job is displayed in text and in color, as shown in Figure 6-72, after
selecting the Probe job output in the navigation tree. The job at 8:44 am is in green, indicating
success. The job at 6:32 pm is in red, indicating errors.
We open the Probe job by selecting it and double-clicking the magnifying glass icon next to the
job in the content window. We see the contents of the job, including detailed information on
the status, as in Figure 6-73. Here, we have selected the successful Probe on June 9 at 8:44.
We choose Reporting -> Asset -> By Storage Subsystem -> ESSF20. This report provides
specific resource information of the ESS and allows us to view storage capacity by a
computer, filesystem, storage subsystem, LUN, and disk level. We can also view the
relationships between the components of a storage subsystem. Notice that the navigation
tree is hierarchical, and shows ESSF20 as active (green). See Figure 6-74.
We drill down to the Disk Groups. The disk group contains information related to the ESS, as
well as the volume spaces and disks associated with those Disk Groups. When we expand
the Disk Groups node, a list of all Disk Groups on the ESS is displayed (Figure 6-75).
Continuing, we expand the disk group DG1 to view the disks and volume spaces within it. We
open Volume Space VS3, which shows the disks and LUNs associated with it. The Disks
subsection shows the individual disks associated with the Volume Space (see Figure 6-76).
Notice the LUNs subsection for disk DD0105 (Figure 6-77). This shows the LUN to disk
relationship. The LUNs shown here are just a subset of all the LUNs. You can see that the
LUN is spread across all the displayed disks in the content window.
Figure 6-78 shows the discovery of a disk with no LUN associations. This is known as a hot
spare. It can be used when one of the other seven disks in the disk group fails.
We now show a high level view of all disks in ESSF20. There are 32 disks in the ESS, as
shown in Figure 6-74 on page 302 in the Number of Disks field. Figure 6-79 shows a partial
listing of the disks.
We can also display a report of all the LUNs in the ESS. This report provides the physical disk
association with each LUN. We have a total of 56 LUNs in the ESSF20 as shown in
Figure 6-74 on page 302 (number of LUNS). A partial listing is shown in Figure 6-80.
By Computer
We drill down Computers Views -> By Computer. The report displays the association of
filesystems to the storage subsystem, LUNS, and disks on ESSF20. These reports are useful
for relating computers and filesystems to different storage subsystem components. There are
three options available in the Relate Computers to: pull down, as shown in Figure 6-81.
We select Storage Subsystems from the pull down, select the desired computer and click
Generate. Figure 6-82 shows the generated report: TSMSRV43P uses 9.24 GB in the
ESS.
Returning to the selection screen tab (Figure 6-81) we select LUNs. We choose the same
host, and click Generate. Figure 6-83 shows the generated report; the relationship between
TSMSRV43P and its assigned LUNs. TSMSRV43P has one LUN created on the ESS.
Finally, from the Selection tab (Figure 6-81), we select Disks, our host TSMSRV43P, and click
Generate. Figure 6-84 shows the report: the ESS disks assigned to the LUN on the host.
By Filesystem/Logical Volume
We will now drill to Computer Views -> By Filesystem/Logical Volume. The report displays
the association of filesystems to the storage subsystem, LUNS, and disks on ESSF20. These
reports are useful for relating computers and filesystems to different storage subsystem
components. There are three options available in the Relate Filesystem/Logical Volumes
to pull down, shown in Figure 6-85.
Select Storage Subsystem, the host (TSMSRV43P), and click Generate. Figure 6-86 shows
the filesystems on the host, which are located on the ESS.
From the Selection tab (Figure 6-85) we now choose LUNs, the host (TSMSRV43P), and
click Generate. Figure 6-87 shows the LUN location of each filesystem on the host.
From the Selection tab (Figure 6-85) we now choose Disks, the host (TSMSRV43P), and
click Generate. Figure 6-88 shows which disks comprise each filesystem and logical
volume.
By Storage Subsystem
We will now drill down Storage Subsystem Views -> By Storage Subsystem. These
reports display the relationships of the ESS components (storage subsystems, LUNs, and
disks) to the computers and filesystems and logical volumes. There are two options available
in the Relate Storage Subsystems to: pull down, shown in Figure 6-89.
Select Computers from the pull down, the subsystem ESSF20, and click Generate.
Figure 6-90 shows the space used by each host on the storage subsystem.
Now, select Filesystem/logical Volumes from Figure 6-89, the ESSF20 subsystem, and
click Generate. Figure 6-91 shows each host’s filesystems and logical volumes, with their
capacity and free space.
By LUN
Continuing, we drill down Storage Subsystem Views -> By LUNs, (Figure 6-92).
Select Computer from the Relate LUNs to: pull down, select the subsystem (ESSF20) with
the associated disks (default is all), and click Generate Report. Figure 6-93 shows the LUN
to computer relationships. Now select Filesystem/Logical Volumes from the Relate LUNs
to: pull down, the ESSF20 subsystem with associated logical disks (default is all), and click
Generate Report. Figure 6-94 shows the relationships between the LUNs, computers, and
filesystems/logical volumes, including free space and host device logical names.
Disks
Now we drill to Storage Subsystem Views -> Disks. There are two options available in the
Relate Disks to: pull down, shown in Figure 6-95.
Select Computer from the pull down, the ESSF20 subsystem with related disks (default is
all), and click Generate Report. Figure 6-96 shows the relationships of the disks to the hosts.
Now select Filesystem/Logical Volumes from the pull down (Figure 6-95), the ESSF20
subsystem with related disks (default is all), and click Generate Report. Figure 6-97 shows
the relationship between the ESS disks and the filesystems and logical volumes.
Note: For demonstration purposes, we have reduced some of the fields in the reports.
Click Generate Report. The report is shown in Figure 6-99. Various columns are displayed:
Storage Subsystem
Storage Subsystem Type
Manufacturer
Model
Serial Number
Computer
Filesystem/Logical Volume Path
Capacity
Free Space
Physical Allocation
Figure 6-100 shows the right hand columns of the same report.
This report provides quick answers to how much space on the ESS is allocated to each
filesystem.
Select LUNs this time from the pull-down in Figure 6-98. The report in Figure 6-101 shows
the LUN to host mapping for the ESS, which filesystem is associated with each LUN, and the
free space.
Figure 6-101 Computer view to the filesystem with capacity and free space
To generate this report, select IBM Tivoli SRM -> Reporting -> Storage Subsystem ->
Computer Views -> By Computer tree. We have selected all computers as in Figure 6-102.
Click the Generate Report button; the report is shown in Figure 6-103.
Note you can sort the report on a different column heading by clicking on it. The current sort
field is indicated by the small pointer next to the field name. Clicking again in the same
column reverses the sort order.
For each computer, percent availability, number of reboots, total down time, and average
down time are given, as shown in Figure 6-105. The default sort order is by descending Total
Down Time.
You can also display this information graphically, by selecting the pie chart icon at the top of
the report, as shown in Figure 6-106.
Figure 6-107 shows an unstacked bar chart of the same information (right-click and select
Bar Chart).
Figure 6-109 shows the total disk space used by all the files, and the number of files on each
computer. The top row shows the totals for all Agents.
To drill down, select all the computers (using the Shift key) so they are highlighted, then click
on the pie icon, and select History Chart: Space Usage for Selected. The generated report
(Figure 6-110), shows how the total full backup size has fluctuated, and is predicted to
change in the future (dotted lines - to disable this, click Hide Trends).
To display the file count graph, select History Chart: File count from the pie icon in
Figure 6-109. The output report is shown in Figure 6-111, which shows trends in the number
of files on each computer.
Select IBM Tivoli SRM -> Reporting -> Backup -> Backup Storage Requirements ->
Incremental Range Size -> by Filesystem. Select Profile: Tivoli.by Modification as shown
in Figure 6-112.
The generated report shows all the filesystems on the selected computers as in Figure 6-113.
The third column shows the total number and total size of files (for all the systems, then
broken down by filesystem). Then there are “Last Modified” columns for one day, one week,
one month, two months, three, six, nine, and one year selections. Each of these gives the
number and size of the modified files.
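The cumulative behavior of the “Last Modified” ranges can be sketched as follows (Python for illustration; the range boundaries match the report columns, but the function itself is not part of the product):

```python
import time

# Cumulative "Last Modified" buckets: each range counts every file
# modified within it, so larger ranges include the smaller ones.
RANGES_DAYS = (1, 7, 30, 60, 90, 180, 270, 365)

def bucket_counts(mtimes, now=None):
    """Map each range (in days) to the number of files modified within it."""
    now = time.time() if now is None else now
    ages = [(now - m) / 86400.0 for m in mtimes]   # file age in days
    return {d: sum(age <= d for age in ages) for d in RANGES_DAYS}
```

A file changed yesterday therefore appears in every column, which is why the weekly figures in Figure 6-113 always include the daily ones.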
To generate charts, highlight all the systems, and click the pie icon. Select Chart: Count
Distribution for Selected, as shown in Figure 6-114.
The chart is shown in Figure 6-115. Note that when your cursor passes over a bar, a pop-up
shows the number of files associated with that bar.
You can display other filesystems using the Next 2 and Prev 2 buttons. Change the chart
format by right-clicking and selecting a different layout. Figure 6-116 is a pie chart of the same
data. The pop-ups work here also as circled.
Figure 6-116 Pie chart with the number of files which have been modified
This is a quick overview of database space consumption across the network. To drill down on a
particular RDBMS type, select the appropriate magnifying glass icon as in Figure 6-118.
Note you could select any RDBMS which is installed in your network.
The report shows the following information for each Agent with DB2, plus a total (summary):
Computer name
RDBMS instance
Total size
Container Capacity
Container free space
Log file capacity
Tablespace count
Container count
Log file count
Select the computer again, and click the magnifying glass. The report shows the entire DB2
environment running on computer TONGA. We have 3 DB2 UDB databases, shown in
Figure 6-122 and Figure 6-123.
Select an Agent, and click the magnifying glass to drill down. Figure 6-125 displays.
Now select a particular data file, and click the magnifying glass. The generated pie chart is
shown in Figure 6-126. We can see this data file is allocated on the C: drive.
Figure 6-126 Report DB2 File in a Pie Chart for DB2 File
Click the View Logical Volume button at the bottom to display the LUN report (Figure 6-127).
Using this procedure, we can find the LUNs where all the database data files are stored. This
information is useful for a variety of purposes, e.g. for performance planning, availability
planning, and assessing the impact of a LUN failure.
As an example, we will look for the IBM Tivoli Storage Manager server and client options files.
We have chosen this search for all machines because it will return a relatively small number
of results; however, any search criteria could be used.
Now select the File Filter tab. Click in the All files selected area and right-click to create
a new condition, as shown in Figure 6-129.
Now save the new Profile with an appropriate name, (in this instance, Search for TSM
Options Files). The saved Profile now appears in the Profiles list, see Figure 6-132.
Tip: We recommend choosing meaningful Profile names that reflect the content or
function of the Profile.
On the Profiles tab, select the newly created Profile and add it to the Profiles to apply to
Filesystems column, as shown in Figure 6-134.
Figure 6-135 Report with number of found Tivoli Storage Manager Option Files
Select the Options tab, then select Edit Filter as shown in Figure 6-137.
On the Edit Filter pop-up, double click the ATTRIBUTES Filter. Here we will replace the
ORPHANED condition with our own filter, since we want to actually search for Tivoli
Storage Manager option files, not orphaned files (Figure 6-138).
Use the Del button to delete the ORPHANED condition, then select NAME from the
Attributes pull-down, and the Add button to add another Attributes condition. We will
specify to search for Tivoli Storage Manager option files (including those with an .smp extension
for sample files), as in Figure 6-139.
After each file pattern entry, click Add to save it. When all search arguments are entered,
click OK to save the search. The selection is now complete as in Figure 6-139.
Click OK again. Save the search with a new description and name (File -> Save As), so
that you do not overwrite the original Tivoli.Orphaned File Constraint. We called the
search “TSM Option File search.”
Now we have to embed the new Constraint into our Scan.
5. Bind the new Constraint into your Scan
To create or add this entry, go to IBM Tivoli SRM -> Monitoring -> Scans ->
Tivoli.Default.Scan. In the Profiles tab, add administrator.TSM Opt File search to the
right hand panel as in Figure 6-141. This will bind the Tivoli Storage Manager Option file
search to the filesystem search.
Figure 6-141 Binding the new search Profile into the Profiles to apply to Filesystems column
Finally, save and run the Scan. Check the Scan Job log for correct execution, as shown in
Figure 6-142.
Figure 6-143 Summary report of all Tivoli Storage Manager option files
Click the magnifying glass on a filesystem (e.g. C drive). This will show all the files found
which matched the pattern, as in Figure 6-145. Note there are 13 files reported, which
matches the summary view given in Figure 6-135 on page 339.
Figure 6-145 Report for Tivoli Storage Manager Option file searched
System Reports, while included here in the customized reporting section, are in fact not
currently customizable. We will still discuss them in this section as they are part of the My
Reports group.
Reports owned by username’s Reports, where username is the currently logged in Tivoli
Storage Resource Manager username, are modified versions of standard reports from the
Reporting option. You will only see reports here that you have modified and saved.
Batch Reports are reports that are typically set up to run on a schedule, although they can be
run interactively. The key difference between Batch Reports and other reporting options is
that with Batch Reports, the output will always be written to an output file rather than
displayed on the screen.
Figure 6-148 shows the output from running the Storage Capacity system report. We could
have generated exactly the same output by selecting IBM Tivoli SRM -> Reporting ->
Capacity -> Disk Capacity -> By Computer -> Generate Report. Obviously, selecting IBM
Tivoli SRM -> My Reports -> Storage Capacity is a lot simpler.
The only report that does not fall into one of those categories is a usage violation report.
Figure 6-149 shows the output from the All Dbms - User Database Space Usage report. We
are not so much interested in the report contents as such here, but rather in the fact that when
the report was run it produced a report for all users. You can go back to the selection tab and
select specific users if required. This capability exists for all of the System Reports.
However, it is important to remember that you will only see those reports that have been
created by the currently logged in Tivoli Storage Resource Manager username.
We will create a report that is exactly the same as the Storage Capacity system report as
shown in Figure 6-148. In practice this is not something you would normally do as a report
already exists. However, this will demonstrate more clearly how the options relate to each
other.
We select IBM Tivoli SRM -> Reporting -> Capacity -> Disk Capacity -> By Computer ->
Generate Report. Once the report is produced, we save the report definition, using the name
My Storage Capacity. This is shown in Figure 6-150.
Once the report is saved you will see it available under username’s Reports for db2admin as
shown in Figure 6-151.
There are a few features of saved reports worth mentioning here. Firstly, characteristics such
as sort order are not saved with the report definition; however, selection criteria are saved.
Secondly, you can override the selection criteria when running your report. By default, only
the objects selected at the time the report was saved will be reported. However, you can use
the Selection tab when running the saved report to include or exclude objects from the report.
If you change the selection criteria, you can resave the report to update the definition, or save
it under another name to create a new definition.
We will show one brief example here. We will take one of the reports that we created earlier in
our discussion on Reporting (in this case, the Monitored Tables by RDBMS Type report in
Figure 6-25 on page 268) and set it up so that it can be run more easily.
First we run the report by choosing IBM Tivoli SRM for Databases -> Reporting -> Usage
-> All DBMSs -> Tables -> Monitored Tables -> By RDBMS Type. We then saved the report
definition, naming it Monitored Tables by RDBMS Type. This is shown in Figure 6-152.
The report is more easily run now by choosing IBM Tivoli SRM for Databases -> My
Reports -> username’s Reports -> Monitored Tables by RDBMS Type.
Now, it is simply a matter of specifying what has to be reported, plus when and what the
output should be. In this case we are going to create a system uptime report. As shown in
Figure 6-154, we entered our report description of System Uptime and have then selected
Availability ->Computer Uptime ->By Computer and clicked >>. Our selection is then
moved into the right hand panel, Current Selections.
We then selected the Selection tab, which is shown in Figure 6-155. Here we are able to
select a subset of available data by either reporting for a specified time range or a subset of
available systems. We took the defaults here.
On the Options tab, we specified that the report should be executed and generated on the
Agent called LOCHNESS, which is our Tivoli Storage Resource Manager server. We selected
HTML for Report Type Specification and then changed the rules for the naming of the
output file under Output File Specification.
By default the name will be {Report creator}.{Report name}.{Report run number}. In this
case we do not really care who created the report, and a variable like the report run number,
which changes every time a new version of the report is created, makes it difficult to link to
the file from a static Web page. So we changed the report name to be {Report name}.html.
Note that it is possible to run a script after the report is created to perform some type of
post-processing. For example, you might need to copy the output file to another system if your
Web server is on a system that is not running a Tivoli Storage Resource Manager Agent.
On the When to Report tab we specified when the report should be generated. We chose
REPORT Repeatedly and then selected a time early in the morning (3:15 AM) and specified
that the report should be generated every day. This is shown in Figure 6-157.
We left the Alert tab options as default, but it is possible to generate an Alert through several
mechanisms including e-mail, an SNMP trap, or the Windows event log should the generation
of the report fail.
Finally, we saved the report, calling it System Uptime, as shown in Figure 6-158.
We chose IBM Tivoli SRM for Databases -> My Reports -> Batch Reports, right-clicked
Batch Reports, and selected New Batch Report as shown in Figure 6-159.
Figure 6-160 shows the Report tab. We expanded in turn Usage -> All DBMSs -> Tables ->
Monitored Tables -> By RDBMS Type and clicked >>. We also entered a Description of
Monitored Tables by RDBMS Type.
We accepted the defaults on the Selection tab, which is to report on all RDBMS types, and
then went to the Options tab, shown in Figure 6-161. We set the Agent computer which will
run the report to GALLIUM.
Note that the system that you run the report on must be licensed for each type of database
that you are reporting on. If we were to run the report on LOCHNESS, the Tivoli Storage
We also set the report type to HTML and changed the output file name to be {Report
name}.html. This is shown in Figure 6-161.
On the When to Report tab, shown in Figure 6-162, we chose REPORT Repeatedly and set
a start time.
We did not change anything in the Alert tab. We saved the definition with the name Monitored
Tables by RDBMS Type as shown in Figure 6-163.
We can now run the report by choosing IBM Tivoli SRM -> My Reports-> Batch Reports
and then right-clicking on db2admin.Monitored Tables by RDBMS Type and choosing Run
Now.
It is possible to generate output from Batch Reports in various formats, including HTML, CSV
(comma-separated values), and formatted reports. For all of the reports that we set up, we
specified HTML as the output type and set them to run on a daily schedule. That way it is very
easy to use a browser to quickly check the state of the organization's storage. It also means
that anyone can look at the reported data through their browser, without having access to, or
indeed knowing how to use, Tivoli Storage Resource Manager. Obviously, if unrestricted
access to this data were not desirable, some form of password-based security could be
added to the Web page.
Currently, all of the HTML output from Batch Reports is in table format; graphs cannot be
produced. There is also no way to control the layout of the reports in terms of sort order,
which columns are displayed, or column width. The interactive reporting capability of the
product does allow graphs to be produced and gives you some additional control over what
the output looks like. To go further than that, you can export to a CSV file and then use a tool
such as Lotus 1-2-3® or Microsoft Excel to manipulate the output.
Since Tivoli Storage Resource Manager itself is easy to install and use, we likewise took a
fairly simple approach to creating the Web site. We used the Microsoft Word Web Page Wizard
to create the basic layout of the page as shown in Figure 6-166.
The main page has two frames. In the left hand frame we have created links to each of the
report files. The right hand frame is where the reports are displayed.
As additional Batch Reports are needed, it is a relatively simple process of editing the HTML
source and including another hot link.
Obviously, this could be made more sophisticated; for example, the Web server could be
configured to list all HTML files within the report directory.
We then used the Virtual Directory Creation Wizard within Microsoft Internet Information
Server (IIS) to set up access to the reports as shown in Figure 6-167.
We could then access the reports through a Web browser as shown in Figure 6-168.
For each of the Chargeback by user options, a Profile needs to be specified. Profiles are
covered in 5.1.6, “Profiles” on page 180.
IBM Tivoli Storage Resource Manager can directly produce an invoice or create a file in CIMS
format. CIMS is a set of resource accounting tools that allow you to track, manage, allocate,
and charge for IT resources and costs. For more information on CIMS see:
http://www.cims.com.
Figure 6-169 shows the Parameter Definition screen. The costs allocated here do not
represent any real environment, but serve as an example based on these assumptions:
Disk hardware costs, including controllers and switches, are $0.50 per MB.
Hardware is only 20% of the total cost over the life of the storage = $2.50 per MB.
On average, only 50% of the capacity is used = $5.00 per MB used.
The expected life of the storage is 4 years: $5.00 / 48 months = $0.1042 per MB per month.
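The arithmetic behind these assumptions can be checked with a short script (illustrative only; the figures are the sample values above, not real costs):

```shell
#!/bin/sh
# Derive the monthly chargeback rate from the sample assumptions above.
hw=0.50                                                 # hardware cost per MB
total=$(awk "BEGIN { printf \"%.2f\", $hw / 0.20 }")    # hardware is 20% of lifetime cost
used=$(awk "BEGIN { printf \"%.2f\", $total / 0.50 }")  # only 50% of capacity is used
rate=$(awk "BEGIN { printf \"%.4f\", $used / 48 }")     # 4-year life = 48 months
echo "Total cost: \$$total/MB, cost per used MB: \$$used, monthly rate: \$$rate/MB"
# prints: Total cost: $2.50/MB, cost per used MB: $5.00, monthly rate: $0.1042/MB
```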
Chargeback is useful even if you do not actually collect revenue from your users for the
resources consumed. It is a very powerful tool for raising awareness within the
organization of the cost of storage, and the need to have the appropriate tools and processes
in place to manage storage effectively and efficiently.
Figure 6-170 shows the Chargeback Report being created. Currently, it is not possible to have
the Chargeback Report created automatically (that is, scheduled).
We will discuss backup scenarios using both IBM DB2 UDB and Microsoft SQL-Server. Note
that the database included as standard with Tivoli Storage Resource Manager, Cloudscape,
is not recommended for a production environment, hence we do not discuss its backup here.
Tivoli Storage Resource Manager relies on two main components: a Server and one or more
Agents. Each stores configuration data in text files and/or in databases. We now describe
each component and explain where it stores its configuration information.
The configuration files contain information including the TCP/IP ports to be used by the
Server and Agents, database name, and username.
The Tivoli Storage Resource Manager database contains information about the configured
Agents, policies, schedules, and the actual storage resource data.
Figure 7-2 Tivoli Storage Resource Manager integration with Tivoli Storage Manager
Each of these products provides the interface between the IBM Tivoli Storage Manager API
and an application or database API.
IBM DB2/UDB databases can be backed up to IBM Tivoli Storage Manager, as DB2/UDB has
built-in IBM Tivoli Storage Manager API support.
Normal flat files (configuration, log and report files) on the Tivoli Storage Resource Manager
Server can be backed up using the IBM Tivoli Storage Manager Backup/Archive client.
Therefore, the two client types (Backup/Archive client for flat files, API client for DB2 backup)
work together to provide full data protection for your Tivoli Storage Resource Manager
environment.
The DB2/UDB API client and the IBM Tivoli Storage Manager Backup/Archive client can run
simultaneously on the same DB2 server, however, they are totally separate clients as far as
the Tivoli Storage Manager server is concerned and we will configure them separately.
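Because the two clients authenticate as separate nodes, each reads its own options file. A minimal sketch follows; BONNIE_DB2 is the DB2 node name we register later, while the Backup/Archive node name BONNIE and the server address are illustrative placeholders:

```
* c:\tivoli\tsm\baclient\dsm.opt  -  Backup/Archive client (flat files)
NODENAME         BONNIE
TCPSERVERADDRESS tsmserver.example.com
PASSWORDACCESS   GENERATE

* c:\tivoli\tsm\api\db2_dsm.opt  -  API client (DB2 database backups)
NODENAME         BONNIE_DB2
TCPSERVERADDRESS tsmserver.example.com
PASSWORDACCESS   GENERATE
```

Keeping separate files means each client can be pointed at its own node, schedule, and (through the server-side policy) its own management class.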
We need to specify a management class and copy group within a policy domain for DB2
backups. We recommend defining a separate policy domain for the DB2 backups. We will
define a domain called DB2_DOMAIN and register the nodename assigned to the DB2
backup client (in our case, BONNIE_DB2) to it.
DB2 places special requirements on the management class. Each DB2 database backup is
stored as a unique object in the Tivoli Storage Manager Server, with a time stamp forming
part of the object name.
Example 7-1 shows typical Tivoli Storage Manager commands to define an adequate
environment for DB2 backups. We define a policy domain, policy set, management class, and
copy groups for the DB2 environment. We activate the policy set and register our client node
to the policy domain. We are using a storage pool called BACK_LTO as the destination for our
DB2 backups.
Example 7-1 Tivoli Storage Manager setup for Tivoli Storage Resource Manager DB2 backups
DEFINE DOMAIN DB2_DOMAIN DESCRIPTION="Domain for DB2 backups" BACKRETENTION=30
ARCHRETENTION=365
The following parameters were set for the backup copy group:
VEREXISTS=1 to keep only one version of the backup file, as the name of each DB2
backup is unique. (There will never be a newer version of a backup image with the same
name.)
VERDELETED=0 so that if the backup file has been deleted (through db2adutl), Tivoli
Storage Manager does not keep an inactive version of this file.
RETEXTRA=0; this parameter will never be used, because you will never have more than
one version of the backup file. To prevent confusion, set it to the same value as RETONLY.
RETONLY=0 so that when a backup image file becomes inactive it is purged from the
Tivoli Storage Manager Server at the next expiration.
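Pulling these settings together, the copy group definition continues the Example 7-1 setup along these lines. This is a sketch: the policy set and management class names PS_DB2 and MC_DB2 are illustrative, the password is a placeholder, while DB2_DOMAIN, BACK_LTO, and node BONNIE_DB2 come from our environment:

```
DEFINE POLICYSET DB2_DOMAIN PS_DB2
DEFINE MGMTCLASS DB2_DOMAIN PS_DB2 MC_DB2
DEFINE COPYGROUP DB2_DOMAIN PS_DB2 MC_DB2 TYPE=BACKUP DESTINATION=BACK_LTO -
  VEREXISTS=1 VERDELETED=0 RETEXTRA=0 RETONLY=0
ASSIGN DEFMGMTCLASS DB2_DOMAIN PS_DB2 MC_DB2
ACTIVATE POLICYSET DB2_DOMAIN PS_DB2
REGISTER NODE BONNIE_DB2 secret DOMAIN=DB2_DOMAIN
```

Making MC_DB2 the default management class for the policy set means the DB2 API client picks it up without a TSM_MGMTCLASS entry in the DB2 configuration, which matches the recommendation later in this section.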
DB2 configuration
Now, you must configure DB2 so that it uses the correct Tivoli Storage Manager node name,
password, and management class.
This can be done in two different ways. Either you define these parameters within DB2 as
shown in Example 7-4, or you can rely on information taken from the Tivoli Storage Manager
client options file, in association with the default Tivoli Storage Manager settings defined in
7.2.3, “Tivoli Storage Manager Server configuration” on page 373.
In both cases you will need to set some OS environment variables so that the Tivoli Storage
Manager API is able to find the Tivoli Storage Manager options file and knows where to write
log files. These environment variables are shown in Example 7-3.
Tip: We used a separate options file, DB2_DSM.OPT, for our DB2 environment. To make
DB2 aware of it, you must define all the DSMI_ variables as system variables. If you
choose this simple approach, you do not have to add the Tivoli Storage Manager entries
(TSM_MGMTCLASS, TSM_NODENAME, TSM_OWNER, TSM_PASSWORD) to the DB2
configuration of the ITSRMDB database as shown in Example 7-4. If these entries are
already in the DB2 configuration, you can remove them with the following commands;
otherwise, define the variables in the system environment as shown in Example 7-3.
Example 7-3 IBM Tivoli Storage Manager environment variables for API client
DSMI_CONFIG=c:\tivoli\tsm\api\db2_dsm.opt
DSMI_DIR=c:\tivoli\tsm\api
DSMI_LOG=c:\tivoli\tsm\api
Example 7-4 shows the setup of these parameters; however, our recommendation is not to
set any of them, but to rely on the Tivoli Storage Manager options file and default settings. In
that case, for the four settings above: the management class is the default management class
for the node; the owner does not need to be set; the nodename comes from the Tivoli Storage
Manager options file; and the password, when used with the options file setting
passwordaccess generate, is stored in encrypted form in the Windows registry or in a file on
UNIX platforms.
Being able to set these options within DB2 does offer some flexibility when you have multiple
databases on one system with different backup requirements. For example, you can set a
different management class for each database.
If you plan to run online backups of your database, you must also configure DB2 for them.
The recovery mode is set by the LOGRETAIN parameter.
C:\PROGRA~1\SQLLIB\BIN>db2stop force
SQL1064N DB2STOP processing was successful.
C:\PROGRA~1\SQLLIB\BIN>db2start
SQL1063N DB2START processing was successful.
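The LOGRETAIN change itself follows this pattern; a sketch, assuming the ITSRMDB database name from our environment (the actual run corresponds to Example 7-5 in the full book):

```
REM Sketch only: enable roll-forward recovery so online backups are allowed
db2 update db cfg for ITSRMDB using LOGRETAIN RECOVERY
db2stop force
db2start
REM Enabling LOGRETAIN places the database in BACKUP PENDING state,
REM so a full backup is required before applications can reconnect:
db2 backup database ITSRMDB use TSM
```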
Example 7-6 Configuring DB2 backup password to Tivoli Storage Manager API client
C:\Program Files\SQLLIB\adsm>dsmapipw.exe
*************************************************************
* Tivoli Storage Manager *
* API Version = 5.2.0 *
*************************************************************
Enter your current password:bonniedb2
Enter your new password:bonniedb2
Enter your new password again:bonniedb2
C:\PROGRA~1\SQLLIB\BIN>db2start
SQL1063N DB2START processing was successful.
Because the DB2 database files are backed up using DB2, they must be excluded from
backup by the normal Backup/Archive client. We excluded all DB2 files except the recovery
log files by updating the dsm.opt file located in the c:\tivoli\tsm\baclient directory.
EXCLUDE C:\DB2\...\*
INCLUDE C:\DB2\...\*.LOG
See the redbook Backing Up DB2 Using Tivoli Storage Manager, SG24-6147 for detailed
information on setting up DB2 backups with Tivoli Storage Manager.
Note: Please refer to this documentation for detailed information about DB2 protection and
Tivoli Storage Manager integration:
Backing Up DB2 Using Tivoli Storage Manager, SG24-6147
IBM DB2 Universal Database - Administration Guide: Implementation - Version 7,
SC09-2944
IBM DB2 Universal Database - Command Reference - Version 7, SC09-2951
Offline backup
An offline backup will run only if the database is not currently in use: you must stop the
database or at least close all connections. In our case, we do not have to stop the database
itself, since Tivoli Storage Resource Manager is the only application using it. Check this using
the DB2 command shown in Example 7-12. We then stopped the Tivoli Storage Resource
Manager Server; this closes all active connections to the Tivoli Storage Resource Manager
database.
You can see that after stopping the application, message SQL1611W is returned by db2 list
applications for database itsrmdb, which means that no connections are active on the
database.
The backup script, ITSRMBackupOffline (displayed in Example 7-13) performs the following
operations:
1. Stop Tivoli Storage Resource Manager application.
2. Run backup of ITSRMDB database.
3. Start Tivoli Storage Resource Manager application.
@ECHO ON
@REM Stop IBM Tivoli SRM Server
@REM --------------------------
net stop "IBM Tivoli SRM Server"
@REM Get Status and check if Stopped
@REM (findstr /c: matches the phrase; errorlevel 0 means the service is still listed as running)
@REM -------------------------------
net start | findstr /i /c:"IBM Tivoli SRM Server"
@if %errorlevel% NEQ 0 GOTO BACKUPDB
:NOTSTOPPED
@ECHO ON
@REM IBM Tivoli SRM server not stopped - Backup cannot run
@REM -----------------------------------------------------
@echo "IBM Tivoli SRM Not Stopped !!!"
@echo "Backup process cancelled "
exit 1
:BACKUPDB
@ECHO ON
@REM IBM Tivoli SRM server is stopped - Backup can run
@REM -------------------------------------------------
@echo "Backup of ITSRMDB starting ..."
C:\PROGRA~1\SQLLIB\BIN\db2cmd.exe /c /w /i db2 backup database ITSRMDB USE TSM
@if %errorlevel% NEQ 0 echo "Backup failed - Please check error messages"
@ECHO ON
@REM Restart IBM Tivoli SRM Server
@REM -----------------------------
net start "IBM Tivoli SRM Server"
@REM Get Status and check if Started
@REM -------------------------------
net start | findstr /i /c:"IBM Tivoli SRM Server"
@if %errorlevel% EQU 0 GOTO STARTOK
Online backup
An online backup can run while applications are still accessing the data. DB2 will manage the
enqueue process and will use its recovery log to track all changes made to the database
while the backup is running. Your database must be configured for online backups (see
Example 7-5 on page 377). The database backup procedure, ITSRMBackupOnline, displayed
in Example 7-15, includes:
1. List current connections.
2. Run backup of ITSRMDB database.
3. List current connections.
:BACKUPDB
@ECHO ON
@REM DB2 is active - Backup can run
@REM ------------------------------
@echo "Backup of ITSRMDB starting ..."
C:\PROGRA~1\SQLLIB\BIN\db2cmd.exe /c /w /i db2 backup database ITSRMDB ONLINE USE TSM
@if %errorlevel% NEQ 0 echo "Backup failed - Please check error messages"
C:\bkupscripts>
You can check the status of your backups using the db2adutl command, which is only valid
for backups done using Tivoli Storage Manager.
We see our two latest backups with timestamps 20030611142057 and 20030611132049.
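The query we used follows this pattern (a sketch; db2adutl is installed with DB2's Tivoli Storage Manager support, and ITSRMDB is our database name):

```
db2adutl query full database ITSRMDB
REM Obsolete images can be deactivated on the Tivoli Storage Manager server
REM with, for example:
db2adutl delete full older than 30 days database ITSRMDB
```

Deleting through db2adutl (rather than on the Tivoli Storage Manager server directly) keeps DB2's recovery history and the stored backup objects consistent.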
Example 7-18 shows all the steps executed to destroy and recover the Agent files.
From the Tivoli Storage Resource Manager Administrative GUI we checked that the Agent
had started successfully (IBM Tivoli SRM -> Administrative Services -> Agents),
right-clicked SUSE82-1, and chose Check. Figure 7-5 shows that the Agent on SUSE82-1 did
start.
Example 7-19 shows the Server and Agent being stopped, the files being deleted, and the
Server and Agent failing to start.
We launched the Tivoli Storage Manager Backup/Archive client interface and started the
restore of the deleted directories, shown in Figure 7-7.
Figure 7-8 shows the successful restore of the Tivoli Storage Resource Manager files.
We were then able to successfully restart the Server and Agent as shown in Example 7-20.
Figure 7-9 shows the Tivoli Storage Manager Administrative GUI, where all of the Agents
have successfully reconnected to the Server after the restore.
Example 7-21 shows stopping the Server and the SQL DELETE commands used to delete
the contents of the ITSRMDB tables.
C:\PROGRA~1\SQLLIB\BIN>cd C:\bkupscripts
We then restored the database as shown in Example 7-22. We selected the most recent
backup image to restore.
Rollforward Status
Node number = 0
Rollforward status = not pending
Next log file to be read =
Log files processed = S0000008.LOG - S0000011.LOG
Last committed transaction = 2003-06-12-18.03.53.000000
In the ROLLFORWARD command, we specified to which point we want to restore the database.
2003-06-12-18.03.53.000000 is expressed in Coordinated Universal Time (UTC) and is the time
just before we started our SQL DROP commands.
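Put together, the restore and point-in-time recovery we ran follow this pattern (a sketch; the backup timestamp and rollforward point are the values from our environment shown above):

```
db2 restore database ITSRMDB use TSM taken at 20030611142057 replace existing
db2 "rollforward database ITSRMDB to 2003-06-12-18.03.53.000000 and stop"
```

The TO timestamp on the ROLLFORWARD command is interpreted in UTC by default, which is why we converted the local time of the failure before running it.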
Figure 7-10 shows that Tivoli Storage Resource Manager restarted after the database
restore.
Figure 7-11 Tivoli Storage Resource Manager Server Disaster Recovery procedures
We now describe the procedures we used to recover from a complete loss of our Tivoli
Storage Resource Manager Server.
4. Restored all the files on the boot partition (disk C:\) as displayed in Figure 7-13.
Note that after the restore of the boot partition you will be prompted that a reboot of the
system is required. Do not reboot at this time. You need to wait until after the System
Objects have been restored.
The restore of System Objects finished successfully as you can see in Figure 7-15.
Rollforward Status
Node number = 0
Rollforward status = DB working
Next log file to be read = S0000001.LOG
Log files processed = -
Last committed transaction = 2003-06-12-19.18.19.000000
Rollforward Status
Node number = 0
Rollforward status = not pending
Next log file to be read =
Log files processed = -
Last committed transaction = 2003-06-12-19.18.19.000000
The Tivoli Storage Resource Manager Server is now successfully restarted as shown in
Figure 7-16.
Note that if your DB2 files and directories were never backed up using the standard
Backup/Archive client, your DB2 local and system directory will not be synchronized. You will
have to uncatalog the ITSRMDB database, and recreate the database during the restore as
briefly shown in Example 7-25.
C:\Program Files\SQLLIB>db2start
SQL1063N DB2START processing was successful.
Example 7-26 shows the output of the reorgchk command on our ITSRMDB database.
The reorgchk command calculates three formulas (F1, F2, F3) for the tables and three
formulas (F4, F5, F6) for the indexes to determine if the table or index must be reorganized.
Each hyphen displayed in the REORG column indicates that the calculated results were
within the set bounds of the corresponding formula, and each asterisk indicates that the
calculated result exceeded the set bounds of its corresponding formula.
Table reorganization is suggested when the results of the calculations exceed the bounds set
by the formula.
Attention: Refer to the appropriate administration guide for your DB2 platform.
If a reorganization is recommended for a table or an index, it can only be done when no
activity is running against the database. This means that IBM Tivoli Storage Resource
Manager must be stopped in order to reorganize the tables.
Example 7-27 shows the output of a reorg on the ITSRMDB table
TIVOLISRM.T_STAT_FILE, followed by a reorgchk on this table.
(Example 7-27 output truncated; after the reorg, reorgchk shows index
TIVOLISRMT_STAT_FILE_IX on table TIVOLISRM.T_STAT_FILE with no asterisks in the
REORG column, that is, all formulas within bounds.)
Important: It is preferable to reorganize a table according to its most used index.
Refer to the appropriate DB2 administration guide for table and index reorganization.
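The reorganization sequence for one table can be sketched as follows (the table and schema-qualified index name are those reported by reorgchk in our environment; remember that the Tivoli Storage Resource Manager Server must be stopped first):

```
db2 connect to ITSRMDB
db2 reorg table TIVOLISRM.T_STAT_FILE index TIVOLISRM.T_STAT_FILE_IX
db2 runstats on table TIVOLISRM.T_STAT_FILE and detailed indexes all
db2 reorgchk current statistics on table TIVOLISRM.T_STAT_FILE
```

Running runstats after the reorg refreshes the catalog statistics, so the final reorgchk (and the optimizer) see the reorganized state of the table.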
Finally, rebuild the packages (if any) that are associated with ITSRMDB using the db2rbind
command. No log file was created from our db2rbind command, which indicates that there
are currently no packages defined for our ITSRMDB database.
We installed SQL Server 2000 and a new Tivoli Storage Resource Manager instance on
server GALLIUM. From an installation point of view, the process was very similar to when
DB2 was used. Other than a local Agent, this new server had no Agents, and no significant
amount of data was collected so the database was extremely small.
Figure 7-18 shows the Tivoli Storage Manager for Databases GUI being used to start the
backup, and Figure 7-19 shows that the backup completed successfully.
Example 7-29 shows the command line interface for Tivoli Storage Manager for Databases
being used to back up the database.
Figure 7-20 shows a restore of the database being started using the GUI, and Figure 7-21
shows the restore complete.
Figure 7-20 SQL Server database restore started using the GUI
And finally, Example 7-30 shows the restore being run using the command line interface.
Example 7-30 SQL Server database restore using the command line
C:\Program Files\Tivoli\TSM\TDPSql>tdpsqlc restore TivoliSRM /REPLACE
Querying Tivoli Storage Manager server for a list of database backups, please wait...
Both the backups and restores were quite straightforward. No special setup was required for
either Tivoli Storage Manager for Databases or Tivoli Storage Resource Manager. More
information on using Tivoli Storage Manager for Databases is available in the redbook Using
Tivoli Data Protection for Microsoft SQL Server, SG24-6148.
One small issue that we came across is that we received an error when performing the
restore while the SQL-Server Enterprise Console was running. The error reported that the
restore process was unable to open the database in exclusive mode. It would appear
7.7.1 Using Oracle for the IBM Tivoli Storage Resource Manager database
We have not tested backing up and restoring an Oracle database used with Tivoli Storage
Resource Manager; however, the same principles apply as already shown for DB2 and MS
SQL-Server. If you have a Tivoli Storage Manager Server, you can use the separate product
IBM Tivoli Storage Manager for Databases to back up and restore Oracle. More information
on backing up Oracle is available in the redbook Backing Up Oracle Using Tivoli Storage
Management, SG24-6249.
There are five processes which make up a TEC server (see Figure 8-1):
1. tec_reception - Receives events, places them in a reception buffer, and writes them into
the database using a Framework RDBMS Interface Module (RIM). After successfully
placing the event into the reception log in the database, the status of the event is WAITING.
2. tec_rule - When an event becomes WAITING, this will cause the tec_rule process to pull in
the event and place it into its Event Cache. The tec_rule process is configured using a
so-called Rule Base. This can be programmed using a Prolog based language. These
Prolog files are compiled and loaded into the tec_rule process. Based on the defined
rules, the events are treated accordingly. They can be reformatted, matched, related,
time-triggered, and actions can be executed.
3. tec_dispatch - When the tec_rule process finishes processing an event, it hands it over to
the tec_dispatch process. This process stores the event into the database event repository
table using RIM. It then informs the tec_server process that the event was successfully
processed and, if there are any tasks or actions to be executed, it tells the tec_task
process to run them.
4. tec_server - This is the master process of the TEC server. It monitors all the other
processes and starts and stops them. Additionally, it receives a PROCESSED signal from
tec_dispatch and tells the tec_reception process to change the reception log status of the
event to PROCESSED, which is stored in the database accordingly.
5. tec_task - This process executes any tasks or actions requested by the rule engine, and
optionally gives a return code back to it.
A Rule base is divided into event class definitions, which define the attributes of an event; and
rules, which define what should be done with an event.
IBM Tivoli Storage Resource Manager ships only a class definition file (a so-called baroc file)
but no rule file.
Events can be received either through Tivoli Enterprise Framework mechanisms (which
requires some software to be installed on each event sender) or through a socket connection
(which only requires that events are sent according to TEC formats). IBM Tivoli Storage
Resource Manager sends its events through a socket connection directly to the TEC server.
In order to view the events and assign them to administrators to be treated, there is a Java
based program called the TEC Console. This connects to the event repository using
Framework mechanisms (RIM) and a helper process called tec_ui_server. It can be
configured to show different views for different administrators. Events can be modified
graphically.
All the other machines in the lab are running the Tivoli Light Client Framework (LCF) code,
which is the basis for all Tivoli Management activities.
To import the event class definitions, open the Tivoli Desktop and double click the Event
Server icon. In the window (Figure 8-3) you see the defined rule bases, with the active one
highlighted by an arrow.
Choose the active rule base and right-click it. Select Import (Figure 8-4).
Select the check-box Import Class Definitions and enter the fully qualified path to the
definitions file. This file is on the IBM Tivoli Storage Resource Manager CD and is called
tivoliSRM.baroc. (Our example uses a copy of this file on disk.)
After the class definitions are imported, we must compile the rule base to incorporate the
changes (as shown in Figure 8-6). To compile, right-click the active rule base icon and select
Compile.
Carefully check the output for any compilation errors. If there were none, load the rule base
(Figure 8-7). You must recycle the event server whenever you make any changes to the class
definitions. If you only changed rules, then recycling the event server is not necessary.
Stop and start the Event Server by right-clicking the icon on your Tivoli Desktop (Figure 8-8).
To make the changes click the Windows menu and then Configuration (Figure 8-10).
First we have to create an Event Group to specify filters to sort out the IBM Tivoli Storage
Resource Manager events. Right-click Event Groups and select Create Event Group
(Figure 8-11).
Name the Event Group (for example, ITSRM) and right-click it. Select Create Filter
(Figure 8-12).
When the dialog opens up, enter a description to the filter and select Add Constraint
(Figure 8-13).
Choose Class as an Attribute and Operator In, then select SRMAlert in the Value window
(Figure 8-14).
This will add a Constraint to our filter ITSRM. If you add multiple Constraints, they behave as
a boolean AND. If you add more filters to an Event Group, they behave as a boolean OR. You
can test whether your filter matches any events by clicking the Test SQL button in
Figure 8-13. If there are no events in the TEC repository, you will get zero matching events.
You can view the Constraint in plain SQL by clicking the small arrow above the Help button in
Figure 8-13; it displays something similar to Figure 8-15.
After creating the Event Group, we must assign it to a Console. We assume that you already
have a Console defined, so right-click it and select Assign Event Group. The menu in
Figure 8-16 appears.
Select the appropriate roles and click OK. You will see output similar to Figure 8-17.
Your Console should now have the ITSRM Event Group assigned to it (Figure 8-18).
After configuring the Event Console, you can look at the results by changing the view: in the
Windows menu, choose Summary Chart View. The window that appears is the actual
event viewer, which shows all configured event groups (Figure 8-19).
If you click the bar of a particular event group, the event viewer for this event group opens
(Figure 8-20).
In the upper pane, you can see the events that are assigned to you to resolve and that you
can modify. You can acknowledge or close events, run tasks, or view the details of the
selected event.
If you select an event and click the Details button, the window in Figure 8-21 opens. It
describes in plain text the most important details of the selected event.
You get a complete listing of all event attributes by selecting the Attribute List tab
(Figure 8-22). There you can find additional information on where the event originated, when
it occurred, when it was received by the TEC server, and other fields.
The possible event attributes (slots) IBM Tivoli Storage Resource Manager uses are the
following:
adaptor_host: the name of the Tivoli SRM server generating the event.
hostname: the name of the alerting computer.
origin: the IP address of the alerting computer.
source: the name of the application generating the event, that is, IBM Tivoli Storage
Resource Manager.
msg: a text description giving a summary of the event.
messageID: the ID assigned to the associated message by the Tivoli Storage Resource
Manager product.
The event classes IBM Tivoli Storage Resource Manager uses are:
Ram_Changed: the amount of RAM on an Agent has changed.
VirtualMemory_Changed: the amount of virtual memory on an Agent has changed.
Disk_New: a new disk has been discovered on an Agent.
Disk_Missing: a disk has been removed from an Agent.
Disk_Failure: a managed disk has predicted that a disk failure is imminent.
Disk_Defect: a new defect has been detected on a managed disk.
Filesystem_New: a new filesystem has been discovered on a managed computer.
Filesystem_Missing: a filesystem has been removed or unmounted from an Agent.
Filesystem_Reconfigured: the physical space definition of an Agent filesystem has been
reconfigured.
Filesystem_FreeSpace_Low: a managed filesystem is low on free space.
Filesystem_Inode_Low: a managed UNIX filesystem is low on free inodes.
Filesystem_Constraint_Violated: a Constraint on a managed filesystem has been violated.
Filesystem_Auto_Extend: a managed filesystem will be extended.
Filesystem_Stopped_Auto_Extend: extension of a managed filesystem is prevented.
Directory_Missing: a monitored directory has been removed from a managed computer.
Directory_Quota_Exceeded: a user or directory storage quota has been exceeded.
Computer_Offline: an Agent is offline.
Computer_Discovered: a new unmanaged computer has been discovered.
NasComputer_Discovered: a new filer has been discovered.
Filer_Missing: a filer is no longer accessible through the specified resource.
DiskArray_Missing: a disk array is no longer visible to a managed computer.
DiskArray_New: a new disk array has been discovered.
Job_Failure: a scheduled job has failed.
Save your changes by clicking the Save button under the top menu.
This configuration only defines where TEC events should be sent; we have not yet actually
enabled any events. To enable events for a specific topic in IBM Tivoli Storage Resource
Manager, you have to select the TEC check box on every Alert properties tab that you want to
activate. For example, if you want a TEC event sent when the Default Scan fails, navigate to
its properties page and enable it (Figure 8-24).
You can configure IBM Tivoli Storage Resource Manager to send a TEC event for any Alert.
Another example is to send an event when a new computer is discovered, as shown in
Figure 8-25.
You can learn about the Tivoli Enterprise Data Warehouse in the following manuals and
redbook:
Tivoli Enterprise Data Warehouse Release Notes, GI11-0857
Installing and Configuring Tivoli Enterprise Data Warehouse, GC32-0744
Enabling an Application for Tivoli Enterprise Data Warehouse, GC32-0745
Introduction to Tivoli Enterprise Data Warehouse, SG24-6607
[Figure: Tivoli Enterprise Data Warehouse architecture. Source applications feed the central
data warehouse through their ETLs (ITM, Inventory, and TEC ETLs are shown); the Control
server holds the IBM DB2 Data Warehouse Center (DWC) metadata; data mart ETLs populate the
data marts; reporting is done through the Tivoli Reporting Interface or business intelligence
tools such as Cognos, Brio, and Business Objects.]
The first step in introducing TEDW is enabling the source applications. This means providing
all the tools and customizations necessary to import the source operational data into the
central data warehouse. All components needed for that task are collected in Warehouse Packs
for each source application.
An important part of the Warehouse Packs are the ETL (Extract, Transform, and Load)
programs. ETL programs process data in three steps. First they extract the data from a data
source. Then the data is validated, transformed, aggregated, and cleansed so that it fits the
format and needs of the data target. Finally, the data is loaded into the target database.
In TEDW there are two types of ETLs. The central data warehouse ETL pulls the data from
the source applications and loads it into the central data warehouse. The central data
warehouse ETL is also known as source ETL or ETL1. The second type of ETL is the data
mart ETL.
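The three ETL phases can be sketched in a few lines of Python. This is a toy illustration, not TEDW code; the record layout and the hour-level rounding are assumptions made for the example (TEDW keeps hour as the lowest granularity, as described below):

```python
from datetime import datetime

# Toy source data, standing in for rows pulled from a source application.
source_rows = [
    {"host": "wisla", "fs": "/home", "used_mb": "1536", "ts": "2003-08-01 10:42:17"},
    {"host": "wisla", "fs": "/home", "used_mb": None, "ts": "2003-08-01 11:05:03"},
]

def extract(rows):
    """Step 1: pull the raw records from the data source."""
    return list(rows)

def transform(rows):
    """Step 2: validate, cleanse, and aggregate to the target format
    (here: drop incomplete rows and truncate timestamps to the hour)."""
    out = []
    for r in rows:
        if r["used_mb"] is None:          # cleanse: skip invalid measurements
            continue
        ts = datetime.strptime(r["ts"], "%Y-%m-%d %H:%M:%S")
        out.append({"host": r["host"], "fs": r["fs"],
                    "used_mb": int(r["used_mb"]),
                    "hour": ts.replace(minute=0, second=0, microsecond=0)})
    return out

def load(rows, target):
    """Step 3: write the prepared records into the target store."""
    target.extend(rows)

warehouse = []
load(transform(extract(source_rows)), warehouse)
```

In a real central data warehouse ETL, the extract step reads the source application's repository database and the load step writes to the CDW tables; only the three-phase structure carries over from this sketch.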
The central data warehouse (CDW) is the database that contains all enterprise-wide
historical data (with hour as the lowest granularity). This data store is optimized for the
efficient storage of large amounts of data and has a documented format that makes the data
accessible to many analysis solutions. The database is organized in a very flexible way, and
you can store data from new applications without adding or changing tables.
The data mart ETL extracts a subset of historical data from the central data warehouse that
contains data tailored to and optimized for a specific reporting or analysis task. This subset of
data is used to create data marts. Data mart ETL is also known as target ETL or ETL2.
TEDW provides a Report Interface (RI) that creates static two-dimensional reports of your
data using the data marts. The RI is a role-based Web interface that can be accessed with a
Web browser without any additional software installed on the client. You can also use other
tools to perform OLAP analysis, business intelligence reporting, or data mining.
The Control server is the system that contains the control database, which contains metadata
for Tivoli Enterprise Data Warehouse and from which you manage your data warehouse. The
Control server controls communication between the Control server, the central data
warehouse, the data marts, and the Report Interface.
The Control server uses the Data Warehouse Center to define the ETL processes and the
star schemas used by the data marts. You use the Data Warehouse Center to schedule,
maintain, and monitor these processes.
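A star schema, in miniature, is one fact table keyed to surrounding dimension tables. The sqlite3 sketch below shows the shape; the table and column names are invented for the illustration and are not the TEDW schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Dimension tables describe the "who" and "when" of each measurement.
cur.execute("CREATE TABLE dim_host (host_id INTEGER PRIMARY KEY, hostname TEXT)")
cur.execute("CREATE TABLE dim_time (time_id INTEGER PRIMARY KEY, hour TEXT)")
# The fact table holds the measurements, keyed to the dimensions.
cur.execute("""CREATE TABLE fact_fs_usage (
                   host_id INTEGER REFERENCES dim_host,
                   time_id INTEGER REFERENCES dim_time,
                   used_mb INTEGER)""")

cur.execute("INSERT INTO dim_host VALUES (1, 'wisla')")
cur.execute("INSERT INTO dim_time VALUES (1, '2003-08-01 10:00')")
cur.execute("INSERT INTO fact_fs_usage VALUES (1, 1, 1536)")

# A typical report query joins the fact table back to its dimensions.
row = cur.execute("""SELECT h.hostname, t.hour, f.used_mb
                     FROM fact_fs_usage f
                     JOIN dim_host h ON h.host_id = f.host_id
                     JOIN dim_time t ON t.time_id = f.time_id""").fetchone()
```

This star layout is what makes data marts fast for reporting: the fact table stays narrow, and each report dimension is a single join away.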
For more information about Tivoli Enterprise Data Warehouse, refer to Introduction to Tivoli
Enterprise Data Warehouse, SG24-6607.
The Tivoli Storage Resource Manager Warehouse Pack provides the steps that extract data
from the Tivoli Storage Resource Manager Enterprise Repository database. The central data
warehouse ETL transforms that data so it conforms to the central data warehouse format and
then loads it into the central data warehouse of Tivoli Enterprise Data Warehouse. Other
products, like Tivoli Service Level Advisor, pull data into data marts they provide to use with
service level agreement reports.
Collection of data from IBM Tivoli products into one central repository provides the user with
the opportunity to see trends in operation, resource usage and cross product interoperability.
Tivoli Storage Resource Manager historical data is available for use by Tivoli Service Level
Advisor (SLA) and Tivoli Storage Manager.
Consult the Tivoli Service Level Advisor documentation for information about its installation,
configuration, and use. Tivoli Enterprise Data Warehouse and IBM DB2 Data Warehouse
Center ETL processes are designed to perform data collection at least once a day.
[Figure: data flow from the ITSRM Client/Agent through the History Aggregator and ETL1 into
the central data warehouse, and through the SLA ETL2 into the Tivoli Service Level Advisor
data marts.]
A STORAGE_GUID attribute will not be available for the monitored systems until all Agents
are updated to Tivoli Storage Resource Manager Version 1.2.
Consult the Tivoli Storage Resource Manager Version 1.2 documentation for a list of
platforms that support GUID.
You can get the TEDW Fix Packs at the Web site:
http://www.ibm.com/software/sysmgmt/products/support/TivoliDataWarehouse.html
The TEDW required fixes for DB2 are at the Web site:
http://www-1.ibm.com/support/entdocview.wss?uid=swg24001636
Refer to the manual Installing and Configuring Tivoli Enterprise Data Warehouse, GC32-0744
and the redbook Introduction to Tivoli Enterprise Data Warehouse, SG24-6607 for information
on installing TEDW. We do not provide the detailed installation steps here; simply follow the
given instructions.
http://www.ibm.com/software/sysmgmt/products/support/TivoliDataWarehouse.html
Select Downloads and then Warehouse Packs. Download the Storage Resource Manager
Warehouse Pack and unzip it to a directory. We used:
C:\Tivoli-Software\wep\ITSRM_WEP1.2.
1. To import the Warehouse Pack, start the setup program from the Tivoli Enterprise Data
Warehouse installation media. Click Next and on the next screen choose Application
Installation only (Figure 9-3). (Note that when you installed TEDW, the selection was
Custom/Distributed.)
2. Verify that the fully qualified local hostname appears on the next screen (Figure 9-4).
3. Enter the DB2 username and password of the data warehouse database (Figure 9-5),
which you configured when installing TEDW.
4. Next, you need the path to the Warehouse Pack. The directory entered should contain the
file twh_app_install_list.cfg (Figure 9-6), which was part of the zip package downloaded at
the beginning of this section.
5. Choose whether to install additional Warehouse Packs (Figure 9-7) for other Tivoli
products.
7. Depending on what type of machine you have, this can take some time to complete. If
everything went well, the summary screen appears (Figure 9-9).
This step imported the IBM Tivoli Storage Resource Manager Warehouse Pack into Tivoli
Enterprise Data Warehouse.
9.4.3 Register the Tivoli Storage Resource Manager database with ODBC
Next, register the IBM Tivoli Storage Resource Manager repository database with the ODBC
interface on the warehouse manager server.
1. If it is a DB2 database, as in our case, start the Client Configuration Assistant from the
DB2 Program Folder. It shows the ODBC data source that is already configured. To add
the repository DB, click Add in the upper right corner of the window (Figure 9-10).
Note: If you are running IBM Tivoli Storage Resource Manager repository on the
Cloudscape database, you cannot use the Warehouse Enablement Pack.
2. There are three different ways to register a database; we chose Manually configure a
connection to a database (Figure 9-11).
4. Next, enter the hostname and the port that the remote DB2 instance uses (Figure 9-13).
You can determine the port by examining the /etc/services file (or
%SystemRoot%\system32\drivers\etc\services on Windows) on the Tivoli Storage
Resource Manager server (or remote database server).
5. Then, enter the database name (ITSRMDB in our example); see Figure 9-14.
9. If the connection worked, you will see the following screen (Figure 9-18).
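The port lookup in step 4 amounts to searching the services file for the instance's connection-service entry. A sketch, using a sample file (the instance name db2inst1 and port 50000 are example values, not the values from our setup; check the file on your own database server):

```shell
# Lines as they might appear in /etc/services for a DB2 instance named db2inst1.
cat > /tmp/services.sample <<'EOF'
db2c_db2inst1    50000/tcp    # DB2 connection service port
db2i_db2inst1    50001/tcp    # DB2 interrupt service port
EOF

# On the database server itself you would search /etc/services directly.
grep '^db2c' /tmp/services.sample
```

The port number in the matching db2c line is the one to enter in the ODBC registration dialog.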
From the menu bar, choose Tools -> Data Warehouse Center. This is the main
application for configuring the Central Data Warehouse (CDW), the Data Marts and the
ETLs (Figure 9-20).
3. Click the Data Source tab and enter the name for your ODBC connection in the data
source name field (Figure 9-22). The default name is TIVOLISR, which we accepted.
Also enter the appropriate user name and password (Figure 9-23) and click OK.
5. Do not change the database name, just enter the password for the CDW DB2 user
(Figure 9-25). Click OK to complete.
A process can have different types of objects. The first process has only one actual
“executable” step, which you can see at the top position in the right window of Figure 9-26,
called Create Archive. These steps can have three different states:
Development - Used for modifications
Test - You can execute, but changes are rolled back after completion
Production - You can execute and changes persist, but no configuration changes can be made
To run the process, right-click the process and select Mode -> Production (Figure 9-26).
To run the initialization process, select the Warehouse menu and select Work in Progress
(Figure 9-27).
In the new window, select Work in Progress -> Run New Step (Figure 9-28).
In the main window, you can see the progress of the step. If it finished successfully the status
shows Successful (Figure 9-30).
This created some additional configurations inside the IBM Tivoli Storage Resource Manager
repository database.
To schedule the actual ETL to extract data, right-click the BTM_C10_ETL1_Process and
select Schedule (Figure 9-31).
This will open the Schedule properties. Enter suitable parameters (Figure 9-32).
In the last tab, Notification, you can arrange for an e-mail to be sent if a step fails to run
(Figure 9-34).
The schedule is not enabled until you change the mode on the associated steps to
Production. You can select multiple steps (Figure 9-35).
When you now look at the Work in Progress window, you should see the scheduled ETL
process with a status of Scheduled (Figure 9-36).
This process will now run at the specified time. To run it manually, right-click the process and
select Run Now (Figure 9-37).
You should see the progress of each single step in the window. If everything worked well, you
should see the status as Successful for each step (Figure 9-38).
This process retrieved the information from the IBM Tivoli Storage Resource Manager
repository database and loaded it into the Tivoli Enterprise Central Data Warehouse.
Look for entries in the COMPTYP_CD column with the values BTM_Server, BTM_Client, or File_System.
These are entries generated by the Warehouse Pack for IBM Tivoli Storage Resource
Manager (Figure 9-40).
The structure of the Warehouse Pack generated entries is described in the PDF document
shipped in the doc directory of the Warehouse Pack. This document provides in-depth
information about the ETL process and the database structure.
The Warehouse Pack for IBM Tivoli Storage Resource Manager currently contains only the
ETL 1 process. To use the collected data, you can use Tivoli Service Level Advisor (TSLA).
The redbook Introducing IBM Tivoli Service Level Advisor, SG24-6611 explains how to
incorporate different Warehouse Pack data into the TSLA. It also explains how to extract data
and build reports with popular third-party Business Intelligence applications. Alternatively, you
can extract the data and use third party reporting tools as described in Introduction to Tivoli
Enterprise Data Warehouse, SG24-6607.
A future version of the Warehouse Pack for IBM Tivoli Storage Resource Manager will have
predefined reports and the data mart ETL 2.
We assume you have a basic understanding of IBM Tivoli Configuration Manager and a running
installation of Tivoli Enterprise Framework V3.7.1 or 4.1 and IBM Tivoli Configuration
Manager V4.2. For more information about these products see the redbook All About IBM
Tivoli Configuration Manager V4.2, SG24-6612.
Software Distribution enables you to install, configure, and update software remotely within
your network.
Inventory enables you to gather and maintain up-to-date inventory asset management
information in a distributed environment. This helps system administrators and accounting
personnel to manage complex, distributed enterprises.
Activity Planner enables you to define a group of activities that originate from different
applications in an activity plan, submit or schedule the plan for running, and monitor the plan
while it runs.
Change Manager functions with Activity Planner to support software distribution, inventory,
and change management in large networks. It uses reference models to simplify the
management of the network environment.
You can use Resource Manager, together with Software Distribution and Inventory, to perform
the management operations for pervasive devices.
You can use the Web Interface to install and manage various Tivoli Configuration Manager
Web objects. The Web Interface has a server component that pushes software packages,
inventory profiles, and reference models from the Tivoli region to the Web Gateway where
they are stored until they are pulled by the Web Interface endpoint.
With enterprise directory integration, you can exploit organizational information that is stored
in enterprise directories in order to determine a set of targets for a software distribution or an
inventory scan. The Enterprise Directory Query Facility enables you to select a specific
directory object, or container of directory objects, as subscribers for a reference model or an
activity plan.
We created separate Policy Regions for each Tivoli product. Double click Inventory Policy
Region (Figure 10-2).
Make sure that the Inventory Policy Region contains the InventoryConfig resource as a
Managed Resource. To determine if it has been set, right-click the Policy Region and select
Managed Resources. The dialog in Figure 10-3 appears.
For our environment we created the default Query Libraries with the script
inventory_query.sh in the bin/generic/inv/SCRIPTS/QUERIES directory of the Tivoli
installation directory, and created a Profile Manager called Inventory_default_PM
(Figure 10-4). To create a Profile Manager, select Create in the top menu and select Profile
Manager.
Double click the Profile Manager and the dialog in Figure 10-5 appears.
Since we want to create a software-only inventory scan, deselect all hardware-related check
boxes. The only ones we need are the PC Software section (Figure 10-7) and the
UNIX Software section (Figure 10-8).
There are two possible ways to collect software information from endpoints. One is to scan all
the files on your machine and compare them to a predefined list, thus determining an installed
product by filename and size of a significant file in the software package. IBM Tivoli Storage
Resource Manager ships these so-called Inventory Signature files with the product. They can
be found in the TIVINV subdirectory of the installation directory. The signature files are zero
bytes in length and are recognized by filename (TSRM01_02.SIG for the IBM Tivoli Storage
Resource Manager - Manager Version 1.2). The signatures for IBM Tivoli Storage Resource
Manager are already incorporated in the latest inventory signature files, which you can
download from the IBM Software support Web site.
Another way to determine installed software is to query the native software repository of the
OS. This gives you very fast scans, but relies on the fact that the software has registered itself
in the OS during installation, rather than just copying files.
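The filename-and-size matching behind the signature scan can be sketched in a few lines of Python. This is a toy illustration of the technique, not Inventory code; only the TSRM01_02.SIG name and its zero-byte size come from the text above:

```python
import os
import tempfile

# Hypothetical signature list: product -> (filename, size in bytes).
signatures = {
    "IBM Tivoli SRM 1.2": ("TSRM01_02.SIG", 0),  # signature files are zero bytes
}

def scan(root, sigs):
    """Walk the filesystem and report products whose signature file
    (matched by name and size) is present."""
    found = set()
    for dirpath, _, files in os.walk(root):
        for name in files:
            size = os.path.getsize(os.path.join(dirpath, name))
            for product, (sig_name, sig_size) in sigs.items():
                if name == sig_name and size == sig_size:
                    found.add(product)
    return found

# Demonstration against a temporary directory containing the signature file.
root = tempfile.mkdtemp()
open(os.path.join(root, "TSRM01_02.SIG"), "w").close()   # create a zero-byte file
```

The real scanner works from a much larger predefined signature list, but the match logic is the same idea: a known filename plus a known size identifies an installed product.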
For IBM Tivoli Storage Resource Manager you can do both; the choice for your environment
depends on the practices of your IBM Tivoli Configuration Manager environment.
In our examples we chose to use the native software query, so we check just the Scan
Registry for Product Information boxes in the dialog (Figure 10-8), not the Scan for File
Information boxes.
Click OK to close the dialog and distribute the Inventory Profile to your Endpoints. Right-click
on the Profile and select Distribute (Figure 10-9).
This opens a dialog where you can choose the machines which will run the inventory scan.
After selecting, click on the Distribute & Close button (Figure 10-10).
You can determine the status of the inventory scan with a tool called Distribution Status
console. If it is installed, you find it on the main screen of your Tivoli Desktop (Figure 10-1 on
page 459). Double click on the icon and a console opens (Figure 10-11).
In the upper window, double click the Inventory Scan distribution and in the lower window
select All Nodes. You can see which scans have completed successfully, and which are
pending, failed, and so on.
When the scans are all finished, you can query the collected information. There are many
standard queries, but we want to gather only the data for IBM Tivoli Storage Resource
Manager. Therefore we create a new query by selecting Create -> Query (Figure 10-12).
Name the Query and select inv_query as the repository. This is the Inventory Database RIM
object. The table containing the native software information is NATIVE_SWARE_VIEW.
Select the columns you want and add a filter: Column name PACKAGE_NAME = ‘IBM Tivoli
SRM’ (Figure 10-13).
Click Run Query to execute the query while it is being edited. The output shows all the
installed IBM Tivoli Storage Resource Manager products including Agents, Manager and
Consoles (Figure 10-14).
You can also query the Inventory database with a native DB2 client. That enables you to
connect to Business Intelligence tools or script based applications.
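As a stand-in for such a script, here is the same filter run against a sqlite3 mock-up of the inventory table. Only the NATIVE_SWARE_VIEW name and the PACKAGE_NAME filter come from the text above; the other columns and the rows are invented for the illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Mock-up of the Inventory view; real column names will differ.
cur.execute("CREATE TABLE NATIVE_SWARE_VIEW "
            "(COMPUTER_ALIAS TEXT, PACKAGE_NAME TEXT, VERSION TEXT)")
cur.executemany("INSERT INTO NATIVE_SWARE_VIEW VALUES (?, ?, ?)", [
    ("wisla",    "IBM Tivoli SRM", "1.2"),   # Manager
    ("tonga",    "IBM Tivoli SRM", "1.2"),   # Agent
    ("denmark",  "IBM DB2",        "7.2"),   # filtered out by the query
])

# The equivalent of the Inventory query built in Figure 10-13.
hits = cur.execute(
    "SELECT COMPUTER_ALIAS, VERSION FROM NATIVE_SWARE_VIEW "
    "WHERE PACKAGE_NAME = 'IBM Tivoli SRM'").fetchall()
```

Against the real Inventory database you would run the same SELECT through the DB2 client instead of sqlite3; the filter clause is what the query editor generates for you.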
Experiment with the queries; you will find that much additional information can be obtained
from them. In combination with the hardware inventory scans, you can
determine which Fibre Channel cards are in your systems, and which firmware levels and
drivers they are using. In the following query output we queried all the IBM software which
was on the endpoints (Figure 10-15).
You can build one package for each platform or all platforms in one. The benefit of separating
the packages by operating system is that you avoid having to download all the code to all
the endpoints before installation occurs. If temporary space is an issue, you should split into
multiple packages. This in turn makes installation tasks slightly more complicated,
because you have to group the endpoints by operating system. We will give some simple
examples here, but if you already have a production ready installation of Configuration
Manager, then the design rules will be in place, and you should build the packages according
to them.
Right-click the package name and select Properties. The dialog in Figure 10-17 displays.
Enter the package version and a title for your package. Leave all the other parameters at their
default values.
For the actual installation we use the command line procedure. First copy the installation
media to the hard drive. We only need the setup.exe and the directories install, java and
agent (Figure 10-18).
After setting the package properties, we add objects to the package. From the screen in
Figure 10-16, click the tab Execute program as shown in Figure 10-19.
With this type of action you can distribute files to the endpoint, run the provided script and
delete the temporary files. During the Tivoli Storage Resource Manager installation, the setup
program ends at once and additional processes are spawned. For this reason, we cannot use
software distribution for corequisite files, since these files would be deleted while they are still
needed. Therefore, we need additional actions to distribute the installation media.
After selecting the action, the Execute Program Properties dialog appears (Figure 10-20).
Enter the full path to the installation setup program. The example shows the installation of the
Windows agent. This must be the path as it appears after transferring the files to the endpoint.
In the arguments field, enter the parameters for silent installation. The syntax is:
setup.exe -s servername -d installdir -p serverport -q agentport -x (no scripts from
server) -n (no initial scan)
In our example, the only non-default parameters specify the server name (WISLA) and no
initial scan of the Agent. The full installation command is:
setup.exe -s wisla -n
The Working Directory entry points to the installation directory. Optionally, you can redirect
standard out and standard error to files. Click OK to end the dialog.
This should be sufficient for the installation process. Configuration Manager can also do
deinstallation, so to configure the deinstallation process select the Remove tab from
Figure 10-20 on page 472, as shown in Figure 10-22.
A single command is sufficient to remove the software. The path to the deinstallation program
is in the installation directory of the IBM Tivoli Storage Resource Manager agent. We need an
argument for the uninstallation program. To open the dialog, click Advanced (Figure 10-23).
The parameters to specify are java -uq. This procedure is not documented in the manual, but
is derived from the script to remove the UNIX agents and proved to work well. Be sure to add
the working directory for the process.
We chose to make just one software package for Windows and AIX machines. To prevent
execution on an AIX machine, you can specify a condition when to run that action, using the
Condition button at the top right hand corner of Figure 10-22. Figure 10-24 displays.
Choose os_name from the list box, add an == operator, and enter Windows_NT. This will
ensure execution only on the desired platform.
Using the same procedure, we added an extra action for the AIX installation, starting from the
Execute Program Properties dialog shown in Figure 10-20 on page 472. The actions to define
are mainly the same except for the paths and the setup.aix program. Also, we added a
condition that allows execution only on AIX machines.
Since we cannot download the installation media with the Execute Program action, we have
to distribute it with an extra step.
In the main screen of the Software Package editor (Figure 10-16 on page 470), choose the
Add Object tab and click the Add Directory icon. The dialog in Figure 10-25 appears.
To ensure all subdirectories are copied, click the Advanced button in the lower left
corner, and select the Descend Directories check box (Figure 10-26).
Save this package to an .sp file on your server and exit the Software Package Editor.
Double-click the object PM_SD_ITSRM to open the Profile Manager, and create a Profile with
the name of your file package including the version (Figure 10-29).
After you have created the Profile, an empty package icon appears in the Profile Manager.
Add any subscribers for the distribution of the package.
A dialog appears, where you can select the node on which you have previously created the
Package and the path to the .sp file. Checking Build will include all the source files,
programs, and actions in one single file (.spb) to be distributed to the target endpoint. Enter
the location where you want to store the .spb file. You might want to store it on your software
distribution server or on any of your software depot servers. If you are rebuilding it, check
Overwrite (Figure 10-32).
The package icon should now show a sealed package, ready to ship to your targets. To start
the installation, right-click the package and choose Install (Figure 10-33).
The install dialog, which is shown in Figure 10-34, lets you select on which endpoints to install
the software. Our package will work on Windows and AIX servers. Additional checks can be
made, for example, whether the software is already installed or, with the Change Manager
feature, whether licensing allows you to install the software. For additional information see
the redbook All About IBM Tivoli Configuration Manager Version 4.2, SG24-6612.
You can also schedule the installation and query inventory to look for hardware or software
Constraints. To ensure that every host in your environment has an IBM Tivoli Storage
Resource Manager agent, you can use the strategies described in Implementing Automated
Inventory Scanning and Software Distribution After Auto Discovery, SG24-6626, to discover
new nodes through Tivoli NetView, install an endpoint, perform an inventory query, and
automatically deploy the Tivoli Storage Resource Manager agent on them.
Another method of identifying hosts to install software on is to query an LDAP directory, such
as Microsoft Active Directory or IBM Directory, with the Enterprise Directory Query facility.
You could then create a machine group for IBM Tivoli Storage Resource Manager and
automatically deploy the software once a machine belongs to the group.
Configuration Manager enables you to remove the software as well. For this function,
right-click the package and select Remove (Figure 10-35).
All the other options like verify, clean, etc., are not defined and will not work.
[Figure: IBM Tivoli Monitoring overview. Resource models are designed, created, and
debugged in the Workbench; profiles are customized and distributed from the TMR, which
installs the ITM engines on the endpoints; the engines send heartbeats, and their data is
rolled up to the data warehouse for trend analysis and displayed on the Web health console.]
If you want in depth monitoring for your IBM Tivoli Storage Resource Manager DB2 instance,
you can use these additional modules.
In our examples here, we use the shipped monitor Parametric Services to watch the status of
the Windows services, which are required to run IBM Tivoli Storage Resource Manager.
Additionally, there is a default action to restart stopped services.
Open the Tivoli Desktop and navigate to your Monitoring Policy Region (Figure 11-2).
Create a profile manager to contain the monitoring profiles. Select Create -> Profile
Manager and create a dataless Profile manager, called PM_DM_ITSRM in our example
(Figure 11-3).
Open the Profile Manager, select Create -> Profile and choose a Tmw2kProfile (which is the
Monitoring profile resource). If this entry does not show up in the list, make sure the
Tmw2kProfile is in the managed resources list of the Policy Region. Figure 11-4 shows a
Profile called P_DM_ITSRM.
Double click on the newly created profile and in the screen that appears, click Add with
Defaults. This opens a chooser window, where you can select the resource model you want
to add to your profile. In the Category list box, choose Windows and select the Parametric
Services entry (Figure 11-5).
After adding the resource model, we have to edit the model to include the services we want to
monitor. For that, click Edit (Figure 11-6).
In this window, we can adjust all attributes belonging to that resource model. To specify the
services to monitor open the Parameters window (Figure 11-7). You must enter the names of
the services exactly as they appear in the Windows Registry under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services. The services that IBM
Tivoli Storage Resource Manager needs to run are:
DB2
TSRMagt1
TSRMsrv1
Click Apply Changes and Close to close the window, then bring up the next dialog with the
Indications button. As you can see from the definitions, the default action when a service is
stopped or failed is to restart the service automatically. Additionally, a CRITICAL TEC event is
generated (Figure 11-8).
To enable the TEC events globally for this Profile and to specify to which TEC server the
events are sent, click on the Properties menu in the Profiles main window (Figure 11-4 on
page 488) and the following screen will open (Figure 11-9).
Close the windows with the OK button until you are back in the Profile Manager main window
(Figure 11-10). Subscribe the endpoints running the IBM Tivoli Storage Resource Manager -
Manager with Profile manager -> Subscriber and distribute the Profile using Profile
Manager -> Distribute -> Distribute Now.
You can determine if your resource models are running on a particular endpoint by issuing the
wdmlseng command at the command line on your Tivoli Managed Region (TMR) server.
Example 11-1 shows typical output.
P_DM_Basic_Win#tonga-region
TMW_EventLog :Running
For demonstration purposes, we stopped the TSRMsrv1 service on our Server. After a few
seconds the following TEC events appear in the TEC console (Figure 11-11).
IBM Tivoli Monitoring detected the service that has been stopped and restarted it accordingly.
Part 6 Appendices
These scripts can be downloaded as described in “Locating the Web material” on page 503.
@echo on
dir %1\ARC*.*
echo ARCHORA.BAT ended successfully ...
exit 0
:NOTORACLE
echo Error - Not Oracle database
exit 4
:DIRNOTEXIST
echo Error - Directory does not exist
exit 4
:DSMCERROR
echo Error while running DSMC command
dir %1\ARC*.*
type dsmerror.log
Example A-2 shows the BKPSQLLOG.bat script, which can be used to back up the MSSQL
transaction log should this log reach a high usage percentage.
:NOTSQL
echo Error - Not MSSQL database
exit 4
@ECHO ON
@REM Get Status and check if Stopped
@REM -------------------------------
net start | findstr /i "IBM Tivoli SRM Server"
@if %errorlevel% EQU 0 GOTO BACKUPDB
:NOTSTOPPED
@ECHO ON
@REM IBM Tivoli SRM server not stopped - Backup cannot run
@REM -----------------------------------------------------
@echo "IBM Tivoli SRM Not Stopped !!!"
@echo "Backup process cancelled "
exit 1
:BACKUPDB
@ECHO ON
@REM IBM Tivoli ITSRM server is stopped - Backup can run
@REM -------------------------------------------------
@echo "Backup of ITSRMDB starting ..."
C:\PROGRA~1\SQLLIB\BIN\db2cmd.exe /c /w /i db2 backup database ITSRMDB USE TSM
@if %errorlevel% NEQ 0 echo "Backup failed - Please check error messages"
@ECHO ON
@REM Get Status and check if Started
@REM -------------------------------
net start | findstr /i "IBM Tivoli SRM Server"
@if %errorlevel% EQU 0 GOTO STARTOK
Example A-4 shows the script that we used to perform an online backup of the IBM Tivoli
Storage Resource Manager DB2 database in 7.3, “Backup procedures” on page 378.
:BACKUPDB
@ECHO ON
@REM DB2 is active - Backup can run
@REM ------------------------------
@echo "Backup of ITSRMDB starting ..."
C:\PROGRA~1\SQLLIB\BIN\db2cmd.exe /c /w /i db2 backup database ITSRMDB ONLINE USE TSM
@if %errorlevel% NEQ 0 echo "Backup failed - Please check error messages"
Select the Additional materials and open the directory that corresponds with the redbook
form number, SG246886.
IBM Redbooks
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this redbook.
For information on ordering these publications, see “How to get IBM Redbooks” on page 508.
Tivoli Storage Management Concepts, SG24-4877
Getting Started with Tivoli Storage Manager: Implementation Guide, SG24-5416
Deploying the Tivoli Storage Manager Client in a Windows 2000 Environment, SG24-6141
Backing Up DB2 Using Tivoli Storage Manager, SG24-6147
Using Data Protection for Microsoft SQL Server, SG24-6148
Backing Up Oracle Using Tivoli Storage Management, SG24-6249
Early Experiences with Tivoli Enterprise Console, SG24-6015
Introducing IBM Tivoli Service Level Advisor, SG24-6611
Introduction to Tivoli Enterprise Data Warehouse, SG24-6607
All About IBM Tivoli Configuration Manager V4.2, SG24-6612
Implementing Automated Inventory Scanning and Software Distribution After Auto
Discovery, SG24-6626
Other resources
These publications are also relevant as further information sources:
IBM Tivoli Storage Resource Manager V1.1 Configuration and Getting Started Guide,
SC32-9067
IBM Tivoli Storage Resource Manager V1.1 Installation Guide, GC32-9066
IBM Tivoli Storage Resource Manager V1.1 Reference Guide, SC32-9069
Tivoli Enterprise Data Warehouse Release Notes, GI11-0857
Installing and Configuring Tivoli Enterprise Data Warehouse, GC32-0744
Enabling an Application for Tivoli Enterprise Data Warehouse, GC32-0745
You can also download additional materials (code samples or diskette/CD-ROM images) from
that site.
Index

M
mainframe
   Storage Resource Management 4
manual storage management 12
measuring storage resources 5
Microsoft
   Excel 360
   Internet Information Server 81, 362
Microsoft Active Directory 483
Microsoft Cluster Services 61
modified since backup files 289–290
MOF 18
monitoring 49, 160
monitoring storage 23
most at risk files 37, 287
Motif 101
MSCS 61, 123

N
NAS 24–25, 28, 49, 52–53, 55–57, 76, 172
   exported filesystems 53
   login id 76, 110–111
   password 76
   Quota 200
   SNMP 76
   Storage Resource Management 25, 116
native software repository 463
NDS 53, 58, 91, 116–117, 119–120
NetView 190
   event forwarding to TEC 427
NetWare 28, 49, 52–53, 58, 91, 109, 117, 172
   login 119
   Storage Resource Management 25
Network Appliance
   quota 223
Network Attached Storage. See NAS
network discovery 49
non-Tivoli applications 432
NTFS 124

O
object-oriented 18
obsolete files 8, 181, 274
ODBC 439
offline backup 382
OLAP
   analysis 433
online backup 385
Oracle 29, 53, 61, 70, 74, 92, 111, 142, 230, 259, 370
   archive log 235
   Database Configuration Assistant 142
   JDBC driver 114, 142
   regular administration 242
   SID 143
orphaned files 181, 274
out-of-space condition 31

P
Parametric Services monitor 487
people costs 13
PERL 229
Ping 35, 40, 49, 121, 174, 248, 250, 262
platform administration 14
policy based automation 4
policy management 200
pre-defined views 38
Probe 27–28, 36, 38, 40–41, 49, 78, 146, 177, 186, 194, 232, 248, 250, 262, 297–298
Profile 37, 180, 187, 219, 276, 287, 333, 364
profile manager 487
profile overview 486
progressive incremental backup 293
Prolog 412
provisioning 54
proxy model 20

Q
Quorum disk 127
Quota 50, 115, 180, 200, 218, 250, 268
   violation report 276

R
RDBMS 27
Redbooks Web site 508
   Contact us xxviii
replication solutions 16
reporting 38, 48
   assets 40, 250, 252
   availability 40, 174, 250, 262
   backup 41, 250
   backup storage requirements 291
   backups 287
   batch 345, 351, 360
   by userID 41, 345
   capacity 40, 186, 250, 263
   computer uptime 319
   Constraint violation 274
   customized 345
   database assets 256
   database batch 356
   database capacity 265
   database Quota violations 282
   database space usage 347
   database usage 266
   disk capacity 263
   filesystem capacity 263
   owned by a username 348
   Quota violation 276
   saved reports 349
   scheduling 345, 360
   storage capacity 263, 346
   storage subsystems 40, 250
   top 10 reports 316
   uptime 319
   usage 40, 182, 250, 266
   usage violation 40, 250, 268
   Event Filters 418
   event format 426
   Event Groups 418
   event processing 412
   events from Tivoli Storage Resource Manager 427
   import class definitions 414–415
   load rule base 417
   RIM 412
   Rule Base 412–414
   stop or start event server 417
   Test SQL 421
TEC commands
   wtdumprl 414
tec_dispatch 412
tec_reception 412
tec_rule 412
tec_server 412
tec_task 412
tec_ui_server 413
Tivoli Configuration Manager 414, 458
   create Profile Manager 460
   Distribution Status console 466
   inventory 459
   Inventory Profile 464
   Inventory Signature files 463
   software distribution 470
   software distribution profile 478
   Web interface 458
Tivoli Desktop 459, 466, 478, 487
Tivoli Distributed Monitoring 191
Tivoli Enterprise Console 176
Tivoli Enterprise Console see TEC
Tivoli Enterprise Data Warehouse 414, 432–433
   Administration 445
   configuration 444
   data mart 432
   database 436
   ETL processes 433
   ETL programs 432
   ODBC 439
   source applications 432
   Subject Areas 445
   Warehouse Packs 432
   Warehouse Schemas 445
   Warehouse Sources 445
   Warehouse Targets 445
Tivoli Enterprise Framework 413
Tivoli Light Client Framework 414
Tivoli Managed Region 493
Tivoli Management Framework 414
Tivoli Monitoring 414, 486
   Parametric Services monitor 487
   profile manager 487
   resource model 486
   TEC events 491
   wdmlseng command 493
Tivoli Monitoring for Databases 242
Tivoli NetView 190
Tivoli SAN Manager 16
Tivoli Service Level Advisor 433
Tivoli Storage Manager 16, 229, 235, 287, 295, 371, 433
   API 371
   archive bit 287
   backup reporting 295
   backup volume prediction 294
   Backup/Archive client 374
   client options file 295, 377, 396
   Constraint violation report 268
   copy group 373
   dsm.opt 295, 377
   management class 373
   nodename 375
   policy domain 373
   progressive incremental backup 293
   resetarchiveattribute 292, 295
   RETEXTRA 374
   RETONLY 374
   VERDELETED 374
   VEREXISTS 374
Tivoli Storage Manager capabilities
   Backup-Restore 251
   Disaster preparation and recovery 399
Tivoli Storage Manager commands
   db2adutl 378
   dsmapipw 377
   QUERY NODE 375
Tivoli Storage Manager for Databases 405
Tivoli Storage Resource Manager 16, 23, 25–26, 62, 266, 274, 360, 428, 433
   ad hoc jobs 162
   administration 99, 102
   administration group 98
   administration GUI 81
   administrative tasks 98
   Agent 24, 26–27, 32, 48, 50, 52, 91, 100, 102
   Agent administration 103
   Agent automatic upgrade 107
   Agent auto-start 93
   Agent backup 379
   Agent configuration file 107
   Agent details 104
   Agent health 107
   Agent id 91
   Agent installation 89, 91
   Agent license 109
   Agent log 106
   Agent platforms 29, 68
   Agent Port 90
   Agent port 75, 92, 104
   Agent quick installation 92
   Agent restore 387
   Agent shutdown 107
   Agent statistics 34
   Agent status 32, 103
   Agent tasks 52
   agent upgrade 96
   Alert 23, 25, 27, 35, 41, 50–51, 101, 119, 173, 176, 189, 203, 205, 208, 227, 272, 428–429
Tivoli Storage Resource Manager (continued)
   Import Class Definitions 415
   install Warehouse Pack 435
   installation 67, 143
   installation directory 77, 83, 89, 91–92, 136, 145
   interactive reporting 248
   interface look and feel 101
   inventory 49
   Inventory Signature files 463
   invoices 42
   JDBC driver 73
   job output 162
   job scheduling 188
   job status 162
   jobs 105
   license key 72
   licensing 26, 53, 71, 89, 108
   local database 59
   log retention 115
   logging 103, 106, 115
   login 98
   logon properties 138
   LUN modeling 204
   LUN provisioning 200–201, 203, 211
   mail port 115
   maintenance 93
   Managed Devices 26
   Managed Systems 48
   maximum report size 39
   modified since backup files 289–290
   monitored directories 255
   monitored server summary 35
   monitoring 24–25, 49, 160
   monitoring services 490
   most at risk files 37, 287
   MSCS 123
   My Reports 248, 345
   NAS 25, 53, 55–57, 76, 172, 200
   NAS probe 116
   native client 24, 26
   navigation 101
   NDS 116, 119
   NetWare 53, 58, 91, 109, 172
   NetWare login 119
   NetWare reporting 251
   Network Appliance Quota 223
   network discovery 49
   Network Quota 218
   obsolete files 181, 274
   orphaned files 181
   OS User Group 171, 218
   OS User Groups 41
   overview 24
   Panel Retention 101
   Ping 35, 40, 49, 121, 174, 248, 250, 262
   policy management 200
   ports 75, 79, 88, 90, 92
   pre-defined reports 248
   Probe 27, 36, 38, 40–41, 49, 78, 146, 177, 186, 194, 248, 250, 262, 297–298
   PROBE_ME 93
   products 25
   Profile 37, 162, 180, 187, 219, 276, 287, 333, 364
   quick installation 92
   Quota 50, 115, 180, 200, 218, 250, 268
   Quota scheduling 221
   Quota violation report 276
   read-only access 99
   remote access 81
   remote administration 81
   remote database 53, 59, 70, 142, 145
   remote execution 27
   report scheduling 345, 360
   reporting 23, 25, 27, 38, 48, 247
   Reporting Tab 101
   reports on the Web 361
   repository 24, 27–28, 48, 50, 73, 78, 101, 248
   repository database 53
   retention period 120–121
   roles 49
   sample script 229
   sample scripts 497
   saved reports 349
   scalability 53, 59
   Scan 27, 37–38, 41, 49, 53–54, 75, 90, 106, 161, 180, 185–186, 198, 248, 274, 276, 337
   Scan job log 260
   scanned files 54
   scheduled actions 229
   scheduled jobs 27, 35, 48, 50–51, 105, 162
   scheduled reports 48, 249
   scheduler 102
   script 41, 50–51, 75, 176, 192, 200
   script parameters 195, 228
   scripts 145
   security 98–99
   security levels 29
   Server 24, 26–27, 32, 48–49, 102
   Server backup 381
   server configuration file 98
   Server installation 69
   Server log 103
   Server name 75
   Server platforms 28, 68
   Server port 75, 88, 90, 92
   Server restore 390
   Server shutdown 103
   Server status 102
   server.config 138
   service 73, 100, 102, 136, 138
   services monitoring 490
   shared database 62
   shared disk 124
   shutdown 103, 107
   SNMP 208
   software distribution 470
   space requirements 77, 89, 91
   standard reporting 251
   standby server 60
   storage inventory 49
   storage statistics 160
virtualization 16
Visio 13
volume group 54

W
warehouse pack 432
wasted space 21
wasted space report 250
WBEM 17
wdmlseng 493
Web browser 24, 26, 49, 361
Web Health Console 486
Web reporting 41
Windows 56
   archive bit 287, 292, 295
   backup 381
   clustering 61, 123
   domain 49–50, 56
   Domain Controller 172
   domain users 99
   event log 35, 176, 191
   MSCS 123
   Service Pack 133
   Storage Resource Management 4, 25
   workgroup 50
Windows 2000
   LDAP 116
   restore 396
   System Objects 397–398
WWW Server 27