Edson Manoel, Cristiano Colantuono, Hans-Georg Kühne, Devi Raju, Ghufran Shah, Sergio Henrique Soares Monteiro
ibm.com/redbooks
International Technical Support Organization

Implementing Tivoli Data Warehouse 1.2

June 2004
SG24-7100-00
Note: Before using this information and the product it supports, read the information in Notices on page xix.
First Edition (June 2004)

This edition applies to Version 1.2 of the Tivoli Data Warehouse product.
© Copyright International Business Machines Corporation 2004. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Figures . . . ix
Tables . . . xv
Examples . . . xvii
Notices . . . xix
Trademarks . . . xx
Preface . . . xxi
The team that wrote this redbook . . . xxi
Become a published author . . . xxiii
Comments welcome . . . xxiii

Part 1. Fundamentals . . . 1

Chapter 1. Introducing Tivoli Data Warehouse 1.2 . . . 3
1.1 Data warehousing basics . . . 4
1.1.1 Data warehouse . . . 4
1.1.2 Data mart . . . 5
1.1.3 Business intelligence . . . 5
1.1.4 Data mining . . . 7
1.2 Tivoli Data Warehouse . . . 8
1.3 What is new in Tivoli Data Warehouse 1.2 . . . 10
1.3.1 Crystal Enterprise . . . 10
1.3.2 IBM DB2 UDB for OS/390 and z/OS support . . . 13
1.3.3 Flexible and extended configuration support . . . 17
1.3.4 Installation enhancements . . . 18
1.3.5 Serviceability and scalability improvements . . . 19
1.4 Tivoli Data Warehouse architecture . . . 20
1.4.1 Tivoli Data Warehouse control center server . . . 21
1.4.2 Source databases . . . 21
1.4.3 Central data warehouse . . . 21
1.4.4 Data marts . . . 21
1.4.5 Warehouse agents and agent sites . . . 22
1.4.6 Crystal Enterprise Server . . . 22
1.5 Benefits of using Tivoli Data Warehouse . . . 23

Chapter 2. Planning for Tivoli Data Warehouse 1.2 . . . 27
2.1 Hardware and software requirements . . . 28
2.1.1 Hardware requirements . . . 29
2.1.2 Software requirements . . . 30
2.1.3 Database requirements . . . 32
2.1.4 Crystal Enterprise requirements . . . 33
2.2 Physical and logical design considerations . . . 36
2.2.1 Source databases . . . 37
2.2.2 Control server . . . 37
2.2.3 Central data warehouse . . . 38
2.2.4 Data marts . . . 40
2.2.5 Single machine installation . . . 42
2.2.6 Distributed deployment on UNIX and Windows servers . . . 43
2.2.7 Distributed deployment on z/OS, UNIX, and Windows servers . . . 45
2.2.8 Warehouse agents . . . 49
2.2.9 Considerations about warehouse databases on z/OS . . . 54
2.2.10 Coexistence with other products . . . 55
2.2.11 Selecting port numbers . . . 56
2.3 Database sizing . . . 56
2.4 Security . . . 57
2.4.1 Authority required to install and maintain IBM DB2 UDB . . . 57
2.4.2 Authority required to install Tivoli Data Warehouse . . . 57
2.4.3 Firewalls . . . 58
2.4.4 Controlling access to data in the warehouse . . . 59
2.4.5 Protecting information in Crystal Enterprise Professional for Tivoli . . . 59
2.4.6 Multicustomer and multicenter support . . . 60
2.5 Network traffic considerations . . . 61
2.5.1 Architectural choices . . . 62
2.5.2 Scheduling . . . 63
2.6 Integration with other business intelligence tools . . . 64
2.7 ETL development . . . 65
2.8 Skills required for a Tivoli Data Warehouse project . . . 67
2.8.1 Implementation . . . 67
2.8.2 Data collection . . . 67
2.8.3 Data manipulation (ETL1 and ETL2) . . . 68
2.8.4 Reporting . . . 70

Chapter 3. Getting Tivoli Data Warehouse 1.2 up and running . . . 71
3.1 Preparing for the installation . . . 72
3.1.1 Ensuring fully qualified host names . . . 74
3.1.2 Installing and configuring IBM DB2 client and server . . . 76
3.1.3 Crystal Enterprise installation . . . 86
3.2 Tivoli Data Warehouse 1.2 installation . . . 92
3.3 Quick start deployment . . . 93
3.3.1 Quick start deployment: installation and configuration . . . 94
3.3.2 Configuring the control database . . . 99
3.3.3 Creating ODBC connections to the data mart databases . . . 101
3.4 Distributed deployment . . . 103
3.4.1 Distributed deployment installation: Windows and UNIX . . . 104
3.4.2 Distributed deployment installation: z/OS . . . 115
3.4.3 Creating ODBC connections to the data mart databases . . . 123
3.5 Installing warehouse agents . . . 126
3.5.1 Installing IBM DB2 Warehouse Manager . . . 128
3.5.2 Creating the remote agent sites . . . 131
3.6 Verification of the installation . . . 135
3.6.1 Verifying the remote agent install . . . 141
3.7 Installing warehouse enablement packs . . . 142

Chapter 4. Performance maximization techniques . . . 145
4.1 DB2 performance . . . 146
4.2 Operating system performance tuning . . . 150
4.2.1 Windows environments . . . 150
4.2.2 Primary Windows performance factors . . . 151
4.2.3 AIX environments . . . 155
4.3 Tivoli Data Warehouse performance . . . 155

Part 2. Case study scenarios . . . 159

Chapter 5. IBM Tivoli NetView Warehouse Enablement Pack . . . 161
5.1 Case study overview . . . 162
5.2 IBM Tivoli NetView WEP overview . . . 163
5.3 Prerequisites . . . 165
5.3.1 Verifying prerequisites . . . 165
5.3.2 Gathering installation information . . . 166
5.4 Preparing NetView for data collection . . . 167
5.4.1 Enabling NetView to export data for Tivoli Data Warehouse . . . 167
5.4.2 NetView SmartSets configuration . . . 170
5.4.3 Configuring NetView Data Warehouse daemon (tdwdaemon) . . . 176
5.4.4 Verifying NetView data collection enablement . . . 178
5.5 Installation of the NetView WEPs . . . 181
5.5.1 Backing up the TDW environment . . . 181
5.5.2 Establishing ODBC connections . . . 182
5.5.3 Installing NetView Enablement Pack Software . . . 185
5.5.4 Defining the authority to the warehouse sources and targets . . . 188
5.6 Testing, scheduling, and promoting the ETLs . . . 191
5.6.1 Promoting the ETLs to TEST mode . . . 192
5.6.2 Testing the ETLs . . . 193
5.6.3 Scheduling the ETLs . . . 195
5.6.4 Promoting the ETLs to Production status . . . 197
5.7 Running NetView ETLs on remote agent sites . . . 198
5.8 Reporting . . . 206
5.8.1 Accessing the Crystal ePortfolio feature . . . 206

Chapter 6. IBM Tivoli Monitoring Warehouse Enablement Pack . . . 225
6.1 Case study overview . . . 226
6.2 IBM Tivoli Monitoring WEP overview . . . 227
6.3 Prerequisites . . . 231
6.4 Installing the ITM WEP data collector component . . . 232
6.4.1 Activate data collection . . . 237
6.5 Installing and configuring ITM Generic WEP . . . 241
6.5.1 Backing up the TWH databases . . . 241
6.5.2 Establishing an ODBC connection on the Control Center . . . 242
6.5.3 Installing the ITM 5.1.1 AMX ETL processes . . . 247
6.5.4 Installing AMX Fix Packs . . . 253
6.5.5 Defining the authority to the warehouse sources and targets . . . 254
6.5.6 Modifying the ETL for the source table name to the RIM user . . . 257
6.6 Installing and configuring ITM for OS WEP . . . 262
6.6.1 Backing up the TWH databases . . . 262
6.6.2 Installing the ITM 5.1.1 AMY ETL processes . . . 262
6.6.3 Installing AMY Fix Packs . . . 264
6.6.4 Defining the authority to the warehouse sources and targets . . . 265
6.7 Testing, scheduling, and promoting the ETLs . . . 267
6.7.1 Testing the ETLs . . . 267
6.7.2 Checking that data has been collected . . . 270
6.7.3 Scheduling the ETLs . . . 272
6.7.4 Promoting the ETL status to Production mode . . . 274
6.8 Reporting . . . 275
6.8.1 Available reports . . . 275
6.8.2 Accessing the Crystal ePortfolio . . . 275
6.9 Troubleshooting of ITM data collection . . . 286
6.9.1 Using itmchk.sh script . . . 287
6.9.2 Manual checking of ITM data collection . . . 290

Chapter 7. IBM Tivoli Storage Manager Warehouse Enablement Pack . . . 297
7.1 Case study overview . . . 298
7.2 IBM Tivoli Storage Manager WEP overview . . . 299
7.3 Prerequisites . . . 300
7.4 Installing and configuring ITSM WEP 5.2 . . . 301
7.4.1 Changes required on the IBM Tivoli Storage Manager servers . . . 301
7.4.2 Installing the IBM Tivoli Storage Manager ODBC . . . 301
7.4.3 Backing up the TWH databases . . . 304
7.4.4 IBM Tivoli Storage Manager WEP installation . . . 305
7.4.5 Defining the authority to the warehouse sources and targets . . . 313
7.5 IBM Tivoli Storage Manager ETL processes . . . 314
7.5.1 ANR_C05_ETL1_Process . . . 315
7.5.2 ANR_C10_EXPServer_Process . . . 319
7.5.3 ANR_M05_ETL2_Process . . . 319
7.6 Testing, scheduling, and promoting the ETLs . . . 320
7.6.1 ETL data collection verification . . . 320
7.7 Reporting . . . 322
7.7.1 Available reports . . . 322
7.7.2 Accessing the Crystal ePortfolio . . . 322

Part 3. Appendixes . . . 337

Appendix A. IBM DB2 UDB administration for other relational DBAs . . . 339
Common DBA tasks . . . 340
Creating databases . . . 340
Creating databases in IBM DB2 . . . 341
Creating databases in Oracle . . . 341
Creating databases in Sybase . . . 342
Managing space . . . 343
DB2 space management . . . 343
Oracle space management . . . 345
Sybase space management . . . 346
Creating objects in the database . . . 346
Creating tables in DB2 . . . 346
Creating tables in Oracle . . . 347
Creating tables in Sybase . . . 347
Additional table control parameters . . . 347

Appendix B. Tivoli Data Warehouse 1.2 reference . . . 349
Report listing . . . 350
Measurement sources . . . 357

Appendix C. Warehouse Enablement Packs properties file . . . 361
The twh_install_props.cfg properties file . . . 362

Related publications . . . 365
IBM Redbooks . . . 365
Other publications . . . 365
Online resources . . . 366
How to get IBM Redbooks . . . 367
Help from IBM . . . 367
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 369
Figures
1-1 IBM DB2 Data Warehouse Manager . . . 9
1-2 Crystal Enterprise multi-tier architecture . . . 12
1-3 TDS OS/390 and TDW 1.2 Data flow . . . 15
1-4 Distributed and OS/390 Data feeds into Tivoli Data Warehouse . . . 16
1-5 Multiple source applications loading into a central data warehouse . . . 17
1-6 Tivoli Data Warehouse: the big picture . . . 20
1-7 Detail Component view of Tivoli Data Warehouse 1.2 . . . 23
1-8 Integrated Systems Management . . . 25
2-1 Single machine installation . . . 43
2-2 Distributed deployment on Windows and UNIX systems . . . 44
2-3 Operational data sources and the CDW databases on the same server . . . 45
2-4 Data sources, CDW, and data mart databases on a z/OS system . . . 46
2-5 Operational data sources both on z/OS and on distributed systems . . . 47
2-6 Separate data mart databases on z/OS system and distributed system . . . 48
2-7 Two CDWs on a Windows or UNIX system and on a z/OS system . . . 49
2-8 Warehouse agent on control server . . . 50
2-9 Warehouse agents on data targets . . . 51
2-10 Configuration with a warehouse agent on the source . . . 52
2-11 Tivoli Data Warehouse and firewalls . . . 58
2-12 Business intelligence integration . . . 65
3-1 Installation process overview . . . 73
3-2 Install DB2 V7 components . . . 78
3-3 Create DB2 Services - DB2 Instance db2inst1 . . . 79
3-4 Create the DB2 fenced user . . . 80
3-5 Administration Server window . . . 81
3-6 Select DB2 Enterprise Edition . . . 83
3-7 Installation Type . . . 88
3-8 MSDE security configuration . . . 89
3-9 Installation window . . . 89
3-10 Completion window . . . 90
3-11 Crystal Enterprise Launchpad . . . 91
3-12 Crystal Administration Tools . . . 92
3-13 Quick start deployment configuration . . . 94
3-14 InstallShield Wizard . . . 95
3-15 Tivoli common logging directory . . . 95
3-16 Setup window . . . 96
3-17 DB2 connection . . . 97
3-18 Crystal connection . . . 98
3-19 Summary window . . . 98
3-20 Completion window . . . 99
3-21 Configuring the IBM DB2 data warehouse center . . . 100
3-22 Configuring the Warehouse Control Database Management . . . 101
3-23 Distributed deployment scenario . . . 104
3-24 Install Shield Wizard . . . 105
3-25 Tivoli Common Logging Directory . . . 106
3-26 Setup Window . . . 107
3-27 Before proceeding with TDW 1.2 distributed installation . . . 108
3-28 DB2 connection . . . 108
3-29 Central data warehouse on remote host . . . 109
3-30 Central data warehouse database server list . . . 110
3-31 Data mart on remote host . . . 110
3-32 Data mart database server list . . . 111
3-33 Crystal connection . . . 112
3-34 Summary window . . . 112
3-35 Completion window . . . 113
3-36 Configuring the IBM DB2 Data Warehouse Center . . . 114
3-37 Configuring the Warehouse Control Database Management . . . 115
3-38 Adding central data warehouses . . . 116
3-39 z/OS IBM DB2 Server information . . . 117
3-40 z/OS central data warehouse database configuration . . . 117
3-41 Central data warehouse server on z/OS . . . 118
3-42 Central data warehouse summary window . . . 118
3-43 Central data warehouse on z/OS install . . . 119
3-44 Adding data marts . . . 120
3-45 z/OS IBM DB2 Server information . . . 121
3-46 z/OS data mart database configuration . . . 121
3-47 Data mart server on z/OS . . . 122
3-48 Data mart creation summary window . . . 122
3-49 Data mart on z/OS install . . . 123
3-50 Distributed environment with agent . . . 127
3-51 Select the DB2 Warehouse Manager components . . . 129
3-52 Install DB2 V7 menu on AIX . . . 130
3-53 Create DB2 Service Menu on AIX . . . 131
3-54 Setup Window - create warehouse agents . . . 132
3-55 Before proceeding with remote agent sites creation . . . 133
3-56 Warehouse agents - specify the TDW control server . . . 134
3-57 Successful remote agent creation window . . . 135
3-58 DB2 Data Warehouse services . . . 136
3-59 Remote Agent Sites . . . 139
3-60 Verify Remote Agents on Tivoli Data Warehouse Control Center . . . 141
5-1 Distributed deployment scenario . . . 163
5-2 IBM Tivoli NetView Warehouse Enablement Pack data flow . . . 164
5-3 NetView Configure data export to DB2 - Parameters . . . 168
5-4 NetView Configure data export to DB2 - create database . . . 168
5-5 NetView Configure data export to DB2 - register and start tdwdaemon . . . 169
5-6 NetView SmartSet desktop . . . 171
5-7 Microsoft SmartSet Advanced attributes . . . 172
5-8 Create Microsoft SmartSet . . . 173
5-9 NetView SmartSets - Attributes . . . 174
5-10 NetView SmartSets - Overview . . . 175
5-11 SmartSet Microsoft contents . . . 176
5-12 Create an ODBC data source for NETVIEW . . . 183
5-13 Add an ODBC data source for NETVIEW . . . 183
5-14 Configure NetView Source database connectivity . . . 184
5-15 NetView WEP installation - List of WEPs to install . . . 185
5-16 NetView WEP installation - Properties file . . . 186
5-17 NetView WEP installation - List of WEPs to install NetView . . . 187
5-18 NetView WEP installation - successful installation . . . 187
5-19 Data Warehouse Control Center - check control database . . . 189
5-20 Configure NetView data warehouse sources . . . 190
5-21 Configure NetView data warehouse targets . . . 191
5-22 Promote ETLs to test mode . . . 192
5-23 Test ETL process steps . . . 193
5-24 Work in Progress - Log file . . . 194
5-25 Sample contents . . . 195
5-26 Schedule ANM_c05_ETL1_Process . . . 196
5-27 Schedule configuration for ANM_C05_ETL1_Process . . . 197
5-28 Promote ANM_c05_ETL1_Process . . . 198
5-29 Select remote agents properties . . . 200
5-30 Change remote agents properties - sources and targets . . . 201
5-31 Select ETL process properties . . . 202
5-32 Demote ETL processes to development mode . . . 202
5-33 Change the ETL processes agent site . . . 203
5-34 Work in Progress - Run ETL . . . 204
5-35 Work in progress - Check ETL . . . 205
5-36 Log Details menu . . . 205
5-37 Crystal Enterprise - Launchpad . . . 207
5-38 Crystal Enterprise 9 - ePortfolio . . . 208
5-39 Crystal Enterprise 9 - Log in . . . 209
5-40 Crystal Enterprise 9 - Folders . . . 210
5-41 Crystal Enterprise 9 - Tivoli Reports: IBM Tivoli NetView . . . 211
5-42 Crystal Enterprise 9 - Daily Status Summary by SmartSet . . . 212
5-43 Crystal Enterprise 9 - Schedule . . . 212
5-44 Crystal Enterprise 9 - Parameters for Schedule Option . . . 213
5-45 Crystal Enterprise 9 - Schedule Parameters . . . 214
5-46 Crystal Enterprise 9 - Schedule Parameter Selection . . . 215
5-47 Crystal Enterprise 9 - Parameters: Specific Time Frame . . . 216
5-48 Crystal Enterprise 9 - Report History . . . 217
5-49 Failed report generation . . . 218
5-50 Crystal Enterprise 9 - Report . . . 219
5-51 Crystal Enterprise 9 - Report (count) . . . 220
5-52 Summary of total status changes by SmartSet example . . . 221
5-53 Nodes with longest outage times example . . . 222
5-54 Total daily status changes in monitored network example . . . 223
6-1 Environment for our case study . . . 227
6-2 Overview of ITM integration with Tivoli Data Warehouse . . . 228
6-3 IBM Tivoli Monitoring data flow . . . 229
6-4 Resource Model Data Logging . . . 230
6-5 Aggregation time line . . . 231
6-6 Installing warehouse support . . . 233
6-7 RIM setup options . . . 234
6-8 Logging option . . . 238
6-9 Client Configuration Assistant opening dialog . . . 243
6-10 Add Database Wizard . . . 244
6-11 Add System dialog window . . . 245
6-12 Select ITM_DB in the dialog window . . . 246
6-13 Confirmation dialog window . . . 246
6-14 User ID and Password dialog window . . . 247
6-15 ODBC connection successful . . . 247
6-16 Install a Warehouse Pack window . . . 248
6-17 Tivoli Common Logging Directory window . . . 249
6-18 Add Warehouse Pack window . . . 249
6-19 Location of installation properties . . . 250
6-20 Installation menu window . . . 251
6-21 Installation summary window . . . 252
6-22 AMX installation completion window . . . 253
6-23 Installation of AMX Fix Pack 6 . . . 254
6-24 IBM Tivoli Monitoring, Version 5.1.1 Generic ETL1 Sources . . . 255
6-25 AMX_ITM_RIM_Source user ID information . . . 255
6-26 AMX_TWH_CDW_Source user ID information . . . 256
6-27 IBM Tivoli Monitoring, Version 5.1.1 Generic ETL1 Target . . . 256
6-28 AMX_TWH_CDW_Target user ID information . . . 257
6-29 Tables and views of AMX_ITM_TIM_Source . . . 258
6-30 Table name filter specification . . . 258
6-31 Endpoint tables . . . 259
6-32 AMX_c05_ETL1 process . . . 260
6-33 Selecting new table . . . 261
6-34  Installation menu window with the AMY pack  263
6-35  AMY installation completion window  264
6-36  Installation of AMY Fix Pack 6  265
6-37  AMY_TWH_CDW_Source user ID information  266
6-38  AMY_TWH_MART_Target user ID information  267
6-39  Change ETL mode to Test  268
6-40  Manually test the ETLs  269
6-41  Work in progress window  270
6-42  Sample Content of table F_OS_HOUR  271
6-43  Schedule AMX_c05_ETL1_Process  272
6-44  Schedule configuration for AMX_c05_ETL1_Process  273
6-45  Promoting ETLs to Production  274
6-46  Crystal Enterprise - Launchpad  276
6-47  Crystal Enterprise 9 - ePortfolio  277
6-48  Crystal Enterprise 9 - Log in  278
6-49  Crystal Enterprise 9 - Folders  279
6-50  Crystal Enterprise 9 - available reports for ITM  280
6-51  Scheduling Operating System Busiest System report  281
6-52  Crystal Enterprise 9 - parameters  282
6-53  Crystal Enterprise 9 - Parameters for the report  282
6-54  Crystal Enterprise 9 - Report History  283
6-55  Operating System Busiest Systems report  284
6-56  Operating System Paging File Utilization  285
6-57  Operating System UNIX CPU Statistics  286
7-1  TDW 1.2 - distributed deployment scenario  299
7-2  ITSM ODBC Installation  302
7-3  ITSM ODBC data source configuration panel  303
7-4  Install a Warehouse Pack window  305
7-5  Tivoli Common Logging Directory window  306
7-6  Add Warehouse Pack window  306
7-7  Location of installation properties  307
7-8  Data mart and remote agent site settings  308
7-9  Central data warehouse and remote agent site settings  309
7-10  Editing IBM Tivoli Storage Manager ODBC settings  310
7-11  ITSM ODBC Settings  311
7-12  Installation menu window  311
7-13  Installation summary window  312
7-14  Installation Progress and Completion window  313
7-15  Sample of Process Model ANR_C05_ETL1_Process  316
7-16  Sample Content of Table D_NODE  321
7-17  Crystal Enterprise - Launchpad  323
7-18  Crystal Enterprise 9 - ePortfolio  324
7-19  Crystal Enterprise 9 - Log in  325
7-20  Crystal Enterprise 9 - Folders  326
7-21  Crystal Enterprise 9 - available reports for ITSM  327
7-22  Scheduling Operating System Busiest System report  328
7-23  Crystal Enterprise 9 - parameters  329
7-24  Crystal Enterprise 9 - Parameters for the report  330
7-25  How Has Clients Use of Server Storage Changed Over Time?  331
7-26  How Has Clients Use of Server Storage Changed Over Time?  332
7-27  How Has Clients Use of Server Storage Changed by Platform?  333
7-28  How Has My Server Storage Space Utilization Changed Over Time?  334
7-29  Which Clients are Using the Most Server Storage?  335
C-1  Location of the twh_install_props.cfg file  362
Tables
2-1  Hardware recommendations for Tivoli Data Warehouse components  29
2-2  Additional hard disk space requirements  30
2-3  Software requirements  31
2-4  Web servers and OS supported by Crystal Web Connector  35
2-5  Requirements for Tivoli Data Warehouse components  36
2-6  Agent sites placement for data transfers to a central data warehouse  52
2-7  Where to place agent sites for data transfers to data marts  53
2-8  Default port used in Tivoli Data Warehouse environments  56
5-1  Environment for NetView integration  162
5-2  NetView WEP Prerequisite Check - NetView server platform  165
5-3  NetView Enablement Pack installation information  166
5-4  Case Study SmartSets attributes  173
5-5  Add database wizard - register TCP/IP  184
5-6  NetView sources and targets  188
6-1  Hardware and operating systems  227
7-1  Environment for NetView integration  298
7-2  ITSM WEP Warehouse Object Names  314
C-1  WEP installation properties  363
Examples
3-1  twh_create_datasource script  102
3-2  Verification of central data warehouse database on z/OS  119
3-3  Verification of data mart database on z/OS  123
3-4  twh_create_datasource script  124
3-5  Verify control server (twh_list_cs)  136
3-6  Verify central data warehouse (twh_list_cdws)  137
3-7  Verify data mart databases  137
3-8  Verify remote agent site (twh_list_agentsites)  138
3-9  Verify Crystal Enterprise Professional for Tivoli installation  139
3-10  Verify data user  140
3-11  twh_configwep command output  143
5-1  Verify NetView source database updates  169
5-2  NetView tdwdaemon configuration file tdwdaemon.properties  178
5-3  Restart the NetView data warehouse daemon tdwdaemon  178
5-4  Status of NetView data warehouse daemon (tdwdaemon)  179
5-5  Status of the NetView SNMP collector daemon (snmpcollect)  179
5-6  Check the NetView source database  180
6-1  Testing the RIM object  236
6-2  Datacollector configuration  236
6-3  wdmlseng command output  239
6-4  wdmcollect command output  239
6-5  Sample SQL that checks the collection  240
6-6  Running itmchk.sh tool  287
6-7  itmchk.sh tool report  288
6-8  Retrieving the date of last data upload into ITM database  290
6-9  Names of the endpoints collecting data  290
6-10  wrimtest command output  291
6-11  Status of resource models distributed on an endpoint  292
6-12  msg_DataCollector.log  293
6-13  trace_tmnt_rimh_eng1.log  294
6-14  trace_dmxengine.log  295
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: AIX CICS DataJoiner DB2 Universal Database DB2 Domino DRDA Everyplace ibm.com IBM IMS Informix Lotus MQSeries MVS NetView NetVista OS/390 RACF Redbooks (logo) Redbooks RMF S/390 SP2 Tivoli Enterprise Console Tivoli Enterprise Tivoli WebSphere z/OS
The following terms are trademarks of other companies: Crystal and Crystal Enterprise are trademarks of Business Objects. Intel and Intel Inside (logos) are trademarks of Intel Corporation in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Other company, product, and service names may be trademarks or service marks of others.
Preface
With Tivoli Data Warehouse, you can analyze historical trends from various Tivoli and customer applications. The Tivoli Data Warehouse infrastructure enables a set of extract, transform, and load (ETL) utilities to extract and move data from Tivoli application data stores to a central repository. The open architecture of Tivoli Data Warehouse also enables data from non-Tivoli applications to be integrated into its central repository. Data from the central repository can be extracted into data marts that pertain to the reporting needs of selected groups. These data marts can also be used to produce cross application reports. This IBM Redbook focuses on planning, installation, customization, use, maintenance, and troubleshooting topics related to the new features of the Tivoli Data Warehouse version 1.2. This is done using a number of case study scenarios and several warehouse enablement packs. The instructions given in this book are very detailed and explicit. These instructions are not the only way to install the products and related prerequisites. They are meant to be followed by anyone to successfully install, configure, and set up Tivoli Data Warehouse environments of any size.
administrator. He graduated in Physics at the University of Rome and collaborated with the Italian National Institute of Nuclear Physics, developing simulation programs for high energy physics experiments. Dr. Hans-Georg Kühne is a software architect for SerCon in Germany. He graduated in Physics at the University of Muenster, developing simulation programs for high energy physics experiments. He joined SerCon in 1996, working in the distributed systems management area. He has planned and implemented several systems management solutions in the areas of software distribution, availability management, and business automation. Devi Raju is a Tivoli Implementation Specialist for IBM India. She started her career with IBM and has been with IBM for 8 years. Devi has 4 years of experience in Enterprise Systems Management and has worked on various large Tivoli customer projects. She is also a Tivoli Certified Consultant on PACO products. Ghufran Shah is an IBM Certified Deployment Professional and an IBM Certified Instructor based in the UK with os-security.com. He holds a degree in Computer Science and has over 8 years of experience in Systems Development and Enterprise Systems Management. As well as teaching Tivoli courses worldwide, his areas of expertise include Tivoli Systems Management architecture, implementation, and training, together with provisioning and orchestration. His focus is now on leveraging IBM solutions to provide customers with the vision and reality of an On Demand environment. Sergio Henrique Soares Monteiro is an IT Specialist in Brazil. He has over 10 years of experience in database administration and development. He has worked with Oracle, DB2, Informix, and SQL Server on UNIX and Windows, including clustered servers. He currently works as a database administrator at the IBM CTI in Hortolandia, Brazil. His areas of expertise include sizing, performance tuning, and RDBMS internals.
Thanks to the following people for their contributions to this project: Budi Darmawan International Technical Support Organization, Austin Center David Stephenson IBM Global Services, Australia Diana Marcattili IBM Global Services, Italy Georg Holzknecht Senior Systems Consultant, T-Systems CDS GmbH, Germany
xxii
Jonathan Cook, Brian Jeffrey, Mike Mallo Tivoli Data Warehouse development team, IBM Software Group, Austin Ken Hannigan IBM Tivoli Storage Manager development team, IBM Software Group, Tucson Yvonne Lyon, editor International Technical Support Organization, San Jose Center
Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online Contact us review redbook form found at:
ibm.com/redbooks
Mail your comments to: IBM Corporation, International Technical Support Organization Dept. JN9B Building 003 Internal Zip 2834 11400 Burnet Road Austin, Texas 78758-3493
Part 1. Fundamentals

Chapter 1. Introducing Tivoli Data Warehouse 1.2
Data in a warehouse has these main characteristics:
- Subject-oriented: Data that gives information about a particular subject instead of about a company's ongoing operations
- Integrated: Data that is gathered into the data warehouse from a variety of sources and merged into a coherent whole
- Time-variant: All data in the data warehouse is identified with a particular time period
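A minimal sketch can make these characteristics concrete. The table and column names below are invented for illustration; they are not part of the Tivoli Data Warehouse schema:

```python
import sqlite3

# Illustration only: invented table/column names, not the Tivoli schema.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE cpu_usage_fact (      -- subject-oriented: one subject, CPU usage
        host        TEXT,
        source_app  TEXT,              -- integrated: rows merged from several sources
        period_end  TEXT,              -- time-variant: each row names its time period
        avg_cpu_pct REAL
    )""")
con.executemany("INSERT INTO cpu_usage_fact VALUES (?, ?, ?, ?)", [
    ("srv01", "ITM",     "2004-06-01T01:00", 42.5),
    ("srv01", "NetView", "2004-06-01T01:00", 41.5),
    ("srv02", "ITM",     "2004-06-01T01:00", 77.0),
])
# Because every row carries a time period, historical questions become queries:
trend = con.execute(
    "SELECT host, AVG(avg_cpu_pct) FROM cpu_usage_fact "
    "WHERE period_end LIKE '2004-06-01%' GROUP BY host ORDER BY host").fetchall()
print(trend)   # → [('srv01', 42.0), ('srv02', 77.0)]
```

Note how the query averages readings from two different source applications for the same host and period, which is only possible because the data was integrated into one coherent table.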
Consolidating and organizing data for better business decisions can lead to a competitive advantage, and learning to uncover and leverage those advantages is what business intelligence is all about. The amount of business data is increasing exponentially; in fact, it doubles every two to three years. More information means more competition. In the age of the information explosion, executives, managers, professionals, and workers all need to be able to make better decisions faster, because now, more than ever, time is money. Much more than a combination of data and technology, BI helps you to create knowledge from a world of information. Get the right data, discover its power, and share the value: BI transforms information into knowledge. Business intelligence is the practice of putting the right information into the hands of the right user at the right time to support the decision-making process.
The need to reduce IT costs and leverage existing corporate business information. The investment in IT systems today is usually a significant percentage of corporate expenses, and there is a need not only to reduce this overhead, but also to gain the maximum business benefits from the information managed by IT systems. New information technologies like corporate intranets, thin-client computing, and subscription-driven information delivery help reduce the cost of deploying business intelligence systems to a wider user audience, especially information consumers like executives and business managers. Business intelligence systems also broaden the scope of the information that can be processed to include not only operational and warehouse data, but also information managed by office systems and corporate Web servers.
Most companies already collect and analyze massive quantities of data. Data mining techniques can be implemented rapidly on existing software and hardware platforms to enhance the value of existing information resources, and can be integrated with new products and systems as they are brought on-line. Given databases of sufficient size and quality, data mining technology can generate new business opportunities by providing these capabilities: Automated prediction of trends and behaviors: Data mining automates the process of finding predictive information in large databases. Questions that traditionally required extensive hands-on analysis can now be answered directly from the data and quickly. A typical example of a predictive problem is targeted server performance. Data mining uses data on past critical events to identify the servers most likely to cause future critical problems. Other predictive problems include forecasting server outage and other forms of performance degradation that is likely to occur, given certain events. Automated discovery of previously unknown patterns: Data mining tools sweep through databases and identify previously hidden patterns in one step. An example of pattern discovery is the analysis of IBM Tivoli Monitoring data to identify seemingly unrelated events that are often received together.
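The "automated prediction of trends" idea can be sketched with a deliberately simple model and invented sample data: fit a least-squares trend line to each server's recent CPU history and flag the servers projected to cross a critical threshold.

```python
# Sketch only: invented hourly CPU% samples per server, not real monitoring data.
history = {
    "srv01": [55, 60, 66, 71, 77],   # climbing steadily
    "srv02": [40, 41, 40, 42, 41],   # flat
}

def slope_intercept(ys):
    """Ordinary least-squares fit of y = a*x + b for x = 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def at_risk(samples, threshold=90.0, horizon=5):
    """Project the trend `horizon` steps past the last sample."""
    a, b = slope_intercept(samples)
    projected = a * (len(samples) - 1 + horizon) + b
    return projected >= threshold

flagged = sorted(s for s, ys in history.items() if at_risk(ys))
print(flagged)   # srv01 trends up at ~5.5%/hour and is projected past 90%
```

Real data mining tools use far richer models, but the shape of the task is the same: learn from past measurements, then rank or flag resources by predicted future behavior.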
DB2 warehouse management consists of:
- An administrative client to define and manage data warehousing tasks and objects, and warehouse or data mart operations: the Data Warehouse Center
- A manager to manage and control the flow of data: the warehouse server
- Agents residing on IBM DB2 Universal Database Enterprise Edition server platforms to perform requests from the manager or warehouse server: the warehouse agents
[Figure: DB2 warehouse management architecture - the warehouse server directs warehouse agents that move data and metadata from relational and non-relational sources (flat files, Web, or SAP R/3) into DB2 and non-DB2 targets, coordinated through a control database and accessed by end users]
If we define a report as an entity that visualizes the output of SQL clauses (an SQL pull), then Crystal Enterprise Professional Version 9 for Tivoli, which is shipped with the Tivoli Data Warehouse 1.2 product, comes supplied with a number of standard reports provided by the Tivoli Data Warehouse Enablement Packs (WEPs). When a WEP makes a report available to Crystal Enterprise, the layout, legends, colors, and overall look-and-feel of the report can all be customized. However, to create a new report (using the definition above), or to modify the SQL pull criteria of an existing report, Crystal Reports and a different version of Crystal Enterprise are required: Crystal Enterprise Version 9 Special Edition, whose license must be purchased separately. Crystal Enterprise Version 9 Special Edition allows you to:
- Extend your reporting capabilities to develop, deliver, and analyze new reports created from your Tivoli Systems Management data using Crystal Reports Version 9
- Provide support for approximately 75 concurrent online users
- Add, modify, and design new reports from your Tivoli Systems Management data using Crystal Reports Version 9
Next we present a brief introduction to the Crystal Enterprise architecture. IBM Tivoli has developed a key partnership with Crystal to ensure the deepest level of integration and ongoing support for the multi-tier Crystal Enterprise architecture shown in Figure 1-2. Note that some of the functions may not be available in the Crystal Enterprise Professional for Tivoli product.
[Figure 1-2: Crystal Enterprise multi-tier architecture - client tier; intelligence tier (Web Component Server, Cache Server, Event Server); processing tier (Job Server, Page Server); data tier (OLAP and relational ODBC data sources)]
In Crystal Enterprise, there are four tiers, each of which can be installed on one machine, or with the Crystal Enterprise Version 9 Special Edition, spread across many. The Crystal Enterprise architecture tiers are as follows: Client tier: Administrators and end users interact with this component directly, which is made up of the applications that enable people to administer, publish, and view reports. Intelligence tier: These components manage the Crystal Enterprise administration system, which consists of maintaining all aspects of the security information, storing report instances, and controlling the flow of requests to the appropriate servers.
Processing tier: These components access the data and generate the reports. This is the only tier that communicates directly with the databases that contain the report data. Data tier: The databases that contain the data used in the reports fall into this tier. These databases are referred to as data sources in Crystal Enterprise, and a wide range of databases is supported. These databases can contain historical data, operational data, or both. This redbook does not go into the details of Crystal Enterprise Professional Version 9 for Tivoli administration and configuration. Refer to the following documentation shipped with the product:
- Crystal Enterprise 9 Installation Guide
- Crystal Enterprise 9 Administrator's Guide
- Crystal Enterprise 9 Getting Started Guide
- Crystal Enterprise 9 ePortfolio User's Guide
Some Tivoli Decision Support Guides require direct access to the data in your operational data stores, which can decrease the performance of the products creating and using those data stores. Tivoli Data Warehouse ensures that your operational data stores are not impacted by users running reports. It also ensures that users can run reports efficiently by accessing databases that are optimized for interactive reporting. By saving historical data in a central location and in a common format, Tivoli Data Warehouse makes it easier to create reports that draw on data collected by more than one product. Tivoli Decision Support stores and accesses data using Cognos Powerplay and Crystal Reports. In contrast, Tivoli Data Warehouse publishes the format of its data, as well as the format of the data in the products that feed the warehouse, allowing the use of various reporting tools. This enables you to use the business intelligence solutions you already know. In addition, Tivoli software uses Crystal Enterprise, which is provided with Tivoli Data Warehouse, as a common reporting solution. Tivoli Data Warehouse provides support for multiple languages. Tivoli Decision Support is available only in English. Tivoli Decision Support for OS/390 is available in English and Japanese.
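The principle of shielding operational data stores can be sketched as a toy extract-and-load step: raw events are aggregated once into a separate reporting database, and reports then query only that copy. All table and column names here are invented for illustration, not an actual warehouse pack ETL:

```python
import sqlite3

# "Operational" store: raw events, written continuously by monitoring products.
operational = sqlite3.connect(":memory:")
operational.execute("CREATE TABLE events (host TEXT, severity TEXT)")
operational.executemany("INSERT INTO events VALUES (?, ?)",
                        [("srv01", "CRITICAL"), ("srv01", "WARNING"),
                         ("srv02", "CRITICAL"), ("srv01", "CRITICAL")])

# Separate "mart" database, optimized for reporting.
mart = sqlite3.connect(":memory:")
mart.execute("CREATE TABLE event_summary (host TEXT, critical_count INTEGER)")

# Extract + transform: aggregate once, in a single pass over the source.
summary = operational.execute(
    "SELECT host, COUNT(*) FROM events WHERE severity='CRITICAL' GROUP BY host")
mart.executemany("INSERT INTO event_summary VALUES (?, ?)", summary.fetchall())

# Reports query only the mart; the operational store is left alone.
report = mart.execute("SELECT * FROM event_summary ORDER BY host").fetchall()
print(report)   # → [('srv01', 2), ('srv02', 1)]
```

However many users run the report, the operational database pays the extraction cost only once per ETL run, which is the point of the central warehouse and data mart layers.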
Figure 1-4 shows how data from the distributed environment and the OS/390 or z/OS environment can be combined to produce real, end-to-end enterprise reporting.
Figure 1-4 Distributed and OS/390 Data feeds into Tivoli Data Warehouse
For additional details on Tivoli Data Warehouse 1.2 components placement, refer to Chapter 2, Planning for Tivoli Data Warehouse 1.2 on page 27.
Figure 1-5 Multiple source applications loading into a central data warehouse
However, you could also keep data about multiple customers and data centers in one central data warehouse database, while restricting access so that customers can see and work with data and reports based only on their own data and not any other customer's data. For example, support for multiple customer environments enables a service provider to keep historical data about all of its customers in one deployment of Tivoli Data Warehouse. Multiple data center support provides a way to partition data physically by customizable criteria such as location, application, or business purpose.
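One common way to implement this kind of physically shared but logically partitioned data is a filtered view per customer. The sketch below uses invented names and is not the actual Tivoli Data Warehouse mechanism:

```python
import sqlite3

# One shared table holds every customer's rows (invented names, for illustration).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE metrics (customer TEXT, host TEXT, cpu_pct REAL)")
con.executemany("INSERT INTO metrics VALUES (?, ?, ?)",
                [("acme",   "a1", 35.0),
                 ("acme",   "a2", 50.0),
                 ("globex", "g1", 80.0)])

# Each customer is granted access only to a view filtered to its own rows;
# the customer column itself is not even exposed.
con.execute("CREATE VIEW metrics_acme AS "
            "SELECT host, cpu_pct FROM metrics WHERE customer = 'acme'")

acme_rows = con.execute("SELECT * FROM metrics_acme ORDER BY host").fetchall()
print(acme_rows)   # → [('a1', 35.0), ('a2', 50.0)]
```

In a production database the views would be combined with per-user GRANT privileges so that a customer's reporting login can reach only its own view.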
[Figure: Tivoli Data Warehouse reporting flow - operational data feeds data marts, which serve OLAP, business intelligence, and analysis tools as well as Web reports through the Crystal Enterprise server, a Web server, and the Crystal Web interface]
A Tivoli Data Warehouse 1.2 architecture can be composed of the following elements:
- Tivoli Data Warehouse control center server
- One or more central data warehouse databases
- One or more data mart databases
- IBM DB2 warehouse agents and agent sites
- Crystal Enterprise server
Figure 1-7 shows an overview of the Tivoli Data Warehouse 1.2 architecture and supported software components.
[Figure 1-7: Tivoli Data Warehouse 1.2 architecture - the TDW 1.2 Control Center runs on Windows NT/2000; the central data warehouse on DB2 UDB EE or DB2/390 feeds data marts (star schema) through ETL2 via warehouse agents; Web-based reports are delivered through Crystal ePortfolio to Internet Explorer 5.5 SP2/6.0 or Netscape 6.2.3 on Windows NT/2000/2003]
In addition to the data collected by diverse IBM Tivoli software, Tivoli Data Warehouse has the flexibility and extensibility to enable you to integrate your own application data:
- It offers database optimizations both for the efficient storage of large amounts of historical data and for fast access to data for analysis and report generation.
- It provides the infrastructure and tools necessary for maintaining the data. These tools include the Tivoli Data Warehouse application, IBM DB2 Universal Database Enterprise Edition, the Data Warehouse Center, the DB2 Warehouse Manager, and Crystal Enterprise.
- It includes the ability to use your choice of data analysis tools to examine your historical data. In addition to the Crystal Enterprise reporting solution that is shipped with Tivoli Data Warehouse, you can analyze your data using any product that performs online analytical processing (OLAP), planning, trending, analysis, accounting, or data mining.
- It offers multi-customer and multi-center support. You can keep data about multiple customers and multiple data centers in one warehouse, but restrict access so that customers can see and work with data and reports based only on their own data and not any other customer's data. You can also restrict an individual user's ability to access data.
- It includes internationalization support. Reports can be displayed in the language of the user's choice. Crystal Enterprise comes only in English, French, German, and Japanese; however, the reports generated by Crystal are translated into Brazilian Portuguese, French, German, Italian, Spanish, Japanese, Korean, Simplified Chinese, and Traditional Chinese. This means that even if you are running the Crystal Enterprise server in English, you could view a report in Italian through the Crystal Report Viewer Web interface if that language is set as your locale preference.
As shown in Figure 1-8, Tivoli Data Warehouse 1.2 can be used as a single integration point for all systems management data, as well as a tool and technology to drive business intelligence within your enterprise.
Chapter 2.
Quick start installation, also known as stand-alone installation, with all the components installed on a single Microsoft Windows NT, Windows 2000, or Windows 2003 system. This is convenient for demonstrations, as an educational or test platform, and for companies that do not plan to have many users concurrently accessing the data stored in the Tivoli Data Warehouse databases, or that do not need to capture and analyze large amounts of data.
Distributed installation, with the components installed on multiple systems in your enterprise, including UNIX and z/OS servers. See Software requirements on page 30 to determine the operating systems supporting each component of Tivoli Data Warehouse.
The historical reporting for Tivoli Data Warehouse 1.2 is provided by Crystal Enterprise, which can be installed in three different configurations, depending on the version used:
- Stand-alone installation, with all the Crystal Enterprise components and the Web server on a single machine.
- Server-side installation connected to a Web server, which allows you to maintain separation between Crystal Enterprise and the Web server by running them on separate machines. In this scenario the Web server can also be on a UNIX system.
- Expanded installation, which allows you to install Crystal Enterprise server components on more machines in order to create an Automated Process Scheduler (APS) cluster, to increase available resources and to distribute the processing workload.

Note: IBM ships a limited license of Crystal Enterprise Professional Version 9 for Tivoli with Tivoli Data Warehouse 1.2 that allows only the full stand-alone installation option.

In the following sections we provide the current hardware and software requirements for a Tivoli Data Warehouse environment, but you should also check the Tivoli Data Warehouse Release Notes, SC32-1399, for possible updates to these requirements.
[Table: hardware requirements for the distributed installation (control server, central data warehouse, and data mart servers); details not recoverable.]
Disk space requirements for the central data warehouse and data marts can vary greatly according to the amount of data actually collected. See 2.3, Database sizing on page 56 for help in evaluating the storage required by the different metrics you wish to collect in your own environment.
Table 2-2 lists the hardware requirements for additional components of a Tivoli Data Warehouse 1.2 solution.
Table 2-2 Additional hard disk space requirements

- Crystal Enterprise Professional Version 9 for Tivoli: 1.0 GB RAM, 2.4 GHz processor, 30 GB disk (1 GB for the installation alone)
- Warehouse agent: 50 MB disk
The storage shown in Table 2-2 assumes a full stand-alone installation of Crystal Enterprise Professional Version 9 for Tivoli. Note that on the Crystal Enterprise server, additional storage is required for the Web server, database client software, and the reports installed by each warehouse enablement pack.
Table 2-3 Software requirements. For each operating system, the columns indicate support as: data source / warehouse agents / control server / central data warehouse / data mart database / Crystal Back End / Crystal Web(a)

- Microsoft Windows NT, service pack 6 or higher, with Microsoft Data Access Components (MDAC) 2.7 service pack 1: Yes / Yes / Yes / Yes / Yes / Yes (also NT4 Server SP6a) / Yes
- Windows 2000 Server SP2 or later: Yes / Yes / Yes / Yes / Yes / Yes / Yes
- Windows 2000 Advanced Server SP2 or later: Yes / Yes / Yes / Yes / Yes / Yes / Yes
- Windows 2003 Server: Yes / Yes / Yes / Yes / Yes / No / No
- IBM AIX 4.3.3, 5.1, and 5.2: Yes / Yes / No / Yes / Yes / No / 4.3.3 and 5.1 only
- Sun Solaris Versions 2.8 and 2.9: Yes / Yes / No / Yes / Yes / No / 2.8 only
- RedHat Linux Version 7.1, 7.2, 7.3 and Advanced Server 2.1: Yes / No / No / No / No / No / No
- SuSE Linux Version 7.2: Yes / No / No / No / No / No / No
- Turbo Linux 7: Yes / No / No / No / No / No / No
- z/OS 1.2, 1.3, 1.4: Yes / No / No / Yes / Yes / No / No

a. The Crystal Enterprise limited edition provided with Tivoli Data Warehouse requires that the Web server is on the same system as the Crystal Enterprise server.
Note: If the Microsoft Data Engine (MSDE) or Microsoft SQL Server is already installed on the local machine, you must set up a user account for the Crystal Enterprise Professional for Tivoli APS before installing Crystal Enterprise Professional for Tivoli, as follows:

1. Determine whether the Crystal Enterprise Professional for Tivoli APS should use Windows NT or Microsoft SQL Server authentication when connecting to your local database installation.
2. Using your usual administrative tools, create or select a user account that provides Crystal Enterprise with the appropriate privileges to your database server:
   - If you want the APS to connect to its database using Windows NT authentication, ensure that the Windows NT user account that you assign to the APS has the System Administrators role in your SQL Server installation. In this scenario, the Windows NT user account that you assign to the APS is not actually used to create the system database during the installation process. Instead, your own Windows NT administrative account is used to create the database, so verify that your Windows NT account also has the System Administrators role in your SQL Server installation.
   - If you want the APS to connect to its database using SQL Server authentication, the login that you assign to the APS must belong to the Database Creators role in your SQL Server installation. In this scenario, the SQL Server credentials that you assign to the APS are also used to create the database and its tables.
3. Verify that you can log on to SQL Server and carry out administrative tasks using the account you set up for use by the APS.

For details about APS database migration, see Configuring the intelligence tier in the Crystal Enterprise 9 Administrator's Guide, which is shipped with the product.

Note: For a detailed list of environments tested with Crystal Enterprise, consult the Platforms.txt file included with your product distribution.
If you have acquired a Crystal Enterprise 9 Special Edition license and choose to deploy the server-side installation method, with the Crystal server connected to an external Web server, you also need to install and configure the appropriate Web Connector on your Web server machine. The supported Web servers are listed in Table 2-4.
Table 2-4 Web servers and operating systems supported by the Crystal Web Connector

- Microsoft IIS 5 / ISAPI: Windows 2000 Server
- Microsoft IIS 5 / CGI: Windows 2000 Server
- Microsoft IIS 4 / ISAPI: Windows NT 4.0 Server
- Microsoft IIS 4 / CGI: Windows NT 4.0 Server
- iPlanet 6.0 SP3 / NSAPI: Sun Solaris 2.7, 2.8; Windows 2000 Server; Windows NT 4.0 Server
- iPlanet 6.0 SP3 / CGI: Windows 2000 Server; Windows NT 4.0 Server; Sun Solaris 2.7, 2.8; IBM AIX 4.3.3, 5.1
- iPlanet 4.1 SP10 / NSAPI: Sun Solaris 2.7, 2.8; RedHat 6.2/7.3 (x86); SuSe 7.3/8.0 (x86); IBM AIX 4.3.3, 5.1
- iPlanet 4.1 SP10 / CGI: IBM AIX 4.3.3, 5.1
- Domino 5.0.8 / DSAPI: Windows 2000 Server
- Domino 5.0.8 / CGI: IBM AIX 4.3.3, 5.1
[Table: software requirements for additional components, listing the control server (Windows) and the warehouse agents; details not recoverable.]
See the Tivoli Data Warehouse Release Notes, SC32-1399, for complete details about supported operating systems.
The control server creates a control database (TWH_MD) that contains descriptions of the stored data (known as metadata), both for Tivoli Data Warehouse and for the warehouse management functions. The control server uses the DB2 Data Warehouse Center to automate data warehouse processing and to define the ETL processes that move and transform data into the central data warehouse and the data marts.

The control server also runs a warehouse agent, the component of IBM DB2 Warehouse Manager that manages the data flow between warehouse sources and targets. In advanced Tivoli Data Warehouse scenarios, you can move the warehouse agent to other locations (see Warehouse agents on page 49).

When using the configuration with the warehouse agent on the control server, the computer on which you install the control server must also connect to the operational data stores of your enterprise, which potentially reside on other systems and in relational databases other than IBM DB2. To enable the control server to access these data sources, you must install the appropriate database client for each data source on the control server system.
Note: If you plan to install warehouse packs that were created to run on Tivoli Enterprise Data Warehouse 1.1, you need to create at least one central data warehouse database on a Windows or UNIX system. These warehouse packs use only the first central data warehouse database that is created on a Windows or UNIX system, which must be named TWH_CDW. A central data warehouse database on a z/OS system can be populated only by warehouse packs developed for Tivoli Data Warehouse 1.2.

On Windows and UNIX platforms, the first central data warehouse database created by Tivoli Data Warehouse 1.2 is named TWH_CDW. Subsequent central data warehouse databases are named TCDW1, TCDW2, and TCDW3. On z/OS systems, there is no hard requirement on the name used for the central data warehouse database; however, it is good practice to follow the naming convention adopted by Tivoli Data Warehouse 1.2.

Multiple central data warehouse databases might be useful in the following situations:

- Your Tivoli Data Warehouse deployment contains systems in widely separated time zones or geographies. A central data warehouse ETL typically runs during off-peak hours to avoid impacting the performance of your operational data stores, and having central data warehouse databases located on servers in different time zones enables you to schedule ETLs for each system at an appropriate off-peak time.
- Your deployment includes z/OS systems. Warehouse packs that use data extracted from z/OS data sources must load their data into a central data warehouse database on a z/OS system. In contrast, warehouse packs that use operational data stores from Windows or UNIX systems can load that data into a central data warehouse database on any supported operating system. Therefore, when you have sources on z/OS and on distributed systems, you must have at least one central data warehouse database on a z/OS system (see Figure 2-5 on page 47 and Figure 2-6 on page 48). You may optionally choose to have a second central data warehouse database on a distributed system in order to keep distributed application data completely separate from z/OS application data (see Figure 2-7 on page 49).
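The naming convention above can be sketched as a pair of small helper functions. This is illustrative only; the product assigns these names itself:

```python
# Illustrative sketch of the Tivoli Data Warehouse 1.2 naming
# convention on Windows and UNIX; not part of the product.
def cdw_database_name(n: int) -> str:
    """Name of the n-th central data warehouse database (n starts at 1):
    TWH_CDW, then TCDW1, TCDW2, ..."""
    return "TWH_CDW" if n == 1 else f"TCDW{n - 1}"

def mart_database_name(n: int) -> str:
    """Data mart databases follow the same pattern: TWH_MART, TMART1, ..."""
    return "TWH_MART" if n == 1 else f"TMART{n - 1}"
```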
- You want to distribute the central data warehouse workload. When using different warehouse packs that do not provide cross-application reports, you can have each warehouse pack load its data into a separate central data warehouse database. This allows you to schedule the central data warehouse ETLs for both warehouse packs to run at the same off-peak time without causing database performance problems.

When planning to use multiple central data warehouse databases, consider the following information:

- If you use a set of warehouse packs that collect historical data intended for the same reporting purpose, all of the warehouse packs must write their data into the same central data warehouse database. Note that if a warehouse pack supports extracting data from multiple central data warehouse databases, its documentation contains information about the placement of the central data warehouse databases.
- Distributed application data may flow through a central data warehouse database on either a z/OS or a distributed system into a data mart database on either a z/OS or a distributed system. z/OS application data can flow only through a central data warehouse database on a z/OS system into a data mart database on a z/OS system.

Important: Although it is possible for a data analysis program to read data directly from central data warehouse databases without using data marts, this is strongly discouraged and not supported. Analyzing historical data directly from the central data warehouse database can cause performance problems for all applications using the central data warehouse.
On z/OS systems, Tivoli Data Warehouse 1.2 supports only one data mart database per IBM DB2 subsystem. In addition, the central data warehouse and data marts must be in the same IBM DB2 subsystem, and the IBM DB2 subsystem must have a unique location name.

On Windows and UNIX platforms, the first data mart database created by Tivoli Data Warehouse 1.2 is named TWH_MART. Subsequent data mart databases are named TMART1, TMART2, and TMART3. On z/OS systems, there is no hard requirement on the name of the data mart database; however, it is good practice to follow the naming convention adopted by Tivoli Data Warehouse 1.2.

Each data mart database can contain the data from multiple central data warehouse databases. The data mart databases do not require any Tivoli Data Warehouse software or DB2 Warehouse components, but you may choose to install a warehouse agent on the servers containing the data mart databases to improve the performance of data transfer from central data warehouse databases (refer to Warehouse agents on page 49).

Note: If you plan to install warehouse packs that were created to run on Tivoli Enterprise Data Warehouse 1.1, you need to create at least one data mart database on a Windows or UNIX system. These warehouse packs use only the first data mart database that is created on a Windows or UNIX system (TWH_MART). A data mart database on a z/OS system can be populated only by warehouse packs for Tivoli Data Warehouse 1.2.

Multiple data mart databases might be useful in the following situations:

- Your deployment includes z/OS systems. Warehouse packs that use data extracted from z/OS data sources must load their data into a data mart database on a z/OS system. In contrast, warehouse packs that use operational data stores from Windows or UNIX systems can load that data into a data mart database on any supported operating system. You might optionally place a data mart database on a Windows or UNIX system to keep data from those systems separate from data from z/OS applications (see Figure 2-6 on page 48).
- You want to store your enterprise data in different databases for security reasons. You can allow each user to access only the data mart database containing the information that the user is authorized to examine.
- You plan to access data marts using different reporting or data analysis programs. You can format the data and tune each data mart database according to the program that is used to analyze it and the expected workload.

For effective planning of data mart database locations, you should consider these requirements:

- Each warehouse enablement pack provides its own data structure, called a star schema. A single data mart database can contain many star schemas, so data from different warehouse enablement packs can be stored in the same data mart database, each pack using its own star schema.
- Each warehouse pack can write to only one data mart database, and it must pull all of the data for the data mart from a single central data warehouse database. Different star schemas in one data mart database can pull their data from different central data warehouse databases.
- Data mart databases on a Windows or UNIX system cannot pull z/OS application data, while a data mart database on z/OS can receive data coming from all supported platforms (z/OS, UNIX, and Windows).
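To make the star schema idea concrete, the following sketch builds a minimal fact table with two dimension tables in SQLite. The table and column names are invented for the example and do not match any shipped warehouse pack schema:

```python
import sqlite3

# Minimal sketch of one warehouse pack's star schema inside a data
# mart database. All table and column names are invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE host_dim (
    host_id  INTEGER PRIMARY KEY,
    hostname TEXT,
    platform TEXT                -- for example 'z/OS' or 'AIX'
);
CREATE TABLE time_dim (
    time_id INTEGER PRIMARY KEY,
    day     TEXT                 -- ISO date of the measurement
);
CREATE TABLE cpu_fact (          -- the central fact table
    host_id INTEGER REFERENCES host_dim,
    time_id INTEGER REFERENCES time_dim,
    avg_cpu_pct REAL             -- the collected metric
);
""")
con.executemany("INSERT INTO host_dim VALUES (?, ?, ?)",
                [(1, "sysa", "z/OS"), (2, "nodeb", "AIX")])
con.executemany("INSERT INTO time_dim VALUES (?, ?)",
                [(1, "2004-06-01"), (2, "2004-06-02")])
con.executemany("INSERT INTO cpu_fact VALUES (?, ?, ?)",
                [(1, 1, 72.5), (1, 2, 68.0), (2, 1, 31.2)])

# A typical reporting query joins the fact table to its dimensions.
rows = con.execute("""
    SELECT h.hostname, AVG(f.avg_cpu_pct)
    FROM cpu_fact f
    JOIN host_dim h ON f.host_id = h.host_id
    GROUP BY h.hostname
    ORDER BY h.hostname
""").fetchall()
print(rows)   # [('nodeb', 31.2), ('sysa', 70.25)]
```

Because the fact table holds only keys and metrics, reporting tools can aggregate it quickly along any dimension, which is what makes the star schema layout well suited to interactive analysis.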
Figure 2-2 Distributed deployment on Windows and UNIX systems
Central data warehouse and source databases on the same server: The central data warehouse database is on the same computer as the database containing the operational data sources. The control server and Crystal Enterprise are on two different Microsoft Windows servers, as seen in Figure 2-3. Because operational data sources usually have a high rate of transactions per hour, it is not recommended to share the same IBM DB2 server between data sources and data marts; this configuration may increase the time needed to obtain reports from the data marts.
On the other hand, it is possible to have a common IBM DB2 server for data sources and the central data warehouse without affecting performance, provided that the ETL1 and ETL2 processes can be scheduled at off-peak times.
Figure 2-3 Central data warehouse and source databases on the same server
Data sources, central data warehouse, and data mart database on a z/OS system: The central data warehouse and the data mart database are in an IBM DB2 UDB for OS/390 and z/OS subsystem. The control server and Crystal Enterprise are on two different Microsoft Windows servers, as seen in Figure 2-4. This kind of configuration is typically used when all management data comes from z/OS applications, because all warehouse enablement packs extracting data from z/OS data sources must load their data into a central data warehouse database located on the z/OS system.
Figure 2-4 Data sources, central data warehouse, and data mart database on a z/OS system
Operational data sources both on z/OS and on distributed systems: You can transfer data to a central data warehouse database on a z/OS system even if your operational data sources are spread across z/OS and distributed systems, as seen in Figure 2-5. In this configuration you can use only warehouse enablement packs for Tivoli Data Warehouse 1.2, because warehouse enablement packs for version 1.1 do not allow any data transfer to a central data warehouse located on a z/OS system.
The common data mart database on z/OS provides the reports for both data sources on z/OS and distributed systems. You may choose to have the common data mart database on a distributed system instead of z/OS, but in that case you cannot have any reports from operational data sources on z/OS, as seen in Figure 2-6.
Figure 2-5 Operational data sources both on z/OS and on distributed systems
Separate data mart databases on a z/OS and on a distributed system: This configuration is typically used to segment the reporting functions into two logical areas, one for the z/OS and the other for the distributed environment. All z/OS application data flows through the central data warehouse database to the data mart database on z/OS, while the distributed applications data is transferred to a data mart on a distributed system.
Figure 2-6 Separate data mart databases on z/OS system and distributed system
Two central data warehouse servers, one on z/OS and one on a UNIX or Windows system: This is a more complex deployment with one central data warehouse and one data mart database in a DB2 UDB for OS/390 and z/OS, a second central data warehouse database on a UNIX or Windows server, and the control server and Crystal Enterprise server on separate Windows systems, as seen in Figure 2-7. This configuration may be chosen by customers who want to keep z/OS application data completely separate from distributed application data.
Figure 2-7 Two CDWs on a Windows or UNIX system and on a z/OS system
When you install the Tivoli Data Warehouse control server, a warehouse agent is automatically installed on the control server machine. In the basic configuration, shown in Figure 2-8, the control server uses its local warehouse agent to manage the data flow from the operational data sources to the central data warehouse (the ETL1 process) and from there to the data marts (the ETL2 process). If the Tivoli Data Warehouse databases are located on the same system as the control server, the warehouse agent is not used.
Figure 2-8 Basic configuration with the warehouse agent on the control server
In a distributed scenario, as shown in Figure 2-9, you might improve the performance of Tivoli Data Warehouse by placing a warehouse agent on each central data warehouse server and data mart server. These remote warehouse agents allow a direct data flow between source and target without passing through the control server, reducing the workload on that server and increasing the speed of data transfer.
Figure 2-9 Distributed configuration with remote warehouse agents
Typically the warehouse agent is placed on the target of a data transfer. In this configuration, the warehouse agent:

- Passes SQL statements that extract data from the remote source tables
- Transforms the data if required
- Writes the data to the target table on the local database

This configuration offers the best performance in a distributed environment by optimizing the DB2 data flow and using block fetching for the extraction. This is the recommended configuration to use with Tivoli Data Warehouse when source and target are on physically separate machines.

The warehouse agent may also be installed on the system that contains the source database. In this configuration, the agent:

- Passes SQL statements that extract data from the local source tables
- Transforms the data if required
- Writes the data to the target table on the remote database

This alternative configuration does not optimize the DB2 data flow (Distributed Relational Database Architecture, DRDA, or DB2 private protocols) and should be used only if justified by specific architecture requirements. For example, if your data source and data mart databases are on distributed systems while the central data warehouse database is on a z/OS system, you are forced to place warehouse agents on the distributed systems, one of which is the source of a data transfer, as seen in Figure 2-10.
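The extract/transform/write cycle performed by a warehouse agent can be sketched as follows. Real agents move data between DB2 databases; here two in-memory SQLite databases stand in for source and target, and all table names and the transformation are invented for the example:

```python
import sqlite3

# Sketch of the three agent tasks listed above, with SQLite standing
# in for the source and target DB2 databases. Invented schema.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")
source.executescript("""
CREATE TABLE ops_metrics (host TEXT, cpu_fraction REAL);
INSERT INTO ops_metrics VALUES ('sysa', 0.725), ('nodeb', 0.312);
""")
target.execute("CREATE TABLE cdw_metrics (host TEXT, cpu_pct REAL)")

# 1. Pass SQL statements that extract data from the source tables.
rows = source.execute(
    "SELECT host, cpu_fraction FROM ops_metrics").fetchall()
# 2. Transform the data if required (here: fraction to percentage).
rows = [(host, round(frac * 100, 1)) for host, frac in rows]
# 3. Write the data to the target table.
target.executemany("INSERT INTO cdw_metrics VALUES (?, ?)", rows)
target.commit()
```

When the agent runs on the target system, step 1 crosses the network and steps 2 and 3 are local, which is why that placement usually performs best.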
Figure 2-10 Warehouse agents placed on distributed systems
To improve the performance of your ETL process, you should carefully plan where to place agent sites in your environment and which site to associate with each ETL. Table 2-6 suggests where to place the warehouse agents when transferring data from data sources to the central data warehouse database in different scenarios.
Table 2-6 Agent site placement for data transfers to a central data warehouse. Each row lists: operational data source location / central data warehouse database location / warehouse agent location.

- A Windows or UNIX system / a different Windows or UNIX system / central data warehouse system
- A Windows or UNIX system / the same system / no agent required
- A Windows or UNIX system / a z/OS system / operational data source system
- A z/OS system / the same z/OS location / no agent required
- A z/OS system / a different z/OS location / control server
- A z/OS system / a Windows or UNIX system / deployment not supported (data sources on z/OS can load data only into a central data warehouse on z/OS)
Table 2-7 suggests where to place the warehouse agents when transferring data from central data warehouse databases to data mart databases in different scenarios.
Table 2-7 Where to place agent sites for data transfers to data marts. Each row lists: central data warehouse database location / data mart database location / warehouse agent location.

- A Windows or UNIX system / a different Windows or UNIX system / data mart system
- A Windows or UNIX system / the same system / no agent required
- A Windows or UNIX system / a z/OS system / central data warehouse system
- A z/OS system / the same z/OS location / no agent required
- A z/OS system / a different z/OS location / control server
- A z/OS system / a Windows or UNIX system / data mart system
Note that Tivoli Data Warehouse automatically recognizes when the source and target data are on the same computer, and in that case it transfers the data without using the warehouse agent. Here are some common situations in which a data transfer does not use the warehouse agent:

- When the operational data sources, central data warehouse, and data mart are in the same IBM DB2 location on a z/OS system
- When the operational data and the central data warehouse database are on the same computer running Windows or UNIX
- When transferring data between a central data warehouse database and a data mart database on the same computer running Windows or UNIX
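Table 2-6 amounts to a small decision rule. The sketch below encodes the ETL1 placement suggestions, using an invented location encoding ('zos:&lt;location-name&gt;' for z/OS, 'dist:&lt;host&gt;' for a Windows or UNIX system):

```python
def cdw_agent_site(source: str, cdw: str) -> str:
    """Suggested warehouse agent placement for an ETL1 transfer,
    following Table 2-6. The 'zos:'/'dist:' location encoding is
    an illustration, not a product convention."""
    if source == cdw:
        return "no agent required"      # same system or z/OS location
    s_zos = source.startswith("zos:")
    c_zos = cdw.startswith("zos:")
    if not s_zos and not c_zos:
        return "central data warehouse system"
    if not s_zos and c_zos:
        return "operational data source system"
    if s_zos and c_zos:
        return "control server"
    # s_zos and not c_zos: the unsupported row of Table 2-6
    raise ValueError("not supported: data sources on z/OS can load "
                     "data only into a central data warehouse on z/OS")
```

For example, `cdw_agent_site("dist:srv1", "zos:LOC1")` returns "operational data source system", matching the third row of the table.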
To install warehouse agents, you must install IBM DB2 Warehouse Manager and Tivoli Data Warehouse on each machine that will be an agent site. If you are using operational data stored in databases other than IBM DB2 (for example, Oracle or Informix), you must also install on the agent site a database client for each type of remote database that the agent needs to access. For example, if the operational data source for a warehouse pack is an Oracle database on another computer, you must also install an Oracle database client on the agent site. For more information about warehouse agents, refer to the IBM DB2 Warehouse Manager Installation Guide and the redbook DB2 Warehouse Management: High Availability and Problem Determination Guide, SG24-6544.
You can also find useful information about database sizing in the redbook Planning a Tivoli Enterprise Data Warehouse Project, SG24-6608, although the estimates in that redbook are for warehouse enablement packs running on Tivoli Enterprise Data Warehouse version 1.1.
2.4 Security
The following sections describe security considerations for installing and using Tivoli Data Warehouse.
2.4.3 Firewalls
You cannot have a firewall between Tivoli Data Warehouse components; this includes the control server and the warehouse agent sites. However, it is possible to have configurations with a firewall between source databases and the central data warehouse, or between the central data warehouse and the data marts, if the source database vendor supports communication through firewalls and the warehouse agent resides on the central data warehouse system. Figure 2-11 shows both a valid configuration and a non-functional configuration. Only ODBC communication can pass through the firewall.
Figure 2-11 Valid and non-functional firewall configurations
Crystal Enterprise can deliver a broad range of reporting and analytic content to any browser using pure DHTML. Unlike plug-in based technologies, DHTML requires no software downloads and no special configuration to enable viewing, making it ideal for deployments through a firewall without compromising security.
Crystal Enterprise includes a set of predefined access levels that allow you to set common security levels quickly and facilitate administration and maintenance. Each access level grants a set of rights that combine to allow users to accomplish common tasks, such as viewing and scheduling reports. Users can inherit rights as the result of group membership, subgroups can inherit rights from parent groups, and both users and groups can inherit rights from parent folders. However, you can always disable inheritance or customize security levels for particular objects, users, or groups. Refer to the Crystal Enterprise Professional Version 9 for Tivoli Administrator's Guide, which is provided with the product, for further information about the security aspects of Crystal Enterprise Professional for Tivoli.
When the central data warehouse ETL is run, data from applications is assigned valid customer account codes by matching certain fields in the incoming data with pre-identified values in a matching database table. Because each application can use different fields, and different numbers of fields, to identify customers, each application has its own matching table that it uses during the central data warehouse ETL process. To configure the central data warehouse for multicustomer support, you must manually define your customers in the TWG.CUST and Product_Code.CUST_LOOKUP tables, where Product_Code is the code uniquely associated with the warehouse pack you intend to use. In the same way, you can configure multicenter support by defining your centers in the TWG.CENTR and Product_Code.CENTR_LOOKUP tables on the central data warehouse database. See the multicustomer and multicenter support information in the manual Enabling an Application for Tivoli Data Warehouse, GC32-0745-02, for details on how to configure the central data warehouse for multicustomer or multicenter support.
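The configuration can be illustrated with a sketch. The column names, the XYZ product code, and the hostname-based matching rule below are invented for the example; consult the warehouse pack documentation for the real table layouts:

```python
import sqlite3

# Illustrative multicustomer setup in SQLite. Only the table names
# TWG.CUST and <Product_Code>.CUST_LOOKUP come from the manual; the
# columns and matching rule are invented.
db = sqlite3.connect(":memory:")
db.execute('CREATE TABLE "TWG.CUST" '
           '(cust_id INTEGER PRIMARY KEY, cust_acct_cd TEXT)')
db.execute('CREATE TABLE "XYZ.CUST_LOOKUP" '
           '(value_pattern TEXT, cust_id INTEGER)')

# One row per customer account code ...
db.executemany('INSERT INTO "TWG.CUST" VALUES (?, ?)',
               [(1, "ACME"), (2, "GLOBEX")])
# ... and one row per incoming field value the ETL should map.
db.executemany('INSERT INTO "XYZ.CUST_LOOKUP" VALUES (?, ?)',
               [("acme-%", 1), ("gbx-%", 2)])

# The central data warehouse ETL resolves incoming data to account
# codes through the lookup table (a simplified LIKE match).
row = db.execute('''
    SELECT c.cust_acct_cd
    FROM "XYZ.CUST_LOOKUP" l
    JOIN "TWG.CUST" c ON l.cust_id = c.cust_id
    WHERE ? LIKE l.value_pattern
''', ("gbx-web01",)).fetchone()
```

The same pattern applies to multicenter support with the TWG.CENTR and CENTR_LOOKUP tables.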
2.5.2 Scheduling
As stated earlier, ETLs transfer potentially large amounts of data over the network. In a production environment, it is advisable not to run them during normal business hours; instead, ETLs should be scheduled to run when network traffic is low.
ETLs
After establishing the production window, the ETLs can be scheduled. Once again, different strategies may be implemented to fit the needs of the customer, and a number of variables come into play. Two important ones are the number of warehouse enablement packs (WEPs) and the types of WEPs installed in the Tivoli Data Warehouse: one or two WEPs that extract large amounts of data can take much longer than many WEPs that each extract small amounts of data. If it seems that the amount of time needed to run the ETLs will exceed the time allocated, a couple of strategies can be implemented. You can refer to the redbook Planning a Tivoli Enterprise Data Warehouse Project, SG24-6608, for estimates of the time required for each ETL to run; note that these are ETL times measured in a Tivoli Enterprise Data Warehouse version 1.1 environment.
ETL grouping
Another consideration when planning the scheduling of ETLs is grouping. When you have several warehouse packs installed in your environment, you can use two basically different approaches to scheduling ETLs:

- First run all the ETL1s (from operational data sources to the central data warehouse), then all the ETL2s (from the central data warehouse to the data marts).
- Run the ETL1 and then the ETL2 sequentially for each warehouse pack.

The first approach places most of the network load at the beginning of your maintenance window, when the ETL1s run. This is because the ETL1s query the application databases and must transfer data across the network when doing so. ETL2s extract data from the central data warehouse and load it into the data marts; if these two databases reside on the same machine, network traffic is minimal. Therefore, if the total time of all ETLs runs into the production window, there is relatively little impact on the network. The drawback is that if the ETL2s carry over into the production window, the reports will be unavailable during normal business hours until the ETLs complete.

The second approach guarantees that at least some data marts are already updated at the start of business even if the total time of all ETLs runs into the production window. The drawback is that an ETL1 might still be running at the start of business, which might put a heavy load on the network at a peak time.

Tip: ETL execution times are largely dependent upon the amount of data they query. With either strategy, try to schedule the ETLs that handle large amounts of data earlier in the maintenance window than those that handle less data.
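The two approaches can be sketched as run-order computations (the warehouse pack names are invented for the example):

```python
# Run-order sketch for the two ETL grouping strategies.
packs = ["wep_a", "wep_b", "wep_c"]

# Strategy 1: all ETL1s first, then all ETL2s, concentrating the
# network load at the start of the maintenance window.
grouped = [f"{p}_etl1" for p in packs] + [f"{p}_etl2" for p in packs]

# Strategy 2: ETL1 then ETL2 for each warehouse pack in turn, so the
# first data marts are complete early in the maintenance window.
sequential = [step for p in packs
              for step in (f"{p}_etl1", f"{p}_etl2")]
```

Either list could then be fed to the Data Warehouse Center scheduler in the chosen order.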
By keeping only a subset of the data in the data mart, the database system can manage the data faster and more easily. Also, because the filtering has already been applied at ETL2 run time, the reporting queries become much smaller. Smaller queries operating on a small subset of data are, of course, easier to tune, so the end customer experiences better reporting performance. Refer to the redbook Tivoli Data Warehouse Report Interfacing, SG24-6084, for details on how to integrate with other business intelligence and reporting tools.
Tivoli Data Warehouse allows mapping of the existing data sources into a central data repository, so that reports with a common look and feel can be easily generated, improving the understanding of different platforms. In addition to having different data repositories caused by different platforms, different tools are frequently deployed on the same platforms, but on different servers. This is usually the case when these servers are supported by different personnel and then consolidated at some future time (for example, companies that have grown through acquisition, where the servers have come from different companies, or where different support organizations have been allowed to select their own systems management tools). By mapping these tools to the central data repository, common reports can be generated, masking the different tools used to create the data. Lastly, you may want to convert an existing infrastructure to Tivoli-based products. By using Tivoli Data Warehouse, historical data from existing servers can be loaded into the central data repository without any data loss. This allows you to convert to a Tivoli product that uses the central data repository without losing any data. While servers are in the process of converting, both the new and old data sources can be used to generate reports in the new common format. The ETL1 programs take the data from these sources and place it in the central data warehouse, while the ETL2 programs extract from the central data warehouse a subset of historical data that is tailored to and optimized for a specific reporting or analysis task. This subset is used to create one or more data marts, each a subset of the historical data that satisfies the needs of a specific department, team, or customer. A data mart is optimized for interactive reporting and data analysis, and its format is specific to the reporting or analysis tool you plan to use.
Customers can then use Crystal Enterprise Professional for Tivoli or another analysis program to analyze a specific aspect of their enterprise using the data in one or more data marts. Whenever you need to extract data from sources not supported by existing Tivoli warehouse packs, or if you decide to use customized data marts, you have to develop your own ETLs. The guide Enabling an Application for Tivoli Data Warehouse, GC32-0745-02, provides all the information needed to develop customized ETLs.
2.8.1 Implementation
The setup of a Tivoli Data Warehouse environment is highly automated and therefore does not require very specialized training. The Tivoli Data Warehouse administrator should have DB2 administrative skills on distributed platforms in order to manage and optimize all the databases used in the data warehouse. If the managed environment also includes z/OS systems, the administrator should also have administrative skills in DB2 UDB for OS/390 and z/OS, or should be supported by a person with these skills, at least during the database setup on z/OS.
Let us consider the following scenario to explain this further. Suppose that a company plans to store in a Tivoli Data Warehouse all data retrieved by Tivoli monitoring applications as well as by some preexisting and highly customized applications. Tivoli Data Warehouse allows that company to correlate data coming from both Tivoli and non-Tivoli sources without affecting the preexisting processes. This can be achieved simply by customizing the SQL code necessary to transfer data from the old application database to the central data warehouse (source ETL) and the code required to populate the data marts (target ETL) that will be used to generate reports. Note that the ETLs perform not only a plain data transfer between different databases; they are also in charge of all the transformation tasks required to give all data a common format, independent of the sources. A typical example of data transformation is the time stamp of a measurement: each monitoring application generally uses the local time, and that could generate confusion whenever we examine data produced in different areas of the world. Therefore, the ETLs are also required to convert the times to a common standard, such as Greenwich Mean Time. Another example of data transformation concerns the standardization of the names of the measured components: different applications may use different names to indicate the same object, and the ETLs must correct any possible mismatch in order to always produce coherent reports, even when comparing data from different sources.
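As a minimal sketch of these two transformations — time stamp normalization to GMT and name standardization — the following fragment uses an invented alias table, host names, and time zone offset; none of these come from a shipped warehouse pack:

```python
from datetime import datetime, timedelta, timezone

# Invented alias table mapping source-specific names to one canonical name.
ALIASES = {"srv01.prod": "server01", "SERVER01": "server01"}

def to_gmt(local_ts, utc_offset_hours):
    """Convert a source-local measurement time stamp to GMT (UTC)."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    return local_ts.replace(tzinfo=tz).astimezone(timezone.utc)

def canonical_name(raw):
    """Standardize the denomination of a measured component."""
    return ALIASES.get(raw, raw.lower())

# A measurement taken at 09:30 local time in a UTC-5 region:
print(canonical_name("SERVER01"))                           # server01
print(to_gmt(datetime(2004, 6, 1, 9, 30), -5).isoformat())  # 2004-06-01T14:30:00+00:00
```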
Customized ETLs can be packaged and shipped to customers or colleagues, who can install them using the Tivoli Enterprise Data Warehouse installation wizard. You can find out how to integrate your own components into the Tivoli Data Warehouse in the manual Enabling an Application for Tivoli Data Warehouse, GC32-0745-02. Source and target ETLs are developed with the same method, with slight variations in design considerations. Skills required to develop ETLs are:
- Standard SQL
- Some experience in data warehousing and a fair amount of DB2 skills
- Knowledge of the source databases involved
ETL developers should fully understand their end users' requirements in order to design proper data marts, while each data source administrator is expected to provide ETL developers with the structure of their databases. No interface is provided to allow users unfamiliar with the data and with SQL to simply move data from any application's data store to the data warehouse. In a Tivoli-only environment with Tivoli WEPs, the following skills are recommended:
- Knowledge of the source application
- Knowledge of the Tivoli product used to collect the data (IBM Tivoli Monitoring or IBM Tivoli Storage Manager)
- Basic knowledge of Tivoli Data Warehouse
To set up data collection for a new application, the following skills are needed:
- Knowledge of the source application
- Knowledge of standard SQL
- Knowledge of Tivoli Data Warehouse and its underlying data model
One of the most critical aspects of the data manipulation phase in a large environment is the scheduling of all the different ETL processes running in a Tivoli Enterprise Data Warehouse. The person responsible for scheduling ETLs should have thorough knowledge of:
- The timing of data source updates
- The requirements of end users for all reports
- The workload for each server in the Tivoli Data Warehouse environment
- The impact of ETLs on the network
You can find a discussion of the last two items in Network traffic considerations on page 61.
2.8.4 Reporting
The final step of a Tivoli Data Warehouse process is producing timely, up-to-date reports according to different end users' specifications. Tivoli warehouse enablement packs already provide out-of-the-box reports for the Crystal Enterprise Professional for Tivoli Limited Edition bundled with Tivoli Data Warehouse, but if there is a need to define additional customized reports, then either Crystal Enterprise Professional for Tivoli Professional Edition or another business intelligence tool is required. These are the skills required to implement customized reports for Tivoli Data Warehouse:
- Basic knowledge of how to connect to the data mart databases via ODBC
- Basic knowledge of standard SQL
- Knowledge of the data mart structure and data
- Experience with Crystal Enterprise products or other business intelligence tools
Report designers usually interact with the ETL2 developers, providing them with all of their requirements about relevant metrics, level of detail, aggregation times, preservation of old data, and so on, in order to optimize the star schemas according to their reporting needs.
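The shape of such a report query can be sketched against a toy star schema. SQLite stands in here for the DB2 data mart, and the table and column names are invented for illustration; a real warehouse pack defines its own fact and dimension tables:

```python
import sqlite3

# Build a tiny star schema: one dimension table and one fact table.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE host_dim (host_id INTEGER PRIMARY KEY, hostname TEXT);
CREATE TABLE cpu_fact (host_id INTEGER, day TEXT, avg_cpu REAL);
INSERT INTO host_dim VALUES (1, 'server01'), (2, 'server02');
INSERT INTO cpu_fact VALUES (1, '2004-06-01', 40.0),
                            (1, '2004-06-02', 60.0),
                            (2, '2004-06-01', 25.0);
""")
# Join the fact table to its dimension and aggregate, as a report does.
rows = db.execute("""
    SELECT d.hostname, AVG(f.avg_cpu)
    FROM cpu_fact f JOIN host_dim d ON d.host_id = f.host_id
    GROUP BY d.hostname ORDER BY d.hostname
""").fetchall()
print(rows)  # [('server01', 50.0), ('server02', 25.0)]
```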
Chapter 3.
Topics covered include:
- Installing and configuring IBM DB2 client and server on page 76
- Crystal Enterprise installation on page 86
- Quick start deployment on page 93
- Distributed deployment on page 103
- Installing warehouse agents on page 126
- Verification of the installation on page 135
- Installing warehouse enablement packs on page 142
Phase 1
- Install a Web server
- Define whether Crystal Enterprise will use Microsoft SQL Server or Windows NT authentication
- Install the IBM DB2 client for access to the data mart databases
- Install Crystal Enterprise Professional Version 9 for Tivoli
Phase 2
- Install IBM DB2 Universal Database Enterprise Edition 7.2 with at minimum Fix Pack 8, or upgrade existing IBM DB2 servers to Version 7.2 at the Fix Pack 8 level or later, on the TDW control server and on all servers that will host central data warehouse and data mart databases on Windows and UNIX platforms
- Install or upgrade IBM DB2 Universal Database for OS/390 and z/OS V7.1 on the mainframe that will host central data warehouse and data mart databases
Phase 3
- Start the installation process from the TDW control server machine. During the install process, provide user IDs, passwords, and port numbers so that the control server can access and create all the central data warehouse and data mart databases on the remote systems
- On the TDW control server, using the DB2 Warehouse Manager, configure IBM DB2 to use the TDW control database (TWH_MD)
- From the TDW control server, install and configure all the central data warehouse and data mart databases on the z/OS systems
Phase 4
On the systems that will act as warehouse agent sites:
- Install the IBM DB2 Warehouse Manager
- Install the warehouse agent component
On AIX systems
The default domain name search order is as follows:
1. Domain Name System (DNS) server
2. Network Information Service (NIS)
3. Local /etc/hosts file
If the /etc/resolv.conf file does not exist, the /etc/hosts file is used. If only the /etc/hosts file is used, the fully qualified computer name must be the first name listed after the IP address. Verify that the /etc/resolv.conf file exists and contains the appropriate information, such as:
domain mydivision.mycompany.com
nameserver 123.123.123.123
If NIS is installed, the /etc/irs.conf file overrides the system default. It contains the following information:
hosts = bind, local
The /etc/netsvc.conf file, if it exists, overrides the /etc/irs.conf and the system default. It contains the following information:
hosts = bind,local
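The ordering rule above — the fully qualified computer name must be the first name after the IP address — can be checked mechanically. This small sketch (with invented host names) illustrates the rule; it is not part of the Tivoli tooling:

```python
def fqdn_first(hosts_line):
    """Return True if the first name after the IP address is fully qualified."""
    names = hosts_line.split()[1:]
    return bool(names) and "." in names[0]

print(fqdn_first("9.3.4.10 tdw001.itsc.austin.ibm.com tdw001"))  # True: FQDN first
print(fqdn_first("9.3.4.10 tdw001 tdw001.itsc.austin.ibm.com"))  # False: short name first
```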
On Linux systems
Verify that the /etc/resolv.conf file exists and contains the appropriate information, such as:
domain mydivision.mycompany.com
nameserver 123.123.123.123
A short name is used if the /etc/nsswitch.conf file contains a line that begins as follows and if the /etc/hosts file contains the short name for the computer:
hosts: files
To correct this, follow these steps: 1. Change the line in the /etc/nsswitch.conf file to:
hosts: dns nis files
On Solaris systems
Verify that the /etc/resolv.conf file exists and contains the appropriate information, such as:
domain mydivision.mycompany.com
nameserver 123.123.123.123
A short name is used if the /etc/nsswitch.conf file contains a line that begins as follows and if the /etc/hosts file contains the short name for the computer:
hosts: files
To correct this, follow these steps: 1. Change the line in the /etc/nsswitch.conf file to:
hosts: dns nis files
On Windows systems, at DB2 7.2 Fix Pack 8, the db2level command returns a message similar to the following:
DB21085I Instance "db2admin" uses DB2 code release "SQL07026" with level identifier "03070105" and informational tokens "DB2 v7.1.0.75", "n021110" and "WR21314".
The string "DB2 v7.1.0.75" indicates that the system is at the IBM DB2 V7.2 Fix Pack 8 level. The last two informational tokens ("n021110" and "WR21314") vary by operating system type and the specific patches that are installed. If the IBM DB2 client or server is installed but does not have Fix Pack 8 or higher, you must upgrade to Version 7.2 Fix Pack 8 (at minimum) before running the Tivoli Data Warehouse installation wizard. You cannot install Tivoli Data Warehouse with a lower Fix Pack level. In addition to checking the proper IBM DB2 version and level, consider the following possibilities: If you have Lightweight Directory Access Protocol (LDAP), make sure that it is disabled for this IBM DB2 instance. In an IBM DB2 command window, run the following command:
db2set -all | more
Examine the value of the DB2_ENABLE_LDAP setting. If this value is listed and is set to YES, disable LDAP and restart the IBM DB2 server by running the following commands in an IBM DB2 command window:
db2set db2_enable_ldap=NO db2stop force db2start
If the value is not listed or is set to NO, no action is required. On UNIX systems, make sure that the IBM DB2 administration client is installed; if you can start the DB2 Control Center, the client is installed. On Windows systems, if you did not perform a typical installation of IBM DB2 Universal Database, make sure you have the following components:
- Administration and configuration tools
- Application development interfaces
- Data warehousing tools
If you selected the Typical installation when installing IBM DB2 Universal Database on a Windows system, all of the necessary components are available. Make sure that other applications using the IBM DB2 instance do not have database names that duplicate those of Tivoli Data Warehouse.
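The version check described above can be automated by extracting the "DB2 vA.B.C.D" token from the db2level output and comparing it with the v7.1.0.75 build that, as noted, corresponds to V7.2 Fix Pack 8. This sketch assumes the last component grows with the Fix Pack level, as it does here; the helper is illustrative, not a Tivoli tool:

```python
import re

FP8_BUILD = (7, 1, 0, 75)  # the "DB2 v7.1.0.75" level quoted above

def db2_build(db2level_output):
    """Extract the internal DB2 build number from db2level output."""
    m = re.search(r'DB2 v(\d+)\.(\d+)\.(\d+)\.(\d+)', db2level_output)
    if not m:
        raise ValueError("no DB2 version token found")
    return tuple(int(g) for g in m.groups())

msg = ('DB21085I Instance "db2admin" uses DB2 code release "SQL07026" with '
       'level identifier "03070105" and informational tokens "DB2 v7.1.0.75", '
       '"n021110" and "WR21314".')
print(db2_build(msg))               # (7, 1, 0, 75)
print(db2_build(msg) >= FP8_BUILD)  # True: Fix Pack 8 or later
```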
2. The Install DB2 V7.2 window, shown in Figure 3-2, appears. Select DB2 Administration Client and DB2 UDB Enterprise Edition.
3. A new DB2 instance should be created for the Administration Server database. We specified the DB2 instance name db2inst1, as shown in Figure 3-3. You should also specify /home/db2inst1 as the instance owner directory.
4. The installation process creates the DB2 fenced user. We specified the user ID db2fenc1, as shown in Figure 3-4.
5. Next, Figure 3-5 shows the values we used to create the user ID for the DB2 Administration Server.
6. The installation process creates and sets the values of several environment variables, for example, DB2SYSTEM.
7. At the end of the installation process, you can check the installation log file created at /tmp/db2setup.log.
8. The installed JDBC code level needs to be upgraded to Version 2.0. Log on to the system with a valid DB2 user ID and issue the following commands (for the Bash, Bourne, or Korn shell):
# . INSTHOME/sqllib/db2profile
# cd INSTHOME/sqllib/java12
# . ./usejdbc2
Where INSTHOME is the home directory of the instance. Verify that the JDBC level is correct by entering the following command:
# echo $CLASSPATH
2. Unzip the Fix Pack using the following command to get a tar file:
# gzip -d FP8_U484610.tar.Z
3. Un-tar the Fix Pack using the following command to extract the Fix Pack files.
# tar -xvf FP8_U484610.tar
4. Run the following command, from the location where you extracted the Fix Pack files, to install the Fix Pack.
# ./installFixpack
5. Provide the DB2 instance password if prompted.
6. The installation wizard copies the files and finishes the installation of the Fix Pack.
7. Un-tar the e-fix for Fix Pack 8:
tar -xvf special_U484610.tar
8. Make a backup of the files db2bp and db2level in the /usr/lpp/db2_07_01/bin directory.
9. Copy the extracted db2bp and db2level files to the /usr/lpp/db2_07_01/bin directory.
Note: If you are using a 32-bit IBM DB2 server, make sure to install the 32-bit Fix Pack 8; if you are using a 64-bit IBM DB2 server, install the 64-bit Fix Pack 8.
2. Select Start -> Run. Type in D:\setup.exe and click OK to start the installation. From the Installation window, select Install. 3. The Select Products window opens. From this window you can select the component(s) of DB2 for Windows you would like to install. Select DB2 Enterprise Edition as shown in Figure 3-6. Click Next.
4. The Select Installation Type window opens. Select the installation type you prefer. We selected Typical.
5. Select the installation directory. In our environment, we used C:\DB2\SQLLIB.
6. The installation prompts for the DB2 administrative user ID. We selected the db2admin user ID and set its password.
7. After the installation wizard copies the DB2 files onto the machine, the Install OLAP Starter Kit window opens. Select Do not install the OLAP Starter Kit and then click Continue.
8. When the setup completes the installation process, click Finish.
9. Update Java. The installed JDBC code level needs to be upgraded to Version 2.0. Open a DOS command prompt window and issue the following commands, where DB2_DIR is the DB2 installation directory:
cd DB2_DIR\java12 usejdbc2
The usejdbc2 command will copy the appropriate version of db2java.zip into the DB2_DIR\java12 directory.
10. Reboot the machine.
Note: Once you have installed a Fix Pack, you won't be able to uninstall it.
3. Stop all database activity before applying this Fix Pack. To stop all database activity, in a DB2 command window, run:
c:\db2\sqllib\bin>db2stop
or
c:\db2\sqllib\bin>db2stop force
and
c:\db2\sqllib\bin>db2admin stop
4. Unzip and extract the Fix Pack files to a temporary directory.
5. Run the following command to install the Fix Pack from the Fix Pack directory:
c:\fp8_wr21314\setup.exe
6. Key in the DB2 instance owner password if the setup prompts for it, and click Next.
7. The wizard shows the selection window. Click Next to continue.
8. Extract the e-fix for Fix Pack 8 into a temporary directory. Take a backup of the files db2bp.exe and db2level.exe.
9. Copy the files db2bp.exe and db2level.exe from the temporary directory to the <DB2DIR>\bin directory, where <DB2DIR> is the IBM DB2 installation directory:
c:\special_wr21314>copy db2bp.exe c:\db2\sqllib\bin\
c:\special_wr21314>copy db2level.exe c:\db2\sqllib\bin\
In a safe place, record the IBM DB2 user name and password that was specified for the IBM DB2 client.
6. On the Start Copying Files panel, click Next, and then Finish to complete the setup.
If you want the APS to connect to its database using SQL Server authentication, the login that you assign to the APS must belong to the Database Creators role in your SQL Server installation. In this scenario, the SQL Server credentials that you assign to the APS are also used to create the database and its tables. 3. Verify that you can log on to SQL Server and carry out administrative tasks using the account you set up for use by the APS.
3. As shown in Figure 3-7, change the destination folder (optional) and choose the installation type New - Install a new Crystal Enterprise System. Click Next.
4. The setup program checks whether the Microsoft Data Engine (MSDE) or Microsoft SQL Server is installed on the local machine. If the setup program detects a database, use the Microsoft SQL Server Authentication window to provide the credentials that correspond to the database account you set up for the APS. The default user ID for the database account is named sa. For more information, refer to Crystal Enterprise Automated Process Scheduler on page 86. If the setup program does not detect an existing database, the installation process installs the Microsoft Data Engine and creates the credentials for the default SQL administrator account (sa) user ID. The installation wizard prompts you for the password to be used by the sa user ID. The setup program later configures the APS to connect to its system database using the sa account and the password you create here. Click Next.
5. Figure 3-9 shows the Crystal Enterprise Professional Version 9 for Tivoli installation in progress.
7. If the Web server installed on the local machine is a supported version, the setup program installs and configures the appropriate Crystal Enterprise Web Connector. When the installation is complete, you can access the Crystal Enterprise Professional for Tivoli server by opening your Web browser and pointing it to:
http://<CRYSTALSERVER>/crystal/enterprise9/
In this URL, <CRYSTALSERVER> is the host name of the Crystal Enterprise Professional for Tivoli server machine.
8. Crystal Enterprise Launchpad is launched, as shown in Figure 3-11. Click Administrative Tool Console.
9. Log on as Administrator, as shown in Figure 3-12, to verify that the setup program installed and configured the appropriate Crystal Enterprise Web connector. The default password for the Administrator account is blank (no password).
For details on how to use the Crystal Enterprise Professional for Tivoli Web interface, refer to the Crystal Enterprise Professional Version 9 for Tivoli Administrator's Guide, which is provided with Crystal Enterprise Professional Version 9 for Tivoli.
As described in 2.2, Physical and logical design considerations on page 36, there are several ways to deploy a Tivoli Data Warehouse 1.2 environment. Next, we go into detail on the installation steps and initial configuration of the environment, as follows:
- In 3.3, Quick start deployment on page 93, we provide installation steps for a stand-alone installation with all the components on a single Windows 2000 Server system.
- In 3.4, Distributed deployment on page 103, we describe an example scenario of a Tivoli Data Warehouse 1.2 distributed environment and provide the installation steps for a TDW control server, one or more central data warehouse databases, and data mart databases on Windows, UNIX, and z/OS systems. Such a deployment is best suited for large enterprises, enterprises distributed across widely separated time zones, and enterprises containing z/OS systems collecting systems management data.
Figure 3-13 shows a typical quick start configuration, mapped to an existing Tivoli environment. All the Tivoli Data Warehouse 1.2 components (TDW control server, central data warehouse, and data mart databases) are installed on a single Windows 2000 machine.
(Figure 3-13 depicts the TWH_MD, TWH_CDW, and TWH_MART databases alongside the Tivoli environment on the single machine.)
3. Figure 3-15 displays the Tivoli Common Logging directory. The default location is %ProgramFiles%\IBM\Tivoli\common\. Each product stores logging information in a separate subdirectory within the Tivoli Common Logging directory. Click Next.
4. Figure 3-16 displays the Setup window. Select Quick start, specify the installation directory if desired (the default is %ProgramFiles%\TWH\), and click Next.
5. In the window shown in Figure 3-17, specify the existing IBM DB2 instance owner user ID and password to be used to create and connect to the Tivoli Data Warehouse databases. Click Next.
6. In the window shown in Figure 3-18, specify the following connection information for the Crystal Enterprise server, and then click Next:
- Host name: The fully qualified host name of the machine where Crystal Enterprise Professional Version 9 for Tivoli is installed.
- User name: The user credentials Tivoli Data Warehouse will use on the connection to the Crystal Enterprise server. Defaults to Administrator.
- Password: Password for the Administrator user ID. Defaults to blank (no password).
The default values above are based on a new Crystal Enterprise Professional Version 9 for Tivoli Server Limited Edition shipped with Tivoli Data Warehouse 1.2.
7. Figure 3-19 indicates the components to be installed and their location. In our case, one TDW control server, one central data warehouse database, and one data mart database on the local computer. Click Install to start the installation.
8. After the installation completes, a completion window is displayed, as shown in Figure 3-20, with a successful completion notice or messages describing problems. Make sure the window does not list any warnings or errors. Click Next. If you are prompted to restart, click Yes, restart my system.
4. In the Advanced window, shown in Figure 3-21, type TWH_MD in the control database field. Click OK to return to the logon window.
5. Click Cancel to exit from the Data Warehouse Center login screen.
In order to configure the IBM DB2 Warehouse Control Database Management, complete the following steps:
1. Open the Data Warehouse Center - Control Database Management window by selecting Start -> Programs -> IBM DB2 -> Warehouse Control Database Management.
2. Type TWH_MD in the new control database field.
3. Do not change the schema name.
4. Type the IBM DB2 instance owner user ID and password for the control database, and then click OK. In our scenario, we used the db2admin user ID.
5. When the message "Processing has completed" appears, as shown in Figure 3-22, click Cancel.
3. Run the twh_create_datasource script using the following syntax as shown in Example 3-1:
twh_create_datasource <DBtype> <ID> <odbcname> <DBname> <SRVname> <port>
In this script:
<DBtype>   Can be set to DB2UDB or DB390, depending on the location of the database.
<ID>       A unique identifier for the local node name. The script creates node names following the naming convention TDWCS%ID%. In our example, we used ID=10, resulting in node name TDWCS10.
<odbcname> The ODBC data source name.
<DBname>   The data mart database name.
<SRVname>  The IBM DB2 server on which the data mart database resides. This must be the fully qualified host name.
<port>     The port number to connect to the IBM DB2 server.
Example 3-1 twh_create_datasource script C:\Temp>twh_create_datasource.bat DB2UDB 10 TWH_MART TWH_MART tdw001.itsc.austin.ibm.com 50000 Creating DB2/UDB datasource TWH_MART C:\Temp>db2cmd /w /c /i db2 catalog tcpip node TDWCS10 remote tdw001.itsc.austin.ibm.com server 50000 DB20000I The CATALOG TCPIP NODE command completed successfully. DB21056W Directory changes may not be effective until the directory cache is refreshed. C:\Temp>db2cmd /w /c /i db2 catalog database TWH_MART at node TDWCS10 authentication server DB20000I The CATALOG DATABASE command completed successfully. DB21056W Directory changes may not be effective until the directory cache is refreshed. C:\Temp>C:\Temp\ODBCcfg.exe DB2 TWH_MART TWH_MART No Username was provided. Skipping connection test. C:\Temp>
(Figure 3-23 depicts the distributed environment: the TWH_CDW and TWH_MD databases, the Tivoli environment, the data mart, and the z/OS environment on host wtsc66oe with its central data warehouse and data mart data sources.)
We assume that the Crystal Enterprise Professional for Tivoli server has already been deployed. In this case, the Tivoli Data Warehouse installation process connects to the Crystal Enterprise Professional for Tivoli server and installs only the Crystal Publishing Wizard on the TDW control server system. In order to install the distributed environment portion of the scenario presented in Figure 3-23, perform the following steps:
1. Insert the Tivoli Data Warehouse 1.2 installation CD into the CD-ROM drive. If the installation wizard does not start up, run the setup.exe program, which is located in the root directory of the CD.
2. The panel shown in Figure 3-24, the InstallShield Wizard, is displayed. Click Next to proceed.
3. Figure 3-25 shows the Tivoli Common Logging directory. The default location is %ProgramFiles%\IBM\Tivoli\common\. Each product stores logging information in a separate subdirectory within the Tivoli Common Logging directory. Click Next.
4. Select Custom or Distributed for a distributed deployment. You can use the default installation directory, %ProgramFiles%\TWH, or change it to suit your needs; we changed it to C:\TWH, as shown in Figure 3-26. Then click Next.
5. A warning message is displayed to emphasize the need to fulfill all of the prerequisite tasks required for installing Tivoli Data Warehouse 1.2 in a distributed architecture. This is shown in Figure 3-27.
6. Specify the existing IBM DB2 instance owner user ID and password to be used to create and connect to the TDW control server database (TWH_MD), as shown in Figure 3-28. Click Next.
7. You are prompted to provide the list of servers that will host the central data warehouse database in your environment. Click Add and type the IBM DB2 server information for the system that will host the remote central data warehouse database, as shown in Figure 3-29. Click Next.
8. Make sure that all of the central data warehouse database servers are included in your list. The list for our case study environment is shown in Figure 3-30. Click Next.
9. You are prompted to provide the list of servers that will host the data mart database in your environment. Click Add and type the IBM DB2 server information for the system that will host the remote data mart database, as shown in Figure 3-31. Click Next.
10. Make sure that all of the data mart database servers are included in your list. The list for our case study environment is shown in Figure 3-32. Click Next.
11. In the window shown in Figure 3-33, specify the following connection information for the Crystal Enterprise server:
- Host name: The fully qualified host name of the machine where Crystal Enterprise Professional Version 9 for Tivoli is installed.
- User name: User credentials Tivoli Data Warehouse will use on the connection to the Crystal Enterprise server. Defaults to Administrator.
- Password: Password for the Administrator user ID. Defaults to blank (no password).
The default values above are based on a new Crystal Enterprise Professional Version 9 for Tivoli Server Limited Edition shipped with Tivoli Data Warehouse 1.2. Click Next.
12. An installation summary is then displayed showing which Tivoli Data Warehouse component is going to be installed on which server, as shown in Figure 3-34. Click Install, or click Back to make changes.
13. A completion window is displayed, as shown in Figure 3-35. It has a successful completion notice or messages describing problems. Make sure the window does not list any warnings or errors. Click Next. If you are prompted to restart, click Yes, restart my system.
4. In the Advanced window, shown in Figure 3-36, type TWH_MD in the control database field. Click OK to return to the logon window.
5. Click Cancel to exit from the Data Warehouse Center login screen.
In order to configure the IBM DB2 Warehouse Control Database Management, complete the following steps:
1. Open the Data Warehouse Center - Control Database Management window by selecting Start -> Programs -> IBM DB2 -> Warehouse Control Database Management.
2. Type TWH_MD in the new control database field.
3. Do not change the schema name.
4. Type the IBM DB2 instance owner user ID and password for the control database, and then click OK.
5. When the "Processing has completed" message appears, as shown in Figure 3-37, click Cancel.
4. The installation wizard searches for the existing Tivoli Data Warehouse configuration and displays a list of central data warehouse locations found. It also allows you to add to the existing list of systems in which central data warehouse databases will be created. Click Add on the install window. 5. With the help of your z/OS system administrator, specify the IBM DB2 configuration information on the z/OS system, as shown in Figure 3-39. Specify database type DB2 for z/OS and S/390, a fully qualified host name, port number, and a user ID with SYSADM authority. Click Next.
6. With the help of your z/OS system administrator, specify the central data warehouse database configuration information as shown in Figure 3-40. Click Next. Refer to 2.2.9, Considerations about warehouse databases on z/OS on page 54 for details.
7. Make sure that the new central data warehouse database server is included in your list. The list for our case study environment is shown in Figure 3-41.
8. As displayed in Figure 3-42, the Summary window shows the Tivoli Data Warehouse components that will be installed and configured. Click Install or click Back to make changes.
9. The completion window is displayed in Figure 3-43. It has a successful completion notice, or messages describing problems. Make sure the window does not list any warnings or errors and that the installation of the central data warehouse databases was successful. Click Finish.
10. You can verify your installation by issuing the commands listed in Example 3-2 on the z/OS system. These commands will confirm the creation of the central data warehouse database on the z/OS system.
Example 3-2 Verification of central data warehouse database on z/OS
select * from sysibm.sysdatabase;
select * from sysibm.systablespace;
select * from sysibm.sysstogroup;
select * from sysibm.systables where dbname = 'TCDW1';
-- where TCDW1 is the new database name specified during the installation
These are the steps for creating the data mart:
1. Insert the Tivoli Data Warehouse 1.2 installation CD into the CD-ROM drive of the control server. If the installation wizard does not start up, run the setup.exe program, which is located in the root directory of the CD.
2. The install process displays the welcome panel. Click Next. It also provides the location of the common log files. Click Next.
3. The install proceeds by verifying the current Tivoli Data Warehouse environment and configuration. You will be prompted with two options, as shown in Figure 3-44: you can create one or more central data warehouse databases, or you can create one or more data mart databases. In this case, select Add data marts and click Next.
4. The installation wizard searches for the existing Tivoli Data Warehouse configuration and displays a list of the data mart locations found. It also allows you to add to the existing list of systems in which data mart databases will be created. Click Add on the install window.
5. With the help of your z/OS system administrator, specify the IBM DB2 configuration information on the z/OS system, as shown in Figure 3-45. Specify database type DB2 for z/OS and S/390, a fully qualified host name, port number, and a user ID with SYSADM authority. Click Next.
6. With the help of your z/OS system administrator, specify the data mart database configuration information as shown in Figure 3-46. Click Next. Refer to 2.2.9, Considerations about warehouse databases on z/OS on page 54 for details.
7. Make sure that the new data mart database server is included in your list. The list for our case study environment is shown in Figure 3-47.
8. The Summary window, as displayed in Figure 3-48, shows the Tivoli Data Warehouse components that will be installed and configured. Click Install or click Back to make changes.
9. The completion window, shown in Figure 3-49, contains either a successful completion notice or messages describing problems. Make sure the window does not list any warnings or errors and that the installation of the data mart database was successful. Click Finish.
10. You can verify your installation by issuing the commands listed in Example 3-3 on the z/OS system. These commands confirm the creation of the data mart database on the z/OS system.
Example 3-3 Verification of data mart database on z/OS
select * from sysibm.sysdatabase;
select * from sysibm.systablespace;
select * from sysibm.sysstogroup;
select * from sysibm.systables where dbname = 'TMART1';
-- where TMART1 is the new database name specified during the installation
In any case, there must be ODBC connections to these data marts defined on the Crystal Enterprise server. Tivoli Data Warehouse 1.2 provides the twh_create_datasource script that sets up ODBC connections to the data mart databases. You may use this script or create the ODBC connections manually. In our case study scenario, we need to create ODBC connections to the TWH_MART data mart database located on the AIX machine TDW009 and to the TMART1 data mart database running on the z/OS system WTSC66. To create an ODBC connection to the TWH_MART database using the twh_create_datasource script, perform the following tasks:
1. On the Crystal Enterprise server machine, open an IBM DB2 command window: Start -> Programs -> IBM DB2 -> Command Window.
2. Copy both the twh_create_datasource.bat and ODBCcfg.exe files from the disk1\tools directory of the Tivoli Data Warehouse 1.2 installation media to a temporary directory on the Crystal Enterprise server machine.
3. Run the twh_create_datasource script using the following syntax, as shown in Example 3-4.
twh_create_datasource <DBtype> <ID> <odbcname> <DBname> <SRVname> <390LocalDBName> <port>
Where:
<DBtype>          Can be set to DB2UDB or DB2390, depending on the location of the database.
<ID>              A unique identifier for the local node name. The script creates node names following the naming convention TDWCS%ID%.
<odbcname>        The ODBC data source name.
<DBname>          The data mart database name.
<SRVname>         The IBM DB2 server on which the data mart database resides. This must be the fully qualified host name.
<390LocalDBName>  For databases on z/OS only. Specifies the local database name.
<port>            The port number used to connect to the IBM DB2 server.
Example 3-4 twh_create_datasource script
C:\Temp>twh_create_datasource.bat DB2UDB 09 TWH_MART TWH_MART tdw009.itsc.austin.ibm.com 50000
Creating DB2/UDB datasource TWH_MART
C:\Temp>db2cmd /w /c /i db2 catalog tcpip node TDWCS09 remote tdw009.itsc.austin.ibm.com server 50000
DB20000I The CATALOG TCPIP NODE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
C:\Temp>db2cmd /w /c /i db2 catalog database TWH_MART at node TDWCS09 authentication server
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
C:\Temp>C:\Temp\ODBCcfg.exe DB2 TWH_MART TWH_MART
No Username was provided. Skipping connection test.
C:\Temp>twh_create_datasource.bat DB2390 66 TMART1 TMART1 wtsc66oe.itso.ibm.com TMART1 33768
Creating DB2/390 datasource TMART1
C:\Temp>db2cmd /w /c /i db2 catalog tcpip node TDWCS66 remote wtsc66oe.itso.ibm.com server TMART1
DB20000I The CATALOG TCPIP NODE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
C:\Temp>db2cmd /w /c /i db2 catalog database TMART1 at node TDWCS66 authentication dcs
DB20000I The CATALOG DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
C:\Temp>db2cmd /w /c /i db2 catalog dcs database TMART1 as 33768 parms ',,INTERRUPT_ENABLED,,,,,'
DB20000I The CATALOG DCS DATABASE command completed successfully.
DB21056W Directory changes may not be effective until the directory cache is refreshed.
C:\Temp>C:\Temp\ODBCcfg.exe DB2 TMART1 TMART1
No Username was provided. Skipping connection test.
C:\Temp>
Figure 3-50 Overview of the case study environment: Tivoli environment and z/OS environment (host name wtsc66oe) data sources, the central data warehouse (TWH_CDW), the control database (TWH_MD), agent sites, and data marts
There are two steps to be performed in order to create remote warehouse agent sites:
1. Install IBM DB2 Warehouse Manager on every server that will become a warehouse agent site. In our case study scenario, IBM DB2 Warehouse Manager must be installed on the central data warehouse server (tdw004) and on the data mart server (tdw009).
2. On the machine that will become the warehouse agent site, use the Tivoli Data Warehouse installation wizard to register and enable the warehouse agent to run ETLs.
After the warehouse agents have been registered with the control server, the following steps should be performed:
1. On the remote agent site: Catalog the databases that the remote agent is supposed to use.
2. On the remote agent site: Make available the warehouse enablement pack files.
3. On the data warehouse control server site: Configure the ETL processes to use the remote agent.
Chapter 5, IBM Tivoli NetView Warehouse Enablement Pack on page 161 provides details on the steps needed to enable the WEP to use the remote agent site, as described above. Warehouse agents are supported on Windows and UNIX systems only. If you are using IBM DB2 databases on a z/OS system, you must use a warehouse agent on another computer in your deployment.
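The cataloging step on the remote agent site uses the standard DB2 catalog commands, as shown earlier in Example 3-4. The following is a minimal sketch, assuming the case study central data warehouse host and port (tdw004, port 50000); the node name TDWNODE is a hypothetical label, so substitute values appropriate to your environment:

```
db2 catalog tcpip node TDWNODE remote tdw004.itsc.austin.ibm.com server 50000
db2 catalog database TWH_CDW at node TDWNODE
db2 terminate
```

After cataloging, connecting to TWH_CDW with the IBM DB2 instance owner user ID confirms that the remote agent can reach the database.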
Also, on the Windows Services panel, stop the following services:
DB2-DB2CTLSV
DB2 JDBC Applet Server
DB2 License Server
DB2 Security Server
Warehouse server
Warehouse logger
2. Load the IBM DB2 Warehouse Manager installation media. 3. Select Start -> Run. Type in D:\setup.exe and click OK to start the installation. From the Installation window, select Install.
4. The Select Products window opens. Make sure that DB2 Warehouse Manager is selected. Click Next. 5. The Select Installation Type window opens. Select Custom. 6. On the Select Components window, make sure that only Warehouse Agent and Documentation are selected, as shown in Figure 3-51. Click Next.
7. At the Start Copying Files window, you can review the installation options. Click Next.
8. When the setup completes the installation process, click Finish.
9. Reboot the machine.
2. Log in as a user with root authority.
3. Mount the IBM DB2 Warehouse Manager installation media.
4. Change to the directory where the installation media is mounted.
5. Enter the ./db2setup command.
6. Within the Install DB2 V7 Menu select DB2 Data Warehouse Agent as shown in Figure 3-52.
7. In the Configuration - DB2 Data Warehouse Agent Services menu, accept the default service names and port numbers.
8. In the Create DB2 Services menu, select Do not create a DB2 Instance and Do not create the Administration Server, as shown in Figure 3-53. Then click OK.
9. In the DB2 Setup Utility window, your settings are summarized. Click Continue and the installation process starts.
10. After the installation process completes, check the DB2 Setup Utility window for error messages. Click OK to finish the installation of the DB2 warehouse agent.
On UNIX
3. In the Welcome window, click Next.
4. The Tivoli Common Logging Directory window displays the name of the Tivoli common logging directory. Click Next.
5. Select Create a remote agent site. Optionally, change the installation directory; the default directory is %ProgramFiles%\TWH. We changed it to C:\TWH, as shown in Figure 3-54. Click Next.
6. A warning message is displayed to emphasize the need to fulfill all of the prerequisite tasks required for creating remote agent sites. This is shown in Figure 3-55.
7. Specify the existing IBM DB2 instance owner user ID and password to connect to the local IBM DB2 database. Click Next.
8. In the Connection to Remote control server window, specify the following information, as shown in Figure 3-56:
The fully qualified host name of the TDW control server of the Tivoli Data Warehouse 1.2 environment
The port number for IBM DB2 on the TDW control server
The IBM DB2 instance owner user ID and password to connect to IBM DB2 on the TDW control server
Then click Next.
9. The Summary window indicates that you are creating an agent site. Click Install to start the installation.
10. In the Progress window, review the progress of the program. When the program completes, the Installation Results window, shown in Figure 3-57, contains either a successful completion notice or messages describing problems. Make sure the window does not list any warnings or errors, and then click Next.
11. Click Finish to exit. You must restart the system before the agent site can be used by any warehouse packs.
For our case study installation, Example 3-5 shows the output of the twh_list_cs.bat command. You will find the host name and database name in Figure 3-50, which gives an overview of our case study scenario.
Example 3-5 Verify control server (twh_list_cs)
C:\TWH\tools\bin>twh_list_cs.bat
Listing the control server information in the Tivoli Data Warehouse registry.
Control Server:
Control Server Database Server Information:
Host name: tdw003.itsc.austin.ibm.com
Vendor: DB2 UDB
Port: 50000
Database name: TWH_MD
Control Server Database Client Information:
Node name: Not applicable.
Database alias: TWH_MD
ODBC connection name: TWH_MD
Tivoli Data Warehouse component version: 1.2.0.0
The Tivoli Data Warehouse is built on top of the IBM DB2 Universal Database Enterprise Edition Data Warehouse. To check whether the DB2 warehouse services are running, select Start -> Programs -> Administrative Tools -> Services and look for the services named Warehouse logger and Warehouse server. Both must be up and running. Figure 3-58 shows the Services window focusing on the DB2 data warehouse services.
Verify Tivoli Data Warehouse central data warehouse databases
Use the twh_list_cdws.bat batch file to display information about the central data warehouse databases. Example 3-6 shows the output for the case study installation. Both central data warehouses are displayed. The second central data warehouse, installed on z/OS, has TCDW1 as its database alias and TWH_CDW1 as its ODBC connection name.
Example 3-6 Verify central data warehouse (twh_list_cdws)
C:\TWH\tools\bin>twh_list_cdws.bat
Listing the central data warehouse information in the Tivoli Data Warehouse registry.
Central Data Warehouse:
Central Data Warehouse Database Server Information:
Host name: tdw004.itsc.austin.ibm.com
Vendor: DB2 UDB
Port: 50000
Database name: TWH_CDW
Control Server Database Client Information:
Node name: TDW1
Database alias: TWH_CDW
ODBC connection name: TWH_CDW
Tivoli Data Warehouse component version: 1.2.0.0
Enabled/Disabled: E
Central Data Warehouse:
Central Data Warehouse Database Server Information:
Host name: wtsc66oe.itso.ibm.com
Vendor: DB2 390
Port: 33768
Database name: DB2E
Control Server Database Client Information:
Node name: TDW3
Database alias: TCDW1
ODBC connection name: TWH_CDW1
Tivoli Data Warehouse component version: 1.2.0.0
Enabled/Disabled: E
Verify Tivoli Data Warehouse data mart databases
To check on the data warehouse data mart databases, use the twh_list_marts.bat command. Example 3-7 shows the output for our case study scenario. You will notice the difference between DB2 UDB for OS/390 and z/OS databases and IBM DB2 Universal Database Enterprise Edition databases. However, you see no differences between Windows and UNIX based databases. In our case study, the TWH_MART database resides on an AIX box (refer to Figure 3-50 on page 127).
Example 3-7 Verify data mart databases
C:\TWH\tools\bin>twh_list_marts.bat
Listing the data mart information in the Tivoli Data Warehouse registry.
Host name: tdw009.itsc.austin.ibm.com
Vendor: DB2 UDB
Port: 50000
Database name: TWH_MART
Control Server Database Client Information:
Node name: TDW2
Database alias: TWH_MART
ODBC connection name: TWH_MART
Tivoli Data Warehouse component version: 1.2.0.0
Enabled/Disabled: E
Data Mart:
Data Mart Database Server Information:
Host name: wtsc66oe.itso.ibm.com
Vendor: DB2 390
Port: 33768
Database name: DB2E
Control Server Database Client Information:
Node name: TDW3
Database alias: TMART1
ODBC connection name: TWH_MART1
Tivoli Data Warehouse component version: 1.2.0.0
Enabled/Disabled: E
Verify remote agent sites
To verify the remote agent sites, enter the command twh_list_agentsites.bat. Example 3-8 shows the output for our case study installation. Note that the command does not return the remote agents for internal use.
Example 3-8 Verify remote agent site (twh_list_agentsites)
C:\TWH\tools\bin>twh_list_agentsites.bat
Listing the agent site information in the Tivoli Data Warehouse registry.
Local Agent Site:
Host name: tdw003.itsc.austin.ibm.com
Version: 1.2.0.0
Enabled/Disabled: Enabled
Warehouse Pack Usage:
Local Agent Site:
Host name: tdw004.itsc.austin.ibm.com
Version: 1.2.0.0
Enabled/Disabled: Enabled
Warehouse Pack Usage:
Local Agent Site:
Host name: tdw009.itsc.austin.ibm.com
To view the data warehouse remote agent sites, you may also select Start -> Programs -> IBM DB2 -> Control Center from the Windows desktop to open the DB2 Control Center. From the DB2 Control Center, select Tools -> Data Warehouse Center to open the Data Warehouse Center, where you select Administration -> Agent Sites. Figure 3-59 shows the result for the case study installation. All remote agents are listed in this view. However, in addition to the agent sites returned by twh_list_agentsites, Figure 3-59 also shows the agents for internal use.
Verify the installation of the Crystal Enterprise Professional for Tivoli Server (twh_update_rptsrv)
To verify the installation of the Crystal Enterprise Professional for Tivoli Server and its registration with the Tivoli Data Warehouse control server, enter the command twh_update_rptsrv -l from a command prompt. Example 3-9 shows the output of this command for the case study installation.
Example 3-9 Verify Crystal Enterprise Professional for Tivoli installation
C:\TWH\tools\bin>twh_update_rptsrv -l
Report Server Host Name: tdw002.itsc.austin.ibm.com
Report Server User ID: Administrator
Verify users, sources, and targets
Enter the command twh_update_userinfo -l to get an overview of the user names used and the available sources and targets.
Example 3-10 Verify data user
C:\TWH\tools\bin>twh_update_userinfo -l
tdw003.itsc.austin.ibm.com: 50000: TWH_MD: db2admin:
ANM_TWH_MD_Target
CDW_TWH_MD_Source
tdw004.itsc.austin.ibm.com: 50000: TWH_CDW: db2admin:
AN1_TWH_CDW_Target
ANM_TWH_CDW_Source
ANM_TWH_CDW_Target
CDW_TWH_CDW_Source
CDW_TWH_CDW_Target
tdw009.itsc.austin.ibm.com: 50000: TWH_MART: db2inst1:
ANM_TWH_MART_Target
CDW_TWH_MART_Source
CDW_TWH_MART_Target
wtsc66oe.itso.ibm.com: 33768: DB2E: tivo01:
CDW_TWH_CDW1_Source
CDW_TWH_CDW1_Target
CDW_TWH_MART1_Source
CDW_TWH_MART1_Target
CDWTD0002I The command that changes user IDs and passwords (twh_update_userinfo) completed.
Figure 3-60 Verify Remote Agents on Tivoli Data Warehouse Control Center
Example 3-11 twh_configwep command output
C:\TWH\tools\bin>twh_configwep -u db2admin -p <your password> -f list
CDWCW0002I The twh_config_wep.pl program started.
Installed warehouse enablement packs:
CODE  VERSION        ROLE  NAME
AN1   Version 1.1.0  V11   IBM Tivoli Netview
ANM   Version 1.1.0  V11   IBM Tivoli Netview

Source datasources used:
CODE  DATASOURCE
AN1   ANM_SOURCE
ANM   ANM_SOURCE

Central data warehouse datasources used by warehouse enablement packs:
CODE  DATASOURCE  CLIENT_HOSTNAME
AN1   TWH_CDW     localhost
ANM   TWH_CDW     localhost

Data mart datasources used by warehouse enablement packs:
CODE  DATASOURCE  CLIENT_HOSTNAME
ANM   TWH_MART    localhost
CDWCW0001I The twh_config_wep.pl program completed successfully.
Chapter 4.
Buffer pools
A buffer pool is an area of memory into which database pages of user table data, index data, and catalog data are temporarily moved from disk storage. DB2 agents read and modify data pages in the buffer pool. The buffer pool is a key influence on overall database performance, because data can be accessed much faster from memory than from disk. If most of the data required by applications is present in the buffer pool, less time is needed to access this data, which improves performance. Buffer pools can be defined with varying page sizes, including 4K, 8K, 16K, and 32K.
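As an illustrative sketch of defining a buffer pool with a non-default page size, the following DDL could be used (the buffer pool name BP32K, tablespace name TS32K, container path, and sizes are hypothetical examples, not values from the Tivoli Data Warehouse installation):

```
-- create a buffer pool of 1000 pages with a 32K page size (about 32 MB)
CREATE BUFFERPOOL BP32K SIZE 1000 PAGESIZE 32K;

-- a tablespace must be assigned a buffer pool with a matching page size
CREATE TABLESPACE TS32K PAGESIZE 32K
  MANAGED BY SYSTEM USING ('/db2/ts32k')
  BUFFERPOOL BP32K;
```

Larger page sizes can reduce the number of I/O operations for tables with long rows, at the cost of wasting space when rows are short.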
Prefetchers
Prefetchers retrieve data from disk and move it into the buffer pool before applications need the data. For example, applications needing to scan through large volumes of data would have to wait for data to be moved from disk into the buffer pool if there were no prefetchers. Prefetchers are designed to improve the read performance of applications as well as utilities such as backup and restore, since they prefetch index and data pages into the buffer pool, thereby reducing the time spent waiting for I/O to complete. The number of prefetchers may be controlled by the database configuration parameter NUM_IOSERVERS.
Page cleaners
Page cleaners make room in the buffer pool before agents and prefetchers read pages from disk storage and move them into the buffer pool. For example, if an application has updated a large amount of data in a table, many of the updated data pages in the buffer pool may not yet have been written to disk storage. Such pages are called dirty pages. Since prefetchers cannot place fetched data pages onto the dirty pages in the buffer pool, these dirty pages must first be flushed to disk storage and become clean pages, so that prefetchers can find room for data pages fetched from disk storage. The number of page cleaners may be controlled by the database configuration parameter NUM_IOCLEANERS.
The NUM_IOCLEANERS parameter allows you to specify the number of asynchronous page cleaners for a database. These page cleaners write changed pages from the buffer pool to disk before the space in the buffer pool is required by a database agent. As a result, database agents should not have to wait for changed pages to be written out so that they may use the space in the buffer pool. This improves overall performance of the database applications. If you set the parameter to zero (0), no page cleaners are started and as a result, the database agents will perform all of the page writes from the buffer pool to disk. This parameter can have a significant performance impact on a database stored across many physical storage devices, since in this case there is a greater chance that one of the devices will be idle. If no page cleaners are configured, your applications may encounter periodic log full conditions.
Logs
Changes to data pages in the buffer pool are logged. Agent processes updating a data record in the database update the associated page in the buffer pool, and write a log record into a log buffer. To optimize performance, neither the updated data pages in the buffer pool nor the log records in the log buffer are written to disk immediately. They are asynchronously written to disk by the page cleaners and the logger, respectively. The logger and the buffer pool manager cooperate to ensure that an updated data page is not written to disk storage before its associated log record is written to the log. This ensures database recovery to a consistent state from the log in the event of a crash such as a power failure. A number of parameters can be used here:
logfilsiz: This parameter represents the size of the log files.
logprimary: The primary log files establish a fixed amount of storage allocated to the recovery log files. This parameter allows you to specify the number of primary log files to be pre-allocated.
logsecond: This parameter specifies the number of secondary log files that are created and used for recovery log files (only as needed).
logbufsz: This parameter specifies the amount of database shared memory to use as a buffer for log records before writing them to disk.
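The log parameters above are database configuration parameters set with the UPDATE DATABASE CONFIGURATION command. As a hedged sketch, using the case study central data warehouse database name (the values shown are arbitrary illustrations, not tuning recommendations):

```
db2 update db cfg for TWH_CDW using LOGFILSIZ 1000
db2 update db cfg for TWH_CDW using LOGPRIMARY 10
db2 update db cfg for TWH_CDW using LOGSECOND 20
db2 update db cfg for TWH_CDW using LOGBUFSZ 64
```

Changes to these parameters take effect only after all applications have disconnected from the database.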
Therefore, the maximum heap size needs to be low enough to contain the heap within physical memory. MON_HEAP_SZ indicates the amount of memory (in 4K pages) allocated for database monitor data (at db2start). The amount of memory needed will depend on the number of snapshot monitor switches that are turned on and the number of active event monitors. If the memory heap is insufficient, an error will be returned when trying to activate a monitor, and it will be logged to the db2diag.log file.
Database heap
There is one database heap per database, and the database manager uses it on behalf of all applications connected to the database. It contains control block information for tables, indexes, table spaces, and buffer pools. It also contains space for event monitor buffers, the log buffer (logbufsz), and temporary memory used by utilities. Therefore, the size of the heap will be dependent on a large number of variables. The control block information is kept in the heap until all applications disconnect from the database.
I/O servers
Configuring enough I/O servers with the num_ioservers configuration parameter can greatly enhance the performance of queries for which prefetching of data can be used. To maximize the opportunity for parallel I/O, set num_ioservers to at least the number of physical disks in the database. It is better to overestimate the number of I/O servers than to underestimate. If you specify extra I/O servers, these servers are not used, and their pages are paged out. As a result, performance does not suffer. Next, we describe some further performance features that can affect the SQL performance with indexes.
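The prefetcher and page cleaner counts discussed above can be set in the same way as the other database configuration parameters. As an illustrative sketch for a database spread across eight physical disks (the database name and values are examples only):

```
-- at least one I/O server per physical disk; overestimating is safe
db2 update db cfg for TWH_CDW using NUM_IOSERVERS 8
db2 update db cfg for TWH_CDW using NUM_IOCLEANERS 4
```

The current settings can be reviewed with db2 get db cfg for TWH_CDW.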
RUNSTATS
This command updates statistics about the physical characteristics of a table and the associated indexes. The optimizer uses these statistics when determining access paths to the data. This utility should be called when a table has had many updates, or after reorganizing a table. This command should be run on all tables, and a typical usage would be:
runstats on table <table name> with distribution and detailed indexes all
During our testing, we developed a script which will perform runstats on every table, in every database, in the instance pointed to by the environment variable DB2INSTANCE.
REORGCHK
This command line utility calculates statistics on the database to determine if tables or indexes, or both, need to be reorganized or cleaned up. It can either use existing statistics or perform runstats to create up-to-date statistics.
REORG
This command reorganizes an index or a table. The index option reorganizes all indexes defined on a table by rebuilding the index data into unfragmented, physically contiguous pages. The table option reorganizes a table by reconstructing the rows to eliminate fragmented data and by compacting information. If you specify an index as part of the table reorganization, the table will be physically ordered by that index. In Version 7 of DB2, neither table nor index reorganization can take place online.
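A typical maintenance sequence combining these three utilities might look like the following sketch (the table and index names are placeholders; the database name is taken from the case study):

```
db2 connect to TWH_CDW
-- refresh statistics and report which objects need reorganizing
db2 reorgchk update statistics on table all
-- reorganize a fragmented table, physically ordering it by an index
db2 reorg table DB2ADMIN.MYTABLE index DB2ADMIN.MYINDEX
-- collect fresh statistics for the optimizer after the reorganization
db2 runstats on table DB2ADMIN.MYTABLE with distribution and detailed indexes all
db2 connect reset
```

Because reorganization is offline in DB2 Version 7, such a sequence should be scheduled in the same off-peak windows as the ETL processes.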
Data distribution
If you have a system that has multiple physical disk drives available to DB2, you would probably want to split out the DB2 data across these multiple disks to reduce I/O contention as much as possible. In an ideal environment this could mean separate disks for:
Database logs
Mirrored database logs (if this option has been set up in the database configuration file)
Temporary space (used for sorting data and storing intermediate result sets)
Table data
Index data
Due to cost and resource constraints, this ideal scenario may not be completely possible. However, you should try to maximize the utilization of the available disks in your system. As an example, if you only have two physical drives, keep the logs on one drive and create the database and the physical data on the other.
Tablespace type
Once we have created a DB2 system and chosen our disk distribution, we can decide how we want to store the data. DB2 has two types of tablespaces to store data: System Managed Space (SMS) and Database Managed Space (DMS). SMS is the default tablespace type, and three tablespaces of this type are created after the installation of DB2. These tablespaces allow the operating system to allocate and manage the space where the table data resides. DMS tablespaces give DB2 the ability to control storage space. The amount of space given to a DMS tablespace must exist at creation time, since its containers are allocated up front, as opposed to SMS tablespaces, which grow within the specified file system. Both types of tablespaces have their own advantages and disadvantages, but in general SMS is easier to manage, whereas DMS can be faster. DMS is also more flexible: the regular data, large data, and indexes can be split between different DMS tablespaces, and DMS tablespaces can be created on raw, unformatted disks. Recent enhancements to DMS tablespaces mean that it is now possible to add containers to a tablespace, remove containers, and reduce the size of containers. The Create Tablespace Wizard in the Control Center guides you through creating a tablespace.
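The difference between the two types can be sketched in DDL as follows (the tablespace names, container paths, and size are hypothetical examples):

```
-- SMS: the operating system manages the space; the tablespace
-- grows as needed within the file system
CREATE TABLESPACE TS_SMS
  MANAGED BY SYSTEM USING ('/db2/ts_sms');

-- DMS: DB2 manages the space; the file container is pre-allocated
-- at creation time (here 20000 4K pages, about 80 MB)
CREATE TABLESPACE TS_DMS
  MANAGED BY DATABASE USING (FILE '/db2/ts_dms.dat' 20000);
```

With DMS, a table's regular data, long data, and indexes can each be directed to a different tablespace in the CREATE TABLE statement, which is not possible with SMS.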
1. Navigate through the path Start -> Settings -> Control Panel -> Network and Dial-up Connections -> Local Area Connection. 2. Click Properties on the Local Area Connection window, and select File and Printer Sharing for Microsoft Networks. 3. Select the option Maximize data throughput for networking applications.
Today most systems based on the 32-bit Intel Architecture (IA-32) support the Physical Address Extensions (PAE) capabilities of the IA-32. PAE provides operating system software with an instruction set to address physical memory above 4 GB. Operating systems that take advantage of PAE can address up to 64 GB of physical memory. Given the memory extensions of the IA-32, the primary factor in memory selection will most likely not be the cost of the memory itself, but rather the incremental cost of moving from one edition of the Windows operating system to the next in order to address more physical memory.
Processor: Most systems are limited by the total number of central processing units (CPUs) they can support. Typically a 4-way system cannot be upgraded to an 8-way system unless it is indeed a true 8-way that was populated with only 4 processors. There are a few systems on the market today, such as the IBM x440 and the Unisys ES7000, that can be expanded beyond the total number of original processors by adding additional processor expansion modules. Besides quantity and speed, another important consideration in terms of processor selection is the size of the internal L2 cache. Slower processors with larger internal caches have shown significant throughput advantages for database applications over faster processors with smaller internal caches. Another factor to consider when selecting the number of processors is the operating system software cost: there are incremental licensing costs associated with each Windows 2000 or Windows Server 2003 edition to support more processors.
Storage: The disk subsystem has been an area of much debate over the last several years. Most disk subsystems implement some form of redundancy that has traditionally favored recoverability over performance. In recent years, improvements in technology have overcome many of the performance limitations imposed by implementing redundant disk arrays.
Performance characteristics of disk controllers include speed, throughput, channels, and cache. Care should be taken in the placement of disk controllers in the system. Although most disk adapters are backwards compatible, you want to match the disk controller speed with that of the system's PCI bus. You should avoid placing faster 64-bit 66 MHz disk controllers in slower 32-bit 33 MHz PCI slots. You should also consider the number of disk controllers in your system. A single disk controller with several I/O channels might be capable of driving your subsystem, but it can quickly saturate a single PCI bus, not to mention introduce a single point of failure into your system. If possible, you should also avoid placing disk controllers on PCI buses populated with other I/O-intensive resources.
Performance characteristics of disk subsystems include disk speed, size, cache, and the number of physical disks in the subsystem. You should favor a subsystem with a large number of small drives over a small number of large drives. If this is impractical, plan for growth by choosing a large number of large drives. Best performance will be achieved for database applications with a large number of physical disks (5-10) per processor. For example, a large 32-way SMP server should be attached to a disk subsystem with a minimum of 160 physical disks, not including parity disks or hot spares. With an average 18-GB disk you would have almost 3 TB of total storage. At first this might seem impractical for a 1 TB database, but you need to consider space requirements for loading files and storing the most recent backup image before copying to tape. Hardware implementations of disk arrays are now commonplace on Intel based servers. Modern disk controllers support RAID levels 0, 1, 5, and 10, sometimes referred to as 0+1. As with most performance decisions, there is always a give (cost) and take (performance) associated with choosing which RAID level to implement.
For entry level systems, Windows Server 2003 Standard Edition will provide support for 32-bit Intel servers with up to 2 CPUs and 4 GB memory. Windows Server 2003 Enterprise Edition will provide support for both 32-bit and 64-bit Intel servers with up to 8 CPUs. The 32-bit version will support 32 GB of memory and the 64-bit version will support up to 64 GB of memory. The Microsoft Cluster Service will be included and provide support for up to 8 nodes in a single cluster. Windows Server 2003 Datacenter Edition will provide support for both 32-bit and 64-bit Intel servers with up to 32 CPUs. The 32-bit version will support 64 GB of memory and the 64-bit version will support up to 128 GB of memory. The Microsoft Cluster Service will be included and provide support for up to 8 nodes in a single cluster.
Distributed install
The Tivoli Data Warehouse components can be, but do not need to be, installed on the same systems as other Tivoli software or on the systems where the operational data stores reside. The operational data stores themselves are on systems that are not part of the Tivoli Data Warehouse deployment; the TEC database, for example, holds operational data.
Crystal Enterprise should reside on a system that does not have other Tivoli Data Warehouse or Tivoli software products on it. Users access Crystal Enterprise reports using a Web browser from any system. Other types of data analysis tools should also be located on systems outside your Tivoli Data Warehouse deployment.
ETL routines
The scheduling of data warehouse ETLs should be done during off-peak hours to avoid impacting the performance of your operational data stores. For distributed environments across geographic locations, you may consider putting central data warehouse databases at each location, as each location may have different off-peak hours.
For example, if your operational data is on systems in the United Kingdom and the United States, you might put a central data warehouse database on a system in each location. This enables you to schedule the central data warehouse ETL for each system at an appropriate off-peak time. The time taken for ETLs to complete depends on many factors, including the amount of data they have to process, the speed of the databases in which the source and target data reside, the performance of the network, and so forth. Ensure that the default scheduling interval is changed to an interval appropriate for your environment and data volume. The ETL processes that update tables in the central data warehouse should not all be scheduled at the same time: there might be unknown dependencies in the data, and concurrent updates to the same tables can cause performance problems, depending on your environment. Data analysis programs can read directly from central data warehouse databases without using data marts, but this use is not supported. Analyzing historical data directly from the central data warehouse database can cause performance problems for all applications using the central data warehouse.
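As an illustration of why per-location warehouses help, the same local off-peak hour maps to different UTC hours in each location. This sketch assumes fixed winter offsets (UK UTC+0, US Eastern UTC-5) and an arbitrary 2 a.m. off-peak slot; the function is our own, not part of the product.

```python
# Sketch: map a local off-peak hour to the UTC hour a scheduler would use.
# Fixed winter offsets are assumed (UK UTC+0, US Eastern UTC-5); the 2 a.m.
# off-peak slot is an illustrative assumption, not a product default.

def offpeak_utc(local_hour: int, utc_offset: int) -> int:
    """UTC hour corresponding to local_hour in a zone at utc_offset hours."""
    return (local_hour - utc_offset) % 24

print(offpeak_utc(2, 0))   # United Kingdom -> 2  (02:00 UTC)
print(offpeak_utc(2, -5))  # US Eastern     -> 7  (07:00 UTC)
```

Scheduling each location's central data warehouse ETL at its own off-peak UTC hour avoids impacting either set of operational data stores.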
Sample tuning
In a recent deployment of Tivoli Data Warehouse, approximately 85% of the execution time of the ETL processes was consumed by the triggers and corresponding sequences. With optimal tuning (without changing SQL statements), performance was improved by 82%; with changes to the SQL statements, performance can be improved by 90%.
Each environment is different, and these performance improvements cannot be guaranteed, but our recommendations are as follows:
- Distribute the database across the three available hard disks (the test server in this scenario had three hard disks).
- Use DMS tablespaces instead of SMS tablespaces.
- Increase the caches of the sequences (approximately 5000 may be enough, but this has to be tested).
- Execute runstats after loading data into the staging tables.
- Distribute the temporary tablespace across the three available hard disks as well.
Part 2
Chapter 5.
TDW003 TDW004
TDW Control Server 1.2 TDW Central Warehouse 1.2 (remote agent site) TDW Data Mart 1.2 Crystal Enterprise Server 9 NetView 7.1.4
DB2 Server 7.2FP10a DB2 Server 7.2FP10a DB2 Server 7.2FP10a DB2 Client 7.2FP10a -
Figure 5-1 summarizes the distributed Tivoli Data Warehouse environment used in this chapter.
[Figure 5-1 shows the agent site, the TWH_CDW and TWH_MD databases, and the TDW control server and Tivoli NetView systems (Windows 2000 Server hosts TDW008 and TDW003).]
The IBM Tivoli NetView Warehouse Enablement Pack code is provided with the IBM Tivoli NetView 7.1.4 software. Figure 5-2 illustrates the data flow when NetView is integrated into a Tivoli Data Warehouse environment. We start with a brief description of the processes and their control. Node availability information is stored by the NetView tdwdaemon process in the NetView source database. The NetView snmpcol daemon writes performance information gathered by SNMP polling (CPU load, number of processes, and so on) into the NetView source database. The data upload to the NetView source database is controlled by the NetView server, illustrated in Figure 5-2 by broken lines. The data flow within the Tivoli Data Warehouse is controlled by the Tivoli Data Warehouse control server. The generation and publishing of the NetView-specific reports is controlled by the Crystal Enterprise server.
Figure 5-2 IBM Tivoli NetView Warehouse Enablement Pack data flow
5.3 Prerequisites
The stated prerequisites, as per the IBM Tivoli NetView Warehouse Enablement Pack Implementation Guide - Version 1.1.0, SC32-1237-00, which can be found in the \tedw_apps_etl\anm\pkg\v110\doc directory of the enablement pack software, are:
- IBM Tivoli NetView Version 7.1.4
- IBM DB2 Universal Database Enterprise Edition Version 7.2
- IBM DB2 Universal Database Enterprise Edition Version 7.2 Fix Pack 6
- Tivoli Enterprise Data Warehouse required e-fixes to IBM DB2 UDB V7 Fix Pack 6 (1.1-TDW-0002)
- Tivoli Enterprise Data Warehouse Version 1.1
- Tivoli Enterprise Data Warehouse 1.1 Fix Pack 2 (1.1-TDW-FP02)
In this case study scenario chapter we use Tivoli Data Warehouse 1.2 in a previously built distributed environment, as described in Chapter 3, Getting Tivoli Data Warehouse 1.2 up and running on page 71. Here is a list of the products used and their releases in the case study scenario:
- IBM Tivoli NetView 7.1.4
- IBM Tivoli NetView 7.1.4 early fix PJ29597
- IBM DB2 Universal Database Enterprise Edition Version 7.2 Fix Pack 10a (shipped with Tivoli Data Warehouse 1.2)
- IBM Tivoli NetView Warehouse Enablement Pack Version 1.1.0
- Tivoli Data Warehouse Version 1.2
- IBM Tivoli NetView 7.1.4
- IBM DB2 Universal Database Enterprise Edition Version 7.2 Fix Pack 6
We will be using DB2 UDB 7.2 Fix Pack 10a on a separate server from the NetView server.
- Install IBM Tivoli NetView 7.1.4.
- Install the DB2 client Version 7.2 Fix Pack 10a on the NetView server machine. We used the installation media shipped with Tivoli Data Warehouse 1.2.
The installation prompts for the NetView data export configuration, with the defaults and the values we used, are:
- Name of the database that will be created during installation to store the NetView availability data: default NETVIEW; we used NETVIEW.
- A DB2 user ID with create authority: default db2admin; we used db2inst1.
- The DB2 user ID password: as required.
- The number of days the availability data will be retained before being purged: default 90; we left the default.
- The time of day to load SmartSet information: default 23; we left the default.
- The names of the SmartSets for which availability data will be collected: default Routers. Note: The Router SmartSet is required.
- The directory name where the enablement pack will be installed: default \usr\ov\dwpack; we left the default.
The enablement pack installation also prompts again for the name of the database that stores the NetView availability data (NETVIEW), a DB2 user ID with create authority (default db2admin; we used db2inst1), and the DB2 password.
4. Select OK to proceed. In our case study installation, we used a remote DB2 server residing on AIX host tdw009.
5. Select Create Database in the popup shown in Figure 5-4. The NETVIEW source database and its tables are created.
6. Select Yes in the popup window shown in Figure 5-5 to register and start the Data Warehouse daemon (tdwdaemon).
Figure 5-5 NetView Configure data export to DB2 - register and start tdwdaemon
During the configuration a number of logs may be written, depending on whether any problems are encountered during installation. The log files we discovered were:
\usr\ov\dwpack\DWP_Install_stdout.log
\usr\ov\log\TDWError_mmddhh.log
\usr\ov\log\tdwdaemon.log
Check these logs for unusual messages.
Verifying database updates
We verified that data was actually being written to the availability database by issuing the following command:
db2 select count(*) from netview.netview_nodes
The greater-than-zero count returned (as shown in Example 5-1) indicated that data was in fact being written.
Example 5-1 Verify NetView source database updates
C:\DB2\SQLLIB\BIN>db2 connect to NETVIEW user db2inst1 using <DB password>

   Database Connection Information

 Database server        = DB2/6000 7.2.8
 SQL authorization ID   = DB2INST1
 Local database alias   = NETVIEW
If the count had been zero, we could have tried the following sequence of actions: 1. Stop the tdwdaemon daemon:
ovstop tdwdaemon
7. Verify that data is being written by checking the count on the netview.netview_nodes table again:
db2 select count(*) from netview.netview_nodes
Figure 5-6 shows the NetView SmartSet desktop. Some of the network nodes are not available; therefore, the CriticalNodes SmartSet is displayed in red.
We now explain how to create new NetView SmartSets. However, you may need to vary the steps to meet your requirements.
From the NetView desktop, select Submap -> New SmartSet... from the toolbar to open the SmartSets configuration window shown in Figure 5-7. There, select the Advanced folder.
In the Combine Find Condition field you can insert conditions that meet your needs. We inserted the following phrase (refer to Figure 5-7): (SNMP sysObjectID = 1.3.6.1.4.1.311.1.1.3.1.2) Thus the SmartSet is populated by all network nodes whose SNMP enterprise ID is equal to the specified value; in this case, all Microsoft Windows 2000 servers with the SNMP option populate this SmartSet.
Table 5-4 shows the configuration for all three SmartSets we created for our case study. The SmartSet IBM contains all nodes running an IBM operating system. (However, in our case study, NetView found only AIX hosts.) The SmartSet TDW contains all nodes whose host names start with the letters tdw. All private enterprise object IDs are listed at the following Web site:
http://www.iana.org/assignments/enterprise-numbers

Table 5-4 Case study SmartSets attributes (columns: SmartSet Name, Combine Find Condition field)
After filling in the Combine Find Condition field shown in Figure 5-7, select Create SmartSet.... A popup appears as shown in Figure 5-8. Insert the name of your SmartSet and, optionally, a descriptive text. We used the name Microsoft for the SmartSet in our case study. Then click OK to create the SmartSet.
If you open the folder SmartSets in the SmartSet configuration window again, the newly created SmartSets are now listed as shown in Figure 5-9. The highlighted line shows the new Microsoft SmartSet.
The new SmartSets are also displayed on the NetView SmartSet desktop as shown in Figure 5-10.
Double-click the Microsoft SmartSet icon to open the container, as shown in Figure 5-11. NetView polls the enterprise object ID specified in Table 5-4 on page 173 via SNMP; therefore, only SNMP-managed nodes populate the Microsoft SmartSet. Windows 2000 servers without the SNMP option, or with different SNMP community names, are not included.
The default value is Routers. The Routers SmartSet is required; if you change the SMARTSETS setting, make sure that Routers is included in your list. If you specify All, availability data for all network nodes is stored in the source database. In our case study installation, we created new NetView SmartSets called TDW, IBM, and Microsoft, and selected these along with the required Routers SmartSet, as shown in Example 5-2.
SMARTSET_LOAD_TIME The hour when the NetView SmartSet population is loaded into the NetView data warehouse source database. The default value 23 means the data is loaded every day at 11 p.m. Note: It is recommended to schedule the ETL1 ANM_c05_ETL1_Process within the Tivoli Data Warehouse at least one hour after the SmartSet load time.
OUTAGE_STORAGE_TIME The number of days before availability data expires in the NetView data warehouse source database. The default value is 90 days. Note: The SmartSet data is loaded once a day; therefore, the NetView source database contains a snapshot of that particular point in time. SmartSets whose population changes rapidly, such as CriticalNodes, may not be suitable for reporting purposes.
DBPASSWORD The encrypted DB2 password. Note: To change the DB2 password, execute the updateDBPassword.bat command in the \usr\ov\bin directory. For example:
updateDBPassword.bat c:\usr\ov\conf\tdwdaemon.properties newpwd
DB2USER The DB2 user ID (the DB2 administrator or any user ID with create authority).
DBNAME The name of the NetView source database. The default is NETVIEW.
PORT The DB2 database IP port of the NetView source database. The default is 50000.
HOSTNAME The hostname of the server that hosts the NetView source database.
NODENAME The DB2 database node of the NetView source database. The default is TDWNODE.
Example 5-2 NetView tdwdaemon configuration file tdwdaemon.properties #Thu Mar 18 14:36:46 CST 2004 PORT=50000 HOSTNAME=tdw009.itsc.austin.ibm.com DBUSER=db2inst1 SMARTSET_LOAD_TIME=23 NODENAME=TDWNOTE SMARTSETS=Routers,TDW,Microsoft,IBM DBPASSWORD=c3dqNDNy OUTAGE_STORAGE_TIME=90 DBNAME=netview
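A small script can sanity-check a tdwdaemon.properties file before restarting the daemon. This is our own sketch, not a Tivoli tool: the key names follow Example 5-2, and only the two rules discussed above (Routers must be present in SMARTSETS, the load time must be an hour of day) are checked.

```python
# Consistency check for tdwdaemon.properties (our own sketch; key names taken
# from Example 5-2, the checks themselves are not part of the product).

def parse_properties(text: str) -> dict:
    """Parse simple key=value lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key] = value
    return props

def check_tdwdaemon(props: dict) -> list:
    """Return a list of problems; an empty list means the file looks sane."""
    problems = []
    smartsets = props.get("SMARTSETS", "").split(",")
    if "Routers" not in smartsets:
        problems.append("required Routers SmartSet missing from SMARTSETS")
    if not 0 <= int(props.get("SMARTSET_LOAD_TIME", "23")) <= 23:
        problems.append("SMARTSET_LOAD_TIME must be an hour 0-23")
    return problems

sample = """\
PORT=50000
SMARTSET_LOAD_TIME=23
SMARTSETS=Routers,TDW,Microsoft,IBM
"""
print(check_tdwdaemon(parse_properties(sample)))  # [] -> no problems found
```

Running such a check before ovstart catches a dropped Routers entry, which would otherwise surface only as missing data in the required reports.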
After changing the tdwdaemon.properties file, you have to restart the tdwdaemon to apply the changes. Use the NetView commands ovstop and ovstart for this purpose. Example 5-3 shows the command shell dialog.
Example 5-3 Restart the NetView data warehouse daemon tdwdaemon C:\usr\ov\bin>ovstop tdwdaemon Done C:\usr\ov\bin>ovstart tdwdaemon Done
Example 5-4 shows the command shell output of executing this command.
Example 5-4 Status of NetView data warehouse daemon (tdwdaemon)
C:\usr\ov\bin>ovstatus tdwdaemon
object manager name: tdwdaemon
behavior: OVs_WELL_BEHAVED
state: RUNNING
PID: 2092
last message: Initialization complete.
exit status: Done
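If you monitor many daemons, the ovstatus output can also be checked programmatically. The following sketch is our own convenience wrapper, assuming the colon-separated field layout shown in Example 5-4; it parses the output and confirms the state is RUNNING.

```python
# Parse `ovstatus <daemon>` output (field layout as in Example 5-4) into a
# dict and confirm the daemon state. The parser is our own sketch.

def parse_ovstatus(output: str) -> dict:
    """Map each 'field: value' line of ovstatus output to a dict entry."""
    info = {}
    for line in output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    return info

sample = """\
object manager name: tdwdaemon
behavior: OVs_WELL_BEHAVED
state: RUNNING
PID: 2092
last message: Initialization complete.
"""
status = parse_ovstatus(sample)
assert status["state"] == "RUNNING"  # daemon is up
```

In a real deployment the sample text would come from running ovstatus and capturing its standard output.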
The log file for the tdwdaemon is named tdwdaemon.log and can be found in the \usr\ov\log directory. Verify that no apparent errors are reported in the tdwdaemon.log file. You will also find in this log file any DB2 errors that the tdwdaemon has encountered while communicating with the NetView data warehouse source database. Note: If no tdwdaemon.log exists for tdwdaemon to write to, it will create a new one. Deleting the file before restarting the tdwdaemon makes it easier to review, because all the old entries are removed.
Example 5-5 shows the command shell output of executing this command.
Example 5-5 Status of the NetView SNMP collector daemon (snmpcollect)
C:\usr\ov\bin>ovstatus snmpcollect
object manager name: snmpcollect
behavior: OVs_WELL_BEHAVED
state: RUNNING
PID: 1616
last message: Initialization complete.
exit status: Done
The log file for the snmpcollect daemon is named snmpCol.trace and can be found in the \usr\ov\log directory. Verify that no apparent errors are reported in the snmpCol.trace log file. You will also find in this log file any DB2 errors that the snmpcollect daemon has encountered while communicating with the NetView data warehouse source database.
Where <password> is your password for the db2inst1 database user. Here is a list of the items to check and the commands to use to check them: Availability data:
db2 select count(*) from netview.netview_nodes
Performance data:
db2 select count(*) from netview.snmpcollection
NetView Smartsets:
db2 select count(*) from netview.smartsets
In all three cases the count must be greater than 0. Example 5-6 shows the results of these commands in our test environment. Note: You may have to wait until the SmartSet data upload has taken place. The time for this upload is specified in the tdwdaemon.properties configuration file.
Example 5-6 Check the NetView source database
C:\DB2\SQLLIB\BIN>db2 connect to NETVIEW user db2inst1 using <password>

   Database Connection Information

 Database server        = DB2/6000 7.2.8
 SQL authorization ID   = DB2INST1
 Local database alias   = NETVIEW

C:\DB2\SQLLIB\BIN>db2 select count(*) from netview.netview_nodes

1
-----------
        349

  1 record(s) selected.

C:\DB2\SQLLIB\BIN>db2 select count(*) from netview.snmpcollection

1
-----------
      40787

  1 record(s) selected.

C:\DB2\SQLLIB\BIN>db2 select count(*) from netview.smartsets

1
-----------
          5

  1 record(s) selected.

C:\DB2\SQLLIB\BIN>
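The three count checks above can be wrapped in a small script. This is our own sketch: in the real environment the cursor would come from a DB2 connection, and the tables would carry the netview schema qualifier; here an in-memory sqlite3 database stands in so the logic can be demonstrated.

```python
import sqlite3

# Sanity-check the three NetView source tables (sketch). In the real
# environment the cursor would come from a DB2 connection and the tables
# would be schema-qualified as netview.<table>; sqlite3 stands in here.

TABLES = ["netview_nodes", "snmpcollection", "smartsets"]

def check_counts(cursor, tables=TABLES) -> dict:
    """Return row counts per table; every count should be greater than 0."""
    return {t: cursor.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
            for t in tables}

# Demo with an in-memory database standing in for the NETVIEW source database.
con = sqlite3.connect(":memory:")
for t in TABLES:
    con.execute(f"CREATE TABLE {t} (id INTEGER)")
    con.execute(f"INSERT INTO {t} VALUES (1)")
counts = check_counts(con.cursor())
assert all(n > 0 for n in counts.values())  # all three tables have data
```

A zero count for any table would point back to the tdwdaemon or snmpcollect troubleshooting steps described earlier.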
For example, to back up the TWH_MD, TWH_CDW, and TWH_MART databases, run the following commands:

db2 backup database TWH_MD to C:\backup
db2 backup database TWH_CDW to C:\backup
db2 backup database TWH_MART to C:\backup
Figure 5-13 shows the ODBC IBM DB2 Driver - Add window. You can add an optional description. If your NETVIEW source database is local or already cataloged within the DB2 client, select NETVIEW from the Database alias list. If you use a remote database, select Add.... This opens the Add Database Wizard window.
Here are the different steps we have to perform in the Add Database Wizard (these steps are also shown in Figure 5-14): 1. Source We selected Manually configure connection to database. 2. Protocol We selected TCP/IP.
3. TCP/IP parameters
- Hostname: tdw009.itsc.austin.ibm.com (the hostname of the database server)
- Port number: 50000 (the default DB2 service port)
- Service name: left blank (optional)
4. Database We inserted NETVIEW as the database name. Select Finish to start the registration.
The hostname of the DB2 UDB server, that is, where our NetView availability data is kept, along with the default DB2 port number.
4. Select Add and insert the location of the NetView warehouse pack installation properties file. This file is named twh_install_props.cfg. The properties file for the NetView availability WEP (code ANM) can be found on the installation media under the directory \tedw_apps_etl\anm\pkg\, as shown in Figure 5-16. The properties file for the NetView SNMP performance WEP (code AN1) can be found on the installation media under the directory \snmp_etl\an1\pkg\.
5. Select OK and/or Next to get back to List of Warehouse Packs to install. In contrast to Figure 5-15, the list is now populated with the NetView WEP as shown in Figure 5-17.
6. Select Next to proceed with the installation. The installation takes a few minutes. Figure 5-18 shows the window which is displayed after successful installation.
To install both NetView WEPs, you have to perform the installation steps twice using the different properties files of the two WEPs.
ODBC data source for the NetView source database
ODBC data source for the central data warehouse
ODBC data target for the central data warehouse
ODBC data target for the data mart
ODBC data target for the control server database
ODBC data source for the NetView source database
ODBC data source for the Tivoli NetView warehouse source database
To configure the Tivoli Data Warehouse sources and targets for NetView, perform the following steps:
1. Open the DB2 Control Center by selecting Start -> Programs -> IBM DB2 -> Control Center.
2. From the DB2 Control Center, open the Data Warehouse Center by selecting Tools -> Data Warehouse Center from the toolbar.
3. In the Data Warehouse logon window, type the user ID of the data warehouse administrator (the default is db2admin) and the appropriate password. Select Advanced to ensure that the control database is set to TWH_MD, as shown in Figure 5-19.
Once you have opened the Data Warehouse Center, you will see a browser tree, as shown in Figure 5-20, containing, among others, the leaves Warehouse Sources and Warehouse Targets. In this section we first discuss the configuration of the data warehouse sources and then the data warehouse targets for use with the NetView WEPs.
3. As Data source name, select the specific data source for your environment. For our case study installation, the data sources for the NetView data warehouse sources are listed in Table 5-6 on page 188; therefore, we inserted NETVIEW as the data source for our ANM_AVAIL_Source example in Figure 5-20.
4. Insert the user ID and appropriate password for the NetView source database. In our case study, we used a DB2 database on AIX, for which the default user ID is db2inst1, as shown in Figure 5-20.
5. Select OK to finish the data warehouse source configuration. Repeat these steps for all NetView data warehouse sources as listed in Table 5-6.
3. As Database name, select the specific database for your environment. For our case study installation, the databases for the NetView data warehouse targets are listed in Table 5-6 on page 188. TWH_CDW was already configured as needed, so we can leave it as is for our case study (refer to Figure 5-21).
4. Insert the user ID and appropriate password for the target database. Because in our case study the TWH_CDW database is on a Windows machine, we used db2admin, as shown in Figure 5-21.
5. Select OK to finish the data warehouse target configuration. Repeat these steps for all NetView data warehouse targets as listed in Table 5-6 on page 188.
There are three modes for ETL processes in Tivoli Data Warehouse:
Development: In this mode, process steps can be changed and their schedule can be configured. However, they do not run at their scheduled times and they cannot be tested.
Test: In this mode, process steps are not scheduled, but they can be tested and their schedule can be changed.
Production: In this mode, the processes run as scheduled. Neither the process steps nor their schedules can be changed.
You have to promote all process steps you want to test into Test mode.
To view the results, select Warehouse -> Work in Progress from the Data Warehouse Center. The Work in Progress window opens, displaying a line for each executed process step, as shown in Figure 5-24. You can right-click the process and select Show Log from the context menu to open the log window, where you can see additional information about the process step execution. In case of failure, that is where you will find the error messages.
Important: Do not change the sequence of the processing steps. Doing so can severely damage the data within the Tivoli Data Warehouse databases.
Selecting Schedule opens a dialog as shown in Figure 5-27, where you define the date and time for this process to run. Note: Changes apply only when the process is in development mode. If you use NetView SmartSets as described in 5.4, Preparing NetView for data collection on page 167, you have to synchronize the hour at which the SmartSet data is loaded into the NetView source database (specified in the tdwdaemon.properties file) with the schedule of the NetView WEP ETLs. We recommend that the ETLs be scheduled at least one hour later than the SmartSet loading time. In our case study installation, the SmartSet data load is done at 11 p.m., as shown in Example 5-2 on page 178; one hour later, at midnight, the ANM_c05_ETL1_Process is scheduled, as shown in Figure 5-27.
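The one-hour offset between the SmartSet load time and the ETL schedule wraps past midnight, as in the case study (23 becomes 0). A trivial helper (our own sketch, not a product function) makes the arithmetic explicit.

```python
# The recommendation above: schedule the ETL at least one hour after the
# SmartSet load time, wrapping around the 24-hour clock (our own sketch).

def etl_hour(smartset_load_hour: int, offset: int = 1) -> int:
    """Hour of day to schedule the ETL, offset hours after the SmartSet load."""
    return (smartset_load_hour + offset) % 24

print(etl_hour(23))  # 0 -> midnight, matching the case study schedule
```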
ANM_m05_ETL2_Process:
AN1_c05_SNMP_ETL1_Process:
AN1_c05_s010_extractsnmpdata
AN1_c05_s020_transformsnmpdata
AN1_c05_s030_loadsnmpdata
The following steps must be performed for all processes described above; here we use ANM_c05_ETL1_Process to describe them. On the control server, using the Data Warehouse Center tool, select the above processes, right-click them, and choose Mode -> Production, as shown in Figure 5-28.
After promoting the processes to production mode, they are scheduled for the configured times and they are visible in the Work in progress list.
In this section we explain how to make use of such a remote agent site. For our case study installation, we move the execution of the NetView SNMP performance warehouse enablement pack (AN1) ETLs from the control server to a remote agent site. Here is an overview of the steps required:
1. Make the ETL code available on the remote agent site.
2. Make all data warehouse sources and targets used by the ETLs available at the remote agent site.
3. Assign the necessary data warehouse sources and targets to the remote agent.
4. Demote the ETL processes being moved to development mode.
5. Configure the ETL processes to use the remote agent.
6. Promote the moved ETL processes to production mode.
7. Verify that the ETLs run on the remote agent.
In the properties window of the remote agent site, available sources and targets are displayed in the left pane, and assigned sources and targets in the right pane. Move the warehouse sources and targets related to the warehouse enablement pack in question. In our case, shown in Figure 5-30, we chose only the SNMP performance ETL-relevant AN1_SNMP_Source and AN1_TWH_CDW_Target to assign to the remote agent site. Click OK to finish the assignment.
Right-click all AN1 processes in the list: AN1_c05_s010_extractsnmpdata AN1_c05_s020_transformsnmpdata AN1_c05_s030_loadsnmpdata Then select Mode -> Development from the processes context menu as shown in Figure 5-32.
The AN1_c05_s010_extractsnmpdata process is now executed on the remote agent, and its subprocesses are started there automatically. If all AN1 processes execute successfully, the Work in Progress window looks like Figure 5-35: there is a new line for each AN1 subprocess (AN1_c05_s010_extractsnmpdata, AN1_c05_s020_transformsnmpdata, and AN1_c05_s030_loadsnmpdata), each check-marked after its successful completion. Additionally, the process AN1_c05_s010_extractsnmpdata is scheduled again, marked with a clock; this line is highlighted in Figure 5-35.
Double-click one of the lines in Figure 5-35 to open the related Log Details window in Figure 5-36 for the ANM_c05_s010_extractNodeInfo process step.
In our example, the process was successful and no error messages are displayed in the detailed view. With the Previous and Next buttons you can navigate the log details for all process steps displayed in the Log window shown in Figure 5-35.
5.8 Reporting
In this section we show how to set up, configure, and use some of the reports provided by the NetView availability warehouse enablement pack (ANM). Here is a list of the predefined reports provided by the IBM Tivoli NetView Warehouse Enablement Pack:
Daily Status Summary By SmartSet
Nodes With Longest Outage Time In Routers SmartSet
Nodes With Most Status Changes In Routers SmartSet
Nodes With The Longest Outage Times
Nodes With The Most Daily Status Changes
Summary Of Daily Network Status
Summary Of Total Outage Time By SmartSet
Summary Of Total Status Changes By SmartSet
Total Daily Status Changes In Monitored Network
Total Daily Status Changes In Routers SmartSet
Compared to a full Crystal license, Crystal Enterprise Professional Version 9 for Tivoli has reduced configuration options. If the reports shipped with the IBM Tivoli NetView Warehouse Enablement Pack do not match your needs and you want to develop additional reports, you have to upgrade your Crystal Enterprise installation.
Note: As described in Chapter 3, Getting Tivoli Data Warehouse 1.2 up and running on page 71, an ODBC connection to the data mart database needs to be defined on the Crystal Enterprise server before we can work with the reports. Refer to that chapter for details.
Here, <hostname> represents the hostname of the Crystal Enterprise report server, as shown in Figure 5-37.
In this section we concentrate on viewing NetView reports; we do not explain the configuration of Crystal Enterprise to its full extent. For details on configuration and administration tasks, refer to the following manuals shipped with the product:
Crystal Enterprise 9 Installation Guide
Crystal Enterprise 9 Administrator's Guide
Crystal Enterprise 9 Getting Started Guide
Crystal Enterprise 9 ePortfolio User's Guide
From the Crystal Enterprise Launchpad, proceed by selecting the ePortfolio link, which brings you to the window shown in Figure 5-38. In the top bar, you can see that we are logged on as the guest user. By default, the guest user has no access to the NetView reports, as indicated by the words No Folders on the left side of the window.
The installation process of the first warehouse enablement pack on the Tivoli Data Warehouse environment creates a user ID on the Crystal Enterprise environment named Tivoli. This user ID is to be used to access the reports provided by any IBM Tivoli software.
To log on as the Tivoli user ID, select the Log On button in the upper right corner of the ePortfolio window in Figure 5-38 on page 208. The Log On window, shown in Figure 5-39, is presented. The Tivoli user ID has no password by default. We use the Enterprise authentication method, as specified during the Crystal Enterprise installation.
After entering the required data, select Log On to proceed. We are now back at the ePortfolio window in Figure 5-40, but with Tivoli user authority. Instead of No Folders, as in the guest user's ePortfolio window in Figure 5-38, there is now a link named IBM Tivoli NetView in the Tivoli user's ePortfolio window shown in Figure 5-40.
We follow this link by selecting IBM Tivoli NetView and proceed to the IBM Tivoli NetView reports shown in Figure 5-41. All reports provided by the IBM Tivoli NetView Warehouse Enablement Pack are listed there. As already mentioned, there are only reports on availability, not on performance.
We open the report's context menu by clicking the desired report name, as shown in Figure 5-42. We are presented with a menu containing the following items:
View: Generate the report immediately.
View latest instance: View the last generated report.
Schedule: Change or create a schedule for report generation.
History: View already generated reports.
We continue by selecting Schedule from the Daily Status Summary by SmartSet report, for example. The Schedule window, as shown in Figure 5-43, is opened.
The Customize your options toolbar offers three buttons:
Schedule: Starts a new schedule with the current options and parameters.
Cancel: Closes the Schedule window and returns you to the reports window without adding a new schedule for the report.
Help: Opens the help window.
Figure 5-44 shows the selection of parameters for the schedule option. Here you can select how frequently the reports should be generated.
We want to schedule the report to run now. Next, we must provide the required parameters of the report. From the Customize your options pull-down menu, select Parameters, as shown in Figure 5-45. We left the other option settings at their default values.
The Schedule window changes to that shown in Figure 5-46, and we are presented with three selection fields:
Time Filter
General Time Frame
Specific Time Frame
Note: Schedule requirements may differ for each report. The schedule selections presented here are for the Daily Status Summary by SmartSet report.
For the Time Filter and General Time Frame items, we select the default value None by pressing the Add button at each selection. We then specify a lower and an upper bound for the specific time frame by selecting the Start of range and End of range parameters, and select Add Range to accept the settings. Figure 5-47 shows the parameters window after the selections for our case study report.
Now all required parameters are specified. Start the report generation by pressing the Schedule button from the toolbar.
As we have left the Schedule parameter set to Now, as shown in Figure 5-43, the report is scheduled immediately, and the reports history window is opened as shown in Figure 5-48.
The report just scheduled is still running, so its status is Pending. Note: The History window is not updated automatically. Press the Refresh button to view the current state. Figure 5-48 shows four different statuses:
Pending: Report generation is still running.
Success: Report was generated successfully. Click the Instance Time link in the left column of the table to view the report.
Recurring: Report is scheduled to be generated at specified times. Refer back to Figure 5-43 on page 212.
Failed: Report generation failed. Click the Failed link to open the log window, as shown in Figure 5-49. The error message, Information is needed before this report can be processed, means that your parameter settings are not valid. Go back to the window shown in Figure 5-46 on page 215 and reenter your parameter settings.
To view successfully generated reports from the history window (shown earlier in Figure 5-48 on page 217), click the Instance Time link in the left column of the table. On the following pages, Figure 5-50 and Figure 5-51 show the Daily Status Summary by SmartSet report for our case study scenario. The report is text based and is split into two pages; at the top of the report window are buttons for navigating through multi-page reports. All SmartSets specified in the NetView data warehouse daemon properties file, tdwdaemon.properties, are included in this report (refer back to Example 5-2 on page 178). Some nodes are members of more than one SmartSet, and the availability of those nodes is therefore used multiple times in the calculation of the total availability.
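The double counting described above is easy to see in a short Python sketch (the node names, SmartSet names, and availability figures here are hypothetical, purely for illustration):

```python
# A node that belongs to several SmartSets contributes its availability
# once per SmartSet membership, so the overall figure is weighted toward
# multi-member nodes. All names and values below are made up.
availability = {"router1": 0.99, "switch1": 0.95, "server1": 0.90}

smartsets = {
    "Routers": ["router1"],
    "CriticalDevices": ["router1", "switch1"],  # router1 appears again here
    "Servers": ["server1"],
}

# One sample per SmartSet membership, not one per node.
samples = [availability[node]
           for members in smartsets.values()
           for node in members]

# router1 is counted twice: (0.99 + 0.99 + 0.95 + 0.90) / 4
total_availability = sum(samples) / len(samples)
```

With router1 in two SmartSets, the total is pulled toward router1's value rather than being a plain average over the three nodes.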
Figure 5-52, Figure 5-53, and Figure 5-54 show more examples of reports provided by the IBM Tivoli NetView Warehouse Enablement Pack.
Chapter 6.
Figure 6-1 Environment for our case study (TDW001: Control Server, Metadata, Central Data Warehouse; TDW015: Web Server, Crystal Server, Web reports; TDW009: Tivoli Management Server, ITM_DB data source; TDW0XX: monitored endpoints)
Hardware and operating systems used in our case study environment are listed in Table 6-1.
Table 6-1 Hardware and operating systems
Hostname   OS                        Model   Memory   Disk size
-          Windows 2000 Server SP4   -       -        40 GB
-          AIX 5.1.0                 -       -        18 GB
-          Windows 2000 Server SP4   -       -        40 GB
(not only IBM Tivoli Monitoring). A second ETL program extracts a subset of data from the CDW and transfers it into a data mart database specifically designed for reporting (see Figure 6-2).
IBM Tivoli Monitoring 5.1.1 is shipped with two Warehouse Enablement Packs (WEPs) that provide the ETLs required for the Tivoli Data Warehouse integration:
IBM Tivoli Monitoring Warehouse Enablement Pack Version 1.1 (also called the AMX pack)
IBM Tivoli Monitoring for Operating Systems Warehouse Enablement Pack Version 1.1 (also called the AMY pack)
The AMX warehouse enablement pack provides warehouse functionality for all applications based on IBM Tivoli Monitoring 5.1.1. It is intended as a generic tool, driven by the metadata provided by each monitoring application. It can be used to extract the data stored in the ITM Middle Layer Repository (ITM_DB) and to load it into the Tivoli Data Warehouse central data warehouse database (TWH_CDW). Because of this generic nature, the IBM Tivoli Monitoring WEP provides neither a data mart ETL process nor star schemas; these functions are provided by the WEP of the specific monitoring application.
The ETL process provided by the IBM Tivoli Monitoring WEP (also called the generic ETL1) loads the correct data into the central data warehouse common schema according to the rules defined by the specific monitoring application metadata. It can also detect and trace exceptions in the operational data (any data that is not properly described by the application metadata). In contrast, the IBM Tivoli Monitoring for Operating Systems warehouse pack (AMY WEP) provides a set of metadata that drives the IBM Tivoli Monitoring, Version 5.1.1 warehouse pack (AMX, or generic ETL1) to retrieve data collected by the IBM Tivoli Monitoring 5.1.1 Operating Systems Resource Models. It also provides a sample star schema definition used to build data marts and general-purpose reports. Figure 6-3 depicts the role of the two WEPs in the integration of IBM Tivoli Monitoring 5.1.1 with Tivoli Data Warehouse.
The primary link between IBM Tivoli Monitoring and Tivoli Data Warehouse is the resource model data logging. When you select the Tivoli Enterprise Data Warehouse logging option for a given Resource Model, the Aggregation option is grayed out. You can still select the Raw data logging option (Figure 6-4) for the same Resource Model. If you select both options (TEDW Data and Raw Data), the data will be aggregated at the top of the hour, and both raw data and aggregated data will be visible in the Web health console.
Aggregated data is processed at each cycle time. The IBM Tivoli Monitoring engine aggregates the current data with the previous data (which is already the result of a previous aggregation), so that only the data resulting from the aggregation is kept in memory. This data is logged in the endpoint database when the aggregation period expires. The Tivoli Data Warehouse data aggregation algorithm is exactly the same, but the aggregation period is fixed at 1 hour. The IBM Tivoli Monitoring engine calculates the Min, Max, Avg, and Total values; however, the Total value is not applicable to all metrics, so the Web Health Console shows only the Min, Max, and Avg data. The Total is the sum of the metric values for each cycle time in the aggregation period. A thread is spawned every hour, and the endpoint database is queried for all the aggregated data saved in the previous hour. The data is then saved in XML files. The thread is spawned with a fixed delay of 20 minutes after the full hour (this delay currently cannot be changed). Figure 6-4 shows the data logging options dialog in an IBM Tivoli Monitoring profile.
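The engine's incremental aggregation can be sketched as follows. This is a simplified model of the behavior described above, not the actual engine code: each cycle's value is folded into the running result, and only that result stays in memory.

```python
class HourlyAggregate:
    """Keep only the running aggregate in memory: each cycle's value is
    combined with the previous result, as the ITM engine does."""

    def __init__(self):
        self.count = 0
        self.min = None
        self.max = None
        self.total = 0.0   # Total: sum of the values for each cycle time

    def add_cycle(self, value):
        self.count += 1
        self.min = value if self.min is None else min(self.min, value)
        self.max = value if self.max is None else max(self.max, value)
        self.total += value

    @property
    def avg(self):
        return self.total / self.count if self.count else 0.0

# One value per cycle time within the one-hour aggregation period
agg = HourlyAggregate()
for v in [10.0, 30.0, 20.0]:
    agg.add_cycle(v)
# After three cycles: min 10.0, max 30.0, avg 20.0, total 60.0
```

At the end of the aggregation period, only `agg`'s four values are logged; the individual cycle samples are gone, which is why Raw Data logging is a separate option.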
If Raw Data logging is enabled, the IBM Tivoli Monitoring engine aggregates the data collected during each cycle time and stores the resulting value in the endpoint database when the aggregation period expires. For the TEDW Data logging option, the algorithm is exactly the same, but the aggregation period is fixed at 1 hour. In both cases, data logging starts at the next full hour after the profile distribution time. Figure 6-5 shows an example of an aggregation time line.
The hourly aggregated data is exported from the endpoint database into a well-formed XML file, 20 minutes after the expiration of the aggregation period. For more information about IBM Tivoli Monitoring, you can refer to the redbook: IBM Tivoli Monitoring Version 5.1: Advanced Resource Monitoring, SG24-5519.
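The two timings just described (logging starting at the next full hour after distribution, and the XML export running 20 minutes after the period expires) can be sketched as a small calculation, assuming the fixed one-hour period and 20-minute delay stated above:

```python
from datetime import datetime, timedelta

def first_logging_hour(distribution_time):
    """Data logging starts at the next full hour after profile distribution."""
    on_the_hour = distribution_time.replace(minute=0, second=0, microsecond=0)
    return on_the_hour + timedelta(hours=1)

def export_time(period_end):
    """The hourly XML export runs 20 minutes after the aggregation
    period expires (a fixed, non-configurable delay)."""
    return period_end + timedelta(minutes=20)

# Profile distributed at 14:32 -> logging starts at 15:00;
# the first one-hour period ends at 16:00, so its export runs at 16:20.
start = first_logging_hour(datetime(2003, 8, 1, 14, 32))
exp = export_time(start + timedelta(hours=1))
```

This is only an illustration of the timeline; the actual scheduling is done inside the ITM engine.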
6.3 Prerequisites
Before installing the warehouse enablement packs for IBM Tivoli Monitoring, Version 5.1.1, the following software must be installed:
IBM Tivoli Monitoring, Version 5.1.1
IBM Tivoli Monitoring, Version 5.1.1 Fix Pack 6 (5.1.1-ITM-FP06)
IBM DB2 Universal Database Enterprise Edition 7.2
IBM DB2 Universal Database Enterprise Edition 7.2 Fix Pack 8 with its eFix (this is the minimum level of IBM DB2 supported by Tivoli Data Warehouse 1.2; Fix Pack 10a is the recommended level and is shipped with Tivoli Data Warehouse 1.2)
Tivoli Data Warehouse 1.2
Crystal Enterprise Professional Version 9 for Tivoli
IBM DB2 Warehouse Manager, Version 7.2 Fix Pack 8 with its eFix, on the remote agent sites in the case of a distributed deployment (this is the minimum level of IBM DB2 Warehouse Manager supported by Tivoli Data Warehouse 1.2; Fix Pack 10a is the recommended level and is shipped with Tivoli Data Warehouse 1.2)
Here, DM511ED is the index file for Tivoli Enterprise Data Warehouse Support 5.1.1 and managed_node is the name of the managed node to be installed on. 1. From the Tivoli Desktop, select Desktop -> Install -> Install Product. The Install Product dialog shows the products that are available to install as shown in Figure 6-6.
2. Select IBM Tivoli Monitoring - Tivoli Enterprise Data Warehouse Support, Version 5.1.1, then select your Tivoli Management Region server and the gateways that you want to have it installed on. 3. RIM configuration is required to proceed with the installation, as shown in Figure 6-7.
The installation process creates a RIM object named itm_rim_<nodename>, where <nodename> is the RIM host in your Tivoli environment (tdw009 in our case). The RIM object can also be created later using the following command, for instance, assuming that you have a DB2 database server:
wcrtrim -v DB2 -h tdw009 -d itm_db -u db2inst1 -H /usr/lpp/db2_07_01 -s TCPIP -I /home/db2inst1 itm_rim_tdw009
By default, this RIM object has the password itmitm, which must be changed to match the password of your database instance owner. Use the wsetrimpw command as follows:
wsetrimpw itm_rim_<nodename> itmitm <newpw>
Here, <newpw> is the database instance owner password. 4. Click Set and then select Install and follow the normal installation dialogs.
5. The physical database for the Warehouse Support component, named ITM_DB, now needs to be created. This can be done either with a provided shell script or with SQL scripts.
If you intend to use the provided shell script, make sure the RDBMS administrator (or database instance owner) user ID has Administrator (root) authority and Tivoli_Admin_Privileges, and run the script logged in as that user ID. This is necessary because the shell script collects information from the previously created RIM object in order to create both the database and its structure. The shell script is named cr_itm_db.sh and is located in the $BINDIR/TME/Tmw2k/Warehousecfg directory.
As an alternative, you can use the SQL scripts. These scripts are also located in the $BINDIR/TME/Tmw2k/Warehousecfg directory and follow the naming convention cr_db.<DBext> and cr_tbl.<DBext>, where <DBext> is the database vendor designator (db2 in our case). The following sequence describes the creation process for DB2 using the SQL scripts:
a. On the RIM host machine, log in as your instance owner (in our case, db2inst1).
b. Only perform this step if the RIM host machine does not have the Warehouse Support component installed: copy the cr_db.db2 and cr_tbl.db2 files from the $BINDIR/TME/Tmw2k/Warehousecfg directory on your TMR server to the RIM host machine.
c. Change to the directory where the SQL scripts are located, rename cr_db.db2 to cr_db_db2.sql, and rename cr_tbl.db2 to cr_tbl_db2.sql. Edit cr_db_db2.sql and replace CREATE DATABASE _xz_db with CREATE DATABASE itm_db.
d. Run the following command to create the itm_db database:
db2 -td$ -vf cr_db_db2.sql
e. To create the itm_db database structure, run the following commands, where <db2inst1pw> is the database instance owner password:
db2 connect to itm_db user db2inst1 using <db2inst1pw>
db2 -td$ -vf cr_tbl_db2.sql
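The file preparation in step c (renaming the template and substituting the placeholder database name) can also be scripted. A minimal sketch, with hypothetical file paths, that copies the template to its new name and performs the replacement:

```python
import shutil

def prepare_cr_db(src="cr_db.db2", dst="cr_db_db2.sql",
                  old="CREATE DATABASE _xz_db",
                  new="CREATE DATABASE itm_db"):
    """Copy the shipped SQL template to its .sql name and substitute the
    placeholder database name, as step c does by hand."""
    shutil.copy(src, dst)
    with open(dst) as f:
        text = f.read()
    with open(dst, "w") as f:
        f.write(text.replace(old, new))
```

The same one-line replacement applies to any other placeholder your database vendor's template uses; only cr_db.db2 needs the database name edit.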
Example 6-1 Testing the RIM object
tdw009:/>wrimtest -l itm_rim_tdw009
Resource Type   : RIM
Resource Label  : itm_rim_tdw009
Hostname        : tdw009
User Name       : db2inst1
Vendor          : DB2
Database        : itm_db
Database Home   : /usr/lpp/db2_07_01
Server ID       : tcpip
Instance Home   : /home/db2inst1
Opening Regular Session...Session Opened
RIM : Enter Option >
Type x and press Enter to release the session.
7. The data collection process of the warehouse support component needs to be configured. The configuration file is named .config and is located in the $DBDIR/dmml directory. The warehouse support entries in the .config file have the prefix datacollector. These entries should be added or modified using the wdmconfig command; note that this file must not be edited manually. For details on the wdmconfig command, refer to the IBM Tivoli Monitoring User's Guide Version 5.1.1, SH19-4569. To set the collection parameters, issue the following command:
wdmconfig -m <nodename> -D datacollector.rim_name=itm_rim_<rimhost> \ -D datacollector.db_purge_interval=30 \ -D datacollector.db_purge_time=0 \ -D datacollector.delay=30 \ -D datacollector.sleep_time=1 \ -D datacollector.max_retry_time=6
Here, <nodename> is the gateway specified for the monitored endpoints. You can check whether the entries were correctly set by issuing:
wdmconfig -m <nodename> -G datacollector*
datacollector.rim_name: Specifies the name of the RIM object that the data collection process uses to load data into the database. The default is itm_rim_<RIM host name>.
datacollector.db_purge_interval: Specifies the number of days data is kept in the database; older data is automatically removed. The value can range from 10 to 60. The default is 30 days.
datacollector.db_purge_time: Specifies the time of day for the data removal operation. The value can range from 0 (midnight) to 23. The default is 0 (midnight).
datacollector.delay: Specifies the delay (in minutes after the full hour) after which the data collector process uploads data from the endpoints. The value can range from 1 to 60 minutes. The default is 30 minutes.
datacollector.sleep_time: Specifies the interval (in minutes) between two consecutive data upload requests generated by the data collector process. The value can range from 1 to 60 minutes. The default is 1 minute.
datacollector.max_retry_time: Specifies the maximum number of times an XML data file is processed before being archived when an error occurs. The default is 6 times.
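The documented ranges and defaults can be captured in a small validation helper. This is our own sketch for checking values before passing them to wdmconfig, not part of the product:

```python
# Valid ranges and defaults for the numeric datacollector.* settings
# documented above: (minimum, maximum, default).
SETTINGS = {
    "db_purge_interval": (10, 60, 30),  # days data is kept
    "db_purge_time":     (0, 23, 0),    # hour of day for purge
    "delay":             (1, 60, 30),   # minutes past the full hour
    "sleep_time":        (1, 60, 1),    # minutes between upload requests
}

def validate(name, value):
    """Raise ValueError if value is outside the documented range."""
    lo, hi, _default = SETTINGS[name]
    if not lo <= value <= hi:
        raise ValueError(f"datacollector.{name}={value} outside {lo}..{hi}")
    return value
```

For example, `validate("db_purge_interval", 5)` raises, because the product only accepts 10 to 60 days for that setting.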
Note: In the Logging window, you can specify how to log data collected by a resource model. If you select the Raw Data option, data is written exactly as it is collected by the resource model; all the monitored values are collected and copied into the database, and this data can later be used by the Health Console. Selecting the TEDW Data option allows data to be collected and copied into the database for later use by Tivoli Enterprise Data Warehouse. If you choose the Aggregated Data option, data is collected and aggregated at fixed intervals you define (the Aggregation Period), and only the aggregated values are written to the database. The Historical Period option specifies the period for which data is to be stored in the database. For Tivoli Data Warehouse, you need to check the TEDW Data option. In addition, you can also check the Raw Data option (this does not affect data collection for Tivoli Data Warehouse).
4. Now you can specify the subscribers. Each resource model can only have specific subscriber types. You may want to create multiple profiles, each containing resource models for a specific object type. For example, in our ITSO environment we created two different profiles, one for Windows 2000 endpoints and the other for AIX endpoints.
5. Distribute the profile to your subscribers. You can do this either from the Tivoli desktop or using the wdmdistrib command. 6. Check the distribution and execution of the profile using the wdmlseng command. Ensure that all the profiles have the state Running, as shown in Example 6-3.
Example 6-3 wdmlseng command output # wdmlseng -e tdw002 Forwarding the request to the engine...
The following profiles are running: tmw2kDefProfile#tdw009-region TMW_EventLog :Running TMW_PhysicalDiskModel :Running TMW_TCPIP :Running TMW_LogicalDisk :Running TMW_MemoryModel :Running TMW_Process :Running TMW_Processor :Running
7. You need to tell the gateways to which the endpoints report to start collecting historical information to be put into the RIM object. Use the wdmcollect command, whose syntax is:
wdmcollect -e <endpoint> -s <time> -m <managed_node>
Here, <endpoint> is the endpoint name, <time> is the collection interval in hours, and <managed_node> is the managed node to be defined as the data collector node. After you run this command for all the endpoints, you can check the result using the command wdmcollect -q. The managed node pulls the data from each endpoint every interval and stores it under $DBDIR/dmml/tedw. Example 6-4 shows the command output in our case study environment: data is being collected from nine endpoints (tdw001, tdw002, tdw003, tdw004, tdw005, tdw006, tdw008, tdw010, tdw011) via the managed node tdw009 every hour (3600 seconds).
Example 6-4 wdmcollect command output # wdmcollect -q Processing ManagedNode tdw009... <Requests> <Request Id="235d1afd" Endpoint="tdw011" RefreshTime="3600" LastCall="Fri Aug 1 14:32:00 CDT 2003"/> <Request Id="545a2a6b" Endpoint="tdw010" RefreshTime="3600" LastCall="Fri Aug 1 14:32:00 CDT 2003"/>
<Request Id="52f58445" Endpoint="tdw004" RefreshTime="3600" LastCall="Fri Aug 1 14:32:00 CDT 2003"/>
<Request Id="cc9111e6" Endpoint="tdw003" RefreshTime="3600" LastCall="Fri Aug 1 14:32:00 CDT 2003"/>
<Request Id="bb962170" Endpoint="tdw002" RefreshTime="3600" LastCall="Fri Aug 1 14:32:00 CDT 2003"/>
<Request Id="229f70ca" Endpoint="tdw001" RefreshTime="3600" LastCall="Fri Aug 1 14:32:01 CDT 2003"/>
<Request Id="25f2b4d3" Endpoint="tdw005" RefreshTime="3600" LastCall="Fri Aug 1 14:39:04 CDT 2003"/>
<Request Id="bcfbe569" Endpoint="tdw006" RefreshTime="3600" LastCall="Fri Aug 1 14:39:12 CDT 2003"/>
<Request Id="5b43c86e" Endpoint="tdw008" RefreshTime="3600" LastCall="Fri Aug 1 14:39:23 CDT 2003"/>
</Requests>
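The XML that wdmcollect -q prints is easy to post-process. A sketch that summarizes each endpoint's refresh interval, assuming the element layout shown in Example 6-4 (the two-request SAMPLE below is abbreviated from that output):

```python
import xml.etree.ElementTree as ET

# Abbreviated sample in the same shape as the wdmcollect -q output above.
SAMPLE = """<Requests>
<Request Id="235d1afd" Endpoint="tdw011" RefreshTime="3600"
         LastCall="Fri Aug 1 14:32:00 CDT 2003"/>
<Request Id="545a2a6b" Endpoint="tdw010" RefreshTime="3600"
         LastCall="Fri Aug 1 14:32:00 CDT 2003"/>
</Requests>"""

def collection_intervals(xml_text):
    """Return a mapping of endpoint name -> refresh interval in seconds."""
    root = ET.fromstring(xml_text)
    return {r.get("Endpoint"): int(r.get("RefreshTime"))
            for r in root.findall("Request")}

intervals = collection_intervals(SAMPLE)
# Both sample endpoints are collected hourly (3600 seconds).
```

A helper like this makes it simple to spot endpoints whose interval differs from the rest or that are missing from the collection list entirely.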
8. Before checking the RIM database to see whether data has been collected, you have to wait for a time that depends on the selected collection interval and the data collector delay. We check the collection by verifying that the ENDPOINTS table has been populated and by checking the timekey_dttm column in the METRICSDATA table, as shown in Example 6-5. Note that the time reported in timekey_dttm is a GMT time.
Example 6-5 Sample SQL that checks the collection
db2 => connect to ITM_DB

   Database Connection Information

 Database server        = DB2/6000 7.2.6
 SQL authorization ID   = DB2INST1
 Local database alias   = ITM_DB
db2 => select host_name from endpoints HOST_NAME ------------------------------------------------------------------------------tdw001 tdw002 tdw003 tdw004 tdw005 tdw006 tdw007 tdw008 tdw009 tdw010 tdw011
11 record(s) selected.

db2 => select max(timekey_dttm) from metricsdata

1
--------------------------
2003-07-31-11.00.56.000000

1 record(s) selected.
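Because timekey_dttm is stored in GMT, comparing it with local logs takes one parsing step. A sketch that parses the DB2 timestamp format shown in Example 6-5 and tags it as UTC:

```python
from datetime import datetime, timezone

def parse_timekey(ts):
    """Parse a DB2 timestamp such as 2003-07-31-11.00.56.000000 and
    mark it as GMT/UTC, which is how timekey_dttm is stored."""
    return datetime.strptime(ts, "%Y-%m-%d-%H.%M.%S.%f").replace(
        tzinfo=timezone.utc)

t = parse_timekey("2003-07-31-11.00.56.000000")
# t is timezone-aware; t.astimezone() would give the local wall-clock time.
```

With a timezone-aware value, `t.astimezone()` converts to the local zone, avoiding the common mistake of treating the stored time as local.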
After creating these directories, we completed the following steps on the DB2 server from a command window: 1. Stop and restart DB2:
db2stop force db2start
Where <REMOTE_SVR> is the hostname of the DB2 server that contains the ITM database (TDW009 in our case) and <DB2_PORT> is the TCP/IP port used by DB2 (the default is 50000). If you prefer using DB2 Client Configuration Assistant, execute the following steps: 1. On the Windows task bar, click Start -> Programs -> IBM DB2 -> Client Configuration Assistant. The Client Configuration Assistant is displayed (Figure 6-9).
2. Click Add and the Add Database Wizard shows (Figure 6-10). Select Search the network and then click the Database name tab.
3. In the Database name tab, click Add System. An Add System dialog appears (Figure 6-11). Select TCPIP and complete the Host name information. Click OK.
4. A list of DB2 databases should expand from the DB2 server you added (Figure 6-12). Select ITM_DB and click Finish.
5. A Confirmation window appears (Figure 6-13). It is recommended to test the ODBC connection to verify a healthy link. Click Test Connection.
6. Fill out the User ID and Password information (Figure 6-14). Click OK.
7. The connection should be successful (Figure 6-15). If not, review your settings and change them accordingly.
2. The next window shows the location of the Tivoli common logging directory (Figure 6-17 on page 249), which will contain all TDW log files. In our installation, we use the default location C:\Program Files\ibm\Tivoli\common. Click Next.
3. In the window shown in Figure 6-18 on page 249, click Add to add the AMX warehouse pack.
4. In the Location of installation properties file window, as shown in Figure 6-19 on page 250, specify the location of the AMX warehouse pack installation properties file, twh_install_props.cfg. You can find this file on the IBM Tivoli Monitoring version 5.1 media, under the tedw_apps_etl\AMX\pkg directory. Click Next.
5. The installation menu window (Figure 6-20 on page 251) now lists the IBM Tivoli Monitoring warehouse enablement pack. Select it and click Next to continue.
6. Click Install in the summary window (Figure 6-21 on page 252) to start the warehouse pack installation.
7. Follow the progress of the installation through the messages that are shown until it completes. The final installation window (Figure 6-22 on page 253) contains either a successful completion notice or messages describing problems. Make sure the window does not list any warnings or errors, and then click Next. If warnings are listed, check the logs to ensure that they can safely be ignored. Click Finish to exit the wizard.
Figure 6-24 IBM Tivoli Monitoring, Version 5.1.1 Generic ETL1 Sources
You should edit the properties of each of the above entries. To do so, right-click the entry, select Properties, and then select the Data Source tab. Fill in the database instance owner user ID information (Figure 6-25 and Figure 6-26).
5. For the IBM Tivoli Monitoring, Version 5.1.1 Generic ETL1 target, shown in Figure 6-27, expand the Warehouse Target folder, right-click AMX_TWH_CDW_Target, select Properties, and then select the Database tab. Fill in the user ID information.
Figure 6-27 IBM Tivoli Monitoring, Version 5.1.1 Generic ETL1 Target
6.5.6 Modifying the ETL for the source table name to the RIM user
The AMX ETL assumes that the schema name of the ITM_DB RIM is set to db2admin. This is the default name for databases created on a Windows machine, but if you are using DB2 on another platform or have a different instance name, you must modify the source table name in the DB2 Warehouse Manager. In our case, ITM_DB is on a UNIX server and we use db2inst1 as the user ID and instance name. Therefore, we have to modify the source table name by executing the following steps: 1. In the Data Warehouse Center, from the Warehouse Sources list, select AMX_ITM_RIM_Source and open the Property page. 2. Go to the Tables and views tab, as shown in Figure 6-29.
3. Expand the Tables folder. A dialog asks for the name filter, as shown in Figure 6-30. We only need the table called ENDPOINTS. The schema name is the RIM user ID.
4. When DB2.ENDPOINTS has been found, move it from the Available tables and views box to the Selected tables and views box by clicking the > button. You will now have two ENDPOINTS tables, as shown in Figure 6-31. Click OK.
5. Now, from the Data Warehouse Center, expand the Subject Area and find the process called AMX_c05_ETL1_Process. Right-click it and select Open. The Process Modeler window that opens is similar to Figure 6-32.
6. Click the tables icon and click the work area; a dialog box will be presented, as shown in Figure 6-33. Select the DB2INST1.ENDPOINTS table and click the > button. Then click OK.
7. The new table is now shown in the Process Modeler window; next, we need to connect the table to the first step. Use the link icon and select data links. Drag the cursor from the ENDPOINTS table to the AMX_c05_s010_RIM_Extract step; a new link is created. 8. Remove the old link by selecting it, right-clicking, and selecting Remove. Also remove the old DB2ADMIN.ENDPOINTS table by selecting it, right-clicking, and selecting Remove. 9. Save the process model using the menu Process -> Save and close the window. Attention: At this point, we bypass scheduling the AMX ETL and changing its status to production. We do this because we are not ready to run the AMX ETL until an ETL2 (AMY) is installed. Running the AMX ETL prematurely will result in errors and prevent you from gathering data in future collections.
1. Click Start -> Programs -> TDW -> Install a Warehouse Pack to start the warehouse pack installation wizard (see Figure 6-16 on page 248). In the Welcome window, click Next. 2. The next window shows the location of the Tivoli common logging directory (Figure 6-17 on page 249), which will contain all TDW log files. In our installation, we use the default location C:\Program Files\ibm\Tivoli\common. Click Next. 3. In the window shown in Figure 6-18 on page 249, click Add to add the AMY warehouse pack. 4. In the Location of installation properties file window, as shown in Figure 6-19 on page 250, specify the location of the AMY warehouse pack installation properties file, twh_install_props.cfg. You can find this file on the IBM Tivoli Monitoring version 5.1 media, under the tedw_apps_etl\AMY\pkg directory. Click Next. 5. The installation menu window now lists the IBM Tivoli Monitoring warehouse enablement pack (Figure 6-34). Select it and click Next to continue.
6. Click Install in the summary window (Figure 6-21 on page 252) to start the warehouse pack installation.
7. Follow the progress of the installation through the messages that are shown until it completes. The final installation window contains either a successful completion notice or messages describing problems. Make sure the window does not list any warnings or errors, and then click Next. If warnings are listed, check the logs to ensure that they can safely be ignored. Click Finish to exit the wizard.
Important: Verify that all the ETL processes belonging to the warehouse enablement pack for which you are installing a Fix Pack are in development mode. This prevents any ETL process from running during the Fix Pack installation. If you had already configured your ETLs with database account information before installing the Fix Pack, you have to configure them again after the installation (see Defining the authority to the warehouse sources and targets on page 254).
2. From the IBM DB2 Control Center utility, start the IBM DB2 Data Warehouse Center utility by selecting Tools -> Data Warehouse Center. The Data Warehouse Center logon window appears. 3. Log in to the IBM DB2 Data Warehouse Center utility using the local DB2 administrator user ID (in our case, db2admin). 4. In the Data Warehouse Center window, expand the Warehouse Sources folder. Update the database sources that relate to the application you want to configure; in this example, this is AMY_TWH_CDW_Source. Edit the properties of this warehouse source: right-click it, select Properties, and then select the Data Source tab. Fill in the database user ID and password information. For our environment, the values are shown in Figure 6-37.
5. Similarly, for the AMY warehouse targets, you need to modify the user ID information from the property pages. Edit the properties of each of those warehouse targets: right-click it, select Properties, and then select the Database tab. Fill in the database user ID and password information. For our environment, the values are shown in Figure 6-38, using AMY_TWH_MART_Target as an example.
2. Right-click the process you want to test and choose Test (see Figure 6-40 on page 269). The processes must be run sequentially according to the dependencies described in the ETL process model (see Figure 6-32 on page 260). The sequence for all AMX and AMY processes is as follows:
a. AMX_c05_ETL1_Process
   i. AMX_c05_s05_Pre_Extract
   ii. AMX_c05_s010_Rim_Extract
   iii. AMX_c05_s020_Parsing
   iv. AMX_c05_s030_Exception
   v. AMX_c05_s040_Comp_Msmt
b. AMX_c10_Rim_Prune_Process
   i. AMX_c10_s010_Rim_Prune
c. AMY_c05_ETL1_Data_Update_Process
   i. AMY_c05_s010_Update
d. AMY_m05_ETL2_Process
   i. AMY_m05_s05_Mart_Prepare_Stage
   ii. AMY_m05_s010_Mart_Pre_Extract
   iii. AMY_m05_s020_Mart_Extract
   iv. AMY_m05_s030_Mart_Load
   v. AMY_m05_s040_Mart_Rollup
   vi. AMY_m05_s050_Mart_Prune
The following process should only be tested and executed if you need to reset the OS data mart; all the data will be cleaned out of the data mart database.
e. AMY_m10_Reset_ETL2_Process
   i. AMY_m10_s010_Reset_OS_Data_Mart
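The required run order can be modeled with a tiny sequential runner. This is an illustration of the dependency rule only; the real scheduling is done in the Data Warehouse Center:

```python
# Top-level process names in their required order (step names omitted).
ETL_SEQUENCE = [
    "AMX_c05_ETL1_Process",
    "AMX_c10_Rim_Prune_Process",
    "AMY_c05_ETL1_Data_Update_Process",
    "AMY_m05_ETL2_Process",
]

def run_in_order(processes, run):
    """Run each process in sequence and stop at the first failure,
    because later processes depend on the earlier ones having completed.
    Returns (completed processes, failing process or None)."""
    done = []
    for name in processes:
        if not run(name):
            return done, name
        done.append(name)
    return done, None

# With a run callback that always succeeds, all four processes complete.
done, failed = run_in_order(ETL_SEQUENCE, run=lambda name: True)
```

The `run` callback stands in for whatever actually executes a process (here, testing it by hand in the Data Warehouse Center); the point is that a failure must halt everything downstream.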
3. To verify the status of the processes (Successful, Failed, In Progress, or Scheduled), select Warehouse -> Work in Progress in the Data Warehouse Center. The Work in Progress window (see Figure 6-41 on page 270) shows the status of all processes for all ETLs (Successful, Failed, In Progress, or, if promoted to production, Scheduled), and it also allows you to run a process again by simply right-clicking it and selecting Run now.
6.8 Reporting
Next we show how to set up, configure, and use some of the reports provided by the IBM Tivoli Monitoring, Version 5.1.1 Warehouse Enablement Pack. Note: As described in Chapter 3, Getting Tivoli Data Warehouse 1.2 up and running on page 71, an ODBC connection to the data mart database needs to be defined on the Crystal Enterprise server before we can work with the reports. Refer to that chapter for details.
Here, <hostname> represents the hostname of the Crystal Enterprise report server, as shown in Figure 6-46.
In this section, we concentrate on viewing IBM Tivoli Monitoring, Version 5.1.1 reports; we do not explain the configuration of Crystal Enterprise to its full extent. For details on configuration and administration tasks, refer to the following manuals shipped with the product:
Crystal Enterprise 9 Installation Guide
Crystal Enterprise 9 Administrator's Guide
Crystal Enterprise 9 Getting Started Guide
Crystal Enterprise 9 ePortfolio User's Guide
From the Crystal Enterprise Launchpad, proceed by selecting the ePortfolio link, which brings you to the window shown in Figure 6-47. In the top bar, you can see that we are logged in as user guest. By default, the guest user has no access to the reports, as indicated by the words No Folders on the left side of the window.
The installation process of the first warehouse enablement pack on the Tivoli Data Warehouse environment creates a user ID on the Crystal Enterprise environment named Tivoli. This user ID is to be used to access the reports provided by any IBM Tivoli software.
To log in with the Tivoli user ID, select the Log On button in the upper right corner of the ePortfolio window (Figure 6-47 on page 277). The Log On window shown in Figure 6-48 is presented. The Tivoli user ID has no password by default. We use the Enterprise authentication method, as specified during the Crystal Enterprise installation.
After entering the required data, select Log On to proceed. We are now back at the ePortfolio window (Figure 6-49), but with Tivoli user authority. Instead of No Folders, as in the guest user's ePortfolio window in Figure 6-47, a link named IBM Tivoli Monitoring for Operating Systems is now visible in the Tivoli user's ePortfolio window in Figure 6-49.
We follow this link by selecting IBM Tivoli Monitoring for Operating Systems and proceed to the IBM Tivoli Monitoring, Version 5.1.1 reports as shown in Figure 6-50. All reports provided by the IBM Tivoli Monitoring Warehouse Enablement Pack are listed there.
To generate a report, select the desired report, for example, Operating System Busiest Systems, and select Schedule, as shown in Figure 6-51.
The schedule report panel opens. To run the report now, select Now under the Run Report option. Because this report requires additional parameters, such as a time frame, select Parameters under the Customize your Options option, as shown in Figure 6-52.
Figure 6-53 shows the selection of parameters for the report. Select Schedule when ready to run the report.
Because we selected to run this report now, the report is scheduled immediately and the report's history window opens. The just-scheduled report runs, and its initial status is set to Pending. Note: The history window is not updated automatically. Click the Refresh button to view the current state.
To view successfully generated reports from the history window shown in Figure 6-54, click the Instance Time link in the left column of the table to view the associated report. The report is shown in Figure 6-55.
Next we present some more examples of reports provided by the IBM Tivoli Monitoring, Version 5.1.1 Warehouse Enablement Pack. Figure 6-56 shows the Operating System Paging File Utilization report.
Figure 6-57 shows the Operating System UNIX CPU Statistics report.
To access that Web site and download the script, you must provide an IBM ID. If you do not have one, you can apply at the same Web link to receive one. The itmchk.sh script is bundled in a package called ITM - TEDW Analysis Tools together with another script (twhchk.sh), which can be used to check the installed components and their installation order in a Tivoli Data Warehouse 1.2 environment and to provide a summary of meaningful data from both the ITM and TEDW databases.

To run the itmchk.sh script, follow these steps:
1. Download ITM-TEDW-Health.tar to the Tivoli managed node where your ITM RIM object is available (in our case study scenario, the TMR server TDW009 is also the RIM host for the ITM database).
2. Extract all the files and directories contained in the archive file.
3. Change to the ITM-TEDW-Health directory.
4. Set up the Tivoli environment (. /etc/Tivoli/setup_env.sh on a UNIX system).
5. Run the command ./itmchk.sh.
6. The program prompts for the ITM RIM name, as shown in Example 6-6. Choose the number corresponding to the RIM name used for the ITM_DB database and press Enter.
Example 6-6 Running itmchk.sh tool
# ./itmchk.sh
========================================================
IBM Tivoli Monitoring - Tivoli Enterprise Data Warehouse
Configuration and Status Snapshot
(c) IBM - Tivoli
========================================================
==> Analysis started at: Aug 14 2003 11:46:10
==> Setting Debug Mode...
(I) Debug Disabled
==> Including Help Messages Routines...
(I) Done Including Help Messages Routines
==> Looking up TME 10 System Information...
(I) Valid User Found
==> Getting Environmental Configuration...
(I) Environment Successfully Imported
==> Looking for RIM Objects definition...
(I) Discovered the following RIM Objects:
1) itm_rim_tdw009
2) mdist2
3) spr_rim
(?) Select the One used by Warehouse Component:
7. The tool checks ITM configuration and then prints a report as shown in Example 6-7.
Example 6-7 itmchk.sh tool report
(I) Using RIM Object (itm_rim_tdw009)
==> Retrieving RIM Object Information...
(I) Valid RIM Object Type Found
(I) RIM Object Configuration is:
<> Database Vendor: DB2
<> Database Home: /usr/lpp/db2_07_01
<> Database Name: itm_db
<> Database User: db2inst1
<> DB2 Instance Home: /home/db2inst1
(I) The RIM Host Trace is turned OFF
==> Testing RIM Object (itm_rim_tdw009) Connectivity...
(I) Attempting to Connect...
(I) RIM Object (itm_rim_tdw009) is Working Properly
==> Retrieving Data Collector Configuration...
(I) RIM Object (itm_rim_tdw009) Matches the Data Collector Configuration
(I) The XML files will be processed 6 (default) times before being archived.
(I) The time between two consecutive requests of data upload is 1 (default) minutes.
(I) Data will be uploaded 30 minutes past the full hour.
(I) Old data will be removed from the ITM database at 0 (default) o'clock.
(I) The ITM database will not retain data older then 30 (default) days.
==> Retrieving Middle Layer Trace Configuration...
(I) The profile distribution trace is at level 1 (default) and has a size of 1 (default) bytes.
==> Retrieving the Status of the EP cache...
(I) Cache status is: Alive for Endpoint: tdw001
(I) Cache status is: DMEngineOff for Endpoint: tdw002
(I) Cache status is: Alive for Endpoint: tdw003
(I) Cache status is: DMEngineOff for Endpoint: tdw004
(I) Cache status is: Alive for Endpoint: tdw005
(I) Cache status is: Alive for Endpoint: tdw006
(I) Cache status is: Alive for Endpoint: tdw007
(I) Cache status is: Alive for Endpoint: tdw008
(I) Cache status is: Alive for Endpoint: tdw009
(I) Cache status is: Alive for Endpoint: tdw010
(I) Cache status is: DMEngineOff for Endpoint: tdw011
==> Retrieving Info about the Queued Requests...
(I) Endpoint="tdw001" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw002" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw003" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw004" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw005" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw006" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw007" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw008" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw009" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw010" LastCall="Wed Aug 13 14:39:21 CDT 2003"
(I) Endpoint="tdw011" LastCall="Wed Aug 13 14:39:21 CDT 2003"
==> Done.
========================================================
================== Analysis Completed ==================
========================================================
The itmchk.sh report shows the following information:
- RIM object configuration
- RIM connection status
- Data collector configuration parameters (see "Installing the ITM WEP data collector component" on page 232)
- Middle layer trace configuration (used for debugging)
- Status of the endpoint cache
- The last data upload request to each endpoint
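When many endpoints are involved, scanning the endpoint cache section of the report by eye is error prone. The following sketch (a helper of our own, not part of the tool) parses the "Status of the EP cache" lines of an itmchk.sh report and lists the endpoints whose monitoring engine is not alive:

```python
import re

def engines_off(report):
    """Return the endpoints whose cache status is anything other than Alive.

    The line format is taken from the itmchk.sh output in Example 6-7;
    this helper is illustrative and not part of the tool itself.
    """
    down = []
    for line in report.splitlines():
        m = re.search(r"Cache status is:\s*(\S+)\s+for Endpoint:\s*(\S+)", line)
        if m and m.group(1) != "Alive":
            down.append(m.group(2))
    return down

sample = """(I) Cache status is: Alive for Endpoint: tdw001
(I) Cache status is: DMEngineOff for Endpoint: tdw002
(I) Cache status is: Alive for Endpoint: tdw003"""

print(engines_off(sample))  # ['tdw002']
```

Endpoints reported as DMEngineOff are candidates for a restart of the monitoring engine with the wdmcmd -restart -e <endpoint_name> command described later in this section.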
The date of the last data upload allows an estimation of the time frame in which the data collection process stopped. This estimate can be very useful when examining the monitoring trace files to help in understanding the reasons for data upload failure. You can also check the hostnames of all endpoints that provided data, as shown in Example 6-9.
Example 6-9 Names of the endpoints collecting data
db2 => select host_name from endpoints

HOST_NAME
----------------------------------------------------------
tdw001
tdw002
tdw003
tdw004
tdw005
tdw006
tdw007
tdw008
tdw009
tdw010
tdw011

11 record(s) selected.
This information can be used to track which monitored endpoints are not providing data.
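One quick way to perform this check is a set difference between the endpoints you expect to report and the HOST_NAME values returned by the query in Example 6-9. The sketch below uses the endpoint naming convention from this scenario; the inventory and query results are illustrative:

```python
# Endpoints we expect to provide data (for example, from an inventory list);
# names follow the tdw001..tdw011 convention used in this case study.
expected = {"tdw%03d" % n for n in range(1, 12)}

# Hostnames actually returned by: select host_name from endpoints
# (illustrative result with one endpoint missing)
reporting = {"tdw001", "tdw002", "tdw003", "tdw005", "tdw006",
             "tdw007", "tdw008", "tdw009", "tdw010", "tdw011"}

# Any endpoint in the inventory but absent from the query result
# has never uploaded data and should be investigated.
missing = sorted(expected - reporting)
print(missing)  # ['tdw004']
```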
Often a connection failure is simply caused by a change in the database password without the corresponding update of the RIM object. To change the RIM password, use the following command:
wsetrimpw rim_name <old_password> <new_password>
The following profiles are running:
tmw2kDefProfile#tdw009-region
  TMW_EventLog          :Running
  TMW_PhysicalDiskModel :Running
  TMW_TCPIP             :Running
  TMW_LogicalDisk       :Running
  TMW_MemoryModel       :Running
  TMW_Process           :Running
  TMW_Processor         :Running
To change the status of resource models, modify the original monitoring profile using the Tivoli desktop and redistribute it to the endpoint. If the monitoring engine is not running correctly on the endpoint, you can try to restart it using the command:
wdmcmd -restart -e <endpoint_name>
If your ITM database does not receive data even though your resource models are running correctly on the monitored endpoints and the connection between the RIM host and the database is working, verify the following:
1. Enable Data Logging and TEDW Data in the Logging options of your monitoring profiles are checked (see "Activate data collection" on page 237).
2. The data collection parameters are correct (use the wdmconfig -m <nodename> -G datacollector* command).
3. The gateway data collection process was started with the wdmcollect command (use wdmcollect -m all -q to list the active collection processes for all managed nodes).
The data related to the request '520207df' has been successfully received <F>1035247480000<F>Tue Oct 22 00:44:40 2002 GMT<F>AMW<F>DataCollector<F>eastham<F>19074<F>AMW<F> - AMW0202I The file '/var/spool/Tivoli/eastham.db/dmml/tedw/tedw1/1035246605.zip' is going to be processed for upload.
<F>1035247481000<F>Tue Oct 22 00:44:41 2002 GMT<F>AMW<F>DataCollector<F>eastham<F>19074<F>AMW<F> - AMW0198I The data related to file: '/var/spool/Tivoli/eastham.db/dmml/tedw/tedw1/1035246605.zip.dir/ITM_WH@2002#10 #22#0 #20.xml' have been successfully loaded into the DataBase
Example 6-13, in contrast, shows a failure in the data collection process. In this case, the problem was caused by a stop of the Tivoli Framework processes on the RIM host.
Example 6-13 trace_tmnt_rimh_eng1.log <F>1037723936000<F>Tue Nov 19 16:38:56 2002 GMT<F>AMW<F>datacollector<F>eastham<F>18332<F>MIN<F>../../../../.. /src/objects/DataCollector/platform/StoreData/RIMConnectionHandler.cxx<F>RIMCon nectionHandler::connect()<F>537 026664<F>'FRWSL0005E A communications failure occurred: FRWOG0014E destination dispatcher unavailable Please refer to the TME 10 Framework Planning and Installation Guide, "TME Maintenance and Troubleshooting" for details on diagnosing communication errors or contact your Tivoli support provider.
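The trace records above use '<F>' as a field separator, so they can be split mechanically when hunting for message codes across large log files. The following sketch is a minimal parser of our own; the field positions are inferred from the samples in this section, and the record is a shortened version of the one in Example 6-12:

```python
# A shortened trace record in the '<F>'-delimited format shown above.
record = ("<F>1035247480000<F>Tue Oct 22 00:44:40 2002 GMT<F>AMW<F>DataCollector"
          "<F>eastham<F>19074<F>AMW<F> - AMW0202I The file 'sample.zip' "
          "is going to be processed for upload.")

# Split on the '<F>' separator; the record starts with one,
# so drop the empty leading element.
fields = record.split("<F>")[1:]
epoch_ms, timestamp, product, component, host, pid = fields[:6]
message = fields[-1]

print(host, component)   # eastham DataCollector
print(message.strip())
```

Filtering parsed records for codes such as AMW0198I (successful load) or FRWSL0005E (communications failure) makes it easy to bracket the time window in which uploads stopped.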
IBM Tivoli Monitoring also has important log files at the endpoints:
- On the Windows platform, most information is stored in the Tmw2k.log file under the $LCF_DATDIR/LCFNEW/Tmw2k directory.
- On the UNIX platform, the logs are under the $LCF_DATDIR/LCFNEW/AMW/logs directory. These log files are:
  - msg_dmxengine.log
  - trace_dmxengine.log
  - trace_dmxeu.log
  - trace_dmxntv.log
As with the managed node logs, the message file contains operational messages, while the trace files contain error messages. Example 6-14 shows how trace_dmxengine.log reports a problem with a distributed resource model (in this case, the Network Interface resource model in the ITM_Unix_monitor profile).
Example 6-14 trace_dmxengine.log <F>1036710042642<F>Thu Nov 07 17:00:42 CST 2002<F>AMW<F>Engine<F>yarmouth<F>1224 0<F>MIN<F>ReferenceModel profile=ITM_Unix_monitor#eastham-region model=DMXNetworkInterface<F><F>Thread[TmrSrvAction_RMTimer,5,main]<F>No NICs found!<F>None
For further information about IBM Tivoli Monitoring troubleshooting, refer to Appendix C of the IBM Tivoli Monitoring User's Guide Version 5.1.1, SH19-4569-01.
Chapter 7.
Host     Software                                       DB2 level
TDW003   TDW Control Server 1.2                         DB2 Server 7.2 FP10a
TDW004   TDW Central Warehouse 1.2 (remote agent site)  DB2 Server 7.2 FP10a
         TDW Data Mart 1.2                              DB2 Server 7.2 FP10a
         Crystal Enterprise Server 9                    DB2 Client 7.2 FP10a
         IBM Tivoli Storage Manager 5.2                 -
Figure 7-1 gives a brief summary of the distributed Tivoli Data Warehouse environment used in this chapter.
The figure shows the agent site with the TWH_CDW and TWH_MD databases, the TDW Control Server (hostname TDW003, Windows 2000 Server), and the IBM Tivoli Storage Manager 5.2 server (hostname TSMSVR01, Windows 2000 Server).
The IBM Tivoli Storage Manager Warehouse Enablement Pack extracts information related to the following components:
- IBM Tivoli Storage Manager servers
- Nodes that are managed by the servers
- Filespaces belonging to the nodes
- Amount of data stored on the servers on behalf of nodes and their filespaces
- Storage pools in which the data is stored

IBM Tivoli Storage Manager itself does not record historical information about these entities. As a result, the IBM Tivoli Storage Manager Warehouse Enablement Pack records the current state of each entity as it extracts information from the IBM Tivoli Storage Manager servers. The current state, collected regularly over a period of time, becomes the historical record of IBM Tivoli Storage Manager activity.
7.3 Prerequisites
Before installing the IBM Tivoli Storage Manager Warehouse Enablement Pack, you must install the following software:
- IBM Tivoli Storage Manager 5.2
- IBM DB2 Universal Database Enterprise Edition, Version 7.2
- IBM DB2 Universal Database Enterprise Edition, Version 7.2 Fix Pack 8e, 9, 10, or 10a
- IBM DB2 Universal Database Enterprise for z/OS and OS/390, Version 7
- Tivoli Data Warehouse 1.2
- Crystal Enterprise Professional Version 9 for Tivoli
- IBM DB2 Warehouse Manager, Version 7.2 Fix Pack 8 with its eFix, on the remote agent sites in the case of a distributed deployment. (This is the minimum level of IBM DB2 Warehouse Manager supported by Tivoli Data Warehouse 1.2; Fix Pack 10a is the recommended level and is shipped with Tivoli Data Warehouse 1.2.)
- IBM Tivoli Storage Manager 5.2 ODBC driver

The IBM Tivoli Storage Manager Warehouse Enablement Pack supports central data warehouse and data mart databases on DB2 UDB for z/OS and OS/390, or on DB2 UDB for Windows and UNIX systems. Regardless of the platform on which the data warehouse or data marts reside, the warehouse enablement pack does not support multiple central data warehouse or data mart databases.
The data mart ETL (ETL2) extracts a subset of the information stored in the central data warehouse database and stores it in a data mart database. The data mart is optimized for reporting on specific areas of interest, and may be used by a number of different reporting tools.

Restriction: Because the IBM Tivoli Storage Manager ODBC client runs only on the Microsoft Windows platform, do not configure Tivoli Data Warehouse remote agent sites to run on non-Windows platforms.

In this section we install the IBM Tivoli Storage Manager ODBC driver on the control server, which in our case study environment is the TDW003 server. Perform the following tasks to install the IBM Tivoli Storage Manager ODBC driver on the control server:
1. Locate the TSM520C_GA_ODBC_.exe file in the IBM Tivoli Storage Manager 5.2 installation media.
2. Double-click the file, and on the Welcome screen, click Next.
3. Select a temporary directory to save the installation files and click Next twice to confirm the installation.
4. Select the installation directory for ODBC and click Next.
5. Select the Complete setup type and click Next, as shown in Figure 7-2.
The Tivoli Data Warehouse warehouse pack installer cannot configure IBM Tivoli Storage Manager data sources. It will, however, recognize an existing IBM Tivoli Storage Manager ODBC data source named ITSMSRC and allow it to be used with the warehouse enablement pack. To create the IBM Tivoli Storage Manager ODBC data source on the control server, perform the following steps:
1. Click Start -> Control Panel -> Administrative Tools -> Data Sources (ODBC).
2. Go to the System DSN tab and click Add. Select TSM ODBC Driver and click Finish. The configuration panel opens.
Fill in the Data source name with ITSMSRC, the Administrator name with the IBM Tivoli Storage Manager administrator name, and the TCP/IP field with the fully qualified hostname of the IBM Tivoli Storage Manager server (the name must match the SERVERHLADDRESS value configured on the IBM Tivoli Storage Manager server). Leave the other parameters at their defaults, and then click OK.

If you plan to use the IBM Tivoli Storage Manager Warehouse Enablement Pack with multiple data sources, you may need to follow this process to configure the ODBC data sources: create a dummy DB2 database on the control server and create as many ODBC connections to the dummy database as you have IBM Tivoli Storage Manager servers. Create the ODBC connections as if you were creating the IBM Tivoli Storage Manager ODBC connections (same names).
Do not configure any IBM Tivoli Storage Manager ODBC connections to the real IBM Tivoli Storage Manager servers' databases. Proceed with the installation of the IBM Tivoli Storage Manager warehouse enablement pack as described in the next section. When the installer prompts for the IBM Tivoli Storage Manager data sources, configure the data sources to point to the ODBC connections to the dummy DB2 database. After the IBM Tivoli Storage Manager warehouse enablement pack installation has completed, replace the DB2 data sources with IBM Tivoli Storage Manager ODBC data sources with the same names. Manually update the user names and passwords associated with each source in the DB2 Data Warehouse Center.
5. Restart Warehouse logger and Warehouse server services on the control server.
2. The next window (Figure 7-5) shows the location of the Tivoli common logging directory, which will contain all Tivoli Data Warehouse log files. In our installation we use the default location C:\Program Files\ibm\Tivoli\common. Click Next.
3. In the window shown in Figure 7-6, click Add to add the IBM Tivoli Storage Manager Warehouse Enablement Pack.
4. In the Location of installation properties file window, as shown in Figure 7-7, specify the location of the IBM Tivoli Storage Manager warehouse enablement pack installation properties file, twh_install_props.cfg. You can find this file in the IBM Tivoli Storage Manager 5.2 Warehouse Enablement Pack media, under the tedw_apps_etl\AMN directory. Click Next.
5. The installer prompts for the data mart database to be used by the processes of the IBM Tivoli Storage Manager Warehouse Enablement Pack. It also prompts for the remote agent site that will run the ETL2 processes, as shown in Figure 7-8. In our case study scenario, the data mart is on TDW009 and we use the default agent site on the control server, TDW003.
6. The installer prompts for the central data warehouse database to be used by the processes of the IBM Tivoli Storage Manager Warehouse Enablement Pack. It also prompts for the remote agent site that will run the ETL1 processes, as shown in Figure 7-9. In our case study scenario, the central data warehouse is on TDW004 and we use the default agent site on the control server, TDW003. At this time you can opt to configure the scheduling settings for the ETL1 processes. If you do so, the installation process schedules the processes to run at the specified time and promotes them to Production status. You can also opt not to schedule the ETLs at installation time and perform these tasks manually later. The installer also defines the user authority for each of the warehouse source and target processes.
Figure 7-9 Central data warehouse and remote agent site settings
7. The installer checks for the existence of an IBM Tivoli Storage Manager ODBC connection named ITSMSRC. Click Edit to specify its settings, as shown in Figure 7-10.
8. As shown in Figure 7-11, set the user ID of the IBM Tivoli Storage Manager administrator and the password. The administrator name should be the same as in the IBM Tivoli Storage Manager ODBC system data source you created prior to the installation. Click Next. The installer tests the ODBC connection and returns to the ODBC Data Source Properties panel. Click Next again.
9. The installation menu window (Figure 7-12) now lists the IBM Tivoli Storage Manager Warehouse Enablement Pack. Select it and click Next to continue.
10. Click Install in the summary window (Figure 7-13) to start the IBM Tivoli Storage Manager Warehouse Enablement Pack installation.
11. View the progress of the installation through the messages that are shown until its completion. The final installation window (Figure 7-14) contains either a successful completion notice or messages describing problems. Make sure the window does not list any warnings or errors, and then click Next. If warnings are listed, check the logs to ensure that the warnings can safely be ignored. Click Finish to exit the wizard.
Table 7-2 provides a list of the warehouse sources and targets whose authority should be checked.
Table 7-2 ITSM WEP Warehouse Object Names
ANR_TWH_CDW_Target
ANR_TWH_MART_Target
ANR_ITSMRC_Source
ANR_TWH_CDW_Source
ANR_TWH_MART_Source
ANR_IBM_Tivoli_Storage_Manager_v1.1.0_Subject_Area
ANR_C05_ETL1_Process
ANR_C10_EXPServer_Process
ANR_M05_ETL2_Process
ANR_C05_S010_Preextract
ANR_C05_S020_Extract
ANR_C05_S030_Transform
ANR_C05_S040_SRVR_LOAD
ANR_C05_S050_STGP_LOAD
ANR_C05_S060_NODE_LOAD
ANR_C05_S070_FILESP_LOAD
ANR_C05_S080_OCCUP_LOAD
ANR_C10_S010_EXPServer
ANR_m05_s010_spbuildmart
ANR_m05_s020_sprollup
ANR_m05_s030_spupdatestats
ANR_m05_s040_fsbuildmart
ANR_m05_s050_fsrollup
ANR_m05_s060_fsupdatestats
7.5.1 ANR_C05_ETL1_Process
This process is the main central data warehouse ETL. It extracts information from an IBM Tivoli Storage Manager server and loads it into the central data warehouse database. Run this process once every 24 hours to collect information about the previous day's processing. Choose a time of low activity on the IBM Tivoli Storage Manager server to run this process. For example, you might schedule the process in the early morning, after your nightly backups have completed but before the daily server maintenance processes begin.

Figure 7-15 illustrates the process model of ANR_C05_ETL1_Process. To view the content of Figure 7-15 on page 316, perform the following steps:

Important: Do not modify or make any changes to the process model. If you are prompted to save, click No.

1. Start the IBM DB2 Control Center utility by selecting Start -> Programs -> IBM DB2 -> Control Center.
2. In the IBM DB2 Control Center utility, start the IBM DB2 Data Warehouse Center utility by selecting Tools -> Data Warehouse Center. The Data Warehouse Center logon window appears.
3. Log in to the IBM DB2 Data Warehouse Center utility using the local DB2 administrator user ID, in our case db2admin.
4. In the Data Warehouse Center window, expand Subject Areas -> Processes and double-click ANR_C05_ETL1_Process.

Similar procedures can be followed for the other processes, ANR_C10_EXPServer_Process and ANR_M05_ETL2_Process.
The process ANR_C05_ETL1_Process has the following steps:

ANR_c05_s010_preextract
This step prepares for the subsequent extraction step by creating staging tables in the central data warehouse database. It is a separate step so that ANR_c05_s020_extract may be run multiple times to extract information from multiple IBM Tivoli Storage Manager servers.
ANR_c05_s020_extract
This step extracts information from an IBM Tivoli Storage Manager server and stores it in staging tables in the central data warehouse. Typically, there is a one-to-one correspondence between the table from which data is extracted and the staging table in the central data warehouse. For example, information extracted from the IBM Tivoli Storage Manager server's STATUS table is stored in a table called ANR.STG_STATUS in the central data warehouse database.

ANR_c05_s030_transform
This step transforms some of the data in the staging tables created by ANR_c05_s020_extract for use by subsequent steps. In particular, this step analyzes the information stored in the server's SERVER_HLA field in the STATUS table to determine whether it is a fully qualified TCP/IP host name, a TCP/IP address, or some other value. A new staging table, ANR.STG_SERVER, is created by this step.

ANR_c05_s040_srvr_load
This step loads information about the IBM Tivoli Storage Manager server and the computer on which it is running into the central data warehouse. Information about the server and its host computer is obtained from two staging tables: ANR.STG_STATUS and ANR.STG_SERVER. This step inserts or updates, as necessary, component entries in the central data warehouse database. It defines a parent-child relationship between the host computer and the server. Finally, it inserts or updates any attributes that are associated with the components.

ANR_c05_s050_stgp_load
This step loads information about the server's storage pools into the central data warehouse. Information primarily comes from one staging table, ANR.STG_STGP. This step inserts or updates, as necessary, component entries in the central data warehouse database. It defines a parent-child relationship between the server and its storage pools. Storage pools that were removed from the server are detected and expired from the central data warehouse database. This step inserts or updates any attributes that are associated with the components and loads new measurements collected since the last time the process ran.

ANR_c05_s060_node_load
This step loads information into the central data warehouse about the client nodes that are registered to the IBM Tivoli Storage Manager server. Information primarily comes from one staging table, ANR.STG_NODE.
This step inserts or updates, as necessary, component entries in the central data warehouse database. It defines a parent-child relationship between the server and the nodes. Nodes that were removed from the server are detected and expired from the central data warehouse database. This step inserts or updates any attributes that are associated with the components and loads new measurements collected since the last time the process ran.

ANR_c05_s070_filesp_load
This step loads information about the client nodes' file spaces into the central data warehouse. Information primarily comes from one staging table, ANR.STG_FILESP. This step inserts or updates, as necessary, component entries in the central data warehouse database. It defines a parent-child relationship between the file space and its owning node. File spaces that were removed from the server are detected and expired from the central data warehouse database. This step inserts or updates any attributes that are associated with the components and loads new measurements collected since the last time the process ran.

ANR_c05_s080_occup_load
This step loads client occupancy information into the central data warehouse. Occupancy information describes the amount of server storage that is being used for a given node's file spaces. It is broken down by node, file space, storage pool, and data type (backup, archive, or space management). Information primarily comes from one staging table, ANR.STG_OCCUP. The IBM Tivoli Storage Manager warehouse enablement pack uses an abstract component type to represent occupancy information. The component type, ANR_FS_OCCUPANCY, represents the server storage being used by a file space in a particular storage pool for a given type of file. It is a child in a parent-child relationship with the file space. ANR_FS_OCCUPANCY components are named after the storage pool that owns the storage and the type of storage being described. Names are generated by concatenating the storage pool name with the storage type, separating the two with a colon. This step inserts or updates, as necessary, component entries in the central data warehouse database. It defines a parent-child relationship between a file space and the ANR_FS_OCCUPANCY component. Occupancy components that were removed from the server (due to storage pool migration, for example) are detected and expired from the central data warehouse database. This step inserts or updates any attributes that are associated with the components and loads new measurements collected since the last time the process ran. If invalid data is detected, the ETL process creates the exception table ANR.EXCEPT_SRVR.
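Two of the operations described above can be sketched briefly: the SERVER_HLA classification performed by ANR_c05_s030_transform and the ANR_FS_OCCUPANCY naming rule used by ANR_c05_s080_occup_load. The function names, the classification heuristic, and the example values below are our own illustrations, not the warehouse pack's internal code:

```python
import ipaddress

def classify_server_hla(value):
    """Decide whether a SERVER_HLA value is a TCP/IP address, a fully
    qualified host name, or some other value (per ANR_c05_s030_transform).
    The heuristic for host names is deliberately simple and illustrative."""
    try:
        ipaddress.ip_address(value)
        return "ip-address"
    except ValueError:
        pass
    if "." in value and all(part.isalnum() or "-" in part
                            for part in value.split(".")):
        return "fqdn"
    return "other"

def occupancy_name(storage_pool, storage_type):
    """Build an ANR_FS_OCCUPANCY component name: the storage pool name and
    the storage type concatenated with a colon (per ANR_c05_s080_occup_load).
    The pool and type labels here are made up for the example."""
    return "%s:%s" % (storage_pool, storage_type)

print(classify_server_hla("9.3.4.26"))                    # ip-address
print(classify_server_hla("tdw003.itsc.austin.ibm.com"))  # fqdn
print(occupancy_name("BACKUPPOOL", "backup"))             # BACKUPPOOL:backup
```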
7.5.2 ANR_C10_EXPServer_Process
This process is used to expire information about IBM Tivoli Storage Manager servers and their subcomponents from the central data warehouse. Run this process manually whenever information about a specific IBM Tivoli Storage Manager server must be expired from the central data warehouse. Do not schedule this process to run automatically, because it will expire data from the central data warehouse that you may wish to keep. The ANR_C10_EXPServer_Process process expires servers based on the contents of the table ANR.EXPServer_List. ANR.EXPServer_List contains a single column, SERVER_NAME, which names the server to be expired. Multiple servers may be expired by inserting a row for each server into the ANR.EXPServer_List table. If ANR.EXPServer_List contains no rows, then ANR_C10_EXPServer_Process will not expire any servers. This process has the following step:

ANR_c10_s010_expserver
This step expires each of the servers listed in the ANR.EXPServer_List table. The servers and their attributes are marked as expired. Then all of the server's subcomponents and their attributes are expired, as are the relationships between the components. After all components related to a given server have been expired, the entry for that server is deleted from the ANR.EXPServer_List table.
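The expiration queue behaves like a simple worklist: each SERVER_NAME row drives one expiration and is deleted afterwards, so an empty ANR.EXPServer_List expires nothing. This in-memory sketch (with made-up server names; the real step works against the DB2 table) mirrors that contract:

```python
# Rows in ANR.EXPServer_List; the server names are illustrative only.
expserver_list = ["ATLANTA_TSM", "DALLAS_TSM"]
expired = []

while expserver_list:
    server = expserver_list.pop(0)   # take the next SERVER_NAME row
    expired.append(server)           # mark the server (and, in the real step,
                                     # its subcomponents, attributes, and
                                     # relationships) as expired
    # the real step then deletes the row, which pop(0) models here

print(expired)          # ['ATLANTA_TSM', 'DALLAS_TSM']
print(expserver_list)   # [] -- the list is empty once processing completes
```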
7.5.3 ANR_M05_ETL2_Process
This process loads data from the central data warehouse into the Storage Pool Occupancy Data Mart and the Filespace Occupancy Data Mart. Run this process once every 24 hours to load new data into the Storage Pool Occupancy and Filespace Occupancy data marts. This process should be run only after the central data warehouse process ANR_C05_ETL1_Process has completed. This process has the following steps:

ANR_m05_s010_spbuildmart
This step loads data from the central data warehouse into the Hourly Storage Pool star schema in the Storage Pool Occupancy Data Mart.

ANR_m05_s020_sprollup
This step aggregates the hourly data loaded by the ANR_m05_s010_spbuildmart step and loads it into the Daily, Weekly, Monthly, Quarterly, and Yearly star schemas.
ANR_m05_s030_spupdatestats
This step updates statistics about the fact tables loaded in the previous steps. The statistics are used by reports to determine which set of facts is the most recent for any given server.

ANR_m05_s040_fsbuildmart
This step loads data from the central data warehouse into the Hourly Filespace star schema in the Filespace Occupancy Data Mart.

ANR_m05_s050_fsrollup
This step aggregates the hourly data loaded by the ANR_m05_s040_fsbuildmart step and loads it into the Daily, Weekly, Monthly, Quarterly, and Yearly star schemas.

ANR_m05_s060_fsupdatestats
This step updates statistics about the fact tables loaded in the previous steps. The statistics are used by reports to determine which set of facts is the most recent for any given server.
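The rollup steps above aggregate hourly facts into coarser time grains. This sketch shows the idea for the daily grain only; the field names, sample values, and the use of an average are illustrative, since the real steps run as SQL against the data mart's star schemas:

```python
from collections import defaultdict
from datetime import datetime

# Hourly occupancy facts: (timestamp, storage pool, megabytes).
# The values are invented for the example.
hourly = [
    ("2003-08-14 01:00", "BACKUPPOOL", 120.0),
    ("2003-08-14 13:00", "BACKUPPOOL", 140.0),
    ("2003-08-15 01:00", "BACKUPPOOL", 150.0),
]

# Group hourly facts by (day, pool), the daily grain of the rollup.
daily = defaultdict(list)
for ts, pool, mb in hourly:
    day = datetime.strptime(ts, "%Y-%m-%d %H:%M").date()
    daily[(day, pool)].append(mb)

# Aggregate each day's hourly values; the real step's aggregation
# rule is defined by the star schema, an average is used here only
# to illustrate the reduction.
daily_avg = {key: sum(vals) / len(vals) for key, vals in daily.items()}
print(daily_avg)
```

Weekly, monthly, quarterly, and yearly grains follow the same pattern with a coarser grouping key.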
7.7 Reporting
In this section, we show how to set up, configure, and use some of the reports provided by the IBM Tivoli Storage Manager 5.2 Warehouse Enablement Pack. Note: As described in Chapter 3, "Getting Tivoli Data Warehouse 1.2 up and running" on page 71, an ODBC connection to the data mart database needs to be defined on the Crystal Enterprise server before we can work with the reports. Refer to that chapter for details.
Here, <hostname> represents the hostname of the Crystal Enterprise report server, as shown in Figure 7-17.
In this section, we concentrate on viewing the IBM Tivoli Storage Manager 5.2 reports; we do not explain the configuration of Crystal Enterprise to its full extent. For details on configuration and administration tasks, refer to the following manuals shipped with the product:
- Crystal Enterprise 9 Installation Guide
- Crystal Enterprise 9 Administrator's Guide
- Crystal Enterprise 9 Getting Started Guide
- Crystal Enterprise 9 ePortfolio User's Guide
From the Crystal Enterprise Launchpad, proceed by selecting the ePortfolio link, which brings you to the window shown in Figure 7-18. In the top bar, you can see that we are logged on as user guest. By default, the guest user has no access to the Tivoli reports, as indicated by the words No Folders on the left side of the window.
The installation process of the first warehouse enablement pack on the Tivoli Data Warehouse environment creates a user ID on the Crystal Enterprise environment named Tivoli. This user ID is to be used to access the reports provided by any IBM Tivoli software.
To log in as the Tivoli user ID, select the Log On button in the upper right corner of the ePortfolio window in Figure 7-18 on page 324. The Log On window shown in Figure 7-19 is displayed. The Tivoli user ID has no password by default. We use the Enterprise authentication method, as specified during the Crystal Enterprise installation.
After entering the required data, select Log On to proceed. We are now back at the ePortfolio window in Figure 7-20, but with the authority of user Tivoli. Instead of No Folders, as in the guest user's ePortfolio window in Figure 7-18, a link named IBM Tivoli Storage Manager, Warehouse Enablement Pack is now visible in the Tivoli user's ePortfolio window in Figure 7-20.
We follow this link by selecting IBM Tivoli Storage Manager, Warehouse Enablement Pack and proceed to the IBM Tivoli Storage Manager 5.2 reports as shown in Figure 7-21. All reports provided by the IBM Tivoli Storage Manager 5.2 warehouse enablement pack are listed there.
To generate a report, select the desired report, for example, How Has Clients' Use of Server Storage Changed Over Time?, and select Schedule, as shown in Figure 7-22.
The schedule report panel opens. To run the report immediately, select Now under the Run Report option. Because this report requires additional parameters, such as a time frame, select Parameters under the Customize your Options option, as shown in Figure 7-23.
Figure 7-24 shows the selection of parameters for the report. Select Schedule when ready to run the report.
Because we selected to run this report now, it is scheduled immediately and the report's history window opens. The just-scheduled report runs, with its initial status set to Pending.
Note: The History window is not updated automatically. Press the Refresh button to view the current state.
To view successfully generated reports from the history window, click the Instance Time link in the left column of the table to open the associated report, shown in Figure 7-25.
Figure 7-25 How Has Clients' Use of Server Storage Changed Over Time?
Next we show some more examples of reports provided by the IBM Tivoli Storage Manager 5.2 Warehouse Enablement Pack.
Figure 7-26 displays a sample How Has Clients' Use of Server Storage Changed Over Time? report for a single server, AMORRIS, and shows how to drill down to a client.
Figure 7-26 How Has Clients' Use of Server Storage Changed Over Time?
Figure 7-27 displays a sample report of all servers on How Has Clients' Use of Server Storage Changed by Platform?
Figure 7-27 How Has Clients' Use of Server Storage Changed by Platform?
Figure 7-28 displays a sample report of all servers on How Has My Server Storage Space Utilization Changed Over Time?
Figure 7-28 How Has My Server Storage Space Utilization Changed Over Time?
Figure 7-29 displays a sample report of all servers on Which Clients Are Using the Most Server Storage?
Figure 7-29 Which Clients Are Using the Most Server Storage?
Part 3
Appendixes
Appendix A. Creating databases
After installing the RDBMS software, the next step is to create an instance and/or database to hold the data structures. The concept of a database varies among RDBMSs. For example, Oracle uses the term DATABASE to refer to the logical grouping of all objects storing the data needed by an application, as well as its internal catalog, and INSTANCE to refer to the memory, processes, and software pieces necessary to manage and access this data. DB2 uses the same naming convention, while Informix, MSSQL, and Sybase ASE use a slightly different one: they all use the term database to refer to the logical grouping of objects that an application can access. This database is separate from its catalog, which is a database itself. In each of these RDBMSs, the grouping of architectural objects, such as memory, disk, and processes, is called a server.
In general, an instance/server is configured during the installation process, leaving you the tasks of creating the database that holds the data and setting up the protocols that provide access to the instance/server. Without the graphical tools, the process to create a database is much the same in each RDBMS:
1. Connect to the instance.
2. Issue the CREATE DATABASE command, providing the location of the datafiles, the database size, and so on.
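As an example of step 2, a DB2 CREATE DATABASE command for the TIVDW database discussed below might look like the following sketch. The paths and sizes here are illustrative only, not the book's original values:

```sql
db2 CREATE DATABASE TIVDW
      ON /db2/tivdw
      USING CODESET UTF-8 TERRITORY US
      COLLATE USING SYSTEM
      CATALOG TABLESPACE MANAGED BY SYSTEM
        USING ('/db2/tivdw/catalog')
      USER TABLESPACE MANAGED BY DATABASE
        USING (FILE '/db2/tivdw/data/usr1' 25000)
        EXTENTSIZE 16 PREFETCHSIZE 16
      TEMPORARY TABLESPACE MANAGED BY SYSTEM
        USING ('/db2/tivdw/temp')
```

Note how the CATALOG, USER, and TEMPORARY tablespaces can all be initialized directly inside the command, a point the comparison with Oracle returns to below.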
The foregoing command shows many more parameters and attributes than the minimum required to create a database in DB2; this is for illustration purposes only. The command creates our TIVDW database. After it completes, you can start to create the needed tables and other objects. How could we create the same database on a different RDBMS? Let's start with Oracle:
CREATE DATABASE tivdw
LOGFILE
  GROUP 1 ('/oracle/tivdw/logs/log1.log',
           '/oracle/tivdw/logsmirror/log1.log') SIZE 50M,
  GROUP 2 ('/oracle/tivdw/logs/log2.log',
           '/oracle/tivdw/logsmirror/log2.log') SIZE 50M
MAXLOGFILES 50
MAXLOGHISTORY 100
MAXDATAFILES 100
MAXINSTANCES 2
ARCHIVELOG
CHARACTER SET UTF8
NATIONAL CHARACTER SET AL16UTF16
DATAFILE
  '/oracle/tivdw/data/df1.dbf' SIZE 50M AUTOEXTEND ON,
  '/oracle/tivdw/data/df2.dbf' SIZE 50M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE temp_ts
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32K
UNDO TABLESPACE undo_ts;
Notice the following differences between DB2 and Oracle:
1. DB2 does not specify a LOG file destination, because its default location is SQLOGDIR under the path you specified in the ON clause of CREATE DATABASE <dbname>.
2. You can initialize the CATALOG, TEMPORARY, and USER tablespaces for DB2 by specifying their definitions inside the CREATE DATABASE command.
3. There are two kinds of tablespaces in DB2, just as in Oracle: System Managed Space and Database Managed Space. See the following sections for a brief description of the differences. For PREFETCHSIZE and EXTENTSIZE, refer to the section on creating tablespaces.
To create a database in Sybase, you must connect to the server using the isql program and perform the following tasks:
1. Initialize the disk devices to be used by your database. This is done by mapping a disk or file to a logical device in the Sybase subsystem with the DISK INIT command:
DISK INIT
  NAME = "tvdw_data",
  PHYSNAME = "/dev/tvdw_data",
  VDEVNO = 2,
  SIZE = 25600

DISK INIT
  NAME = "tvdw_log",
  PHYSNAME = "/dev/tvdw_log",
  VDEVNO = 3,
  SIZE = 25600
These commands initialize a device called tvdw_data, with 50 MB (25600 2 KB pages) to be used for data storage by our Tivoli database, and a device named tvdw_log of the same size for the transaction log. Each device must use a unique VDEVNO. Information about these devices is inserted into the sysdevices table in the master database.
2. After initializing the devices, issue the CREATE DATABASE command to create the database for use by Tivoli Data Warehouse, as in the following example:
CREATE DATABASE tivdwdb ON tvdw_data = 50 LOG ON tvdw_log = 50
Comparing the Sybase syntax to the DB2 syntax, the contrast is striking: almost all of the options in the DB2 syntax are absent from the Sybase syntax, except for the database name. Also, in DB2 you do not need to pre-initialize database devices. Sybase collation, for example, is server-wide, while DB2 collation is specified at database creation time.
Managing space
DB2 organizes datafiles into objects called tablespaces, just as Oracle does, but what Oracle calls datafiles are called containers in DB2. Sybase and MSSQL do not have a tablespace concept; Informix uses DBSpaces for organizing physical objects.
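The DMS tablespace described in the next paragraph, with a single 1 GB container, can be sketched with a DB2 command along these lines (the container path is illustrative):

```sql
db2 CREATE TABLESPACE TIVTS
      MANAGED BY DATABASE
      USING (FILE '/DB2/tivdwda1/tivdw/d1dms' 1000M)
      EXTENTSIZE 16
      PREFETCHSIZE 16
```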
We are creating a tablespace with one container of 1 GB (specified as 1000M). One option we left out is PAGESIZE. Use this parameter when you want a non-default page size, much as Oracle 9i does with non-default block size caches. To use a non-default PAGESIZE in DB2, you must first create a BUFFERPOOL configured with the page size you want, as in the following sample:
db2 CREATE BUFFERPOOL TIVBP SIZE 1000 PAGESIZE 16K
Then modify our CREATE TABLESPACE example to use this buffer pool and the custom page size, in the following way:
db2 CREATE TABLESPACE TIVTS
      PAGESIZE 16K
      MANAGED BY DATABASE
      USING (FILE '/DB2/tivdwda1/tivdw/d2dms' 1G)
      EXTENTSIZE 16
      PREFETCHSIZE 16
      BUFFERPOOL TIVBP
When creating a tablespace in DB2, consider the following:
1. Specify the space management type for the tablespace, except when you want SYSTEM MANAGED SPACE (SMS), the default. We use DATABASE MANAGED SPACE (DMS) because we want full control over space allocation. The key difference between the two types is that SMS space allocation is managed by the operating system, while DMS space allocation is managed by the database manager. When deciding which kind of tablespace to use, keep in mind that an SMS tablespace cannot have containers added to it after it is created, so you need to pay attention to OS limitations and allow enough disk space for the tablespace to grow automatically. For example, if your OS has a limit of 2 GB per file and you need a tablespace of 128 GB, you must create the tablespace with 64 containers. This applies both to the initial CREATE DATABASE command, when you specify one of the tablespace creation clauses, and to the CREATE TABLESPACE command.
2. EXTENTSIZE sets the number of pages DB2 writes to one container before skipping to the next. This parameter is important for performance because DB2 stripes data across all containers in the tablespace in a round-robin manner.
3. PREFETCHSIZE specifies the number of pages that will be read in read-ahead operations. You can use this parameter to reduce I/O in queries.
4. To be able to recover dropped tables, use the DROPPED TABLE RECOVERY option of the CREATE TABLESPACE or ALTER TABLESPACE clauses.
After creating the tablespace, you can use ALTER TABLESPACE to modify tablespace options. You can change every option except PAGESIZE and EXTENTSIZE, so think carefully about those values. Use ALTER TABLESPACE to add, resize, or extend a DMS tablespace's containers. For example, the following command adds a container of 1 GB:
db2 ALTER TABLESPACE TIVTS ADD(FILE '/DB2/tivdwda1/tivdw/d3dms' 1G)
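Resizing an existing container follows the same pattern. In DB2 releases that support the RESIZE clause (DB2 UDB V8 and later), a sketch would be:

```sql
db2 ALTER TABLESPACE TIVTS RESIZE (FILE '/DB2/tivdwda1/tivdw/d3dms' 2G)
```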
To add a new datafile to an Oracle tablespace, we use the ALTER TABLESPACE command:
ALTER TABLESPACE TIVTS ADD DATAFILE '/oracle/tivdw/data/df2.dbf' SIZE 200M AUTOEXTEND ON NEXT 1M MAXSIZE 1G;
In Sybase, after initializing a new device (tvdw_data1 in this example), you use the ALTER DATABASE command to increase the database size using that device:
ALTER DATABASE tivdwdb ON tvdw_data1 = 50
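In DB2, a table can be created from a query without being populated by using the DEFINITION ONLY clause, along these lines (table and column names are illustrative):

```sql
db2 CREATE TABLE tbSamp AS
      (SELECT * FROM DEPTO)
      DEFINITION ONLY
```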
This creates the table but does not populate it. It is similar to Oracle's CREATE TABLE AS syntax or the SELECT INTO clause in Sybase. DB2 also implements a simple copy syntax that can be used to directly copy a table's structure:
CREATE TABLE tbLikeSamp LIKE DEPTO
This creates a table called tbLikeSamp with the same structure as the DEPTO table. As with the Oracle syntax, the created table will not have referential attributes, constraints, or indexes. Stored procedures are created with the CREATE PROCEDURE command, like this:
CREATE PROCEDURE PROC_NAME (IN prm1 INT, OUT prm2 DOUBLE)
RESULT SETS 1
LANGUAGE SQL
BEGIN
  DECLARE EXIT HANDLER FOR NOT FOUND
    <handler action here>;
  <SQL commands here>
END
DB2 procedures can also be written in external languages such as C, Java, COBOL, or any language that can produce an ActiveX DLL (on Windows). Note that you can use an exception handler in the procedure, a feature that is not available in Sybase. The Oracle syntax to create a procedure is again similar:
CREATE PROCEDURE PROC_NAME (prm1 IN INT, prm2 OUT DOUBLE PRECISION) AS
BEGIN
  <SQL commands here>
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    <handler action here>;
END;
Note that Oracle also allows a stored procedure to be written in Java or C, giving some flexibility. In Sybase, you issue this command:
CREATE PROC PROC_NAME
  @prm1 INT = 0,
  @prm2 FLOAT OUTPUT
AS
  <SQL commands here>
Unfortunately, Sybase has no error handling in procedures and no flexibility in choosing the language in which to write them. If you need more power than Transact-SQL offers, you have to use an extended procedure, which is generally written in a language like C or Pascal using the Sybase Open Server API. (This can be really painful!)
Appendix B. Report listing
AMY : IBM Tivoli Monitoring for Operating Systems, Version 5.1.1, Warehouse Enablement Pack, Version 1.1.0.3 Name AVA Frequency Type Code
Health of a backup server
Usage of a Domain Controller
Memory Utilization
Paging File Utilization
GWA : IBM Tivoli Monitoring for Web Infrastructure, Version 5.1.0: Apache HTTP Server, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Apache Health Check Error Report         gwa   Hourly   HealthCheck
Apache Health Check Performance Report   gwa   Hourly   HealthCheck
INV : Tivoli Configuration Manager, Version 4.2.0, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Distribution Failures related to package size        inv   Daily   WorstCase
Distribution Status by distribution id               inv   Daily   Summary
Distribution results by file pack and operation      inv   Daily   Summary
Distribution results by file pack, host, operation   inv   Daily   Summary
Success rate of distributions by distribution id     inv   Daily   WorstCase
Success rate of distributions by file pack name      inv   Daily   WorstCase
Success Rate of distributions by time                inv   Daily   HealthCheck
Elapsed distribution time to target                  inv   Daily   WorstCase
Receiving distribution time by target                inv   Daily   WorstCase
Distribution transfer rate by file package in kb/s   inv   Daily   WorstCase
File pack distributions that have the most failure   inv   Daily   WorstCase
Distributions that have the most failures            inv   Daily   WorstCase
OS that have the most distribution failures          inv   Daily   WorstCase
Networks that have the most distribution failures    inv   Daily   WorstCase
Operations in verify failure state               inv   Daily   WorstCase
Hosts that have the most distribution failures   inv   Daily   WorstCase
CTD : IBM Tivoli Monitoring for Databases, Version 5.1.0: IBM DB2, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
CTD Hourly Percent Connections Used
CTD Hourly Deadlocks Delta Health Report
CTD Hourly Percent Catalog Cache Hits
CTD Hourly Minimum Buffer Pool Hit Ratio
CTD Hourly Maximum Percentage Used of Primary Log
ABA : IBM Tivoli Monitoring for Messaging and Collaboration, Version 5.1.0: Lotus Domino, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Bottom Servers By Availability    ABA   Daily   Summary
Calendar Entry                    ABA   Daily   WorstCase
Database Access                   ABA   Daily   WorstCase
Domino Server Usage: Sessions     ABA   Daily   WorstCase
Dropped Mail Average Statistics   ABA   Daily   Summary
Mail Dead                         ABA   Daily   WorstCase
Mail Waiting                      ABA   Daily   WorstCase
NAB Search                        ABA   Daily   WorstCase
Net Echo                          ABA   Daily   WorstCase
Replicate Local                   ABA   Daily   WorstCase
Round Trip Mail                   ABA   Daily   WorstCase
Web Access                        ABA   Daily   WorstCase
GWI : IBM Tivoli Monitoring for Web Infrastructure, Version 5.1.0: Internet Information Server, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
gwi   Hourly   HealthCheck
CTR : IBM Tivoli Monitoring for Databases, Version 5.1.0: Informix, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Informix Health Check - 7 Days
Informix Thread Activity - 7 Days
Informix Disk Utilization - 7 Days
Informix Logical Log - 7 Days
GWP : IBM Tivoli Monitoring for Web Infrastructure, Version 5.1.0: iPlanet Web Server, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
gwp   Hourly   HealthCheck
AMW : IBM Tivoli Monitoring, Version 5.1.0, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
AMY : IBM Tivoli Monitoring for Operating Systems, Version 5.1.1, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
HMI : IBM Tivoli Monitoring for Business Integration 5.1.0 : WebSphere MQ Integrator, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
HMI Metrics for WebSphere MQI Monitor Nodes
HMI Status Down for WebSphere MQI Brokers
HMI Status for WebSphere MQI Brokers
HMI Status for WebSphere MQI Config Managers
HMI Status for WebSphere MQI User Name Servers
CTQ : IBM Tivoli Monitoring for Business Integration, Version 5.1.0 : WebSphere MQ, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
CTQ Maximum Down Status for Queue Managers     Daily
CTQ Maximum Outstanding Messages for Queues    Daily
CTQ Maximum Running Status for Channels        Daily
CTQ Availability Status for Queue Managers     Daily
CTQ Message and Handle Summary for Queues      Daily
CTQ Availability Status for Channels           Daily
CTW : IBM Tivoli Monitoring for Databases, Version 5.1.1: Microsoft SQL Server, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Daily Filegroup Space Usage             Health Check
Daily Database Space Used (Filegroup)   Summary
Daily Server Availability               Extreme Case
Daily Replication Agent Latency         Health Check
Daily Server CPU Usage                  Extreme Case
Daily Server Error Message Count        Summary
Daily Database Usage                    Health Check
ANM : IBM Tivoli Netview, Version 7.1.3, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Daily Status Summary By SmartSet                     anm   Daily    Summary
Nodes With Longest Outage Time In Routers SmartSet   anm   Hourly   WorstCase
Nodes With Most Status Changes In Routers SmartSet   anm   Daily    WorstCase
Nodes With The Longest Outage Times                  anm   Hourly   WorstCase
Nodes With The Most Daily Status Changes             anm   Daily    WorstCase
Summary Of Daily Network Status                      anm   Daily    HealthCheck
Summary Of Total Outage Time By SmartSet             anm   Daily    Summary
Summary Of Total Status Changes By SmartSet          anm   Daily    Summary
Total Daily Status Changes In Monitored Network      anm   Daily    HealthCheck
Total Daily Status Changes In Routers SmartSet       anm   Daily    HealthCheck
CTO : IBM Tivoli Monitoring for Databases, Version 5.1.0: Oracle, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
BufferCache Hit Ratio (Daily) - Extreme Case
Deadlocks (Daily) - Health Check
Dispatcher contention (Daily) - Summary
Oracle RDBMS Availability (Daily) - Extreme Case
Tablespace Usage (Daily) - Extreme Case
HRM : IBM Tivoli Risk Manager, Version 4.1.0, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Events by Destination Host - Last 30 Days
Service Compromise Events - Last 30 Days
Events by Destination Subnetwork - Last 30 Days
Events by Destination and Category - Last 30 Days
Access/Authentication Events - Last 30 Days
Infection Events - Last 30 Days
hrm   Daily   HealthCheck
ABH : IBM Tivoli Monitoring for Applications, Version 5.1.0: mySAP.com, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
mySAP server CPU ranking by rolling 30 days
mySAP application area by rolling 30 days
mySAP application area dialogs by rolling 30 days
mySAP SLA conformance by rolling 30 days
mySAP application server logins by rolling 30 days
mySAP program count by rolling 30 days
mySAP dialog count by rolling 30 days
mySAP task type CPU ranking by rolling 30 days
mySAP application server uptime by rolling 30 days
GMS : IBM Tivoli Monitoring for Applications, Version 5.1.0 : Siebel, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Daily CPU Usage for Siebel Servers                   gms   Daily     WorstCase
Weekly CPU Usage for Siebel Servers                  gms   Weekly    WorstCase
Hourly CPU Usage for Siebel Servers                  gms   Hourly    WorstCase
Monthly CPU Usage for Siebel Servers                 gms   Monthly   WorstCase
Daily Memory Usage for Siebel Servers                gms   Daily     WorstCase
Daily Memory Usage Health Check for Siebel Servers   gms   Daily     HealthCheck
Daily CPU Usage Health Check for Siebel Servers      gms   Daily     HealthCheck
Daily Memory Usage Health Check for Siebel Tasks     gms   Daily     HealthCheck
Daily CPU Usage Health Check for Siebel Tasks        gms   Daily     HealthCheck
Summary of Daily Status for Connection Broker        gms   Daily     Summary
Summary of Daily Status for Siebel Gateways          gms   Daily     Summary
Summary of Daily Status for Siebel Servers           gms   Daily     Summary
Summary of Daily Memory Usage for Siebel Tasks       gms   Daily     Summary
Summary of Daily Memory Usage for Siebel Servers     gms   Daily     Summary
Summary of Daily CPU Usage for Siebel Tasks          gms   Daily     Summary
gms   Daily   Summary
COD : Tivoli License Manager v1.1.0, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Products Installed by Division Summary Report
Products Installed by Division Extreme Report
Agents 1.1 Installed by Division Summary Report
Agents 1.1 Installed by Division Extreme Report
Agents 1.0 Installed by Division Summary Report
Agents 1.0 Installed by Division Extreme Report
Agents 1.1.1 Installed by Division Summary Report
Agents 1.1.1 Installed by Division Extreme Report
AWS : IBM Tivoli Workload Scheduler, Version 8.2.0, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
Jobs with the highest average duration time
Workstation with highest CPU utilization
Run time statistics for all jobs
Run states statistics for all jobs
Jobs with highest number of unsuccessful runs
Unsuccessful runs for a workstation
IZY : IBM Tivoli Monitoring for Web Infrastructure, Version 5.1.0: WebSphere Application Server, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
IZY EJB Performance Health
IZY EJB Resource Model Summary
IZY EJBs with the Most Hits
IZY JVM Runtime Resource Model Summary
IZY Servlet Performance Health
IZY Servlet Session Resource Model Summary
IZY Servlets with the Highest Response Time
IZY Servlets with the Most Hits
IZY Transaction Manager Resource Model Summary
IZY Web Application Resource Model Summary
GWL : IBM Tivoli Monitoring for Web Infrastructure: WebLogic, Version 5.1.0, Warehouse Enablement Pack, Version 1.1.0 Name AVA Frequency Type Code
WebLogic Server Availability (Daily) - EC
WebLogic JDBC Connection Pool Statistics (Daily)
WebLogic EJB Transactions (Daily) - HC
WebLogic Servlet Performance (Daily) - EC
WebLogic JMS Load (Daily) - EC
Measurement sources
App Name Msmt Source Code Name Parent Code
390 CICS    390 CICS    D04   Tivoli Decision Support for OS/390 (CICS component)    DRL
390 CICS    390 CICS    DRL   Tivoli Decision Support for OS/390                     Tivoli
390 DB2     390 DB2     D05   Tivoli Decision Support for OS/390 (DB2 component)     DRL
390 DB2     390 DB2     DRL   Tivoli Decision Support for OS/390                     Tivoli
390 DFSMS   390 DFSMS   D08   Tivoli Decision Support for OS/390 (DFSMS component)   DRL
390 DFSMS   390 DFSMS   DRL   Tivoli Decision Support for OS/390                     Tivoli
390 IMS     390 IMS     D03   Tivoli Decision Support for OS/390 (IMS component)     DRL
390 IMS     390 IMS     DRL   Tivoli Decision Support for OS/390                     Tivoli
390 MQS     390 MQS     D06   Tivoli Decision Support for OS/390 (MQS component)     DRL
390 MQS     390 MQS     DRL   Tivoli Decision Support for OS/390                     Tivoli
390 MVS     390 MVS     D01   Tivoli Decision Support for OS/390 (MVS component)     DRL
390 MVS     390 MVS     DRL   Tivoli Decision Support for OS/390                     Tivoli
390 NPM     390 NPM     D10   Tivoli Decision Support for OS/390 (NPM component)     DRL
390 NPM     390 NPM     DRL   Tivoli Decision Support for OS/390                     Tivoli
390 NPMIP 390 NPMIP 390 OPC 390 OPC 390 RACF 390 RACF 390 RMF 390 RMF 390 WAS 390 WAS Apache PAC Apache PAC BMC Patrol CM for ATMs CM for ATMs DB2 PAC DB2 PAC DM 3.7 Domino PAC IIS PAC IIS PAC ITM 4.1 ITM 5.1.0 ITM for OS 5.1.1 Informix PAC Informix PAC Inventory-Software Distribution Inventory-Software Distribution License Manager MQ PAC
D11 DRL D07 DRL D09 DRL D02 DRL D12 DRL AMX GWA BP6 CMA Tivoli AMX CTD DMN ABA AMX GWI AMW AMW AMY AMX CTR DIS INV COD CTQ
Tivoli Decision Support for OS/390 (NPMIP component) Tivoli Decision Support for OS/390 Tivoli Decision Support for OS/390 (OPC component) Tivoli Decision Support for OS/390 Tivoli Decision Support for OS/390 (RACF component) Tivoli Decision Support for OS/390 Tivoli Decision Support for OS/390(RMF component) Tivoli Decision Support for OS/390 Tivoli Decision Support for OS/390 IBM Tivoli Monitoring IBM Tivoli Monitoring for Web Infrastructure, Version 5.1.0: Apache HTTP Server BMC PATROL IBM Tivoli Configuration Manager for Automated Teller Machines Tivoli Application IBM Tivoli Monitoring IBM Tivoli Monitoring for Databases: DB2 Distributed Monitoring Classic Edition
DRL Tivoli DRL Tivoli DRL Tivoli DRL Tivoli Tivoli Tivoli AMX null Tivoli null Tivoli AMX Tivoli
IBM Tivoli Messaging and Collaboration, Version 5.1.0: Lotus AMX Domino IBM Tivoli Monitoring IBM Tivoli Monitoring for Web Infrastructure, Version 5.1.0: Internet Information Server Distributed Monitoring Advanced Edition IBM - Tivoli Monitoring 5.1 IBM Tivoli Monitoring for Operating Systems IBM Tivoli Monitoring IBM Tivoli Monitoring for Databases, Version 5.1.0: Informix IBM Tivoli Software Distribution IBM Tivoli Inventory Tivoli License Manager 1.1 Tivoli AMX Tivoli Tivoli AMX Tivoli AMX Tivoli Tivoli Tivoli
IBM Tivoli Monitoring for Business Integration, Version 5.1.0 AMX : WebSphere MQ
MQI PAC MQWorkflow PAC MS SQL Netview Netview Oracle PAC Risk Manager Siebel PAC Siebel PAC Siebel PAC TAPM 2.1 TBSM 1.5 TEC 3.7.1 TEC 3.8 TEDW 1.2 TEDW 1.2 TMTP 5.1 TSRM TSRM TWSA TWSM 1.7 Talking Blocks Talking Blocks Talking Blocks Talking Blocks WS Interchange PAC WebLogic PAC WebLogic PAC WebSphere PAC iPlanet PAC
HMI BIW CTW ANM MODEL1 CTO HRM AMX GMS Tivoli APF GTM ECO ECO Tivoli MODEL1 BWM BTM Tivoli AWT BWM TS2 SNMP SHARED SDESK1 BIX AMX GWL IZY AMX
IBM Tivoli Monitoring for Business Integration 5.1.0 : WebSphere MQ Integrator IBM Tivoli Monitoring for Business Integration - MQSeries Workflow 5.1.0
AMX AMX
IBM Tivoli Monitoring for Databases, Version 5.1.0: Microsoft AMX SQL Server IBM Tivoli Netview Tivoli Common Data Model v 1 IBM Tivoli Monitoring for Databases, Version 5.1.0: Oracle IBM Tivoli Risk Manager Tivoli Monitoring IBM Tivoli Monitoring for Applications 5.1.0 : Siebel eBusiness Applications Tivoli Application Tivoli Application Performance Management Tivoli Business System Manager Tivoli Enterprise Console Tivoli Enterprise Console Tivoli Application Tivoli Common Data Model V1 IBM Tivoli Monitoring for Transaction Performance - Web Transaction Performance 5.1 IBM Tivoli Storage Resource Manager Tivoli Application Tivoli Web Site Analyzer 4.2 Tivoli Web Services Manager 1.7 Talking Blocks Simple Network Management Protocol Shared Service Desk IBM Tivoli Monitoring for Business Integration - WebSphere InterChange Server 5.2.0 IBM Tivoli Monitoring IBM Tivoli Monitoring for Web Infrastructure : WebLogic IBM Tivoli Monitoring for Web Infrastructure, Version 5.1.0: WebSphere Application Server IBM Tivoli Monitoring Tivoli null AMX Tivoli Tivoli AMX null Tivoli Tivoli Tivoli Tivoli null null Tivoli Tivoli null Tivoli Tivoli null null null null AMX Tivoli AMX AMX Tivoli
IBM Tivoli Monitoring for Web Infrastructure, Version 5.1.0: iPlanet Web Server IBM Tivoli Monitoring for Applications, Version 5.1.0: mySAP.com Tivoli application
Appendix C.
Here we list some of the characteristics of the properties defined in the twh_install_props.cfg file:
- Statements take the form PROPERTY=VALUE.
- The property name must be uppercase.
- Each property in the file must have a value.
- The properties file is used by the installer. The installer requests the location of this file when installing the WEP.
- Ensure that a carriage return terminates each line, including the last line in the file.
- A property cannot contain white space. That is, do not put white space on the left side of the equal sign (=).
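As a concrete illustration of these rules, a minimal twh_install_props.cfg fragment might look like the following. The property names shown are those documented in Table C-1; the values are illustrative only:

```
CDW_DB_TYPE=DB2UDB
MART_DB_TYPE=DB2UDB
NLS_LANG_LIST=en
WEP_ROLE=E1E2
TWH_CORE_PREREQ_VERSION=1.2
```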
Table C-1 shows the attributes and parameters of the warehouse enablement pack properties file: twh_install_props.cfg.
Table C-1 WEP installation properties
Property   Description
AFTER_SCRIPT   Optional Perl script that runs at the end of the warehouse pack installation. The script name must be misc/<ava_code>_after.pl, and must be lowercase.
Specifies if the WEP was coded to support multiple data sources.
User display name for the WEP.
User displayed version.
The internal version of the warehouse pack.
Three-character identifier unique to the warehouse pack.
CDW_DB_TYPE   Vendor support type for CDW. Allowed values are:
- DB2UDB (for IBM DB2 UDB Enterprise Edition)
- DB2390 (for DB2 UDB for z/OS and OS/390)
- DB2UDB,DB2390 (for both)
Must be in uppercase.
DWC_INIT_STEP_NAME   One-time initialization step that must be run manually after installation, and before the first job is scheduled.
MART_DB_TYPE   Data mart vendor types supported by this warehouse pack. Same options as CDW_DB_TYPE.
NLS_LANG_LIST   Languages supported by the WEP. Supported values are pt_BR, en, fr, de, it, ja, ko, zh_CN, es, and zh_TW, or TDW_NO_BUNDLES_REQUIRED if there is no internationalization.
PRE_ETL_SCRIPT   Optional Perl script that runs before the TAG file is loaded into the DB2 Data Warehouse Center.
PRE_SCRIPT   Optional Perl script that runs after the environment has been validated and the files have been copied to the $TWH_TOPDIR directory.
Whether reports are included in the WEP: TRUE or FALSE.
The first step in the central data warehouse ETL process.
The first step in a data mart ETL process.
The version number of the source application required by this warehouse pack. For documentation purposes only.
TWH_CORE_PREREQ_VERSION   The required version of TDW for this WEP.
WEP_ROLE   The type of ETL processes implemented by the WEP: ETL1 (only ETL1 processes), ETL2 (only ETL2 processes), or E1E2 (both ETL1 and ETL2).
WPACK_PREREQ_AVA_NAME   Minimum prerequisite WEP names required for this WEP to run.
WPACK_PREREQ_VERSION_n   Minimum prerequisite WEP version required for this WEP to run.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 367. Note that some of the documents referenced here may be available in softcopy only.
Introduction to Tivoli Enterprise Data Warehouse, SG24-6607
IBM Tivoli Monitoring Version 5.1: Advanced Resource Monitoring, SG24-5519
Tivoli Data Warehouse Report Interfacing, SG24-6084
Other publications
These publications are also relevant as further information sources:
IBM DB2 Universal Database Command Reference Version 7, SC09-2951
IBM Tivoli Monitoring User's Guide Version 5.1.1, SH19-4569
IBM Tivoli Monitoring - Resource Model Reference, SH19-4570
Tivoli Enterprise Data Warehouse Release Notes, GI11-0857
IBM DB2 Universal Database for Windows Quick Beginnings, GC09-2971
Crystal Enterprise 9 Installation Guide
Crystal Enterprise 9 Administrator's Guide
Crystal Enterprise 9 Getting Started Guide
Crystal Enterprise 9 ePortfolio User's Guide
Online resources
These Web sites and URLs are also relevant as further information sources: IBM Software Support Web site
http://www.ibm.com/software/sysmgmt/products/support
Index of /html/Something_about_XSLM/About_XSLM
http://www.xslm.org/html/Something_about_XSLM/About_XSLM/Original_Requirements
Index
Symbols
.config 236
C
Index

Numerics
1.1 WEPs 46
390 systems 45

A
Activating data collection 237
adding CDWs 116
adding MARTs 119
Addition of warehouse packs 32
admin server user 81
adverse effects 61
agent 89
agent configuration 18
agent discovery 18
agent site 126
Agent Sites 22
agents installation 126
Aggregation time line 231
analytical processing 24
Apache 35
APS 28, 33
Architectural choices 62
Automated Process Schedule 28
Automated Process Scheduler 33

B
backing up the TWH_CDW database 242, 304
backing up the TWH_MART database 242, 304
backing up the TWH_MD database 242, 305
batch report 6
BI 5
Buffer Pool 54, 115
buffer pool 146
Business Intelligence 5
business intelligence 4, 64
Business Intelligence tools 70

C
CENTR_LOOKUP 61
Central data warehouse naming convention 39
central data warehouse 4, 36, 38
Centralized logging 19
client
   administrative 9
Client tier 12
CMC 59
Cognos Powerplay 14
collecting data 232
compete effectively 6
components requirements 36
compromising security 58
configuring Crystal Enterprise 87
configuring the control database 99
control center server 21
control database 38
control database configuration 99, 113
control server 36, 38
create the itm_db database 235
creating agent sites 126, 132
creating CDWs 116
creating MARTs 119
Crystal Enterprise
   architecture 11
   Limited Edition 10
   Professional Edition 10
   user authority 59
Crystal Enterprise 9 33
Crystal Enterprise configuration 87
Crystal Enterprise installation 86
Crystal Enterprise port numbers 56
Crystal Enterprise Professional version 9 for Tivoli 11
Crystal Enterprise requirements 33
Crystal Enterprise Server 22
Crystal Enterprise server 36
Crystal Enterprise Version 9 Special Edition 11
Crystal Management Console 59
Crystal object rights 59
Crystal object security model 59
Crystal Publishing Wizard 105

D
data cleansing 8
Data collection 67
data collection process 236
data manipulation 67–68
data mart 5, 21, 36
Data mart database naming convention 41
data mining 7
data model 5
data sources 32
Data tier 13
data warehouse 4
   accessing 8
   building 8
   designing 8
   governing 8
   integrated 5
   maintaining 8
   subject-oriented 5
   time-variant 5
Data Warehouse Center 89
database heap 148
Database Managed Space 150
datacollector delay 240
datacollector prefix 236
datacollector.db_purge_interval 237
datacollector.db_purge_time 237
datacollector.delay 237
datacollector.max_retry_time 237
datacollector.rim_name 237
datacollector.sleep_time 237
datamart 8
DB2 78
DB2 admin 81
DB2 data flow 51
DB2 fenced 80
DB2 Fix Pack 32
DB2 Fix Pack 8 installation 82, 84
DB2 instance 79
DB2 JDBC Applet Server 128
DB2 logs 147
DB2 on z/OS 32
DB2 performance 146
DB2 private protocols 51
DB2 Server installation 82
DB2 server installation 76
DB2 setup 78
DB2 Warehouse Manager
   administrative client 9
   agent 9
   components 8
   metadata 9
DB2 Warehouse Manager installation 128
DB2_ENABLE_LDAP settings 77
DB2-DB2CTLSV 128
DB2SYSTEM 81
demographic data 5
Development status 274
DHTML 58
distributed deployment install 103
Distributed installation 29
DMS 150
DMS tablespaces 150
DNS 74
Domain Name Server 74
Domino 35
driving forces 6

E
effect on network 61
endpoint 239
ETL development 65
ETL grouping 64
ETLs 63

F
fenced procedures 57
fenced user 80
fetched data 146
FFDC 19
firewall considerations 58
First Failure Data Capture 19
Fix Pack for DB2 32
FMID JDB771D 32, 115
fp8_wr21314 84
Future data growth 32

G
gateway 233, 239
generic ETL1 229

I
I/O servers 148
IBM Console 10
IBM DB2 admin 81
IBM DB2 fenced 80
IBM DB2 instance 79
IBM DB2 subsystem 41
IBM Tivoli Monitoring 37
IBM Tivoli Monitoring 5.1.1 232
IBM Tivoli Storage Manager 37
IMS 14
increase revenues 6
information access 6
Information Catalog Manager 9
information delivery 7
Information Management System 14
information tokens 77
Informix 33
Install methods
   Distributed installation 28, 43
   Quick start installation 28, 42
Installation
   IBM Tivoli Monitoring 5.1.1 232
   single machine 42
installation enhancements 18
installation preparation 72
installation process overview 73
installation steps 232
installing CDWs on z/OS 116
installing DB2 Warehouse Manager 128
installing MARTs on z/OS 119
installing remote agent sites 126
installing TDW 93, 103
installing WEPs 142
instance 79
instance owner 57, 235
INSTHOME 81
integrated 5
integrated warehouse 5
Intelligence tier 12
intranet 7
iPlanet 35
iPlanet Enterprise Server 87
irs.conf file 74
ITM 37
ITM Middle Layer Repository 228
ITM V 5.1.1 Generic Warehouse Enablement Pack 241
ITSM 37

J
Java heap 147
JavaScript 39, 41
JDB771D 32, 115
JDBC code level 81

L
large amounts of data 28, 42
LDAP 59, 77
LDAP authentication 59
level of DB2 on z/OS 32
level token 57
local warehouse agent 50
location name 41
logbufsz 147–148
logfilsiz 147
logical design 36
logprimary 147
logsecond 147
LogXML Viewer 19

M
maintenance window 63
Managed Resource 237
manager 9
manipulate data 68
mapping data sources 66
MDAC 31
metadata 9
metricsdata 240
metricsdata table 240
Microsoft Data Engine 33, 86
Microsoft SQL Server 33
MON_HEAP_SZ 148
monitoring application metadata 229
MSDE 86
MSDE database 33
MSDE user account 88
multicenter support 59–60
Multicustomer support 60
multicustomer support 59
multiple customers support 60

N
naming convention 39, 41
netsvc.conf file 74
Network Information Service 74
NIS 74
nsswitch.conf file 74
NUM_IOCLEANERS 146
NUM_IOSERVERS 146

O
object rights 59
object security 59
ODBC 32
ODBC connections 101, 123
ODBC data sources 18
OLAP 4, 24
OLTP 4
online analytical processing 4
operational data 4
Operational Data Stores 8
Oracle 33
organization 67
OS/390 45

P
PAE 152
page cleaners 146
PCI bus 152
performance
   DB2 146
Physical Address Extensions 152
physical database 235
physical design 36
point of control 8
policy region 237
port numbers 56
prefetchers 146
Process Modeller window 259
Processing tier 13
Product_Code 61
Production status 274
production window 63
profile 237–238
profile manager 237
Promote the ETL status 272

Q
query 6
quick start install 93
Quick start installation 29

R
RDBMS administrator 235
RDBMS Interface Module. See RIM
Redbooks Web site 367
   Contact us xxiii
reduce costs 6
remote agents creation 132
remote agents installation 126
remote warehouse agent 50
remote warehouse agent site 126
Remote warehouse agents 62
REORG 149
REORGCHK 149
Report Interface requirements 33
Reporting 70
reporting 6
requirements for Crystal Enterprise 33
resolv.conf file 74
resource model 237–238
restart DB2 242, 304
RIM 232
RIM configuration 233
RIM host 232
RIM host machine 235
RIM object 234, 239
RIM object connection 235
RUNSTATS 149

S
sa user ID 88
SAP R/3 8
scalability 8
security considerations 57
security model 59
Service Pack 6 33
shell script 235
Single machine installation 42
single machine installation 42
skills required 67
SMF 14
SMS 150
sortheap 148
source databases 36
specific object types 238
SQL 61
SQL admin account 88
SQL administrator account 88
SQL scripts 235
stand alone installation 29, 42
star schema 42
Storage Group 54, 115
Structured Query Language 61. See SQL
Subject Area 259
subject-oriented 5
subject-oriented warehouse 5
supported Web browsers 35
Sybase 33
System Managed Space 150
System Management Facility 14

T
Tablespace size 54, 115
TCDWn databases 39
TCPIP ports 56
TDS for OS/390 14
TDW architecture 20
TDW installation
   distributed deployment 103
   quick start 93
TDW performance 155
tedw_apps_etl 142
Test the ETLs 267
thin-client 7
timekey_dttm 240
time-variant 5
time-variant warehouse 5
Tivoli Data Warehouse
   advanced configuration 50
   basic components 36
   components requirements 36
   database requirements 32
   logical design 36
   ODBC 32
   physical design 36
   security considerations 57
   supported Web browsers 35
   warehouse agent 50
Tivoli Decision Support 13
Tivoli desktop 232
Tivoli Enterprise 28
Tivoli Enterprise Data Warehouse 28
   deploying 28
   hardware requirements 29
   software requirements 30
   supported Web browsers 33
Tivoli Enterprise Data Warehouse server 226
Tivoli Enterprise Data Warehouse Support, Version 5.1.1
   configuration 232
   Installation 232
Tivoli environment 232
Tivoli gateway 226
Tivoli managed node 232
Tivoli Management Region server 233
Tivoli Presentation Services 10
Tivoli_Admin_Privileges 235
TMARTn databases 41
TMR server 226
Tmw2kProfile 237
token object 57
transformations
   statistical 8
TWG.CUST 61
TWH_CDW 39, 93
twh_configwep 142
twh_create_datasource 102, 124
twh_list_agentsites.bat 138
twh_list_cdws.bat 136
twh_list_cs.bat 135
twh_list_marts.bat 137
TWH_MART 41, 93
TWH_MD 38, 93
twh_update_rptsrv 139
twh_update_rptsvr 139
twh_update_userinfo 140

U
UDF 57
update JDBC level for DB2 81
user authentication for Crystal 59
user rights 84
UTF8 54, 115

V
VCAT 54, 115
verify remote agent install 141

W
warehouse
   management infrastructure 8
warehouse agent 18, 22, 36, 50
   distributed 8
   local 9
   remote 9
warehouse agent site 126
warehouse databases on z/OS 54
Warehouse Enablement Pack (WEP) 18
warehouse enablement pack install 142
warehouse enablement packs. See WEP
warehouse server 9
warehousing 4
wcrtrim 234
wdmcollect 239, 292
wdmconfig 236
wdmdistrib 239
wdmlseng 239
WEP 11, 63, 262
WEP installation 142
WEP Installation Wizard 18
Windows 2000 Advanced Server 33
work area 261
work flow 67
wrimtest 235
wsetrimpw 234

Z
z/OS 36, 45
z/OS support 13
Back cover
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE
IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.