Tivoli Application Dependency Discovery Manager®


Version 7.2

Application Dependency Discovery Manager Administrator's Guide


Note
Before using this information and the product it supports, read the information in “Notices,” on page 127.

This edition applies to version 7, release 2, modification 0 of IBM Tivoli Application Dependency Discovery
Manager (product number 5724-N55) and to all subsequent releases and modifications until otherwise indicated in
new editions.
© Copyright International Business Machines Corporation 2006, 2009.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
About this information
The purpose of this PDF document is to provide the related topics from the IBM®
Tivoli® Application Dependency Discovery Manager information center in a
printable format.

The IBM Tivoli Application Dependency Discovery Manager Troubleshooting Guide and
the troubleshooting topics in the information center include information on the
following items:
v How to identify the source of a software problem
v How to gather diagnostic information, and what information to gather
v Where to get fixes
v Which knowledge bases to search
v How to contact IBM Support

Terms used in this information


The following terms represent important concepts in the Tivoli Application
Dependency Discovery Manager (TADDM):
Domain Database
The database that a Domain Server uses to store topology and
configuration data that is populated using sensors, DLAs or the TADDM
API.
Domain Manager UI
The Web UI for administering a single TADDM Domain Manager Server
including: access control, configuring and running discovery, viewing
topology maps and configuration details, and running reports.
Domain Server
An instance of TADDM (including discovery, analytics, and database).
Enterprise Domain Database
The database that is used to store topology and configuration data that is
populated using synchronization with one or more domain servers.
Enterprise Domain Manager UI
The Web UI for administering multiple TADDM Domain Manager Servers
including: access control, synchronization, and viewing cross-domain
topology maps.
Enterprise Domain Server
One or more domain servers that are linked together into a federated
change management database.
launch in context
The concept of moving seamlessly from one Tivoli product UI to another
Tivoli product UI (either in a different console or in the same console or
portal interface) with single sign-on and with the target UI in position at
the proper point for users to continue with their task.
multitenancy
In TADDM, the use by a service provider or IT vendor of one TADDM
installation to discover multiple customer environments. Also, the service
provider or IT vendor can see the data from all customer environments,
but within each customer environment, only the data that is specific to the
respective customer can be displayed in the user interface or viewed in
reports within that customer environment.
Product Console
The Java™ client UI for TADDM.

Conventions used in this information


The following conventions are used for denoting operating system-dependent
variables and paths and for denoting the COLLATION_HOME directory.

Operating system-dependent variables and paths


This information uses the UNIX® convention for specifying environment variables
and for directory notation.

When using the Windows® command line, replace $variable with %variable% for
environment variables, and replace each forward slash (/) with a backslash (\) in
directory paths.

If you are using the bash shell on a Windows system, you can use the UNIX
conventions.
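
For example, the following two forms refer to the same properties file; the first
uses the UNIX convention that appears in this guide, and the second is the
equivalent form for the Windows command line:

$COLLATION_HOME/etc/collation.properties
%COLLATION_HOME%\etc\collation.properties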

COLLATION_HOME directory
The COLLATION_HOME directory is the dist subdirectory of the directory where
TADDM is installed.

On operating systems such as AIX® or Linux®, the default location for installing
TADDM is the /opt/IBM/cmdb directory. Therefore, in this case, the
$COLLATION_HOME directory is /opt/IBM/cmdb/dist.

On Windows operating systems, the default location for installing TADDM is the
c:\IBM\cmdb directory. Therefore, in this case, the %COLLATION_HOME%
directory is c:\IBM\cmdb\dist.
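
For convenience, you can set COLLATION_HOME as an environment variable that
points to this directory. The following lines are only a sketch that assumes the
default installation locations described above:

On AIX or Linux:  export COLLATION_HOME=/opt/IBM/cmdb/dist
On Windows:       set COLLATION_HOME=c:\IBM\cmdb\dist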

Contents

About this information   iii
  Terms used in this information   iii
  Conventions used in this information   iv
  Operating system-dependent variables and paths   iv
  COLLATION_HOME directory   iv

Chapter 1. Architecture overview   1
  TADDM server   2
  TADDM Enterprise Domain Server   3

Chapter 2. Security overview   5
  Permissions   5
  Enabling data-level security   5
  Access collections   6
  Creating an access collection   6
  Editing an access collection   6
  Deleting an access collection   7
  Roles   7
  Creating a role   8
  Deleting a role   8
  Editing a role   8
  Users   9
  Creating a user   9
  Changing a user   9
  Deleting a user   10
  User groups   10
  Creating a user group   10
  Changing a user group   11
  Deleting a user group   11
  Encryption   12
  FIPS compliance   12
  Resetting security policies   13
  Security for the TADDM Enterprise Domain Server   14
  Configuring for LDAP   15
  Configuring for WebSphere federated repositories   16
  Configuring the TADDM server for WebSphere federated repositories   16
  Configuring for Microsoft Active Directory   18
  Securing the authentication channel   19

Chapter 3. TADDM set up   21
  Prerequisites for starting the Product Console   21
  Deploying the Product Console   22
  Checking server status   23
  Configuring firewalls between the Product Console and the TADDM server   23
  Configuring firewalls between the Enterprise Domain Server and the TADDM server   24
  Starting the TADDM server   25
  Stopping the TADDM server   26
  Testing server status   27
  Windows setup   28
  Configuring Bitvise WinSSHD   29
  Configuring the Cygwin SSH daemon   30
  Windows Management Instrumentation (WMI) dependency   30
  Configuring a federated data source in the TADDM server   31
  Accessing federated data sources using the Domain Manager   32
  Backing up data   33
  Restoring data   33

Chapter 4. Configuring your environment for discovery   35
  Discovery overview   35
  Discovery profiles   36
  Configuring for Level 1 discovery   37
  Configuring for Level 2 discovery   38
  Configuring target computer systems   38
  Creating the service account   40
  Secure Shell protocol overview   41
  Configuring System p and System i   42
  Configuring for Level 3 discovery   43
  Configuring Web and application servers for discovery   43
  Configuring the Microsoft Exchange server   45
  Configuring VMware servers   45
  Database set up for discovery   46

Chapter 5. Creating a Discovery Library store   49
  Discovery Library Adapters   50
  IdML schema   50

Chapter 6. Tuning guidelines   53
  Windows operating system tuning   53
  Network tuning   53
  Database tuning   53
  Both DB2 and Oracle database tuning   53
  DB2 database tuning   54
  Oracle database tuning   57
  Database performance tuning   59
  Discovery parameters tuning   60
  Bulk load parameters tuning   61
  IBM parameters tuning for Java Virtual Machine (JVM)   61
  Sun Java Virtual Machine (JVM) parameters tuning   63
  Java console GUI Java Virtual Machine (JVM) settings tuning   63

Chapter 7. Populating the database   65

Chapter 8. Database maintenance   69
  Deleting old database records   69
  Optimizing DB2 for z/OS performance   71

Chapter 9. Log file overview   73
  Setting the maximum size and maximum number of IBM log files   73
  Suggestions for searching logs   74

Chapter 10. Server properties in the collation.properties file   75
  Properties that you should not change   75
  API port settings   76
  Commands that might require elevated privilege   76
  Database settings   78
  Discovery settings   79
  DNS lookup customization settings   81
  GUI JVM memory settings   82
  GUI port settings   83
  Jini settings   83
  LDAP settings   84
  Logging settings   85
  Operating system settings   86
  Performance settings   86
  Reconciliation settings   87
  Reporting and graph settings   88
  Secure Shell (SSH) settings   88
  Security settings   89
  Sensor settings   90
  Startup settings   94
  Topology builder settings   95
  Topology manager settings   95
  View manager settings   96
  XML performance settings   98

Chapter 11. Self-monitoring tool overview   99
  Viewing availability data   99
  Viewing error messages   100
  Viewing services   100
  Working with non-operational processes   100
  Viewing errors   102
  Viewing performance data   103
  Viewing the table summary of response times   103
  Viewing the bar chart of current response times   104
  Viewing the plot chart of response times   104
  Working with the table summary of situation events   104
  Viewing the infrastructure data   106
  Viewing the bar chart that graphs an information summary   107
  Viewing bar charts that graph memory usage for individual services   107
  Viewing the circular gauge chart that graphs processor usage   107
  Viewing circular gauge charts that graph processor usage for individual services   107
  Viewing table summary of system availability   108
  Viewing configuration items   108
  Viewing total number of configuration item changes in the past week   108
  Viewing totals for configuration items   109
  Viewing plot chart of configuration items   109
  Working with the table summary of situation events   109

Chapter 12. Integration with other Tivoli products   111
  Configuring for launch in context   111
  Views that you can launch from other Tivoli applications   111
  Specifying the URL to launch TADDM views   111
  Sending change events to external systems   114
  Configuring TADDM   114
  Configuring IBM Tivoli Netcool/OMNIbus   115
  Configuring IBM Tivoli Enterprise Console   115
  Configuring an IBM Tivoli Monitoring data provider   116
  Creating configuration change situations in IBM Tivoli Monitoring   118
  Creating detail links in configuration change event reports in IBM Tivoli Monitoring   118
  Configuring change events for a business system   121
  Integration with IBM Tivoli Business Service Manager   121
  Integration with IBM Tivoli Monitoring   122
  IBM Tivoli Monitoring sensor   122
  IBM Tivoli Monitoring DLA   123
  Monitoring Coverage report   123
  Change events   124
  Self-monitoring tool   124
  Launch in context   124
  Tivoli Directory Integrator   125

Appendix. Notices   127
  Trademarks   128
Chapter 1. Architecture overview
The Tivoli Application Dependency Discovery Manager (TADDM) provides
automated application dependency mapping and configuration auditing.

For TADDM to deliver the mapping and auditing, it depends on the discovery of
information. Discovery is a multithreaded process and usually occurs on multiple
targets simultaneously. The discovery process produces cross-tier dependency
maps and topological views. The following scenario describes some of the features
of TADDM:
1. The agent-free discovery engine instructs and coordinates the sensors to
determine and collect the identity, attributes, and settings of each application,
system, and network component.
2. The configuration data, dependencies, and change history are stored in the
database, and the topologies are stored in a cache on the TADDM server.
3. The discovered data is displayed as runtime, cross-tier application topologies.
4. TADDM generates reports and additional topological views of the information
stored in the database.

The job of the sensor is to discover configuration items, create model objects, and
persist the model objects to the database. The sensors use protocols that are
specific to the resources that they are designed to discover, for example, JMX,
SNMP, CDP, SSH, and SQL. When possible, a secure connection is used between
the TADDM server and the targets to be discovered.

Computer system sensors, for example, emulate a user logging into the target and
running standard operating system commands, the output of which is returned to
the TADDM server. The TADDM server analyzes the data returned by the sensors.
Through the analysis, the target resource and the relationships to it are identified.

There are three configurable aspects to a sensor:


v Scope: Identifies which IP addresses or applications are queried.
v Access list: The credentials that the sensor uses to work with the security
mechanism provided in the standard environment. Examples of access lists
include SSH keys, community strings, or WebSphere® credentials including SSL
keys.
v Interval for discovery: Identifies whether sensors are scheduled for discovery or
run on demand.

Figure 1 on page 2 illustrates the discovery process. Discovery is an iterative
process that starts with a seed. A seed defines the type of sensor and enough
information to determine how to talk to the computer system, or application, to be
discovered. For example, a seed for DB2® would include the IP address and port
where DB2 can be contacted. As the discovery process continues, the process
determines whether the initial target being discovered is a network device or a
computer system. In addition, sensors discover, with greater specificity, the
in-scope devices and applications until all items in the environment are identified.
At each step of the process, as more information is gathered, the discovery engine
calls more specific sensors.

Figure 1. Discovery process

Figure 2 is an overview of the TADDM architecture that is described in
subsequent sections. The figure shows an Enterprise Domain Server with its
database and multiple Domain Servers, each with its own database.

Figure 2. Overview of TADDM architecture

TADDM server
A domain is a logical subset of the infrastructure of a company or other
organization. Domains can delineate organizational, functional, or geographical
boundaries. One TADDM server is deployed per domain to discover domain
applications and their supporting infrastructure, configurations, and dependencies.
A single TADDM server can also be referred to as a TADDM Domain Server.

When you use TADDM servers, you use the Product Console and Domain
Manager to set up and view TADDM data. The Product Console is a graphical
Java based application that you use to set up and customize discoveries and view
and analyze configuration information that is collected. The Domain Manager is a
graphical Web application that you use to view enterprise-wide information. You
also use the Domain Manager to administer security.

TADDM is structured to scale to large data centers. A single TADDM server can
support approximately 10 000 physical servers. You can tune TADDM operating
characteristics (for example, discovery engine thread counts and discovery sensor
time-outs) and increase the TADDM server or TADDM database resources (for
example, memory and processor) to achieve increased support for infrastructure
discovery and storage.

If you specified an IBM Tivoli Change and Configuration Management Database
(Tivoli CCMDB) server when you installed the TADDM server, you can also view
artifact information in the Tivoli CCMDB.

TADDM Enterprise Domain Server


The TADDM Enterprise Domain Server is used in large-scale, enterprise
environments and unifies the data from individual TADDM domains.

Although it is possible to configure TADDM servers to support large enterprise
environments, you can also use the TADDM Enterprise Domain Manager.

The TADDM Enterprise Domain Manager synchronizes data from multiple local
TADDM server instances to provide a single, enterprise-wide repository of
information.

The Domain Manager also provides a Web application to administer the local
domain servers and to view the data. It has a query and reporting user interface
(UI) that you can customize and share across the TADDM Enterprise Domain
Manager.

Use the TADDM Domain Manager to add new TADDM Domain Servers and to
view individual domain discovery information.

The following list summarizes the functionality of the TADDM Enterprise Domain
Server:
v Maintains change history
v Provides bulk load and import capabilities
v Provides cross-domain query capability
v Supports a common security framework across domains
v Provides the same API for access to data across domains

The TADDM Enterprise Domain Server does not provide support for the following
items:
v Discovery
v Membership of a domain in more than one TADDM Enterprise Domain Server
v Nesting of the TADDM Enterprise Domain Server

Chapter 2. Security overview
The product software controls user access to configuration items through the use of
permissions, access collections, and roles.

Access control to configuration items is established by the following process:


1. Configuration items are aggregated into access collections.
2. Roles are defined that aggregate sets of permissions.
3. Users or user groups are defined, and roles are assigned to each user or user
group to grant specific permissions (for specific access collections) to that user.

Permissions
A permission authorizes the user to perform an action or access a specific
configuration item.

The following four permissions are provided:


Read The user can view information about a configuration item.
Update
The user can view and modify information about a configuration item.
Discover
The user can initiate a discovery, create and update discovery scope
objects, or create new objects from the Product Console Edit menu.
Admin
The user can create or update users, roles, and permissions. The user can
also configure authorization policy with the authorization manager.

Permissions are aggregated into roles, and users are granted permissions by
assigning them roles that have those permissions.

Permissions are classified as data-level or method-level as follows:


Data-level
Read, Update
Method-level
Discover, Admin

When the product is installed, data-level security is disabled by default, giving all
users Read and Update permissions for any configuration item.

Enabling data-level security


You can enable data-level security for Linux, Solaris, AIX, and Linux on System z®
operating systems by editing the $COLLATION_HOME/etc/collation.properties file.
For Windows operating systems, you must edit the %COLLATION_HOME%\etc\
collation.properties file.

About this task

When the product is installed, data-level security is disabled by default. As a
result, all users can view and modify information about any configuration item. To
enable data-level security so you can grant Read and Update permissions
selectively, complete the following steps:

1. Use your preferred file editor to open the collation.properties file.
2. Locate the following line in the file:
com.collation.security.enabledatalevelsecurity=false
3. Change false to true.
4. Save the file.
5. Stop and restart the TADDM server.
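
For example, on a Linux system with the default installation path, the property
change can be made from the command line. This is only a sketch; it assumes the
default location of the collation.properties file and keeps a backup copy:

cd /opt/IBM/cmdb/dist/etc
cp collation.properties collation.properties.bak
sed -i 's/enabledatalevelsecurity=false/enabledatalevelsecurity=true/' collation.properties

After the change, stop and restart the TADDM server as described in Chapter 3.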

Access collections
An access collection is a set of configuration items that is managed collectively for
security purposes.

TADDM does not manage access to configuration items on an individual basis.


Instead, the configuration items are aggregated into sets called access collections.
The security of each access collection is then managed by creating roles and
assigning the roles to users.

Access collections are used to limit the scope of a role. The role applies only to the
access collections that you specify when assigning the role to a user.

An access collection called DefaultAccessCollection (containing all configuration
items) is created when the product is installed. All users have Read and Update
permissions for this access collection by default, unless data-level security is
enabled.

Creating an access collection


To control user access to configuration items, you must create an access collection.

Before you begin

Before creating an access collection, run a discovery to ensure that the database of
configuration items is up-to-date.

About this task

To create an access collection, complete the following steps:


1. From the Product Console, click Edit → Create Collection. The Create Collection
window is displayed.
2. Type a name for the collection.
3. Select the Access Collection check box.
4. To indicate which configuration items you want to include in the
collection, click the Add and Remove buttons.
5. Click OK. The access collection is created.

Editing an access collection


You can edit an access collection to change its contents.

Before you begin

Before editing an access collection, run a discovery to ensure that the database of
configuration items is up-to-date.

About this task

To edit an access collection, complete the following steps:


1. Open the Product Console.
2. In the Discovered Components list, select Collections. A list of collections is
displayed.
3. Right-click a collection, and click Edit. The Create Collection window is
displayed.
4. To modify the list of configuration items included in the collection, click the
Add and Remove buttons.
5. To save your changes, click OK.

Deleting an access collection


You can delete an access collection that is no longer needed.

About this task

Important: Deleting a collection results in it no longer being aggregated. The
configuration items belonging to the collection are not deleted; they
simply are no longer aggregated under the access collection name.

To delete an access collection, complete the following steps:


1. Open the Product Console.
2. In the Discovered Components list, select Collections. A list of collections is
displayed.
3. Right-click a collection and click Delete. A confirmation window is displayed.
4. Click Yes. The access collection is deleted.

Roles
A role is a set of permissions that can be assigned to a user. Assigning a role
confers specific access capabilities.

There are three predefined roles:


Operator
This role has Read permission.
Supervisor
This role has Read, Update, and Discover permissions.
Administrator
This role has Read, Update, Discover, and Admin permissions.

You can create additional roles to assign other combinations of permissions. Some
useful combinations are as follows:
Read Permission to read objects in assigned access collections. This is suitable for
an operator role.
Read + Update
Permission to read and update objects in assigned access collections.
Read + Update + Admin
Permission to read and update objects in assigned access collections, and to
create users, roles, and permissions.

Read + Update + Discover
Permission to read and update objects in assigned access collections, and to
start discovery operations. This combination is suitable for a supervisor
role.
Read + Update + Admin + Discover
All permissions. This is suitable for an administrator role.

When you assign a role to a user, you must specify one or more access collections
for that role. This limits the scope of the role to only those access collections that
are appropriate for that user.

For example, Sarah is responsible for the NT servers and workstations of your
company, so you assign her the supervisor role over an access collection that
contains those systems. Jim is responsible for the Linux systems, so you assign him
the supervisor role for an access collection that contains them. Both users perform
exactly the same operations, and thus are assigned the same role, but have access
to different resources.

If you are using an Enterprise Domain Server and you want to create a role, you
must create the role for each TADDM domain, and then synchronize them with the
Enterprise Domain Server.

Creating a role
If the predefined roles are not sufficient for your needs, you can create a new one
with the permissions that you choose.

About this task

To create a new role, complete the following steps:


1. Start the Domain Manager.
2. Click Administration → Roles. A list of roles is displayed.
3. Click Create Role. The Create Role window is displayed.
4. Type a name for the new role, then select the permissions that you want to
grant.
5. Click Create Role. The list of roles is displayed again, with the new role
included in the list.

Deleting a role
You can delete a role that is no longer needed.

About this task

To delete a role, complete the following steps.

Restriction: The predefined roles (administrator, operator, and supervisor) cannot


be deleted.
1. Start the Domain Manager.
2. Click Administration → Roles. A list of roles is displayed.
3. Click Delete next to the role that you want to delete. The role is deleted.

Editing a role
You can edit a role to set its permissions.

About this task

To edit a role, complete the following steps.

Restriction: The predefined roles (administrator, operator, and supervisor) cannot


be edited.
1. Start the Domain Manager.
2. Click Administration → Roles.
3. From the list of roles, select the role you want to edit and then click Edit.
4. In the Edit Role window, select the permissions you want to grant.
5. Click OK. The list of roles is updated to show the changes.

Users
In the context of security, a user is a person who is given access to configuration
items.

Users are created in the Domain Manager. User access to configuration items is
defined by the roles and access collections that you assign to that user. You can
change these assignments at any time.

Creating a user
When the file-based registry is used for user management, you can create a new
user and assign roles to that user.

About this task

To create a user, complete the following steps:


1. Start the Domain Manager.
2. Click Administration → Users. A list of users is displayed.
3. Click Create User. The Create User window is displayed.
4. Enter the following information for the new user in the following fields:
v Username
v Email address
v Password (twice for confirmation)
v Session timeout (in minutes)
For an unlimited session timeout for the Product Console, set the session
timeout value to -1.
5. Assign roles to the new user. For each role that you assign, perform the
following steps:
a. Select the check box for that role.
b. Specify the scope of the role by selecting one or more access collections.
6. Click Create User. The user is added. The list of users is displayed again, with
the new user included in the list.

Changing a user
When the file-based registry is used for user management, you can change the
information for an existing user.

About this task

In addition to changing the user details (email address, password, and session
timeout), you can also change the access permissions by assigning different roles
and access collections.

To change a user, complete the following steps:


1. Start the Domain Manager.
2. Click Administration → Users. A list of users is displayed.
3. Click the user name that you want to edit, and then click Edit. The information
for the user is displayed.
4. Change the user details as needed:
v Email address.
v New password (twice for confirmation).
v New password expiration date. If a date is specified that is not valid, the
expiration is set to 90 days from the current date.
v Session timeout (in minutes).
5. Change the roles and access permissions to meet your security requirements.
6. The button that you click to save your changes depends on the properties that
you change:
v To save User Detail properties, click Change
v To save Change Password properties, click Change Password
v To save Change Role Assignment properties, click Change Role

Deleting a user
When the file-based registry is used for user management, you can delete a user
that you created.

About this task

Restriction: The administrator cannot be deleted.

To delete a user, complete the following steps:


1. Start the Domain Manager.
2. Click Administration → Users. A list of users is displayed.
3. Select the user you want to delete and click Delete. A confirmation window is
displayed.
4. Click OK. The user is deleted.

User groups
In the context of security, a user group consists of several users who have the same
roles or permissions.

User groups are created in the Domain Manager. User group access to
configuration items is defined by the roles and access collections that you assign to
that user group. You can change these assignments at any time.

Creating a user group


When the file-based registry is used for user management, you can create a new
user group.

About this task

To create a user group, complete the following steps:


1. Start the Domain Manager.
2. Click Administration → User Groups. The User Groups pane is displayed.
3. Click Create Group. The Create Group pane is displayed.
4. In the Create Group pane, select the users for your user group.
5. Assign roles to the new user group. For each role that you assign, perform the
following steps:
a. Select the check box for that role.
b. Specify the scope of the role by selecting one or more access collections.
6. Click OK. The user group is added. The list of user groups is displayed again,
with the new user group included in the list.

Changing a user group


When the file-based registry is used for user management, you can change the
information for an existing user group.

About this task

In addition to adding or removing users in a user group, you can also change the
access permissions by assigning different roles and access collections.

To change a user group, perform the following steps:


1. Start the Domain Manager.
2. Click Administration → User Groups. A list of user groups is displayed.
3. Select the user name of the user group that you want to change and click Edit.
The Edit User Groups pane is displayed.
4. Add or remove users from the user group.
5. If the security requirements of the user group have changed, change the roles
and access permissions as needed.
6. Click OK. Your changes are saved.

Deleting a user group


When the file-based registry is used for user management, you can delete a user
group that you created.

About this task

To delete a user group, complete the following steps:


1. Start the Domain Manager.
2. Click Administration → User Groups. The User Groups pane is displayed.
3. Select the user group you want to delete and click Delete. A confirmation
window is displayed.
4. Click OK. The user group is deleted.

Encryption
Encryption is the process of transforming data into an unintelligible form in such a
way that the original data either cannot be obtained or can be obtained only by
using a decryption process.

TADDM uses the AES 128 algorithm from the FIPS-compliant IBMJCEFIPS security
provider to encrypt the following items:
v Passwords, including entries in collation.properties and userdata.xml.
v Access list entries stored in the database.

When you install TADDM, an encryption key is generated and passwords are
encrypted using this new encryption key. When migrating to TADDM 7.2 from a
previous release, an encryption key is generated and any existing passwords and
access list entries are migrated to use this new encryption key.

The default location for the TADDM encryption key is etc/TADDMSec.properties.


To change the location of the key file, in the collation.properties file, change the
value of the following property: com.collation.security.key. You can set the
property to another location relative to $COLLATION_HOME.
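
For example, with the default key file location, the entry in the
collation.properties file looks similar to the following line (shown here only
as an illustration; the path is relative to $COLLATION_HOME):

com.collation.security.key=etc/TADDMSec.properties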

Note: To avoid data loss, keep a backup copy of the encryption key in a separate
location so that it can be restored if a problem occurs with the original copy.

To change the TADDM encryption key, use the bin/changekey.sh script, or the
equivalent batch script file. This script migrates encrypted entries in
collation.properties and userdata.xml, as well as access list entries stored in the
database.

Use the changekey script as follows:


./changekey.sh $COLLATION_HOME admin_user admin_password

For example:
./changekey.sh /opt/IBM/cmdb/dist administrator taddm
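
On a Windows TADDM server, the equivalent batch script can be run in the same
way. The following line is only a sketch that assumes the batch file is named
changekey.bat and that TADDM is installed in the default location:

changekey.bat c:\IBM\cmdb\dist administrator taddm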

FIPS compliance
You can configure TADDM to operate in a mode that uses FIPS-compliant
algorithms for encryption by setting the FIPSMode property to true in
collation.properties.

When in FIPS mode, TADDM uses the following FIPS 140-2 approved
cryptographic providers:
v IBMJCEFIPS (certificate 376)
v IBMJSSEFIPS (certificate 409)
For more information about certificates 376 and 409, see the National Institute of
Standards and Technology (NIST) Web site, http://csrc.nist.gov/cryptval/140-1/
1401val2004.htm.

FIPS mode can be used with the following types of TADDM discoveries:
v Level 1 discoveries where the TADDM server and discovered systems are any
TADDM-supported platforms.
v Level 2 discoveries where the TADDM server is Windows based and Windows
Management Instrumentation (WMI) is used to discover Windows platforms.
v Level 1 and Level 2 discoveries through IBM Tivoli Monitoring, using the
ITMScopeSensor, discovering systems managed by IBM Tivoli Monitoring. The
TADDM server can run on any supported TADDM platform.
Other TADDM discovery types are not supported in FIPS mode.

Resetting security policies


If it becomes necessary to reset the security policies (permissions, roles, and access
collections) to their default state, you can do so by replacing two files.

About this task

Important: Resetting security policies requires that you delete and recreate all
users, so it should be used with caution.

The security policies are stored in two files. The files are used to initialize the
security policies. The following files are the current working versions:
v AuthorizationPolicy.xml
v AuthorizationRoles.xml

The files are located in different directories depending on which operating system
you are using:
v For the supported Linux, Solaris, AIX, and Linux on System z operating systems,
the files are located in the $COLLATION_HOME/var/policy directory.
v For the supported Windows operating systems, the files are located in the
%COLLATION_HOME%\var\policy directory.

After the security policies are initialized, these files are renamed and stored in the
same directory. The following files have been renamed:
v AuthorizationPolicy.backup.xml
v AuthorizationRoles.backup.xml

Default versions of the files, containing the supplied security policies, are also
located in the same directory. The following files are the default versions:
v DefaultPolicy.xml
v DefaultRoles.xml

To restore the default security policies, complete the following steps:


1. If you need to save the current policy files, rename them or move them to a
different directory.
2. Delete any users that you created.
3. Delete one of the following directories:
v For Linux, Solaris, AIX, and Linux on System z, delete the
$COLLATION_HOME/var/ibmsecauthz directory.
v For Windows systems, delete the %COLLATION_HOME%\var\ibmsecauthz
directory.
4. Create a copy of the DefaultPolicy.xml file and name it
AuthorizationPolicy.xml.

5. Create a copy of the DefaultRoles.xml file and name it
AuthorizationRoles.xml.
6. Restart the server.
7. As needed, create users.
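
For example, on a Linux system with the default installation path, steps 3
through 5 correspond to the following commands. This is only a sketch; adjust
the paths if TADDM is installed in a different location:

cd /opt/IBM/cmdb/dist/var
rm -rf ibmsecauthz
cd policy
cp DefaultPolicy.xml AuthorizationPolicy.xml
cp DefaultRoles.xml AuthorizationRoles.xml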

Security for the TADDM Enterprise Domain Server


When you use a TADDM Enterprise Domain Server, you must make security
changes when configuring the server for your environment.

If you are using the TADDM file-based registry and a TADDM domain is added to
a TADDM Enterprise Domain Server, you must re-create in the TADDM Enterprise
Domain Server any users that already exist in a domain, including assigned roles
and access that is granted to access collections. If you are using a Lightweight
Directory Access Protocol (LDAP) or WebSphere Federated Repositories user
registry, you must add to the TADDM Enterprise Domain Server the authorization
for any users that access TADDM.

When you add a domain to the TADDM Enterprise Domain Server, authentication
and authorization for the new domain is delegated to the TADDM Enterprise
Domain Server.

Logins to the domain are processed at the TADDM Enterprise Domain Server. In
addition, security manager method calls are processed by the TADDM Enterprise
Domain Server.

The following list summarizes other security information that you need to know to
configure your TADDM Enterprise Domain Server:
v For TADDM to function properly, the TADDM Enterprise Domain Server must
be running. A TADDM domain delegates security operations to the TADDM
Enterprise Domain Server, and this delegation is updated every 2.5 minutes. If 5
minutes pass and this delegation is not updated, the TADDM domain no longer
delegates security operations and proceeds as if no TADDM Enterprise Domain
Server is present. In this situation, TADDM UIs must be restarted to re-establish
the sessions with the Enterprise Domain Server.
v In each of the following situations, a TADDM UI must be restarted to
re-establish sessions with the correct Enterprise Domain Server:
– The domain in which the UI is running is added to an Enterprise Domain
Manager.
– The UI is opened in a domain while that domain is connected to an
Enterprise Domain Manager, but the Enterprise Domain Manager later
becomes unavailable, such as during a restart of the Enterprise Domain
Server or when network problems occur.
v Roles, permissions, and access collections that are stored in the TADDM server
are synchronized from the domain to the TADDM Enterprise Domain Server.
User to role mappings are not synchronized.
v Roles that you created for the domain can be used by the TADDM Enterprise
Domain Server after these objects are synchronized from the domain to the
Enterprise Domain Server.
v Users are not synchronized to the TADDM Enterprise Domain Server.
v A central user registry, such as LDAP or a WebSphere Federated Repositories
registry, is the preferred method of authentication for the TADDM Enterprise
Domain Server. Using a central user registry, user passwords are stored in one
location.
v To use Microsoft® Active Directory as the authentication method for TADDM,
you need to configure TADDM to use WebSphere federated repositories and
then configure WebSphere federated repositories to use Active Directory.
v Access collections cannot span domains.
v Synchronization works from the domain to the TADDM Enterprise Domain
Server. Objects that are created in the TADDM Enterprise Domain Server are not
propagated to the domain.
v Create and populate access collections at the domain, and synchronize with the
TADDM Enterprise Domain Server.
v Create roles at the domain, and synchronize with the TADDM Enterprise
Domain Server.
v Authorize users at the TADDM Enterprise Domain Server to provide access to
access collections from multiple domains.

Configuring for LDAP


You can configure an external LDAP server for user authentication.

Before you begin

If you want to authenticate to an LDAP user registry, an LDAP V2 or V3 registry
can be configured. IBM® Tivoli® Application Dependency Discovery Manager
LDAP support has been tested with IBM Tivoli Directory Server Version 6
Release 0.

About this task

If you have an LDAP registry, you can use the users that are defined in the LDAP
registry without defining new users by configuring for LDAP. When you configure
for LDAP, you must create a user named administrator in your LDAP registry. This
administrator user is allowed to configure access to TADDM and grant other users
access to TADDM objects and services.

To configure an external LDAP server for user authentication, set the following
security-related properties in the collation.properties file (an example follows the list):
v com.collation.security.usermanagementmodule
v com.collation.security.auth.ldapAuthenticationEnabled
v com.collation.security.auth.ldapHostName
v com.collation.security.auth.ldapPortNumber
v com.collation.security.auth.ldapBaseDN
v com.collation.security.auth.ldapUserObjectClass
v com.collation.security.auth.ldapUIDNamingAttribute
v com.collation.security.auth.ldapGroupObjectClass
v com.collation.security.auth.ldapGroupNamingAttribute
v com.collation.security.auth.ldapGroupMemberAttribute
v com.collation.security.auth.ldapBindDN
v com.collation.security.auth.ldapBindPassword
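
The following lines show what a completed configuration might look like. This is
only a sketch: the host name, port, base DN, and bind credentials are
placeholders, and the object class and attribute names shown are common LDAP
defaults rather than values taken from this guide; substitute the values that
match your LDAP registry:

com.collation.security.usermanagementmodule=ldap
com.collation.security.auth.ldapAuthenticationEnabled=true
com.collation.security.auth.ldapHostName=ldap.example.com
com.collation.security.auth.ldapPortNumber=389
com.collation.security.auth.ldapBaseDN=dc=example,dc=com
com.collation.security.auth.ldapUserObjectClass=inetOrgPerson
com.collation.security.auth.ldapUIDNamingAttribute=uid
com.collation.security.auth.ldapGroupObjectClass=groupOfNames
com.collation.security.auth.ldapGroupNamingAttribute=cn
com.collation.security.auth.ldapGroupMemberAttribute=member
com.collation.security.auth.ldapBindDN=cn=taddm,dc=example,dc=com
com.collation.security.auth.ldapBindPassword=password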

Related reference
“LDAP settings” on page 84
An external LDAP server can be used for user authentication. Both anonymous
authentication and password-based authentication are supported with an external
LDAP server.

Configuring for WebSphere federated repositories


If you have a Tivoli WebSphere application that is configured for a central user
registry that uses WebSphere federated repositories, you can configure TADDM to
use the same federated repositories registry.

Configuring the TADDM server for WebSphere federated repositories
WebSphere federated repositories is a flexible meta-repository within WebSphere
that supports multiple types of user registries, including Microsoft Active
Directory.

Before you begin

If your user registry is Active Directory, you must configure TADDM to use
federated repositories. You must configure TADDM to use WebSphere federated
repositories if you use other Tivoli products in your environment, and you require
single sign-on between TADDM and any of the following products:
v IBM Tivoli Change and Configuration Management Database (CCMDB) 7.1, or
later
v IBM Tivoli Business Service Manager 4.2, or later
v any other Tivoli integrated portal-based application

This configuration enables single sign-on between Tivoli applications using
WebSphere Lightweight Third-Party Authentication (LTPA) tokens. For example,
configuring TADDM for the same WebSphere federated repositories used by
CCMDB supports single sign-on for launch in context between IBM Tivoli CCMDB
and TADDM.

About this task

When configuring for WebSphere federated repositories, you must create a user
named administrator in your federated repositories registry. This administrator user
can configure access and grant other users access to objects and services.

To automatically configure for WebSphere federated repositories, install TADDM
and select WebSphere Federated Repositories as your user registry during
installation.

If necessary, you can manually configure for WebSphere federated repositories. To
perform the configuration manually, complete the following steps:
1. Stop the TADDM server.
2. Specify the user management module used by this TADDM server. The
following are possible values:
v file: for a file-based user registry. (This is the default value.)
v ldap: for an LDAP user registry
v vmm: for a user registry that uses the federated repositories of WebSphere
Application Server
For example, in the $COLLATION_HOME/etc/collation.properties file:
com.collation.security.usermanagementmodule=vmm
3. Specify the WebSphere host name and port in the collation.properties file.
For example:
com.collation.security.auth.websphereHost=localhost
com.collation.security.auth.webspherePort=2809
If you are manually configuring TADDM to use WebSphere federated
repositories, there is a configuration consideration:
v When specifying the WebSphere port in the collation.properties file, use
the following property: com.collation.security.auth.webspherePort. The
WebSphere port should be the bootstrap port for the WebSphere server. For
WebSphere Application Server and the embedded version of WebSphere
Application Server, the default port is 2809. For WebSphere Application
Server Network Deployment, which IBM Tivoli CCMDB uses, the default
port is 9809.
4. Specify the WebSphere administrator user name and password in the
collation.properties file. For example:
com.collation.security.auth.VMMAdminUsername=administrator
com.collation.security.auth.VMMAdminPassword=password
5. Make the following change to the authentication services configuration file:
v For the Linux, Solaris, AIX, and Linux on System z operating systems, the
file is located in the following path: $COLLATION_HOME/etc/
ibmessclientauthncfg.properties.
v For the Windows operating systems, the file is located in the following
path: %COLLATION_HOME%\etc\ibmessclientauthncfg.properties.
In the authnServiceURL property, substitute the fully qualified domain name of
the system your WebSphere instance is installed on and the HTTP port of the
WebSphere instance.
# This is the URL for the Authentication Service
authnServiceURL=http://localhost:9080/TokenService/services/Trust
6. Copy the WebSphere orb.properties and iwsorbutil.jar files into the JRE
used by your TADDM installation. For example in a TADDM Linux
installation, do the following:
a. Copy dist/lib/websphere/6.1/orb.properties to dist/external/
jdk-1.5.0-Linux-i686/jre/lib/.
b. Copy dist/lib/websphere/6.1/iwsorbutil.jar to dist/external/
jdk-1.5.0-Linux-i686/jre/lib/ext/.
7. Specify the WebSphere host name and port in the sas.client.props file:
v For the Linux, Solaris, AIX, and Linux on System z operating systems, the file
is located in the following path: $COLLATION_HOME/etc/sas.client.props.
v For the Windows operating systems, the file is located in the following path:
%COLLATION_HOME%\etc\sas.client.props, for example:
com.ibm.CORBA.securityServerHost=host1.austin.ibm.com
com.ibm.CORBA.securityServerPort=2809

Note: For WebSphere Application Server and the embedded version of
WebSphere Application Server, the default port is 2809. For WebSphere
Application Server Network Deployment, which IBM Tivoli CCMDB
uses, the default port is 9809.

8. Specify the WebSphere administrator user name and password in the
sas.client.props file, for example:
# RMI/IIOP user identity
com.ibm.CORBA.loginUserid=administrator
com.ibm.CORBA.loginPassword=password
9. Optionally, you can use the following steps to encrypt the login password in
the sas.client.props file:
a. Copy the sas.client.props file from the $COLLATION_HOME/etc directory on
the TADDM server to the system where WebSphere is installed.
b. Encrypt the password as follows, depending on which operating system
you have installed WebSphere.
v For Linux, Solaris, AIX, and Linux on System z operating systems:
Use the PropFilePasswordEncoder.sh command.
v For Windows operating systems:
Use the PropFilePasswordEncoder.bat command. For example:
C:\WebSphere\profiles\AppSrv01\bin\PropFilePasswordEncoder C:\temp\sas.client.props com.ibm.CORBA.loginPassword
c. Copy the sas.client.props file back to the TADDM server, in the etc
directory.
10. Start the TADDM server.

Important: There are security configurations for Tivoli CCMDB that allow groups
and group memberships to be created and maintained in the Maximo®
user and group applications.

When Tivoli CCMDB is configured for this, TADDM uses its own,
separate repository from Tivoli CCMDB. Users must be created in both
Tivoli CCMDB/Maximo and TADDM.

TADDM can be configured to use user and group definitions in
external user registries through WebSphere Federated Repositories.
However, TADDM cannot use user and group definitions that are
stored in Tivoli CCMDB because these are not supported by
WebSphere Federated Repositories.

Configuring for Microsoft Active Directory


You can use Microsoft Active Directory as the authentication method for TADDM
using WebSphere federated repositories as an intermediary.

About this task

You can use the users defined in an Active Directory registry, without defining
new users, by configuring TADDM to use WebSphere federated repositories and
then configuring WebSphere federated repositories for Active Directory.

When you configure for Active Directory, you must create a user named
administrator in the Active Directory registry. This administrator user is allowed to
configure access to TADDM and grant other users access to TADDM objects and
services.

To configure for Microsoft Active Directory, complete the following steps:


1. Configure TADDM for WebSphere federated repositories.

For more information on configuring TADDM for WebSphere federated
repositories, see “Configuring the TADDM server for WebSphere federated
repositories” on page 16.
2. Configure WebSphere federated repositories for Microsoft Active Directory.
For more information on configuring supported entity types in a federated
repository configuration, see the section called Configuring supported entity
types in a federated repository configuration in the WebSphere Application
Server Information Center.
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp?topic=/
com.ibm.websphere.nd.doc/info/ae/ae/twim_entitytypes.html

Securing the authentication channel


When you configure TADDM to use WebSphere federated repositories, you can
secure communications between the authentication client and the authentication
service.

About this task

TADDM uses an authentication service that supports single sign-on. The
authentication service is installed during the installation of IBM Tivoli Change and
Configuration Management Database 7.1 or IBM Tivoli Business Service Manager
4.2.

There are two mechanisms by which you can secure communications between an
authentication client and an authentication service:
v SSL
v Client authentication

Configuring the authentication channel for SSL


You can secure communications by using the WebSphere signer certificates to
configure SSL between the authentication client and the authentication server.

About this task

To configure for SSL between the authentication client and the authentication
server, complete the following steps:
1. In WebSphere, navigate to SSL certificate and key mgmt → Manage endpoint
security configurations → Node1 → Key stores and certificates →
NodeDefaultTrustStore → Signer certificates
2. Export the WebSphere signer certificates to files (for example, signer1.cert and
signer2.cert).
3. Create a truststore and import the WebSphere signer certificates as follows:
C:\eWAS\java\bin>keytool -genkey -alias truststore -keystore truststore.jks
C:\eWAS\java\bin>keytool -import -trustcacerts -alias default
-file signer1.cert -keystore truststore.jks
C:\eWAS\java\bin>keytool -import -trustcacerts -alias dummyserversigner
-file signer2.cert -keystore truststore.jks
4. Copy the truststore.jks file to the TADDM directory. Include the truststore
password and location in the $COLLATION_HOME/etc/collation.properties entries:
com.collation.security.auth.ESSClientTrustStore=/dist/etc/truststore.jks
com.collation.security.auth.ESSClientTrustPwd=password

Configuring client authentication
To configure client authentication between the authentication client and the
authentication server, it is recommended that you enable WebSphere application
security.

Before you begin

After WebSphere application security is enabled, you can add the role called
TrustClientRole to the WebSphere administrator user that you specified during the
TADDM installation. This provides added security for the authentication service by
restricting the users that can authenticate to the authentication service to only
those with the TrustClientRole.

To add the TrustClientRole to the WebSphere administrator specified during
TADDM installation, complete the following steps:
1. Log in to the WebSphere Administration Console.
2. Under the Security tab, click Enterprise Applications. The Enterprise
Applications pane is displayed.
3. In the Enterprise Applications table, click on the Authentication Service
application (authnsvc_ctges) in the Name column. The Enterprise Applications
> authnsvc_ctges pane is displayed.
4. In the Enterprise Applications > authnsvc_ctges pane, in the Detailed Properties
list, click Security role to user/group mapping. The Enterprise Applications >
authnsvc_ctges > Security role to user/group mapping pane is displayed.
5. In the table on the Enterprise Applications > authnsvc_ctges > Security role to
user/group mapping pane, complete the following steps:
v In the table, select the check box next to TrustClientRole.
v Clear the Everyone check box.
v Click the Lookup Users or Lookup Groups button. The Enterprise
Applications > authnsvc_ctges > Security role to user/group mapping >
Lookup users or groups pane is displayed.
v In the Enterprise Applications > authnsvc_ctges > Security role to
user/group mapping > Lookup users or groups pane, complete the following
steps:
– Search for users or groups, using the Limit and Search string input boxes.
When a group or user is found, it is displayed in the Available list.
– From the Available list, select the user or group that you want.
– Click the Move button to add that user or group to the Selected list.
v Click OK. The Enterprise Applications > authnsvc_ctges > Security role to
user/group mapping pane is displayed.
v In the Enterprise Applications > authnsvc_ctges > Security role to
user/group mapping pane, clear the Everyone check box.
v Click OK. The Enterprise Applications > authnsvc_ctges pane is displayed.
v Click Save to save the configuration. The Enterprise Applications pane is
displayed.
v Click OK. The Enterprise Applications > authnsvc_ctges pane is displayed.

Chapter 3. TADDM set up
When you set up Tivoli Application Dependency Discovery Manager (TADDM),
you must meet the prerequisites for starting the Product Console, configure
firewalls, and start the server.

During setup, you might also need to stop the server or to back up and restore
files. Instructions for completing these tasks are provided in this section.

Prerequisites for starting the Product Console


The Java based Product Console can be run on either the TADDM server or any
remote workstation that meets the requirements in Chapter 2, ″Planning,″ in the
IBM Tivoli Application Dependency Discovery Manager Planning and Installation Guide .

About this task

The Product Console must be run from a system that has the IBM Java 2 Platform
Standard Edition 5.0 or 6.0 available. Attempting to launch and run TADDM with
an unsupported JRE can produce unsatisfactory results.

To determine if you have the correct version of the Java platform, enter the
following command:
java -version

If the Java version is not IBM 1.5 or IBM 1.6, install the Java software from the
TADDM installation media. Instructions are listed below for each type of system
that you are using to run the TADDM Product Console.
v For Windows systems, complete the following steps:
1. Close all open browser windows.
2. From the TADDM installation DVD, copy the TADDM/ibm-java/windows/ibm-java2-jre-version-win-i386.exe file to the system that you are using to log into the TADDM Java Client.
3. Run the executable file to install the JRE on the system.
v For Solaris systems, complete the following steps:
1. Close all open browser windows.
2. From the TADDM installation DVD, copy the TADDM/collation/solaris.zip
file to the system that you are using to log into the TADDM Java Client.
3. Extract the dist/external/jdk/jdk-version-SunOS-sparc.zip file from the solaris.zip file.
4. Extract the jdk-version-SunOS-sparc.zip file.
5. Configure the application type of your browser for ’JNLP File’ (Java Network Launch Protocol) with the Java Web Start Launcher found in the jdk-version-SunOS-sparc/jre/bin/javaws directory.
v For AIX systems, complete the following steps:
1. Close all open browser windows.
2. From the TADDM installation DVD, copy the TADDM/collation/aix.zip file
to the system you are using to log into the TADDM Java Client.

3. Extract the dist/external/jdk/jdk-version-AIX-powerpc.zip file from the aix.zip file.
4. Extract the jdk-version-AIX-powerpc.zip file.
5. Configure the application type of your browser for ’JNLP File’ with the Java Web Start Launcher found in the jdk-version-AIX-powerpc/jre/bin/javaws directory.
v For Linux systems, complete the following steps:
1. Close all open browser windows.
2. Use one of the following procedures to copy the appropriate file to the
system you are using to log into the TADDM Java Client:
– For Linux on xSeries®, from the TADDM installation DVD, copy the
TADDM/collation/linux.zip file to the system you are using to log into the
TADDM Java Client.
– For Linux on zSeries®, from the TADDM installation DVD, copy the
TADDM/collation/linuxS390.zip file to the system you are using to log
into the TADDM Java Client.
3. Use one of the following procedures to extract the appropriate file from the downloaded .zip file:
   – For 32-bit operating systems on xSeries, extract the dist/external/jdk/jdk-version-Linux-i686.zip file from the linux.zip file.
   – For 64-bit operating systems on xSeries, extract the dist/external/jdk/jdk-version-Linux-x86_64.zip file from the linux.zip file.
   – For 32-bit operating systems on zSeries, extract the dist/external/jdk/jdk-version-Linux-s390.zip file from the linuxS390.zip file.
   – For 64-bit operating systems on zSeries, extract the dist/external/jdk/jdk-version-Linux-s390x.zip file from the linuxS390.zip file.
4. Extract the jdk file.
5. Configure the application type of your browser for ’JNLP File’ with the Java Web Start Launcher appropriate for your operating system. This is found in the jdk-version-Linux_platform/jre/bin/javaws directory, where Linux_platform is one of the following:
– Linux-i686
– Linux-x86_64
– Linux-s390x
– Linux-s390
For more information on running Java Network Launch Protocol (JNLP) files,
see the documentation for your browser.

Deploying the Product Console


After you confirm that the TADDM server is available, you can deploy the Product
Console.

About this task

To deploy the Product Console, complete the following steps:


1. Provide users with the URL (including the port number) of the system where you installed the TADDM server.
For example, you can provide users with something similar to the following
URL:

http://system.company.com:9430
2. Provide users with their user name and password.
3. Specify whether users should use Secure Sockets Layer (SSL).
In cases where SSL is being used, instruct users to save a trust store for the
TADDM server by following the instructions on the Product Console
Installation and Start page. For more information, refer to the Tivoli Application
Dependency Discovery Manager Installation Guide.

Important: You should use SSL for all communication between the Product
Console and the TADDM server.
4. Ensure that users have the correct version of Java installed on the system that they are using to view the Product Console.
5. Supply users with the IBM Tivoli Application Dependency Discovery Manager
User’s Guide, which includes information about how to install and start the
Product Console.

Checking server status


You can use the Administrator Console to obtain the current status of the TADDM
server.

Open a Web browser and enter the URL and port number of the system where you
installed the TADDM server.

For example, you could enter something similar to the following URL:
http://system.company.com:9430

The Administrator Console opens, listing the components of the TADDM server and their status.
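You can also make a quick check from a command line, for example with the curl utility, by requesting the same URL. This is only a reachability check of the Web component; the host name shown is a placeholder for your own TADDM server:

curl http://system.company.com:9430

If the command returns the HTML of the TADDM launch page, the Web component is listening on the configured port.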

Configuring firewalls between the Product Console and the TADDM server

You must configure any firewalls located between the Product Console and the TADDM server with policies that enable communication.

Confirm that the computer running the Product Console is able to establish
connections to the TADDM server on the configured ports.

Table 1 lists the default ports. If you specified different ports during installation,
you must open the ports that you specified.
Table 1. Port configuration
Default port   Protocol   Use
9430           TCP        Initial Web page and Administrator Console - non-SSL
9431           TCP        Initial Web page and Administrator Console over SSL
9433           TCP        RMI Naming Service
9434           TCP        Required only for SSL communication
9435           TCP        Required only for non-SSL communication
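As an illustration only, on a Linux firewall that uses iptables, rules similar to the following sketch might be used to accept connections from the Product Console on the default ports listed in Table 1. Adapt the commands to your firewall product and to any non-default ports that you specified during installation:

iptables -A INPUT -p tcp --dport 9430 -j ACCEPT
iptables -A INPUT -p tcp --dport 9431 -j ACCEPT
iptables -A INPUT -p tcp --dport 9433 -j ACCEPT
iptables -A INPUT -p tcp --dport 9434 -j ACCEPT
iptables -A INPUT -p tcp --dport 9435 -j ACCEPT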

If all the default ports and the ports that you specified during installation are open, and the Product Console responds with a message that the server is not running when you log in, the TADDM server might be requesting that the client connect to the wrong location. Complete the following steps:
1. Verify that the host identifies itself as its fully qualified host name.
2. Verify that the forward and reverse DNS mappings for the fully-qualified host
name match.
If this is not possible or practical, override the host name that is returned to the client with an IP address by changing the following property in the collation.properties file, for example:
com.collation.clientproxy.rmi.server.hostname=192.168.253.128

Use the appropriate IP address for your environment.

Configuring firewalls between the Enterprise Domain Server and the TADDM server

To ensure that the computer running the Enterprise Domain Server can establish connections to the TADDM server, you must configure the firewall so that specific ports are open for communication.

Table 2 describes the firewall ports you need to open on the Enterprise Domain
Server to enable communication between the Enterprise Domain Server and the
TADDM server.
Table 2. Communication ports used by the firewall
Default port   Direction   Description
4160           Outgoing    The port used for communicating unicast discovery information. It is the listening port of the TADDM database.
9430           Outgoing    The port used for communicating HTTP information.
9433           Outgoing    The port used for communicating naming service information.
9435           Outgoing    The port used for communicating RMI information.
9540           Outgoing    The port used for communicating security manager information in an enterprise environment.
9550           Outgoing    The port used for communicating topology manager information in an enterprise environment.
19430          Outgoing    The port used for communicating topology manager information.
19431          Outgoing    The port used for communicating change manager information.
19432          Outgoing    The port used for communicating API server information.
19433          Incoming    The port used for communicating user registry information.
19434          Outgoing    The port used for communicating reports server information.
19435          Outgoing    The port used for communicating with the Enterprise Domain Server.

If you have changed any of the default ports set in $COLLATION_HOME/etc/collation.properties, you must ensure that you open ports that are appropriate to how your environment is configured.

The following default port values are set in the $COLLATION_HOME/etc/collation.properties file on the TADDM server:
com.collation.jini.unicastdiscoveryport=4160
com.collation.webport=9430
com.collation.rmiport=9433
com.collation.commport=9435
com.collation.TopologyManager.port=19430
com.collation.ChangeManager.port=19431
com.collation.ApiServer.port=19432
com.collation.SecurityManager.port=19433
com.collation.ReportsServer.port=19434

In addition, the default Enterprise Domain Server port value of 19435 is set in
$COLLATION_HOME/external/gigaspaces-4.1/policy/reggie.config.
import net.jini.jeri.tcp.TcpServerEndpoint;
import net.jini.jeri.BasicJeriExporter;
import net.jini.jeri.BasicILFactory;

com.sun.jini.reggie {
    initialMemberGroups = new String[] { "${INITIAL_LOOKUP_GROUP}" };
    persistenceDirectory = "${REGGIE_LOG_FILE}";

    serverExporter = new BasicJeriExporter(
        TcpServerEndpoint.getInstance(19435),
        new BasicILFactory()
    );
}

Starting the TADDM server


If you chose the Start at Boot option at installation, the TADDM server
automatically starts during every system boot.

About this task

Important: A local or remote database server must be started and running before
the TADDM server is started. The TADDM server cannot initialize or
run properly if the database is not available.

To manually start the TADDM server, complete the following steps:


1. Log in as the non-root user that was defined during the installation process.
2. Open a command prompt window.

Note: On a Windows Server 2008 system with User Account Control turned on,
open the command prompt window with administrator privileges. You
can do this by right-clicking on the Command Prompt icon and then
clicking Run as administrator.
3. Go to the directory where you installed the TADDM server.
4. Use one of the following commands to run the start script:
v For Linux, Solaris, AIX, and Linux on System z operating systems:
$COLLATION_HOME/bin/control start
v For Windows operating systems:
%COLLATION_HOME%\bin\control.bat start

Attention: With the %COLLATION_HOME%\bin\control.bat start command, the TADDM server remains running only as long as the user is logged on. When the user logs out, the TADDM server stops because that user owns the process. However, the TADDM server can also be started as a service by using the %COLLATION_HOME%\bin\startServer.bat command issued by the run-as user that was specified during installation. With the %COLLATION_HOME%\bin\startServer.bat command, the TADDM server remains running when the run-as user logs off the system.
When starting the server on a Windows system, you might see the following
timeout error message: Error 1053: The service did not respond to the
start or control request in a timely fashion. This error occurs because the
TADDM server can take longer than the allowed time to start. You can
disregard this message; the startup process continues until it completes.
If you installed the TADDM server with root privileges, you can manually start
the TADDM server by running the following script:
/etc/init.d/collation start

Stopping the TADDM server


You can manually stop the TADDM server and related discovery processes.

About this task

To manually stop the TADDM server, complete the following steps:


1. Log in as the non-root user that was defined during the installation process.
2. Open a command prompt window.

Note: On a Windows Server 2008 system with User Account Control turned on,
open the command prompt window with administrator privileges. You
can do this by right-clicking on the Command Prompt icon and then
clicking Run as administrator.
3. Go to the directory where you installed the TADDM server.
4. Use one of the following commands to run the stop script:
v For Linux, Solaris, AIX, and Linux on System z operating systems:
$COLLATION_HOME/bin/control stop
v For Windows operating systems:
%COLLATION_HOME%\bin\control.bat stop
If you installed the TADDM server with root privileges, you can manually stop
the TADDM server by running the following script:
/etc/init.d/collation stop

What to do next

Some sensors run in their own separate Java Virtual Machine (JVM). If you use the control script (./control stop) to stop TADDM while a discovery is running, you might need to manually stop these additional JVMs, which are called local anchors. If you do not stop the local anchors, unexpected behavior can result, such as degraded performance of certain discoveries.

To verify that the process for the local anchor is no longer running, enter the
following command:
% ps -ef |grep -i anchor

This command identifies any local anchor processes that are running. The output
looks like the following code example:
coll 23751 0.0 0.0 6136 428 ? S Jun02 0:00 /bin/sh
local-anchor.sh 8494 <more information here>

If a process is running, stop the process by running the following command:
% kill -9 23751

After running the command, verify that the process stopped by running the
following command:
% ps -ef |grep -i anchor
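If several local anchor processes remain, a small shell loop such as the following sketch can stop them in one pass. It mirrors the manual ps and kill steps shown above, but matches the more specific local-anchor pattern; review the matched processes before running it:

# Stop all remaining local anchor JVMs (verify the matched processes first)
for pid in $(ps -ef | grep -i local-anchor | grep -v grep | awk '{print $2}'); do
    kill -9 $pid
done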

Testing server status


You can use the control command to test the status of the TADDM server.

About this task

To test the server status, complete the following steps:


1. Log in as the non-root user that was defined during the installation process.
2. Open a command prompt window.
3. Go to the directory where you installed the TADDM server.
4. Use one of the following commands:
v For Linux, Solaris, AIX, and Linux on System z operating systems:
$COLLATION_HOME/bin/control status
v For Windows operating systems:
%COLLATION_HOME%\bin\control.bat status
If the TADDM server is running, the following output is displayed:
------------------------------------
Discover: Started
GigaSpaces: Started
DbInit: Started
Tomcat: Started
Topology: Started
DiscoverAdmin: Started
Proxy: Started
EventsCore: Started

TADDM: Running
--------------------------------

Windows setup
TADDM supports two ways to discover Windows computer systems:
gateway-based discovery and SSH-based discovery.
v Gateway-based discovery requires a dedicated Windows computer system,
accessible through SSH, to serve as the gateway. All discovery requests go
through the gateway. The gateway uses Windows Management Instrumentation
(WMI) to discover the target Windows computer systems.
v SSH-based discovery does not require a dedicated Gateway computer system.
Instead, discovery uses a direct SSH connection to the target Windows computer
system.

Typically, gateway-based discovery is preferred over SSH-based discovery, because the setup of the gateway and WMI is easier than setting up SSH. This is because WMI is available by default on all Windows targets supported by TADDM. Other than the gateway computer (which requires an SSH server) there are no special software requirements for the Windows targets. However, discovery using SSH can be faster because a gateway is not involved in the discovery flow, and there is no need to deploy the WMI Provider.

Doing a direct discovery requires an SSH server on each Windows target computer.
In addition, direct discovery using SSH requires the .NET 1.1 Framework on each
Windows target. .NET Framework 1.1 is not installed by default on Windows
Server 2000.

For both types of discovery, the TADDM Windows discovery program, the TaddmTool.exe file, is used to perform the discovery. For discovery using a gateway, the TaddmTool program is deployed to the gateway during discovery initialization. For discovery using SSH, the TaddmTool program is deployed to each Windows target computer system. The TaddmTool program is a .NET application.

The following two properties in the collation.properties file control how Windows discovery decides whether to use a gateway or SSH to discover a particular Windows target (an example configuration follows this list).
v com.collation.AllowPrivateGateways=true
The AllowPrivateGateways property controls whether a Windows computer
system can be discovered directly using SSH. If this property is false, then only
Gateway-based discovery can be used.
v com.collation.PreferWindowsSSHOverGateway=false
The PreferWindowsSSHOverGateway property controls which type of discovery to use if a Windows computer system supports SSH. That is, even if a Windows computer system supports SSH, gateway-based discovery is used when this property is false. The PreferWindowsSSHOverGateway property is ignored if the AllowPrivateGateways property is false.
By default, based on the default values of these two properties, com.collation.AllowPrivateGateways and com.collation.PreferWindowsSSHOverGateway, TADDM is configured to use only gateway-based discovery.
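For example, to allow and prefer direct SSH discovery of Windows targets, both properties can be set in the collation.properties file as in the following sketch. Property changes typically take effect only after the TADDM server is restarted:

com.collation.AllowPrivateGateways=true
com.collation.PreferWindowsSSHOverGateway=true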

Whether you use a Windows Gateway with WMI or direct connect with SSH, the
information that is retrieved is identical. You need to decide which method is best
suited for your environment. The following list identifies some things you need to
consider when making this decision:

There are prerequisites for discovery using a gateway and WMI:
1. Requires a dedicated Windows Server 2003 computer system to serve as the
gateway.
2. The gateway should be in the same firewall zone as the Windows computers to
be discovered.
3. You must install a supported version of an SSH server on the gateway
computer system.
4. The gateway uses remote WMI to discover each Windows target. In addition,
a WMI Provider is automatically deployed to each Windows target computer
system during the discovery initialization. The WMI Provider is used to
discover data not included in the core WMI. Enable WMI on the Windows
target computer system to be discovered. By default, on most Windows 2000
and later systems, WMI is enabled.

There are prerequisites for discovery using SSH:


1. You must install a supported version of an SSH server on each Windows target
computer system.
2. You must install .NET Framework 1.1 on each Windows Server 2000 target
computer system.

Configuring Bitvise WinSSHD


You can use Bitvise WinSSHD to provide SSH access to Windows systems.

Before you begin

For gateway-based discovery, Bitvise WinSSHD must be installed on the gateway system; for direct SSH discovery, Bitvise WinSSHD must be installed on each Windows system.

Note: TADDM supports Bitvise WinSSHD 4.06 through 4.28 and 5.09 or later. If
you use TADDM with a different version of Bitvise, problems might occur.

Bitvise WinSSHD is available from http://www.bitvise.com/.

About this task

To configure Bitvise WinSSHD:


1. On the WinSSHD control panel, click Start to start the server.
2. Disable the host lockout feature.
3. On the WinSSHD control panel, click Settings → Session.
4. Set the following variables to 0:
v IP blocking - window duration
v IP blocking - lockout time
5. Create a user that is a member of the local Administrator’s group. Instead of a
domain user, use a local user.
6. If the user is a domain account, set up a virtual group and a virtual account.
For example, if there is an account with the itaddm name in the LAB2 domain,
complete the following steps:
a. Create a virtual group named itaddm:
1) In the WinSSHD control panel, type itaddm as the virtual account name.
2) Click Virtual account password and type the password.

3) Select Use default Windows account.
Do not change the default values in the other fields.
b. In the WinSSHD control panel, click Settings → Edit → View → Settings.
c. In the navigation tree, click Access Control.
d. Click Virtual Groups → Add.
e. Complete the following fields:
v For Group, type itaddm.
v For Windows account domain, type LAB2.
v For Windows account name, type itaddm.
v Select Login allowed.
Do not change the default values in the other fields.
f. In the WinSSHD control panel, click Settings → Access Control → Hosts → IP
rules.
g. Delete the 0.0.0.0/0 entry and replace it with an entry for the IP address of
the TADDM server. This change restricts SSH host access to the TADDM
server.

Configuring the Cygwin SSH daemon


You can use the Cygwin SSH daemon (sshd) to provide SSH access to Windows
systems.

About this task

For gateway-based discovery, the Cygwin SSH daemon must be installed on the
gateway system; for direct SSH discovery, the daemon must be installed on each
Windows system. Your Cygwin installation must include the following packages:
v From the admin category: cygrunsrv (version 1.17-1 or later).
v From the net category: opensshd (version 4.6p1-1 or later).

Cygwin is available from http://www.cygwin.com/.

To configure the Cygwin SSH daemon:


1. Start the cygwin bash shell.
2. From your system information, use the Cygwin mkpasswd utility to create an initial /etc/passwd file. You can also use the mkgroup utility to create an initial /etc/group file (a matching example follows these steps). See the Cygwin User’s Guide for more details.
For example, the following command sets up the password file, passwd, from the local accounts on your system:
mkpasswd -l > /etc/passwd
3. Run the ssh-host-config program setup.
4. Configure SSH. Answer Yes to all questions.
5. Start the SSH server by running the following command:
net start sshd
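As mentioned in step 2, the group file can be created in a similar way with the mkgroup utility. The following sketch assumes local accounts only; see the Cygwin User’s Guide for the options that your Cygwin version supports:

mkgroup -l > /etc/group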

Windows Management Instrumentation (WMI) dependency


TADDM relies on WMI to discover Windows Computer Systems. TADDM can be
configured to restart the WMI service if a problem is encountered with WMI.

If the WMI service is restarted, all WMI dependent services that were running
before the restart are also restarted. These restarts are controlled by the following
collation.properties settings:
com.collation.RestartWmiOnAutoDeploy=false
com.collation.RestartWmiOnAutoDeploy.1.2.3.4=false
Restart WMI if a WMI error is encountered during AutoDeploy of the
TADDM WMI Provider.
com.collation.RestartWmiOnFailure=false
com.collation.RestartWmiOnFailure.1.2.3.4=false
Restart WMI if a WMI error is encountered (except during AutoDeploy).

For more information about WMI-related properties in collation.properties, see the "Windows computer system sensor" topic in the TADDM Sensor Reference.

Note: The default value for WMI restart is false. Setting these values to true might provide more reliable Windows discovery. This must be weighed against the potential negative impact of the WMI service temporarily being stopped and restarted.
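As the paired property names above indicate, the restart behavior can also be scoped to a single discovery target by appending its IP address to the property name. The following sketch uses a hypothetical target address and assumes that the address-scoped value overrides the global value:

com.collation.RestartWmiOnFailure=false
com.collation.RestartWmiOnFailure.192.0.2.15=true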

Configuring a federated data source in the TADDM server


TADDM has a built-in, lightweight federation capability that allows data from DB2 or Oracle data sources to be quickly federated in custom queries.

Before you begin

You can use a text editor to edit an XML file that defines how TADDM locates, connects, and authenticates to the remote data source. After the federation information is defined and the TADDM server is restarted, the federated data can be accessed using the custom query function in the TADDM Product Console.

If you must federate with data sources that are not accessed using JDBC, a more advanced federation configuration is needed. Use the WebSphere Federation Server to federate with the non-JDBC data sources.

Important: Ensure that the federated data source that you are configuring is
started and available.

About this task

The following example illustrates how to add federated tables using the
lightweight federation capability which is built into TADDM:
1. Navigate to the following directory:
v For Linux, Solaris, AIX, and Linux on System z operating systems,
$COLLATION_HOME/etc/cdm/adapters
v For Windows operating systems, %COLLATION_HOME%\etc\cdm\adapters
2. Copy the existing views.xml file to a backup file. If you need to restore the
original file, use the backup file.

Important:
v Even though the views.xml file contains the comment "DO NOT EDIT THIS FILE", it is safe to edit this file when you have a backup copy.
v There are other XML files in the adapters directory. Do not edit these XML files; they must remain unchanged.
3. Edit the original views.xml file using the example below:
<bean id="Db2Example"
class="com.collation.proxy.api.edm.DataViewDefinition">
<!-- The view name -->
<property name="name" value="Db2Example"></property>
<!-- The name of the data adapter -->
<property name="adapter" value="SQLAdapter"></property>
<!-- All of the properties needed to create a data view -->
<property name="viewDefinition">
<props>
<!-- connection string commented out to disable example -->
<prop key="connection string">jdbc:db2://caesar:50000/sample</prop>
<prop key="user">db2inst</prop>
<!-- password changed to secure the e-mail -->
<prop key="password">xyzzy</prop>
<prop key="sql query">select * from girard.license</prop>
</props>
</property>
</bean>
4. After you update and save the changes to this file, stop and restart the TADDM server. Currently, each view definition must define the database and user connection information. An additional example for an Oracle data source follows these steps.
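The same bean structure can be used for an Oracle data source. The following sketch is hypothetical: the bean id and view name OracleExample, the connection string, user, password, and SQL query are placeholders that must be replaced with values for your environment, and the sketch assumes that the Oracle JDBC driver is available to the TADDM server:

<bean id="OracleExample"
    class="com.collation.proxy.api.edm.DataViewDefinition">
    <!-- The view name -->
    <property name="name" value="OracleExample"></property>
    <!-- The name of the data adapter -->
    <property name="adapter" value="SQLAdapter"></property>
    <!-- All of the properties needed to create a data view -->
    <property name="viewDefinition">
    <props>
    <prop key="connection string">jdbc:oracle:thin:@dbhost:1521:ORCL</prop>
    <prop key="user">taddm_read</prop>
    <prop key="password">changeMe</prop>
    <prop key="sql query">select * from app_owner.license</prop>
    </props>
    </property>
</bean>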

Accessing federated data sources using the Domain Manager


After your federated data sources are configured, you can use the custom query function, which is part of the Domain Manager, to query the federated external sources.

About this task

To access your federated data source data through the Custom Query item in the
Domain Manager, complete the following steps:
1. Log in to the Domain Manager.
2. From the Analytics tab, select Custom Query.
3. In the Custom Query pane, in the DataSource list in the Component Type section, perform the following steps:

Important: The list of federated data sources in the DataSource list should be
the views that you added to the views.xml file.
a. Select localhost as the TADDM component from which to query. The
Component list is populated with the components that are associated with
localhost.
b. In the Component list, select the component to query from the localhost
DataSource and click Add. The selected component is added to the input
box on the right side of the Component Type section.
c. Select externalAdapter as the federated data source component with which
to federate. The Component list is populated with the components that are
associated with externalAdapter DataSource.
d. In the Component list, select the federated data source component to
federate with from the externalAdapter DataSource and click Add. The
selected component is added to the input box on the right side of the
Component Type section.
4. In the Attributes section of the Custom Query pane, create a query that joins
the TADDM table data with the federated data source data.

Important: Any time a join is performed with a federated data source, the federated data source must be located on the right-hand side of the query. Only the local TADDM components can be specified on the left-hand side of the query.
5. To see the report results, click Run Query.

What to do next

For more information on running queries, see Creating a custom query report in
IBM Tivoli Application Dependency Discovery Manager User’s Guide.

Backing up data
Back up your data on a regular basis so you can recover from a system failure.

Before you begin


Before you back up data, stop the TADDM server.

About this task

To back up files for the TADDM server, save all the files in the directory where
you installed the TADDM server:
v For Linux, Solaris, AIX, and Linux on System z operating systems, the default
path to the directory is /opt/IBM.
v For Windows operating systems, the default path to the directory is C:\opt\IBM.
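For example, on a Linux system that uses the default installation directory, an archive of the installation files might be created with a command like the following sketch; the /backup target path is a placeholder:

tar -czf /backup/taddm-server-backup.tar.gz /opt/IBM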

What to do next

To back up the database files, use the documentation provided by the database vendor.

Restoring data
Following a system failure, you can restore the configuration, data, and database
files. As a result, you can resume operation from the point of the last backup prior
to the failure.

About this task

To restore data from backup media, complete the following steps:


1. Do one of the following:
v For Linux, Solaris, AIX, and Linux on System z operating systems, restore the /opt/IBM directory, and restart TADDM.
v For Windows operating systems, restore the C:\opt\IBM directory, and restart TADDM.
2. Locate the backup copy of the data files.
3. Open a command prompt window.
4. Navigate to the directory where you installed the TADDM server.
5. Copy the backup copy of the data files to the installation directory.
6. Close the command prompt window.
7. Start the TADDM server.

What to do next

If the database is affected by the system failure, restore the database files using the
documentation from the database vendor.

Chapter 4. Configuring your environment for discovery
Complete these steps to optimize the information that TADDM gathers from your
environment during discoveries.

About this task

The specific configuration tasks required depend upon the level of discovery you
need to support in your environment.

What to do next

In addition to configuring your environment for discovery, you must also complete
any required TADDM sensor setup and access list configuration.

For information about how to run a discovery, including defining a scope and
setting a schedule, refer to the Tivoli Application Dependency Discovery Manager
User’s Guide.

Discovery overview
Discovery is a multilevel process that collects configuration information about the
IT infrastructure, including deployed software components, physical servers,
network devices, virtual LANs, and host data that is used in the runtime
environment.

In general, discovery follows an iterative process:


1. First, the discovery engine attempts TCP connections to every address in the scope to identify the devices that are present.
2. For each discovered IP interface, the discovery engine starts a sensor to
discover and categorize the component type by matching it to the appropriate
signatures in the data model. The discovery sensor queries the component for
configurations and dependencies.
3. Additional sensors are started until the entire infrastructure is discovered. For
example, a host discovery triggers the discovery of applications and services
that reside on the host system.

When the discovery is complete, TADDM processes the discovered component


data to generate a topological representation of the infrastructure. Subsequent
discoveries update the topologies. In addition, a change history of the
infrastructure configuration and dependencies is maintained.

Related tasks
“Configuring for Level 1 discovery” on page 37
Some minimal configuration is required for Level 1 discovery (credential-less
discovery), which scans the TCP/IP stack to gather basic information about active
computer systems.
“Configuring for Level 2 discovery” on page 38
In addition to the requirements for Level 1 discovery, Level 2 discovery requires
additional configuration to support discovery of detailed host configuration
information.
“Configuring for Level 3 discovery” on page 43
In addition to the requirements for Level 2 discovery, Level 3 discovery requires
additional configuration to support discovery of application configuration and host
data.

Discovery profiles
You can control what is discovered by using discovery profiles.

A discovery profile defines a set of options for discovery, including sensor discovery modes, discovery scope, and configuration details for individual sensors.
You can use profiles to manage multiple configurations of the same sensor, pick
the appropriate configuration based on a set of criteria, and manage sets of
configurations of different sensors to be applied on a single discovery run.

By selecting the appropriate profile, you can control the depth of discovery, or
discovery level. There are four discovery profiles. Three of them correspond to
discovery levels, and one is a utilization profile:
Level 1
Level 1 discovery (also called a "credential-less" discovery) discovers basic
information about the active computer systems in the environment. This
level of discovery uses the Stack Scan sensor and does not require any
operating-system or application credentials.
Level 1 discovery is very shallow and captures only the host name,
operating system, IP address, fully qualified domain name, and Media
Access control (MAC) address of each discovered interface. (MAC address
discovery is limited to Linux on System z and Windows systems). Level 1
discovery does not discover subnets; for any discovered IP interfaces that
do not belong to an existing subnet discovered during Level 2 or Level 3
discovery, new subnets are created based on the value of the
com.collation.IpNetworkAssignmentAgent.defaultNetmask property in
the collation.properties configuration file.
Level 2
Level 2 discovery captures detailed information about the active computer
systems in the environment, including operating-system details and
shallow application information (depending on the value of the
com.collation.internalTemplatesEnabled property in the
collation.properties configuration file). This level of discovery requires
operating-system credentials.
Level 2 discovery captures application names, as well as computer systems
and ports associated with each running application. If an application has
established a TCP/IP connection to another application, this is captured as
a dependency.

Level 3
Level 3 discovery captures information about the entire application
infrastructure, including physical servers, network devices, virtual LANs,
host configuration, deployed software components, application
configuration, and host data used in the environment. This level of
discovery requires application operating-system and application
credentials.
Utilization discovery
Utilization discovery captures utilization information for the host system.
To run a utilization discovery, you must have computer system credentials.

To run a discovery, you must specify a profile; if no profile is specified, discovery uses the Level 3 discovery profile by default. (You can change the default profile in the Product Console.)

Note: Level 2 and Level 3 discoveries capture more detailed information than
Level 1 discoveries. Therefore, if objects created during a Level 2 or Level 3
discovery match objects previously created by a Level 1 discovery, the Level
1 objects are replaced by the newly created objects. This causes the object
GUIDs to change, so in general, Level 1 data should not be used for
integration with other products.

For detailed information on discovery profiles, see Best practices for Discovery
Profiles and User Scenarios on the TADDM wiki at http://www.ibm.com/developerworks/wikis/display/tivoliaddm/A+Flexible+Approach+to+Discovery

For instructions about how to use discovery profiles, see the section on Using
discovery profiles in the Tivoli Application Dependency Discovery Manager User’s Guide.
Related tasks
“Configuring for Level 1 discovery”
Some minimal configuration is required for Level 1 discovery (credential-less
discovery), which scans the TCP/IP stack to gather basic information about active
computer systems.
“Configuring for Level 2 discovery” on page 38
In addition to the requirements for Level 1 discovery, Level 2 discovery requires
additional configuration to support discovery of detailed host configuration
information.
“Configuring for Level 3 discovery” on page 43
In addition to the requirements for Level 2 discovery, Level 3 discovery requires
additional configuration to support discovery of application configuration and host
data.

Configuring for Level 1 discovery


Some minimal configuration is required for Level 1 discovery (credential-less
discovery), which scans the TCP/IP stack to gather basic information about active
computer systems.

About this task

Level 1 discovery uses the Stack Scan sensor. For detailed information about Nmap
and the Stack Scan sensor, see the “Stack Scan sensor” topic in the TADDM Sensor
Reference.

Level 1 discovery uses the IBM Tivoli Monitoring sensor. For detailed information
about configuring the IBM Tivoli Monitoring sensor, see the “IBM Tivoli
Monitoring sensor” topic in the TADDM Sensor Reference.

For Level 1 discovery, you must configure the network devices in your
environment that you want the TADDM server to discover. To do this, complete
the following steps:
1. Depending on your SNMP version, record the following information for use
with the TADDM server:
v For SNMP V1 and V2: record the SNMP MIB2 GET COMMUNITY string.
v For SNMP V3: record the SNMP user name and password.
2. Assign permission for MIB2 System, IP, Interfaces, and Extended Interfaces.
Related concepts
“Discovery overview” on page 35
Discovery is a multilevel process that collects configuration information about the
IT infrastructure, including deployed software components, physical servers,
network devices, virtual LANs, and host data that is used in the runtime
environment.
“Discovery profiles” on page 36
You can control what is discovered by using discovery profiles.

Configuring for Level 2 discovery


In addition to the requirements for Level 1 discovery, Level 2 discovery requires
additional configuration to support discovery of detailed host configuration
information.
Related concepts
“Discovery overview” on page 35
Discovery is a multilevel process that collects configuration information about the
IT infrastructure, including deployed software components, physical servers,
network devices, virtual LANs, and host data that is used in the runtime
environment.
“Discovery profiles” on page 36
You can control what is discovered by using discovery profiles.

Configuring target computer systems


If you want TADDM to discover the target computer systems in your environment,
those computer systems must be configured with the minimum requirements for
discovery.

The following list shows the minimum requirements that apply to the target
computer systems that you want TADDM to discover:
Secure Shell (SSH)
You can use either OpenSSH, or the vendor-supplied version of SSH that
comes with the operating system. For more information on Windows
operating systems, see “Windows Management Instrumentation (WMI)
dependency” on page 30.
LiSt Open Files (lsof)
To provide complete information on dependencies, install the LiSt Open
Files (lsof) program on all target computer systems according to the
requirements in Table 3 on page 39.

Table 3. Requirements for running the lsof program

AIX
    Requirement for running the lsof program (one of the following must be met):
    v The setuid (set user ID) access right flag must be set for the lsof program file.
    v The user must be in the system and sys groups, which allow read access to the /dev/mem and /dev/kmem files.
    v The user must use the sudo command to run the lsof program.
    Where to obtain the lsof program:
    v http://ftp.unicamp.br/pub/unix-tools/lsof/binaries/aix/
    v http://www.ibm.com/developerworks/aix/library/au-lsof.html
    v http://www-03.ibm.com/systems/p/os/aix/linux/toolbox/download.html
    v http://www.bullfreeware.com/

HPUX
    Requirement for running the lsof program (one of the following must be met):
    v The setuid (set user ID) access right flag must be set for the lsof program file.
    v The user must use the sudo command to run the lsof program.
    Where to obtain the lsof program:
    v http://hpux.connect.org.uk/hppd/cgi-bin/search?package=&term=/lsof
    v http://www.ibm.com/developerworks/aix/library/au-lsof.html

Linux
    Requirement for running the lsof program (one of the following must be met):
    v The setuid (set user ID) access right flag must be set for the lsof program file.
    v The user must use the sudo command to run the lsof program.
    Where to obtain the lsof program:
    v http://rhn.redhat.com

Solaris
    Requirement for running the lsof program (one of the following must be met):
    v The user must be in the sys group.
    v The setgid (set group ID) access right flag must be set for the lsof program file.
    v The user must use the sudo command to run the lsof program.
    Where to obtain the lsof program:
    v http://sunfreeware.com

Tru64
    Requirement for running the lsof program (one of the following must be met):
    v The setuid (set user ID) access right flag must be set for the lsof program file.
    v The user must use the sudo command to run the lsof program. For TADDM, the Tru64 dop command does not serve the same function that the sudo command does.
    Where to obtain the lsof program:
    Refer to the Open Source disc provided with the operating system.
Also, because the lsof program is dependent on the version of the
operating system for which it was compiled, ensure that you install the
correct lsof program for your version. For example, if you get the following
type of message, the correct lsof program is not installed:
sushpatel79: $ lsof -nP -i | awk '{print $2, $9, $10}' | sort -k 2 | uniq -f 1
lsof: WARNING: compiled for AIX version 4.3.2.0; this is 5.1.0.0.
10352 *
24770 * (CLOSED)
12904 * (LISTEN)

SUNWscpu
(Solaris environment only)
To provide complete information on processes, install the SUNWscpu
(Source Compatibility) package.

For other commands that use sudo, see “Commands that might require elevated
privilege” on page 76.
Related reference
“Commands that might require elevated privilege” on page 76
These properties specify the operating system commands used by TADDM that
might require elevated privilege, root or superuser, to run on the target system.

Creating the service account


You must create a service account on all computer systems that are discovered
using SSH key-based and password-based connections. This is the primary method
for discovering the computer systems (servers) in your network.

About this task

To simplify the discovery setup, create the same service account on each target
computer system that you want to discover. The service account must allow access
to all resources on the target computer system that TADDM needs to discover.

TADDM requires read-only access to the target computer system. A service account with non-root privilege can be used. However, some operating system commands used during discovery might require elevated privilege, root or superuser, in order to run on the target computer system. There are a couple of different strategies that you can use to allow this elevated privilege. For more information on elevated privilege, see "Commands that might require elevated privilege" on page 76.
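One common strategy is to allow the service account to run only the specific commands that need elevated privilege through sudo. The following sudoers entry is a hypothetical sketch for the lsof command; the account name, command path, and whether a password is required depend on your environment and security policy:

coll ALL=(root) NOPASSWD: /usr/sbin/lsof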

Complete one of the following procedures to create a service account on the target
computer system:
1. For a Linux, Solaris, AIX, and Linux for System z operating system, assume
that the service account name is coll and use the following commands to create
the service account:
# mkdir -p /export/home/coll
# useradd -d /export/home/coll -s /bin/sh \
-c "Service Account" -m coll
# chown -R coll /export/home/coll
2. For a Windows computer system, create a service account that is a member of
the local administrator’s group. This account can be a local account or a
domain account. Because TADDM relies on WMI for discovery, the account
must have access to all WMI objects on the local computer. The service account
must be created on the Windows Gateway and all target Windows computer
systems.

Related reference
“Commands that might require elevated privilege” on page 76
These properties specify the operating system commands used by TADDM that
might require elevated privilege, root or superuser, to run on the target system.

Secure Shell protocol overview


The TADDM server can connect to either OpenSSH (version 1 or 2) or
vendor-supplied SSH that comes with the operating system.

The TADDM server supports the following authentication methods:


v SSH2 key-based login (RSA and DSA keys) and SSH1 key-based login (RSA)
v User name and password using SSH2, and user name and password using SSH1

Although you can use any of the authentication methods, the SSH2 key-based
login is preferred. The server automatically tries each method in the order listed
previously and uses the first method that works successfully. The TADDM server
then uses the same method with that host for the entire discovery run.

Creating key pairs using Secure Shell


You can create a public/private key pair using the Secure Shell protocol (SSH) for
key-based login with the TADDM server.

About this task

Depending on the version of SSH that you are using, SSH key-based login uses the
keys shown in Table 4:
Table 4. SSH keys
SSH Version/Algorithm   Private Key                  Public Key
Openssh/SSH2/RSA        $HOME/.ssh/id_rsa            $HOME/.ssh/id_rsa.pub
Openssh/SSH2/DSA        $HOME/.ssh/id_dsa            $HOME/.ssh/id_dsa.pub
Openssh/SSH1/RSA        $HOME/.ssh/identity          $HOME/.ssh/identity.pub
Commercial/SSH2/RSA     $HOME/.ssh2/id_dss_1024_a    $HOME/.ssh2/id_dss_1024_a.pub

You can also generate a public/private key pair using OpenSSH, version 2. To
generate a public/private key pair using an SSH program other than OpenSSH or
another version of OpenSSH, refer to the SSH documentation. To generate a
public/private key pair using OpenSSH, version 2, complete the following steps:
1. Log in as the owner of the TADDM server.
2. To generate the SSH key, enter the following command:
$ ssh-keygen -t rsa
Accept the command defaults. TADDM supports key pairs with or without a passphrase.
3. On each target computer system where you want to allow for a key-based login, insert the contents of the id_rsa.pub file into the $HOME/.ssh/authorized_keys file for the service account; a sample command follows these steps. Certain SSH2 implementations generate the keys in a directory other than $HOME/.ssh. If your SSH implementation generates the keys in a different directory or with a different name, copy, link, or move the private key file to $HOME/.ssh/id_rsa or $HOME/.ssh/id_dsa, depending on the algorithm.
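As referenced in step 3, the public key can be appended to the service account's authorized_keys file over an existing password login. The following sketch uses a placeholder host name and the coll service account:

cat $HOME/.ssh/id_rsa.pub | ssh coll@target.example.com \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'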

What to do next

For more information on SSH, refer to your SSH documentation, or go to the following Web sites:
Table 5. Additional information for SSH
SSH       More information
OpenSSH   http://www.openssh.org/manual.html
SSH.com   http://www.ssh.com/support/documentation

Setting up password authentication with Secure Shell


To set up a user name and password using the Secure Shell (SSH) password
authentication method, from the Product Console add an access list entry. For the
access list entry, specify the user name and password to log in to the target
computer system that you want discovered.

Before you begin

Before you use the Product Console, ensure that all services that are listed under
the Administrator’s Console on the TADDM Launch page have started.

Start the Product Console with Establish a secure (SSL) session option selected
before you set up the access list. This option encrypts all data, including access list
user names and passwords, before the data is transmitted between the Product
Console and the TADDM server.

About this task

To set up password authentication with Secure Shell (SSH), complete the following
steps:
1. Start the Product Console. Select the Establish a secure (SSL) session check
box.
2. Add a new access list entry and specify the login name and password. See the
section on Adding a new access list entry in the TADDM User’s Guide for
details.

Configuring System p and System i


Discovery of an IBM Power5 technology-based system (System p® or System i®)
and its logical partitions is done through a management console. TADDM supports
two types of management consoles: the Hardware Management Console (HMC)
and the Integrated Virtualization Manager (IVM).

TADDM discovers the management console using SSH. The discovery scope must
include the IP address of the management console and the Access List must
include an entry of type Computer System with the proper credentials (user name
and password) specified.

In addition to the user credentials, the discovery user must be defined on the
management console with the following minimal permissions:
v Hardware Management Console (HMC)

– For an HMC management console, a user based on the hmcoperator role is
needed. For example, create a new role called taddmViewOnly based on the
hmcoperator role. In addition, the following command line tasks must be
assigned to the new role:
Managed System
Needed to use the lshwres and lssyscfg commands
Logical Partition
Needed to use the lshwres, lssyscfg, and viosvrcmd commands.
HMC Configuration
Needed to use the lshmc command.
v Integrated Virtualization Manager (IVM).
For an IVM management console, a user with the View Only role is needed.

Configuring for Level 3 discovery


In addition to the requirements for Level 2 discovery, Level 3 discovery requires
additional configuration to support discovery of application configuration and host
data.

About this task

Level 3 discovery uses the Citrix server sensor. For detailed information about the
Citrix server sensor, see the “Citrix server sensor” topic in the TADDM Sensor
Reference.
Related concepts
“Discovery overview” on page 35
Discovery is a multilevel process that collects configuration information about the
IT infrastructure, including deployed software components, physical servers,
network devices, virtual LANs, and host data that is used in the runtime
environment.
“Discovery profiles” on page 36
You can control what is discovered by using discovery profiles.

Configuring Web and application servers for discovery


You must configure the Web servers and application servers in your environment
that you want the TADDM server to discover.

This section provides the steps for configuring Web and application servers.

The Microsoft IIS server does not require configuration. There are no access
requirements. The user account that is already established on the host is sufficient.

For the Apache Web server, ensure that the TADDM server has read permission to
the Apache configuration files, such as the httpd.conf file.

For the Sun iPlanet Web server, ensure that the TADDM server has read
permission to the iPlanet configuration files.

For Lotus® Domino® servers, ensure that the following requirements of the
TADDM software are met in order to access Domino servers:
v The IIOP server must be running on at least one Domino server for each
Domino domain.

v The list of IIOP servers must be added to the $COLLATION_HOME/etc/discover-sensors/LotusDominoInitialSensor.xml file.
v For each of the IIOP servers, you must have at least one valid user ID and
password combination.
v The user ID on the IIOP server must have read permission to the names.nsf file.
v Ensure that the user ID is in the following fields in the server document:
– AccessServer
– Run restricted LotusScript/Java agents

Important: On the Lotus Domino system, ensure that a user account is configured with proper access to the resources that are discovered, for example, files and databases.

Enabling a JBoss system for discovery


There are two steps you need to complete to enable a JBoss system for discovery.

About this task

To enable a JBoss system for discovery, complete the following steps:


1. Copy the jbossall-client.jar and jboss-jmx.jar files from a JBoss
distribution to the $COLLATION_HOME/lib/jboss/402 directory.
2. Check that there is a JBoss system login ID and password for the Product Console. The ID and password are required by the TADDM server.

Configuring an Oracle Application server


The discovery of an Oracle Application server uses JAR files that are included with
the Oracle Application server. These JAR files are not included in the TADDM
server installation.

About this task

There is a property in the $COLLATION_HOME/etc/collation.properties file for pointing to an existing installation of the Oracle Application server. The following text is in the $COLLATION_HOME/etc/collation.properties file:
# Location of the root directory for Oracle Application Server on
# the Tivoli Application Dependency Discovery Manager server
# 1. An example is /home/oracle/product/10.1.3/OracleAS_1
# 2. A relative directory is relative to com.collation.home
# 3. This directory (and its subdirectories) must be accessible
#    for the user under which the server runs, usually the collation user.
# 4. Ignore if you do not intend to discover an Oracle Application server.

To point to an existing installation of the Oracle Application server, edit the following line in the $COLLATION_HOME/etc/collation.properties file:
com.collation.oracleapp.root.dir=lib/oracleapp

In an Oracle Application server installation, the directories that contain the required JAR files are owned by the oracle user with permissions rwx------. This means that no user other than the owner (usually the oracle user) can access these directories. If the TADDM server is run using the oracle user, these directories are accessible. However, if this is not the case, you must change the directory permissions of the following directories to 711 so that all users can access them (example commands follow this list):
v <OracleAppServerHome>

v <OracleAppServerHome>/j2ee
v <OracleAppServerHome>/j2ee/home
v <OracleAppServerHome>/opmn
v <OracleAppServerHome>/opmn/lib, where an example of
<OracleAppServerHome> is /home/oracle/product/10.1.3/OracleAS_1
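As mentioned above, the following commands are a sketch of changing those directory permissions, using the example installation path; run them as the oracle user or another user with sufficient authority:

chmod 711 /home/oracle/product/10.1.3/OracleAS_1
chmod 711 /home/oracle/product/10.1.3/OracleAS_1/j2ee
chmod 711 /home/oracle/product/10.1.3/OracleAS_1/j2ee/home
chmod 711 /home/oracle/product/10.1.3/OracleAS_1/opmn
chmod 711 /home/oracle/product/10.1.3/OracleAS_1/opmn/lib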

For discovery of an Oracle Application Server, you must set the com.collation.platform.os.ignoreLoopbackProcesses property in the $COLLATION_HOME/etc/collation.properties file to true. In the $COLLATION_HOME/etc/collation.properties file, edit the following line:
com.collation.platform.os.ignoreLoopbackProcesses=true

Configuring the Microsoft Exchange server


You must configure the Microsoft Exchange server that you want the TADDM
server to discover.

About this task

To discover the Microsoft Exchange Server, the Microsoft Exchange Management service must be running on the target Windows system. The Windows service ID
for the TADDM service account must be created on the Windows system on which
the Microsoft Exchange server is running. The Windows service ID must have full
permission (Execute Methods, Full Write, Partial Write, Provider Write, Enable
Account, Remote Enable, Read Security, and Edit Security) to the following WMI
namespaces:
v Root\CIMV2
v Root\CIMV2\Applications\Exchange
v Root\MicrosoftExchangeV2

If the Windows service ID for the TADDM service account has sufficient permissions to discover a Microsoft Exchange server, the sensor uses the Windows service ID, and a separate Microsoft Exchange server access list entry is not required.

If the Windows service ID for the TADDM service account does not have sufficient
permissions to discover a Microsoft Exchange server, you must complete the
following steps to create a separate Microsoft Exchange server access list:
1. In the Functions pane of the Product Console, click Discovery → Access List to
display the Access List pane.
2. Click Add to display the Access Details pane.
3. In the Component Type list, select Messaging Servers.
4. In the Vendor list, select Microsoft Exchange Server.
5. Click OK. The updated information is displayed in the Access List pane.

Configuring VMware servers


When properly configured, the TADDM discovery process returns information
about the following versions of VMware servers: 2.5x and 3.0.

About this task

To configure VMware servers, versions 2.5x and 3.0 for discovery, set the read-only
permissions for the non-root TADDM service account in the VMware ESX console.

As an alternative, you can use the root user for discovery. For more information
about VMware servers, you can search the topics on the VMware community
board at http://www.vmware.com/community/.

Database set up for discovery


To support discovery of your databases, you must create DB2, Oracle, or Sybase
database users for the TADDM server. The TADDM server uses these database
users to collect information about the databases that are running on remote hosts.

Creating a DB2 user


To more completely discover DB2 instances on remote computer hosts, create a
DB2 user.

About this task

To create a DB2 user, complete the following steps:


1. Create a user with access to the following items:
v The DB2 database server
v All the instances on the DB2 database server that need to be discovered
2. Configure this DB2 user to have SSH access to the system that hosts the DB2
database server.
3. In the TADDM server access list, complete the following steps to add the user
name and password for the DB2 user:
a. In the Product Console toolbar, click Discovery → Access List. The Access
List pane is displayed.
b. Click Add. The Access Details window is displayed.
c. In the Access Details window, complete the following information:
1) In the Component Type list, select Database.
2) In the Vendor list, select DB2.
3) Enter the Name, User Name, and Password for the DB2 user.
d. Click OK to save your information. The Access List pane is displayed with
the new information.
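
The following is a minimal sketch of steps 1 and 2 on a Linux DB2 server; the
instance name db2inst1, the database name CUSTDB, and the user name taddmusr
are assumptions, and you might need to grant additional privileges depending on
what must be discovered:

useradd -m taddmusr
passwd taddmusr
su - db2inst1
db2 connect to CUSTDB
db2 grant connect on database to user taddmusr
db2 connect reset

Because taddmusr is an operating system account, the same account can typically
be used for the SSH access described in step 2, provided that SSH login is
permitted for it.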

Creating a Microsoft SQL Server user


To more completely discover Microsoft SQL Server instances on remote computer
hosts, create a Microsoft SQL server user.

About this task

To create a Microsoft SQL server user, complete the following steps:


1. Create a Microsoft SQL Server user with db_datareader role privileges. This
might need to be completed by the Microsoft SQL Server administrator. (See the
example statements after these steps.)
2. In the Product Console, complete the following steps to add the user name and
password for the Microsoft SQL server user in the TADDM server access list:
a. In the toolbar, click Discovery → Access List. The Access List pane is
displayed.
b. Click Add. The Access Details window is displayed.
c. In the Access Details window, enter the following information:
1) In the Component Type list, select Database.
2) In the Vendor list, select Microsoft SQL server.

3) Enter the Name, User Name, and Password.
d. Click OK to save your information. The Access List pane is displayed with
the new information.
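
For example, the following Transact-SQL statements sketch step 1, assuming SQL
Server authentication, a login named taddm_user, and a database named APPDB;
repeat the database-level statements for each database that you want TADDM to
discover:

CREATE LOGIN taddm_user WITH PASSWORD = 'choose-a-strong-password';
USE APPDB;
CREATE USER taddm_user FOR LOGIN taddm_user;
EXEC sp_addrolemember 'db_datareader', 'taddm_user';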

Creating an Oracle user


To more completely discover Oracle instances on remote computer hosts, create an
Oracle user.

About this task

To create an Oracle user, complete the following steps:


1. Create an Oracle user with SELECT_CATALOG_ROLE privileges. This might
need to be completed by the Oracle administrator.
For example, use the following commands to create the Oracle user:
create user collation identified by collpassword;
grant connect, select_catalog_role to collation;
2. In the Product Console, complete the following steps to add the user name and
password for the Oracle user in the TADDM server access list:
a. In the toolbar, click Discovery → Access List. The Access List pane is
displayed.
b. Click Add. The Access Details window is displayed.
c. In the Access Details window, complete the following information:
1) In the Component Type list, select Database.
2) In the Vendor list, select Oracle.
3) Enter the Name, User Name, and Password for the computer.
d. Click OK to save your information. The Access List pane is displayed with
the new information.

Creating a Sybase user


To completely discover Sybase ASE on remote computer hosts, create a Sybase user
assigned to an appropriate role.

About this task

To create a Sybase user, complete the following steps:

Use the following command to make an existing Sybase login (IBM, in this example)
a member of the sa_role role:
sp_role "grant",sa_role,IBM
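
If the login does not exist yet, the full sequence might look like the following
sketch, run with the isql utility as a user that can manage logins and roles; the
login name IBM and the password are placeholders:

sp_addlogin IBM, "placeholder_password"
go
sp_role "grant", sa_role, IBM
go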

Ensure that the Sybase IQ user is a member of DBA. If the Sybase IQ user is not a
member of DBA, the Sybase IQ database-specific information cannot be found.

Chapter 5. Creating a Discovery Library store
A Discovery Library store is a directory or folder on a computer in the data center.
The store represents the common location for all Discovery Library Adapters to
write the XML files that contain resource information. XML data files to be bulk
loaded into a TADDM system are placed in the Discovery Library store.

About this task

Typically, the Discovery Library store is located on the TADDM server. If you do
not set up the Discovery Library store on the TADDM server, then you must make
sure the TADDM bulk load program that runs on the TADDM server can access
the Discovery Library store. If using a remote system for the store, there is no
requirement as to the particular computer or operating system that acts as the
Discovery Library store. The computer that hosts the Discovery Library store does
not need to be exclusive. Other applications can run on the same computer that
hosts the Discovery Library store. Each Discovery Library Adapter writes XML
files that contain resource information in a particular XML format called the
Identity Markup Language (IdML). Any XML file that is written in the IdML
format is commonly referred to as a book. For information on the IdML
specification and additional details on the Discovery Library store, refer to the
Discovery Library Adapter Developer’s Guide.

To create the Discovery Library store, complete the following steps:


1. Create a directory to store the XML files on a computer, with a distinct
directory name (for example, c:\IBM\DLFS). Optionally, you can create
subdirectories in the main discovery library store for each DLA that you will
use.
2. Set up a File Transfer Protocol Server (FTP) with at least one user ID to allow
write, rename, and read access to the directory that stores the Discovery
Library XML files. If you are not using FTP to transfer the XML files to the
discovery library store, ensure that the tool you use and the user ID used to
run the tool have write permissions to the discovery library store directory.
3. Ensure that the various Discovery Library Adapters have access to the name of
the system (host name) that hosts the Discovery Library store, since most
Discovery Library Adapters will copy XML files to the Discovery Library store.
The Discovery Library Adapters also need the user ID and password to connect
to the FTP server.
4. Ensure that the various Discovery Library Adapters have the user ID and
password to connect to the FTP server.
5. If the Discovery Library Adapter does not use FTP, copy the XML files (books)
that you want the bulk loader program to access into the Discovery Library
store directory. The directory must be accessible by the bulk loader program.
It is the responsibility of the book writers and the administrator to get the
books into the Discovery Library store. One way that you can do this is to set
up a cron job that sends the produced IdML books to the Discovery Library
store using FTP, as in the sketch that follows these steps.
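
The following is an illustrative sketch of such a cron job; the directory
/opt/dla/books, the host dlfs.example.com, the user dlfsuser, and the schedule are
assumptions, and real scripts should add error handling and avoid storing
passwords in clear text:

# crontab entry on the DLA host (runs nightly at 01:30)
30 1 * * * /opt/dla/bin/send_books.sh

# /opt/dla/bin/send_books.sh
#!/bin/sh
cd /opt/dla/books || exit 1
ftp -n dlfs.example.com <<EOF
user dlfsuser dlfspassword
binary
prompt
mput *.xml
quit
EOF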



What to do next

If you are creating a Discovery Library store and want to set up a TADDM
Domain Database to contain Discovery Library Adapter books, a local drive on the
TADDM Domain Server can be the networked Discovery Library store. This
directory should be defined in the following file on the TADDM Domain Server
where the data is loaded:
v For Linux, Solaris, AIX, and Linux on System z operating systems:
$COLLATION_HOME/etc/bulkload.properties
v For Windows operating systems: %COLLATION_HOME%\etc\bulkload.properties
If you have multiple TADDM Domain Servers, configure the correct bulk loader
program to access the corresponding shared directory. The bulk loader does not
delete XML files from the Discovery Library store. You must maintain the files in
the discovery library store and ensure that there is enough disk space on the server
for the files in the directory. If new XML files are added to the directory frequently,
you should regularly clean up the directory.

If you have a TADDM Enterprise Domain Database environment, you must choose
from the following options:
v If the scope of books matches exactly with each TADDM Domain Server, load
each book into the matching TADDM Domain Server.
v If the scope of books does not match exactly with each TADDM Domain Server,
load all books into the TADDM Enterprise Domain Server.

Discovery Library Adapters


A Discovery Library Adapter (DLA) is a software program that extracts data from
a source application, such as IBM Tivoli Monitoring, IBM Tivoli Business Service
Manager, IBM Tivoli Composite Application Management (ITCAM), and so on. You
must create a DLA store in order to use the bulk loader program.

You can see the Tivoli collection of books that can load the TADDM database with
data from other Tivoli products on the IBM Tivoli Open Process Automation
Library (OPAL) Web site at: http://catalog.lotus.com/wps/portal/tccmd.

DLAs are specific to a particular product, because each product has a distinct
method of accessing the resources from the environment. The configuration and
installation of a DLA is different for every application. A typical DLA is installed
on a system that has access to the data of a particular application. For example, the
DLA for IBM Tivoli Monitoring is installed on a computer that has access to the
IBM Tivoli Monitoring enterprise management system database. All DLAs are run
using the command-line interface and can be scheduled to run using any type of
scheduling program in your environment (for example, cron).

You can create a DLA to extract information from existing products or databases in
your environment. For more information on how to create a Discovery Library
Adapter, refer to the Discovery Library Adapter Developer’s Guide.

IdML schema
The discovery library uses an XML specification called Identification Markup
Language (IdML) to enable data collection. Access to the IdML code is provided
through the discovery library adapters.

The XML Schema Definition (XSD) describes the operations that are necessary to
take data about resource and relationship instances from an author and instantiate
it into the repository of a reader. This schema defines the operations that occur on
instances of resources and relationships. To facilitate future model versions and
updates, this schema references an external schema, the Common Data Model, to
define the resource and relationships. All files that are in the Discovery Library
conform to the IdML schema. Books in the Discovery Library that do not validate
against the IdML schema are in error and cannot be used by readers. TADDM is
an example of a reader.

The IdML schema is designed to separate the operations from the model
specification to enable the schema to handle updates to the model specification
without changing the IdML schema. The TADDM reader treats the individual
elements within the operations as a transaction.

TADDM XML and IdML

There are some differences between IdML files and TADDM XML files. The
following table summarizes the differences.
Table 6. Differences between IdML books and TADDM XML files
v IdML books: IdML is a standard that supports objects defined in the Common
  Data Model. ACLs, users, scopes, and schedules are not supported.
  TADDM XML files: TADDM XML is an application format that supports objects
  defined in the Common Data Model and in TADDM. ACLs, users, scopes, and
  schedules are supported.
v IdML books: Operation codes include delta (the default) and refresh.
  TADDM XML files: There are no operation codes. The delta behavior is the
  default.
v IdML books: In-file MSS information is supported.
  TADDM XML files: MSS information can be provided through the command line.
v IdML books: Relationships have to be defined explicitly.
  TADDM XML files: Implicit relationships can and must also be defined.
v IdML books: XML objects are not nested.
  TADDM XML files: XML objects are nested.
v IdML books: Virtual, relative IDs are supported. Relationships can link
  configuration items defined with relative IDs.
  TADDM XML files: Relative IDs are not supported. Real objectGUID values are
  supported. Relationships must link configuration items identified with a GUID
  or naming attributes.

Chapter 6. Tuning guidelines
Tune and configure the following to maximize the performance of the TADDM
application:

Windows operating system tuning


The following is a summary guideline for tuning Windows systems:
1. If possible, configure your Windows system to use the /3GB switch in the
boot.ini file. This assumes a 32-bit version of the Windows operating system
that supports the switch and at least 4 GB of memory. With this configuration,
you can allocate more memory to resources such as Java heaps, buffer pools,
and the package cache. (An example boot.ini entry follows this list.)
2. If possible, locate the system paging file on a separate disk drive. It should not
be on the same drive as the operating system.
3. On your database and application server, configure the server to maximize data
for networking applications.
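
As an example of the /3GB configuration in item 1, a boot.ini entry might look
like the following; the ARC path and description shown are illustrative, so edit
the existing entry for your system rather than copying this one:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003, Enterprise" /fastdetect /3GB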

Network tuning
The following is a summary guideline for tuning the network:

The network can influence the overall performance of your application, but usually
exposes itself when there is a delay in the following situations:
v The time between when a client system sends a request to the server and when
the server receives this request.
v The time between when the server system sends data back to the client system
and the client system receives the data.

After a system is implemented, monitor the network to ensure that no more than
50% of its bandwidth is being consumed.

Database tuning
Tuning the database is critical to the efficient operation of any computer system.

The default database configurations that are provided with the product are
sufficient for proof of concept, proof of technology, and small pilot
implementations of TADDM.

If your organization does not have the skills available to monitor and tune your
database systems, consider contacting IBM Support or another vendor for resources
to perform this important task.

Both DB2 and Oracle database tuning


The following is a summary guideline for both DB2 and Oracle database tuning:
1. Do not try to limit the number of physical disk drives available to your
database based on storage capacity alone.



2. Ideally, the following components should be placed on separate disk
drives/arrays:
v Application data (such as tables and indexes)
v Database logs
v Database temporary space: used for sort and join operations
3. Use the fastest disks available for your log files.
4. Enable Asynchronous I/O at the operating system level.

For more information on both DB2 and Oracle database tuning, refer to Database
Performance Tuning on AIX at http://www.redbooks.ibm.com/redbooks/pdfs/
sg245511.pdf.

DB2 database tuning


The following are some guidelines for tuning DB2 databases.
1. Regular maintenance is a critical factor in the performance of any database
environment. For DB2 databases, this involves running the REORG and
RUNSTATS utilities, in that order, on the database tables.
Critical: Running the REORG and RUNSTATS utilities is critically important
for optimal performance with DB2 databases. After the database is populated,
this should be done on a regularly scheduled basis, for example, weekly. A
regularly scheduled maintenance plan is essential to maintain peak
performance of your system.
v REORG: After many changes to table data caused by insertion, deletion, and
update activity (particularly updates to variable-length columns), logically
sequential data might be stored on non-sequential physical data pages, so the
database manager must perform additional read operations to access data.
Reorganize DB2 tables to eliminate fragmentation and reclaim space by using
the REORG command.
– To generate all of the REORG TABLE commands that you need to run,
run the following SQL statement on the DB2 database server, where dbuser
is the value from com.collation.db.user=:
select 'reorg table '||CAST(RTRIM(creator) AS VARCHAR(40))||'.
"'||substr(name,1,60)||'" ; ' from sysibm.systables where creator
= '<dbuser>' and type = 'T' and name not in ('CHANGE_SEQ_ID')
order by 1
– To run this procedure, complete the following steps:
a. Copy the SQL statement above to a file, for example, temp.sql.
b. On the database server, on a DB2 command line, connect to the DB
and run the following commands:
db2 -x -tf temp.sql > cmdbreorg.sql
db2 -tvf cmdbreorg.sql > cmdbreorg.out
v RUNSTATS: The DB2 optimizer uses information and statistics in the DB2
catalog to determine the best access to the database, based on the query that
is provided. Statistical information is collected for specific tables and indexes
in the local database when you run the RUNSTATS utility. When significant
numbers of table rows are added or removed, or if data in columns for
which you collect statistics is updated, run the RUNSTATS command again
to update the statistics.
a. Ensure that your TADDM database tables are populated before running
the RUNSTATS command on the database. This can occur by way of
discovery, bulk load, or by using the API. Running the RUNSTATS

command on your database tables before there is data in them results in
the catalog statistics reflecting 0 rows in the tables. This generally causes
the DB2 optimizer to perform table scans when accessing the tables, and
to not use the available indexes, resulting in poor performance.
b. The DB2 product provides functions to automate database maintenance
using database configuration parameters. You need to evaluate the use of
these parameters in your environment to determine if they fit into your
database maintenance process. In a typical production environment, you
want to control when database maintenance activities occur (for example,
database maintenance activities are typically performed during off-peak
hours to prevent major problems with the database).
The following list describes some of the database configuration
parameters:
– Automatic maintenance (AUTO_MAINT): This parameter is the
parent of all the other automatic maintenance database configuration
parameters (auto_db_backup, auto_tbl_maint, auto_runstats,
auto_stats_prof, auto_prof_upd, and auto_reorg). When this parameter
is disabled, all of its child parameters are also disabled, but their
settings, as recorded in the database configuration file, do not change.
When this parent parameter is enabled, recorded values for its child
parameters take effect. In this way, automatic maintenance can be
enabled or disabled globally.
- The default for DB2 V8 is OFF.
- The default for DB2 V9 is ON.
- (Important) Set this parameter to OFF until you populate your
database tables as previously explained.
UPDATE db cfg for dbname using AUTO_MAINT OFF
– Automatic table maintenance (AUTO_TBL_MAINT): This parameter is
the parent of all table maintenance parameters (auto_runstats,
auto_stats_prof, auto_prof_upd, and auto_reorg). When this parameter
is disabled, all of its child parameters are also disabled, but their
settings, as recorded in the database configuration file, do not change.
When this parent parameter is enabled, recorded values for its child
parameters take effect. In this way, table maintenance can be enabled
or disabled globally.
– Automatic runstats (AUTO_RUNSTATS): This automated table
maintenance parameter enables or disables automatic table runstats
operations for a database. A runstats policy (a defined set of rules or
guidelines) can be used to specify the automated behavior. To be
enabled, this parameter must be set to ON, and its parent parameters
must also be enabled.
c. There is a program in the TADDM_install_dir/dist/bin directory called gen_db_stats.jy.
This program outputs the database commands for either an Oracle or
DB2 database to update the statistics on the TADDM tables. The
following example shows how the program is used:
1) Run the following command:
cd TADDM_install_dir/dist/bin
2) Run the following command, where tmpdir is a directory where this
file can be created:
./gen_db_stats.jy > tmpdir/TADDM_table_stats.sql
3) Copy the file to the database server and run the following command:
db2 -tvf tmpdir/TADDM_table_stats.sql



d. (This applies only to DB2 databases.) There is an additional performance fix
that is used to modify some of the statistics that are generated by the
RUNSTATS command. There is a program in the TADDM_install_dir/
dist/bin directory called db2updatestats.sh (for UNIX and Linux
systems), or db2updatestats.bat (for Windows systems). This program
should be run immediately after the prior procedure (c.) or as part of
your standard RUNSTATS procedure. The following example shows how
the program is used:
1) Run the following command:
cd TADDM_install_dir/dist/bin
2) Run the following command:
./db2updatestats.sh
2. A buffer pool is memory used to cache table and index data pages as they are
being read from disk, or being modified. The buffer pool improves database
system performance by allowing data to be accessed from memory instead of
from disk. Because memory access is much faster than disk access, the less
often the database manager needs to read from or write to a disk, the better the
performance. Because most data manipulation takes place in buffer pools,
configuring buffer pools is the single most important tuning area. Only large
objects and long field data are not manipulated in a buffer pool.
v Modify the buffer pool sizes based on the amount of available system
memory that you have and the amount of data that is in your database. The
default buffer pool sizes provided with the TADDM database are generally
not large enough for production environments. There is no definitive answer
to the question of how much memory you should dedicate to the buffer
pool. Generally, more memory is better. Because it is a memory resource, its
use has to be considered along with all other applications and processes that
are running on a server. Use the DB2 SNAPSHOT monitor to determine
buffer pool usage and hit ratios. If an increase to the size of the buffer pools
causes system paging, lower the size to eliminate paging.
v Buffer pool size guidelines (sizes are in pages for the 4 KB, 8 KB, and 32 KB
buffer pools):
– Fewer than 500K CIs: 4 KB - 50 000; 8 KB - 5000; 32 KB - 1000
– 500K - 1M CIs: 4 KB - 90 000; 8 KB - 12 000; 32 KB - 1500
– More than 1M CIs (eCMDB): 4 KB - 150 000; 8 KB - 24 000; 32 KB - 2500
v For example, you can implement the buffer pool changes as follows (this
might require a database restart):
– ALTER BUFFERPOOL IBMDEFAULTBP SIZE 90000
– ALTER BUFFERPOOL BUF8K SIZE 12000
– ALTER BUFFERPOOL BUF32K SIZE 1500

The following list includes important DB2 database configuration parameters
that might need to be adjusted, depending on data volumes, usage, and
deployment configuration:
v DBHEAP
v NUM_IOCLEANERS
v NUM_IOSERVERS
v LOCKLIST
3. The following list includes important DB2 database manager parameters that
might need to be adjusted, depending on data volumes, usage, and deployment
configuration:
v ASLHEAPSZ
v INTRA_PARALLEL
v QUERY_HEAP_SZ
v RQRIOBLK
4. Set the following DB2 Registry Variables:
v DB2_PARALLEL_IO
This enables parallel I/O operations.
This is applicable only if your table space containers and hardware are
configured appropriately.
v DB2NTNOCACHE=ON - (Windows only)
v DB2_USE_ALTERNATE_PAGE_CLEANING
5. Database logs:
v Tune the Log File Size (logfilsiz) database configuration parameter so that
you are not creating excessive log files.
v Use Log Retain logging to ensure recoverability of your database.
v Mirror your log files to ensure availability of your database system.
v Modify the size of the database configuration Log Buffer parameter
(logbufsz) based on the volume of activity. This parameter specifies the
amount of the database heap to use as a buffer for log records before writing
these records to disk. Buffering the log records results in more efficient
logging file I/O because the log records are written to disk less frequently,
and more log records are written at a time.
6. Modify the PREFETCHSIZE on the table spaces based on the following
formula. An ideal size is a multiple of the extent size, the number of physical
disks under each container (if a RAID device is used) and the number of table
space containers. The extent size should be fairly small, with a good value
being in the range of 8 - 32 pages. For example, for a table space on a RAID
device with 5 physical disks, 1 container (suggested for RAID devices) and an
EXTENTSIZE of 32, the PREFETCHSIZE should be set to 160 (32 x 5 x 1).
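
The database configuration parameters, database manager parameters, and registry
variables listed in the previous steps are changed with the db2 and db2set
commands. The following is an illustrative sketch only; the database name CMDB
and the values shown are examples, not tuning recommendations:

db2 connect to CMDB
db2 get db cfg for CMDB
db2 update db cfg for CMDB using LOGFILSIZ 8192 LOGBUFSZ 512
db2 update dbm cfg using ASLHEAPSZ 16 RQRIOBLK 65535
db2set DB2_PARALLEL_IO=*
db2set DB2_USE_ALTERNATE_PAGE_CLEANING=ON
db2stop
db2start

Registry variable changes take effect only after the instance is restarted, and
some configuration parameters also require a restart or a new database connection.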

For more information on DB2 database tuning, refer to Database Performance Tuning
on AIX at http://www.redbooks.ibm.com/redbooks/pdfs/sg245511.pdf, Relational
Database Design and Performance Tuning for DB2 Database Servers at
http://catalog.lotus.com/wps/portal/topal/details?catalog.label=1TW10EC02, and
DB2 UDB Version 8 Product Manuals at http://www-01.ibm.com/support/
docview.wss?rs=71&uid=swg27009554.

Oracle database tuning


The following guidelines are for tuning Oracle databases.



1. Regular maintenance is a critical factor in the performance of any database
environment. For Oracle databases, this involves running the dbms_stats
package on the database tables. Oracle uses a cost based optimizer. The cost
based optimizer needs data to decide on the access plan, and this data is
generated by the dbms_stats package. Oracle databases depend on data about
the tables and indexes. Without this data, the optimizer has to estimate.
Critical: Rebuilding the indexes and running the dbms_stats package is
critically important for optimal performance with Oracle databases. After the
database is populated, this should be done on a regularly scheduled basis, for
example, weekly. A regularly scheduled maintenance plan is essential to
maintain peak performance of your system.
v REBUILD INDEX: After many changes to table data, caused by insertion,
deletion, and updating activity, logically sequential data might be on
non-sequential physical data pages, so that the database manager must
perform additional read operations to access data. Rebuild the indexes to
help improve SQL performance.
a. Generate the REBUILD INDEX commands by running the following SQL
statement on the Oracle database, where <dbuser> is the value from
com.collation.db.user=:
select 'alter index <dbuser>.'||index_name||' rebuild tablespace
'||tablespace_name||';' from dba_indexes where owner = '<dbuser>';
This generates all of the ALTER INDEX commands that you need to run.
b. Run the commands in SQLPLUS or some comparable facility. Rebuilding
the indexes on a large database takes 15 - 20 minutes.
2. DBMS_STATS: Use the Oracle RDBMS to collect many different kinds of
statistics as an aid to improving performance. The optimizer uses information
and statistics in the dictionary to determine the best access to the database
based on the query provided. Statistical information is collected for specific
tables and indexes in the local database when you run the DBMS_STATS
command. When significant numbers of table rows are added or removed, or if
data in columns for which you collect statistics is updated, run the
DBMS_STATS command again to update the statistics.
v There is a program in the <TADDM_install_dir>/dist/support/bin directory
called gen_db_stats.jy. This program outputs the database commands for
either Oracle or DB2 databases to update the statistics on the TADDM tables.
The following example shows how the program is used:
a. cd <TADDM_install_dir>/dist/support/bin
b. Run the following command, where <tmpdir> is a directory where this file
is created:
./gen_db_stats.jy > <tmpdir>/TADDM_table_stats.sql
c. After this is complete, copy the file to the database server.
d. Run the generated statements in SQL*Plus or a comparable facility. To run a
script file in SQL*Plus, type @ followed by the file name: SQL> @{file}
3. Buffer pool: A buffer pool or buffer cache is a memory structure inside Oracle
System Global Area (SGA) for each instance. This buffer cache is used for
caching data blocks in the memory. Accessing data from the memory is
significantly faster than accessing data from disk. The goal of block buffer
tuning is to efficiently cache frequently used data blocks in the buffer cache
(SGA) and provide faster access to data. Tuning block buffer is a key task in
any Oracle tuning initiative and is a part of the ongoing tuning and monitoring
of production databases. The Oracle product maintains its own buffer cache
inside the SGA for each instance. A properly sized buffer cache can usually
yield a cache hit ratio over 90%, which means that nine requests out of ten are
satisfied without going to disk. If a buffer cache is too small, the cache hit ratio
will be small and more physical disk I/O results. If a buffer cache is too big,
parts of the buffer cache are underutilized and memory resources are wasted.
v Buffer pool size guidelines: (db_cache_size)
– < 500K CIs - 38 000
– 500K - 1M CIs - 60 000
– > 1M CIs - eCMDB - 95 000
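
One common way to check the cache hit ratio described above is to query the
V$BUFFER_POOL_STATISTICS view. This is a general Oracle technique, not a
TADDM-specific requirement:

SELECT name,
       1 - (physical_reads / (db_block_gets + consistent_gets)) AS hit_ratio
  FROM v$buffer_pool_statistics;

If the ratio stays well below 0.90 under a representative workload, consider
increasing db_cache_size; if memory is constrained and the ratio is consistently
high, the cache might be larger than necessary.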

For more information on Oracle database tuning, refer to Database Performance


Tuning on AIX at http://www.redbooks.ibm.com/redbooks/pdfs/sg245511.pdf.

Database performance tuning


The DB2 and Oracle databases run more efficiently for TADDM when you
complete some performance tuning tasks.

RUNSTATS command

You can update DB2 statistics using the RUNSTATS command. You should run the
command after any task that significantly changes database contents, for example,
after a discovery or bulk load.

In general, you should run the RUNSTATS command once a week. This task can
be completed manually, or using the DB2 facility to automatically enable the
RUNSTATS command.

There is a sample script that you can use to enable the RUNSTATS command. You
can use the sample provided to design your own production scripts. Using a
production script that you develop, you can automate the process so that the
RUNSTATS command runs on all of the TADDM database tables with one
command.

For Linux, Solaris, AIX, and Linux on System z operating systems, the example
script is located in the following path: $COLLATION_HOME/bin/
runstats_db2_catalog.sql

For Windows operating systems, the example script is located in the following
path: %COLLATION_HOME%\bin\runstats_db2_catalog.sql

The following commands are an example of how you can run the example script:
su - db2 instance owner
db2 connect to cmdb
db2 -stf runstats_db2_catalog.sql

You should develop your own production script to run the RUNSTATS command
for your environment at an appropriate frequency to ensure good database
performance.

Query optimizer

The DB2 query optimizer benefits from having recent statistics for the TADDM
tables. For example, the query optimizer can help estimate how much buffer pool
is available at run time.



There is a sample script, in the TADDM installation directory, that you can use for
the query optimizer. Using this script, you can output the database commands for
the Oracle database and DB2 database to update the statistics on the TADDM
tables.

For Linux, Solaris, AIX, and Linux on System z operating systems, the example
script is located in the following path: TADDM_install_dir/dist/bin/
gen_db_stats.jy

For Windows operating systems, the example script is located in the following
path: TADDM_install_dir\dist\bin\gen_db_stats.jy

The following commands are an example of how you can run the script:
cd TADDM_install_dir/dist/bin
./gen_db_stats.jy >tmpdir/TADDM_table_stats.sql

When these commands have completed, copy the file to the database server and
run the following command:
DB2: db2 -tvf
Oracle: sqlplus
Related tasks
Chapter 8, “Database maintenance,” on page 69
To avoid performance problems, perform database maintenance tasks on a regular
basis.

Discovery parameters tuning


Most of the TADDM parameters that can be modified are contained in the
collation.properties file. This is a Java properties file with a list of names and
value pairs that are separated by an equal sign (=). The file is located in the
following path:
v <TADDM_install_dir>/dist/etc/collation.properties

The two major areas for tuning are attribute discovery rate and storage.
v Attribute discovery rate: This is the area with the most potential for tuning. In
this file, the property with the most impact on performance is the number of
discovery worker threads.
– #Max number of discovery worker threads
com.collation.discover.dwcount=16
By observing a discovery run and comparing the number of in progress sensors
that are in the started stage versus the number of in progress sensors in the
discovered or storing stages, an assessment can be made on whether attribute
discovery is faster or slower than attribute storage for a particular environment.
As with all changes to the collation.properties file, the server must be
restarted for the change to take effect.
v Storage: Storage of the discovery results is the discovery performance bottleneck,
if the number of sensors in the storing state is approximately the value of the
property:
– com.collation.discover.observer.topopumpcount
This property is the number of parallel storage threads. It is one of the main
settings for controlling discovery storage performance and must be adjusted
carefully.
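
For example, the two properties can be adjusted together in the
collation.properties file. The values below are illustrative starting points only,
not recommendations, and the server must be restarted for them to take effect:

com.collation.discover.dwcount=32
com.collation.discover.observer.topopumpcount=8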

For more information on discovery parameters tuning, refer to Tuning Discovery
Performance at http://catalog.lotus.com/wps/portal/topal/
details?catalog.label=1TW10CC0G.

Bulk load parameters tuning


There are three distinct phases for loading data by way of the Bulk Loader:
1. Analyze the objects and relationships to determine the graphs in the data.
Typically, 1 - 5% of execution time
2. Construct model objects and build graphs.
Typically, 2 - 5% of execution time
3. Pass the data to the Application Programming Interface (API) server.
Typically, 90 - 99% of execution time

There are two options for loading data:


v Data can be loaded "one" record at a time. This is the default.
– Files with errors must use default loading.
– Files with extended attributes must use default loading.
v Data can be loaded "in bulk" (called graph writing).
– Bulk loading with the graph-write option is significantly faster than running
in the default mode. (See the Bulk Load measurements for details.) The
following is an example of running with the graph-write option, where the -g
option buffers and passes blocks of data to the API server:
- ./loadidml.sh -g -f /home/confignia/testfiles/sample.xml
– The following parameter in the bulkload.properties can be used to improve
graph writing performance:
- com.ibm.cdb.bulk.cachesize=800 (this is the default)
- This parameter controls the number of objects to be processed in a single
write operation when performing a graph write.
v Increasing this number improves performance at the risk of running out
of memory either on the client or at the server. The number should only
be altered when specific information is available to indicate that
processing a file with a larger cache provides some benefit in
performance.
v The cache size setting currently can be no larger than 2000.

IBM parameters tuning for Java Virtual Machine (JVM)


When using the IBM implementation of the JVM, the application should not make
any explicit garbage collection calls, for example, System.gc(). Disable explicit
garbage collection by using the following option for each JVM:
v -Xdisableexplicitgc

Fragmentation of the Java heap can occur as the number of objects that are
processed increases. There are a number of parameters that you can set to help
reduce fragmentation in the heap.
v A kCluster is an area of storage that is used exclusively for class blocks. It is
large enough to hold 1280 entries. Each class block is 256 bytes long. This
default value is usually too small and can lead to fragmentation of the heap. Set
the kCluster parameter, -Xk, as follows to help reduce fragmentation of the



heap. These are starting values and might have to be tuned in your
environment. An analysis of a heap dump would be best to determine the ideal
size.
– Topology: -Xk8300
– EventsCore: -Xk3500
– DiscoverAdmin: -Xk3200
– Proxy: -Xk5700
– Discover: -Xk3700
– Gigaspaces: -Xk3000
Implement these changes in the collation.properties file by adding entries in
the JVM Vendor Specific Settings section. For example, to implement these
changes for the Topology server, add the following line:
com.collation.Topology.jvmargs.ibm=-Xdisableexplicitgc -Xk8300
v Another option for fragmentation issues is to allocate some space specifically for
large objects (greater than 64 KB). Use the -Xloratio parameter. For example:
– -Xloratio0.2
This option reserves x% of the active Java heap (not x% of -Xmx, but x% of the
current size of the Java heap) for the allocation of large objects (64 KB or larger)
only. If you set this option, adjust -Xmx to make sure that you do not reduce the
size of the small object area. An analysis of a heap dump would be best to
determine the ideal setting for this parameter.

There are a few additional parameters that can be set that affect Java performance.
To change an existing JVM option to a different value, edit one of the following
files:
v <TADDM_install_dir>/dist/deploy-tomcat/ROOT/WEB-INF/cmdb-context.xml file
v If eCMDB is in use, edit the <TADDM_install_dir>/dist/deploy-tomcat/ROOT/
WEB-INF/ecmdb-context.xml file.

To edit one of these files to change the settings for one of the TADDM services,
first find the service in the file. The following is an example of the beginning of a
service definition in the XML file:
<bean id="Discover"
class="com.collation.platform.jini.ServiceLifecycle" init-method="start"
destroy-method="stop">
<property name="serviceName">
<value>Discover</value>
</property>

Within the definition, there are some elements and attributes that control the JVM
arguments. For example:
<property name="jvmArgs">
<value>-Xms8M;-Xmx512M;
-Djava.nio.channels.spi.SelectorProvider=sun.nio.ch.PollSelectorProvider
</value>
</property>

The JVM arguments can be set as a semicolon separated list in the following
element:
<property name="jvmArgs"><value>

Sun Java Virtual Machine (JVM) parameters tuning
When using the Sun JVM, make the following changes:
v Implement these changes in the collation.properties file by adding entries in
the JVM Vendor Specific Settings section. For example, to implement these
changes for the Topology server, add the following line:
com.collation.Topology.jvmargs.sun=-XX:MaxPermSize=128M -XX:+DisableExplicitGC
-XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError

Java console GUI Java Virtual Machine (JVM) settings tuning


The default settings in the collation.properties file for the graphical user
interface (GUI) Java Virtual Machine (JVM) settings are as follows.

Note: These guidelines are based on the number of server equivalents (SEs) in
your environment. A server equivalent is a representative unit of Information
Technology (IT) infrastructure, defined as a computer system (with standard
configurations; operating system, network interfaces, storage interfaces)
installed with server software, such as a database (such as DB2 or Oracle), a
Web server (such as Apache or IPlanet) or an application server (such as
WebSphere or WebLogic). An SE also accounts for network, storage and
other subsystems that provide services to the optimal functioning of the
server. Each SE consists of a number of Configuration Items (CIs).
v Small environment (fewer than 1000 SEs):
– com.collation.gui.initial.heap.size=128m
– com.collation.gui.max.heap.size=512m
v Medium environment (1000–2500 SEs):
– com.collation.gui.initial.heap.size=256m
– com.collation.gui.max.heap.size=768m
v Large environment (2500–5000 SEs):
– com.collation.gui.initial.heap.size=512m
– com.collation.gui.max.heap.size=1024m

Chapter 7. Populating the database
When items are discovered, they are stored in the Tivoli Application Dependency
Discovery Manager (TADDM) database from the built-in sensors, from Discovery
Library Books that are generated by external management software systems, and
from APIs.

TADDM provides a variety of specialized sensors to enable discovery of almost all


components found in the typical data center, across the application software, host,
and network tiers. You can also develop custom sensors for unique components.
Sensors reside on the TADDM server and collect configuration attributes and
dependencies.

Configuration information is discovered and collected for the entire application


infrastructure, identifying deployed software components, physical servers,
network devices, virtual LANs, and host data used in a runtime environment.
Discovery is performed using sensors which are currently built and deployed as
part of TADDM. The sensors determine how the host and installed applications are
configured and what they are communicating with.

Sensors work by emulating a user running locally to discover information. Sensors


are non-intrusive, meaning they do not run on the client system. Instead, they run
on the TADDM server. The sensor is able to gather discovery-related information
without incurring any of the costs associated with the installation and maintenance
of an agent locally on the client systems to be discovered. The sensors use secure
network connections, encrypted access credentials, and host-native utilities. In this
way, a sensor is safe to use and provides the same data acquisition that you could
acquire by having the software reside and run locally on the client system.

Sensors and the discovery process

The following sequence describes how a sensor discovers configuration items in


your environment:
1. Identifies the active IP devices in the chosen scope:
a. Tries a connection on several ports (22,135) looking for some response.
b. Any response is enough to tell TADDM that the device exists.

Important: For more information on scopes and how to set them, see the
section on Setting the discovery scope in the IBM Tivoli Application
Dependency Discovery Manager User’s Guide.
2. Determines if there is a method of establishing a session to the IP device. Tries
to establish a TCP connection on several ports (including 22 and 135) in order
to determine how best to proceed in discovering the host.
3. Tries to establish an SSH connection using credentials from the access list, if an
SSH port was open.
a. Access list entries of type computer system or windows computer system are
tried from the access list, in sequence, until an entry works or the list is
exhausted.
b. If a WMI port was open, an SSH connection is established with a gateway
computer system (provided one can be found for the target). Access list



entries of type windows computer system are tried from the access list, in
sequence, until an entry works or the list is exhausted.
c. If a session cannot be established, then an SNMP sensor is run.
d. If a session is established, then a computer system sensor is run.
4. A computer system sensor tries to determine what type of operating system is
installed, for example, AIX, Linux, SunOS, HpUX, Windows, Tru-64, OpenVMS.
TADDM launches an OS-specific sensor to do a deep-dive discovery of the
operating system.
5. During the operating system discovery, software-specific sensors, which are
based on specific criteria (port number, process name, and so on), are launched
to discover application details.

Sensor enabling and disabling

You can globally disable a sensor even if the sensor has been enabled by a profile.
You can also globally enable a sensor and allow the setting in the profile to work.
For example, if a sensor is globally enabled and the profile enables the sensor, the
sensor runs. If the sensor is globally enabled, but disabled in the profile, the sensor
does not run when the aforementioned profile is selected for discovery.

For the global enabling (and disabling) to work for sensors that have an osgi
directory (/opt/IBM/cmdb/dist/osgi/plugins), the agent configuration XML files in
the osgi directory need to be changed.

For example, for the Db2Sensor, look for these paths to the files:
v /opt/IBM/cmdb/dist/osgi/plugins/
com.ibm.cdb.discover.sensor.app.db.db2_7.1.0/Db2Sensor.xml
v /opt/IBM/cmdb/dist/osgi/plugins/
com.ibm.cdb.discover.sensor.app.db.db2windows_7.1.0/Db2WindowsSensor.xml

When editing the XML files, to enable the sensor, set enabled to true. To disable the
sensor, set enabled to false.
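
For example, to locate the current setting before editing one of the files listed
above, you can run a command such as the following:

grep -n "enabled" /opt/IBM/cmdb/dist/osgi/plugins/com.ibm.cdb.discover.sensor.app.db.db2_7.1.0/Db2Sensor.xml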

For sensors that do not use the osgi/plugin directory, the configuration
information is stored in the sensor configuration XML file, in the
etc/discover-sensors directory.

Sensor logging

There is a property in the $COLLATION_HOME/etc/collation.properties file that


improves readability of the logs by separating the logging into per-sensor log files
for each discovery run. To enable this, set the following property:
com.collation.discover.engine.SplitSensorLog=true

Important: If you do not set this property to true, default logging for all sensors is
put into the $COLLATION_HOME/log/services/DiscoveryManager.log file.

This property separates the logs into the following directory structure:
v $COLLATION_HOME/log/sensors/<runid>/sensorName-IP.log for example:
sensors/20070621131259/SessionSensor-10.199.21.104.log

Important:
v The runid option includes the date of the discovery run and the log
file name includes the sensor name and IP address of the target.

v When using this option, the logs are not automatically cleared. This
must be done manually, if required.

For more information on sensor logging, see the Troubleshooting Guide.

Chapter 8. Database maintenance
To avoid performance problems, perform database maintenance tasks on a regular
basis.
Related reference
“Database performance tuning” on page 59
The DB2 and Oracle databases run more efficiently for TADDM when you
complete some performance tuning tasks.

Deleting old database records


The number of data records in the change_history_table database table grows over
time, and you must periodically remove data from the table manually to maintain
the table at a reasonable size.

To free storage space in TADDM databases, use SQL queries to remove old data
manually from the change_history_table. The following command is an example of
such an SQL query, where the integer 1225515600000 represents the date, 1
November 2008, expressed in the same format as that returned by the
System.currentTimeMillis() Java method, or a number equal to the difference,
measured in milliseconds, between the current time and midnight, 1 January 1970
UTC:
DELETE FROM CHANGE_HISTORY_TABLE
WHERE PERSIST_TIME < 1225515600000    -- this value is the Java time stamp

To convert a date to a Java time stamp, use the following code:


import java.util.*;
import java.text.*;

public class DateToString {

    public static void main(String args[]) {
        try {
            String str = args[0];
            SimpleDateFormat formatter = new SimpleDateFormat("dd/MM/yyyy");
            Date date = formatter.parse(str);

            long msec = date.getTime();

            System.out.println("Date is " + date);
            System.out.println("MillSeconds is " + msec);
        } catch (ParseException e) {
            System.out.println("Exception :" + e);
        }
    }
}

Run the code as follows:


java DateToString 1/11/2008
Date is Sat Nov 01 00:00:00 EST 2008
MillSeconds is 1225515600000

Use the resulting Java time stamp in the SQL query.
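
As an alternative to compiling the Java class, on systems with GNU date you can
compute the same millisecond value from the shell. Like the Java example, the
command interprets the date in the local time zone; this is a convenience sketch,
not part of TADDM:

date -d "2008-11-01 00:00:00" +%s
echo $(( $(date -d "2008-11-01 00:00:00" +%s) * 1000 ))

The first command prints seconds since 1 January 1970, and the second multiplies
by 1000 to produce the millisecond value used in the SQL query.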



If an exceptionally large number of records exists in the change_history_table,
incremental deletes might be required (deleting a subset of records at a time) to
prevent filling the transaction logs in the database.
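
For example, the following sketch deletes in batches so that each statement
touches a bounded number of rows; the batch size of 100000 is an example only,
and each statement is repeated, with a commit in between, until no rows remain:

-- DB2
DELETE FROM (
  SELECT * FROM CHANGE_HISTORY_TABLE
  WHERE PERSIST_TIME < 1225515600000
  FETCH FIRST 100000 ROWS ONLY
);
COMMIT;

-- Oracle
DELETE FROM CHANGE_HISTORY_TABLE
WHERE PERSIST_TIME < 1225515600000 AND ROWNUM <= 100000;
COMMIT;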

Timeframes for removing data

In general, remove data that is older than 4 - 6 weeks, but you can adapt this
timeline to the specific change history requirements of your scenario.

Understanding how you use the change history data within your scenario is the
key to determining the right time frame for removing data from the change history
table. For example, if you are using change history data for problem determination
and you want to investigate problems that occurred 5 weeks ago, keep at least 5
weeks of data in change_history_table.

Change history data is also used by other applications. Tivoli Business Service
Manager (TBSM), for example, uses change history records to determine which
data to import from a TADDM installation. Synchronize TBSM more frequently
than the number of weeks of change history data maintained in the change history
table. For example, if you synchronize TBSM weekly, maintain more than one week
of change history data in the TADDM change history table. In other words, make
sure that the period of time between application synchronizations is shorter than
the period of time for which you have change history data.

Single-domain environment versus enterprise environment

In a single domain TADDM installation, you can make data maintenance decisions
based solely on the data needs for that one TADDM domain. However, with an
enterprise scale configuration, additional considerations apply. You must
coordinate the removal of change history data between the different TADDM
domain databases and the TADDM Enterprise Domain Database, and you need to
remove the data from each individual TADDM Domain Database and from the
TADDM Enterprise Domain Database.

For an enterprise scale environment, the following considerations apply: When a


TADDM Domain Database is connected to a TADDM Enterprise Domain Server,
there is a synchronization schedule for pulling data from the TADDM Domain
Database to the TADDM Enterprise Domain Database. Keep change history data at
the domain level for a period of time that is greater than the period of time
between the scheduled synchronizations of that TADDM Domain Database with
the TADDM Enterprise Domain Database. For example, if the TADDM Enterprise
Domain Database is synchronized weekly, then maintain at least two weeks of
change history data in each TADDM Domain Database.

Remove data at the TADDM domain level first, and then remove data at the
enterprise level. Best practice is to maintain the same number of weeks of change
history data in all TADDM Domain Databases. The period for which change
history data is kept in the TADDM Enterprise Domain Database can vary from the
period for which such data is kept at the individual TADDM Domain Databases,
based on needs. After you determine a time frame for data removal that meets the
specific needs of your configuration scenario, it is best to perform the actual
removal of the data just after the occurrence of a synchronization between the
TADDM Domain Databases and the TADDM Enterprise Domain Database.

Optimizing DB2 for z/OS performance
Perform these database maintenance steps to avoid performance issues with a DB2
for z/OS® database.
1. Use the Product Console to run a discovery. This populates the domain
database with data.
2. Stop the TADDM server.
3. Generate and run the RUNSTATS control statement for each new database. The
following example assumes the databases are named CMDBA and CMDBB:
SELECT DISTINCT 'RUNSTATS TABLESPACE '||DBNAME||'.'||TSNAME||' INDEX(ALL)
SHRLEVEL REFERENCE' FROM SYSIBM.SYSTABLES
WHERE DBNAME IN ('CMDBA', 'CMDBB') ORDER BY 1;
4. Immediately after RUNSTATS is complete, generate and run the UPDATE
control statement for each new database. Run the following statements for the
schemas corresponding to both the primary user ID and the archive user ID:
select
'UPDATE SYSIBM.SYSINDEXES SET FIRSTKEYCARDF=FULLKEYCARDF WHERE NAME =
'||''''||CAST(RTRIM(name) AS VARCHAR(40))||''''||' AND CREATOR =
'||''''||CAST(RTRIM(creator) AS VARCHAR(40))||''''||' AND TBNAME =
'||''''||CAST(RTRIM(tbname) AS VARCHAR(40))||''''||' AND TBCREATOR =
'||''''||CAST(RTRIM(tbcreator) AS VARCHAR(40))||''''||';'
from sysibm.sysindexes a
where tbcreator = 'SYSADM'
AND NAME IN
(SELECT IXNAME
FROM SYSIBM.SYSKEYS B
WHERE A.CREATOR = B.IXCREATOR
AND A.NAME = B.IXNAME
AND COLNAME = 'PK__JDOIDX');

select
'UPDATE SYSIBM.SYSCOLUMNS SET COLCARDF=(SELECT FULLKEYCARDF FROM
SYSIBM.SYSINDEXES WHERE NAME = '||''''||CAST(RTRIM(name)
AS VARCHAR(40))||''''||' AND CREATOR = '||''''||CAST(RTRIM(creator)
AS VARCHAR(40))||''''||' AND TBNAME = '||''''||CAST(RTRIM(tbname)
AS VARCHAR(40))||''''||' AND TBCREATOR = '||''''||CAST(RTRIM(tbcreator)
AS VARCHAR(40))||''''||') WHERE NAME = '||''''||'PK__JDOIDX'||''''||'
AND TBNAME = '||''''||CAST(RTRIM(tbname) AS VARCHAR(40))||''''||'
AND TBCREATOR = '||''''||CAST(RTRIM(tbcreator) AS VARCHAR(40))||''''||';'
from sysibm.sysindexes a
where tbcreator = 'SYSADM'
AND NAME IN
(SELECT IXNAME
FROM SYSIBM.SYSKEYS B
WHERE A.CREATOR = B.IXCREATOR
AND A.NAME = B.IXNAME
AND COLNAME = 'PK__JDOIDX');

where SYSADM is the schema name corresponding to the primary or archive


user ID. Then run the resulting UPDATE SYSIBM.SYSINDEXES and UPDATE
SYSIBM.SYSCOLUMNS statements for each schema.
5. On an ongoing basis, monitor the size of the TADDM database tables and
adjust their storage attributes if necessary. In particular, monitor the size of the
following database tables, which can become very large:
v CHANGE_HISTORY_TABLE
v CMDB_GUID_ALIAS
v PERSOBJ
v RELATION
v SFTCMP
v MEDACDEV



v WINSVC
v MSSOBJLINK
v BINDADDR
v OPSYS
v OPERATINGSYSENTS_FD67DE48X
v COMPSYS
v COMPOSITE
v MSSOBJLINK_REL
v SOFTMODL
v RUNTIMEPROCESSJDO_PORTS_X
v COMPUTERSYSTICES_E032D816X
v APPSRVR
v IPINTRFC
v ORCLINITV
v RUNTIMEPROCEORTS_13B7EE75X
v IPROUTE
v IPADDR
Use ALTER statements to modify the PRIQTY and SECQTY attributes
according to the needs of your environment. If appropriate, consider moving
tables to separate tablespaces.
6. Use the REBIND command on the following packages with the
KEEPDYNAMIC(YES) option:
v SYSLH200
v SYSLH201
v SYSLH202
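
The statements for steps 5 and 6 might look like the following sketch. The table
space name, the collection ID (commonly NULLID for the CLI packages, but verify it
in your catalog), and the space quantities are examples that must be adjusted for
your subsystem:

ALTER TABLESPACE CMDBA.TSPERSOB PRIQTY 72000 SECQTY 72000;

REBIND PACKAGE(NULLID.SYSLH200) KEEPDYNAMIC(YES)
REBIND PACKAGE(NULLID.SYSLH201) KEEPDYNAMIC(YES)
REBIND PACKAGE(NULLID.SYSLH202) KEEPDYNAMIC(YES)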

Chapter 9. Log file overview
The TADDM server creates log files about its operation. This information is helpful
when troubleshooting problems.

You can set the maximum number of log files that the TADDM server creates and
the maximum size of each log file. When the maximum size of a log file is reached,
the TADDM server automatically copies the log file to a new file name with a
unique extension and creates a new log file.

For example, assuming that the maximum number of log files is four, when the
current log file reaches its maximum size, the TADDM server manages the older
log files in the following way:
v The logfile.3 file overwrites logfile.4.
v The logfile.2 file overwrites logfile.3.
v The logfile.1 overwrites logfile.2.
v The logfile file overwrites logfile.1.
v A new logfile is created.

The TADDM server stores log files in the $COLLATION_HOME/log directory, where
COLLATION_HOME is the path where you installed the TADDM server.

Setting the maximum size and maximum number of log files
You can set the maximum number and size of each log file that TADDM creates.

About this task

To customize the size and number of log files, complete the following steps:
1. Open the collation.properties file, which is located in the following
directory:
v For Linux, Solaris, AIX, and Linux on System z operating systems,
$COLLATION_HOME/etc
v For Windows operating systems, %COLLATION_HOME%\etc
2. To specify the maximum size of each log file, edit the following property:
com.collation.log.filesize
The default value is 20 MB. You can enter the number of bytes directly, or
specify the number of kilobytes or megabytes by using KB or MB, respectively.
The following examples are valid log file size values:
v 1000000
v 512 KB
v 10 MB
3. To specify the maximum number of log files, edit the following property:
com.collation.log.filecount
The default value is 5.
4. Save and close the collation.properties file.
The changes in the log file settings automatically take effect as a result of
dynamic logging. For more information on dynamic logging, see the IBM
Tivoli Application Dependency Discovery Manager Troubleshooting Guide.
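
For example, to keep ten log files of 10 MB each, the two properties described
above might be set as follows; the values are illustrative only:

com.collation.log.filesize=10 MB
com.collation.log.filecount=10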



Suggestions for searching logs
When viewing log files, there are some suggestions that you can use to quickly
find pertinent information.

The following list provides suggestions for finding and searching log files:
v In the following directory, the log file names all have lowercase letters (for
example, logfile.log):
– For Linux, Solaris, AIX, and Linux on System z operating systems,
$COLLATION_HOME/log
– For Windows operating systems, %COLLATION_HOME%\log
v In the following directory, the log file names all use the "mixed case" convention
(for example, logFile.log):
– For Linux, Solaris, AIX, and Linux on System z operating systems,
$COLLATION_HOME/log/services
– For Windows operating systems, %COLLATION_HOME%\log\services
v For Linux, Solaris, AIX, and Linux on System z operating systems, use the less,
grep, and vi commands for searching logs.
v If you installed Cygwin, you can use the less, grep, and vi command on
Windows systems.
v Start at the end of the file and search backwards.
v Filter the DiscoverManager.log file using the following methods:
– The DiscoverManager.log file can be large. To work with this file:
- Break the file into pieces using the split command, available on UNIX
platforms.
- Use the grep command to look for specific strings, and pipe the results into
another file.
– If the result is verbose and you want additional filtering, use Target or
Thread.
– If you are reviewing the entire file, start by finding the target and sensor that
you are working with. For example, search for IpDeviceSensor-9.3.5.184.
After you search for the target and sensor, use the Find-next function for the
Thread ID. For example, DiscoverWorker-10.
– If you are searching a filtered log and find something you are looking for,
note the time stamp. For example, 2007-08-29 21:42:16,747. Look at the
complete log for the lines near that time stamp.
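For example, on a UNIX system you might filter the DiscoverManager.log file with
commands similar to the following (the sensor name, IP address, and thread ID
shown are only illustrations taken from the suggestions above):
grep 'IpDeviceSensor-9.3.5.184' DiscoverManager.log > sensor-filtered.log
grep 'DiscoverWorker-10' sensor-filtered.log | less
split -l 500000 DiscoverManager.log DiscoverManager.part.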

Chapter 10. Server properties in the collation.properties file
The collation.properties file contains properties for the TADDM server, and you
can edit some of these properties.

The collation.properties file is located in the $COLLATION_HOME/etc directory. The
file contains comments about each of the properties. If you make changes to the
collation.properties file, you must save the file and restart the server for the
change to take effect.

Scoped and non-scoped properties

The collation.properties file contains two types of properties: scoped and
non-scoped. A scoped property means that you can append either an IP address or
the name of a scope set to the end of the value specified for the property.
The IP address or the scope set name makes the property dependent on the host
being discovered. You can use only scope set names that do not contain spaces.

A non-scoped property means that you cannot restrict the parameters of a property
to be specific to an object. For example, com.collation.ignorepropertyscopes is a
non-scoped property. An example of a scoped property is
com.collation.discover.agent.OracleAgent.searchWindowsRegistry. The default
value is true. You can scope this property by appending either an IP address or a
scope set name to the property statement.
v To append an IP address (129.42.56.212), change the property as follows:
com.collation.discover.agent.OracleAgent.searchWindowsRegistry.129.42.56.212=true
v To append a scope set named scope1, change the property as follows:
com.collation.discover.agent.OracleAgent.searchWindowsRegistry.scope1=true

Properties that you should not change


Some properties in the collation.properties file should not be changed.
Changing these properties can render your system inoperative.

The following list identifies some of the properties that you should not change:
com.collation.version
Identifies the product version.
com.collation.branch
Identifies the branch of code.
com.collation.buildnumber
Identifies the build number. This number is set by the build process.
com.collation.oalbuildnumber
Identifies the build number for another build process.
com.collation.debugsocketsenabled
This flag is used by IBM Software Support when debugging.
com.collation.debug.optimizeitdir
The path indicates where the Borland OptimizeIt program is installed.



com.collation.SshWeirdReauthErrorList=Permission denied
This property allows for the retry of the user name and password pairs
that have previously worked during discovery runs. The property is
needed because Windows systems randomly deny valid login attempts.
The property needs to have the Permission denied setting. Do not change
this setting.
com.ibm.cdb.discover.sensor.idd.stackscan.nmap.hostscannerfallback
If Nmap is not installed, the host scanner is used. The default setting is
true. If you change the setting to false, the host scanner is not used. This
property should not be changed.

API port settings


Settings for API ports must be integer values, and each value can be set to any
available port on the server.

You can edit the properties for API port settings. When you make changes to the
file, you must save the file and restart the server for the change to take effect.

The following are details for API port settings:


com.collation.api.port=9530
Must be an integer value. This value indicates the port that the API server
listens on for non-SSL requests. The value can be set to any available port
on the server. Any client using the API for connection must specify this
port for a non-SSL connection.

Note: Ensure that com.collation.api.port is not confused with
com.collation.ApiServer.port, which is the firewall port used by the
API server.
com.collation.api.ssl.port=9531
Must be an integer value. This value indicates the port that the API server
listens on for SSL requests. The value can be set to any available port on
the server. Any client using the API for connection must specify this port
for an SSL connection.

Commands that might require elevated privilege


These properties specify the operating system commands used by TADDM that
might require elevated privilege, root or superuser, to run on the target system.

Typically, sudo is used on UNIX and Linux systems to provide privilege escalation.
The following alternatives can be used instead of sudo:
v Enable the setuid access right on the target executable
v Add the discovery service account to the group associated with the target
executable
v Use root for the discovery service account (not preferred)

For each property, sudo can be configured globally, meaning to run the command
with sudo on every operating system target, or restricted to a specific IP address or
scope set.

Important: On each target system for which privilege escalation is needed, sudo
must be configured with the NOPASSWD option. Otherwise, your
discovery hangs until sudo times out.
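For example, a sudoers entry that grants the discovery service account
password-less access to two of the commands listed below might look like the
following sketch. The account name taddmusr and the command paths are
assumptions; adjust them to match your environment and security policy:
taddmusr ALL=(root) NOPASSWD: /usr/sbin/dmidecode, /usr/sbin/lsof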
com.collation.discover.agent.command.hastatus.Linux=sudo /opt/VRTSvcs/bin/
hastatus
com.collation.discover.agent.command.haclus.Linux=sudo /opt/VRTSvcs/bin/haclus
com.collation.discover.agent.command.hasys.Linux=sudo /opt/VRTSvcs/bin/hasys
com.collation.discover.agent.command.hares.Linux=sudo /opt/VRTSvcs/bin/hares
com.collation.discover.agent.command.hagrp.Linux=sudo /opt/VRTSvcs/bin/hagrp
com.collation.discover.agent.command.hatype.Linux=sudo /opt/VRTSvcs/bin/hatype
com.collation.discover.agent.command.hauser.Linux=sudo /opt/VRTSvcs/bin/hauser
v These properties are needed to discover Veritas Cluster components.
v To execute these commands without sudo, the TADDM service account
must be a member of the Veritas Admin Group on the target.
com.collation.discover.agent.command.vxdisk=vxdisk
com.collation.discover.agent.command.vxdg=vxdg
com.collation.discover.agent.command.vxprint=vxprint
com.collation.discover.agent.command.vxlicrep=vxlicrep
com.collation.discover.agent.command.vxupgrade=vxupgrade
v These properties discover standard Veritas storage information plus
additional Veritas-specific information such as disk groups, Veritas
volumes, plexes, and subdisks.
com.collation.platform.os.command.ps.SunOS=/usr/ucb/ps axww
com.collation.platform.os.command.psEnv.SunOS=/usr/ucb/ps axwweee
com.collation.platform.os.command.psParent.SunOS=ps -elf -o ruser,pid,ppid,comm
com.collation.platform.os.command.psUsers.SunOS=/usr/ucb/ps auxw
v These properties are needed to discover process information on Solaris
systems.
com.collation.discover.agent.command.lsof.Vmnix=lsof
com.collation.discover.agent.command.lsof.Linux=lsof
com.collation.discover.agent.command.lsof.SunOS.1.2.3.4=sudo lsof
com.collation.discover.agent.command.lsof.Linux.1.2.3.4=sudo lsof
v These properties are needed to discover process/port information.
com.collation.discover.agent.command.dmidecode.Linux=dmidecode
com.collation.discover.agent.command.dmidecode.Linux.1.2.3.4=sudo dmidecode
v These properties are needed to discover manufacturer, model, and serial
number on Linux systems.
com.collation.discover.agent.command.cat.SunOS=cat
com.collation.discover.agent.command.cat.SunOS.1.2.3.4=sudo cat
v These properties are used to discover configuration information for a
CheckPoint firewall on Solaris systems.
com.collation.discover.agent.command.interfacesettings.SunOS=sudo ndd
com.collation.discover.agent.command.interfacesettings.Linux=sudo mii-tool
com.collation.discover.agent.command.interfacesettings.SunOS.1.2.3.4=sudo ndd
com.collation.discover.agent.command.interfacesettings.Linux.1.2.3.5=sudo mii-tool

v These properties are needed to discover advanced network interface
information (interface speed, for example).
com.collation.discover.agent.command.adb.HP-UX=adb
com.collation.discover.agent.command.adb.HP-UX.1.2.3.4=sudo adb
v These properties are needed to discover processor information on HP
systems.



com.collation.discover.agent.command.kmadmin.HP-UX=kmadmin
com.collation.discover.agent.command.kmadmin.HP-UX.1.2.3.4=sudo
/usr/sbin/kmadmin
v These properties are needed to discover kernel modules on HP systems.
com.collation.platform.os.command.partitionTableListing.SunOS=prtvtoc
v This property is used to discover partition table information on Solaris
systems.
com.collation.platform.os.command.lvm.lvdisplay.1.2.3.4=sudo lvdisplay -c
com.collation.platform.os.command.lvm.vgdisplay.1.2.3.4=sudo vgdisplay -c
com.collation.platform.os.command.lvm.pvdisplay.1.2.3.4=sudo pvdisplay -c
v These properties are used to discover storage volume information.
com.collation.platform.os.command.lputil.SunOS.1.2.3.4=sudo /usr/sbin/lpfc/lputil
v This property is used to discover Emulex fibre channel HBA information
on Solaris systems.
com.collation.platform.os.command.crontabEntriesCommand.SunOS=crontab -l
com.collation.platform.os.command.crontabEntriesCommand.Linux=crontab -l -u
com.collation.platform.os.command.crontabEntriesCommand.AIX=crontab -l
com.collation.platform.os.command.crontabEntriesCommand.HP-UX=crontab -l
v These properties are used to discover crontab entries.
com.collation.platform.os.command.filesystems.Linux=df -kTP
com.collation.platform.os.command.filesystems.SunOS=df -k | grep -v 'No such file
or directory' | grep -v 'Input/output error' | awk '{print $1, $2, $4, $6}'
com.collation.platform.os.command.filesystems.AIX=df -k | grep -v 'No such file or
directory' | grep -v 'Input/output error' | awk '{print $1, $2, $3, $7}'
com.collation.platform.os.command.filesystems.HP-UX=df -kP | grep -v 'No such
file or directory' | grep -v 'Input/output error' | grep -v Filesystem
v These properties are used to discover file systems.
com.collation.platform.os.command.fileinfo.ls=sudo ls
com.collation.platform.os.command.fileinfo.ls.1.2.3.4=sudo ls
com.collation.platform.os.command.fileinfo.cksum=sudo cksum
com.collation.platform.os.command.fileinfo.cksum.1.2.3.4=sudo cksum
com.collation.platform.os.command.fileinfo.dd=sudo dd
com.collation.platform.os.command.fileinfo.dd.1.2.3.4=sudo dd
v These properties are used for privileged file capture.
v Privileged file capture is used in situations where the discovery service
account does not have read access to application configuration files that
are required for discovery.
com.collation.discover.agent.WebSphereVersionAgent.versionscript=sudo
This property can be enabled for access to the WebSphere versionInfo.sh
file if the discovery user does not have access to it on the target WebSphere
Application Server system.

Database settings
The database passwords for both the database user and archive user are stored in
the $COLLATION_HOME/etc/collation.properties file.

Use the following properties to change the passwords on the TADDM server:
com.collation.db.password=password
com.collation.db.archive.password=password

To encrypt passwords in the collation.properties file, you must first set the
database user password, the archive user password, or both, in clear text. Then,
stop the TADDM server, and run the encryptprops.sh or encryptprops.bin script
(located in the $COLLATION_HOME/bin directory). This script encrypts the passwords.
Restart the TADDM server; the passwords are then encrypted in the
collation.properties file.
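For example, on a Linux or UNIX TADDM server the sequence might look like the
following sketch, assuming that the standard control script is used to stop and
start the server:
cd $COLLATION_HOME/bin
./control stop
./encryptprops.sh
./control start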

Discovery settings
You can edit the properties for discovery. When you make changes to the file, you
must save the file and restart the server for the change to take effect.

The following list identifies extra details for the properties for discovery:
com.collation.discover.agent.exchange.command.timeout=600000
Specifies the timeout for the Exchange Server sensor. The default timeout is
600000 milliseconds, which is 10 minutes. If you change the default, make
sure that you specify an integer.
com.collation.discover.agent.usePMACNamingRule=false
Specifies whether to include the primary MAC address as one of the
ComputerSystem naming rules. The default is false, meaning that the
primary MAC address is not to be used as a naming rule. A computer
system can usually be identified by other characteristics like IP address,
manufacturer, model, and serial number. In the majority of environments,
the primary MAC address naming rule is not needed. Also note that the
primary MAC address naming rule is deprecated. Contact IBM support
before setting this property to true.
com.collation.discover.anchor.forceDeployment=true
Specifies if all anchors for the discovered scope are to be deployed during
discovery startup. Valid values are true and false. The default is true. If you
change the default to false, anchors are deployed only if any IP address
from the scope cannot be pinged, or if port 22 cannot be reached on any of
the discovered IP addresses.
com.collation.discover.anchor.lazyDeployment=false
Specifies whether files are copied during anchor deployment, or only when a
sensor that requires the files is about to be launched. In both cases, only
files that differ are copied. Valid values are true and false. The default is false.
The following example provides some insight into how this property affects
TADDM functionality:
The WebSphere Application Server sensor has dependencies in the
dist/lib/websphere directory that take 130 MB. If the flag is set to false,
this data is copied to the target host when the anchor is deployed. If the
flag is set to true, the data is copied when the WebSphere Application
Server sensor is about to be run on the anchor. If no WebSphere
Application Server sensor is run through the anchor, 130 MB is not sent to
the remote host.
com.collation.discover.DefaultAgentTimeout=600000
Specifies the default timeout for sensors, in milliseconds. The default value
is 600000 milliseconds, which is 10 minutes.
The default applies to all sensors and can be changed. A different timeout
can also be specified for individual sensors.



To override the timeout for a particular sensor, add the following line to
the collation.properties file:
com.collation.discover.agent.<sensorName>Agent.timeout=
<timeInMilliseconds>

For example,
com.collation.discover.agent.OracleAgent.timeout=1800000
com.collation.gui.showRunningDiscHistory=true
This property can affect the speed at which the user interface is displayed
when a discovery is running. Valid values are true, false, and prompt. The
default is true.
When true is specified, the discovery history events are retrieved. When
false is specified, the discovery history events are not retrieved, and, as a
result, the user interface can be displayed faster when a discovery is
running. When prompt is specified, a prompt asks the user if the discovery
history events should be retrieved.
com.collation.IpNetworkAssignmentAgent.defaultNetmask=ip_start-ip_end/
netmask[, ...]
This property defines how IP addresses discovered during a Level 1
discovery are assigned to generated subnets. A Level 1 discovery does not
discover subnets; instead, IpNetwork objects are generated to contain any
interfaces that are not associated with an existing subnet discovered during
a Level 2 or Level 3 discovery. This configuration property defines which
IpNetwork objects should be created, and how many nodes each subnet
should contain. (It also applies to any interface discovered during a Level 2
or Level 3 discovery that for any reason cannot be assigned to a discovered
subnet.)
The value for this property consists of a single line containing one or more
entries separated by commas. Each entry describes an IP address range in
IPv4 dotted decimal format, along with a subnet mask specified as an
integer between 8 and 31. Discovered interfaces in the specified range are
then placed in created subnets no larger than the size specified by the
subnet mask.
For example, the following value defines two subnet address ranges with
different subnet masks:
9.0.0.0-9.127.255.255/23, 9.128.0.0-9.255.255.255/24
The specified address ranges can overlap. If a discovered IP address
matches more than one defined range, it is assigned to the first matching
subnet as they are listed in the property value.
After you create or change this configuration property and restart the
TADDM server, any subsequent Level 1 discoveries use the defined
subnets. To reassign existing IpInterface objects in the TADDM database,
go to the $COLLATION_HOME/bin directory and run one of the following
commands:
v adjustL1Networks.sh (Linux and UNIX systems)
v adjustL1Networks.bat (Windows systems)

Note: If the value is not specified correctly then the appropriate messages
are displayed only when running the command line utility
adjustL1Networks.sh (Linux and UNIX systems) or
adjustL1Networks.bat (Windows systems). Otherwise the messages

are placed in the TopologyBuilder.log file in the
$COLLATION_HOME/log/services directory.
This script reassigns all IpInterface objects discovered during Level 1
discoveries to the appropriate subnets as described in the configuration
property. Any generated IpNetwork object that contains no interfaces is
then deleted from the database. After the script completes, the TADDM
interface might show multiple notifications of changed components
because of the modified objects. You can clear these notifications by
refreshing the window.

Note: Before you use this command, make sure the TADDM server is
running, and that no discovery or bulk load operation is currently in
progress. This script is not supported on the Enterprise Domain
Server.
com.collation.rediscoveryEnabled=false
Valid values are true and false. The default is false. Change the value to true
to enable the rediscovery function. In addition to enabling the rediscovery
function, setting the property to true also ensures information is stored
during the rediscovery.
com.collation.ChangeManager.port=19431
Specifies the firewall port used by the change manager.

DNS lookup customization settings


You can edit the properties for DNS lookup customization settings. When you
make changes to the file, you must save the file and restart the server for the
change to take effect.

The following list identifies extra details for DNS lookup customization settings:
com.collation.platform.os.disableDNSLookups=false
Valid values are true or false. The default is false. If you change the property
to true, name lookups (for example, JAVA and DNS) are disabled for the
TADDM server.
com.collation.platform.os.disableRemoteHostDNSLookups=false
Valid values are true or false. The default is false. If you change the property
to true, name lookups (DNS only) are disabled on remote discovered hosts.
This property forces all name lookups to occur on the TADDM server.
com.collation.platform.os.disableRemoteInterfaceFQDNLookups=true
Valid values are true or false. The default is true. If you change the property
to false, you enable the remote lookup of Internet Protocol (IP) interface
names. Performance can decline if this property is set to false.
com.collation.platform.os.forceDNSLookupForFqdn.1.2.3.4=true
This property specifies whether to use DNS lookup for
fully-qualified domain names. Valid values are true or false. A value of true
means to use DNS. A value of false means to use the Java
Application Program Interface (API) to look up names as per Network File
System (NFS) and Network Information Service (NIS) settings on the host.
com.collation.platform.os.cacheTTLSuccessfulNameLookups=60
This property is specified in the java.security file to indicate the caching
policy for successful name lookups from the name service. The value is
specified as an integer to indicate the number of seconds to cache the
successful lookup. A value of 0 means "never cache". A value of -1 means
"cache forever".
com.collation.platform.os.cacheTTLUnsuccessfulNameLookups=60
This property is specified in the java.security file to indicate the caching
policy for unsuccessful name lookups from the name service. The value is
specified as an integer to indicate the number of seconds to cache the
failure for an unsuccessful lookup. A value of 0 means "never cache". A
value of -1 means "cache forever".
com.collation.platform.os.ignorePortMapFailure=false
Valid values are true or false. The default is false. If you change the property
to true, os.getPortMap errors are ignored.
com.collation.platform.os.command.fqdn=nslookup $1 | grep Name | awk
'{print $2}'
This command is used to find the fully qualified domain name (FQDN). In
most situations, this property is not needed because the default FQDN
algorithm works in most production environments. If this property is not
needed, you must comment it out.
However, in environments where the fully qualified domain name is to be
derived from the host name, you might enable this property. For example,
enable this property if the host names are configured as aliases in DNS.
If this property is used, ensure that DNS is available and properly
configured. Otherwise, the nslookup command is likely to fail or have a
slow response time.
If enabled, this property is only used on the TADDM server. Currently,
only AIX, Linux, and SunOS operating systems are supported. This
property is not supported on a Windows TADDM server.

GUI JVM memory settings


The default settings in the collation.properties file for the GUI JVM memory
settings are used in small, medium, and large environments.

The following list identifies the default settings in the collation.properties file for
the GUI JVM memory settings:
com.collation.gui.initial.heap.size=128m
Initial heap size for the TADDM user interface.
com.collation.gui.max.heap.size=512m
Maximum heap size for the TADDM user interface.

These settings are appropriate for a small TADDM Domain. For the purposes of
sizing, the following categories of TADDM servers are used (based on server
equivalents):
v Small: up to 1000 server equivalents
v Medium: 1000 - 2500 server equivalents
v Large: 2500 - 5000 server equivalents

Increasing these values for medium and large environments improves performance
for some GUI operations. Some views do not complete properly if there is not
sufficient memory available to TADDM at the time of the action.

For a medium environment:

com.collation.gui.initial.heap.size=256m
com.collation.gui.max.heap.size=768m

For a large environment:


com.collation.gui.initial.heap.size=512m
com.collation.gui.max.heap.size=1024m

GUI port settings


You can edit the properties for GUI port settings. When you make changes to the
file, you must save the file and restart the server for the change to take effect.

The following list identifies extra details for GUI port settings:
com.collation.tomcatshutdownport=9436
This port is used for the Tomcat shutdown command.
com.collation.webport=9430
The HTTP port to use without SSL.
com.collation.websslport=9431
The HTTPS port to use with SSL.
com.collation.commport=9435
The RMI data port to use without SSL.
com.collation.commsslport=9434
The RMI data port to use with SSL.
com.collation.rmiport=9433
The naming service RMI registry port.
com.collation.jndiport=9432
The naming service JNDI lookup port.

Jini settings
The properties for Jini settings must be integer values.

The following list identifies extra details for Jini settings:


com.collation.jini.rmidtimeout=30000
Must be an integer value. This value indicates how long (in milliseconds)
RMID should wait for created children to start up.
com.collation.jini.service.timeout=10
Must be an integer value. This value indicates how long (in seconds) Jini
should wait when attempting to get a remote service.
com.collation.jini.service.retries=200
Must be an integer. This value indicates the number of times Jini tries to
get a remote service.
com.collation.jini.rmidport=1098
Must be an integer value.
com.collation.jini.unicastdiscoveryport=4160
Must be an integer value.



LDAP settings
An external LDAP server can be used for user authentication. Both anonymous
authentication and password-based authentication are supported with an external
LDAP server.

The LDAP server host name, port number, base distinguished name, bind
distinguished name, and password (required for password-based authentication)
are configurable in the collation.properties file. You can also configure the
specific naming attribute that can be searched for to match the user ID (UID).

LDAP configuration is recommended in TADDM Enterprise Domain Server and
TADDM Domain Server setups. In an enterprise environment, configure the
TADDM Domain Server and TADDM Enterprise Domain Server to use the same
user registry. When you log in to a TADDM Domain Server that is connected to a
TADDM Enterprise Domain Server, the login is processed at the TADDM
Enterprise Domain Server. If a network connection problem occurs between a
TADDM Enterprise Domain Server and a TADDM Domain Server, you can
successfully log in to the TADDM Domain Server without reconfiguration if the
TADDM Domain Server is configured to use the same user registry as the TADDM
Enterprise Domain Server.

The following list identifies some of the properties that you use to configure
LDAP:
com.collation.security.usermanagementmodule=ldap
This property defines the user management module used by the TADDM
server. The valid values are:
v file: for a file-based user registry. This is the default value.
v ldap: for an LDAP user registry
v vmm: for a user registry that uses the federated repositories of
WebSphere Application Server
com.collation.security.auth.ldapAuthenticationEnabled=true
This property defines whether LDAP authentication has been enabled.
com.collation.security.auth.ldapHostName=ldap.ibm.com
This property defines the host name for the LDAP server.
com.collation.security.auth.ldapPortNumber=389
This property defines the port for the LDAP server.
com.collation.security.auth.ldapBaseDN=ou=People,dc=ibm,dc=com
This property defines the LDAP Base Distinguished Name (DN). The
LDAP Base Distinguished Name is the starting point for all LDAP
searches.
com.collation.security.auth.ldapUserObjectClass=person
This property defines the name of the class used to represent users in
LDAP.
com.collation.security.auth.ldapUIDNamingAttribute=cn
This property defines the name of the attribute used for naming users in
LDAP.
com.collation.security.auth.ldapGroupObjectClass=groupofuniquenames
This property defines the class used to represent user groups in LDAP.

com.collation.security.auth.ldapGroupNamingAttribute=cn
This property defines the name of the attribute used for naming groups in
LDAP.
com.collation.security.auth.ldapGroupMemberAttribute=uniquemember
This property defines the name of the attribute used to contain the
members of a group in LDAP.
com.collation.security.auth.ldapBindDN=uid=ruser,dc=ibm,dc=com
If simple authentication is used, this property defines the user ID that is
used to authenticate to LDAP.

Important:
v If a value for com.collation.security.auth.ldapBindDN is not
supplied or if the property is commented out, an anonymous
connection to LDAP is attempted. The following example shows
how the property can be commented out with the number sign (#):
#com.collation.security.auth.ldapBindDN=uid=ruser,
dc=ibm,dc=com
v If a value is specified for
com.collation.security.auth.ldapBindDN, simple authentication
is used and a value for
com.collation.security.auth.ldapBindPassword must also be
specified.
com.collation.security.auth.ldapBindPassword=ruser
If simple authentication is used, this property defines the user password
that is used to authenticate to LDAP.
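The following sketch pulls the individual properties together into one minimal
LDAP configuration. The host name, base DN, bind DN, and password are
placeholders that you must replace with values from your own LDAP server:
com.collation.security.usermanagementmodule=ldap
com.collation.security.auth.ldapAuthenticationEnabled=true
com.collation.security.auth.ldapHostName=ldap.example.com
com.collation.security.auth.ldapPortNumber=389
com.collation.security.auth.ldapBaseDN=ou=People,dc=example,dc=com
com.collation.security.auth.ldapUserObjectClass=person
com.collation.security.auth.ldapUIDNamingAttribute=cn
com.collation.security.auth.ldapBindDN=uid=binduser,dc=example,dc=com
com.collation.security.auth.ldapBindPassword=secret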
Related tasks
“Configuring for LDAP” on page 15
You can configure an external LDAP server for user authentication.

Logging settings
You can edit the properties for logging. When you make changes to the file, you
must save the file and restart the server for the change to take effect.

The following list identifies extra details about some of the properties that control
logging settings:
com.collation.log.level=INFO
Sets the log level. The default is INFO. The following list identifies other
valid options:
v FATAL
v ERROR
v WARNING
v INFO
v DEBUG (Setting the DEBUG option decreases system performance.)
v TRACE (Setting the TRACE option causes passwords to be logged.)
If com.collation.deploy.dynamic.logging.enabled=true, you do not have to
restart the TADDM server after the logging level is changed. If
com.collation.deploy.dynamic.logging.enabled=false, you must restart the
TADDM server after the logging level is changed.



com.collation.log.filesize=20MB
The maximum size for the log file. When the file reaches this size limit, a
new log file is created. The current log file is saved with the .N file
extension. N is the number 1 through the value set in the
com.collation.log.filecount property. You set how many log files can be
created and kept before the files are rotated with the
com.collation.log.filecount property.
com.collation.log.filecount=5
The number of log files that you maintain.
com.collation.deploy.dynamic.logging.enabled=true
Valid values are true or false. The default is true. You use this property to
change the logging level without restarting the server.
com.collation.log.level.vm.vmName=INFO
Sets the log level for an individual Java virtual machine (JVM).
vmName is the name of the JVM that is associated with a TADDM service.
The following list identifies other valid options:
v Topology
v DiscoverAdmin
v EventsCore
v Proxy
v Discover
The default log level is INFO. The following list identifies other valid
options:
v FATAL
v ERROR
v WARNING
v INFO
v DEBUG (Setting the DEBUG option decreases system performance.)
v TRACE (Setting the TRACE option causes passwords to be logged.)
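For example, to enable DEBUG logging for the Discover JVM only, while leaving
the global level at INFO:
com.collation.log.level=INFO
com.collation.log.level.vm.Discover=DEBUG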

See the Logging settings section of the IBM Tivoli Application Dependency Discovery
Manager Troubleshooting Guide for more information on the following topics:
v JVMs for which you can set logging locally
v Logging levels

Operating system settings


Use this property to configure operating systems.

The following property configures operating systems:


com.collation.64bit=false
Valid values are true or false. The default is false. This flag indicates
whether the TADDM server should use a 64-bit JVM.

Important: This property is valid only for Solaris environments.

Performance settings
This product is tuned for a 2-processor, 4 GB system. If you have more processors
or memory, you can change the thread settings in the collation.properties file.
There is no formula that you can use to determine the optimal values for these
thread counts. Optimal values depend on how sparse your subnets are and how
the firewalls are configured. Here are a few general guidelines:
1. If all of the processors are not saturated during a discovery run, you should
increase the dwcount by a few threads and try again. If you increase this
thread count when your processors are saturated, the sensors start to time out
and you do not receive complete discovery results.
2. If you increase the dwcount by a few threads and still do not see increased
CPU usage, increase the sccount by 1 and try again.
3. Do not change the ascount.

The following list identifies extra details for the properties for performance:
com.collation.discover.dwcount=16
This property defines how many discover worker threads can be running
simultaneously. It must be an integer value. The default is 16.
com.collation.discover.sccount=2
This property specifies the number of "seed" creator threads that are
created. It must be an integer value. The default is 2.
com.collation.discover.ascount=1
This property specifies the number of agent selector threads that are
created. It must be an integer value. The default is 1. Do not change this
property.
com.collation.discover.observer.topopumpcount=16
This property specifies the number of database "writer" threads that are
created. These threads are used for persisting discovery results into the
TADDM database. It must be an integer value. The default is 16.
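For example, on a server with more processors than the tuned baseline, you might
raise the thread counts gradually as described in the guidelines above. The values
below are only illustrations, not recommendations:
com.collation.discover.dwcount=20
com.collation.discover.sccount=3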

Reconciliation settings
Set the com.collation.reconciliation.enablenewfeatures and the
com.collation.reconciliation.enablepriority properties in the collation.properties
file to true to enable merging, reconciliation plug-ins, and attribute prioritization
features.

If you change the values for these properties, you must restart the server.

The reconciliation features are designed so that you can turn them off in your
environment. For example, you can turn them off if you encounter problems you
suspect are related to the features. For more information about reconciliation and
prioritization, see the Understanding reconciliation and prioritization section in the
IBM Tivoli Application Dependency Discovery Manager User’s Guide.

The properties in the collation.properties file that control the reconciliation
features are listed below:
com.collation.reconciliation.enablenewfeatures=true
Valid values are true or false. The default is true.
If this property is set to false, it turns off merge, reconciliation plug-in, and
attribute prioritization features. If it is set to true, it allows individual
feature settings to have an effect. This acts like a master switch, and if it is
set to false, none of the other reconciliation settings work.
com.collation.reconciliation.enableplugins=true
Valid values are true or false. The default is true.



If this property is set to true, the reconciliation plug-in feature is enabled. If
it is set to false, no plug-ins are loaded or called.
com.collation.reconciliation.enablepriority=true
Valid values are true or false. The default is true.
If this property is set to true, attribute prioritization is turned on. If it is
set to false, the system defaults to the last-one-in-wins strategy for all data
from all sources.
com.collation.reconciliation.enablemerge=true
Valid values are true or false. The default is true.
If this property is set to true, n-way merging is turned on. If set to false, no
n-way merging occurs.
com.collation.reconciliation.mergetype=0
It must be an integer value. If anything other than 0 is used, no merging
takes place. Do not change this value.
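For example, to turn off all of the reconciliation features with the master switch,
without changing any of the individual feature properties:
com.collation.reconciliation.enablenewfeatures=false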

Reporting and graph settings


Use these properties to control reporting settings and graphs.

The following list identifies extra details about some of the properties that control
reporting settings and graphs:
com.collation.domain.pdfreport.enabled=false
Valid values are true or false. The default is false. If you set the value to
true, you can save the Domain Manager reports as PDF files. This
setting works only for the English language locale.
com.collation.gui.doNotShowGraphs=Business Applications,Application
Infrastructure,Physical Infrastructure
Use this property to prevent specified graphs from being displayed in the Java user interface. There are
three valid values that you can enter: Business Applications, Application
Infrastructure, and Physical Infrastructure. If you enter more than one value,
use a comma to separate the entries.
com.collation.ReportsServer.port=19434
Specifies the firewall port used by the reports server.

Secure Shell (SSH) settings


Use these properties to control Secure Shell (SSH) settings.

The following list identifies extra details for properties that control SSH settings:
com.collation.SshLogInput=false
Valid values are true or false. The default is false. If you set the value to
true, SSH input is logged.
com.collation.SshPort=22
It must be an integer value. This setting indicates the port the server uses
for all SSH connections.
com.collation.SshSessionCommandTimeout=120000
Must be an integer value. This value indicates the time (in milliseconds)
that is allowed for the SSH command to run. If this property is used from
an agent, the setting for this property should be a lesser value than the
setting for the AgentRunnerTimeout property to be effective.

com.collation.SshSessionCopyTimeout=360000
It must be an integer value. This value indicates the time (in milliseconds)
that is allowed to copy a file to a remote system.
com.collation.SshSessionIdleTimeout=240000
It must be an integer value. This value indicates the time (in milliseconds)
that an SSH session can remain inactive.
com.collation.SshSessionInitTimeout=60000
It must be an integer value. This value indicates the time (in milliseconds)
that is allowed to initialize an SSH connection. This time setting should be
a lesser value than all other SSH timeouts.
com.collation.SshWeirdReauthErrorList=Permission denied
This property allows for the retry of the user name and password pairs
that have previously worked during discovery runs. The property is
needed because Windows systems randomly deny valid login attempts.
The property needs to have the Permission denied setting. Do not change
this setting.
com.collation.TaddmToolUseSshInput=false
Valid values are true or false. The default is false. If you set the value to
true, the SSH input is used instead of command line options.
com.collation.WmiInstallProviderTimeout=240000
It must be an integer value. This value indicates the time (in milliseconds)
that is allowed to wait for the WMI InstallProvider script to run.

Security settings
Use these properties to control security settings.

The following list identifies extra details for properties that control security
settings:
com.collation.security.privatetruststore=true
Valid values are true or false. The default is true.
com.collation.security.enablesslforconsole=true
Valid values are true or false. The default is true.
com.collation.security.enabledatalevelsecurity=false
Valid values are true or false. The default is false. To restrict access to
collections of TADDM objects by user or user group, set this value to true.
com.collation.security.enforceSSL=false
Valid values are true or false. The default is false. To disable non-secure
connections and force the use of SSL connections, set this flag to true.
com.collation.security.FIPSMode=false
Valid values are true or false. The default is false. To configure TADDM to
use FIPS-compliant algorithms for encryption, set this flag to true.
com.collation.security.usermanagementmodule=file
There are three options for this property:
v file: for a TADDM file-based user registry
v ldap: for an LDAP user registry
v vmm: for a user registry that uses the federated repositories of
WebSphere Application Server
The default is file.



com.collation.security.auth.sessionTimeout=240
It must be an integer value.
com.collation.security.auth.searchResultLimit=100
It must be an integer value. Use this setting if you have a lot of users.

Important: If you have more than 100 users in an LDAP or WebSphere


Federated repository, increase this value to support the
expected number of users. For example,
com.collation.security.auth.searchResultLimit=150
com.collation.security.auth.websphereHost=localhost
Type the fully qualified domain name of the system hosting the federated
repositories functionality of the WebSphere Application Server.
com.collation.security.auth.webspherePort=2809
It must be an integer value. This value indicates the WebSphere system
port. The default port number is 2809.
com.collation.SecurityManager.port=19433
Specifies the firewall port used by the security manager.

Sensor settings
Use these properties to control sensor settings.

The following list identifies extra details for properties that control sensor settings:
com.collation.agent.weblogic.protocols=t3,http
By default, this property is disabled and the T3 protocol is used. If you
uncomment this property, you can specify the list of protocols to be used
for the WebLogic sensors. The default protocol, T3, is used if the property
is disabled or removed from the collation.properties file.
You can add protocols to the entry and separate entries with a comma. For
example:
com.collation.agent.weblogic.protocols=t3,http

The T3 protocol is the first protocol that is tried. If this protocol fails, the
HTTP protocol is used. If you want to use the HTTP protocol to connect to
a WebLogic server instance, you must enable HTTP tunneling for that
instance using the WebLogic console.
The only valid values are t3 and http. If you specify an incorrect value, such
as a value with typographical errors, the WebLogic server cannot process
the request properly and might stop.
com.collation.AllowPrivateGateways=true
This property is used by the WindowsComputerSystemSensor sensor.
Valid values are true or false. The default is true. The default allows SSH
connections to Windows systems not listed in the gateway.properties file.
com.collation.discover.agent.Db2WindowsAgent.sshSessionCommandTimeout
=300000
This property is the Windows DB2 discover sensor (Db2WindowsSensor)
command timeout.
It must be an integer value. This value indicates the maximum amount of
time (in milliseconds) that the sensor has to complete.

To be effective, the value specified for this property should be greater than
the SshSessionCommandTimeout property and less than the
DefaultAgentTimeout property. If needed, you can change the
SshSessionCommandTimeout and DefaultAgentTimeout properties.
The com.collation.SshSessionCommandTimeout property controls the time
needed for the SSH connection and run command on the Windows
gateway.
If this Db2WindowsSensor command timeout property is a lesser value
than the com.collation.SshSessionCommandTimeout property, the
com.collation.SshSessionCommandTimeout value is used.
Because the Db2WindowsSensor sensor can stop before it finishes
collecting information, the value of the
com.collation.discover.DefaultAgentTimeout property should be greater
than this Db2WindowsSensor command timeout property.
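For example, the default values listed elsewhere in this chapter already satisfy the
required ordering, with the SSH command timeout below the sensor timeout and
the sensor timeout below the default agent timeout:
com.collation.SshSessionCommandTimeout=120000
com.collation.discover.agent.Db2WindowsAgent.sshSessionCommandTimeout=300000
com.collation.discover.DefaultAgentTimeout=600000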
com.collation.discover.agent.OracleAgent.connectionThreadTimeout=10000
This property is used by the OracleSensor sensor.
It must be an integer value. This value indicates the maximum amount of
time (in milliseconds) allowed to pass before the database connection task
times out. This property is used to control the use of Connector threads
that prevent hung JDBC connections from hanging the sensor. Use with the
com.collation.discover.agent.OracleAgent.useConnectorThreads property.
com.collation.discover.agent.OracleAgent.registrySearchRegexes
=ORA_([^_]+)_AUTOSTART
This property is used by the OracleSensor sensor.
This regular expression property can be a semicolon separated list of
regular expressions applied to Windows registry key names to extract SID
candidates. Extracting the same SID multiple times is not a problem. The
SID is discovered only once.
com.collation.discover.agent.OracleAgent.searchWindowsRegistry=true
This property is used by the OracleSensor sensor. This is a scoped
property.
Valid values are true or false. The default is true, which means that the
Windows registry keys are searched for Oracle SIDs. If you change the value
to false, the registry is not searched. You can append an IP address or a
scope set name to make the behavior dependent on the host being discovered.
com.collation.discover.agent.OracleAgent.suppressPorts
This property is used by the OracleSensor sensor.
This regular expression property can be a comma (,) separated list of ports.
This property prevents connections to the specified ports on the listener.
com.collation.discover.agent.OracleAgent.suppressSIDs
This property is used by the OracleSensor sensor.
This regular expression property can be a comma (,) separated list of SIDs.
This property prevents the discovery of the specified SIDs.
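For example, to prevent the discovery of two instances and avoid connecting to
two listener ports (the SID names and port numbers are only illustrations):
com.collation.discover.agent.OracleAgent.suppressSIDs=TESTDB,DEVDB
com.collation.discover.agent.OracleAgent.suppressPorts=1526,1530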
com.collation.discover.agent.OracleAgent.useConnectorThreads=true
This property is used by the OracleSensor sensor.
Valid values are true or false. The default is true. This property is used to
control the use of Connector threads that prevent hung JDBC connections



from hanging the sensor. Use with the
com.collation.discover.agent.OracleAgent.connectionThreadTimeout
property.
com.collation.discover.agent.SLDServerAgent.connectionTimeout=30
This property is used by the SAP SLDServerSensor sensor. This is a scoped
property.
It must be an integer value. The value you specify indicates the maximum
amount of time (in seconds) to wait for the initial SLD connection test. The
default value is 30 seconds. You can append a host name or IP address to
make the behavior dependent on the host that is being discovered. For
example,
com.collation.discover.agent.SLDServerAgent.connectionTimeout.Linux.1.2.3.4=60
com.collation.discover.agent.SLDServerAgent.connectionTimeout.SunOS=45
Connection timeouts are recorded in the DiscoveryManager.log file.
com.collation.discover.agent.WebSphereAgent.timeout=7200000
By default, this property is set to a timeout value of 7200000 milliseconds
(2 hours), which means that if the sensor runs for more than 2 hours,
discovery of WebSphere stops. Typically, the timeout should be increased
when discovering large environments. You must also set the
com.collation.discover.agent.WebSphereNodeSensor.timeout and the
com.collation.discover.localanchor.timeout properties to the same value.
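For example, to allow three hours for WebSphere discovery in a large environment,
set all three properties to the same value (the value is only an illustration):
com.collation.discover.agent.WebSphereAgent.timeout=10800000
com.collation.discover.agent.WebSphereNodeSensor.timeout=10800000
com.collation.discover.localanchor.timeout=10800000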
com.collation.discover.engine.SplitSensorLog=true
Valid values are true or false. The default is true. This property improves
the readability of the logs because it specifies the creation of a separate log
file for each sensor. If you do not set this property to true, default logging
for all sensors is placed in the following file:
v For Linux, Solaris, AIX, and Linux on System z operating systems,
dist/log/services/DiscoveryManager.log
v For Windows operating systems, dist\log\services\
DiscoveryManager.log
If you set this property to true, the logs are separated into the following
directories:
v For Linux, Solaris, AIX, and Linux on System z operating systems,
$COLLATION_HOME/log/sensors/<runid>/sensorName-IP.log
v For Windows operating systems, %COLLATION_HOME%\log\sensors\<runid>\
sensorName-IP.log
The runid option includes the date of the discovery run. The log file name
includes the sensor name and IP address of the target.
When using this option, the logs are not automatically cleared. You must
clear the logs manually.
com.collation.internalTemplatesEnabled=false
Valid values are true or false. The default is false. If you change the
value to true, you enable shallow discovery of applications running on a
target system using only the system credentials.
If you set this property to true, you receive a CustomAppServer object
representing the application running on the target system. You do not need
to provide application credentials to enable this property. If you set this
property to false, the custom application server seeds for internal
templates are created only when the deep sensor corresponding to the
application fails.
With this feature enabled, the internal templates TADDM uses to start
sensors are used instead to launch CustomAppServerSensors, which create
objects in TADDM representing the application running on the target.
These CustomAppServerSensors require no credentials. In this discovery
mode, because template matching is only intended to launch a sensor with
a high probability that the sensor being launched is the correct sensor for
the running application, TADDM creates "generic" application server types,
such as WebServer and J2EEServer, instead of the more specific types that a
sensor would create, such as ApacheServer or WebSphereServer. The
objectType attribute is set to the "best match" type of the running
application software. Note that if a credentialed sensor and a custom
server both run for the same target during one or more discoveries, the
following limitations apply:
1. Given the different nature of the data discovered without credentials,
objects created by this feature might not reconcile with artifacts
created by L3 sensors.
2. Credential-free discovery does not create specific classes. For example,
httpd might be Apache or another Web server, and therefore, in
credential-free mode, such servers are classified only as HTTP Servers.
com.collation.oracleapp.root.dir=lib/oracleapp
This property is used by the OracleAppSensor sensor.
If you do not want to discover the Oracle Application Server, ignore this
property. If you want to discover the Oracle Application Server, use this
property to specify the location of the Oracle Application Server root
directory for the TADDM server. An absolute directory path can be
specified. If you use a relative directory path, it must be relative to the
$COLLATION_HOME directory. This directory and all subdirectories must be
accessible to the user under which the TADDM server runs. Oracle
Application Server libraries must be available on the TADDM server under
this directory.
com.collation.platform.os.ignoreLoopbackProcesses=true
Valid values are true and false. The default is true. Do not change this
default. You must set this property to true if you want to discover an
Oracle Application Server or to use the WebLogic sensors. For example, if the
WeblogicServerVersionSensor sensor tries to start using a local host
address, this property must be set to true.
com.collation.discovery.engine.allow.pooling.local.anchors=true
This property allows multiple sensors that require local anchors to share a
single Java virtual machine (JVM). If this flag is set to false, each sensor
runs in its own JVM. For performance reasons, it is preferable to set this
property to true. Only the WebLogic sensor uses this property.
com.collation.pingagent.ports=xx,yy, ...
This property is used by the PingSensor sensor. By default, it is not defined
in the collation.properties file and must be defined manually if needed.
Valid values are non-negative integers.
To override the default set of ports that the Ping sensor attempts to use,
add this property to the collation.properties file and specify the port
numbers as a comma-separated list. By default, the Ping sensor attempts to
connect to port 22, and then to port 135 if it cannot first make a
connection to port 22.
For example, to add the SNMP port 161 to the existing ports that the Ping
sensor attempts to connect to, add 161 to the end of the list of
default ports: com.collation.pingagent.ports=22,135,161.
If you want the Ping sensor to use only port 161, set the property as follows:
com.collation.pingagent.ports=161.
com.collation.platform.os.WindowsOs.AutoDeploy=true
This property is used by the WindowsComputerSystemSensor sensor.
Valid values are true or false. The default is true. The default allows the
WMI provider to be automatically installed.
com.collation.PreferWindowsSshOverGateway=false
This property is used by the WindowsComputerSystemSensor sensor.
Valid values are true or false. The default is false. If you set the value to
true, Windows SSH discovery is preferred over gateway discovery.
This property is ignored if the com.collation.AllowPrivateGateways
property is set to false.
com.collation.vcs.discoverymode=true
The Veritas Cluster Server sensor generates a lot of configuration items,
most of them Veritas resources, and stores them in the database. You can
run the Veritas Cluster Server sensor in lite mode and not collect all of this
configuration item information.
This property specifies whether the sensor runs in lite mode. The default
value is true, and so, the default behavior is that the sensor runs in lite
mode. If you set the value to false, the Veritas Cluster Server sensor
collects and stores all configuration item information.

Startup settings
Use these properties to determine whether TADDM monitors server startup.

If enabled, the restartwatcher monitors the TADDM startup process so that the
TADDM server is restarted if it does not start within a defined time frame.
The restartwatcher checks for the following conditions:
v All but one of the services have reached the started state
v One or more of the services is in the stopped state and at least one service has
started
If these conditions persist for a configurable period of time, a restart is requested.
The restartwatcher turns off after the server has started so that it does not interfere
with the service restart configuration options.

The following properties control the monitoring of the startup process:


com.collation.platform.jini.restartwatcher.enabled=false
This is a boolean property that determines whether TADDM will monitor
server startup. The default is false.
com.collation.platform.jini.restartwatcher.delay=240
This property has an integer value that determines the number of seconds
to wait after a restart condition is detected before restarting the server. The

default value is 240, and the minimum value is 60. Slower systems should
use longer delay times to allow for longer service start times.
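For example, to enable startup monitoring with a five-minute restart delay (the
delay value is only an illustration):
com.collation.platform.jini.restartwatcher.enabled=true
com.collation.platform.jini.restartwatcher.delay=300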

Topology builder settings


Use these properties to control topology builder settings.

The following list identifies extra details for properties that control topology
builder settings:
com.collation.synchronizer.enableTopologyBuild=true
Valid values are true and false. The default is true.
The topology builder runs automatically after a synchronization is
complete. If you set this property to false, topology build does not run
automatically after the synchronization completes. This is most commonly
used if you synchronize from several TADDM servers and manually run
topology build after all of the synchronizations are complete.
com.collation.topobuilder.RuntimeGcUnknownServerRetentionSpan
This property specifies how long (in days) to keep unknown processes. The
default value is 5, and the maximum value is 14. Unknown processes
determine when custom server templates are needed; however, without
regular cleanup, the number of unknown processes can build up over
time. This might cause topology performance issues. The following item is
not removed by this processing:
v z/OS address spaces

Topology manager settings


Use these properties to control topology manager settings.

The following list identifies extra details for properties that control topology
manager settings:
com.ibm.JdoQuery.FetchBatchSize=50
The batch size is a configurable property and corresponds to the
kodo.FetchBatchSize property. This property represents the number of
rows to fetch at a time when scrolling through a result set of a query run.
The default is 50 rows.
com.collation.topomgr.securityCheck=true
Valid values are true or false. The default is true. Do not change this setting.
com.collation.topomgr.optimisticTransaction=false
Valid values are true or false. The default is false. Do not change this setting.
com.collation.topomgr.lockManager=pessimistic
This property is used only if the
com.collation.topomgr.optimisticTransaction property is set to false. Do not
change this setting.
com.collation.topomgr.readLockLevel=read
This property is used to configure kodo.ReadLockLevel. Do not change this
setting.
com.collation.topomgr.writeLockLevel=write
This property is used to configure kodo.WriteLockLevel. Do not change
this setting.
com.collation.topomgr.lockTimeout=-1
This property is used to configure kodo.LockTimeout. Do not change this
setting.
com.collation.topomgr.isolationLevel=default
This property is used to set kodo.jdbc.TransactionIsolation. Do not change
this setting.
com.collation.topomgr.generateExplicitRelationship=false
The only valid value is false. Do not change this value. This property is
deprecated.
com.collation.TopologyManager.port=19430
Specifies the firewall port used by the topology manager.

View manager settings


Use these properties to control view manager settings.

The following list identifies extra details for properties that control view manager
settings:
com.collation.view.cache.optimization.enabled=true
Valid values are true or false. The default is true. If you change the value to
false, view caching is disabled. In general, do not change this setting. If you
change this setting, every request for a view means that the view must be
built and performance can be affected.
An example scenario where caching might be turned off temporarily is
when a number of data loads, object creations, modifications, deletions,
and discovery runs are being planned, and a higher priority is placed on
getting the data in before viewing the data. In this scenario, disabling the
cache prevents the cache from being constantly rebuilt in response to
changes.
com.collation.view.cache.disk=true
Valid values are true or false. The default is true. If you change the value to
false, memory caching is used. Disk caching should always be used in large
environments to avoid out of memory errors.
com.collation.view.cache.disk.path=var/viewmgr
This value identifies the directory of the view cache. The default is
var/viewmgr. If you want to change this value, the path you set has to be a
relative path from the $COLLATION_HOME property.
com.collation.view.maxnodes=500
Must be an integer value. When viewing a topology graph in the Product
Console, this property identifies the maximum number of nodes allowed.
The default is 500.
com.collation.view.accesscontrol.enabled=false
Valid values are true or false. The default is false. If you change the value to
true, the views are ACL aware, and a user that is logged in can view only
the objects that the user has access to. When access control for views is
enabled, view caching is disabled, and performance for large views is
affected.

com.collation.view.prebuildcache.treeview.physical.enabled=true
Valid values are true or false. The default is true. If you change the value to
false, the Navigation Tree for physical infrastructure is not prebuilt into the
cache. Do not change this setting.
com.collation.view.prebuildcache.treeview.components.enabled=true
Valid values are true or false. The default is true. If you change the value to
false, the Navigation Tree for software components tree is not prebuilt into
the cache. Do not change this setting.
com.collation.view.prebuildcache.treeview.application.enabled=true
Valid values are true or false. The default is true. If you change the value to
false, the Navigation Tree for applications tree is not prebuilt into the cache.
Do not change this setting.
com.collation.view.prebuildcache.treeview.bizservice.enabled=true
Valid values are true or false. The default is true. If you change the value to
false, the Navigation Tree for business services tree is not prebuilt into the
cache. Do not change this setting.
com.collation.view.prebuildcache.graph.appinfrastructure.enabled=true
Valid values are true or false. The default is true. If you change the value to
false, Application Software Infrastructure Topology is not prebuilt into the
cache. If you change the value to false, update the
com.collation.gui.doNotShowGraphs property to not display this graph.
com.collation.view.prebuildcache.graph.physicalinfrastructure.enabled=true
Valid values are true or false. The default is true. If you change the value to
false, Physical Infrastructure Topology is not prebuilt in to the cache. If you
change the value to false, update the com.collation.gui.doNotShowGraphs
property to not display this graph.
com.collation.view.prebuildcache.graph.bizapp.enabled=true
Valid values are true or false. The default is true. If you change the value to
false, Business Application Topology is not prebuilt in to the cache. If you
change the value to false, update the com.collation.gui.doNotShowGraphs
property to not display this graph.
com.collation.view.prebuildcache.treeview.define.application.enabled=true
Valid values are true or false. The default is true. Do not change this setting.
com.collation.view.prebuildcache.treeview.define.bizservice.enabled=true
Valid values are true or false. The default is true. Do not change this setting.
com.collation.view.prebuildcache.treeview.define.appserver.enabled=true
Valid values are true or false. The default is true. Do not change this setting.
com.collation.view.prebuildcache.treeview.define.appserverclusters.enabled=true
Valid values are true or false. The default is true. Do not change this setting.
com.collation.view.prebuildcache.treeview.define.appserverhosts.enabled=true
Valid values are true or false. The default is true. Do not change this setting.
com.collation.view.prebuildcache.treeview.define.appserverclusterserviceshosts.
enabled=true
Valid values are true or false. The default is true. Do not change this setting.
com.collation.change.event.damping.interval.secs=10
This value must be an integer. It indicates the time interval, in seconds,
that must elapse before the view is rebuilt. When the view is rebuilt, the
change events are posted.
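
For example, a minimal collation.properties sketch for the bulk-load scenario that
is described for the com.collation.view.cache.optimization.enabled property might
look like the following fragment; the setting is temporary and illustrative only:

# Temporarily disable view caching while bulk data loads and discovery runs complete
com.collation.view.cache.optimization.enabled=false
# After the data is loaded, restore the default value to re-enable caching:
# com.collation.view.cache.optimization.enabled=true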

XML performance settings
Use these properties to control XML performance settings.

The following list identifies extra details for properties that control XML
performance settings:
com.collation.platform.xml.gcEnabled=false
Valid values are true or false. The default is false. Do not change the value
to true.
com.collation.platform.xml.showLeafLastMtime=false
Valid values are true or false. The default is false. If you set the value to
true, the time when a leaf object (a configuration item that is not displayed
in the XML because it is too deep in the query) was last changed is
displayed. Displaying this attribute takes time and affects performance.
Instead of changing the value to true, you can increase the depth of the
query.

Chapter 11. Self-monitoring tool overview
The TADDM self-monitoring tool provides detailed tracking of performance and
availability of the TADDM server and its component processes. You can view
errors that TADDM has detected, along with summaries of the configuration
item data that is stored in the database. Before you can use this tool,
you must have IBM Tivoli Monitoring 6.1, or later, installed in your environment.

The self-monitoring tool is instrumented with IBM Tivoli Monitoring 6.1, which
provides visualization of collected availability and performance data. Based on the
data provided by the self-monitoring tool, administrators can create reflexive
actions to counteract availability degradation.

Viewing availability data


The self-monitoring tool provides information about availability data.

About this task

To view the availability data provided by the self-monitoring tool, complete the
following steps:
1. Start the Tivoli Enterprise Portal Server user interface.
2. Go to the IBM Tivoli Monitoring Navigator Physical view. This is the initial
view of the Navigator Physical view:
Enterprise
Operating Platform
The Operating Platform is one or more of the following systems:
v AIX systems
v Linux systems
v Solaris systems
v Windows systems
3. Select the Enterprise icon.
4. Right-click Enterprise, then click Workspace → Tivoli Application Dependency
Discovery Manager Availability. The corresponding workspace opens. The
workspace includes the following information:
v A table that provides a time stamp and message abstract for error messages.
v A table of services with corresponding states.
v A table of processes that are not operational.
v A table that provides database status. The table lists the IBM Tivoli
Monitoring Version 6.x database agents that monitor the TADDM database
on that host.
5. Click the workspace icon next to the Status cell to load the default
workspace for the database agent. When you load the workspace, you can see
performance metrics for the database.
6. (Optional) To sort the columns of the tables in the workspace, click the column
header. For example, in the Error Messages table, click the LocalTimeStamp
column header to sort the messages by the time stamp. If the messages are
already sorted by time stamp, the sort order (ascending or descending) is
reversed.

Viewing error messages


The self-monitoring tool provides error messages about system availability.

The table of error messages includes two columns of information:


LocalTimeStamp
The time at the portal client when the status of a situation changes.
ErrorMessage
The text of the error message that includes the date and the time
associated with the message.

Viewing services
The self-monitoring tool provides information about services that are related to
system availability.

The services table includes two columns of information:


ServiceName
The name of the service.
State The state of the service.

Working with non-operational processes


The self-monitoring tool provides information about non-operational processes.

The table of non-operational processes includes nine columns of information:


v Status: The status for the situation:

Acknowledged
The event is acknowledged.

Deleted
The situation is deleted.
Problem
The situation is in a problem state.

Expired
The acknowledgement of the event is expired and the event remains
open.

Opened
The situation is running and is now true, which opens an event.

Closed
The situation is running, was true, but is now false.

Reopened
The acknowledgement was removed before it expired, and the event is
still open (reopened).

Started
The situation is started.

Stopped
The situation is stopped.
v Situation name: The name of the situation or policy.
v Display item: If the situation was defined with a display item, it is displayed
here. Otherwise, this cell is empty. (A display item is an attribute designated to
further qualify a situation. With a display item set for a multiple-row attribute
group, the situation continues to look at the other rows in the sampling and
opens more events if other rows qualify. The value is displayed in the event
workspace and in the message log and situation event console views. You can
select a display item when building a situation with a multiple row attribute
group.)
v Source: The source of the situation.
v Impact: The impact of the situation.
v Opened: The time stamp indicating when the situation event was opened.
v Age: The length of time the situation event has existed.
v LocalTimeStamp: The time at the portal client when the status of the situation
changes.
v Type: The type of event. The possible values are sampled and pure.
Sampled
Sampled events occur when a situation becomes true. Situations sample
data at regular intervals. When the situation is true, it causes an event,
which is closed automatically when the situation becomes false again.
You can also close the event manually.
Pure Pure events are unsolicited notifications. Examples of pure events are an
out-of-paper condition on a printer and an occurrence that adds an entry
to a log. Some monitoring agents have attributes that report pure events,
such as the Windows Event Log and Windows File Change attribute
groups. A situation using one of these attributes can monitor for pure
events. Because of the nature of pure events, they are not closed
automatically like sampled events; you must close the pure event
manually. Alternatively, you can create the situation with an UNTIL
clause.

The table of non-operational processes is preceded by a row of icons. The
following list shows these icons and information about each of them:

- Filter Critical
Click the icon to display only Critical events.

- Filter Warning
Click the icon to display only Warning events.

- Filter Informational
Click the icon to display only Information events.

- Filter Open
Click the icon to display only Open events, including any events that were
reopened (acknowledgement removed) or whose acknowledgement
expired.

- Filter Acknowledged
Click the icon to display only Acknowledged events.

- Filter Stopped
Click the icon to display only Stopped situations.
- Filter Problem
Click the icon to display only Problem situations. These are situations that
are in error for some reason.

- Console Pause
A workspace control you can use to pause the automatic refresh. Click the
icon to stop automatic refresh temporarily. You can manually refresh the
workspace if you want. Click the Resume Refresh button to refresh
the workspace and resume automatic refresh.

Following the list of icons, there is text that provides details about filters that are
applied to the contents of the table. For the latest information about changing the
filters, refer to the IBM Tivoli Monitoring User’s Guide.

Viewing errors
The self-monitoring tool provides information about errors that hinder availability
and threaten the system health.

About this task

When you view availability data, a table of error messages is displayed. To restore
the health of the systems, you must resolve these errors.

To view the errors that need to be resolved for TADDM, complete the following
steps:
1. Start the Tivoli Enterprise Portal Server user interface.
2. Go to the IBM Tivoli Monitoring Navigator Physical view. This is the initial
view of the Navigator Physical view:
Enterprise
Operating Platform
The Operating Platform is one or more of the following systems:
v AIX systems
v Linux systems
v Solaris systems
v Windows systems
3. Select Enterprise.
4. Right-click Enterprise, then click Workspace → Tivoli Application Dependency
Discovery Manager Health. The corresponding workspace opens and includes
a table of error messages. The table includes the following two columns of
information:
LocalTimeStamp
The time at the portal client when the status of the situation changes.
ErrorMessage
The text of the error message that includes the date and the time
associated with the message.

Viewing performance data
The self-monitoring tool provides charts and tables with statistics related to
resource performance.

About this task

You can view performance data by performing the following steps:


1. Start the Tivoli Enterprise Portal user interface.
2. Go to the IBM Tivoli Monitoring Navigator Physical view. This is the initial
view of the Navigator Physical view:
Enterprise
Operating Platform
The Operating Platform is one or more of the following systems:
v AIX systems
v Linux systems
v Solaris systems
v Windows systems
3. Select Enterprise.
4. Right-click the Enterprise item, then click Workspace → Tivoli Application
Dependency Discovery Manager Performance. The corresponding workspace
opens. The workspace includes the following information:
v A table with a summary of response times
v A bar chart that graphs current response times
v A plot chart with response times
v A table with a summary of situation events
5. (Optional) To sort the columns of the tables in the workspace, click the column
header. For example, in the response time table, click the LocalTimeStamp
column header to sort the messages by the time stamp. If the messages are
already sorted by time stamp, the sort order (ascending or descending) is
reversed.

Viewing the table summary of response times


You can view the table of response times that is provided by the self-monitoring
tool.

The table of response times that is provided by the self-monitoring tool includes
four columns of information:
LocalTimeStamp
The time at the portal client when the status of the situation changes.
RealTime
The total amount of time, in seconds, used to complete the transaction.
UserTime
The amount of user processing time, in seconds, used to complete the
transaction.
SystemTime
The amount of system processing time, in seconds, used to complete the
transaction.

Viewing the bar chart of current response times
You can view the bar chart of response times that is provided by the
self-monitoring tool.

The bar chart with current response time graphs includes the following
information:
RealTime
The total amount of time, in seconds, used to complete the transaction.
UserTime
The amount of user processing time, in seconds, used to complete the
transaction.
SystemTime
The amount of system processing time, in seconds, used to complete the
transaction.

The y-axis of the chart indicates the time in seconds.

Viewing the plot chart of response times


You can view the plot chart of response times that is provided by the
self-monitoring tool.

The plot chart of response time graph includes the following information:
RealTime
The total amount of time, in seconds, used to complete the transaction.
UserTime
The amount of user processing time, in seconds, used to complete the
transaction.
SystemTime
The amount of system processing time, in seconds, used to complete the
transaction.

The x-axis of the chart graphs the time in 10-second increments. For example, if
data collection begins at 10:22:35 (hh:mm:ss), there are entries along the x-axis for
10:22:35, 10:22:45, and 10:22:55.

The y-axis of the chart indicates the time in seconds.

Working with the table summary of situation events


The table summary of situation events provided by the self-monitoring tool
includes nine columns of information.

The following list provides details for the columns:


v Status: The current status for the situation. The following are the possible
values:
– Acknowledged: The event is acknowledged.
– Deleted: The situation is deleted.
– Problem: The situation is in a problem state.
– Expired: The acknowledgement of the event is expired and the event remains
open.
– Opened: The situation is running and is now true, which opens an event.

– Closed: The situation is running, was true, but is now false.
– Reopened: The acknowledgement was removed before it expired, and the
event is still open (reopened).
– Started: The situation is started.
– Stopped: The situation is stopped.
v Situation name: The name of the situation or policy.
v Display item: If the situation was defined with a display item, it is displayed
here. Otherwise, this cell is empty. (A display item is an attribute designated to
further qualify a situation. With a display item set for a multiple-row attribute
group, the situation continues to look at the other rows in the sampling and
opens more events if other rows qualify. The value is displayed in the event
workspace and in the message log and situation event console views. You can
select a display item when building a situation with a multiple row attribute
group.)
v Source: The source of the situation.
v Impact: The impact of the situation.
v Opened: The time stamp indicating when the situation event was opened.
v Age: The length of time the situation event has existed.
v LocalTimeStamp: The time at the portal client when the status of the situation
changes.
v Type: The type of event. The possible values are sampled and pure.
Sampled
Sampled events occur when a situation becomes true. Situations sample
data at regular intervals. When the situation is true, it causes an event,
which gets closed automatically when the situation becomes false again.
You can also close the event manually.
Pure Pure events are unsolicited notifications. Examples of pure events are an
out-of-paper condition on a printer and an occurrence that adds an entry
to a log. Some monitoring agents have attributes that report pure events,
such as the Windows Event Log and Windows File Change attribute
groups. A situation using one of these attributes can monitor for pure
events. Because of the nature of pure events, they are not closed
automatically like sampled events; you must close the pure event
manually. Alternatively, you can create the situation with an UNTIL
clause.

The table summary of situation events is preceded by a row of icons. The
following list shows these icons and information about each of them:

- Filter Critical
Click the icon to display only Critical events.

- Filter Warning
Click the icon to display only Warning events.

- Filter Informational
Click the icon to display only Information events.

- Filter Open
Click the icon to display only Open events, including any events that were
reopened (acknowledgement removed) or whose acknowledgement
expired.

- Filter Acknowledged
Click the icon to display only Acknowledged events.

- Filter Stopped
Click the icon to display only Stopped situations.
- Filter Problem
Click the icon to display only Problem situations. These are situations that
are in error for some reason.

- Console Pause
A workspace control that you can use to pause the automatic refresh. Click
the icon to stop automatic refresh temporarily. You can manually refresh
the workspace if you want. Click the Resume Refresh button to
refresh the workspace and to resume automatic refresh.

Following the list of icons, there is text that provides details about filters that are
applied to the contents of the table. For the latest information about changing the
filters, refer to the IBM Tivoli Monitoring User’s Guide.

Viewing the infrastructure data


You can use the self-monitoring tool to view infrastructure data and statistics.

About this task

The self-monitoring tool provides charts and tables with statistics related to the
resource infrastructure. To view infrastructure data, complete the following steps:
1. Start the Tivoli Enterprise Portal Server user interface.
2. Go to the IBM Tivoli Monitoring Navigator Physical view. This is the initial
view of the Navigator Physical view:
Enterprise
Operating Platform
The Operating Platform is one or more of the following systems:
v AIX systems
v Linux systems
v Solaris systems
v Windows systems
3. Select Enterprise.
4. Right-click the Enterprise icon, then click one of the following options:
v For Linux servers: Workspace → Tivoli Application Dependency Discovery
Manager Infrastructure - Linux
v For AIX and Solaris servers: Workspace → Tivoli Application Dependency
Discovery Manager Infrastructure - UNIX
The selected workspace opens. The workspace includes the following
information:
v A bar chart that graphs a summary of system memory usage
v Bar charts that graph the memory usage for individual services
v A circular gauge chart that graphs the processing usage for the system
v Circular gauge charts that graph the processing usage for individual services

v A table summary of availability
There are two versions of the TADDM Infrastructure workspace. One version
provides information for Linux servers. The other version provides information
for AIX and Solaris servers.
5. (Optional) To sort the columns of the tables in the workspace, click the column
header. For example, in the system availability table, click the ServiceName
column header to sort the contents by the name of the service. If the messages
are already sorted by service name, the sort order (ascending or descending) is
reversed.

Viewing the bar chart that graphs an information summary


You can view the bar chart, provided by the self-monitoring tool, that graphs an
information summary.

The self-monitoring tool provides a bar chart that graphs a summary of system
information. The bar chart with the summary of system information graphs the
following items:
Total Memory (MB)
The total amount of memory available for the resource.
Memory Used (MB)
The amount of memory currently used by the resource.
Memory Free (MB)
The amount of memory not currently used and available for use by the
resource.

The y-axis of the chart indicates the memory quantity in megabytes.

Viewing bar charts that graph memory usage for individual services
The self-monitoring tool provides a bar chart that graphs the memory usage for
services. The bar chart indicates the amount of memory used by each service.

The x-axis of the chart indicates the memory quantity in kilobytes.

Viewing the circular gauge chart that graphs processor usage


The self-monitoring tool provides circular gauge charts that graph the processing
usage for the system.

The circular gauge chart shows the proportional amount of processing that is being
used. The number displayed below the circular gauge indicates the exact
percentage displayed in the chart.

Viewing circular gauge charts that graph processor usage for individual services
The self-monitoring tool provides circular gauge charts that graph the processing
usage for individual services.

The circular gauge chart shows the proportional amount of processing that is being
used. The number displayed below the circular gauge indicates the exact
percentage displayed in the chart.

Viewing table summary of system availability
You can view the table of system availability that is provided by the
self-monitoring tool.

The table summary of system availability includes two columns of information:


ServiceName
The name of the service.
State The state of the service.

Viewing configuration items


You can view the configuration items tracked by the self-monitoring tool.

About this task

To view the configuration items tracked by the self-monitoring tool, complete the
following steps:
1. Start the Tivoli Enterprise Portal Server user interface.
2. Go to the IBM Tivoli Monitoring Navigator Physical view. This is the initial
view of the Navigator Physical view:
Enterprise
Operating Platform
The Operating Platform is one or more of the following systems:
v AIX systems
v Linux systems
v Solaris systems
v Windows systems
3. Click the Enterprise icon.
4. Right-click the Enterprise icon, then click Workspace → Tivoli Application
Dependency Discovery Manager Configuration Items. The workspace opens.
The workspace includes the following information:
v A bar chart that provides the total number of configuration item changes in
the past week
v A bar chart that provides the totals for major configuration items, including
system items, application items, network items, and storage items
v A plot chart that displays trends over time and among configuration items
v A table with a summary of situation events
5. (Optional) To sort the columns of the tables in the workspace, click the column
header. For example, in the Situation Events Console table, click the
LocalTimestamp column header to sort the messages by the time stamp. If the
messages are already sorted by time stamp, the sort order (ascending or
descending) is reversed.

Viewing total number of configuration item changes in the past week
You can view the total number of configuration item changes in the past week.

The bar chart indicates the total number of configuration item changes over the
last seven days. The x-axis of the chart indicates the quantity.

Viewing totals for configuration items
The bar chart provides a summary of configuration items totals.

The bar chart graphs the following items:


v Total items
v System items
v Application items
v Network items
v Storage items

The y-axis of the chart indicates the quantity of items.

Viewing plot chart of configuration items


You can view the plot chart, provided by the self-monitoring tool, that displays
trends over time and among configuration items.

The x-axis of the chart indicates the time in 10-second increments. For example, if
data collection begins at 10:22:35 (hh:mm:ss), there are entries along the x-axis for
10:22:35, 10:22:45, and 10:22:55. The y-axis of the chart indicates the quantity of
items.

Working with the table summary of situation events


The table summary of situation events provided by the self-monitoring tool
includes nine columns of information.

The following list provides details for the columns:


v Status: The current status for the situation. The following are the possible
values:
– Acknowledged: The event is acknowledged.
– Deleted: The situation is deleted.
– Problem: The situation is in a problem state.
– Expired: The acknowledgement of the event is expired and the event remains
open.
– Opened: The situation is running and is now true, which opens an event.
– Closed: The situation is running, was true, but is now false.
– Reopened: The acknowledgement was removed before it expired, and the
event is still open (reopened).
– Started: The situation is started.
– Stopped: The situation is stopped.
v Situation name: The name of the situation or policy.
v Display item: If the situation was defined with a display item, it is displayed
here. Otherwise, this cell is empty. (A display item is an attribute designated to
further qualify a situation. With a display item set for a multiple-row attribute
group, the situation continues to look at the other rows in the sampling and
opens more events if other rows qualify. The value is displayed in the event
workspace and in the message log and situation event console views. You can
select a display item when building a situation with a multiple row attribute
group.)
v Source: The source of the situation.
v Impact: The impact of the situation.

v Opened: The time stamp indicating when the situation event was opened.
v Age: The length of time the situation event has existed.
v LocalTimeStamp: The time at the portal client when the status of the situation
changes.
v Type: The type of event. The possible values are sampled and pure.
Sampled
Sampled events occur when a situation becomes true. Situations sample
data at regular intervals. When the situation is true, it causes an event,
which gets closed automatically when the situation becomes false again.
You can also close the event manually.
Pure Pure events are unsolicited notifications. Examples of pure events are an
out-of-paper condition on a printer and an occurrence that adds an entry
to a log. Some monitoring agents have attributes that report pure events,
such as the Windows Event Log and Windows File Change attribute
groups. A situation using one of these attributes can monitor for pure
events. Because of the nature of pure events, they are not closed
automatically like sampled events; you must close the pure event
manually. Alternatively, you can create the situation with an UNTIL
clause.

The table summary of situation events is preceded by a row of icons. The
following list shows these icons and information about each of them:

- Filter Critical
Click the icon to display only Critical events.

- Filter Warning
Click the icon to display only Warning events.

- Filter Informational
Click the icon to display only Information events.

- Filter Open
Click the icon to display only Open events, including any events that were
reopened (acknowledgement removed) or whose acknowledgement
expired.

- Filter Acknowledged
Click the icon to display only Acknowledged events.

- Filter Stopped
Click the icon to display only Stopped situations.
- Filter Problem
Click the icon to display only Problem situations. These are situations that
are in error for some reason.

- Console Pause
A workspace control that you can use to pause the automatic refresh. Click
the icon to stop automatic refresh temporarily. You can manually refresh
the workspace if you want. Click the Resume Refresh button to
refresh the workspace and to resume automatic refresh.

Following the list of icons, there is text that provides details about filters that are
applied to the contents of the table. For the latest information about changing the
filters, refer to the IBM Tivoli Monitoring User’s Guide.
Chapter 12. Integration with other Tivoli products
For extended capabilities in managing your IT environment, you can integrate the
Tivoli Application Dependency Discovery Manager (TADDM) with other Tivoli
products, including IBM Tivoli Business Service Manager, IBM Tivoli Monitoring,
and event management systems such as IBM Tivoli Enterprise Console® and IBM
Tivoli Netcool/OMNIbus.

Configuring for launch in context


To see more detailed information about components in your environment, you can
launch TADDM views from other Tivoli applications. To configure your application
to launch TADDM views in context, you must specify a URL.

Views that you can launch from other Tivoli applications


From other Tivoli applications, you can launch both Product Console and Domain
Manager views. You can also launch the details and change history report for a
specified configuration item (CI).

In the Product Console and Domain Manager views, you can see more information
for the following component groupings:
v Application infrastructure
v Physical infrastructure
v Business applications
v Collections
v Business services

If the TADDM server and the application from which TADDM is launched are not
both configured for single sign-on, a sign-on window is shown.
Before you can view additional information in either the Product Console or the
Domain Manager, you must provide a user name and password.

Specifying the URL to launch TADDM views


To launch TADDM views in context from other Tivoli applications, you must
specify a URL.

The URL format for launching in context is:


Protocol://TADDMHostname:TADDMPort/ContextRoot/?queryString

The following list describes the valid values for each variable in the URL format:
Protocol
The Web protocol to use. Valid values are http or https.
TADDMHostname
The host name for the TADDM server to which you are launching.
TADDMPort
The port number for the TADDM server to which you are launching. The
default value is 9430.

ContextRoot
The only valid value is cdm/servlet/LICServlet, which is the relative path to
the Java servlet that is deployed in the Apache Tomcat server.
queryString
Contains name-value pair parameters that are delimited by separators. The
format for a name-value pair is name=value. Use = to separate names and
values, and use & to separate name-value pairs.
The following list describes the valid name-value pairs that can be used in the
queryString variable:
console
Specifies whether to launch into the Product Console (a Java console) or
the Domain Manager (a Web console).
If this parameter is not provided, the Product Console is launched.
The following string values are valid:
v java
v web
target
Specifies whether to launch a new or existing instance of the Product
Console.
If this parameter is not provided, an existing Product Console is launched.
If console=web is specified, the target parameter is not applicable (does not
have any effect).
The following string values are valid:
v existing
v new
view
Specifies that you want to display change history.
The only valid value is changehistory.
days_previous
Specifies the time period (the number of past days) for which to show the
change history of a particular configuration item.
The valid value is a positive integer.
guid
Specifies the Globally Unique Identifier (GUID) for a configuration item.
If the graph parameter is specified with any of the following values, the
guid parameter is optional:
v businessapplications
v applicationinfrastructure
v physicalinfrastructure
If the graph parameter is specified with any other type of topology graph,
the guid parameter is required.
The valid value is a valid string representation of a GUID, as shown in the
following example:
BA2842345F693855A3165A4B5F0D8BDE

You should specify only one GUID for each URL request for launch in
context.
graph
Specifies the type of topology graph to be launched.
If you also specify a configuration item by providing its GUID on the guid
parameter, the requested configuration item is then selected, if it is found
in the topology graph that is specified on this graph parameter.
Valid values regardless of whether the guid parameter is also specified:
v businessapplications
v applicationinfrastructure, except that this value is not valid
for the Enterprise Domain Server
v physicalinfrastructure
Valid values only if the guid parameter is also specified, except that
these values are not valid for the Enterprise Domain Server:
v For business application objects:
– app_software for Business Application Software Topology
– app_physical for Business Application Physical Topology
v For business service objects:
– bus_svc_software for Business Service Software Topology
– bus_svc_physical for Business Service Physical Topology
v For collection objects:
– collection_relationship for Collection Relationship
Topology
– collection_physical for Collection Physical Topology
username
Specifies the user name used to log in to TADDM.
password
Specifies the password used to log in to TADDM.

Examples of how to specify the URL

The following examples show how to specify the URL to launch TADDM views:
URL for launching the Product Console, specifying only a GUID
http://home.taddm.com:9430/cdm/servlet/LICServlet?guid=BA2842345F693855A3165A4B5F0D8BDE

URL for launching the Product Console, specifying only a graph name
http://home.taddm.com:9430/cdm/servlet/LICServlet?graph=businessapplications

URL for launching the Product Console, specifying a graph name with GUID
http://home.taddm.com:9430/cdm/servlet/LICServlet?graph=app_software
&guid=213l3jlk120bksdf

URL for launching the Product Console to display the change history view, with
the change history starting 20 days prior to the current date
http://home.taddm.com:9430/cdm/servlet/LICServlet?guid=BA2842345F693855A3165A4B5F0D8BDE
&view=changehistory&days_previous=20

URL for launching the Domain Manager, specifying a graph name


http://home.taddm.com:9430/cdm/servlet/LICServlet?console=web
&graph=applicationinfrastructure

URL for launching the Domain Manager, without entering authorization
information separately
http://home.taddm.com:9430/cdm/servlet/LICServlet?username=administrator
&password=adminpwd&console=web&guid=BA2842345F693855A3165A4B5F0D8BDE

Important: You must only use credentials as part of the URL for launching
in context if you are using a trusted connection because the
user name and password are not encrypted.

Sending change events to external systems


You can configure TADDM to notify an external event-handling system when a
change to a discovered resource is detected.

To send change events from TADDM, you must have one or more of the following
event-handling systems installed:
v IBM Tivoli Monitoring 6.2.1 Fixpack 2, or later
v IBM Tivoli Enterprise Console version 3.9, or later
v IBM Tivoli Netcool/OMNIbus 7.1, or later, including the Event Integration
Framework (EIF) Probe
If you want to send events to IBM Tivoli Netcool/OMNIbus 7.1 and IBM Tivoli
Enterprise Console 3.9 simultaneously, IBM Tivoli Enterprise Console should have
Fix Pack 7, or later, installed. If Fix Pack 6, or earlier, is installed, additional
configuration must be performed.

When a discovery completes, TADDM checks for changes to items being tracked
by external event-handling systems. If any are detected, they are sent, using EIF,
directly to IBM Tivoli Netcool/OMNIbus and/or IBM Tivoli Enterprise Console,
and to IBM Tivoli Monitoring using the Universal Agent.

The Universal Agent converts the received notifications to asynchronous events,
and forwards the data to the IBM Tivoli Enterprise Monitoring Server component
of IBM Tivoli Monitoring. The IBM Tivoli Monitoring Server stores the events and
uses them to evaluate situations. The events are then passed to the IBM Tivoli
Enterprise Portal for display.

IBM Tivoli Netcool/OMNIbus and IBM Tivoli Enterprise Console servers process
received events according to their internal rules and display them.

To set up the sending of change events from TADDM to external event-handling
systems, you must enable change events in TADDM, and configure each external
recipient to handle incoming events, as appropriate.

Configuring TADDM
To send change events, you must configure TADDM with information about the
event-handling systems to which you want to send change events.

About this task

To enable the sending of change event information, complete the following steps:
1. To enable change events, in the $COLLATION_HOME/etc/collation.properties
file, set the following property: com.ibm.cdb.omp.changeevent.enabled=true
2. To configure which resources are tracked for changes and to which
event-handling systems the events are sent, edit the $COLLATION_HOME/etc/

EventConfig.xml file. For information about the format to use in the
EventConfig.xml file, see the comments within the file and the sketch that
follows these steps.
3. If you specified an IBM Tivoli Enterprise Console or IBM Tivoli
Netcool/OMNIbus event-handling system in the EventConfig.xml file, create a
corresponding EIF property file for each system type. For examples of EIF
properties files, see the following examples:
v $COLLATION_HOME/etc/tec.eif.properties
v $COLLATION_HOME/etc/omnibus.eif.properties
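
The following fragments are a minimal sketch that combines these steps. The
listener format follows the example in "Configuring change events for a business
system"; the recipient name (my-omnibus), object type, and target name are
illustrative assumptions, and the recipient must match an event-handling system
that you define in the EventConfig.xml file.

# In $COLLATION_HOME/etc/collation.properties, enable change events
com.ibm.cdb.omp.changeevent.enabled=true

<!-- In $COLLATION_HOME/etc/EventConfig.xml, track a resource and name a recipient -->
<listener object="ComputerSystem" enabled="true">
  <alert recipient="my-omnibus"/>
  <attribute name="name" operator="equals">
    <value>payroll-db-01.example.com</value>
  </attribute>
</listener>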

Configuring IBM Tivoli Netcool/OMNIbus


You do not need to perform any further configuration to ensure that IBM Tivoli
Netcool/OMNIbus Version 7.3 or later receives change events that TADDM sends.
However, to aggregate and customize the event data that is displayed in previous
versions of Tivoli Netcool/OMNIbus, you can define event-handling logic.

Before you begin

If you want to send events to IBM Tivoli Netcool/OMNIbus 7.1 and IBM Tivoli
Enterprise Console 3.9 simultaneously, IBM Tivoli Enterprise Console 3.9 Fix Pack
7 or later must be installed. If Fix Pack 6 or earlier is installed, you must use two
instances of the event module script (changeevents.sh on UNIX operating systems
or changeevents.bat on Windows operating systems), with each one loading a
separate EIF JAR file.

About this task

The default behavior in IBM Tivoli Netcool/OMNIbus versions prior to Version 7.3
is for all events from an event module to be combined into a single event, with the
Count attribute set to display the number of events that are contained in the
combined event. To modify this behavior, complete the following steps:
1. On the TADDM server, open the following file for editing:
$COLLATION_HOME/etc/omnibus.eif.properties
2. Set the following TADDMEvent_Slot property values:
TADDMEvent_Slot_sub_source=$TADDM_GUID
TADDMEvent_Slot_origin=$TADDM_OBJECT_NAME
TADDMEvent_Slot_hostname=$TADDM_OBJECT_NAME
TADDMEvent_Slot_msg='$TADDM_HOST $TADDM_PORT
$TADDM_CLASS_NAME $TADDM_OBJECT_NAME
$TADDM_ATTRIBUTE_NAME
$TADDM_CHANGE_TYPE $TADDM_GUID'

Configuring IBM Tivoli Enterprise Console


You do not need to perform any further configuration to ensure that change events
sent by TADDM are received by IBM Tivoli Enterprise Console, but you can define
event-handling logic to aggregate and customize the event data displayed.

Before you begin


If you want to send events to IBM Tivoli Netcool/OMNIbus 7.1 and IBM Tivoli
Enterprise Console 3.9 simultaneously, IBM Tivoli Enterprise Console should have
Fix Pack 7, or later, installed. If Fix Pack 6, or earlier, is installed, you must use
two instances
of the event module script (changeevents.sh on UNIX operating systems, or
changeevents.bat on Windows operating systems), with each one loading a
separate EIF JAR file.
About this task

IBM Tivoli Enterprise Console event classes define each type of event. You can
define a TADDM event class if you do not want to use the generic TEC_Notice
class. To define a customized event class, complete the following steps:
1. Using BAROC notation, define a customized event class for TADDM change
events.
TEC_CLASS: TADDMEventClass ISA TEC_Notice
DEFINES {
TADDMEvent_Slot_TADDM_HOST: STRING;
TADDMEvent_Slot_TADDM_PORT: STRING;
TADDMEvent_Slot_TADDM_GUID: STRING;
TADDMEvent_Slot_TADDM_OBJECT_NAME: STRING;
TADDMEvent_Slot_TADDM_CHANGE_TYPE: STRING;
TADDMEvent_Slot_TADDM_CHANGE_TIME: STRING;
TADDMEvent_Slot_TADDM_CLASS_NAME: STRING;
TADDMEvent_Slot_TADDM_ATTRIBUTE_NAME: STRING;
TADDMEvent_Slot_TADDM_OLD_VALUE: STRING;
TADDMEvent_Slot_TADDM_NEW_VALUE: STRING;
severity: default = CRITICAL;
};
END
2. Import the event class into IBM Tivoli Enterprise Console. For information
about how to do this, see the IBM Tivoli Enterprise Console Rule Developer’s
Guide.
3. Update $COLLATION_HOME/etc/tec.eif.properties to reflect the new event class.

Configuring an IBM Tivoli Monitoring data provider


You can configure the Universal Agent initialization file to define a new data
provider.

About this task

To configure an IBM Tivoli Monitoring data provider, complete the following steps:
1. If you are running the Universal Agent on a Windows machine, complete the
following steps:
a. On the Windows machine where the Universal Agent is installed, click Start
→ IBM Tivoli Monitoring → Manage Tivoli Monitoring Services.
b. Right-click the Universal Agent and click Reconfigure.
c. In each of the two Agent Advanced Configuration windows, click OK.
d. To update the Universal Agent initialization file, click Yes. The KUMENV
file is opened in the system text editor.
e. Set the KUMA_STARTUP_DP value to POST:
KUMA_STARTUP_DP=POST

Note: If the Universal Agent is already configured to use another data
provider, separate the POST value with a comma, for example:
KUMA_STARTUP_DP=ASFS,POST
f. Add the required POST parameter information to the KUMENV file:
*----------------------------------------*
* TADDM POST DP Parameters *
*----------------------------------------*
KUMP_POST_DP_PORT=7575
KUMP_POST_GROUP_NAME=TADDM
KUMP_POST_APPL_TTL=14400

g. Save the KUMENV file, and close it.
h. To configure the agent, click Yes.
i. In the Manage Tivoli Enterprise Monitoring Services window, click
Universal Agent → Start.
j. In the system text editor, create a text file. Enter the following information in
the file:
//APPL CONFIGCHANGE
//NAME dpPost E 3600
//ATTRIBUTES ';'
Post_Time T 16 Caption{Time}
Post_Origin D 32 Caption{Origination}
Post_Ack_Stamp D 28 Caption{Event time stamp}
Comp_Type D 512 Caption{Component type}
Comp_Name D 512 Caption{Component name}
Comp_Guid D 512 Caption{Component GUID}
Change_Type D 512 Caption{Change type}
Chg_Det_Time D 512 Caption{Change detection time}
Chg_Attr D 512 Caption{Changed attribute}
Srv_Addr D 512 Caption{TADDM server}
k. Save the file as %ITM_HOME%\TMAITM6\metafiles\KUMPOST.

Note: Ensure that you spell the file name, KUMPOST, with uppercase letters,
as shown here.
l. Open a Windows command prompt and navigate to the %ITM_HOME%\TMAITM6
folder.
m. Run the KUMPCON.exe program to validate and import the KUMPOST metafile.
n. In the Manage Tivoli Monitoring Services window, right-click the Universal
Agent, and select Recycle.
2. If you are running the Universal Agent on a UNIX or Linux machine, complete
the following steps:
a. Reconfigure the Universal Agent using the following command:
itmcmd config -A um

When you are prompted for the data provider, enter POST.
b. In the $ITM_HOME/config directory, make a backup copy of the um.config
file, and then add the following entries to the original copy of the file:
# TADDM POST DP Parameters
KUMP_POST_DP_PORT=7575
KUMP_POST_GROUP_NAME=TADDM
KUMP_POST_APPL_TTL=14400
c. In the $ITM_HOME/interp/um/metafiles directory, create a text file. Enter the
following information in the file:
//APPL CONFIGCHANGE
//NAME dpPost E 3600
//ATTRIBUTES ';'
Post_Time T 16 Caption{Time}
Post_Origin D 32 Caption{Origination}
Post_Ack_Stamp D 28 Caption{Event time stamp}
Comp_Type D 512 Caption{Component type}
Comp_Name D 512 Caption{Component name}
Comp_Guid D 512 Caption{Component GUID}
Change_Type D 512 Caption{Change type}
Chg_Det_Time D 512 Caption{Change detection time}
Chg_Attr D 512 Caption{Changed attribute}
Srv_Addr D 512 Caption{TADDM server}
d. Save the file as KUMPOST.

Note: Ensure that you spell the file name, KUMPOST, with uppercase letters,
as shown here.
e. Restart the Universal Agent using the following commands:
itmcmd agent stop um
itmcmd agent start um
f. Run the $ITM_HOME/bin/um_console command to validate and refresh the
KUMPOST metafile.

Creating configuration change situations in IBM Tivoli Monitoring
You can use the Situation function in the Tivoli Enterprise Portal to monitor change
events and to trigger situations that are based on the information in a change
event.

About this task

To create a configuration change situation in IBM Tivoli Monitoring, complete the
following steps:
1. In the Navigator pane of IBM Tivoli Enterprise Portal, right-click the node that
contains the change event report. Click Situations.
2. In the "Situations for node_name" window, right-click Universal Data Provider.
Click Create New. The Create Situation or Rule window is displayed.
3. In the Name field, type the name of the situation. For example,
ConfigurationChanged.
4. In the Description field, type the description of the situation. For example, A
change to a tracked object was detected by TADDM.
5. From the Monitored Application list, select Universal Data Provider.
6. Ensure that the Correlate Situations across Managed Systems check box is
clear.
7. Click OK. The ″Select condition″ window is displayed.
8. From the Attribute Group list, select DPPOST.
9. From the Attribute Item list, select Component name.
10. Click OK. The Formula tab for the situation is displayed.
11. Configure the situation so that it is triggered when the component name
matches the name of the resource in your environment that you want to
monitor.
12. Click OK.

Results

When configuration change events are received, their component name is checked.
If the component name matches that of the component you have specified in the
situation formula, the configured situation is triggered.

Creating detail links in configuration change event reports in IBM Tivoli Monitoring
You can create links in a report table to a workspace displaying change history and
details directly from the TADDM server. These links give more detailed
information than what is displayed in a report.

About this task

To create a link in a configuration change event report to more detailed change
event information, complete the following steps:
1. To create a workspace to display the information, complete the following
steps:
a. In the Navigator pane, right-click the node within which you want to
contain the workspace. Click File → Save workspace as. The Save
Workspace As window is displayed.
b. In the Name field, type the name of the workspace. For example,
ConfigChangeDetails.
c. In the Description field, type a description of the workspace. For example,
Generic workspace for the change event table.
d. Select the Only selectable as the target of a Workspace Link check box.
e. Click OK.
2. To configure the workspace if you are using IBM Tivoli Monitoring 6.2, or
earlier, complete the following steps:
a. Configure the workspace to have one navigator pane and two browser
panes.
b. In the Location field of one of the browser panes, type the URL of the
Change History view in TADDM. When you have typed the URL, do not
press Enter.
http://$taddm_server$:$taddm_port$/cdm/servlet/LICServlet?
view=changehistory&hoursback=10000&console=web&guid=$taddm_guid$

The hoursback parameter specifies the number of hours for which change
events are displayed. For example, setting hoursback to 6 displays all
change events in the previous six hours.
c. In the Location field of the second of the browser panes, type the URL of
the Object Details view in TADDM. When you have typed the URL, do not
press Enter.
http://$taddm_server$:$taddm_port$/cdm/servlet/LICServlet?console=web
&guid=$taddm_guid$
d. To save the new workspace, click File → Save.
To configure the workspace if you are using IBM Tivoli Monitoring 6.2.1, or
later, complete the following steps:
a. Configure the workspace to have one navigator pane and two browser
panes.
b. Click Edit → Properties.
c. In the Browser pane, select the first instance of Getting Started.
d. In the Style pane, select Use Provided Location.
e. Click OK.
f. In the Location field of one of the browser panes, type the URL of the
Change History view in TADDM. When you have typed the URL, do not
press Enter.
http://$taddm_server$:$taddm_port$/cdm/servlet/LICServlet?
view=changehistory&hoursback=10000&console=web&guid=$taddm_guid$

The hoursback parameter specifies the number of hours for which change
events are displayed. For example, setting hoursback to 6 displays all
change events in the previous six hours.

g. In the Browser pane, select the second instance of Getting Started.
h. In the Style pane, select Use Provided Location.
i. Click OK.
j. In the Location field of the second of the browser panes, type the URL of
the Object Details view in TADDM. When you have typed the URL, do not
press Enter.
http://$taddm_server$:$taddm_port$/cdm/servlet/LICServlet?console=web
&guid=$taddm_guid$
k. To save the new workspace, click File → Save.
Immediately after you have typed the URL into the Location field, do not
press Enter, but save the workspace.
3. Open IBM Tivoli Enterprise Portal. In the Report pane, right-click a row in the
Report table.
4. Click Link To → Link Wizard. The Welcome page of the Workspace Link
Wizard is displayed.
5. Click Create a new link. Click Next. The Link Name page of the Workspace
Link Wizard is displayed.
6. In the Name field, type the name of the link. For example, Show Details.
7. In the Description field, type a description of the link. For example, Link to
details.
8. Click Next. The Link Type page of the Workspace Link Wizard is displayed.
9. Click Absolute. Click Next. The Target Workspace page of the Workspace Link
Wizard is displayed.
10. In the Navigator panel, select the node containing the workspace you created.
In the Workspace panel, select the workspace you created.
11. Click Next. The Parameters page of the Workspace Link Wizard is displayed.
12. You must add three symbols: "taddm_server", "taddm_port", and
"taddm_guid". To add a symbol, complete the following steps:
a. Click Add Symbol. The Add Symbol window is displayed.
b. In the Symbol field, type the name of the symbol.
c. Click OK.
13. For each symbol you create, you must link it to an attribute representing the
correct column in the report.
v Link the "taddm_server" symbol to the TADDM server attribute.
v Link the "taddm_port" symbol to the TADDM Port attribute.
v Link the "taddm_guid" symbol to the Component GUID attribute.
To link a symbol to an attribute, complete the following steps:
a. In the Parameters page of the Workspace Link Wizard, select the symbol
you want to link to a report column.
b. Click Modify Expression. The Expression Editor window is displayed.
c. Click Symbol. The Symbols window is displayed.
d. Navigate to Attributes, and select the attribute you want to link to the
symbol. Click OK.
e. In the Expression Editor window, click OK. The Parameters page of the
Workspace Link Wizard is displayed.
14. Click Next. The Summary page of the Workspace Link Wizard is displayed.
15. Click Finish.

Results

If you have active events in your change event report, a link icon is displayed next
to each table row. To move to the target workspace, click the link icon and select
Show Details. In the table row, values are substituted for symbols. In the
workspace, the Change History and Object Details panels are launched in context.

Configuring change events for a business system


You can use the change event functionality to send a change event whenever a
business system is changed.

About this task

By default, TADDM does not indicate a business system as changed if one of the
computers it depends on has changed. To enable the sending of change events for
business systems, complete the following steps:
1. Open $COLLATION_HOME/etc/propagationserver.xml in an appropriate editor.
2. In the Computer System section, for the application and business system
relationship elements, set the value of the enabled attribute to true. For
example:
<relationship enabled="true" source="sys.ComputerSystem" attribute="groups"
target="app.Application" targetAttribute="true"
collectionType="app.FunctionalGroup" radius="1"/>

<relationship enabled="true" source="sys.ComputerSystem" attribute="components"
target="sys.BusinessSystem" targetAttribute="true"/>
3. Restart TADDM.
4. Create a listener for the business system in the change event configuration
$COLLATION_HOME/etc/EventConfig.xml. In the following example, the event
recipient is mycompany-itm, and the business system name is MyBiz.
<listener object="ITSystem" enabled="true">
<alert recipient="mycompany-itm"/>
<attribute name="name" operator="equals">
<value>MyBiz</value>
</attribute>
</listener>

Integration with IBM Tivoli Business Service Manager


To generate explicit relationship information on discovered resources, run the
explicitrel.sh script or call the generateExplicitRelationships API. Using either
the script or calling the API can take a long time.

The explicitrel.sh script is located in the following directory:


v For Linux, Solaris, AIX, and Linux on System z operating systems:
/opt/IBM/cmdb/dist/bin
v For Windows operating systems: \opt\IBM\cmdb\dist\bin

The explicitrel.sh script takes one optional parameter. There are three options
for specifying the parameter:
v If the parameter is not supplied, the program runs in delta mode. In delta mode,
explicit relationships are created only from the data that was added since the
last time the program ran.

v If the parameter supplied is 0, the program runs in full refresh mode. In full
refresh mode, each time the program runs, it deletes all explicit relationships
and creates new instances of the explicit relationships in the database.
v If the parameter supplied is 1, the program runs in delta mode. This mode is the
same as providing no parameter.

There are two API methods:
1. generateExplicitRelationships()
v This method defaults to the delta operation; it is equivalent to calling generateExplicitRelationships(true).
2. generateExplicitRelationships(boolean deltaGen)
v If true is passed in, a delta operation is performed, as if the script parameter supplied is 1.
v If false is passed in, a full refresh operation is performed, as if the script parameter supplied is 0.

If you need to generate explicit relationship data but do not need to call the methods programmatically, you can run the included explicitrel.sh script (explicitrel.bat on Windows operating systems) from the command line.
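
As a quick reference, the following command-line sketch illustrates the three invocation modes described above. The installation path is the default Linux location shown earlier; substitute your own installation directory, and on Windows operating systems use explicitrel.bat instead.

# Delta mode (default): create explicit relationships only for data
# added since the last time the program ran
cd /opt/IBM/cmdb/dist/bin
./explicitrel.sh

# Delta mode, selected explicitly (equivalent to the default)
./explicitrel.sh 1

# Full refresh mode: delete all explicit relationships and re-create them
./explicitrel.sh 0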

Integration with IBM Tivoli Monitoring


Depending on the specific tasks that you must do in your IT environment, you can
use the integration capabilities that are available between the IBM Tivoli
Application Dependency Discovery Manager (TADDM) and IBM Tivoli
Monitoring.

For the tasks that you might need to do, Table 7 lists the integration capabilities that you should use.
Table 7. User tasks with corresponding integration capabilities to use

Gain insight into availability by viewing the operating system settings, application settings, and change history of systems that are monitored by IBM Tivoli Monitoring:
v “IBM Tivoli Monitoring sensor”
v “Launch in context” on page 124

Ensure that operating systems and IBM DB2 databases that are discovered by TADDM are monitored for availability:
v “IBM Tivoli Monitoring sensor”
v “Monitoring Coverage report” on page 123

View the availability and performance of systems that are discovered by TADDM:
v “IBM Tivoli Monitoring DLA” on page 123
v “Monitoring Coverage report” on page 123

Monitor a business application for configuration changes:
v “IBM Tivoli Monitoring sensor”
v “Change events” on page 124
v “Launch in context” on page 124

Monitor the availability of TADDM:
v “Self-monitoring tool” on page 124

IBM Tivoli Monitoring sensor


The Tivoli Application Dependency Discovery Manager (TADDM) can perform Level 1 or Level 2 discoveries by using an IBM Tivoli Monitoring 6.2.1 or later infrastructure. The IBM Tivoli Monitoring sensor discovers configuration items in the IBM Tivoli Monitoring environment by using only the credentials for your Tivoli Enterprise Portal Server rather than the credentials for each computer that the portal server monitors. TADDM Level 3 discovery is not supported.

TADDM leverages the Tivoli Monitoring infrastructure in the following two ways:
v By obtaining the list of Tivoli Monitoring endpoints from the Tivoli Enterprise
Portal Server
v By using the Tivoli Monitoring infrastructure to run the CLI commands for the
IBM Tivoli Monitoring sensor on discovery targets and to capture the output of
those commands

This capability provides the following benefits:


v Rapid deployment of TADDM in existing Tivoli Monitoring environments
v No need for TADDM anchor and gateway servers
v No need to define scope sets that contain computers to scan. Only a scope with
a single entry for the Tivoli Enterprise Portal Server is required.
v No need to define an access list (operating system credentials) for discovery targets. Only a single access list entry for the Tivoli Enterprise Portal Server GUI logon is required.

The “IBM Tivoli Monitoring sensor” topic in the IBM Tivoli Application Dependency
Discovery Manager Sensor Reference describes the details of configuring the Tivoli
Monitoring sensor and contains troubleshooting information for any known
problems that might occur when deploying or using the sensor.

IBM Tivoli Monitoring DLA


The IBM Tivoli Monitoring discovery library adapter (DLA) extracts configuration
data from Tivoli Monitoring about the computer systems and databases that Tivoli
Monitoring monitors. The output of the DLA is a formatted XML file that contains
these components and their relationships. The output of the DLA also includes
data that represents Tivoli Monitoring agents and data that is used for launching
availability views from TADDM.

Instructions for running the DLA are included in the IBM Tivoli Monitoring 6.2.1 documentation that is available at http://publib.boulder.ibm.com/infocenter/tivihelp/v15r1/index.jsp?topic=/com.ibm.itm.doc_6.2.1/main_win305.htm.

For information about loading the DLA-exported data into TADDM, see the
information about the bulk load program in the IBM Tivoli Application Dependency
Discovery Manager User’s Guide.
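
If you want to see what this loading step might look like in practice, the following command-line sketch is an illustration only. The script name loadidml.sh and the -f option shown here are assumptions based on typical TADDM installations and are not confirmed by this guide; see the bulk load program information in the User’s Guide for the authoritative command and options.

# Illustrative sketch only; verify the exact command and options in the
# User's Guide before running the bulk load program.
cd /opt/IBM/cmdb/dist/bin
./loadidml.sh -f /tmp/itm_dla_books    # directory containing the DLA-exported IdML files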

For information on troubleshooting the DLA, see “IBM Tivoli Monitoring DLA problems” in the IBM Tivoli Application Dependency Discovery Manager Troubleshooting Guide.

Monitoring Coverage report


The Monitoring Coverage report shows details for the operating system and database components in your environment that are monitored by IBM Tivoli Monitoring 6.1 or later agents. The report contains two main sections: the Management Software System (MSS) section and the Monitoring Coverage Summary. You can run this report in the Monitoring Coverage pane.



The Management Software System section provides an inventory of the IBM Tivoli Monitoring agents that are installed and includes launch-in-context links to workspaces in IBM Tivoli Monitoring. The Monitoring Coverage Summary lists monitored and unmonitored systems, which helps you monitor and maintain monitoring agents.

Level 1 discovery using the IBM Tivoli Monitoring sensor enables the Monitoring Coverage Summary section of the Monitoring Coverage report. Loading the IBM Tivoli Monitoring discovery library adapter (DLA) data enables both the Monitoring Coverage Summary and the Management Software System sections of the Monitoring Coverage report.

Also see the information about the Monitoring Coverage pane in the IBM Tivoli
Application Dependency Discovery Manager User’s Guide.

Change events
You can configure TADDM to notify IBM Tivoli Monitoring when a change to a discovered resource is detected. For more information, see the following topics:
v “Sending change events to external systems” on page 114: You can configure TADDM to notify an external event-handling system when a change to a discovered resource is detected.
v “Configuring TADDM” on page 114: To send change events, you must configure TADDM with information about the event-handling systems to which you want to send change events.
v “Configuring an IBM Tivoli Monitoring data provider” on page 116: You can configure the Universal Agent initialization file to define a new data provider.
v “Configuring change events for a business system” on page 121: You can use the change event functionality to send a change event whenever a business system is changed.

Self-monitoring tool
The self-monitoring tool of the Tivoli Application Dependency Discovery Manager
(TADDM) provides detailed tracking of performance and availability of the
TADDM server and its component processes. The self-monitoring tool is
instrumented with IBM Tivoli Monitoring 6.x.
Table 8. Topics that contain more information about the self-monitoring tool
v Overview: Chapter 11, “Self-monitoring tool overview,” on page 99
v Installation instructions, including troubleshooting the installation of the tool: Installation Guide, “Installing the self-monitoring tool”
v Troubleshooting problems during use of the tool: Troubleshooting Guide, “Self-monitoring tool problems”

Launch in context
With launch in context, you can view Tivoli Application Dependency Discovery
Manager (TADDM) data within the Tivoli Enterprise Portal views of IBM Tivoli
Monitoring.



By configuring topology views to show in the Tivoli Enterprise Portal, you can
view physical infrastructure, application infrastructure, and business system
topologies within Tivoli Enterprise Portal availability views.
Table 9. Topics that contain more information about launch in context
v URLs that are required for display of topology views: “Configuring for launch in context” on page 111
v Instructions for configuring launch in context for viewing the operating system settings, application settings, and change history for incoming change events: “Creating detail links in configuration change event reports in IBM Tivoli Monitoring” on page 118

Tivoli Directory Integrator


When you purchase the IBM Tivoli Application Dependency Discovery Manager
(TADDM), you also receive the Tivoli Directory Integrator, which enables you to
integrate TADDM with other data sources.
For more information, see the following resources:
v Tivoli Directory Integrator documentation at Tivoli Documentation Central
v TADDM integration scenarios at Tivoli Wiki Central



Appendix. Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing
Legal and Intellectual Property Law
IBM Japan, Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement might not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product and use of those Web sites is at your own risk.



IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:

IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.

Such information may be available, subject to appropriate terms and conditions, including in some cases payment of a fee.

The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.

Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary. Users of this document should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

All statements regarding IBM’s future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.

This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.

If you are viewing this information in softcopy form, the photographs and color
illustrations might not be displayed.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the Web at “Copyright and trademark information” at http://www.ibm.com/legal/copytrade.shtml.



Java and all Java-based trademarks and logos are trademarks or
registered trademarks of Sun Microsystems, Inc. in the United States,
other countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft and Windows are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other
countries.

Other company, product, and service names may be trademarks or service marks
of others.





