
CA Event Integration

Product Guide
r2.5

This documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the Documentation) is for your informational purposes only and is subject to change or withdrawal by CA at any time. This Documentation may not be copied, transferred, reproduced, disclosed, modified or duplicated, in whole or in part, without the prior written consent of CA. This Documentation is confidential and proprietary information of CA and may not be disclosed by you or used for any purpose other than as may be permitted in (i) a separate agreement between you and CA governing your use of the CA software to which the Documentation relates; or (ii) a separate confidentiality agreement between you and CA. Notwithstanding the foregoing, if you are a licensed user of the software product(s) addressed in the Documentation, you may print or otherwise make available a reasonable number of copies of the Documentation for internal use by you and your employees in connection with that software, provided that all CA copyright notices and legends are affixed to each reproduced copy. The right to print or otherwise make available copies of the Documentation is limited to the period during which the applicable license for such software remains in full force and effect. Should the license terminate for any reason, it is your responsibility to certify in writing to CA that all copies and partial copies of the Documentation have been returned to CA or destroyed. TO THE EXTENT PERMITTED BY APPLICABLE LAW, CA PROVIDES THIS DOCUMENTATION AS IS WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR NONINFRINGEMENT. 
IN NO EVENT WILL CA BE LIABLE TO YOU OR ANY THIRD PARTY FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS DOCUMENTATION, INCLUDING WITHOUT LIMITATION, LOST PROFITS, LOST INVESTMENT, BUSINESS INTERRUPTION, GOODWILL, OR LOST DATA, EVEN IF CA IS EXPRESSLY ADVISED IN ADVANCE OF THE POSSIBILITY OF SUCH LOSS OR DAMAGE. The use of any software product referenced in the Documentation is governed by the applicable license agreement and such license agreement is not modified in any way by the terms of this notice. The manufacturer of this Documentation is CA. Provided with Restricted Rights. Use, duplication or disclosure by the United States Government is subject to the restrictions set forth in FAR Sections 12.212, 52.227-14, and 52.227-19(c)(1)-(2) and DFARS Section 252.227-7014(b)(3), as applicable, or their successors. Copyright 2010 CA. All rights reserved. All trademarks, trade names, service marks, and logos referenced herein belong to their respective companies.

CA Technologies Product References


This document references the following CA Technologies products:

CA Catalyst
CA CMDB
CA eHealth Performance Manager
CA eHealth TrapEXPLODER
CA NSM
CA OPS/MVS Event Management and Automation (CA OPS/MVS EMA)
CA Spectrum Infrastructure Manager (CA Spectrum)
CA Spectrum Service Assurance (CA Spectrum SA)
CA SystemEDGE
CA SYSVIEW Performance Management (CA SYSVIEW PM)
CA Wily Introscope
CA XOsoft Replication and High Availability (CA XOsoft)
Unicenter Web Reporting Server (WRS)

Contact CA Technologies
Contact CA Support

For your convenience, CA Technologies provides one site where you can access the information you need for your Home Office, Small Business, and Enterprise CA Technologies products. At http://ca.com/support, you can access the following:

Online and telephone contact information for technical assistance and customer services
Information about user communities and forums
Product and documentation downloads
CA Support policies and guidelines
Other helpful resources appropriate for your product

Provide Feedback

If you have comments or questions about CA Technologies product documentation, you can send a message to techpubs@ca.com. If you would like to provide feedback about CA Technologies product documentation, complete our short customer survey, which is available on the CA Support website at http://ca.com/docs.

Contents
Chapter 1: Introduction 11

CA Event Integration ............................................................ 11
Product Packaging and Documentation ............................................. 12
CA Spectrum SA Usage Overview ................................................... 13
Usage with Other Products ....................................................... 13
Architecture Overview ........................................................... 14
Event Processing ................................................................ 15
    Integration Framework ....................................................... 16
    Adaptors .................................................................... 16
    Core Processing Engine ...................................................... 18
    Connectors and Catalogs ..................................................... 18
Visualization ................................................................... 19
    Administrative Interface .................................................... 19
    Dashboard ................................................................... 20
    Reporting ................................................................... 20

Chapter 2: Installation 21

Component Overview .............................................................. 21
    Manager ..................................................................... 21
    Connector ................................................................... 22
Installation Considerations ..................................................... 22
Database User Security .......................................................... 23
Component Installation .......................................................... 24
    Install the Manager ......................................................... 24
    Install the Connector on Windows ............................................ 27
    Install the Connector on Solaris and Linux .................................. 29
    Install the Manager and Connector on the Same Server ........................ 31
    Installation Troubleshooting ................................................ 32
Windows Services ................................................................ 33
Solaris and Linux System Daemons ................................................ 34
Silent Installation ............................................................. 35
    Perform a Silent Installation Using a Provided Response File ................ 35
    Create a Response File ...................................................... 37
    Perform a Silent Installation Using a Created Response File ................. 39
Uninstall CA Event Integration on Windows ....................................... 40
    Clean up Windows User Information ........................................... 40


Uninstall CA Event Integration on Solaris and Linux ............................. 41
Security Considerations ......................................................... 42

Chapter 3: Integrations and Deployment Scenarios 43

Integrating with CA Spectrum .................................................... 43
    CA Spectrum Implementation and Configuration ................................ 44
    CA Spectrum Usage ........................................................... 52
    CA Spectrum Deployment Scenarios ............................................ 63
    Resolving CA Spectrum Models ................................................ 72
Integrating with CA NSM ......................................................... 76
    CA NSM Implementation and Configuration ..................................... 77
    CA NSM Usage ................................................................ 79
    CA NSM Deployment Scenarios ................................................. 85
Integrating with CA Spectrum SA ................................................. 90
    CA Spectrum SA Implementation and Configuration ............................. 91
    CA Spectrum SA Usage ........................................................ 94
    CA Spectrum SA Deployment Scenarios ........................................ 101
Integrating with Mainframe Products ............................................ 108
    Mainframe Products Implementation and Configuration ........................ 108
    Mainframe Deployment Scenario: CA OPS/MVS EMA to CA Spectrum SA ............ 109
Integrating with HP Business Availability Center ............................... 111
    HP BAC Implementation and Configuration .................................... 112
Integrating with CA Catalyst Connectors ........................................ 113
    CA Catalyst Connector Implementation and Configuration ..................... 114
    How to Implement a CA Catalyst Connector in CA Event Integration ........... 115
Tiered CA Event Integration Implementation ..................................... 116
    How to Configure Tiered Connector Architecture ............................. 117
Other Integrations ............................................................. 118
    Database Integration ....................................................... 118
    Windows Event Log Integration .............................................. 119
    SNMP Traps Integration ..................................................... 119
    Application Log Files Integration .......................................... 120
    Web Services Eventing Integration .......................................... 121
Tutorials ...................................................................... 121

Chapter 4: Configuration and Administration 123

Configuration Basics ........................................................... 123
How to Create and Deploy Catalog Configurations ................................ 124
Open the Administrative Interface .............................................. 124
    Refresh the Administrative Interface ....................................... 125
Administrative Tools ........................................................... 125
    Dashboard .................................................................. 125
    Administration Tabs ........................................................ 127
    Web Services ............................................................... 127
Policy ......................................................................... 128
    Source Policy .............................................................. 128
    Destination Policy ......................................................... 129
    Enrichment Policy .......................................................... 130
    Configure Policy Attributes ................................................ 131
    Test Policy in Catalogs .................................................... 153
    Policy Creation and Customization .......................................... 157
Catalogs ....................................................................... 158
    Create a Catalog ........................................................... 158
    Edit a Catalog ............................................................. 160
    Delete a Catalog ........................................................... 161
    Preview a Catalog .......................................................... 161
    Assign a Catalog to Connectors ............................................. 162
    Deploy Assigned Catalogs ................................................... 163
    Deploy All Catalogs ........................................................ 164
Connectors ..................................................................... 164
    View Connector Configuration ............................................... 164
    Edit Connector Configuration ............................................... 165
    Connectors Pane ............................................................ 166

Chapter 5: Reporting 171

Reports ........................................................................ 171
Report Types ................................................................... 171
Destination Database Event Reports ............................................. 173
    Run a Top N Report ......................................................... 174
    Create an Event Report ..................................................... 182
Administrative Reports ......................................................... 185
    Run a Catalog Files Audit Report ........................................... 186
    Run a Deployment Audit Report .............................................. 189
    Run a Policy Files Audit Report ............................................ 191
    Configure an Audit Report .................................................. 194
    Run a Catalog Configuration Report ......................................... 195
    Run a Connector Configuration Report ....................................... 195
    Run a Connector Detail Report .............................................. 196
    Run a Connector Summary Report ............................................. 197
Schedule a Report .............................................................. 198
Publish a Report ............................................................... 200
Delete a Published Report ...................................................... 202


Export a Report to a PDF or CSV File .............................................................. 202

Chapter 6: Troubleshooting and Verification 205

Log Files ...................................................................... 205
Event Flow ..................................................................... 206
    How to Enable Event Flow Tracing ........................................... 207
Deployment Troubleshooting ..................................................... 208
    Generate Events Using the IFW Test Suite ................................... 209
    Test Catalog Configurations Using the Core Test Suite ...................... 210
View Unclassified Events ....................................................... 212

Appendix A: Upgrades and Migration 213

Supported Upgrades ............................................................. 213
Perform an Upgrade ............................................................. 214
    Upgrade a Connector on Solaris or Linux .................................... 215
Migration Considerations ....................................................... 216
    How to Migrate from a Windows Connector to a Solaris or Linux Connector .... 217
    Migrate from CA Spectrum 8.1 to 9 .......................................... 217
    How to Migrate SNMP Policies to Java SNMP Adaptor .......................... 218

Appendix B: Writing Adaptors 221

Adaptor Overview ............................................................... 221
Adaptors Provided .............................................................. 221
Adaptor Creation ............................................................... 222
Adaptor Internals .............................................................. 223
    How Adaptors are Located and Executed ...................................... 223
    Adaptor Configuration ...................................................... 224
    Adaptor Attributes ......................................................... 225
    Event Inbox and Outbox Files ............................................... 225
Adaptor Coding and Implementation .............................................. 227
    Adaptor Processing Model ................................................... 227
    Sample Adaptor Files ....................................................... 228
    How to Build and Compile a C++ Adaptor ..................................... 229
    How to Build and Compile a Java Adaptor .................................... 229
    How to Configure and Test an In Adaptor .................................... 229
    How to Configure and Test an Out Adaptor ................................... 230
    Adaptor Log Files .......................................................... 231
Adaptors and Policy ............................................................ 231


Appendix C: Writing and Customizing Policy 233

Policy Overview ................................................................ 233
    New Policy ................................................................. 234
    Policy Customization ....................................................... 234
Policy Structure and Deployment ................................................ 235
    Configure Core Modules ..................................................... 236
Policy File Conventions ........................................................ 237
    Event Properties and Values ................................................ 237
    Event Classes .............................................................. 246
    Hierarchy and Inheritance .................................................. 247
    Property Functions ......................................................... 247
Policy Operations .............................................................. 252
    Configure Operation ........................................................ 252
    Environment Operation ...................................................... 256
    Sample Event Operation ..................................................... 257
    Classify Operation ......................................................... 258
    Parse Operation ............................................................ 259
    Normalize Operation ........................................................ 260
    Filter Operation ........................................................... 265
    Consolidate Operation ...................................................... 267
    Enrich Operation ........................................................... 269
    Evaluate Operation ......................................................... 277
    Format Operation ........................................................... 284
    Write Operation ............................................................ 285
Sample Policies ................................................................ 287
Policy Customization Scenario: Application Log Source Policy ................... 287
How to Configure and Implement Policy Files .................................... 290
CA Catalyst Connector Policy ................................................... 291

Appendix D: Web Services and Command Line Utilities 293

Web Services ................................................................... 293
    Web Services Scripting ..................................................... 294
    AssemblyOpsService Web Services ............................................ 299
    AgentInstanceService Web Services .......................................... 304
    TransformEventService Web Services ......................................... 305
    AgentControlService Web Services ........................................... 307
    PolicyControlService Web Services .......................................... 308
    Version Web Services ....................................................... 310
Command Line Utilities ......................................................... 311
    unregister_agent Command--De-Register a Connector from a Manager ........... 311
    register_agent Command--Register a Connector with a Manager ................ 312


    control-axis2 Command--Control the Axis2 Service ........................... 312
    control-core Command--Control the Core ..................................... 313
    control-ifw Command--Control the Ifw ....................................... 314
    control-tomcat Command--Control Tomcat Service ............................. 315

Appendix E: Manager Database 317

Manager Database ............................................................... 317
Schema Overview ................................................................ 318
Tables ......................................................................... 318
Database Maintenance ........................................................... 319

Appendix F: High Availability 321

High Availability Overview ..................................................... 321
Connector Resiliency ........................................................... 321
Cluster Awareness .............................................................. 322
    How to Implement CA Event Integration in a MSCS Environment ................ 322
    Uninstall CA Event Integration HA Manager .................................. 324
Non-Cluster High Availability with CA XOsoft ................................... 324
    Replicated Information ..................................................... 325
    How to Implement CA Event Integration in a Non-Cluster High Availability Environment .. 326

Index 331

Chapter 1: Introduction
This section contains the following topics:

CA Event Integration (see page 11)
Product Packaging and Documentation (see page 12)
CA Spectrum SA Usage Overview (see page 13)
Usage with Other Products (see page 13)
Architecture Overview (see page 14)
Event Processing (see page 15)
Visualization (see page 19)

CA Event Integration
Event management is crucial to understanding the dynamic state of an enterprise across network, security, system, application, service, and other domains. As the number of resources grows exponentially, so do the challenges of understanding and administering diverse management events from those resources. CA Event Integration is a lightweight event integration and processing solution that collects events from diverse sources, normalizes them into a common format with uniform semantics, and dispatches the reformatted and enhanced events to an event manager for subsequent actions. Implementing and deploying CA Event Integration in a complex distributed environment can result in a unified event management system with a common format for all events, regardless of their source. CA Event Integration can collect events from many sources out of the box, including the following:

Event management sources such as CA NSM, CA Spectrum Infrastructure Manager (CA Spectrum), and CA Spectrum Service Assurance (CA Spectrum SA)
Operating system sources such as the Windows Event Log
Device sources such as SNMP traps
Application sources such as text log files

CA Event Integration transforms events from these sources so that all events reporting a similar problem look the same, making events easier to classify, understand, and ultimately simpler to resolve. During processing, the product can also enrich events with supplemental information from any external source, increasing event quality and facilitating more effective diagnostics and automation at the event destination.
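To make the normalization idea concrete, the following sketch shows how events from two different sources might be mapped into one common format. This is an illustrative example only: the field names, source identifiers, and severity mappings are hypothetical and do not reflect the product's actual schema or policy.

```python
# Hypothetical field names and mappings -- not the product's actual schema.
def normalize(raw_event, source):
    """Map a source-specific event into a single common format."""
    if source == "snmp_trap":
        return {
            "source": "snmp_trap",
            "node": raw_event["agent_addr"],
            "severity": {0: "Clear", 1: "Minor", 2: "Major"}.get(
                raw_event["trap_code"], "Unknown"),
            "message": raw_event["varbinds"],
        }
    if source == "windows_event_log":
        return {
            "source": "windows_event_log",
            "node": raw_event["computer"],
            "severity": raw_event["level"].capitalize(),
            "message": raw_event["description"],
        }
    raise ValueError("no mapping defined for source: " + source)

# Two events reporting a similar problem now carry the same fields:
trap = normalize({"agent_addr": "10.0.0.5", "trap_code": 2,
                  "varbinds": "linkDown ifIndex=3"}, "snmp_trap")
winlog = normalize({"computer": "SRV01", "level": "error",
                    "description": "NIC link lost"}, "windows_event_log")
```

After normalization, both events share the same keys and severity vocabulary, which is what makes classification and correlation at the destination practical.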



CA Event Integration simplifies event management by providing the following:


A lightweight administrative footprint that functions as an add-on event integration and processing service to other management products
Many configurable methods for event collection and dispatching
An enrichment module for adding useful information to events from outside sources
An administrative interface that lets you configure your complete event management environment
A database as an event destination that provides advanced reporting, consolidation, and summation of its collected events
Advanced features such as the ability to establish integrations with new event sources and destinations and customize event processing policy

Product Packaging and Documentation


CA Event Integration ships as a component of CA Spectrum SA, installable from the CA Spectrum SA installation media. It provides event processing and enrichment for infrastructure alerts from CA Spectrum SA connectors and integrations with additional event sources. A license key is required to enable CA Spectrum integration functionality. When you follow the best practice of installing CA Event Integration as a component of CA Spectrum SA, the manager installation is silent, and there is no opportunity to enter a license key. However, you can install the product interactively, separately from CA Spectrum SA, if you need to provide a license key and still enable the CA Spectrum SA integration. The same documentation set is distributed with the product regardless of whether you purchase a license; therefore, the documentation describes CA Spectrum functionality that is unavailable to users of the unlicensed version.


CA Spectrum SA Usage Overview


As a component of CA Spectrum SA, CA Event Integration provides the following capabilities to enhance service-oriented event and alert management through the CA Spectrum SA Service Console:

A middle-tier event processing and enrichment layer, installable as Event Enrichment from the CA Spectrum SA installation. CA Event Integration can collect all CA Spectrum SA infrastructure alerts, enrich them with information from external sources, and dispatch the enriched alerts to the Service Console. For example, you can add a URL to each alert that searches an internal knowledge website for a solution to the alert condition. During processing, CA Event Integration can also filter out unwanted alerts and consolidate similar alerts to return a reduced set of quality alerts to CA Spectrum SA.

A CA Spectrum SA connector for integrating various raw event sources, installable as the Event connector from the CA Spectrum SA installation image. The Event connector lets you collect events from any CA Event Integration source adaptors, such as CA NSM and the Windows Event Log, and dispatch events to the CA Spectrum SA Service Console, where you can include information from these sources in service models. You can use the Event connector with CA Spectrum SA to extend service management through CA Spectrum SA into new domains, such as security and mainframe.

The CA Event Integration documentation contains information about using the product that is pertinent to CA Spectrum SA users and to those who use the product on a standalone basis. The CA Spectrum SA documentation also contains information specific to the Event Enrichment and Event connector features. For more information about installing and using Event Enrichment, see the CA Spectrum SA Implementation Guide and Administration Guide. For more information about installing and using the Event connector, see the CA Spectrum SA Connector Guide.

Usage with Other Products


While the primary CA Event Integration use case is as a component of CA Spectrum SA, you can also use CA Event Integration with CA Spectrum, CA NSM, and other management products to simplify event integration with operating system, device, application, and management event sources and destinations and to improve overall event quality. In a CA Spectrum environment, you install CA Event Integration connectors locally or remotely to each SpectroSERVER and configure them to collect, process, and dispatch CA Spectrum alarms. Installation in a CA NSM environment is similar, with connectors installed locally on each Event Agent or on an Event Manager where Event Agent events are forwarded in a tiered event management architecture. You can install the CA Event Integration manager on any server to use its administrative interface to configure event collection, processing, and dispatching.



Many configurations are possible in a CA Spectrum, CA NSM, or combined environment. For example, you can route SNMP traps, log file events, mainframe events, operating system logs, CA NSM agent events, and CA Spectrum alarms to a CA NSM event console, all of which will appear in a common format with uniform event grammar. The unified event format and grammar greatly simplifies the tasks of writing message record and action scripts, advanced event correlation policy, and alert management policy. You can configure CA Event Integration to dispatch multiple event sources to CA Spectrum, creating a unified event repository accessible from the OneClick Console. You can act on these events from CA Spectrum using event rules that trigger alarms when events meet certain criteria. CA Event Integration can also enhance events by extracting related information from external sources, enriching events with this information, and returning the events to their event source. For example, you can enrich a CA Spectrum alarm with technician contact information from an external source and update that alarm in CA Spectrum accordingly, and you can enrich a CA NSM event with IT service information from an external database or CA CMDB and display that information on the Event Console.
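The enrichment step described above amounts to a lookup-and-merge against an external data source. The sketch below is a minimal illustration, assuming a hypothetical in-memory contacts table standing in for an external database or CA CMDB:

```python
# Hypothetical lookup table -- in practice this data would come from an
# external database, CA CMDB, or another configured enrichment source.
TECHNICIAN_CONTACTS = {
    "db-server-01": "dba-oncall@example.com",
    "web-server-02": "web-team@example.com",
}

def enrich(event, contacts):
    """Return a copy of the event with contact information merged in."""
    enriched = dict(event)  # leave the original event untouched
    enriched["contact"] = contacts.get(event["node"], "unassigned")
    return enriched

alarm = {"node": "db-server-01", "severity": "Major", "message": "tablespace full"}
enriched = enrich(alarm, TECHNICIAN_CONTACTS)
# enriched carries a "contact" field alongside the original alarm fields,
# ready to be written back to the event source or dispatched onward.
```

The same pattern applies whether the supplemental data is technician contact information, IT service ownership, or any other attribute keyed by a field already present in the event.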

Architecture Overview
CA Event Integration uses an integration framework and a processing core to collect and process event data. The integration framework uses adaptors to establish integrations with event sources and collect events. Core processing modules transform the event data into a common format to prepare events for routing to their configured destination. The administrative interface installed with the manager lets you do the following:

- Configure event integrations on each server with a connector installed
- Assign and configure catalog policy, which defines how events are processed from each integrated source
- Run and view reports related to events collected in the product database and product-specific metrics

You assemble processing policy into catalogs to specify integrations and processing rules, and you deploy assembled catalogs to connectors to configure your environment. CA Event Integration delivers this functionality through two base installable components:

Manager
    Contains the web server and web services that comprise the administrative interface. The manager component also creates a database to hold event data for display.

14 Product Guide


Connector
    Contains the components necessary to collect, process, and dispatch events, including the integration framework, adaptors, and the core processing engine. You install connectors to integrate with each server from which you want to collect events.

Event Processing
The engine of CA Event Integration is the functionality that collects, processes, and dispatches events. The following illustration shows how events are processed, from their event sources to their defined destinations:

Note: This illustration does not show all adaptors and core modules.

The integration framework uses adaptors to collect events from sources and dispatch events to destinations. The core processing engine normalizes and enriches the events before the adaptors dispatch them to their destinations.

More information: Event Flow (see page 206)


Integration Framework
The integration framework (IFW) controls event flow in and out of CA Event Integration. The IFW uses modular binaries named adaptors to collect events from a specified source and send processed events to a specified destination.

Adaptors
Adaptors are the integration points for all external event sources. Source adaptors collect events from event sources, and destination adaptors dispatch events to event destinations. Each integrated source has a separate, isolated adaptor. CA Event Integration provides adaptors for many common event sources, including CA Technologies management products such as CA Spectrum, CA NSM, and CA Spectrum SA.

Combining the integrations that the adaptors enable illustrates the flexibility of the product. You can route all collected events to one unified destination, multiplex events from one or more sources to many destinations, or create specialized catalogs to merge event data from only certain sources. The following list describes some basic example scenarios:

- Routing collected events from all integrated sources to the manager database or an external destination, such as CA NSM or CA Spectrum
- Routing events collected from CA NSM to CA Spectrum, or routing alarms from CA Spectrum to CA NSM
- Routing collected events from CA NSM and the Windows Event Log to CA Spectrum SA for an enterprise-wide view of event activity and the derived impact on important business services
- Processing events collected from an event manager and returning the processed events to the original source for a cleaner, more uniform event view

Provided Adaptors
CA Event Integration provides the following source adaptors. The product does not support adaptors marked as "Windows only" for use with Solaris and Linux connectors.

CA NSM (Windows only)
    Collects CA NSM agent events from CA NSM Event Managers and Event Agents.

CA Spectrum
    Collects alarms from CA Spectrum SpectroSERVERs.

CA Spectrum SA
    Collects infrastructure alerts from the CA Spectrum SA IFW bus.


Windows Event Log (Windows only)
    Collects events from the Windows operating system logs (System, Application, and Security).

Log Reader (Windows only)
    Collects events from generic application text log files.

SNMP traps
    Collects SNMP traps sent from devices or applications. The SNMP adaptor can collect generic traps sent to the local system, and it facilitates specific SNMP-based integrations with the following products:
    - CA OPS/MVS Event Management and Automation
    - CA SYSVIEW Performance Management
    - HP Business Availability Center

Web Services Eventing
    Collects events generated through web service notifications.

CA Catalyst Connector Framework
    Collects alerts from CA Catalyst connectors.

CA Event Integration provides the following destination adaptors. The product does not support adaptors marked as "Windows only" for use with Solaris and Linux connectors.

CA NSM (Windows only)
    Sends processed events to CA NSM Event Managers and Event Agents.

CA Spectrum
    Sends processed events to a CA Spectrum SpectroSERVER.

CA Spectrum SA
    Sends processed events to CA Spectrum SA through the IFW bus.

CA Event Integration
    Forwards processed events to another CA Event Integration connector, enabling a tiered event management architecture.

Windows Event Log (Windows only)
    Sends processed events to the Windows application event log.


Manager Database (Windows only)
    Sends processed events to the CA Event Integration internal database, where you can run various reports on the events filtered by important metrics such as event source, node, and resource type.

You can also write new adaptors for event sources and destinations that the product does not provide.

More information: Writing Adaptors (see page 221)

Core Processing Engine


The core is the processing engine that normalizes, enriches, and formats events. Core modules transform the source event to a normalized event, enrich it, and translate it to its destination schema. These modules perform read, classify, parse, normalize, filter, consolidate, enrich, evaluate, and format operations according to assigned catalog policy.

CA Event Integration includes complete catalog policy for all provided sources and destinations. Providing information such as instance names, access credentials, and so on requires only basic post-installation configuration of policy attributes. Enrichment catalog policy is provided for external sources such as CA NSM WorldView (Windows only), CA Spectrum, CA CMDB, Internet search URL, and custom Oracle, MySQL, or Microsoft SQL Server databases. You can configure basic enrichments from the administrative interface to query one of these sources for information to add to events.

Although the provided catalog policy should meet basic requirements, the policy engine is highly customizable, either by modifying existing policies or by creating new policies.

More information: Writing and Customizing Policy (see page 233)
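The stage names above (normalize, filter, enrich, format) describe a conventional event-processing pipeline. The following Python sketch is purely illustrative — none of the function or field names come from the product — but it shows the shape of the transformation chain that catalog policy configures: a source event is mapped to a common schema, filtered, enriched from an external lookup, and formatted for its destination.

```python
# Illustrative only: a minimal normalize -> filter -> enrich -> format chain.
# All function names, field names, and sample data are hypothetical.

def normalize(raw_event):
    """Map a source-specific event to a common schema."""
    return {
        "node": raw_event.get("host", "unknown"),
        "severity": raw_event.get("sev", "info").lower(),
        "message": raw_event.get("text", ""),
    }

def keep(event):
    """Filter: drop purely informational events."""
    return event["severity"] != "info"

def enrich(event, contacts):
    """Enrichment: attach data looked up from an external source."""
    event["contact"] = contacts.get(event["node"], "n/a")
    return event

def format_for_destination(event):
    """Format: translate the normalized event to a destination string."""
    return (f'{event["severity"].upper()} {event["node"]}: '
            f'{event["message"]} (contact: {event["contact"]})')

contacts = {"web01": "oncall@example.com"}
raw = [
    {"host": "web01", "sev": "Critical", "text": "service down"},
    {"host": "db01", "sev": "Info", "text": "heartbeat"},
]

dispatched = []
for raw_event in raw:
    event = normalize(raw_event)
    if not keep(event):
        continue  # filtered out before enrichment
    dispatched.append(format_for_destination(enrich(event, contacts)))
print(dispatched)
```

In the real product, each of these stages is driven by catalog policy rather than hard-coded functions; the sketch only conveys the order of operations.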

Connectors and Catalogs


You define where events are collected and dispatched and how they are processed by configuring connectors and catalogs. These entities let you control the product's internal event processing functionality.


You install connectors on all servers with event sources with which you want to integrate. Catalogs provide event processing instructions to connectors. You add policy to catalogs to define where events are collected, how they are processed, and where they are dispatched after processing. After creating a catalog with this information, you can deploy it to a connector to enact its settings on the connector's server. If you want to deploy the same configuration on multiple servers, you can deploy one catalog to multiple connectors. You can also create multiple catalogs if there are connectors that require different configurations. You configure and deploy catalogs and administer connectors from the administrative interface.
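The catalog-to-connector relationship described above can be pictured as a simple mapping: one catalog can be deployed to several connectors, while each connector enacts the configuration deployed to it. A hypothetical sketch (all catalog and server names here are invented, not product defaults):

```python
# Hypothetical data only: illustrates reusing one catalog across connectors.
catalogs = {
    "nsm-to-spectrum": {"sources": ["CA NSM"], "destination": "CA Spectrum"},
    "traps-to-db": {"sources": ["SNMP traps"], "destination": "Manager Database"},
}

# Each connector server is assigned the catalog deployed to it.
deployments = {
    "server-a": "nsm-to-spectrum",
    "server-b": "nsm-to-spectrum",  # same catalog deployed to a second server
    "server-c": "traps-to-db",
}

# Group connectors by catalog to see which configurations are shared.
reuse = {}
for connector, catalog in deployments.items():
    reuse.setdefault(catalog, []).append(connector)
print(reuse)
```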

Visualization
The visualization layer lets you specify event source integrations, configure event processing, and view configuration and collected event information.

Administrative Interface
The administrative interface provides a web-based view of your event management environment and lets you configure and manage your connectors, catalogs, and policy. You have access to the following resources from the administrative interface:

Dashboard
    Lets you view the status of and configure connectors, view and define policy, create, view, and deploy catalogs, and run reports on events and administrative data from one centralized location.

Connectors tab
    Lets you view all connectors in your environment and configure their catalog assignment and deployment.

Catalogs tab
    Lets you view all existing catalogs, create new catalogs, and preview how a catalog's policy will transform sample events.

Policies tab
    Lets you view all policy and configure policy attributes.

Reports tab
    Lets you run reports on events collected in the database and administrative data.


The administrative interface is installed with the manager component. You must assign a connector to a manager during installation, and each connector server should appear on its manager node's administrative interface. From the administrative interface, you deploy catalogs to each connector to implement the appropriate event integration and processing configuration on servers throughout your enterprise. Note: The administrative interface is powered by web services (SOAP over HTTP). For more information about these web services, see the appendix "Web Services and Command Line Utilities."

Dashboard
The dashboard is the central workplace of the administrative interface. From the dashboard, you can access all important configuration tasks, view status information, administer connectors, and access reporting views. The dashboard appears when you open the administrative interface and provides all required basic tasks in a logical workflow, so you can configure connectors, policy, and catalogs from one location.

Reporting
CA Event Integration uses CA Web Reporting Server to run reports on all aspects of the product's operation. You can run reports on the following:

Events in the manager database
    You can run reports on events sent to the database to view specific event data grouped by node, event source, resource type, IT service, and more. Reports on these groups display those with the most event activity and those that contain the most critical severity events. You can also customize or create event reports to filter based on any important statistics, including severity.

Administrative data
    You can run reports on administrative metrics such as catalog and connector status and configuration, policy and catalog files, and catalog deployment. Run these reports periodically to verify the health and correct configuration of your environment.

More information:
Reporting (see page 171)
Reports (see page 171)
Report Types (see page 171)


Chapter 2: Installation
This section contains the following topics:

Component Overview (see page 21)
Installation Considerations (see page 22)
Database User Security (see page 23)
Component Installation (see page 24)
Windows Services (see page 33)
Solaris and Linux System Daemons (see page 34)
Silent Installation (see page 35)
Uninstall CA Event Integration on Windows (see page 40)
Uninstall CA Event Integration on Solaris and Linux (see page 41)
Security Considerations (see page 42)

Component Overview
Install the following components to implement CA Event Integration:

- Manager
- Connector

When you run the installation, you select one or the other, or Both if you want to install a manager and connector on the same server. You should install the manager first on the server from which you want to manage your environment before installing connectors on servers across your enterprise.

Manager
The manager contains the web server and web services that comprise the administrative interface and creates a database to store events for reports. It is primarily responsible for the creation, configuration, and deployment of policy catalogs and their associated adaptors. The manager also provides in-depth reports for destination events, catalog audits, and connector health. The manager does not perform any event integration or processing tasks. These are performed by the connectors. The following components are installed with the manager:

- Apache Tomcat web server
- Apache Axis2 web services


- CA Event Integration administrative interface
- JRE 1.6
- Manager database
- CA Web Reporting Server
- eTrust Public Key Infrastructure

The manager includes all adaptors and policies in preparation for deployment to the connectors.

Connector
CA Event Integration connectors collect, process, and dispatch events. The sources and destinations may be local or remote, depending on the adaptors. Connectors contain the components necessary for integrating with other sources and processing events obtained from those sources. The following components are installed with each connector:

- Core transformation engine
- Integration Framework
- Apache Axis2 web services
- eTrust Public Key Infrastructure

Installation Considerations
Consider the following before you install any CA Event Integration components:

- Verify that your system meets all requirements listed in the Release Notes before running the installation. The Release Notes contains information such as hardware requirements, operating system support, database requirements, software support, and web browser support.
- Check the Readme for known issues that may affect installation. Find the latest version of the Readme at http://ca.com/support.
- Verify that all external components with which you want to integrate are installed. For example, if you want to use the product as an add-on to CA Spectrum, verify that CA Spectrum is implemented correctly in your enterprise.
- Plan the details of your implementation in your management product environment before you install any connectors. For basic implementation information, see the chapter "Integrations and Deployment Scenarios."


- Verify that a supported database is installed on the server where you want to install the manager, or on another server if you want to configure a remote database connection.
  Note: For information about supported database versions and installation and configuration requirements for Microsoft SQL Server, see the Release Notes.

- Create or identify a user on the database server for creating the manager database before you install CA Event Integration. Verify that the installing user has "create database" authority when using SQL Server Windows authentication mode to create the database.
- When you install Event Enrichment as a part of the CA Spectrum SA installation, a CA Event Integration manager and connector is installed. When you install the Event connector as a part of the CA Spectrum SA installation, a connector is installed that relies on the Event Enrichment manager. For more information about installing CA Event Integration as an integrated CA Spectrum SA component from the CA Spectrum SA installation media, see the CA Spectrum SA documentation.
- An xterm is required to run a connector installation on Solaris or Linux systems. The xterm can exist on the connector system or a remote system, and you must run the installation from the xterm.
- By default, CA Event Integration is installed to C:\Program Files\CA\Event Integration on Windows and /opt/CA/EventIntegration on Solaris and Linux. The documentation uses the string EI_HOME to refer to the installation directory on all systems.
- All paths in the documentation use backslash (\) characters. Substitute forward slashes (/) when working with paths on a Solaris or Linux connector system.

More information: Integrations and Deployment Scenarios (see page 43)
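The EI_HOME and path-separator conventions above matter when scripting against an installation. This sketch (not product code) uses Python's pathlib to keep paths platform-correct; the two locations shown are the documented defaults, and the "logs" subdirectory is the one named later in the troubleshooting section.

```python
# Illustrative only: building EI_HOME subpaths without hard-coding separators.
from pathlib import PurePosixPath, PureWindowsPath

# Documented default install locations.
EI_HOME_WINDOWS = PureWindowsPath(r"C:\Program Files\CA\Event Integration")
EI_HOME_UNIX = PurePosixPath("/opt/CA/EventIntegration")

# The "/" join operator emits the right separator for each platform flavor.
print(EI_HOME_WINDOWS / "logs")
print(EI_HOME_UNIX / "logs")
```

On a live system you would typically use `pathlib.Path`, which picks the flavor of the local operating system automatically.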

Database User Security


During CA Event Integration manager installation, you must specify the type of database user security and the user name for creating the database. The database user can be one of the following types:

Internal SQL Server user
    The internal, nontrusted security method, also called SQL authentication, uses a SQL Server user to create the database and perform database operations. You must provide this user's name and password during CA Event Integration installation, and the user must be defined before installation. The user name and encrypted password are stored in the database-dest.xml policy file.


Trusted authentication (operating system or domain user)
    The trusted authentication method, also called operating system authentication, uses a local operating system or domain user to create the database and perform database operations. When you enable trusted authentication during CA Event Integration installation, the installer creates the database using the installing user's credentials and performs database operations using the credentials of the user you specify to run the product's services. The user names and encrypted passwords are stored in the database-dest.xml policy file.

Any database user, whether internal or trusted, must have sufficient rights to create the manager database and perform other database operations. You can configure these rights in your database by assigning the user a role such as sysadmin or dbcreator.

Component Installation
The following topics contain instructions for installing the manager, connector, and both components simultaneously. Note: This section describes new, standalone installations. For information about upgrading from a previous release or to a different version of the same release, see the appendix "Upgrades and Migration." For information about installing CA Event Integration as a component of CA Spectrum SA from the CA Spectrum SA installation, see the CA Spectrum SA Implementation Guide and Connector Guide.

Install the Manager


Install the manager on the server from which you want to manage your CA Event Integration environment. Install the manager before you install any connectors in your enterprise.

The manager can be any server in your environment as long as it contains a supported database or can connect to a remote database server. The manager does not require any relation to the servers containing event sources with which you want to integrate.

A standalone manager installation cannot process events or integrate with event sources. After installing a manager, you must install connectors to enable this functionality.

Note: If you plan to install a connector on the same server as the manager, install both components simultaneously (see page 31). If you want to install a connector on a server where a manager is already installed, you must uninstall the manager and install both components at the same time.


To install the manager

1. Double-click InstallEI.exe in the root directory of the installation image.
   The Introduction page of the CA Event Integration installation wizard opens.
2. Click Next.
   The License Agreement page opens.
3. Scroll to the bottom of the agreement, select 'I accept the terms of the License Agreement', and click Next.
   Note: If any CA Event Integration components are already installed on the server, a dialog appears listing the currently installed version and the version that will install on top of this version. Click Continue to either re-install or upgrade. You cannot edit any existing installation settings during an upgrade or re-installation, including the components installed. For more information about upgrading from a previous release, see the appendix "Upgrades and Migration."
   The Choose Install Set page opens.
4. Select Manager and click Next.
   The Check License Key page opens.
5. Do one of the following:
   - Enter the license key required to enable integration with CA Spectrum and click Next.
   - Leave the field blank and click Next to install all standard functionality without CA Spectrum integration.
   The Choose Install Folder page opens.
6. Do one of the following to specify the installation folder and click Next:
   - Accept the default.
   - Enter the name of a new installation folder.
   - Click Browse and select an installation folder.
   The Enter Services User page opens.
   Note: The product detects if you are installing on a cluster node and displays the Enter Cluster Resource Group dialog for selecting a resource group on which to install. For more information about installing CA Event Integration in a cluster environment, see the appendix "High Availability."
7. Enter a user name and password for running the product's services and click Next.
   You can specify an existing local or domain administrator account. The user must be in the Administrators group. If you leave the fields blank, the installer creates a local ca_eis_user administrator account to run the services.
   The Enter Database Configuration - Step 1 page opens.


8. Do one of the following:
   - Click Next (without enabling trusted authentication) to use SQL Server credentials to create the manager database. This option requires you to enter an internal SQL Server database user and password on the next page.
   - Select the Enable check box and click Next to enable trusted authentication, which causes the installer to create the manager database using the installing user's credentials, as long as this user has the credentials necessary to create a database. After installation, CA Event Integration connects to the database using the services user's credentials.
   The Enter Database Configuration - Step 2 page opens.
9. Complete the following fields and click Next:
   Server name
       Specifies the server name, database instance, and port to use for the manager database. You can enter a node name or IP address for the database server. Use the following syntax to enter a server name, instance, and port in this field:
       dbserver\instance:port
       You must enter a port number if you are using a named instance. You can omit the \instance specification if you are not using a database instance or you want to use the default instance, and you can leave out the :port specification if you are using the default database port (1433) on the default instance.
   User (SQL authentication only)
       Specifies the user name of the user who will create the manager database and perform database operations. You must enter an internal (mixed mode) SQL Server user with the authority to create a database.
   Password (SQL authentication only)
       Specifies the password associated with the user name defined in the User field.
   The Configure Manager Application Server page opens.
10. Complete the following fields and click Next:
    Note: To avoid conflicts, you should verify that no other applications are using the Tomcat ports. The installation program detects conflicts and alerts you to change the specified port if it is already in use.
    Tomcat http port
        Specifies the port Tomcat uses to connect to the administrative interface.
        Default: 9091


    Tomcat shutdown port
        Specifies the port Tomcat uses to shut down.
        Default: 8007
    Tomcat AJP port
        Specifies the port Tomcat uses to integrate with the Apache Web Server. Tomcat does not use this port, but requires it to be configured.
        Default: 8011
    Web UI application user
        Specifies the user name for accessing the administrative interface.
        Default: eiadmin
    Web UI application password
        Specifies the password to associate with the user specified in the Web UI application user field. Re-enter this password for verification in the Re-enter password field.
    The Enter Communication Ports page opens.
11. Enter the port number to use for web services communications between the manager and connectors, and click Next. The default port number is 8083.
    The Pre-Installation Summary page opens.
12. Verify the information on the Pre-Installation Summary page, and click Install.
    A page opens charting the installation progress. When installation is complete, the Install Complete page opens and summarizes the installation.

More information:
Install the Manager and Connector on the Same Server (see page 31)
Upgrades and Migration (see page 213)
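The Server name field in the database configuration step accepts three forms: server only, server:port, and server\instance:port. The following helper is hypothetical — it is not a product utility — but it illustrates how those forms break down, with 1433 as the documented default port:

```python
# Illustrative only: splitting the documented dbserver\instance:port syntax.
def parse_server_name(value, default_port=1433):
    """Split 'dbserver', 'dbserver:port', or 'dbserver\\instance:port'.

    Mirrors the field syntax described above: instance and port are
    optional, and 1433 is the default SQL Server port.
    """
    server, _, rest = value.partition("\\")
    instance, port = None, default_port
    if rest:
        # A named instance was given, possibly with an explicit port.
        instance, _, p = rest.partition(":")
        if p:
            port = int(p)
    elif ":" in server:
        server, _, p = server.partition(":")
        port = int(p)
    return server, instance, port

print(parse_server_name(r"dbhost01\EIINST:1433"))  # ('dbhost01', 'EIINST', 1433)
print(parse_server_name("dbhost01"))               # ('dbhost01', None, 1433)
```

The server and instance names here (dbhost01, EIINST) are invented examples; substitute your own database server values.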

Install the Connector on Windows


Install connectors on all servers from which you want to collect events. Because you must define a manager node in a connector installation, install the manager on your manager server before you install any connectors. A connector must be registered with a manager before it can receive and enact event integration and processing instructions.

To install the connector on Windows

1. Double-click InstallEI.exe in the root directory of the installation image.
   The Introduction page of the CA Event Integration installation wizard opens.


2. Click Next.
   The License Agreement page appears.
3. Scroll to the bottom of the agreement, select 'I accept the terms of the License Agreement', and click Next.
   Note: If any CA Event Integration components are already installed on the server, a dialog lists the currently installed version and the version that will install over it. Click Continue to re-install or upgrade. You cannot edit existing installation settings during an upgrade or re-installation, including the components installed. For more information about upgrading from a previous release, see the appendix "Upgrades and Migration."
   The Choose Install Set page opens.
4. Select Connector and click Next.
   The Check License Key page opens.
5. Do one of the following and click Next:
   - Enter a license key to enable integration with CA Spectrum.
   - Leave the field blank to install all standard functionality without CA Spectrum integration.
   The Choose Install Folder page opens.
   Note: The product detects if you are installing on a cluster node and displays the Enter Cluster Resource Group dialog for selecting a resource group on which to install. For more information about installing CA Event Integration in a cluster environment, see the appendix "High Availability."
6. Do one of the following to specify the installation folder and click Next:
   - Accept the default.
   - Enter the name of a new installation folder.
   - Click Browse and select an installation folder.
   The Enter Services User page opens.
7. Enter a user name and password for running the product's services and click Next.
   You can specify an existing local or domain administrator account. The user must be in the Administrators group. If you leave the fields blank, the installer creates a local ca_eis_user administrator account to run the services.
   The Enter Communication Ports page opens.


8. Complete the following fields and click Next:
   Manager Host Name
       Specifies the host on which a manager is installed. A connector must register with a manager so it can receive event integration and processing instructions and appear on the manager's administrative interface. You can use the node name or IP address as the value for this field.
   Manager port
       Specifies the port number for web services communications that you entered for the manager server.
       Default: 8083
   Connector port
       Specifies the port number to use for web services communications on the connector server. This port is independent of the port that you specified for the manager; it can use the same or a different port.
       Default: 8083
   The Pre-Installation Summary page opens.
9. Verify the information on the Pre-Installation Summary page, and click Install.
   A page opens charting the installation progress. When installation is complete, the Install Complete page opens and summarizes the installation.

More information: Integrations and Deployment Scenarios (see page 43)
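The installer detects port conflicts itself, but if you want to pre-check the web services port (default 8083) before running the installation, a small sketch follows. This check is a convenience of this guide's own devising, not a product utility:

```python
# Illustrative only: check whether a TCP port is already in use locally.
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if we can bind the port, i.e. nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Default web services port used by the manager and connector.
print("8083 free?", port_is_free(8083))
```

The same check applies to the manager's Tomcat ports (9091, 8007, 8011) if you run it on the manager server.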

Install the Connector on Solaris and Linux


Install connectors on all servers from which you want to collect events. Because you must define a manager node in a connector installation, install the manager on your manager server before you install any connectors. A connector must be registered with a manager before it can receive and enact event integration and processing instructions. The manager only supports installation on Windows.


When you install a connector on Solaris or Linux, you must run the installation from an xterm.

To install the connector on Solaris and Linux

1. Copy the connector installation file (InstallEI.bin.Solaris or InstallEI.bin.Linux) from the installation image to a local temporary folder.
2. Verify that the file possesses the appropriate root ownership and execute permissions as follows:

   chown root ./InstallEI.bin.Linux
   chmod 555 ./InstallEI.bin.Linux

3. Start the installation from an xterm as follows:

   Linux:
   ./InstallEI.bin.Linux

   Solaris:
   ./InstallEI.bin.Solaris

   Preface the command with the appropriate file path for the installer location on the Solaris or Linux system, if necessary.
   The Introduction page of the CA Event Integration installation wizard opens.
4. Click Next.
   The License Agreement page opens.
5. Scroll to the bottom of the agreement, select 'I accept the terms of the License Agreement', and click Next.
   Note: If a connector is already installed on the server, a dialog lists the currently installed version and the version that will install over it. Click Continue to re-install. You cannot edit existing installation settings during a re-installation.
   The Choose Install Set page opens.
6. Select Connector and click Next.
   The Check License Key page opens.
7. Do one of the following and click Next:
   - Enter a license key to enable integration with CA Spectrum.
   - Leave the field blank to install all standard functionality without CA Spectrum integration.
   The Choose Install Folder page opens.


8. Do one of the following to specify the installation folder and click Next:
   - Accept the default.
   - Enter the name of a new installation folder.
   - Click Browse and select an installation folder.
   The Enter Services User page opens.
9. Enter a user name for running the product's services and click Next.
   You can specify an existing local or domain account. The user must be in the root group. If you leave the fields blank, the installer creates a local caeiuser administrator account to run the services.
   The Enter Communication Ports page opens.
10. Complete the following fields and click Next:
    Manager Host Name
        Specifies the host on which a manager is installed. A connector must register with a manager so it can receive event integration and processing instructions and appear on the manager's administrative interface. You can use the node name or IP address as the value for this field.
    Manager port
        Specifies the port number for web services communications that you entered for the manager server.
        Default: 8083
    Connector port
        Specifies the port number to use for web services communications on the connector server. This port is independent of the port that you specified for the manager; it can use the same or a different port.
        Default: 8083
    The Pre-Installation Summary page opens.
11. Verify the information on the Pre-Installation Summary page, and click Install.
    A page opens charting the installation progress. When installation is complete, the Install Complete page opens and summarizes the installation.

Install the Manager and Connector on the Same Server


If you need to install a manager and a connector on the same server, you must install them at the same time. You cannot run separate manager and connector installations on the same server, because reinstallations are only supported for existing components; you cannot add a component to an existing installation. When you install a manager and connector together, the connector automatically registers with the manager.

Chapter 2: Installation


Install the manager only on the server from which you want to manage your complete environment.

To install the manager and connector on the same server

1. Double-click InstallEI.exe from the root directory of the installation image.
   The Introduction page opens.
2. Click Next.
   The License Agreement page opens.
3. Scroll to the bottom of the agreement, select 'I accept the terms of the License Agreement', and click Next.
   Note: If any CA Event Integration components are already installed on the server, a dialog appears listing the currently installed version and the version that will be installed on top of it. Click Continue to re-install or upgrade. You cannot edit any existing installation settings during an upgrade or re-installation, including the components installed. For more information about upgrading from a previous release, see the appendix "Upgrades and Migration."
   The Choose Install Set page opens.
4. Select Both and click Next.
   The Check License Key page opens.
5. Complete Steps 5-12 listed in Install the Manager (see page 24).
   A page appears charting the installation progress. When installation is complete, the Install Complete page appears and summarizes the installation.

More information: Upgrades and Migration (see page 213)

Installation Troubleshooting
The Install Complete page of the installation wizard provides summary information but does not detail installation errors. Use the following methods to troubleshoot installation issues when the Install Complete page indicates that the installation completed with errors or if you notice problems with the product's operation:

- Review the CA_Event_Integration_InstallLog.log file located in the root of the installation directory. This log primarily shows installation errors related to file copies, configuration settings, and so on.
- Review the install.log file located in the EI_HOME\logs directory. This log shows pre- or post-installation errors such as database creation failure, connector registration problems, and so on.
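The two logs above can be scanned quickly for error lines. The following is a sketch for a Solaris or Linux host; the default EI_HOME path is an assumption, so adjust it for your installation:

```shell
# Sketch: count error lines in the two installation logs.
# EI_HOME below is an assumption -- use your actual installation directory.
EI_HOME="${EI_HOME:-/opt/CA/EventIntegration}"
errors=0
for log in "$EI_HOME/CA_Event_Integration_InstallLog.log" "$EI_HOME/logs/install.log"; do
  if [ -f "$log" ]; then
    n=$(grep -ci "error" "$log")
    echo "$log: $n error line(s)"
    errors=$((errors + n))
  else
    echo "not found: $log"
  fi
done
echo "total: $errors"
```

A nonzero total points you at the log to read in detail; a missing log usually means the corresponding installation phase never ran.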



On Windows, verify that all CA Event Integration services appear in the Windows Services dialog and that they are controlled by the ca_eis_user or the nondefault user name that you specified during installation. On Solaris or Linux, verify that all CA Event Integration system daemons are running and that they are controlled by the caeiuser or the nondefault user name that you specified during installation.

To verify the connector registration with the manager, open the manager administrative interface after the installation finishes, and verify that the node name of the connector appears on the Connectors pane of the Dashboard tab. The connector must be registered with the manager to begin enacting event processing instructions. If the connector does not appear, recycle all services on the manager and try again. If the connector still does not appear, run the register_agent command line utility to manually register the connector with the manager.

More information:
Windows Services (see page 33)
register_agent Command--Register a Connector with a Manager (see page 312)

Windows Services
The following services control CA Event Integration components on Windows. They start by default after installation and are viewable from the Windows Services dialog. By default, the ca_eis_user is created and used to control the services, but you can enter an operating system or domain user during installation for this purpose.

CA EI AXIS2
   Controls web services that drive administrative interface operation. When you stop this service, the web services are not available, so the administrative interface is not available. When you start this service, the administrative interface is functional. If you must change the Axis2 port number (which you originally specify during installation), recycle this service. The CA EI AXIS2 service installs with both the manager and connector.

CA EI CORE
   Controls the core processing engine. When the CA EI CORE service is running, the core can process collected events. When the service is stopped, all processing stops, even if events are available for processing. This service should always be running, although you can stop it to troubleshoot event integration errors. The CA EI CORE service installs with the connector.



CA EI IFW
   Controls the integration framework. When the CA EI IFW service is running, adaptors can integrate with event sources (collecting events) and event destinations (dispatching events). When the service is stopped, the connections to event sources are broken until the service is restarted. This service should always be running, although you can stop it to troubleshoot event processing errors. The CA EI IFW service installs with the connector.
   Note: For information about using the CA EI CORE and CA EI IFW services to troubleshoot deployment or integration errors, see the chapter "Troubleshooting and Verification."

CA EI Tomcat
   Controls the Tomcat web server that provides the administrative interface. When the CA EI Tomcat service is running, the administrative interface is available for use. When the service is stopped, you cannot access the interface. If you must change the Tomcat port number, which you originally specify during installation, you must recycle this service. The CA EI Tomcat service installs with the manager.
   Note: This service stops automatically when the CA EI AXIS2 service stops because the administrative interface cannot function without web services.

Solaris and Linux System Daemons


The following system daemons control CA Event Integration components on Solaris and Linux. They start by default after installation and are controlled by an operating system or domain user that you can specify during installation. By default, the caeiuser is created and used to control the daemons.

caeiaxis2
   Controls web services that drive communication with the administrative interface. If you must change the Axis2 port number (which you originally specify during installation), recycle this daemon. Use the following command to start, stop, restart, or check the status of this daemon:
/etc/init.d/control-axis2 start|stop|status|restart

caeicore
   Controls the core processing engine. When this daemon is running, the core can process collected events. When it is stopped, all processing stops, even if events are available for processing. This daemon should always be running, although you can stop it to troubleshoot event integration errors. Use the following command to start, stop, restart, or check the status of this daemon:
/etc/init.d/control-core start|stop|status|restart



caeiifw
   Controls the integration framework. When the caeiifw daemon is running, adaptors can integrate with event sources (collecting events) and event destinations (dispatching events). When it is stopped, the connections to event sources are broken until the daemon is restarted. This daemon should always be running, although you can stop it to troubleshoot event processing errors. Use the following command to start, stop, restart, or check the status of this daemon:
/etc/init.d/control-ifw start|stop|status|restart

Note: For information about using the caeicore and caeiifw daemons to troubleshoot deployment or integration errors, see the chapter "Troubleshooting and Verification."
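Because the three control scripts share the same interface, a quick status sweep can be scripted. The following is a sketch that assumes the stock /etc/init.d locations documented above and simply notes any script that is not present:

```shell
# Sketch: report the status of each CA Event Integration daemon.
# The init-script paths are taken from the commands documented above.
checked=0
for d in axis2 core ifw; do
  script="/etc/init.d/control-$d"
  checked=$((checked + 1))
  if [ -x "$script" ]; then
    "$script" status
  else
    echo "not present on this host: $script"
  fi
done
echo "$checked daemon(s) checked"
```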

Silent Installation
You can perform an unattended installation using provided response files. The following response files are provided in the Response Files folder in the root of the installation image:

- Manager-only: managerresponse.txt
- Connector-only: agentresponse.txt
- Manager and Connector: bothresponse.txt

Use these files to configure a silent installation for any installation type. A silent installation automates the installation process to save time for an enterprise requiring a large volume of installations. While the provided response files cover all possible installation types, you can also generate a response file during a traditional installation to use for silent installations.

Perform a Silent Installation Using a Provided Response File


The provided response files let you configure and run a silent installation of the manager, connector, or both components on the same server. After you configure the response file for the installation type that you want to implement, you can use the file on all servers in your enterprise without being prompted for any further information.



To perform a silent installation using a provided response file

1. Open the Response Files folder in the root directory of the installation image and open the response file for the type of installation that you want to configure:
   - Manager-only: managerresponse.txt
   - Connector-only: agentresponse.txt
   - Manager and Connector: bothresponse.txt

2. Enter values for all configuration parameters. Many of the parameters contain default values; you need not change these values if the defaults are suitable. Although most of the parameters are self-explanatory, use the following notes as a reference:
   - Enter a license key in the USER_SUPPLIED_SERIAL_NUMBER parameter to obtain CA Spectrum functionality. Leave this parameter blank to install the standard functionality without CA Spectrum integration. This parameter appears in the response files for installing the manager or the manager and connector.
   - Enter the user in the USER_INPUT_SERVICESUSER parameter (and the user password in the password parameters) if you want to use a services user other than the default ca_eis_user (Windows) or caeiuser (Solaris and Linux). You need not enter a password if you use the default user, or if you entered a user on Solaris or Linux.
   - Enter 1 in the USER_INPUT_DBTRUSTEDYES parameter to use trusted authentication. If you use trusted authentication, leave the database user and password fields blank.
   - Enter the database server name to use for the manager database in the DBSERVER parameter. You can also enter a named instance if necessary, and a port (if the database listens on a port other than 1433 or if you are using a named instance), using the following SQL convention: hostname\\instancename:port
   - Enter an existing internal SQL Server user that has the necessary privileges to create the manager database in the DBUSER value. If you use trusted authentication, leave the user and password parameters blank.
   - Verify that the three Tomcat ports are unique.
   - Enter the Tomcat user name and password as the credentials to access the administrative interface.
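As an illustration, a manager response file configured for trusted database authentication and the default services user might contain entries like the following. The parameter names are those described in the notes above; the values shown are hypothetical examples only:

```
# Hypothetical excerpt from an edited managerresponse.txt.
# Blank parameters fall back to the defaults noted above.
USER_SUPPLIED_SERIAL_NUMBER=
USER_INPUT_SERVICESUSER=
USER_INPUT_DBTRUSTEDYES=1
DBSERVER=dbhost01\\EIDB:1433
DBUSER=
```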

   Save the file when you finish entering parameter values.
3. Copy the installation program (InstallEI.exe, InstallEI.bin.Linux, or InstallEI.bin.Solaris) and the edited response file to a directory on your computer.



4. Open a command prompt on the server where you want to run the silent installation, and navigate to the directory containing the response file and installation program.
   Note: The response file must be in the same directory as the installation program for the silent installation to run.

5. Enter the following command, using the response file path and name:
   Windows
InstallEI -f responsefile

Solaris or Linux
./InstallEI.bin.platform -f responsefile

responsefile
   Specifies the full path and name of the response file to use. For example, to use the connector response file at the root of the C:\ directory on Windows, you would enter C:\agentresponse.txt.

The silent installation starts. You are not notified when the installation starts or finishes. On Windows, you can monitor the Windows Task Manager for the presence of the InstallEI.exe application.

If a silent installation fails without finishing, the installer creates a log file named abort_cause.err in the C:\Documents and Settings\Administrator\Local Settings\Temp directory on Windows 2003, the C:\Users\Administrator\AppData\Local\Temp\2 directory on Windows 2008, and the /tmp directory on Solaris or Linux. Check this file to determine the reason for the silent installation failure.

More information: Installation Troubleshooting (see page 32)

Create a Response File


You can create a unique response file to use for silent installations throughout your enterprise. To create a response file, you must run a traditional installation. If you want to avoid this step, use one of the response files provided in the Response Files folder on the installation image.

To create a response file

1. Open a command prompt and navigate to the directory where the installation program resides.



2. Enter the following command, using the response file path and the name of the file to create:
   Windows
InstallEI -r responsefile

Solaris or Linux
./InstallEI.bin.platform -r responsefile

responsefile
   Specifies the full path and name of the response file to create. For example, to generate a myresponse.txt file at the root of the C:\ directory on Windows, you would enter C:\myresponse.txt.

   The Introduction page of the installation wizard opens.
3. Answer the installer prompts as if you are performing a traditional installation. Make sure to select the type of installation for which you want to create a response file.
   Note: See the installation procedures in the Component Installation section for help with the installer prompts.
   Click Install on the last page to run the installation. When the installation finishes, the installer uses the information you provided to create the response file in the location you specified.
4. Open the created response file and add the following lines anywhere in the file:
INSTALLER_UI=silent
CHOSEN_INSTALL_SET=installtype

installtype
   Specifies the type of installation to run. Enter Manager for a manager-only installation, Connector for a connector-only installation, or Both for a manager and connector installation.
5. Re-enter the following values, which are not saved when the response file is generated:

- The license key, if you originally specified a license and you want to obtain CA Spectrum functionality
- The database and Tomcat passwords (Windows only)
- The password for the services user, if you specified a user to run the services

6. Save and close the file.
   The response file is ready for use in a silent installation.
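For instance, to turn a generated myresponse.txt into a silent connector-only installation, the lines appended in Step 4 would read:

```
INSTALLER_UI=silent
CHOSEN_INSTALL_SET=Connector
```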



Perform a Silent Installation Using a Created Response File


You can use a created response file to perform an unattended installation. The installation uses the information in the file to run transparently. On Solaris and Linux, the installer must be able to access an xterm session.

To perform a silent installation using a created response file

1. Verify that the response file and installation program are copied to the same location on your hard drive.
2. Open a command prompt and enter the following command, verifying that you specify the correct response file location and name:
   Windows
InstallEI -f responsefile

Solaris or Linux
./InstallEI.bin.platform -f responsefile

responsefile
   Specifies the full path and name of the response file to use. For example, to use a myresponse.txt file at the root of the C:\ directory on Windows, you would enter C:\myresponse.txt.

The installation begins. You are not notified when the installation starts or finishes. On Windows, you can monitor the Windows Task Manager for the presence of the InstallEI.exe application.

Note: On Solaris and Linux, you may have to confirm that the installer can access the display xterm before the installation begins, depending on your xterm configuration.

If a silent installation fails without finishing, the installer creates a log file named abort_cause.err in the C:\Documents and Settings\Administrator\Local Settings\Temp directory on Windows 2003, the C:\Users\Administrator\AppData\Local\Temp\2 directory on Windows 2008, and the /tmp directory on Solaris or Linux. Check this file to determine the reason for the silent installation failure.

More information: Installation Troubleshooting (see page 32)



Uninstall CA Event Integration on Windows


You can uninstall any installed components and remove the manager database.

Note: When you uninstall components that were installed silently, the uninstallation is also silent. For more information, see the Release Notes.

To uninstall CA Event Integration

1. Select Start, Control Panel, Add or Remove Programs.
   The Add or Remove Programs dialog opens.
2. Select CA Event Integration and click Change/Remove.
   The Uninstall CA Event Integration dialog opens.
3. Do one of the following:
   - Click Next for an uninstallation that includes the manager. The Delete Database page opens.
   - Click Uninstall for an uninstallation that is connector-only. The uninstallation initializes, runs, and completes. A page appears summarizing the uninstallation. This page may list files or directories that were not removed.
4. (Manager uninstallation only) Select the Remove database check box if you want to delete the database, and click Uninstall.
   The uninstallation initializes, runs, and completes. A page appears summarizing the uninstallation. This page may list files or directories that were not removed.
5. Click OK.
   When the dialog closes, further cleanup is performed. Check the directories listed in the summary to verify that they were deleted.

Clean up Windows User Information


When you uninstall CA Event Integration on Windows, the following directories and registry entry related to the operating system user created with the product (ca_eis_user by default) may persist:

- C:\Documents and Settings\username (Windows 2003)
- C:\Users\username (Windows 2008)
- HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList\S-1-5-21-...
  Note: The full name of the registry entry varies.



If you uninstall CA Event Integration and do not plan to reinstall, remove these persistent files.

To clean up Windows user information

1. Delete the C:\Documents and Settings\username (Windows 2003) or C:\Users\username (Windows 2008) directory if it still exists.
2. Run the delprof.exe utility provided with Windows to delete all other materials, including the registry entry, associated with the user profile (and other inactive user profiles). You can download this utility from the Microsoft website if it is not on your system.

Uninstall CA Event Integration on Solaris and Linux


You can uninstall a connector on Solaris or Linux. You must run the uninstallation from an xterm.

Note: When you uninstall components that were installed silently, the uninstallation is also silent. For more information, see the Release Notes.

To uninstall CA Event Integration on Solaris and Linux

1. Start the uninstallation from an xterm with the following command:
'EI_HOME/Uninstall_CA Event Integration/Uninstall_CA_Event_Integration'

Enter the path to the installation directory for EI_HOME. For example, if you are using the default installation directory, the command would resemble the following:
'/opt/CA/EventIntegration/Uninstall_CA Event Integration/Uninstall_CA_Event_Integration'

   The Uninstall CA Event Integration dialog opens.
2. Click Uninstall.
   The uninstallation runs. A page appears summarizing the uninstallation. This page may list files or directories that were not removed.
3. Click OK.
   When the dialog closes, further cleanup is performed. Check the directories listed in the summary to verify that they were deleted.



Security Considerations
CA Event Integration encrypts all passwords entered in policy files (for example, database credentials and product access credentials). After you enter a password in a policy file from the administrative interface, the password appears as encrypted in the file after you save the changes.

The administrative interface uses the default Tomcat security model, which stores a clear-text Tomcat password on the manager machine at EI_HOME\ThirdParty\tomcat\conf\tomcat-users.xml. You can secure this file using Windows operating system security.
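To illustrate the exposure, a default tomcat-users.xml stores credentials in a form like the following. The user name, password, and role shown here are hypothetical; the exact entries depend on what you configured during installation:

```xml
<!-- Hypothetical entry: the password attribute is clear text,
     which is why the file should be locked down at the OS level. -->
<tomcat-users>
  <user username="eiadmin" password="secret" roles="manager"/>
</tomcat-users>
```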


Chapter 3: Integrations and Deployment Scenarios


This chapter discusses specific considerations for implementing, configuring, and using integrations with CA Spectrum, CA NSM, CA Spectrum SA, and other management products and event sources, and provides deployment scenarios that represent common use cases.

This section contains the following topics:

Integrating with CA Spectrum (see page 43)
Integrating with CA NSM (see page 76)
Integrating with CA Spectrum SA (see page 90)
Integrating with Mainframe Products (see page 108)
Integrating with HP Business Availability Center (see page 111)
Integrating with CA Catalyst Connectors (see page 113)
Tiered CA Event Integration Implementation (see page 116)
Other Integrations (see page 118)
Tutorials (see page 121)

Integrating with CA Spectrum


CA Event Integration can do the following as an add-on option for CA Spectrum:

- Collect CA Spectrum alarms and route them to other destinations
- Enrich existing CA Spectrum alarms with information from outside sources
- Enrich existing CA Spectrum alarms (or events from other sources) with CA Spectrum model attributes
- Collect external events and route them to the CA Spectrum alarm and event views

This section describes how to implement CA Event Integration with CA Spectrum and how to configure and deploy common use cases.

Important! You must enter a license key during installation to enable CA Spectrum integration functionality. If you install Event Enrichment as a component of CA Spectrum SA, CA Event Integration is installed silently without a license key.



CA Spectrum Implementation and Configuration


To integrate with CA Spectrum, you must use connectors, the event processing components of CA Event Integration, to collect and process alarms from all CA Spectrum sources in your environment. You install and configure connectors for each SpectroSERVER from which to collect, process, or dispatch events. You can install connectors locally on the SpectroSERVER or establish a remote connection with each SpectroSERVER with which to integrate.

Connectors are not limited to a one-to-one relationship with SpectroSERVERs; you can configure one connector to integrate with several connected SpectroSERVERs, known as a Distributed SpectroSERVER (DSS) environment. You can configure a connector to be DSS-aware and collect alarms from all SpectroSERVERs in the distributed environment. Likewise, a connector can update alarms or send events to a DSS. The connector uses the configured SpectroSERVER landscape names to determine which SpectroSERVERs to monitor, and uses the configured Main Location Server to send destination events or alarm updates to the appropriate landscape.

Note: You can install the CA Event Integration manager independently from the CA Spectrum environment. The manager can reside on any server from which you want to manage your CA Event Integration environment. The manager contains its own embedded Tomcat installation, so there is no benefit or detriment to installing on a server with an existing Tomcat installation.

CA Spectrum Configuration Requirements


After you install CA Event Integration, you must complete following manual operations to enable and configure the integration with CA Spectrum:

- Create the ca_eis_user in CA Spectrum (see page 45)
- Configure event message format in CA Spectrum (see page 45) (on specific CA Spectrum versions only)
- Configure a remote CA Spectrum connection (see page 46) (on remote installations only)
- Configure Distributed SpectroSERVER operation (see page 47) (Distributed SpectroSERVER implementations only)
- Copy custom CsPCause files (see page 49) (for custom alarms only)
- Configure CA Spectrum policy (see page 49)

CA Event Integration cannot collect and process alarms until the applicable tasks from this list are complete.



Create the ca_eis_user in CA Spectrum


Before the CA Spectrum destination adaptor can send alarms and events processed by CA Event Integration to CA Spectrum, you must create an authorized user in CA Spectrum to perform these operations. The ca_eis_user is defined as the default in the spectrum-src.xml and spectrum-dest.xml policy files. This user (or any user) must have an Administrator license in CA Spectrum to interact with alarms and incoming events. You can perform this procedure on any OneClick client that communicates with the appropriate SpectroSERVERs.

This user is also required when configuring a remote connection with CA Spectrum, and the user must exist on all SpectroSERVERs with which you are integrating in a Distributed SpectroSERVER environment.

To create the ca_eis_user in CA Spectrum

1. Open the CA Spectrum OneClick Console and select the Users tab.
   The Users List opens.
2. Click the Create User icon.
   The Create User dialog opens.
3. Enter ca_eis_user in the Name field, and enter a password in the Web Password and Confirm Web Password fields.
4. Assign the Administrator license on the Licenses tab, and click OK.
   The user is created.

Note: CA Event Integration does not require knowledge of the password defined with the user in CA Spectrum.

More information: Configure CA Spectrum Policy (see page 49)

Configure Event Message Format in CA Spectrum


CA Spectrum requires several specific event message format files to receive events and alarms processed by CA Event Integration. These files are shipped with CA Spectrum starting with r8.1 version H16, r9.0 SP1, and all versions of CA Spectrum r9.1 and later. If you have an older image of r8.1 or r9.0, you must copy the event message format files and other information manually into CA Spectrum to enable CA Event Integration to properly format events being dispatched to CA Spectrum, including the granular event variables. Complete these tasks on all SpectroSERVERs with which to integrate.



To configure event message format in CA Spectrum

1. Navigate to the following directory on the CA Event Integration server:
   EI_HOME\ThirdParty\CA\SPECTRUM\EventPCause
2. Copy the following files from the CA Event Integration server to %SPECPATH%\SG-Support\CsEvFormat on the SpectroSERVER:
   Event00010fa0
   Event00010fa1
   Event00010fa2
   Event00010fa3
   Event00010fa4
   Event00010fa5
   Event00010fa6
   Event00010fa7
   Note: %SPECPATH% indicates the installation path for CA Spectrum on your server.
3. Copy the Prob00010fa0 file to %SPECPATH%\SG-Support\CsPCause.
4. Make a backup copy of the %SPECPATH%\SS\CsVendor\Cabletron\EventDisp file and open the main copy using WordPad (or another editor that displays linefeed characters correctly).
5. Open the EventDisp file distributed with CA Event Integration using WordPad and paste its content at the end of the CA Spectrum EventDisp file.
6. Save and close the CA Spectrum EventDisp file.
   The message format is configured in CA Spectrum.
7. Open OneClick and click Administration.
   The Administration page opens.
8. Click EvFormat/PCause Configuration, and click Reload to reload the event format and PCause files to include the files that you copied.
   The message format files are loaded and ready for use.
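On a UNIX SpectroSERVER, Steps 1-3 can be scripted. The following is a sketch only: the EI_HOME and SPECPATH values are assumptions for your hosts, and the file names are the eight event files plus the Prob file listed above:

```shell
# Sketch: copy the event message format files into CA Spectrum.
# EI_HOME and SPECPATH defaults below are assumptions -- adjust both.
EI_HOME="${EI_HOME:-/opt/CA/EventIntegration}"
SPECPATH="${SPECPATH:-/opt/SPECTRUM}"
src="$EI_HOME/ThirdParty/CA/SPECTRUM/EventPCause"
copied=0
for f in Event00010fa0 Event00010fa1 Event00010fa2 Event00010fa3 \
         Event00010fa4 Event00010fa5 Event00010fa6 Event00010fa7; do
  if [ -f "$src/$f" ]; then
    cp "$src/$f" "$SPECPATH/SG-Support/CsEvFormat/" && copied=$((copied + 1))
  else
    echo "missing: $src/$f"
  fi
done
if [ -f "$src/Prob00010fa0" ]; then
  cp "$src/Prob00010fa0" "$SPECPATH/SG-Support/CsPCause/" && copied=$((copied + 1))
fi
echo "$copied file(s) copied"
```

The EventDisp merge (Steps 4-6) is deliberately left manual, since pasting into the wrong EventDisp file can break existing event handling.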

How to Configure a Remote CA Spectrum Connection


Connectors must communicate with all SpectroSERVERs from which you want to collect alarms for the integration with CA Spectrum to work. Installing a connector directly on all SpectroSERVERs from which you want to collect alarms establishes this connection; however, you may want to install connectors on a node other than a SpectroSERVER node or use one connector to integrate with multiple SpectroSERVERs. Use the following process to establish a remote CA Spectrum connection, with the connector and SpectroSERVER residing on different nodes:



1. Define the ca_eis_user in CA Spectrum (see page 45).
2. Add the connector node in CA Spectrum Host Security (see page 47).

Repeat the process for all remote SpectroSERVERs with which to integrate if you are integrating with a Distributed SpectroSERVER environment.

Add the Connector Node to CA Spectrum Host Security


If you have installed a connector meant to integrate with a SpectroSERVER on a remote node, you must define the connector node in the SpectroSERVER's Host Security to enable a connection between the two nodes.

To add the connector node to CA Spectrum Host Security

1. Open the SPECTRUM Control Panel on the SpectroSERVER with which to connect.
2. Click Host Security.
   The Host Security dialog opens.
3. Enter the connector node name in the Server List dialog and click Add.
   The node is added to the Server List table.
4. Click OK.
   The changes are saved.

How to Configure Distributed SpectroSERVER Connections


CA Event Integration supports collecting alarms from a single SpectroSERVER or from multiple SpectroSERVERs operating in a Distributed SpectroSERVER (DSS) environment using one connector. Monitoring multiple SpectroSERVERs in a DSS environment with a single connector provides the following benefits:

- Simpler CA Event Integration deployment
- Consolidation of alarm data in CA Event Integration
- Close adherence to the structure of your complex, layered CA Spectrum implementation without running multiple connector instances

When you configure a connector to operate in a DSS environment, CA Event Integration collects alarms from all specified SpectroSERVER landscapes, queries the CA Spectrum Main Location Server to obtain all of the landscapes in the distributed environment, and uses the model handle of each event to route to the correct SpectroSERVER. Each SpectroSERVER receives the events and alarms that correspond to its managed models.



Complete the following process to configure CA Event Integration to operate in a Distributed SpectroSERVER environment:

1. Complete all configuration steps described in CA Spectrum Configuration Requirements (see page 44) for each SpectroSERVER with which to integrate. All SpectroSERVERs in the DSS environment must have the event message format, custom pcause files (if necessary), the ca_eis_user, and the connector node defined in Host Security.
   Note: You only need to configure the event message format for specific versions, and you only need to complete the remote CA Spectrum configuration if the connector is on a different node than the SpectroSERVER.
2. Verify that your DSS environment is working and properly configured. Configuring a DSS requires several manual operations, such as the following:
   - (CA Spectrum 8.1 only) Add a line to the <SPECPATH>\bin\vboa\agentaddr file on each SpectroSERVER that lists the node names of all servers in the Distributed SpectroSERVER environment.
   - Add a line to the Windows\system32\drivers\etc\hosts file for each SpectroSERVER that lists its node name and the node name of the Main Location Server.
   Note: For more information about properly configuring a Distributed SpectroSERVER environment, see the CA Spectrum documentation.
3. Click the Policies tab in the CA Event Integration administrative interface and configure the following CA Spectrum policy attributes in spectrum-src.xml and spectrum-dest.xml:
   landscapein
      Enter a comma-delimited list of all SpectroSERVER landscapes in the distributed environment from which to collect alarms.
   landscapeout
      Enter the Main Location Server landscape name to operate in a DSS environment.
   vbroker.agent.addr (CA Spectrum 8.1 only)
      Enter the fully qualified host name or IP address of the Main Location Server. You must enter the same value in the spectrum-src.xml and spectrum-dest.xml files for the connection to work.
   For more information about configuring policy and other CA Spectrum policy configuration requirements, see Configure CA Spectrum Policy (see page 49).
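As a sketch, the hosts-file entries from Step 2 for a small DSS with one Main Location Server and one additional SpectroSERVER might look like the following. The host names and addresses are hypothetical examples:

```
# Hypothetical hosts entries; each SpectroSERVER lists itself and the
# Main Location Server (mls01) so landscape lookups resolve consistently.
192.0.2.10   mls01.example.com   mls01
192.0.2.11   ss01.example.com    ss01
```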

48 Product Guide

Integrating with CA Spectrum

Copy Custom CsPCause Files into CA Event Integration


CA Spectrum CsPCause files contain information that CA Event Integration needs to collect CA Spectrum alarms. CA Event Integration must have a local copy of the CsPCause files for the CA Spectrum integration to work. By default, all CsPCause files provided with CA Spectrum 8.1, 9.0, and 9.2 are included in CA Event Integration. However, if you add custom alarms to CA Spectrum, you must copy the associated CsPCause files to the appropriate CA Event Integration directory. Copy all CsPCause files associated with custom alarms to one of the following directories on the associated CA Event Integration installation, depending on your version of CA Spectrum:

EI_HOME\ThirdParty\CA\SPECTRUM\81\CsPCause
EI_HOME\ThirdParty\CA\SPECTRUM\90\CsPCause
EI_HOME\ThirdParty\CA\SPECTRUM\92\CsPCause

Configure CA Spectrum Policy


To enable alarm collection from and event and alarm dispatching to CA Spectrum, you must define the following information in the CA Spectrum source and destination policy files in CA Event Integration:

- The landscape host names of the appropriate source and destination SpectroSERVERs
- The user name to use in CA Spectrum for interaction with CA Event Integration
- The CA Spectrum version with which to integrate

You must configure these policy attributes from the administrative interface before deploying either policy file in a catalog.
To configure CA Spectrum policy
1. Access the administrative interface on the CA Event Integration manager server. The Dashboard tab appears by default.
2. Click the Policies tab. The View Policies page opens.
3. Click the link on each of these files in separate operations:

- spectrum-src.xml
- spectrum-dest.xml

The Policy Configuration: Spectrum page opens for each file.

Chapter 3: Integrations and Deployment Scenarios 49


4. Complete the following fields on each Policy Configuration: Spectrum page and click Save:
   landscapein
   (spectrum-src.xml only) Specifies the landscapes from which to collect alarms. To collect alarms from a Distributed SpectroSERVER environment, enter all landscapes in the environment in a comma-delimited format. You must use the landscape host names, not the hex codes, and the values are case-sensitive.
   landscapeout
   (spectrum-dest.xml only) Specifies the landscapes to which to dispatch events and alarms. For Distributed SpectroSERVER environments, enter the Main Location Server landscape. You must use the landscape host name, not the hex code, and the value is case-sensitive.
   vbrokeragentaddr
   (CA Spectrum 8.1 only) Specifies the fully qualified domain name or IP address of the SpectroSERVER landscape. To operate the connector in a Distributed SpectroSERVER environment, enter the Main Location Server domain name or IP address. If you leave this field blank, CA Event Integration uses the appropriate landscapein/landscapeout value as the vbrokeragentaddr.
   Note: If spectrum-src.xml and spectrum-dest.xml are deployed in the same catalog, the vbrokeragentaddr value must be the same in each file for the deployment to work.
   landscapeuser
   Specifies the CA Spectrum user defined for sending alarms to CA Event Integration and receiving processed alarms and events from CA Event Integration. Change the default if you want to use a user other than ca_eis_user in CA Spectrum for this purpose. This user name must be the same for, and exist in, all landscapes from which you are collecting alarms and to which you are dispatching alarms and events.
   Note: The specified user must have an Administrator license in CA Spectrum.
   Default: ca_eis_user
   plugin_version
   Specifies the version of CA Spectrum with which you are integrating. Select 81 if you are integrating with CA Spectrum 8.1, 90 if you are integrating with CA Spectrum 9.0 or 9.1, or 92 if you are integrating with CA Spectrum 9.2.
Note: You can configure additional options in the spectrum-dest.xml file, such as a lost and found model for unreconciled events, destinations for enrichment data, and custom alarm attribute specifications. However, these options are not required to begin processing events and alarms. For more information about configuring this functionality, see CA Spectrum Usage.
All required information is defined.
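As an illustration, a completed configuration might use values like the following (the host names are hypothetical placeholders; landscape values are case-sensitive host names, not hex codes):

```
landscapein      = SSRV-EAST,SSRV-WEST        (spectrum-src.xml only)
landscapeout     = MLS-PRIMARY                (spectrum-dest.xml only)
vbrokeragentaddr = mls-primary.example.com    (CA Spectrum 8.1 only)
landscapeuser    = ca_eis_user
plugin_version   = 92
```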



More information: CA Spectrum Usage (see page 52)

How to Configure SNMP Trap Collection on a CA Spectrum Server


If you deploy a catalog with SNMP source policy that uses the SNMPplugin.dll adaptor on a CA Spectrum server, CA Spectrum (or any other SNMP manager) could bind to the SNMP trap default port 162, preventing the SNMP adaptor from binding to this port and sending traps to CA Event Integration. Only source policies from previous releases use the C++ version of the SNMP adaptor (SNMPplugin.dll). All CA Event Integration r2.5 source policies based on the SNMP adaptor use the Java version of the adaptor (SNMPAdapter.jar), which does not conflict with other SNMP managers. To enable trap collection on a CA Spectrum server on which a deployed policy using the C++ SNMP adaptor exists, do one of the following:

- Migrate your SNMP policy to use the Java SNMP adaptor (SNMPAdapter.jar) (see page 218). This option requires no configuration to external products or to Windows SNMP settings.
- Disable CA Spectrum traps.
- Change the CA Spectrum listener port to one other than the Windows SNMP trap service listener port, which is used by CA Event Integration.
- Change the Windows SNMP service trap listener port to one other than the CA Spectrum listener port.
- Use a trap multiplexor such as CA eHealth TrapEXPLODER or the CA NSM CaTrapMuxD utility to listen on port 162 and broadcast to other configured ports.
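The underlying conflict is ordinary socket behavior: only one process can bind a given UDP port, so whichever manager binds port 162 first locks the other out. A minimal Python sketch of the same condition (using an unprivileged port chosen by the OS rather than 162):

```python
import socket

# First manager (e.g. CA Spectrum) claims the trap port. Port 162 is
# privileged, so this sketch lets the OS pick a free placeholder port.
first = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]

# A second listener (e.g. the C++ SNMP adaptor) tries the same port and
# fails with an "address in use" error -- the conflict described above.
second = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    second.bind(("127.0.0.1", port))
    conflict = False
except OSError:
    conflict = True
finally:
    second.close()
    first.close()

print(conflict)  # True
```

Each workaround in the list above resolves this by ensuring only one listener (or a multiplexor) owns port 162.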

To disable CA Spectrum traps
1. Open the %SPECPATH%\SS\.vnmrc file with an editor such as WordPad to ensure that line feed characters are preserved.
2. Edit the following line as follows:
   snmp_trap_port_enabled=FALSE
3. Restart the SpectroSERVER.
4. Restart the CA EI IFW service.



To change the CA Spectrum listener port
1. Open the %SPECPATH%\SS\.vnmrc file with an editor such as WordPad to ensure that line feed characters are preserved.
2. Enter a port other than 162 for CA Spectrum to use on the following line:
   snmp_trap_port=
3. Restart the SpectroSERVER.
4. Restart the CA EI IFW service.

To reconfigure the Windows SNMP service trap listener port
1. Open the %windows%\system32\drivers\etc\services file in Notepad.
2. Modify the following line to use a port other than 162:
   snmptrap 162/udp snmp-trap #SNMP
3. Recycle the SNMP trap service from the Windows Services dialog.
4. Restart the CA EI IFW service.

To use a trap multiplexor to listen on port 162 and broadcast to other configured ports
1. Change CA Spectrum and the Windows SNMP trap service to listen on ports other than 162 using the previous procedures.
2. See the documentation for CA eHealth TrapEXPLODER and CaTrapMuxD to configure the multiplexor accordingly.

CA Spectrum Usage
CA Event Integration can increase the quality of events and alarms in CA Spectrum through enrichment and normalization, thereby decreasing event and alarm administration overhead. In addition to providing integration with various outside event sources, CA Event Integration can also help enhance the functionality and efficiency of many CA Spectrum event administration features, such as event rules and procedures, alarm filters, and alarm attributes. The most common CA Event Integration use cases for CA Spectrum users are as follows:
Enriching and updating alarms
You can add information (such as a contact address, a model attribute, or another property not represented by default) to alarms through enrichment and update the alarms in CA Spectrum. You can use any new values, such as contact information, in alarm filters to more efficiently handle alarm forwarding and resolution. Alarm enrichment increases event quality, allowing for more efficient administration.



Normalizing and integrating alarms with other event sources
You can send CA Spectrum alarms to other event sources, such as CA NSM, or send events from outside sources to the OneClick Console, creating a unified location for event management. For example, you can collect events from various network application log files and send them to CA Spectrum as events, thereby combining basic CA Spectrum network administration with vendor-specific network device administration.
Performing advanced CA Spectrum operations on dispatched events
CA Event Integration divides a CA Spectrum destination event into a common set of event variables that you can use in event rules and procedures to perform advanced correlation and automation. The separation of event variables makes using these advanced tools more efficient and feasible. Normalization can also simplify alarm forwarding and help classify SNMP traps with their appropriate models.
For more information about how to deploy CA Event Integration for the most common CA Spectrum use cases, see CA Spectrum Deployment Scenarios.

How to Start Processing Alarms


Use the following high-level process to verify that you have completed all tasks necessary to start collecting, processing, and updating CA Spectrum alarms and dispatching collected events from other sources to CA Spectrum.
Note: The following process does not consider operations such as enrichment, custom attributes, or other CA Spectrum functionality available with CA Event Integration.
1. Install the manager and connectors compatible with your CA Spectrum environment. For more information, see the chapter "Installation."
2. Complete the manual CA Spectrum configuration requirements (see page 44).
3. Log in to the administrative interface (see page 124).
4. Configure CA Spectrum policy (see page 49) to define the source and destination landscapes and the CA Spectrum user and version with which to integrate.
5. Create a catalog (see page 158) with CA Spectrum policies assigned.
   Note: You assemble all sources, destinations, and enrichments in a catalog. For more information about sources and destinations that you can use in a catalog, see Other Integrations. For more information about configuring enrichments with CA Spectrum, see Alarm Enrichment. For more information about example CA Spectrum catalog configurations, see CA Spectrum Deployment Scenarios.
6. Assign (see page 162) and deploy (see page 163) the catalog on the appropriate connector.



After you deploy a catalog with the appropriate configuration, CA Event Integration should begin enacting the catalog policy files on the defined sources and destinations. For more information about working with catalogs and connectors, see the chapter "Configuration and Administration."

Alarm Enrichment
CA Event Integration interacts with CA Spectrum alarms as follows:

- Connectors collect alarms from CA Spectrum and process them according to internal policy. You can view processed alarms in the manager database or send them to any other available destination.
- Processed alarms are updated in CA Spectrum. No additional alarms are created; CA Spectrum accepts updates to the existing alarm. Alarm updates also create a corresponding informational event.
Note: Events dispatched to CA Spectrum from other sources appear as events in OneClick.

The CA Event Integration enrichment modules let you enrich alarms with supplemental information from outside sources that did not originally appear in the alarm. The provided enrichment modules let you enrich alarms with information from any custom database, CA CMDB, CA NSM WorldView (Windows only), CA Spectrum model attributes, and an Internet search URL, and you can create custom enrichment modules to extract information from any external source. Enrichment can add value to alarms by providing information that enables more efficient alarm administration, classification, forwarding, and closure. You can use the administrative interface to place enrichment data in the following three existing CA Spectrum alarm attributes:

- Troubleshooter
- Trouble Ticket ID
- Status

Any data that you place into the Troubleshooter attribute must be a valid defined Troubleshooter Name in CA Spectrum. You can also place enrichment data in custom alarm attributes and event variables. Enrichment from the provided enrichment modules provides flexibility in improving alarm quality and administration time. Some example use cases are as follows:

- Enriching alarms with an email address from a contact database and assigning this email address to the Troubleshooter attribute, so that the appropriate contact is assigned to the alarm immediately after it is generated. This use case eliminates the step of manually assigning alarms or creating alarm filters to assign alarms.



- Enriching alarms with CI information from CA CMDB. CA Event Integration can use the alarm model handle to extract the device configuration item information from the CA CMDB, such as instance name, location, and custom properties. Assigning this information to the Status field lets you view specified CA CMDB information for each alarm. This action integrates the CA CMDB with CA Spectrum and provides extra information that you can use for alarm filtering, forwarding, and resolution.
- Enriching alarms with the corresponding model's ID attribute, which is not included in the original alarm, to provide additional model information. This use case could apply when sending the alarm back to CA Spectrum or to an external destination, such as CA Spectrum SA, to decrease the time required for alarm resolution.

More information: Custom Event Codes (see page 63)

How to Configure Alarm Enrichment


You can enrich alarms with information from custom databases, CA CMDB, CA NSM WorldView (Windows only), CA Spectrum model attributes, and an Internet search URL, and create custom enrichment modules. Enrichment values can appear in the CA Spectrum Troubleshooter, Trouble Ticket ID, and Status fields, and in custom alarm and event attributes.
Configuring an alarm enrichment requires that you configure the enrichment source, assign the enrichment a variable, and assign that variable to a specific location in the destination alarm. To complete these tasks, edit policy attributes in the administrative interface.
Complete the following process to configure a basic alarm enrichment:
Note: The following process only covers basic enrichment configuration; to deploy a configured enrichment, you must create a catalog with the appropriate policy and deploy the catalog on connectors. For a complete enrichment scenario, see CA Spectrum Deployment Scenarios. For information about how to configure complex alarm enrichments, see Manual Alarm Enrichment Configuration.
1. Verify that the integration with CA Spectrum is properly configured. For more information, see CA Spectrum Implementation and Configuration.
2. Select and prepare the enrichment source. If you are using a custom database, verify that it exists and is properly configured with current information.
3. Open the administrative interface and click the Policies tab. The View Policies page appears. This page lists all available policies that you can deploy in a catalog.



4. Click the policy file for the enrichment source you want to use. If you are using a custom database, click the file that corresponds to your database type (SQL Server, Oracle, or MySql). The Policy Configuration page for the selected file appears.
5. Configure the connection settings for the enrichment source.
6. Edit the enrichment query to extract the appropriate information. For custom databases, the default query in the singleresultquery field is as follows:
   select contact from Table1 where hostname=?
   This query denotes that you are extracting data from the contact column in Table1 where the hostname column equals the alarm resource address. Construct an enrichment query for your database using the same conventions.
   Note: If you want the enrichment to key off of an alarm value other than the default resource address, you must edit the XML policy file directly. You must also edit the XML policy file directly if you want to construct a complex query returning multiple enrichment values. For more information, see Manual Alarm Enrichment Configuration (see page 59).
7. Click Save on the upper table. The connection settings and query are configured.
8. Enter a property value (or use the default) and select an enrichment variable for the enrichment data in the following fields, then click Save on the Assignment table:
   tagname
   Specifies the XML property used to insert the returned enrichment value in the destination alarm.
   assigned_to_enrichment_variable
   Assigns the defined enrichment property to a variable that you can reference in destination policy.
9. Return to the View Policies page and click spectrum-dest.xml. The Policy Configuration: Spectrum page appears.
   Note: Verify that the basic connection attributes (landscapeout, landscapeuser, and plugin_version) are properly configured before you configure enrichment attributes.
10. Do the following to configure the enrichment value and enrichment destination:
   - Select the enrichment variable that you defined in the enrichment policy file in the enrichment_variable field.
   - Select the CA Spectrum destination property to which to assign the enrichment data in the assigned_to_spectrumtag field.



The enrichment data will appear in the destination alarm in the location represented by the selected property. For example, if you select spectrum_Alarm_Status, the enriched data will appear in the alarm Status field.
11. Save the Assignment table.
The enrichment is configured. If you have already deployed a catalog with this policy, you must re-deploy the catalog for the enrichment to occur.
More information:
Manual Alarm Enrichment Configuration (see page 59)
Custom Event Codes (see page 63)
CA Spectrum Deployment Scenarios (see page 63)
CA Spectrum Implementation and Configuration (see page 44)
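The default singleresultquery follows standard parameterized-query conventions: the ? placeholder is bound to the alarm's resource address at enrichment time, and the first column of the first returned row becomes the enrichment value. A self-contained Python/sqlite3 sketch of the same lookup (the table contents and host name are hypothetical):

```python
import sqlite3

# Hypothetical stand-in for the custom enrichment database: Table1 maps
# a hostname to the contact used to enrich the alarm.
db = sqlite3.connect(":memory:")
db.execute("create table Table1 (hostname text, contact text)")
db.execute("insert into Table1 values ('router-nyc-01', 'noc@example.com')")

# The singleresultquery pattern: '?' is bound to the alarm resource
# address, and the first row's first column becomes the enrichment value.
row = db.execute(
    "select contact from Table1 where hostname=?", ("router-nyc-01",)
).fetchone()
contact = row[0] if row else None
print(contact)  # noc@example.com
```

If no row matches the resource address, no enrichment value is available, which is why the custom database must be kept current.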

How to Configure CA Spectrum Model Attribute Enrichment


You can enrich CA Spectrum alarms or events from other sources with CA Spectrum model attributes. This functionality provides the following benefits:

- The ability to add CA Spectrum model information to events from outside sources. For example, you can add the Owner attribute from a model to CA NSM events, so that when the events are sent to their destination, you are able to discern the owner of the affected model in CA Spectrum.
- The ability to increase the CA Spectrum information included in CA Spectrum alarms, so that when they are sent to their destination (CA Spectrum or an external source, such as CA Spectrum SA), important model information that is normally not included in alarms can be considered during issue resolution.

By default, CA Spectrum enrichment adds the tag, id, owner, and organization attributes to events and alarms. When enriched alarms are returned to CA Spectrum, the attributes can appear in the Troubleshooter, Trouble Ticket ID, or Status alarm attributes as described in Alarm Enrichment (see page 54). When enriching alarms or events for other destinations, the attributes appear in the areas specified in each integration section in this chapter.
Complete the following process to configure CA Spectrum model attribute enrichment:
Note: The following process only covers basic enrichment configuration; to deploy a configured enrichment, you must create a catalog with the appropriate source and destination policy and deploy the catalog on connectors. For a complete enrichment scenario, see CA Spectrum Deployment Scenarios. For information about how to configure complex alarm enrichments, see Manual Alarm Enrichment Configuration.
1. Verify that the integration with CA Spectrum is properly configured. For more information, see CA Spectrum Implementation and Configuration. If other sources or destinations are involved in the enrichment, ensure that these integrations are configured properly.



2. Click spectrum-enrich.xml on the Policies tab. The Policy Configuration: Spectrum page appears.
3. Enter the CA Spectrum connection and version information for the following attributes as described in Configure CA Spectrum Policy (see page 49):
   - landscapeout
   - landscapeuser
   - vbrokeragentaddr
   - plugin_version
4. Enter the hex codes for the model attributes to include in the enrichment in the attribute_hexcodes field. The default hex codes are for the tag, id, owner, and organization attributes. You can remove these codes and enter the codes for other attributes if necessary. Find the hex codes for model attributes using the Spectrum Model Attribute Editor.
5. Click Save on the upper table. The connection settings and model attributes are saved.
6. Enter a property value (or use the default) and select an enrichment variable for the enrichment data in the following fields, then click Save on the Assignment table:
   tagname
   Specifies the XML property used to insert the returned enrichment value in the destination alarm. Enter a name that contains a hex code that you are using for the enrichment.
   Default: ssenrich_12bfb
   assigned_to_enrichment_variable
   Assigns the defined enrichment property to a variable that you can reference in destination policy.
7. Configure destination policy to set the enrichment value and enrichment destination as described in How to Configure Alarm Enrichment (see page 55) for the CA Spectrum destination or in the corresponding section for other destinations in this chapter.
The enrichment is configured. If you have already deployed a catalog with this policy, you must re-deploy the catalog for the enrichment to occur.

How to Use Custom Alarm Attributes in Enrichments


Custom alarm attributes in CA Spectrum are configurable as enrichment destinations. You must define the custom attributes in CA Spectrum destination policy, after which you can assign enrichment data to appear in a defined custom attribute.



Complete the following process to use a custom alarm attribute as an enrichment destination:
1. Create the custom alarm attribute in CA Spectrum and note the hex code assigned to the attribute.
2. Configure an enrichment policy (see page 55) to assign a variable to an enrichment value.
3. Click spectrum-dest.xml from the View Policies page of the Policies tab. The Policy Configuration: Spectrum page appears.
4. Enter the custom alarm attribute hex code in any of the Spectrum_Alarm_Custom fields of the Spectrum_CustomAlarm_Attributes table and click Save on the table. The custom alarm attribute is available for enrichment assignment in CA Spectrum destination policy.
5. Do the following in the Spectrum_Assignment table and click Save:
   - Select the appropriate enrichment variable in the enrichment_variable field.
   - Select the spectrum_Alarm_Custom attribute for which you specified a hex code in the assigned_to_spectrumtag field.
The enrichment is configured. When you assign and deploy a catalog with the policies necessary for alarm enrichment, enrichment data will appear in the specified custom attribute field in the enriched alarm.

Manual Alarm Enrichment Configuration


Although basic enrichments are fully configurable by editing enrichment and destination policy attributes in the administrative interface, the following scenarios require you to customize the XML policy files directly:

- You want the enrichment to result from an alarm value other than the default internal_resourceaddr. For example, assume that you want to use a custom database to enrich alarms based on each alarm's originating platform. In this case, you must edit the enrichment policy XML file to respond to this value. To do so, open the enrichment policy file from EI_HOME\Manager\PolicyStore\enrichments and find the <Enrich> section of the file. In the <Enrich> section, find the line that contains the following attribute:
input="internal_resourceaddr"

This input value controls which internal event properties value to use to gather information from an enrichment source. You must enter a valid internal event property, such as internal_resourceplatform for the scenario above, for this attribute.



Note: The Internet search URL uses tags other than internal_resourceaddr. For more information, see Internet Search Enrichment Policy Configuration (see page 147). For a list of internal event properties, see Internal Event Properties (see page 238).

- You want to add the enrichment data to a destination alarm attribute that is not listed in the administrative interface. For example, assume that you want to append enrichment data to an alarm condition. To add the enrichment to an alarm attribute that is not listed in the administrative interface, open the enrichment policy file from EI_HOME\Manager\PolicyStore\enrichments and find the <Format> section of the file. In the <Format> section, find the following line:
<Field format="{0}" input="internal_reportingagent,userdetail_0" output="internal_reportingagent"/>

Change the output attribute value to the destination event property to which to add the enrichment data (spectrum_Condition in the scenario above). The input variable lets you specify whether to append the enrichment data to the existing data or replace the existing data with the enrichment. To append the enrichment data to the existing data, separate the property from the enrichment property with a comma, as shown above. To replace the existing data with the enrichment property, include only the enrichment property for the input attribute.
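The append-versus-replace behavior described above can be modeled loosely in Python (this is an illustrative sketch only, not the product's formatting engine; the format strings here use Python's positional syntax):

```python
def apply_field(event, inputs, fmt):
    """Substitute the named event properties into the format string."""
    values = [event[name] for name in inputs.split(",")]
    return fmt.format(*values)

# Hypothetical event with an existing value and an enrichment value.
event = {
    "internal_reportingagent": "AgentA",
    "userdetail_0": "ops@example.com",
}

# Append: list the original property and the enrichment property.
print(apply_field(event, "internal_reportingagent,userdetail_0", "{0} {1}"))
# -> AgentA ops@example.com

# Replace: list only the enrichment property.
print(apply_field(event, "userdetail_0", "{0}"))
# -> ops@example.com
```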

- You want to create a complex query that returns multiple enrichment values. For example, assume that you want to enrich the Troubleshooter field with a first name, last name, and email address, which will extract from separate columns in a custom database. To do so, you must customize the XML enrichment policy file directly to enter a complex, multi-value query and format how each value appears in the destination event. For more information about writing complex enrichment policy, see Enrich Operation in the appendix "Writing and Customizing Policy."

More information:
Internal Event Properties (see page 238)
Destination Event Properties (see page 244)
Enrich Operation (see page 269)

Granular Event Variables in CA Spectrum


Processed events dispatched to CA Spectrum provide event content across the following granular event variables. These event variables provide the leverage to perform advanced event and alarm administration on destination events in CA Spectrum.



The event variables are as follows:


spectrum_EventVar_gentime
spectrum_EventVar_resourceclass
spectrum_EventVar_resourceinstance
spectrum_EventVar_resourcevendor
spectrum_EventVar_resourceplatform
spectrum_EventVar_resourceuser
spectrum_EventVar_reportingagent
spectrum_EventVar_msgtag
spectrum_EventVar_msgvalue

These variables allow for advanced processing in CA Spectrum. Some example use cases are as follows:

- Using the event variables in event condition rules to create correlations. For example, you can correlate two events using the resourceuser variable to create a new event as a result of the combination.
- Using the event variables in event procedures to carry out event-related actions. For example, you can write an event procedure that notifies a specific technician when events with a specific msgvalue and resourcevendor occur.
- Writing a procedure that triggers an alarm based on any event variables.

You can perform these advanced operations using events that originated from any source, allowing for event procedures, event conditions, and alarm triggers on CA NSM events, CA Spectrum SA alerts, events from network application logs, or any other source collected by CA Event Integration. The event variables provide a common schema for acting on all events of interest to further integrate them into CA Spectrum. Note: For a detailed scenario using event conditions and rules, see CA Spectrum Deployment Scenarios (see page 63). For more information about creating event conditions and rules in CA Spectrum, see the CA Spectrum documentation.

How to Use Custom Event Variables in Enrichments


Custom event variables can function as placeholders for enrichment data that you can leverage in event rules and procedures. CA Spectrum events sent from CA Event Integration contain nine predefined event variables, all of which you can reference in event rules and procedures. Custom event variables let you specify a new variable to hold enrichment data. CA Spectrum event rules and procedures can use the custom variable number (v10, v11, and so on) to access its enrichment data in correlations and actions.



When you configure an enrichment to appear in a custom event variable, the data does not appear in the destination event unless you add the custom variable to the CA Event Integration CA Spectrum event message formats. Even though it does not appear in the event, event rules and procedures can use the enrichment data assigned to the custom variable to perform their functions.
Complete the following process to use a custom event variable as an enrichment destination:
1. Write an event rule or procedure in CA Spectrum for a CA Event Integration event code that uses v10 to perform an action or correlation. You must use v10 before using higher numbers (v11, v12, and so on), and you must not skip variable numbers.
   Note: For more information about writing CA Spectrum event rules and procedures, see the CA Spectrum documentation. For an example deployment scenario using event rules and procedures, see CA Spectrum Deployment Scenarios.
2. Configure an enrichment policy (see page 55) to assign a variable to an enrichment value.
3. Click spectrum-dest.xml from the View Policies page of the Policies tab. The Policy Configuration: Spectrum page appears.
4. Do the following in the Spectrum_Assignment table and click Save:
   - Select the appropriate enrichment variable in the enrichment_variable field.
   - Select spectrum_EventVar_custom10 in the assigned_to_spectrumtag field. You must use variable 10 before using any of the other custom event variable numbers, and you must use the variables in numeric order.
   Note: You can only assign one custom event variable from the administrative interface. To assign multiple variables to enrichment data, you must manipulate the spectrum-dest.xml file directly. For more information about policy customization, see the appendix "Writing and Customizing Policy."

The enrichment is configured. When you assign and deploy a catalog with the policies necessary for event enrichment, the configured event rule or procedure will use the enrichment data assigned to the custom variable to perform its function.

Configure Unreconciled Events Module


When an event is not associated with a CA Spectrum model during event processing in CA Event Integration, it is classified as unreconciled when dispatched to CA Spectrum and is not displayed in the OneClick event or alarm view for any model. If you suspect you are missing important data when unreconciled events are not displayed in CA Spectrum, you can configure the CA Spectrum destination policy to send unreconciled events to a Lost and Found event module in CA Spectrum.


Integrating with CA Spectrum

To configure unreconciled events module

1. Open the administrative interface and click the Policies tab.

   The View Policies page opens.

2. Click spectrum-dest.xml.

   The Policy Configuration: Spectrum page opens.

3. Do the following and click Save:

- Enter the landscape in which to create the lost and found module in the lostfoundlandscape field. You can use any landscape to which you are sending events and alarms. You must use the landscape host name, not the hex value, and the name is case sensitive.
- Set the ei_lostfound field to on.

The lost and found module is configured. When you deploy a catalog, a module named EI LostFound appears under the base LostFound module in OneClick in the specified SpectroSERVER landscape. Select this module to view unreconciled events received from CA Event Integration. This module can help you pinpoint why specific events are not being reconciled, and you can adjust the destination policy to reclassify these events if necessary.

Custom Event Codes


CA Event Integration maps events dispatched to CA Spectrum to custom event codes based on their severity. Mapping to separate event codes allows for separate alarm dispositions based on an event's derived severity. You can create custom event codes in CA Spectrum to further refine how to handle events received from CA Event Integration.

After you create a custom event code in CA Spectrum, you must add the event code and event text as a refinement to the CA Spectrum destination policy section that assigns event codes based on severity. As a result, all events dispatched to CA Spectrum meeting the criteria that you specify in the destination policy are mapped to the specified custom event code.

Note: For an example of policy edited to include a custom event code, see the EI1.1-SPECTRUM tutorial in the EI_HOME\Docs\Tutorials directory.

CA Spectrum Deployment Scenarios


Deploying CA Event Integration with CA Spectrum can provide the following benefits:

- Event normalization to create a common, understandable event format with granular event data
- Enrichment for adding value to alarms and events and reducing administrative overhead

Chapter 3: Integrations and Deployment Scenarios 63


- Event collection from or alarm dispatching to third-party sources to integrate disparate event managers and create a unified enterprise event management system

After you install the manager on your management server and connectors on all servers required to integrate with your CA Spectrum environment, you must create and deploy the appropriate catalog configuration to all connectors. The following section details these CA Spectrum-specific deployment scenarios:

- Collecting CA NSM events and sending them to CA Spectrum in their normalized format. This scenario makes CA Spectrum a unified location for network and systems management events in an enterprise with both management products and creates a common event variable schema for advanced operations.
- Processing and enriching CA Spectrum alarms and updating the existing alarms with additional information.
- Processing and normalizing Windows Event Log events and sending them to CA Spectrum, where you can create event rules and conditions using the granular event variables assigned by CA Event Integration to complete advanced tasks such as event correlation and actions.

You can implement these scenarios in the form and quantity necessary for your enterprise. All of these scenarios support implementation in a Distributed SpectroSERVER environment.

Note: Scenarios One and Three use Windows-only adaptors (CA NSM and Windows Event Log) and are therefore only applicable when using a Windows connector.

Scenario One: CA NSM to CA Spectrum


CA NSM and CA Spectrum have event management systems that collect events sent to each product. Important messages from both products require quick analysis and resolution to ensure system and network health, and managing two major event sources doubles the time required for event management. In an environment with CA NSM and CA Spectrum installed, you can add value to CA Spectrum and decrease total event administration time by sending all CA NSM agent events to a SpectroSERVER for display in the OneClick Console.

In this scenario, you create and deploy a catalog that collects CA NSM agent events and dispatches them to CA Spectrum as events. This scenario simplifies event administration by creating one unified event management system where you can manage events from both products. The OneClick Console lets you run reports, queries, and filters on events in the CA Spectrum database that are not available for events in CA NSM. CA Event Integration separates events sent to CA Spectrum into several granular event variables, enabling advanced administration techniques such as event rules and procedures with CA NSM agent events.


To implement Scenario One: CA NSM to CA Spectrum

1. Access the administrative interface on the manager server and click View Policies on the Dashboard tab.

   The View Policies page opens. All CA Spectrum policy files require basic configuration before you can use them in a catalog.

2. Click spectrum-dest.xml.

   The Policy Configuration: Spectrum page opens.

3. Do the following and click Save:

- Enter the landscape to which you want to send events in the landscapeout field. You must use the landscape host name, not the hex code. This value is case-sensitive. Enter a Main Location Server to send events to a Distributed SpectroSERVER environment.
- (Optional) Edit the landscapeuser field if you are using a CA Spectrum user other than ca_eis_user to integrate with CA Event Integration.
- (CA Spectrum 8.1 only) Enter the fully qualified domain name or IP address of the SpectroSERVER landscape or Main Location Server (for Distributed SpectroSERVER operation) in the vbrokeragentaddr field.
- Specify the CA Spectrum version you are using in the plugin_version field.

All necessary policy attributes are configured.

4. Click the Catalogs tab and click New Catalog.

   The New Catalog wizard opens.

5. Create a catalog (see page 158) that contains the following policies:

   Source: nsmevent-src.xml
   Destination: spectrum-dest.xml

   Name the catalog on the Save page and click Finish. A dialog prompts you to specify whether to assign the catalog to connectors.

6. Click OK.

   The Assign Catalog wizard opens. The catalog you created is selected by default on the first wizard page.

7. Click Next.

   The Select Connectors page opens.

8. Select the connectors to which you want to assign the catalog in the Available Connectors pane, and move them to the Selected Connectors pane. Click Next when you finish.

   The Confirm page opens.


9. Verify the information on the Confirm page, and click Finish.

   A dialog prompts you to specify whether to deploy the catalog on the assigned connectors.

10. Click OK.

    The catalog is deployed, and the connectors begin enacting the catalog policy on their servers. Access the Connectors tab to view the deployment status for each connector.

Note: You can also assign and deploy a catalog in separate operations instead of immediately after creating a catalog. For more information, see the chapter "Configuration and Administration."

Scenario Two: CA Spectrum to CA Spectrum with Alarm Enrichment


When you deploy CA Event Integration in a CA Spectrum environment, you can collect alarms, process and normalize them in the core processing engine, and update them in CA Spectrum. The main benefit of running alarms through CA Event Integration is the ability to enrich them with supplemental information from outside sources. Alarms typically require quick action to resolve serious network problems, so it is important that they include all necessary information for efficient resolution.

CA Event Integration can enrich alarms with information from any outside source and add this information to a visible field in the alarm when it is returned to CA Spectrum. You can add technician contact information, additional severity details, more information about the alarm's originating model or location, or any other useful information that resides in any outside source.

In the following scenario, you configure policy, then create and deploy a catalog with the policy that instructs the connector to collect alarms, enrich them with contact information from a custom SQL database, and update the enriched alarm in CA Spectrum, so that OneClick displays the new contact information in the alarm Assignment field. This scenario adds contact information to alarms to facilitate more efficient assignment and resolution.

To implement Scenario Two: CA Spectrum to CA Spectrum with alarm enrichment

1. Create or prepare a database on the connector server with a table named Enrich that contains the following columns:

   dns_name
   This column should contain an entry for the domain name of your connector server.


   Contact
   This column should contain an entry for the name or email address of the contact for that server.

   Note: CA Event Integration supports enrichments with Microsoft SQL Server, Oracle, and MySQL databases.

2. Create a Troubleshooter in CA Spectrum whose Name property matches the entry in the Contact column of the database you created. Any enrichment using the Troubleshooter attribute must match a defined Troubleshooter Name in CA Spectrum.

3. Access the administrative interface on the manager server and click View Policies on the Dashboard tab.

   The View Policies page opens. All CA Spectrum and enrichment policy files require basic configuration before you can use them in a catalog.

4. Click spectrum-src.xml.

   The Policy Configuration: Spectrum page opens.

5. Do the following and click Save:

- Enter the landscape from which you want to collect alarms in the landscapein field. You must use the landscape host name, not the hex code. This value is case-sensitive. Enter a comma-delimited landscape list to collect alarms from multiple landscapes in a Distributed SpectroSERVER environment.
- (Optional) Edit the landscapeuser field if you are using a CA Spectrum user other than ca_eis_user to integrate with CA Event Integration.
- (CA Spectrum 8.1 only) Enter the fully qualified domain name or IP address of the SpectroSERVER landscape or Main Location Server (for Distributed SpectroSERVER operation) in the vbrokeragentaddr field. You must enter the same value in the spectrum-src.xml and spectrum-dest.xml files for the connection to work.
- Specify the CA Spectrum version you are using in the plugin_version field.

6. Return to the View Policies page and click one of the following, depending on the type of custom database you want to use for the enrichment:

   - mssql-enrich.xml
   - oracle-enrich.xml
   - mysql-enrich.xml

   The Policy Configuration page opens for the file you selected.


7. Enter database connection settings in the upper table of the Policy Configuration page, and enter an enrichment query similar to the following in the singleresultquery field:

   select contact from Enrich where dns_name=?

   Note: You should tailor the query to the database type you are using.

   This query extracts information from the contact field of the Enrich database table when the resource address of the alarm equals a value in the dns_name column. Verify that the column names and table name match those that you defined in your table. Click Save on the table.

8. Complete the following fields in the Assignment table of the Policy Configuration page to configure the enrichment data assignment, and click Save on the table:

   dbtype_tagname
   Specifies an XML property to hold the enrichment data. The enrichment value requires a property name for insertion into an alarm. Use the default name or enter a different property name.

   assigned_to_enrichment_variable
   Assigns the defined enrichment property to a variable that you can reference in destination policy to insert the property's enrichment value into a specific place in an alarm. Select a variable from the drop-down list.

   The enrichment policy is configured.

9. Return to the View Policies page and click spectrum-dest.xml.

   The Policy Configuration: Spectrum page opens.

10. Complete the fields in the upper table to match the values you entered for spectrum-src.xml and click Save on the table. The landscapeout field should match the landscapein field if you want to update existing alarms. Enter a Main Location Server to update alarms in a Distributed SpectroSERVER environment.

11. Complete the following fields in the Spectrum_Assignment table and click Save:

    enrichment_variable
    Specifies the enrichment variable to assign to alarms. Select the variable that you assigned to the enrichment in Step 8.

    assigned_to_spectrumtag
    Specifies where in the destination alarm to insert the enrichment data represented by the variable. Select spectrum_Alarm_Troubleshooter to insert the enriched contact information in the Assignment field of the destination alarm.

    The policies are configured.
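For illustration only, the Enrich table from step 1 and the singleresultquery lookup can be sketched with Python's built-in sqlite3 module. SQLite stands in for the supported databases (Microsoft SQL Server, Oracle, MySQL), and the host and contact values here are hypothetical:

```python
import sqlite3

# Stand-in database illustrating the Enrich table from step 1.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Enrich (dns_name TEXT, Contact TEXT)")
conn.execute(
    "INSERT INTO Enrich VALUES (?, ?)",
    ("connector1.ca.com", "jsmith@example.com"),  # hypothetical values
)

# The singleresultquery from step 7: the ? placeholder is bound to the
# alarm's resource address at enrichment time.
row = conn.execute(
    "select contact from Enrich where dns_name=?",
    ("connector1.ca.com",),
).fetchone()
print(row[0])  # the contact value inserted into the alarm's Assignment field
```

The parameterized form (dns_name=?) is what allows each processed alarm to supply its own resource address as the lookup key.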


Note: From the administrative interface, you can effectively configure simple, single-value enrichments. If you want to create a complex multivalue enrichment or enrich based on a different event property, you must edit the XML policy files directly. For more information, see Manual Alarm Enrichment Configuration (see page 59).

12. Click the Catalogs tab and click New Catalog.

    The New Catalog wizard opens.

13. Create a catalog (see page 158) that contains the following policies:

    Source: spectrum-src.xml
    Destination: spectrum-dest.xml
    Enrichment: dbtype-enrich.xml

    Note: dbtype denotes the database policy file in which you configured the enrichment.

    Name the catalog on the Save page and click Finish. A dialog prompts you to specify whether to assign the catalog to connectors.

14. Click OK.

    The Assign Catalog wizard opens. The catalog you created is selected by default on the first wizard page.

15. Click Next.

    The Select Connectors page opens.

16. Select the connectors to which you want to assign the catalog in the Available Connectors pane, and move them to the Selected Connectors pane. Click Next when you finish.

    The Confirm page opens.

17. Verify the information on the Confirm page, and click Finish.

    A dialog prompts you to specify whether to deploy the catalog on the assigned connectors.

18. Click OK.

    The catalog is deployed, and the connectors begin enacting the catalog policy on their servers. Access the Connectors tab to view the deployment status for each connector.

    Note: You can also assign and deploy a catalog in separate operations instead of immediately after creating a catalog. For more information, see the chapter "Configuration and Administration."


19. Generate an alarm in CA Spectrum for a model handle that you have defined in the enrichment database.

    CA Event Integration should collect the alarm, perform the enrichment, and return the alarm to CA Spectrum.

20. Open OneClick and access the alarm.

    In the Alarm Detail pane, the Assignment field should contain the contact information you entered in the database for the alarm resource address.

Scenario Three: Windows Event Log to CA Spectrum with Event Conditions


When CA Event Integration generates and dispatches a destination event to CA Spectrum, the event content is distributed across several event variables. Event rules, procedures, and other advanced operations in CA Spectrum have access to these variables, so you can leverage them using advanced functionality.

In this scenario, you configure policy and create and deploy a catalog that sends Windows Event Log events to CA Spectrum. Making Windows events available to CA Spectrum lets you manage these events in a common destination format from OneClick. You also create an event rule associated with a CA Event Integration event code that generates a new event and alarm when specific event text occurs (in this case, when the ASP.NET service stops). This scenario illustrates the ability to leverage the destination event variables to create rules and procedures that can correlate certain events, perform specified actions, or create new events and alarms.

Note: To complete this scenario, you must create a new event, alarm, and event rule in CA Spectrum. For information about how to complete these tasks, see the CA Spectrum documentation.

To implement Scenario Three: Windows Event Log to CA Spectrum with event conditions

1. Open the CA Spectrum Event Configuration editor and create a new event that contains the following message text:
{d "%w- %d %m-, %Y - %T"}- ASP.NET service stopped. Time stopped: {S 1} Reported by: {S 2}. '{m}' of type '{t}'. (event [{e}])

2. Create a critical alarm and associate it with the new event.

3. Create an event condition rule for event 0x00010fa7 that generates the event code of the event you created according to the following expression:

   regexp({v 9}, {S The ASP.NET State Service service entered the stopped state.})

   You should also copy variables 1 and 8 (generation time and reportingagent) to the new event code's variables (1 and 2).
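In plain terms, the condition in step 3 is a regular-expression test against the destination event's msgvalue variable ({v 9}). The following sketch (not CA Spectrum code; the sample message is hypothetical) shows the equivalent check:

```python
import re

# The rule fires when the regular expression matches the text carried
# in destination event variable 9 (msgvalue).
msgvalue = "The ASP.NET State Service service entered the stopped state."
pattern = "The ASP.NET State Service service entered the stopped state."

rule_fires = re.search(pattern, msgvalue) is not None
print(rule_fires)  # True: the new event code and its alarm are generated
```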


This expression generates the event code for the event you created if a CA Event Integration destination event contains the message text in the expression in its msgvalue event variable (variable 9). After the event is generated, an alarm should generate that corresponds to the alarm you defined in the event.

4. Access the CA Event Integration administrative interface and click View Policies from the Dashboard tab.

   The View Policies page opens. All CA Spectrum policy files require basic configuration before you can use them in a catalog.

5. Click spectrum-dest.xml.

   The Policy Configuration: Spectrum page opens.

6. Do the following and click Save:

- Enter the landscape to which you want to send events in the landscapeout field. You must use the landscape host name, not the hex code. This value is case-sensitive. Enter a Main Location Server to send events to a Distributed SpectroSERVER environment.
- (Optional) Edit the landscapeuser field if you are using a CA Spectrum user other than ca_eis_user to integrate with CA Event Integration.
- (CA Spectrum 8.1 only) Enter the fully qualified domain name or IP address of the SpectroSERVER landscape or Main Location Server (for Distributed SpectroSERVER operation) in the vbrokeragentaddr field.
- Specify the CA Spectrum version you are using in the plugin_version field.

All policy is configured.

7. Click the Catalogs tab and click New Catalog.

   The New Catalog wizard opens.

8. Create a catalog (see page 158) that contains the following policies:

   Source: syslog-src.xml
   Destination: spectrum-dest.xml

   Name the catalog on the Save page and click Finish. A dialog prompts you to specify whether to assign the catalog to connectors.

9. Click OK.

   The Assign Catalog wizard opens. The catalog you created is selected by default on the first wizard page.

10. Click Next.

    The Select Connectors page opens.


11. Select the connectors to which you want to assign the catalog in the Available Connectors pane, and move them to the Selected Connectors pane. Click Next when you finish.

    The Confirm page opens.

12. Verify the information on the Confirm page, and click Finish.

    A dialog prompts you to specify whether to deploy the catalog on the assigned connectors.

13. Click OK.

    The catalog is deployed, and the connectors begin enacting the catalog policy on their servers. Access the Connectors tab to view the deployment status for each connector.

    Note: You can also assign and deploy a catalog in separate operations instead of immediately after creating a catalog. For more information, see the chapter "Configuration and Administration."

14. Open the Windows Services dialog and stop the ASP.NET service.

    Windows generates an event to record this action. CA Event Integration should collect this event for processing.

15. Open OneClick and select the Event view for the model handle of your connector server.

    A CA Event Integration event should appear for the service stopping, followed by the custom event that you generated. The custom event should also generate an alarm.

Resolving CA Spectrum Models


CA Event Integration has to resolve a model associated with each event that it dispatches to CA Spectrum. An alarm or event must be associated with a model to appear in that model's event or alarm view in OneClick. If you understand how CA Event Integration resolves models in CA Spectrum, you can perform custom reconciliation searches, configure the lookup method to best fit your environment, and troubleshoot reconciliation errors.

CA Event Integration performs model resolution by using internal event property values to reconcile to a model type and a specific model. The internal_resourceaddr, internal_resourceinstance, and internal_resourceclass internal event properties are the critical pieces of information used to resolve models. The internal_resourceaddr property value is the device address acquired from the source event. The internal_resourceinstance property is a name property that corresponds to a specific instance name value in the event source.


The CA Event Integration core retrieves a list of models from the SpectroSERVER whose IP address matches the address in an event's internal_resourceaddr property or whose model name matches the instance name in an event's internal_resourceinstance tag. By default, the core retrieves models by IP address. For more information about configuring the retrieval method, see Configure Model Lookup Method.

The internal_resourceclass property value, which is an event's derived resource type, is mapped to the Spectrum_MTypeName destination event property to guide how CA Event Integration performs model type reconciliation. CA Event Integration compares the retrieved models against model type regular expression pattern lists in the reconciliation.xml file at EI_HOME\Core\conf. Each pattern list represents common model types for a Spectrum_MTypeName value in the order of the best to worst fit. CA Event Integration uses the list under the corresponding Spectrum_MTypeName value to evaluate the models received from CA Spectrum and select the best fit for reconciliation. CA Event Integration assigns the event to the model whose model type is determined as the best fit according to the reconciliation logic. See the example below for an illustration of this process.

Note: Each event is limited to one search. However, the source can select from any search for each event.

Example: Reconciling a Host_Device resourceclass

The following example describes the reconciliation flow for an event whose internal_resourceclass value is Host_Device and internal_resourceaddr value is server1.ca.com:

Note: This example uses the default model retrieval method of matching models based on the event's IP address.

1. CA Event Integration retrieves a list of the models in the SpectroSERVER that correspond to server1.ca.com. For this example, the SpectroSERVER returns Host_Sun and IPInterface model types for this address.
   Note: If the connector is monitoring multiple SpectroSERVERs in a Distributed SpectroSERVER environment, it searches each SpectroSERVER for models matching the resource address.

2. CA Event Integration compares each model in the returned list against the model type regular expression entries listed underneath Host_Device (the derived Spectrum_MTypeName value) in the reconciliation.xml file:
   <Reconcilee type="SpectrumEnrich" name="Host_Device">
     <entry value="^Host_Device$" />
     <entry value="^Host_.*$" />
     <entry value="^VNM$" />


     <entry value="^.*Dev$" />
     <entry value="^EventModel$" />
     <entry value="^.*$" />
   </Reconcilee>

3. Host_Sun is matched to the second entry in the list and is therefore selected as the model associated with the Host_Device event. The event appears in the server1.ca.com Host_Sun model's event view in OneClick.
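The best-fit selection described above can be sketched as follows. This is an illustrative approximation of the reconciliation logic, not the product's actual code; the pattern list is taken from the reconciliation.xml excerpt, and the model types are those returned in the example:

```python
import re

def best_fit(model_types, pattern_list):
    """Return the model type matching the earliest (best-fit) pattern."""
    for pattern in pattern_list:      # patterns are ordered best to worst fit
        for mtype in model_types:
            if re.match(pattern, mtype):
                return mtype
    return None

# Pattern list for the Host_Device Spectrum_MTypeName value.
host_device_patterns = [
    "^Host_Device$", "^Host_.*$", "^VNM$",
    "^.*Dev$", "^EventModel$", "^.*$",
]

# The SpectroSERVER returned Host_Sun and IPInterface for server1.ca.com;
# Host_Sun matches the second entry, so it is selected.
print(best_fit(["IPInterface", "Host_Sun"], host_device_patterns))  # Host_Sun
```

Note that the catch-all pattern ^.*$ at the end of the list guarantees that some model is always selected when any models are returned.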

You can edit policy and the reconciliation.xml file to create custom reconciliation logic that performs specific searches for events originating from specific sources. Creating custom reconciliation searches can increase the accuracy of model resolution and help events from CA Event Integration appear under the correct models in CA Spectrum.

More information:
Custom Model Reconciliation Search Scenario (see page 74)
Configure Model Lookup Method (see page 75)

Custom Model Reconciliation Search Scenario


You can create custom reconciliation searches in reconciliation.xml if you are customizing specific policy and need to refine the reconciliation logic to perform source-specific searches or to fix inaccurate resolutions caused by the logic in reconciliation.xml.

The following scenario describes how to create a custom search for a configuration using the applog-src.xml policy to read a specific log file and send events from that file to CA Spectrum. This scenario assumes that you are using the Log Reader adaptor to collect events from a Cisco network log. In this case, you can benefit from customizing policy and reconciliation.xml to resolve events to Cisco-specific models in CA Spectrum.

Note: Make a backup copy of XML policy files before you change them.

1. Edit the <Format> section of the applog-src.xml file (or the customized copy of the file you are using for the Cisco log) located at EI_HOME\Manager\PolicyStore\sources to return Cisco as the value for the internal_resourceclass property as follows:
<Field output="internal_resourceclass" format="Cisco" input="" />

   By default, this property always returns "Application" for applog-src.xml. This generic classification may cause problems resolving Cisco models. Replace Application with Cisco.

2. Add a line to the <Enrich> internal_resourceclass section of the spectrum-dest.xml file (located at EI_HOME\Manager\PolicyStore\destinations) to map the internal_resourceclass property "Cisco" value to a Cisco spectrum_MTypeName property value as follows:
<mapentry mapin="^Cisco$" mapout="Cisco"/>


   By default, the applog-src.xml resourceclass value Application maps to "GnSNMPApp" for the spectrum_MTypeName property. Mapping to Cisco provides the opportunity to create Cisco-specific reconciliation logic to more accurately resolve the Cisco log events.

3. Add a new XML construct for the Cisco spectrum_MTypeName property value to the end of the reconciliation.xml file located at EI_HOME\Core\conf as follows:
   <Reconcilee type="SpectrumEnrich" name="Cisco">
     <entry value="^Network$" />
     <entry value="^.*Hub$" />
     <entry value="^Bridge$" />
   </Reconcilee>

As a result of this new reconciliation logic, events collected from the Cisco log file using the customized applog-src.xml policy (and therefore mapping to the Cisco internal_resourceclass and spectrum_MTypeName property values) will only reconcile to models that are Network, Hub, or Bridge types. The new reconciliation logic creates a more specialized search that narrows the potential models to those that could be related to the Cisco log file.
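The <mapentry> mechanism from step 2 behaves like an ordered regular-expression lookup: mapin is the pattern tested against the internal_resourceclass value, and mapout is the spectrum_MTypeName value emitted on a match. A minimal sketch of that behavior (illustrative only, using the two mappings named in this scenario):

```python
import re

# Ordered (mapin, mapout) pairs mirroring <mapentry> lines in
# spectrum-dest.xml: the Cisco mapping added in step 2, and the
# default Application -> GnSNMPApp mapping mentioned above.
mapentries = [
    ("^Cisco$", "Cisco"),
    ("^Application$", "GnSNMPApp"),
]

def map_mtype(resourceclass):
    """Return the spectrum_MTypeName for an internal_resourceclass value."""
    for mapin, mapout in mapentries:
        if re.match(mapin, resourceclass):
            return mapout
    return None

print(map_mtype("Cisco"))        # Cisco
print(map_mtype("Application"))  # GnSNMPApp
```

The emitted spectrum_MTypeName value then selects which pattern list in reconciliation.xml (here, the new Cisco list) is used for model reconciliation.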

Configure Model Lookup Method


CA Event Integration uses one of the following two methods for matching an event being sent to CA Spectrum to a list of models:

IP Address
This default lookup method uses an event's device address in the internal_resourceaddr property to retrieve a list of models with the same address.

Model Name
This method uses an event's internal_resourceinstance property value to retrieve a list of models whose model name matches this value. Model lookup by model name is useful when you are sending events to models that are not associated with an IP address or when you have customized policy to map a model name from a source event to the internal_resourceinstance property. For example, you could use model name lookup when monitoring a log file for an application associated with a model in CA Spectrum that does not contain resource address information or the resource address is not of interest, and you could customize the source policy to parse the application's model name to the internal_resourceinstance property.

Note: Some out-of-the-box source policies populate the internal_resourceinstance property with a static resource type value or no value at all. For example, SNMP source policy does not define a value for internal_resourceinstance by default. Verify that the derived internal_resourceinstance property value in source policy will always map to a valid model name before using the model name lookup method. Otherwise, use the default IP address lookup method. For more information about customizing policy, see the appendix "Writing and Customizing Policy."
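The difference between the two methods comes down to which event property is used as the lookup key. The following sketch (illustrative only; the event and model records are hypothetical) makes that concrete:

```python
# Hedged sketch of the two model-lookup methods; not the product's code.
def lookup_models(event, models, method="by_ip"):
    """Return candidate models for an event using the configured method."""
    if method == "by_ip":
        key, attr = event["internal_resourceaddr"], "ip"       # default
    else:  # "by_name"
        key, attr = event["internal_resourceinstance"], "name"
    return [m for m in models if m[attr] == key]

# Hypothetical model list: one host with an IP, one application model
# that is not associated with an IP address.
models = [
    {"name": "server1", "ip": "10.0.0.5"},
    {"name": "PayrollApp", "ip": None},
]
event = {
    "internal_resourceaddr": "10.0.0.5",
    "internal_resourceinstance": "PayrollApp",
}

print([m["name"] for m in lookup_models(event, models, "by_ip")])    # ['server1']
print([m["name"] for m in lookup_models(event, models, "by_name")])  # ['PayrollApp']
```

The by_name case shows why model name lookup is useful for models, such as applications, that carry no resource address.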


Configure the model lookup method by setting an attribute in CA Spectrum destination policy from the administrative interface. Retrieval by IP address is the default method, so you only need to configure this attribute if you want to retrieve models by model name.

To configure model lookup method

1. Access the administrative interface and click the Policies tab.

   The View Policies page opens.

2. Click spectrum-dest.xml.

   The Policy Configuration: Spectrum page opens.

3. Set the modellookup_method parameter to by_ip to retrieve models based on an event's IP address or to by_name to retrieve models based on an event's model name. Click Save.

The model lookup method is configured. If a catalog with this policy is already deployed, you must re-deploy for the method change to take effect.

Integrating with CA NSM


CA Event Integration can function as an add-on option for CA NSM to augment its event management capabilities with event normalization, event enrichment, event collection in a central database destination, and integration with other event sources. Implementing CA Event Integration throughout your CA NSM environment can provide the following benefits:

- Reduced complexity and quantity of events
- Increased quality of events in the Event Console
- Access to events from outside sources
- Access to advanced reporting functionality

This section describes how to use CA Event Integration with CA NSM and how to configure and deploy common use cases. Note: The CA NSM integration is only supported for Windows connectors.


CA NSM Implementation and Configuration


To implement CA Event Integration in your CA NSM environment, you must install connectors on each CA NSM server from which you want to collect events. The CA NSM adaptor is only supported for connectors installed on Windows systems.

Note: You can install the CA Event Integration manager independently from the CA NSM environment. The manager can reside on any server from which you want to manage your CA Event Integration environment. The manager contains its own embedded Tomcat installation, so there is no benefit or detriment to installing on a server with an existing Tomcat installation.

Any server on which you install a connector must contain either an Enterprise Management Event Agent or Event Manager installation.

Note: If you want to send events to a remote CA NSM server, you can also install a connector on a server with an Enterprise Management Administrative Client installation. For more information, see How to Configure a Remote CA NSM Event Destination.

Connectors collect agent events from the local CA NSM console log. Where you install the connectors depends on the architecture of your CA NSM Event Management implementation. The following list describes some common basic architectures:

- A tiered architecture with several CA NSM Event Agents installed, each forwarding its events to a CA NSM Event Manager
- Several CA NSM Event Managers that take their policy from a central MDB
- Several CA NSM Event Managers, each of which contains unique policy obtained from individual MDBs

You can combine these basic architectures to form many unique implementations. How to implement CA Event Integration with CA NSM varies according to each unique implementation and what you want to accomplish. Consider the architecture of your CA NSM environment and the functions that you want CA Event Integration to perform to determine how to structure your implementation through connector installations.

More information:
How to Configure a Remote CA NSM Event Destination (see page 77)

How to Configure a Remote CA NSM Event Destination


The most common implementation with CA NSM is to install connectors on all Event Agent and Event Manager nodes that you want to collect events from and send events to. By default, when you specify CA NSM as an event destination, events are sent to CA NSM on the connector node. However, you can also send events to a remote CA NSM destination node.

Chapter 3: Integrations and Deployment Scenarios 77


CA NSM event management nodes are often networked together and can send event messages between nodes. You may want to preserve this architecture with your CA Event Integration implementation and send events collected from one or several CA NSM nodes to a remote CA NSM event destination (a server without a connector installed). All connectors from which you want to send collected events to a remote CA NSM node must contain one of the following CA NSM Enterprise Management installations:

- Administrative Client
- Event Agent
- Event Manager

To configure a remote CA NSM event destination, complete the following process:
1. Add the remote destination node on the connector server's EM Connection Manager (see page 78).
2. Specify the remote destination node in the CA NSM destination policy file (see page 79).

Add the Destination Node to EM Connection Manager


To send events to a remote CA NSM destination, you must specify the destination host on the local CA NSM node using the EM Connection Manager. Perform this procedure on all CA NSM servers with connector installations from which you want to send collected events to the remote destination.

To add the destination node to EM Connection Manager
1. Select Start, Programs, CA, Unicenter, NSM, Enterprise Management, EM Connection Manager on a CA NSM server with a connector installed.
   The EM Connection Manager dialog opens.
2. Add the name of the remote CA NSM destination node in the Machine Name field and click Add.
   The node appears in the table at the bottom of the dialog.
3. Click Next, and click Finish on the ensuing screen.
   The CA NSM destination host is added to the EM Connection Manager.
4. Repeat Steps 1-3 on all CA NSM servers with connector installations from which you want to send collected events to the remote destination.



Configure Remote Destination in Policy


The destnode parameter in CA NSM destination policy specifies the server to which a connector dispatches events directed to CA NSM. If this parameter is blank, connectors send events to the CA NSM node on which they are installed. You do not have to specify a local node for the destnode parameter, because this is the default behavior. To send collected events to a remote CA NSM destination node, you must specify this remote node as the destnode parameter.

To configure the remote destination in policy
1. Access the administrative interface.
2. Click the Policies tab.
   The View Policies page opens.
3. Click nsmevent-dest.xml.
   The Policy Configuration: UniEvent page opens.
4. Add the remote destination node in the destnode field and click Save.
   The remote destination is configured.

To apply this policy in your environment, you must add it to a catalog and deploy the catalog on the appropriate connectors. If a catalog with this policy is already deployed, you must re-deploy for the policy change to take effect.
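For example, if the remote Event Manager host were named nsmmgr01 (a hypothetical host name used only for illustration), the saved policy would carry that value in its destnode setting. Sketched below in attribute form; the exact surrounding markup of nsmevent-dest.xml is not shown here and may differ:

```xml
destnode="nsmmgr01"
```

Clearing the field and redeploying the catalog restores the default behavior of dispatching to the local connector node.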

CA NSM Usage
Using CA Event Integration with CA NSM can improve event quality, integrate the Event Console with other event sources, and facilitate advanced operations such as correlation and actions. The most common CA Event Integration use cases for CA NSM users are as follows:

Filtering events
Filtering events at the CA NSM Event Agent level in a tiered architecture can reduce the number of agent events reported by message record and action scripts, Advanced Event Correlation, and the Alert Management System, and therefore reduce the volume of messages in the Event Manager.

Normalizing events
Normalizing events at the CA NSM Event Agent level in a tiered architecture lets you implement uniform and comprehensive policies for message record and action scripts, Advanced Event Correlation, and Alert Management.



Enriching events
Enriching events with information from an outside source, such as CA CMDB, lets you view this information from the Event Console and use the enrichment data in advanced operations.

Collecting events in a reporting database
Installing CA Event Integration at the CA NSM Event Manager level and collecting all agent event messages into a database lets you perform advanced reporting and filtering to identify areas of high event activity in your enterprise or trends that would otherwise be difficult to discern.

Integrating with other event sources
Installing CA Event Integration on a CA NSM Event Manager with which you want to integrate an outside source, such as CA Spectrum or a customized third party source, lets you establish one unified location for enterprise event management.

Note: The predefined CA NSM source policy collects agent event messages and filters out all other message formats. If you want to collect other messages from the Event Console (such as from integrated products or basic SNMP traps), you must customize the policy to do so. For more information about policy customization, see the appendix "Writing and Customizing Policy."

How to Start Processing CA NSM Events


Getting started processing CA NSM events with CA Event Integration requires only installation and basic configuration. No manual configuration is required on CA NSM servers (unless you are configuring a remote destination node), and you need not configure CA NSM policy before creating a catalog. Use the following high-level process to verify that you have completed all tasks necessary to start collecting and processing CA NSM events and dispatching processed or collected events from other sources to CA NSM.

Note: The following process does not consider tasks such as enrichment or advanced CA NSM functionality available with CA Event Integration.

1. Install the manager and connectors compatible with your CA NSM environment. For more information, see the chapter "Installation."
2. (Optional) Configure a remote CA NSM event destination (see page 77).
3. Log in to the administrative interface (see page 124).



4. (Optional) Configure CA NSM policy files (see page 137). Policy configuration is required only for CA NSM destination policy if you are configuring a remote destination. If you want to perform a CA NSM enrichment, you must first configure the nsm-enrich.xml file.
   Note: Policy files for other sources that you want to integrate with CA NSM may require configuration before you use them in a catalog. For more information, see Configure Policy Attributes.
5. Create a catalog (see page 158) with CA NSM policies assigned (and policies for other sources with which you want to integrate). You assemble all sources, destinations, and enrichments in a catalog. For more information about other sources and destinations that you can use in a catalog, see Other Integrations (see page 118). For more information about configuring enrichments with CA NSM, see CA NSM Enrichment (see page 81). For more information about CA NSM catalog configurations, see CA NSM Deployment Scenarios (see page 85).
6. Assign (see page 162) and deploy (see page 163) the catalog on the appropriate connector.

After you deploy a catalog with the appropriate configuration, CA Event Integration should begin collecting events from defined sources, processing and enriching events according to assigned policy, and dispatching events to defined destinations on the connector server. For more information about working with catalogs and connectors, see the chapter "Configuration and Administration."

CA NSM Enrichment
The CA Event Integration enrichment modules let you enrich CA NSM events with supplemental information from outside sources that did not originally appear in the event. The provided enrichment modules enrich events with information from any custom database, CA CMDB, CA NSM WorldView managed objects, CA Spectrum model attributes, and an Internet search URL, and you can also create custom enrichment modules to extract information from any external source. Enrichment can add value to events by providing information that enables more efficient event administration, classification, and resolution. You can place enrichment data in the following two existing CA NSM event attributes from the administrative interface:

- User Data
- Category



Enrichment from the provided enrichment modules provides flexibility in improving event quality and administration time. Some example use cases are as follows:

- Enriching events with the WorldView managed object identifier associated with the event. Assigning this object name to the User Data attribute lets you use this attribute to view agent events generated from a specific WorldView object and write scripts to correlate events originating from specific objects or trigger actions.
- Enriching events with configuration item information from CA CMDB. CA Event Integration uses the event's originating node to extract the node configuration item information from CA CMDB. Assigning this information to the Category attribute integrates CA CMDB with CA NSM and provides useful information in destination events that you can use for event correlation, assignment, and resolution.
- Enriching events with an external search URL or an internal knowledge base URL that searches based on specific event properties and provides supplemental event information such as root causes or resolution steps.

How to Configure CA NSM Enrichment


You can enrich CA NSM destination events with information from custom databases, CA CMDB, CA NSM WorldView, CA Spectrum model attributes, and an Internet search URL, and create custom enrichment modules. Enrichment values can appear in the CA NSM User Data and Category fields.

Configuring an event enrichment requires that you configure the enrichment source, assign the enrichment a variable, and assign that variable to a specific location in the destination event. To complete these tasks, edit policy attributes from the administrative interface. Complete the following process to configure a basic event enrichment:

Note: The following process only covers basic enrichment configuration; to deploy a configured enrichment, you must create a catalog with the appropriate policy and deploy the catalog on connectors. For a complete enrichment scenario, see CA NSM Deployment Scenarios (see page 85). For information about how to configure complex event enrichments, see Manual CA NSM Enrichment Configuration (see page 84).

1. Verify that the integration with CA NSM is properly configured. For more information, see CA NSM Implementation and Configuration (see page 77).
2. Select and prepare the enrichment source. If you are using a custom database, verify that it exists and is properly configured with current information.
3. Open the administrative interface and click the Policies tab.
   The View Policies page opens. This page lists all available policies that you can deploy in a catalog.



4. Click the policy file for the enrichment source you want to use. If you are using a custom database, click the file that corresponds to your database type (SQL Server, Oracle, or MySql).
   The Policy Configuration page appears for the selected file.
5. Configure the connection settings for the enrichment source.
6. Edit the enrichment query to extract the appropriate information. For custom databases, the default query is as follows:

   select contact where hostname=?

   This query denotes that you are extracting data from the contact column where the hostname column equals the event resource address. Construct an enrichment query using the same conventions. For WorldView enrichments, you must specify a valid WorldView property to return in the propertyname field. By default, the enrichment extracts the name property using the event resource address.
   Note: If you want the enrichment to result from an event value other than the default resource address, you must edit the XML policy file directly. You also must edit the XML directly if you want to construct a complex query returning multiple enrichment values. For more information, see Manual CA NSM Enrichment Configuration.
7. Click Save on the upper table.
   The connection settings and query are configured.
8. Enter a property value (or use the default) and select an enrichment variable for the enrichment data in the following fields, then click Save on the Assignment table:
   tagname
   Specifies the XML property used to insert the returned enrichment value in the destination event.
   assigned_to_enrichment_variable
   Assigns the defined enrichment property to a variable that you can reference in destination policy.
9. Return to the View Policies page and click nsmevent-dest.xml.
   The Policy Configuration: UniEvent page opens.
10. Do the following to configure the enrichment value and enrichment destination:
   - Select the enrichment variable that you defined in the enrichment policy file in the enrichment_variable field.
   - Select the CA NSM destination property to which to assign the enrichment data in the assigned_to_unieventtag field.
   The enrichment data will appear in the destination event in the location represented by the selected property. For example, if you select evtlog_category, the enrichment data will appear in the event Category field.
11. Save the Assignment table.
   The enrichment is configured. If you have already deployed a catalog with this policy, you must re-deploy the catalog for the enrichment to occur.

More information:
Manual CA NSM Enrichment Configuration (see page 84)
CA NSM Deployment Scenarios (see page 85)
CA NSM Implementation and Configuration (see page 77)
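The query convention shown in step 6 extends to other lookups. For example, assuming a hypothetical custom database with an owner column alongside the hostname column used above, a query that returns the owner recorded for the event's originating node would be:

```
select owner where hostname=?
```

As with the default query, the ? placeholder is filled with the event resource address when the enrichment runs.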

Manual CA NSM Enrichment Configuration


Although basic enrichments are fully configurable by editing enrichment and destination policy attributes in the administrative interface, the following scenarios require you to customize the XML policy files directly:

- You want the enrichment to result from an event value other than the default internal_resourceaddr.
  For example, assume that you want to use a custom database to enrich an event based on each event's originating vendor. In this case, you must edit the enrichment policy XML file to respond to this value. To do so, open the enrichment policy file from EI_HOME\Manager\PolicyStore\enrichments and find the <Enrich> section of the file. In the <Enrich> section, find the line that contains the following attribute:

  input="internal_resourceaddr"

  This input value controls which internal event property value to use to gather information from an enrichment source. You must enter a valid internal event property, such as internal_resourcevendor for the scenario above, for this attribute.
  Note: The Internet search URL uses tags other than internal_resourceaddr. For more information, see Internet Search Enrichment Policy Configuration (see page 147). For a list of internal event properties, see Internal Event Properties.

- You want to add the enrichment data to an event attribute that is not listed in the administrative interface.



  For example, assume that you want to append enrichment data to an event User field. To add the enrichment to an event field that is not listed in the administrative interface, open the enrichment policy file from EI_HOME\Manager\PolicyStore\enrichments and find the <Format> section of the file. In the <Format> section, find the following line:

  <Field format="{0}" input="internal_reportingagent,userdetail_0" output="internal_reportingagent" />

  Change the output attribute value to the destination event property to which to add the enrichment data (evtlog_user for the scenario above). The input attribute lets you specify whether to append the enrichment data to the existing data or replace the existing data with the enrichment. To append the enrichment data to the existing data, separate the tag from the enrichment tag with a comma, as shown above. To replace the existing data with the enrichment tag, include only the enrichment tag for the input attribute.

- You want to create a complex query that returns multiple enrichment values.
  For example, assume that you want to enrich the Category field with a WorldView object name and severity. To do so, you must customize the XML enrichment policy file directly to enter a complex, multi-value query and format how each value appears in the destination event. For more information about writing complex enrichment policy, see Enrich Operation in the appendix "Writing and Customizing Policy."
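As a concrete illustration of the first two scenarios, the edited lines might look as follows. This is a hedged sketch only: userdetail_0 is the enrichment variable from the excerpt above, only the attributes discussed are changed, and the rest of the policy schema is assumed to be unchanged.

```xml
<!-- First scenario: enrich from the event's originating vendor
     instead of the default resource address -->
input="internal_resourcevendor"

<!-- Second scenario, append form: the output attribute now targets the
     User field, and the comma-separated input appends the enrichment value -->
<Field format="{0}" input="internal_reportingagent,userdetail_0" output="evtlog_user" />

<!-- Second scenario, replace form: only the enrichment tag as input -->
<Field format="{0}" input="userdetail_0" output="evtlog_user" />
```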

More information:
Internal Event Properties (see page 238)
Enrich Operation (see page 269)

CA NSM Deployment Scenarios


Deploying CA Event Integration with CA NSM can provide the following benefits:

- Event normalization to create a common, understandable event format
- Event collection in an internal database for advanced reporting and filtering
- Event enrichment for adding value to events and reducing administrative overhead
- Event collection from or dispatching to third party sources to integrate disparate event managers and create a unified enterprise event management system

After you install the manager on your management server and connectors on all servers required to integrate with your CA NSM environment, you must create and deploy the appropriate configuration to all connectors.



The following section details these CA NSM-specific deployment scenarios:


- Collecting CA NSM agent events and sending them to the manager database for advanced reporting and back to CA NSM to establish a normalized event format.
- Collecting CA NSM agent events, enriching them with WorldView object properties, and sending them back to CA NSM in their normalized, enriched format. This scenario makes CA NSM events easier to understand and respond to and enhances them with additional object information.

You can implement these scenarios in the form and quantity necessary for your enterprise. For example, you can send CA NSM events to a remote CA NSM node, or in a tiered CA Event Integration architecture, send events collected from multiple connectors to one central connector that sends all processed events to one CA NSM node. For more information about implementation options, see CA NSM Implementation and Configuration and Tiered CA Event Integration Implementation.

Scenario One: CA NSM to CA NSM and the Manager Database


When deploying CA Event Integration in a CA NSM environment, you can simplify and normalize agent events by running them through the core processing engine and returning them to CA NSM. CA NSM Event Management receives events from several disparate sources, such as DSM, SNMP traps, the Windows Event Log, and other CA Technologies products, and the varied and lengthy event syntax from each of these sources is often difficult to understand, making it a challenge to process these messages in a timely manner.

Note: Policy customization is required to collect events from non-agent sources.

In the following scenario, you create and deploy a catalog that collects CA NSM events, normalizes these events into a common format with common event grammar, returns the events to CA NSM, and sends them to the manager database. This scenario is useful for making events more understandable in the CA NSM Event Console and organizing events from disparate sources into a common format, therefore reducing the time required to find and react to important events.

This scenario can also reduce the number of events forwarded to a CA NSM Event Manager by filtering certain classes of events. Fewer events would be processed by existing message records and actions, Advanced Event Correlation scripts, and Alert Management policy, and you can write new actions, scripts, and policies to take advantage of the unified format and reduce administrative overhead. This scenario also sends CA NSM agent events to the manager database, where you can run advanced reports filtering events by important metrics, such as node, severity, and event source.



To deploy Scenario One: CA NSM to CA NSM and the manager database
1. Access the administrative interface on the manager server.
2. (Optional) Click the nsmevent-dest.xml file from the Policies tab and enter the CA NSM node to send events to in the destnode field.
   This step is only required to send events to a remote CA NSM destination (one without a connector installed). Leave the field blank if you want to send events back to each connector server.
3. Click Create a Catalog on the Dashboard tab.
   The New Catalog wizard opens.
4. Create a catalog (see page 158) that contains the following policies:
   Source: nsmevent-src.xml
   Destination: nsmevent-dest.xml, database-dest.xml
   Name the catalog on the Save page and click Finish. A dialog prompts you to specify whether to assign the catalog to connectors.
5. Click OK.
   The Assign Catalog wizard opens. The catalog you created is selected by default on the first wizard page.
6. Click Next.
   The Select Connectors page opens.
7. Select the connectors to which you want to assign the catalog in the Available Connectors pane, and move them to the Selected Connectors pane. Click Next when you finish.
   The Confirm page opens.
8. Verify the information on the Confirm page, and click Finish.
   A dialog prompts you to specify whether to deploy the catalog on the assigned connectors.
9. Click OK.
   The catalog is deployed, and the connectors begin enacting the catalog policy on their servers. Access the Connectors tab to view the deployment status for each connector.

Note: You can also assign and deploy a catalog in separate operations instead of immediately after creating a catalog. For more information, see the chapter "Configuration and Administration."



Scenario Two: CA NSM to CA NSM and the Database with WorldView Enrichment
Scenario One (see page 86) illustrated the value gained from normalizing CA NSM events and returning them to CA NSM in a unified format with uniform event grammar. Scenario Two builds on Scenario One by adding WorldView enrichment policy to the catalog.

In this scenario, the WorldView enrichment uses the node defined in each event to extract information from the WorldView object associated with the node and add this information to the event in the Category field. As a result, a normalized, enriched event is returned to CA NSM with WorldView information that adds value by correlating agent events with related WorldView objects.

To deploy Scenario Two: CA NSM to CA NSM with WorldView enrichment
1. Access the administrative interface on the manager server and click View Policies on the Dashboard tab.
   The View Policies page opens. The WorldView enrichment file requires you to enter connection and enrichment settings before you use it in a catalog.
2. Click nsm-enrich.xml.
   The Policy Configuration: NsmEnrich page opens.
3. Enter the CA NSM repository, admin user name, and admin password in the corresponding fields and click Save on the NsmEnrich table. These settings let the enrichment modules connect to CA NSM to extract WorldView properties.
   The enrichment connection settings are configured.
4. Complete the following fields in the Assignment table of the Policy Configuration page to configure the enrichment data assignment and click Save on the table:
   wv_tagname
   Specifies an XML property to assign to the returned enrichment value. The enrichment value requires a property name for insertion into an event. Use the default property wv_name to correspond to the name property that you are extracting from WorldView.
   assigned_to_enrichment_variable
   Assigns the defined enrichment property to a variable that you can reference in destination policy to insert the property's enrichment value into a specific place in an event. Select a variable from the drop-down list. You cannot use the same variable in multiple enrichment policies.
   The enrichment policy is configured.
5. Return to the View Policies page and click nsmevent-dest.xml.
   The Policy Configuration: UniEvent page opens.



6. Complete the following fields in the UniEvent_Assignment table:
   enrichment_variable
   Specifies the enrichment variable to assign to events. Select the variable that you assigned to the enrichment in Step 4.
   assigned_to_unieventtag
   Specifies where in the destination event to insert the enrichment data represented by the variable. Select evtlog_category to insert the enriched node information in the Category field of the destination event.
   Click Save on the UniEvent_Assignment table. The enrichment destination is configured.
   Note: From the administrative interface, you can effectively configure simple, single-value enrichments. If you want to create a complex multi-value enrichment or enrich based on a different event property, you must edit the XML policy files directly. For more information, see Manual CA NSM Enrichment Configuration.
7. Click the Catalogs tab and click New Catalog.
   The New Catalog wizard opens.
8. Create a catalog (see page 158) that contains the following policies:
   Source: nsmevent-src.xml
   Destination: nsmevent-dest.xml
   Enrichment: nsm-enrich.xml
   Name the catalog on the Save page and click Finish. A dialog prompts you to specify whether to assign the catalog to connectors.
9. Click OK.
   The Assign Catalog wizard opens. The catalog you created is selected by default on the first wizard page.
10. Click Next.
    The Select Connectors page opens.
11. Select the connectors to which you want to assign the catalog in the Available Connectors pane, and move them to the Selected Connectors pane. Click Next when you finish.
    The Confirm page opens.
12. Verify the information on the Confirm page, and click Finish.
    A dialog prompts you to specify whether to deploy the catalog on the assigned connectors.



13. Click OK.
    The catalog is deployed, and the connectors begin enacting the catalog policy on their servers. Access the Connectors tab to view the deployment status for each connector.

Note: You can also assign and deploy a catalog in separate operations instead of immediately after creating a catalog. For more information, see the chapter "Configuration and Administration."

More information:
Scenario One: CA NSM to CA NSM and the Manager Database (see page 86)
Manual CA NSM Enrichment Configuration (see page 84)

Integrating with CA Spectrum SA


CA Spectrum Service Assurance (CA Spectrum SA) is a service-oriented management tool that unifies the health and availability information from your domain management tools and aligns the information in the context of your IT services. The product provides a new service management layer to your management infrastructure and can integrate with several CA Technologies products to provide service quality and risk information that spans data from all configured management systems. Integrating with CA Spectrum SA can provide the following benefits:

- Enhancing the CA Spectrum SA alert collection capability to include lower-level sources such as application logs, security events, Windows events, SNMP traps, and so on
- Collecting and processing CA Spectrum SA alerts and exposing them to other sources
- A middle-tier alert processing proxy layer that collects, processes, and enriches alerts and dispatches enhanced alerts to the Service Console

This section describes how to implement CA Event Integration for use with CA Spectrum SA and how to configure and deploy common use cases.

Important! The information in this section applies when you install CA Event Integration separately from CA Spectrum SA. When you install CA Event Integration as a component from the CA Spectrum SA installation image, pre-configuration occurs, and configuration requirements change. For more information, see the CA Spectrum SA documentation.



CA Spectrum SA Implementation and Configuration


To implement CA Event Integration in your CA Spectrum SA environment, you must install connectors to collect alerts from and send alerts to the CA Spectrum SA IFW bus. Install a CA Event Integration connector for each SA Manager server in your environment. The CA Event Integration connector can reside anywhere on the network.

The SA Manager server contains the IFW bus that transmits data from the CA Spectrum SA connectors to the SA Manager for analysis and display in the Service Console. The CA Event Integration connector subscribes to the IFW bus to listen for and collect alerts and publishes to the IFW bus to dispatch alerts to the CA Spectrum SA Service Console.

In addition to collecting and dispatching CA Spectrum SA alerts, CA Event Integration can serve as a middle-tier alert processing proxy connector for CA Spectrum SA. In this role, CA Event Integration intercepts alerts from the CA Spectrum SA connectors; applies consistent processing, such as filtering, consolidation, and enrichment; and dispatches to the SA Manager. You have the flexibility to implement CA Event Integration in your CA Spectrum SA environment based on the role you want it to play.

Note: For more information about use cases and tiered deployments, see CA Spectrum SA Usage and CA Spectrum SA Deployment Scenarios.

You can implement CA Event Integration with CA Spectrum SA in one of the following two ways:

Installation from the CA Spectrum SA media
Lets you install the CA Event Integration components already configured to process and enrich CA Spectrum SA alerts and send events from other sources to CA Spectrum SA. When you install Event Enrichment and the Event connector from the CA Spectrum SA installation image, the CA Event Integration components are pre-configured to integrate with CA Spectrum SA. This is the recommended method for implementing with CA Spectrum SA. For more information about installing Event Enrichment and the Event connector with CA Spectrum SA, see the CA Spectrum SA Implementation Guide and Connector Guide.

Standalone installation
Installs CA Event Integration separately from CA Spectrum SA. When you install CA Event Integration in a standalone operation, you must manually configure the CA Spectrum SA integration as described in CA Spectrum SA Configuration Requirements (see page 91).

CA Spectrum SA Configuration Requirements


No manual configuration is required in CA Spectrum SA for a basic CA Event Integration implementation. As long as CA Spectrum SA is implemented and working correctly, CA Event Integration can collect alerts from and dispatch alerts to the IFW bus.

Chapter 3: Integrations and Deployment Scenarios 91

Integrating with CA Spectrum SA

When you install CA Event Integration separately from CA Spectrum SA, manual configuration is required when CA Event Integration plays the role of a middle-tier proxy connector. In this case, you must edit the CA Spectrum SA IFW bus topic to which CA Spectrum SA subscribes, so that CA Event Integration intercepts alerts from the standard topic and sends processed alerts to the topic on which the SA Manager is listening. This configuration is necessary so that CA Spectrum SA does not see the original alert until it has been processed by the CA Event Integration middle-tier proxy connector. For more information, see How to Configure CA Spectrum SA Proxy Connector.

Note that CA Spectrum SA ignores alerts that are not associated with a service model. Therefore, if you want to send events from other sources, you must make sure that service models exist in CA Spectrum SA that include configuration items corresponding to these event sources. For more information about creating service models, see the CA Spectrum SA documentation.

Configure CA Spectrum SA Policy


To enable alarm collection from and alarm dispatching to CA Spectrum SA, you must define the following information in CA Event Integration:

- SA Manager host name and messaging port
- SA Administrator credentials

You define these attributes in the sam-src.xml policy file for collecting alerts using the CA Spectrum SA source adaptor and the sam-dest.xml policy file for dispatching alerts using the destination adaptor. You must configure these policy attributes from the administrative interface before deploying either policy file in a catalog.

Note: If you install CA Event Integration as a component of CA Spectrum SA, the necessary policy files are pre-configured.

To configure CA Spectrum SA policy

1. Access the administrative interface on the CA Event Integration manager server.
   The Dashboard tab opens by default.
2. Click the Policies tab.
   The View Policies page opens.

92 Product Guide


3. Click the link on each of these files in separate operations:
   - sam-src.xml
   - sam-dest.xml
   The Policy Configuration: SAMAdapter page opens for each file.
4. Complete the following fields on each Policy Configuration page and click Save:
   hostin, hostout
     Specifies the SA Manager host from which to collect alerts (hostin) and to which to dispatch alerts (hostout).
   portin, portout
     Specifies the port number for TCP communication with the SA Manager.
     Default: 61616
   userin, userout
     Specifies the CA Spectrum SA administrator user name for connecting to the CA Spectrum SA IFW bus on the SA Manager.
   passwdin, passwdout
     Specifies the password associated with the specified user name.
   Note: Other attributes are populated with default values that you do not have to edit to integrate with CA Spectrum SA. For more information about all policy attributes, see CA Spectrum SA Policy Configuration (see page 139).

All required information is defined.
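For orientation, the collection-side values might look like the following sketch of a sam-src.xml fragment. This is a hypothetical illustration: the field names (hostin, portin, userin, passwdin) are the documented policy attributes, but the surrounding XML layout, host name, and user name are invented, and you normally set these values through the administrative interface rather than by editing the file.

```xml
<!-- Hypothetical sketch of configured sam-src.xml connection attributes.
     Attribute names are the documented fields; the element layout and
     sample values are illustrative only. -->
<properties>
  <property name="hostin"   value="sa-manager01.example.com"/> <!-- SA Manager host -->
  <property name="portin"   value="61616"/>                    <!-- default TCP messaging port -->
  <property name="userin"   value="samadmin"/>                 <!-- CA Spectrum SA administrator -->
  <property name="passwdin" value="********"/>
</properties>
```

The dispatch-side file, sam-dest.xml, carries the parallel hostout, portout, userout, and passwdout fields.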

Configure CA Spectrum SA CI Creation and Association


Incoming alerts from CA Event Integration must associate with a CI in a service model in CA Spectrum SA to appear in the Service Console. The Event connector creates CIs based on alert information that you can add to service models, or associates the alert with an appropriate existing CI. You can control the granularity of this CI creation and association in CA Event Integration to match the granularity of your modeled services and ensure that CA Event Integration alerts are considered appropriately in service impact analysis.

CA Event Integration can create CIs or associate alerts with existing CIs at either of the following levels:

Device
Creates CIs based on the alert's top-level device. For example, if an alert is associated with a specific process or memory usage, CA Event Integration creates an associated CI for the server that contains the process or memory. This is the default CI granularity.


Subdevice
Creates CIs based on the specific objects with which alerts are associated. This more granular method creates CIs based on the specific class association of the alert's subdevice. For example, the connector would create a CI for a CPU for an alert associated with high CPU usage.

Properly configuring the CI granularity level is vital to ensure that CA Event Integration alerts are accurately evaluated in CA Spectrum SA. For example, if your service models in CA Spectrum SA contain only high-level CIs such as servers and network devices and CA Event Integration is configured to associate alerts at the subdevice level, alerts for specific CIs (such as services or users) within those high-level CIs will be associated with the specific CIs, and therefore not be considered as impacting the service. Conversely, if your services are granularly modeled in CA Spectrum SA but CA Event Integration is configured to associate alerts at the device level, alerts associated with specific CIs will be associated with device-level CIs, making it more difficult to discover the root cause of service degradation.

To configure CA Spectrum SA CI creation and association

1. Access the CA Event Integration administrative interface and click the Policies tab.
   The View Policies page opens.
2. Click sam-dest.xml.
   The Policy Configuration: SAMAdapter page opens.
3. Set the reconcile_level field to the appropriate granularity level, device or subdevice, and click Save. By default, the granularity level is device.
   The policy is configured.
4. Re-deploy any currently deployed catalog with this policy assigned for the policy change to take effect.
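If you prefer to check the setting in the policy file itself, the granularity field might look like the following sketch. The reconcile_level name and the device/subdevice values are documented; the surrounding XML layout is an assumption.

```xml
<!-- Sketch only: the reconcile_level field and its two documented values.
     The actual element structure in sam-dest.xml may differ. -->
<property name="reconcile_level" value="subdevice"/> <!-- or "device" (the default) -->
```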

CA Spectrum SA Usage
CA Event Integration communicates with CA Spectrum SA through the IFW bus that contains all CA Spectrum SA messaging traffic. CA Event Integration connectors can collect infrastructure alerts for core processing and exposure to other destinations, perform advanced processing operations on alerts, and send events from other sources to CA Spectrum SA to broaden the product's service management capabilities.


CA Event Integration can play the following roles with CA Spectrum SA:

Advanced CA Spectrum SA connector
A CA Event Integration connector can function like any other CA Spectrum SA connector by publishing alerts to the IFW bus. CA Event Integration can integrate SNMP traps, Windows Event Log events, application log files, and web services events into CA Spectrum SA to broaden the scope of service management by introducing data from these low-level sources. For example, a Windows Event Log event may warn you of a Windows service stoppage that affects the quality of a key modeled service. Dispatching this event to the Service Console lets CA Spectrum SA derive the service impact of the stoppage and generate service impact alerts accordingly. Sending SNMP traps lets you integrate with any product that uses SNMP-based communication, such as the CA SystemEDGE agent. CA Event Integration also integrates with domain managers whose events CA Spectrum SA does not collect. For example, you can use CA Event Integration to dispatch CA NSM events, because the CA NSM connector only provides state change and WorldView information, not raw events. Other domains include the mainframe.

Alert router
CA Event Integration can collect infrastructure alerts from the CA Spectrum SA IFW bus and send them to other management destinations. Infrastructure alerts in CA Spectrum SA are the alarms that originate from the CA Spectrum SA domain connectors, such as CA CMDB, CA eHealth, CA Wily Introscope, and so on. Collecting infrastructure alerts exposes CA Event Integration to this data from other domain managers and enables dispatching to another domain manager for analysis in a different context.

Middle-tier proxy connector
CA Event Integration can serve as a CA Spectrum SA middle-tier proxy connector. In a three-tiered event architecture, the domain managers (such as CA NSM, CA Spectrum, and so on) generate events at the lowest level. These events are collected and forwarded by corresponding CA Spectrum SA and CA Event Integration connectors. A middle (proxy) tier provides centralized, scalable processing of these alerts, such as consolidation, filtering, and enrichment. A CA Event Integration connector with configured CA Spectrum SA sources and destinations can serve in this proxy role. The top tier consumes and displays the processed alerts in the context of modeled services. The SA Manager serves in this top-tier role. You can configure CA Event Integration to intercept alerts from the IFW bus before they are collected by CA Spectrum SA. For more information, see How to Configure CA Spectrum SA Proxy Connector.


The proxy connector use case also supports a tiered CA Event Integration connector implementation. For example, you can use multiple connectors to collect events and alarms from various sources on multiple servers, including CA Spectrum SA, CA NSM, log files, traps, and so on. These connectors can perform basic processing and filtering and subsequently forward the events to another CA Event Integration connector, which performs higher level processing such as enrichment before dispatching the enhanced events to CA Spectrum SA. For more information, see Tiered CA Event Integration Architecture.

How to Start Processing CA Spectrum SA Alerts


Collecting CA Spectrum SA alarms requires only installation and basic policy configuration. Use the following high-level process to verify that you have completed all tasks necessary to start collecting and processing CA Spectrum SA alarms and dispatching processed alarms from other sources to CA Spectrum SA.

Notes:

- The following process does not consider advanced tasks such as enrichment or tiering the CA Event Integration architecture.
- If you install CA Event Integration as a component of CA Spectrum SA, default pre-configuration occurs. However, you can follow this process if you need to add sources or enrichments, or change any other aspect of the pre-configured settings.

1. Install the manager and connectors to integrate with your CA Spectrum SA environment. For more information, see the chapter "Installation."
2. (Optional) Ensure that service models exist that contain configuration items corresponding to any event sources that you want to dispatch to the Alarm Console.
   Note: If CIs do not exist for the objects from which CA Event Integration is forwarding alarms, they should be created automatically in CA Spectrum SA once the alarms begin flowing. In this case, you may need to add new CIs to service models after you begin dispatching alarms to CA Spectrum SA.
3. Log in to the administrative interface (see page 124).
4. Configure CA Spectrum SA policy files (see page 92).
5. Create a catalog (see page 158) with these policies assigned.
   Note: You assemble all sources, destinations, and enrichments in a catalog. For more information about sources and destinations that you can use in a catalog, see Other Integrations. For more information about configuring enrichments, see CA Spectrum SA Enrichment. For more information about example catalog configurations, see CA Spectrum SA Deployment Scenarios.
6. Assign (see page 162) and deploy (see page 163) the catalog on the appropriate connector.


After you deploy a catalog with the appropriate configuration, CA Event Integration should begin collecting events from defined sources, processing and enriching events according to assigned policy, and dispatching events to defined destinations. For more information about working with catalogs and connectors, see the chapter "Configuration and Administration."

How to Configure CA Spectrum SA Proxy Connector


When you use CA Event Integration as a CA Spectrum SA middle-tier proxy connector, you can configure the connector to intercept alerts from the IFW bus before the SA Manager collects and displays the alerts in the Service Console. Collecting and processing alerts before they reach the SA Manager prevents a situation where alerts processed by CA Event Integration appear as duplicates of alerts that are already in the Service Console.

Configure this proxy connector architecture by configuring event topics in the IFW bus. By default, the SA Manager and CA Event Integration collect alerts by subscribing to the default event topic to which CA Spectrum SA connectors publish alerts. You can edit this configuration so that the SA Manager collects alerts from a new topic to which CA Event Integration publishes its processed alerts.

Note: This process is not required if you install CA Event Integration as a component of CA Spectrum SA.

Complete the following process to configure CA Event Integration as a CA Spectrum SA proxy connector:

1. Open the SA_HOME\tomcat\common\classes\jmsconnect.properties file on the SA Manager and change the topic.event line as follows:

   topic.event=CA_IFW_XEVENT_TOPIC

   CA_IFW_XEVENT_TOPIC represents the topic to which the SA Manager will listen for published alerts. This topic can have any name, as long as you name it identically in both products.
2. Access the CA Event Integration administrative interface and click the Policies tab.
3. Set the topicin field of the sam-src.xml file to CA_IFW_EVENT_TOPIC. This setting configures the CA Event Integration connector to collect alerts from the default topic to which CA Spectrum SA connectors publish alerts.
4. Set the topicout field of the sam-dest.xml file to the same topic that you entered in the SA Manager properties file (CA_IFW_XEVENT_TOPIC in the example above). This setting configures the CA Event Integration connector to publish alerts to the IFW topic to which the SA Manager subscribes.

Note: For a complete deployment scenario using a proxy connector configuration, see CA Spectrum SA Deployment Scenarios.
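As a quick sanity check, the topic handoff can be summarized as follows. Only the topic.event line is a literal file entry; the topicin/topicout values are set in the policy files through the administrative interface and are shown here as comments for comparison.

```properties
# SA Manager side (SA_HOME\tomcat\common\classes\jmsconnect.properties):
# the SA Manager now listens on the renamed topic.
topic.event=CA_IFW_XEVENT_TOPIC

# CA Event Integration side (set in the administrative interface):
#   sam-src.xml  topicin  = CA_IFW_EVENT_TOPIC   <- default topic the connectors publish to
#   sam-dest.xml topicout = CA_IFW_XEVENT_TOPIC  <- topic the SA Manager now reads
```

If the two CA_IFW_XEVENT_TOPIC names do not match exactly, the SA Manager receives nothing and the processed alerts accumulate unread on the bus.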


By default, CA Event Integration maintains a durable subscription to the CA Spectrum SA ActiveMQ Server. To switch to a nondurable subscription, change the durable_subscription property in the EI_HOME\config\emaa.properties file to off. If you disable the CA Spectrum SA integration, you should unsubscribe from the ActiveMQ Server to keep it from holding data indefinitely for CA Event Integration. To do so, change the host_unsubscribe property in the EI_HOME\config\emaa.properties file to on.
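The two subscription properties described above live in the same file and can be checked together. The following sketch shows only those two lines; the comments are added here and the rest of emaa.properties is omitted.

```properties
# EI_HOME\config\emaa.properties (fragment)
durable_subscription=off   # default is a durable subscription; off switches to nondurable
host_unsubscribe=on        # set on when disabling the integration, so the
                           # ActiveMQ server stops holding data for CA Event Integration
```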

CA Spectrum SA Enrichment
The CA Event Integration enrichment capability lets you enrich CA Spectrum SA alerts with supplemental information that did not originally appear in the alert. Enrichment can add value to alerts by providing information that enables more efficient administration and resolution. Some example enrichment use cases with CA Spectrum SA are as follows:

- Enriching alerts with contact or other supplemental information from a database using the provided custom database enrichment module
- Enriching alerts with a URL that enters alert information into an Internet search engine
- Enriching alerts with CA CMDB and CA NSM WorldView information using the provided CA CMDB and CA NSM enrichment modules
- Enriching alerts from CA Spectrum or other sources with CA Spectrum model attributes
- Creating custom enrichment modules to enrich alerts with information from any outside source

You can place enrichment data in the User Attribute fields of the destination alert. To place enrichment data in a different destination alert field, you must configure the enrichment data destination manually in the enrichment XML file. For more information about creating and customizing enrichment policy, see the appendix "Writing and Customizing Policy."

How to Configure CA Spectrum SA Enrichment


You can enrich alerts with information from custom databases, CA CMDB, CA NSM WorldView (Windows only), CA Spectrum model attributes, and an Internet search URL, and create custom enrichment modules. Enrichment values can appear in the alert User Attribute fields. Configuring an alert enrichment requires that you configure the enrichment source, assign the enrichment a variable, and assign that variable to a specific location in the destination alert. To complete these tasks, edit policy attributes in the administrative interface.


Complete the following process to configure a basic alert enrichment:

Note: The following process only covers basic enrichment configuration; to deploy a configured enrichment, you must create a catalog with the appropriate policy and deploy the catalog on connectors.

1. Verify that the integration with CA Spectrum SA is properly configured. For more information, see CA Spectrum SA Implementation and Configuration.
2. Select and prepare the enrichment source. If you are using a custom database, verify that it exists and is properly configured with current information.
3. Open the administrative interface and click the Policies tab.
   The View Policies page opens.
4. Click the policy file for the enrichment source to use. If you are using a custom database, click the file that corresponds to your database type (SQL Server, Oracle, or MySql).
   The Policy Configuration page opens for the selected file.
5. Configure the connection settings for the enrichment source (if using a custom database, WorldView, or CA CMDB).
6. Edit the enrichment query to extract the appropriate information. For custom databases, the default query in the singleresultquery field is as follows:

   select contact from Table1 where hostname=?

   This query denotes that you are extracting data from the contact column in Table1 where the hostname column equals the alert resource address. Construct an enrichment query for your database using the same conventions.
   Note: If you want the enrichment to key off of an alert value other than the default resource address, you must edit the XML policy file directly. You must also edit the XML policy file directly if you want to construct a complex query returning multiple enrichment values. For more information, see Manual CA Spectrum SA Enrichment Configuration.
7. Click Save on the upper table.
   The connection settings and query are configured.
8. Enter a property value (or use the default) and select an enrichment variable for the enrichment data in the following fields, then click Save on the Assignment table:
   tagname
     Specifies the XML property used to insert the returned enrichment value in the destination alert.
   assigned_to_enrichment_variable
     Assigns the defined enrichment property to a variable that you can reference in destination policy.
9. Return to the View Policies page and click sam-dest.xml.
   The Policy Configuration: SAMAdapter page opens.
   Note: Verify that the basic connection attributes (hostout, portout, userout) are properly configured before you configure enrichment attributes.
10. Do the following to configure the enrichment value and enrichment destination:
    - Select the enrichment variable that you defined in the enrichment policy file in the enrichment_variable field.
    - Select the CA Spectrum SA destination property to which to assign the enrichment data in the assigned_to_samtag field.
    The enrichment data will appear in the destination alert in the location represented by the selected property. When you select sam_userAttribute1, the enriched data will appear in the first User Attribute field when you select the alert in the Service Console.
11. Save the Assignment table.
    The enrichment is configured. If you have already deployed a catalog with this policy, you must re-deploy the catalog for the enrichment to occur.
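As an illustration of the single-result query convention described in step 6, a lookup against a hypothetical assets database might look like the following. The table and column names (asset_owners, oncall_email, hostname) are invented for illustration; only the one-column-result, one-placeholder shape is the documented requirement.

```sql
-- Hypothetical example following the singleresultquery convention:
-- the ? placeholder is bound to the alert resource address at run time,
-- and the query must return a single enrichment column.
select oncall_email from asset_owners where hostname = ?
```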

Manual CA Spectrum SA Enrichment Configuration


Although basic enrichments are fully configurable by editing enrichment and destination policy attributes in the administrative interface, the following scenarios require you to customize the XML policy files directly:

- You want the enrichment to result from an alert value other than the default internal_resourceaddr.
  For example, assume that you want to use a custom database to enrich alerts based on each alert's originating platform. In this case, you must edit the enrichment policy XML file to respond to this value. To do so, open the enrichment policy file from EI_HOME\Manager\PolicyStore\enrichments and find the <Enrich> section of the file. In the <Enrich> section, find the line that contains the following attribute:

  input="internal_resourceaddr"

  This input value controls which internal event property value to use to gather information from an enrichment source. You must enter a valid internal event property, such as internal_resourceplatform for the scenario above, for this attribute.


Note: The Internet search URL uses properties other than internal_resourceaddr. For more information, see Internet Search Enrichment Policy Configuration (see page 147). For a list of internal event properties, see Internal Event Properties.

- You want to add the enrichment data to a destination alert attribute that is not listed in the administrative interface.
  The only available attributes for CA Spectrum SA enrichment data from the administrative interface are the custom User Attribute fields. For example, assume that you want to append enrichment data to an alert resource ID. To add the enrichment to an alert attribute that is not listed in the administrative interface, open the enrichment policy file from EI_HOME\Manager\PolicyStore\enrichments and find the <Format> section of the file. In the <Format> section, find the following line:

  <Field format="{0}" input="internal_reportingagent,userdetail_0" output="internal_reportingagent"/>

  Change the output attribute value to the destination event property to which to add the enrichment data (sam_resourceID in the scenario above). The input attribute lets you specify whether to append the enrichment data to the existing data or replace the existing data with the enrichment. To append the enrichment data to the existing data, separate the property from the enrichment property with a comma, as shown above. To replace the existing data with the enrichment property, include only the enrichment property for the input attribute.

- You want to create a complex query that returns multiple enrichment values.
  For example, assume that you want to enrich the sam_userAttribute1 property with a first name, last name, and email address, which you extract from separate columns in a custom database. To do so, you must customize the XML enrichment policy file directly to enter a complex, multi-value query and format how each value appears in the destination event. For more information about writing complex enrichment policy, see Enrich Operation in the appendix "Writing and Customizing Policy."
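Putting the first two manual-edit scenarios together, the hand-edited lines might look like the following sketch. The surrounding policy elements are omitted, and this reflects one reading of the instructions above: internal_resourceplatform and sam_resourceID are the example values from those scenarios, and the exact input property to keep when appending may differ in your policy file.

```xml
<!-- Sketch of the two manual edits described above; surrounding policy
     structure omitted and values are the examples from the scenarios. -->

<!-- In the <Enrich> section: key the lookup off the originating platform
     instead of the default resource address. -->
input="internal_resourceplatform"

<!-- In the <Format> section: direct the enrichment value (userdetail_0)
     to the resource ID, appending it to the existing value via the
     comma-separated input list. -->
<Field format="{0}" input="internal_reportingagent,userdetail_0" output="sam_resourceID"/>
```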

CA Spectrum SA Deployment Scenarios


Deploying CA Event Integration with CA Spectrum SA can provide the following benefits:

- Exposure to event data from low-level sources and from domain managers for which CA Spectrum SA does not have a connector
- Advanced alert processing such as enrichment, filtering, and consolidation

After you install the manager on your management server and connectors on all servers required to integrate with your CA Spectrum SA environment, you must create and deploy the appropriate configuration to all connectors.


The following section details these CA Spectrum SA-specific deployment scenarios:


- Sending Windows Event Log events to CA Spectrum SA. This scenario makes Windows system data available for service association in CA Spectrum SA.
- Collecting CA Spectrum SA alerts, performing advanced processing in CA Event Integration, including enrichment and consolidation, and dispatching the enhanced alerts to CA Spectrum SA. This scenario uses CA Event Integration as a proxy connector for enhancing CA Spectrum SA infrastructure alerts.

Note: Scenario One uses a Windows-only adaptor (Windows Event Log) and is therefore only applicable when using a Windows connector. For scenarios that apply when you install CA Event Integration as a component of CA Spectrum SA, see the CA Spectrum SA Implementation Guide and Connector Guide.

Scenario One: Windows Event Log to CA Spectrum SA


CA Event Integration can add value to CA Spectrum SA by providing access to event data from low-level sources for which CA Spectrum SA does not have a connector. CA Spectrum SA collects data for service analysis from CA Spectrum SA connectors. Connectors are not provided for low-level event sources such as system logs, text log files, SNMP traps, and so on. CA Event Integration can serve as an advanced domain connector for CA Spectrum SA to expose data from any of these low-level sources for service analysis.

In this scenario, you create and deploy a catalog that collects events from the Windows system, security, and application logs and dispatches them to CA Spectrum SA as infrastructure alerts. This scenario exposes Windows event data to CA Spectrum SA to aid in service impact analysis. CA Spectrum SA can use Windows event data, such as service stoppages, application failures, and security breaches, to better represent the status of a service and repair service degradation if the service depends on the Windows operating system.

Note: This scenario applies when installing CA Event Integration separately from CA Spectrum SA. Configuration requirements change when installing the Event connector from the CA Spectrum SA installation media.

To implement Scenario One: Windows Event Log to CA Spectrum SA

1. Access the administrative interface on the manager server and click View Policies on the Dashboard tab.
   The View Policies page appears. All CA Spectrum SA policy files require basic configuration before you can use them in a catalog.
2. Click sam-dest.xml.
   The Policy Configuration: SAMAdapter page opens.


3. Do the following and click Save:
   - Enter the CA Spectrum SA ActiveMQ server host and TCP communication port in the hostout and portout fields. This information is required to connect to the IFW bus.
   - Enter the CA Spectrum SA administrator user credentials in the userout and passwdout fields.
   - Specify whether to reconcile alerts at the device or subdevice level in the reconcile_level field.
   All necessary policy attributes are configured.
4. Click the Catalogs tab and click New Catalog.
   The New Catalog wizard opens.
5. Create a catalog (see page 158) that contains the following policies:
   - Source: syslog-src.xml
   - Destination: sam-dest.xml
   Name the catalog on the Save page and click Finish.
   A dialog prompts you to specify whether to assign the catalog to connectors.
6. Click OK.
   The Assign Catalog wizard opens. The catalog you created is selected by default on the first wizard page.
7. Click Next.
   The Select Connectors page opens.
8. Select the connectors to which you want to assign the catalog in the Available Connectors pane, and click the button to move them to the Selected Connectors pane. Click Next when you finish.
   The Confirm page opens.
9. Verify the information on the Confirm page, and click Finish.
   A dialog prompts you to specify whether to deploy the catalog on the assigned connectors.
10. Click OK.
    The catalog is deployed, and the connectors begin enacting the catalog policy on their servers. Access the Connectors tab to view the deployment status for each connector.


Note: You can also assign and deploy a catalog in separate operations instead of immediately after creating a catalog. For more information, see the chapter "Configuration and Administration."

11. Generate Windows Event Log events. Do this by stopping or starting services, simulating an authentication failure, and so on.
    CA Event Integration collects the Windows Event Log events and publishes them to the CA Spectrum SA IFW bus. CA Spectrum SA creates a CI for the server on which the events are occurring and specific CIs for entities within the server affected by the collected alerts (such as Windows services).
12. Open the CA Spectrum SA Service Console and either create a new service or edit an existing service in which you want the Windows Event Log data to appear.
    The Service Modeler opens.
13. Add the CI for the Windows Event Log server and any specific CIs to the service and save the service.
    Note: You can add relationships, service impact, and escalation policy to the CIs within the service to control how Windows Event Log data impacts the service.

When Windows Event Log events occur, they are associated with the service to which you added the CIs, and they appear in that service's Alerts tab in the Service Console. Alerts are cleared when their condition disappears (for example, a stopped service that is restarted).

Scenario Two: CA Spectrum SA to CA Spectrum SA with Enrichment and Consolidation


CA Event Integration can serve as a CA Spectrum SA proxy connector that performs advanced processing on all infrastructure alerts. Infrastructure alerts are messages sent from CA Spectrum SA connectors that signify a domain manager event or condition that could impact a service (for example, CA NSM state change events or a CA Spectrum alarm). These alerts may be missing important information or require further processing to establish a quality set of actionable alerts. For example, if CA Spectrum SA is receiving alerts from multiple connectors that are managing the same network from different perspectives (for example, CA Spectrum, CA eHealth, and CA NSM), multiple domain managers may report the same alarm condition, causing alert duplication in the Service Console and increased administrative overhead.

This scenario configures CA Event Integration as a CA Spectrum SA proxy connector. You create and configure a catalog that intercepts CA Spectrum SA infrastructure alerts before they are collected by the SA Manager and performs advanced processing on the alerts. The scenario enriches alerts with an Internet search URL and consolidates alerts based on resource address, class, and instance. The enrichment data provides a mechanism for using an Internet search engine (or an enterprise-specific knowledge base) to find additional information about the alert condition. The consolidation prevents alert duplication and adds a count attribute to the consolidated alert in the Service Console.


Note: This scenario applies when installing CA Event Integration separately from CA Spectrum SA. Configuration requirements change when installing the Event Enrichment feature from the CA Spectrum SA installation media. To implement Scenario Two: CA Spectrum SA to CA Spectrum SA with enrichment and consolidation 1. Open the SA_HOME\tomcat\common\classes\jmsconnect.properties file on the SA Manager and change the topic.event line as follows:
topic.event=CA_IFW_XEVENT_TOPIC

CA_IFW_XEVENT_TOPIC represents the topic to which processed alerts will be published. The SA Manager will collect alerts from this topic. The topic can have any name, as long as you name it identically in both products. 2. Access the CA Event Integration administrative interface on the manager server and click View Policies on the Dashboard tab. The View Policies page opens. CA Spectrum SA, enrichment, and consolidation policy files require basic configuration before you can use them in a catalog. 3. Click sam-src.xml. The Policy Configuration: SAMAdapter page opens. 4. Do the following and click Save:

Enter the CA Spectrum SA ActiveMQ host name and TCP communication port in the hostin and portin fields. This information is required to connect to the IFW bus. Enter the CA Spectrum SA administrator user credentials in the userin and passwdin fields.

CA Spectrum SA source policy is configured.

5. Return to the View Policies page and click default-consolidate.xml.
The Policy Configuration: Consolidate page opens.
6. Do the following and click Save:

- Select resourceclass/resourceinstance/resourceaddr in the field drop-down list. This selection specifies to consolidate events when all three properties are duplicates. For example, this selection consolidates events with the same IP address, class (such as Application), and instance name.
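Conceptually, consolidation on these three properties resembles the following sketch. The field names follow the policy; the actual consolidation engine is internal to CA Event Integration, so this is only an illustration of the behavior:

```python
from collections import OrderedDict

def consolidate(alerts):
    """Collapse alerts that share resourceclass, resourceinstance, and
    resourceaddr into a single alert carrying a count attribute."""
    merged = OrderedDict()
    for alert in alerts:
        key = (alert["resourceclass"], alert["resourceinstance"], alert["resourceaddr"])
        if key in merged:
            merged[key]["count"] += 1           # duplicate: bump the count
        else:
            merged[key] = dict(alert, count=1)  # first occurrence: keep it
    return list(merged.values())

# Two domain managers report the same condition on the same resource:
alerts = [
    {"resourceclass": "Application", "resourceinstance": "db01", "resourceaddr": "10.0.0.5", "msg": "down"},
    {"resourceclass": "Application", "resourceinstance": "db01", "resourceaddr": "10.0.0.5", "msg": "down"},
    {"resourceclass": "Application", "resourceinstance": "web01", "resourceaddr": "10.0.0.6", "msg": "down"},
]
print(len(consolidate(alerts)))  # 2 alerts remain; the first carries count=2
```

In the Service Console, the count attribute of the surviving alert reflects how many duplicates were merged into it.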

Consolidation policy is configured.

7. Return to the View Policies page and click internet-enrich.xml.
The Policy Configuration: InternetEnrich page opens.


8. Do the following:

- Use the default URL in the searchurl field. This URL creates a Google search using the internal_alarmid and internal_resourceinstance values. A Google search may provide useful information about common events. If enriching events from a specific domain manager, you can use this enrichment to search an internal knowledge base for additional information.
- Select ev_1 in the assigned_to_enrichment variable field and click Save on the InternetEnrich_Assignment table. This selection inserts the search URL into a variable that you can reference in destination policy.
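The exact default URL is defined in internet-enrich.xml; the sketch below only illustrates how a search URL can be assembled from the internal_alarmid and internal_resourceinstance values (the Google query format here is an assumption for illustration):

```python
from urllib.parse import urlencode

def build_search_url(alarm):
    # Hypothetical helper: joins the two alarm properties that the default
    # policy uses into a single Google query string.
    terms = "{} {}".format(alarm["internal_alarmid"], alarm["internal_resourceinstance"])
    return "http://www.google.com/search?" + urlencode({"q": terms})

url = build_search_url({"internal_alarmid": "0x10009", "internal_resourceinstance": "router7"})
print(url)  # http://www.google.com/search?q=0x10009+router7
```

An enterprise could substitute its own knowledge-base URL for the Google prefix without changing the rest of the pattern.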

The enrichment policy is configured.

9. Return to the View Policies page and click sam-dest.xml.
The Policy Configuration: SAMAdapter page opens.
10. Do the following and click Save on the upper table:

- Complete the hostout, portout, userout, and passwdout fields to match the values you entered for sam-src.xml.
- Specify whether to reconcile alerts at the device or subdevice level in the reconcile_level field.
- Enter CA_IFW_XEVENT_TOPIC in the topicout field to publish alerts to this topic on the CA Spectrum SA IFW bus, where you configured CA Spectrum SA to listen for alerts in Step 1.

11. Complete the following fields in the Policy Configuration: SAMAdapter_Assignment table and click Save:

enrichment_variable
Specifies the enrichment variable to assign to alerts. Select the variable that you assigned to the enrichment in Step 8.

assigned_to_samtag
Specifies where in the destination alert to insert the enrichment data represented by the variable. Select sam_userAttribute1 to insert the enrichment data in the first custom User Attribute property of the destination alert.

All policies are configured.

Note: From the administrative interface, you can effectively configure simple, single-value enrichments. If you want to create a complex multi-value enrichment or enrich based on a different alarm property, you must edit the XML policy files directly. For more information, see Manual CA Spectrum SA Enrichment Configuration.

12. Click the Catalogs tab and click New Catalog.
The New Catalog wizard opens.


13. Create a catalog (see page 158) that contains the following policies:
Source: sam-src.xml
Destination: sam-dest.xml
Enrichment: internet-enrich.xml, default-consolidate.xml
Name the catalog on the Save page and click Finish.
A dialog prompts you to specify whether to assign the catalog to connectors.
14. Click OK.
The Assign Catalog wizard opens. The catalog you created is selected by default on the first wizard page.
15. Click Next.
The Select Connectors page opens.
16. Select the connectors to which you want to assign the catalog in the Available Connectors pane, and click the arrow button to move them to the Selected Connectors pane. Click Next when you finish.
The Confirm page opens.
17. Verify the information on the Confirm page, and click Finish.
A dialog prompts you to specify whether to deploy the catalog on the assigned connectors.
18. Click OK.
The catalog is deployed, and the connectors begin enacting the catalog policy on their servers. Access the Connectors tab to view the deployment status for each connector.
Note: You can also assign and deploy a catalog in separate operations instead of immediately after creating a catalog. For more information, see the chapter "Configuration and Administration."
The CA Event Integration connector should collect infrastructure alerts from the CA_IFW_EVENT_TOPIC on the CA Spectrum SA IFW bus before the SA Manager collects them. After processing, enrichment, and consolidation, CA Event Integration publishes the alerts to the CA_IFW_XEVENT_TOPIC, where they are collected by the SA Manager and displayed on the Service Console.
19. Open the Service Console and select the Alerts tab for a service.
The User Attribute field should contain the search URL, and alerts that were consolidated should include a count attribute. You may need to manually display the User Attribute field. For more information, see the CA Spectrum SA documentation.


Integrating with Mainframe Products


CA Event Integration can collect information through SNMP traps from the following CA Technologies mainframe products:

CA OPS/MVS Event Management and Automation (CA OPS/MVS EMA)
CA OPS/MVS EMA is a mainframe event management tool that acts as a console for mainframe events and lets you write event automation rules. CA Event Integration can collect traps from CA OPS/MVS EMA representing batch jobs and alarms and dispatch them to other management systems. The primary use case for this integration is dispatching the mainframe information to CA Spectrum SA, where Server class CIs appear in the Service Console representing batch jobs, and alerts for a batch job associate with its CI.

CA SYSVIEW Performance Management (CA SYSVIEW PM)
CA SYSVIEW PM is a mainframe performance management tool that lets you monitor and manage mainframe performance metrics such as system activity, CPU usage, and transaction details. CA Event Integration can collect traps from CA SYSVIEW PM that represent threshold alerts. The primary use case for this integration is dispatching the mainframe information to CA Spectrum SA, where Application class CIs appear in the Service Console representing mainframe performance resources, and you can view how mainframe threshold alerts affect business services.

This section describes how to implement CA Event Integration with CA OPS/MVS EMA and CA SYSVIEW PM and how to configure and deploy common use cases.

Mainframe Products Implementation and Configuration


To implement CA Event Integration in your mainframe environment, install a connector to collect events and alerts through SNMP traps from the mainframe products. Because the connector collects alerts through SNMP traps, it can reside on any server to which the mainframe products can send traps. You can collect alerts from both mainframe products with one connector, as long as both products are configured to send traps to the connector.

Mainframe Products Configuration Requirements


You must perform the following configurations to enable the mainframe integrations:

- Configure a trap destination from one or both mainframe products to the CA Event Integration connector server, so that traps from the products reach the CA Event Integration connector.
- Configure the mainframe product source policy in CA Event Integration, and enter a port number in the source policy to receive all traps on the configured port.


However, to return meaningful data from the mainframe products, you must customize the mfsysview-src.xml and mfopsmvs-src.xml files. The default source policy files are a basic framework for collecting traps and perform little specialized processing. You must customize the policies to collect the traps that you are interested in and appropriately process the alerts in your environment. For more information about policy customization, see the appendix "Writing and Customizing Policy."

Configure Mainframe Products Source Policy


To enable alert collection through SNMP traps from the mainframe product, you must define the port information in the source policy of the mainframe product. You must configure the policy attribute from the CA Event Integration administrative interface before deploying the policy in a catalog. The source policies are mfopsmvs-src.xml for CA OPS/MVS EMA and mfsysview-src.xml for CA SYSVIEW PM.

To configure mainframe products source policy

1. Access the administrative interface on the CA Event Integration manager server.
The Dashboard tab appears by default.
2. Click the Policies tab.
The View Policies page opens.
3. Click the policy mfopsmvs-src.xml or mfsysview-src.xml, depending on the mainframe product.
The Policy Configuration: SNMPAdapter page opens.
4. Enter the port number you want to use for receiving all traps in the port field, then click Save.
The appropriate source policy is configured. The SNMP source sends all traps to the CA Event Integration host on this port.

Note: Other attributes in the policy are populated with default values that you do not have to edit to integrate with the mainframe product. For more information about all policy attributes, see the appropriate policy section: CA OPS/MVS EMA Policy Configuration (see page 141) or CA SYSVIEW PM Policy Configuration (see page 142).

Mainframe Deployment Scenario: CA OPS/MVS EMA to CA Spectrum SA


Collecting alarms from CA OPS/MVS EMA and dispatching them to CA Spectrum SA lets you manage mainframe events in the context of business services. When you dispatch alarms to CA Spectrum SA, you can model services with CIs created by the connector to view mainframe alerts from the CA Spectrum SA Service Console and enable functionality against these alerts such as escalation policy and enrichment.


In this scenario, you configure CA OPS/MVS EMA alarm collection through SNMP traps and dispatch the alarms to CA Spectrum SA as infrastructure alerts.

Important! This scenario applies when you install CA Event Integration separately from CA Spectrum SA. If you install CA Event Integration as the Event connector from the CA Spectrum SA installation image, certain steps in this procedure may change or no longer be required. For more information, see the CA Spectrum SA Connector Guide.

To implement the CA OPS/MVS EMA to CA Spectrum SA scenario

1. Configure a trap destination in CA OPS/MVS EMA for the CA Event Integration connector server.
2. Access the CA Event Integration administrative interface on the manager server and click View Policies on the Dashboard tab.
The View Policies page opens. CA OPS/MVS EMA source policy and CA Spectrum SA policy require basic configurations before you can use them in a catalog.
3. Click mfopsmvs-src.xml.
The Policy Configuration: SNMPAdapter page opens.
4. Enter the port number you want to use for receiving traps in the port field, then click Save.
The CA OPS/MVS EMA source policy is configured. The SNMP source sends all traps to the CA Event Integration host on this port.
5. Click Return to View Policies.
The View Policies page opens.
6. Click sam-dest.xml.
The Policy Configuration: SAMAdapter page opens.
7. Do the following and click Save:

- Enter the CA Spectrum SA ActiveMQ server host and TCP communication port in the hostout and portout fields. This information is required to connect to the IFW bus.
- Enter the CA Spectrum SA administrator user credentials in the userout and passwdout fields.
- Specify whether to reconcile alerts at the device or subdevice level in the reconcile_level field.

All necessary policy attributes are configured.

8. Click the Catalogs tab and click New Catalog.
The New Catalog wizard opens.


9. Create a catalog (see page 158) that contains the following policies:
Source: mfopsmvs-src.xml
Destination: sam-dest.xml
Name the catalog on the Save page and click Finish.
A dialog prompts you to specify whether to assign the catalog to connectors.
10. Click OK.
The Assign Catalog wizard opens. The catalog you created is selected by default on the first wizard page.
11. Click Next.
The Select Connectors page opens.
12. Select the connectors to which you want to assign the catalog in the Available Connectors pane, and click the arrow button to move them to the Selected Connectors pane. Click Next when you finish.
The Confirm page opens.
13. Verify the information on the Confirm page, and click Finish.
A dialog prompts you to specify whether to deploy the catalog on the assigned connectors.
14. Click OK.
The catalog is deployed, and the connectors begin enacting the catalog policy on their servers. Access the Connectors tab to view the deployment status for each connector.
Note: You can also assign and deploy a catalog in separate operations instead of immediately after creating a catalog. For more information, see the chapter "Configuration and Administration."
CA OPS/MVS EMA sends alarms through SNMP traps to CA Event Integration, where they are processed and dispatched to CA Spectrum SA.
15. Model a service in CA Spectrum SA with the CIs created by the CA OPS/MVS EMA alarms.
The alarms appear as infrastructure alerts in the Alerts tab of the Service Console for the associated CI.

Integrating with HP Business Availability Center


CA Event Integration can collect alerts from HP Business Availability Center (HP BAC). HP BAC optimizes the availability of applications and business services using transactions to monitor end-to-end application performance.


Common use cases with CA Event Integration are as follows:

- Collecting alerts and dispatching them to CA Spectrum SA to monitor application performance in a larger context to determine application impact on modeled services
- Collecting alerts and dispatching them to another manager, such as CA NSM or CA Spectrum, to analyze application performance in the context of these domain managers

HP BAC Implementation and Configuration


To implement CA Event Integration in your HP BAC environment, install a connector to collect alerts through SNMP traps. Because the connector collects alerts through SNMP traps, it can reside on any server to which HP BAC can send traps. The collected SNMP traps contain the following information about each alert:

- Transaction profile
- Transaction name
- Alert severity
- Cause
- Description
- User-defined message
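A sketch of how a connector might map an incoming trap's varbind values onto those named alert fields. The field names come from the list above; the varbind order and OID-to-field mapping are assumptions for illustration — in the product, that mapping lives in hpbac-src.xml:

```python
# Field names from the HP BAC trap contents described above.
HPBAC_FIELDS = (
    "transaction_profile",
    "transaction_name",
    "alert_severity",
    "cause",
    "description",
    "user_defined_message",
)

def normalize_hpbac_trap(varbind_values):
    """Map a trap's ordered varbind values onto named alert fields
    (assumes the varbinds arrive in the documented order)."""
    return dict(zip(HPBAC_FIELDS, varbind_values))

alert = normalize_hpbac_trap(
    ["profile_EU", "login_txn", "major", "threshold breached", "response > 5s", "page EU ops"]
)
print(alert["alert_severity"])  # major
```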

HP BAC Configuration Requirements


You must perform the following configurations to enable the HP BAC integration:

- Configure a trap destination from HP BAC to the CA Event Integration connector server, so that traps from the product reach the CA Event Integration connector.
- Configure the HP BAC source policy in CA Event Integration, and enter a port number in the source policy to receive all traps on the configured port.

However, to return meaningful data from HP BAC alerts, you must customize the hpbac-src.xml file. The default source policy is a basic framework for collecting alerts and performs little specialized processing. You must customize the policy to collect the alerts that you are interested in and appropriately process them in your environment. For more information about policy customization, see the appendix "Writing and Customizing Policy."


Configure HP BAC Source Policy


To enable alert collection through SNMP traps from HP BAC, you must define the port information in the HP BAC source policy hpbac-src.xml. You must configure the policy attribute from the CA Event Integration administrative interface before deploying the policy in a catalog.

To configure HP BAC source policy

1. Access the administrative interface on the CA Event Integration manager server.
The Dashboard tab appears by default.
2. Click the Policies tab.
The View Policies page opens.
3. Click hpbac-src.xml.
The Policy Configuration: SNMPAdapter page opens.
4. Enter the port number you want to use for receiving all traps in the port field, then click Save.
HP BAC source policy is configured. The SNMP source sends all traps to the CA Event Integration host on this port.

Note: Other attributes in the policy are populated with default values that you do not have to edit to integrate with HP BAC. For more information about all policy attributes, see HP BAC Policy Configuration (see page 142).

Integrating with CA Catalyst Connectors


CA Event Integration can integrate with any CA Catalyst connector to collect and process alerts that the connector retrieves from the source product and send those alerts to a destination that would not otherwise have access to the data. Integrating with CA Catalyst connector alerts can have the following benefits:

- Access to alert data from a wide range of products that you can make available to destinations other than CA Spectrum SA
- The ability to perform advanced processing such as enrichment and consolidation on CA Catalyst connector data

Some example use cases are as follows:

- Sending alerts from all CA Catalyst connectors for which you have integrated domain managers to CA Spectrum. This establishes CA Spectrum as the event manager of managers.


- Sending alerts from all systems-related domain managers (for example, IBM Tivoli, CA CMDB, and others) to CA NSM for a consolidated view of systems management events.

Either of these scenarios lets you optimize CA Catalyst connector data by taking advantage of advanced customizations you have already made in the area of alarm and event management (for CA Spectrum, advanced event rules and conditions; for CA NSM, Advanced Event Correlation rules and message and record action scripts) as well as the advanced processing capabilities provided by CA Event Integration (enrichment, filtering, consolidation, evaluation, and so on).

CA Catalyst Connector Implementation and Configuration


CA Event Integration contains the CA Catalyst connector framework necessary to run CA Catalyst connectors. To implement a CA Catalyst connector in your CA Event Integration environment, you must obtain the connector materials and have the appropriate domain manager installed with which the connector can integrate. For example, to run the CA eHealth connector in CA Event Integration, you would need the CA eHealth connector materials and a working installation of CA eHealth.

Many CA Catalyst connectors are remote; that is, you can run them remotely from the integrated domain manager. For local connectors that must be installed on the same system as the domain manager, the CA Event Integration connector must also exist on the same system as the domain manager for this integration to work.

You can send CA Catalyst connector alerts to any destination except for CA Spectrum SA. Sending connector alerts to CA Spectrum SA may cause conflicts or redundancies with existing CA Catalyst connectors running with CA Spectrum SA. Note that CA Catalyst connectors run through CA Event Integration only collect alerts from the source domain managers, not CIs or relationships.

See the following sources for more information about CA Catalyst connectors:

- The CA Spectrum SA Release Notes for a list of available CA Catalyst connectors
- The CA Spectrum SA Connector Guide for more information about the connector architecture and how to access and download CA Catalyst connectors
- The CA Catalyst Connector Guide for the specific connectors with which to integrate for connector-specific information

Note: CA Catalyst Connector Guides are available with the downloadable package for each connector.


How to Implement a CA Catalyst Connector in CA Event Integration


To implement a CA Catalyst connector in CA Event Integration, you must integrate the connector materials and policy into the CA Event Integration infrastructure. Complete the following process to implement a CA Catalyst connector in CA Event Integration:

Note: What follows is an overview of the steps required to integrate a CA Catalyst connector. This process requires a detailed understanding of the CA Event Integration and CA Catalyst architecture and may require CA Services assistance.

1. Gather the connector materials.
Most CA Catalyst connectors are downloadable from CA Support Online. Required connector materials include the connector .jar file, policy files, and any dependencies. Contact CA Services for assistance in gathering connector materials.
2. Migrate the necessary information in the connector configuration policy to the main connector policy file, which will act as the source policy within the CA Event Integration framework.
Note: For more information about the differences between CA Event Integration and CA Catalyst connector policy, see CA Catalyst Connector Policy (see page 291).
3. Customize the connector policy to collect only alerts.
4. Restart the CA Event Integration services.
5. Configure the policy attributes and connector connection information in the administrative interface.
6. Deploy a catalog with the configured policy assigned.

For an example of a complete CA Catalyst connector integration process using the CA eHealth connector, see the 'EI 2.5 Integrating UCF Compliant Connectors.doc' file provided in the EI_HOME\Docs\Tutorials directory. Contact CA Services to help complete this process for other connectors if necessary.


Tiered CA Event Integration Implementation


CA Event Integration supports a tiered connector architecture by using the CA Event Integration destination adaptor. This adaptor lets you forward processed events from one connector to another. A tiered connector architecture can provide the following benefits:

Event consolidation in large environments
In a large distributed environment, management products and event sources may be distributed across multiple instances that are managed separately. Tiered CA Event Integration connectors can provide one layer of connectors that collect events from all distributed instances and forward the events to one or more connectors, which can dispatch events received from all sources to one event destination. For example, in a large scale CA NSM environment with hundreds of Event Agents feeding into dozens of Event Managers, you can install connectors to collect alarms from each Event Agent. These connectors can forward the events to a small number of higher tiered connectors, which can consolidate the events and dispatch them to a central Event Manager to create a centralized location for enterprise CA NSM event management. A tiered connector architecture enables you to more closely mirror the distributed architecture of any management product with which you are integrating.

Tiered event processing
Creating multiple tiers of connector processing lets you separate processing operations into logical layers that correspond to the architecture of your tiered environment. For example, if the lower level connectors are collecting events from multiple sources that produce a high event volume or could produce duplicate events across sources, you can perform filtering and consolidation at this level to send a quality set of events to the higher level connector. The higher level connector can then perform advanced enrichments on the events that made it through the filters and send the optimized, enriched events to their destination.
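The division of labor between tiers can be sketched as two small stages: a lower-tier filter that reduces event volume, and a higher-tier enrichment applied to whatever is forwarded. The field names and severity scale here are hypothetical; real filtering and enrichment are defined in policy files:

```python
def tier1_filter(events, min_severity=3):
    """Lower-tier connector: drop low-severity noise before forwarding."""
    return [e for e in events if e["severity"] >= min_severity]

def tier2_enrich(events, site):
    """Higher-tier connector: enrich the surviving events before dispatch."""
    return [dict(e, site=site) for e in events]

raw = [
    {"node": "agent01", "severity": 1, "msg": "heartbeat"},
    {"node": "agent02", "severity": 4, "msg": "service down"},
]
forwarded = tier1_filter(raw)                 # only the severity-4 event survives
dispatched = tier2_enrich(forwarded, "NY-DC1")
print(len(dispatched))  # 1
```

Filtering early keeps forwarding traffic low, while the (typically more expensive) enrichment runs only on the events that matter.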


How to Configure Tiered Connector Architecture


Complete the following process to configure a tiered CA Event Integration connector architecture:

1. Analyze the requirements of the management product with which you are integrating to determine the number of connectors required and where those connectors should be installed.
2. Install the number of connectors that the architecture requires on servers across your enterprise. Some integrations require local installations, and some can be remote. For more information, see the sections in this chapter for the sources with which you are integrating.
3. Access the administrative interface and click the Policies tab.
4. Click ei-dest.xml, enter the host name and Axis2 port number of the connector to which you want to forward events, and save the file.
5. Configure other policies required for the integrations you want to establish.
6. Create a catalog with all necessary sources, destinations, and enrichments. Add ei-dest.xml as a destination.
Create multiple catalogs if you require different catalog configurations for multiple connectors. All catalogs must have ei-dest.xml assigned as a destination.
7. Assign and deploy the appropriate catalog on all connectors that you want to forward events to the connector listed in ei-dest.xml.
The connectors collect and process events according to the assigned policies and forward the processed events to the connector server named in ei-dest.xml using web services. The events appear in the Core Inbox of the destination connector and remain there until a catalog is deployed.
8. Create a catalog containing all advanced processing (enrichments, consolidation, and so on) to perform and the destinations to which to send the forwarded events.
All catalogs must contain at least one source and one destination, but the destination connector does not require a source assignment to receive events forwarded from another connector. To fulfill the source requirement for a catalog that is not collecting events from any sources other than another connector, assign the ei-src.xml file as the source. This file is a placeholder that fulfills the source requirement without enacting unwanted policy or collecting events from unwanted sources.
For example, if you are configuring a tiered connector architecture with CA Spectrum SA as the originating source and the final alert destination, you would create and deploy a catalog on the lower level connectors with sam-src.xml and ei-dest.xml assigned and create and deploy a catalog on the receiving connectors with ei-src.xml and sam-dest.xml assigned (not including any enrichments, advanced filtering, and consolidation).
9. Assign and deploy the catalog on the connector to which you forwarded the events.

The connector performs further processing on the events and dispatches them to their destination.

Other Integrations
This section describes additional integrations with sources and destinations that are not management products.

Database Integration
The CA Event Integration manager database (EMAADB) is created when you install the manager. The database destination adaptor lets you send events from any source to the manager database. When you route events to the database, the database destination policy fits processed events into the EMAADB schema. The event properties in the manager database schema represent the common internal format to which every event is transformed before routing to other destinations.

The database provides a central repository for event collection and reporting. Reports poll events in the database to return useful information about events from various sources in your enterprise, such as all nodes, resource types, sources, and IT services containing the most critical events or the most total event activity. In an enterprise with multiple event sources, the reports available in the database can provide a comprehensive overall view of important event data from all sources.

You can send events to the database as a part of any CA Event Integration configuration. The following list describes some basic scenarios:

- Sending events from a single source to the database to schedule and run reports on its events. Many event sources, such as CA NSM, application log files, and the Windows Event Log, can benefit from collection in the database. For example, you can send CA NSM events from all nodes in your enterprise to the database and schedule and run reports to view the CA NSM nodes producing the most critical and total events.
- Sending events from multiple sources to the database for a comprehensive collection of events from all sources. For example, in an enterprise with CA Spectrum, CA NSM, and web services events, you can collect events from all of these sources in the database to have one comprehensive source for evaluating event data in one common format.
- Sending events to the database as a supplement to additional event destinations. If you are routing events from several sources to another destination (for example, sending CA NSM and application log file events to CA Spectrum), you can also route these events to the database. While the main destination can serve as your unified event management platform, you can supplement this platform using the database to collect useful report data for these events.
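The "nodes producing the most critical events" style of report amounts to a simple aggregation over collected events. The sketch below illustrates that aggregation in isolation; the real reports query the EMAADB schema, whose field names are not shown here:

```python
from collections import Counter

def top_nodes_by_critical(events, n=3):
    """Rank nodes by the number of critical events they produced,
    mimicking a 'most critical events per node' report."""
    counts = Counter(e["node"] for e in events if e["severity"] == "critical")
    return counts.most_common(n)

events = [
    {"node": "nsm-a", "severity": "critical"},
    {"node": "nsm-a", "severity": "critical"},
    {"node": "nsm-b", "severity": "warning"},
    {"node": "nsm-b", "severity": "critical"},
]
print(top_nodes_by_critical(events))  # [('nsm-a', 2), ('nsm-b', 1)]
```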


Windows Event Log Integration


The Windows Event Log source and destination adaptors let you collect events from and dispatch events to the Windows operating system event log, also called the Event Viewer. The Windows Event Viewer contains three separate event logs: System, Application, and Security. These logs report important Windows activity, such as service terminations, initialization problems, application failures, and authentication failures. The Windows Event Log source adaptor collects events from all three event logs, and the Event Log destination adaptor dispatches events to the Application event log.

CA NSM collects events from the Windows Event Log automatically, but you may want to manage these events separately, normalize the events into a common format, or manage the events on a product that does not collect them, such as CA Spectrum or CA Spectrum SA. The following list describes common use cases for integrating with the Windows Event Log:

- Sending Event Log events to CA Spectrum for management as part of your network environment. When Event Log events are in CA Spectrum, you can create alarm triggers to generate alarms from certain Windows events and create event rules and conditions using the CA Spectrum granular event variables to include Windows events in correlations and actions.
- Sending Event Log events to CA NSM as part of a CA NSM to CA NSM catalog configuration, so that all Event Log events are normalized into a common format on the Event Console. You can also enrich Event Log events with WorldView information to make them easier to classify in CA NSM.
- Normalizing events from all three event logs and returning them to the Application event log, so that you can manage events from all three logs from one place in one common format.

You can also enrich event log events using the provided enrichment modules (custom database, CA CMDB, CA NSM WorldView, Internet search, CA Spectrum model attributes) in the same manner that you enrich CA Spectrum alarms and CA NSM events. Add enrichment data to the message text of a Windows Event Log event by configuring enrichment and destination policy through the administrative interface.
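Normalization of a Windows Event Log record into a common alert format can be pictured as a field mapping like the sketch below. The target field names are illustrative only, not the actual EMAADB schema:

```python
def normalize_windows_event(record):
    """Map a Windows Event Log record onto a generic common-format alert.
    Both the input keys and the output field names are hypothetical."""
    return {
        "resourceclass": "Windows",
        "resourceinstance": record["source"],
        "resourceaddr": record["computer"],
        "severity": record["level"].lower(),
        "message": "[{}:{}] {}".format(record["log"], record["event_id"], record["message"]),
    }

alert = normalize_windows_event({
    "log": "Application", "event_id": 1000, "source": "MyService",
    "level": "Error", "computer": "WINSRV01", "message": "Service stopped unexpectedly",
})
print(alert["message"])  # [Application:1000] Service stopped unexpectedly
```

Once all three logs are mapped into one shape like this, a single destination can treat System, Application, and Security events uniformly.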

SNMP Traps Integration


The SNMP source adaptor collects SNMP traps for processing and dispatching. When you deploy a catalog with SNMP source policy on a connector, the SNMP source adaptor collects all traps sent to the connector server. Collecting SNMP traps gives you the flexibility to dispatch the traps to any destination and to view all traps in the database.


Note: SNMPv3 is not supported.

If you have difficulty classifying and acting on certain traps sent to CA NSM or CA Spectrum, sending the traps to CA Event Integration and returning them to their original destination can simplify trap administration. For example, you can customize the SNMP source policy or create a custom database enrichment to help resolve traps to devices in CA Spectrum, reducing the number of unmanaged traps in your environment.

Most use cases require you to edit the SNMP source policy file to suit your environment. For more information about policy customization, see the appendix "Writing and Customizing Policy."

More information: How to Configure SNMP Trap Collection on a CA Spectrum Server (see page 51)

Application Log Files Integration


The log reader source adaptor lets you collect event data from any generic log file. The adaptor reads log files according to the application log source policy and collects information that matches the patterns established in the policy file. For example, you can configure the application log source policy so that the adaptor collects specific information from an important network activity log file and sends it to any destination, such as CA Spectrum for management with other network events and alarms, or the database for collection and reporting alongside events from other sources.

The log reader adaptor requires you to define the log file to read in the application log source policy. Specify this information from the Policies tab of the administrative interface. You must also customize this source policy file to match the format of the log file and to specify the type of information to collect from the file. The log reader adaptor is supported only for connectors installed on Windows systems.

Note: For more information about policy customization, see the appendix "Writing and Customizing Policy." For a complete customization scenario using application log policy, see Policy Customization Scenario: Application Log Source Policy.

More information:

Application Log Policy Configuration (see page 147)

120 Product Guide


Web Services Eventing Integration


Web services eventing is a way of sending notifications through web services. A web service can establish a subscription to another web service to receive events published by that service. The web services eventing source adaptor collects notifications sent through web services according to the web services eventing standard.

Use the wsevent-src.xml source policy to configure where the web services eventing adaptor collects web service notifications from and how to process these events. This policy file requires you to define the web services publisher endpoint and other configuration information. You can specify this information from the Policies tab of the administrative interface.

When you deploy a catalog with web services eventing source policy, the web services eventing adaptor collects events that the defined publisher outputs. You may want to collect web services events from important managed services and applications and send them to the database for reporting, or to an external event source for management on a unified event platform.

By default, the source policy is customized to process web service notifications from Microsoft Live Meeting servers. You must customize the policy to process web services events from other sources. For more information about customizing policy, see the appendix "Writing and Customizing Policy."

Tutorials
The deployment scenarios in this chapter cover various basic and complex catalog configurations for use with specific management products. You can create numerous other configurations, many of which can be more complex, with multiple sources, customized policy, and custom enrichments. Many of these complex configurations are covered in detail in tutorials shipped with CA Event Integration. These tutorials use procedures, screenshots, and detailed examples to guide you through deployment scenarios from installation through configuration, deployment, and verification. CA Event Integration includes tutorials for the following configurations:

- Running the CA Catalyst connector for CA eHealth through CA Event Integration (EI 2.5 Integrating UCF Compliant Connectors.doc)
- Customizing CA NSM source policy to collect events from third-party applications (EI 2.0 Customize NSM Source Policy.doc)


- Creating specialized SNMP policy (EI 2.0 Customize SNMP Source Policy.doc)
- Working with CA Spectrum SA, including a CA Event Integration domain connector, CA Event Integration proxy connector, and CI reconciliation techniques (EI1.2-SAM.doc)
- Working with CA Spectrum using r1.2 functionality, including distributed SpectroSERVER support and reconciliation by name (EI1.2-SPECTRUM.doc)
- Working with CA Spectrum using r1.1 functionality, including alarm enrichments, custom event codes, and event procedures (EI1.1-SPECTRUM.doc)
- CA NSM to manager database with custom database enrichment (EI NSM to DB.doc)
- CA NSM to CA NSM and the manager database with WorldView, CMDB, and custom database enrichment (EI NSM to NSM and DB.doc)
- SNMP traps to CA NSM and the manager database with SNMP policy customization and custom database enrichment (EI SNMP to DB and NSM.doc)
- CA Spectrum to CA Spectrum with custom database enrichment (EI Spectrum to Spectrum.doc)
- Windows Event Log and application log to manager database using customized policy and enrichments (EI Syslog and Applog to DB.doc)
- Web service events to manager database (EI WSEvent to DB.doc)

You can find the tutorial files at EI_HOME\Docs\Tutorials.


Chapter 4: Configuration and Administration


This section contains the following topics:

Configuration Basics (see page 123)
How to Create and Deploy Catalog Configurations (see page 124)
Open the Administrative Interface (see page 124)
Administrative Tools (see page 125)
Policy (see page 128)
Catalogs (see page 158)
Connectors (see page 164)

Configuration Basics
Configuring CA Event Integration consists of managing the content of and the relationships among the following entities:

- Connectors
- Catalogs
- Policies

Each connector establishes integrations and performs event processing on events collected from the integrated sources. The administrative interface automatically displays connectors that are registered with the manager during installation. The event processing settings that connectors enact on each event source and destination are defined in policy. CA Event Integration provides policy for all provided source and destination adaptors and enrichment modules. The main configuration tasks to perform are the following:

- Configure policy attributes for each integrated source
- Group the configured policies together into catalogs
- Assign and deploy catalogs to connectors to enact the assembled policies on all sources and destinations

You can deploy one catalog to several connectors and create multiple catalogs to cover different configurations throughout your enterprise.



The architecture of connectors, catalogs, and policies gives you maximum flexibility in configuring your environment. You can compile any combination of available policies into a catalog with little configuration required. All catalog configurations, whether as simple as routing one source to the database or as complex as receiving events from five different sources and routing them to three separate destinations, require only basic policy configuration, catalog creation, and catalog deployment to the appropriate connectors.

How to Create and Deploy Catalog Configurations


You must create, configure, assign, and deploy a catalog before the connector begins collecting events from sources, processing collected events according to catalog policy, and dispatching processed events to their specified destinations. Complete the following process to begin collecting and processing events:

1. Configure policy attributes (see page 131).
2. Create a catalog (see page 158).
3. (Optional) Preview how the catalog policy will transform events (see page 161).
4. Assign the catalog to all connectors that you want to use the catalog policy (see page 162).
5. Deploy the catalog on all assigned connectors (see page 163).

Open the Administrative Interface


Open the CA Event Integration administrative interface from a web browser or the Start menu. The administrative interface lets you configure your connectors, catalogs, and policies and run reports on events collected in the database.

To open the administrative interface

1. Do one of the following:

   - Open the Start menu and select Programs, CA, Event Integration, Manager User Interface.
   - Enter the following in a web browser:

     http://servername:port/EMAA

   servername
       Specifies the name of the CA Event Integration Manager node.


   port
       Specifies the port number you entered on the Tomcat screen during installation. The default is 9091.

   The Event Integration login screen opens.

2. Enter the user name and password that you specified on the Tomcat screen during installation and click Log In.

   The administrative interface opens to the Dashboard tab.

Refresh the Administrative Interface


After you make certain changes in the administrative interface, you must refresh the interface to view the result of the change. For example, when you create a new report, you must refresh the interface for the report to appear in the Reports tree. A refresh is also required to view an updated deployment status on the Connectors tab.

Note: Not all changes require a refresh to appear in the interface.

To refresh the administrative interface, click Refresh at the top right of the screen. The interface refreshes with the most current data.

Administrative Tools
The administrative interface contains the following configuration tools:

Dashboard
    Provides access to important configuration tasks on one screen in a logical workflow.

Administration tabs
    Provide the functionality for configuring policies and catalogs and deploying catalogs to connectors in the following tabs:
    - Connectors
    - Catalogs
    - Policies

Dashboard
The dashboard is the central workplace of the administrative interface. It provides access to all important configuration tasks, vital status information, connector status administration, and all other areas of the interface with one click.


The Dashboard tab appears by default when you open the administrative interface. The Shortcuts pane provides links to the most common administrative tasks in a logical workflow, so that the dashboard can serve as a starting point for configuring your catalogs, policies, and connectors. The following tasks are available from the Shortcuts pane:

Note: You can also perform these tasks from the administration tabs. The starting point is the only difference between these two methods. This chapter documents these tasks as procedures performed from the administration tabs. Click the link on each task to view these procedures. For more information about performing these tasks from the dashboard, see the Online Help.

Create a Catalog (see page 158)
    Lets you group policies into a catalog. Catalogs define the event sources, event processing policy, and destinations for a connector or several connectors.

Assign a Catalog to Connectors (see page 162)
    Lets you assign a catalog to one or more connectors.

View Policies (see page 128)
    Lets you view the policies that define how events are processed for a given source or destination and set policy attributes. You group these policies to form a catalog, which you deploy to a connector to define its event processing configuration.

View Catalogs (see page 158)
    Lets you view all existing catalogs. You can edit, delete, and preview existing catalogs and create new catalogs.

View Connectors (see page 164)
    Lets you view all connectors, edit their catalog assignments, and deploy assigned catalogs to begin enacting the catalog's policy on the connector.

View Reports (see page 171)
    Provides access to predefined reports that you can run on events collected in the database or administrative data.

The Connectors pane provides a list of all connectors and their current connection status. From this pane, you can stop or restart each connector and open a detailed view of the connector's components.


Administration Tabs
The administrative interface provides specific tabs for interacting with each of the main entities: connectors, catalogs, and policies. The following administrative tabs are available:

Connectors
    Displays all existing connectors, their catalog assignments, and whether each connector's catalog is deployed. The Connectors tab lets you do the following:
    - Assign a catalog to connectors
    - Edit a connector configuration by changing its catalog assignment
    - Deploy each connector's assigned catalog
    - Deploy all assigned catalogs
    - View connector details

Catalogs
    Displays all existing catalogs, the policies in each catalog, the policy types, and any descriptions associated with the included policies. The Catalogs tab lets you do the following:
    - Create a new catalog
    - Edit catalogs
    - Delete catalogs
    - Preview how each catalog will process events

Policies
    Displays all existing policies, the policy type, and policy descriptions. Policies are divided into sources, destinations, and enrichments. You can filter this page to display only a certain type of policy. Click each policy file to edit configurable attributes.

Web Services
CA Event Integration uses web services (SOAP over HTTP) to populate the administrative interface with data retrieved from the connectors in your enterprise. Web service calls form the underlying architecture of the interface, enabling maximum flexibility. For information about each of the available web service calls, see the appendix "Web Services and Command Line Utilities."


Policy
Policy contains the instructions that tell connectors where to collect events, how to process them, and where to dispatch them after processing. Policy is stored in separate XML files at the following location:
EI_HOME\Manager\PolicyStore

The core event processing engine uses the operations defined in these files to classify, filter, parse, normalize, enrich, evaluate, and format events. Each file contains instructions that each core module uses to carry out its function. For example, the parsing module uses the parsing operations in the policy files to parse event data into new categories. Policy is divided into the following types:

- Sources (see page 128)
- Destinations (see page 129)
- Enrichments (see page 130)

Your main interaction with policy is assigning it to catalogs. When you compile a catalog, you assign the source policy for each source to collect events from, the destination policy for each destination to dispatch events to, and the enrichment policy for all enrichments to apply. Connectors only establish integrations with sources, destinations, and enrichments for which policy is assigned. Because policy is provided in separate pieces, you can compile any combination of policies into a catalog. When you deploy a catalog, all of the applied policy files are compiled into one catalog.xml file that is pushed to all deployed connectors for processing.

Policy files use configuration attributes to help adaptors establish integrations. You must configure some of these attributes, such as connection settings, for the specific policy files before the associated adaptor can correctly integrate with the source. The View Policies page of the Policies tab provides a view of all existing policies, which are sortable by policy type. Click each policy file on this page to configure its attributes.

Policy is available for all provided sources and destinations, and enrichment policy is provided for CA NSM WorldView, CA CMDB, custom databases, CA Spectrum model attributes, and Internet searches.

Source Policy
Source policy defines how to process events collected from a specific source. Each source requires different processing rules to transform events from the source format into a common event format.


Source policy contains source-specific processing instructions for each core transformation module: classifying, parsing, normalizing, filtering, consolidating, enriching, evaluating, and formatting. For more information about the functions of each core module, see the module topics under Previewed Modules.

You can assign as many sources as necessary to a catalog. For example, you may want to collect events from five different sources on a connector. If you assign the policy for all five sources, the connector can collect events from each source and process them in unique ways to create one uniform internal event format.

Selecting Source Policies from the Show drop-down list on the View Policies page displays all existing source policy. Complete source policy is provided for the following sources:

- CA NSM (nsmevent-src.xml)
- CA Spectrum (spectrum-src.xml)
- CA Spectrum SA (sam-src.xml)
- Windows Event Log (syslog-src.xml)
- SNMP traps (snmp-src.xml)
- Application log files (applog-src.xml)
- Web services eventing (wsevent-src.xml)
- HP Business Availability Center (hpbac-src.xml)
- CA OPS/MVS EMA (mfposmvs-src.xml)
- CA SYSVIEW PM (mfsysview-src.xml)

Note: For more information about the integrations supported for connectors installed on Solaris and Linux, see Provided Adaptors (see page 16).

Destination Policy
Destination policy defines how to process events to be dispatched to a specific destination. Each destination requires different processing rules to fit processed events into its internal schema.

You can assign as many destinations as necessary to a catalog. For example, you may want to send collected events to three different destinations on a connector server and the database destination on the manager node. If you assign the policy for all four destinations, the connector can process collected events to fit them into each destination in a uniform event format.


Selecting Destination Policies from the Show drop-down list of the View Policies page displays all existing destination policy. Complete destination policy is provided for the following destinations:

- CA NSM (nsmevent-dest.xml)
- CA Spectrum (spectrum-dest.xml)
- CA Spectrum SA (sam-dest.xml)
- CA Event Integration (ei-dest.xml)
- Windows Event Log (syslog-dest.xml)
- Manager Database (database-dest.xml)

Note: For more information about the integrations supported for connectors installed on Solaris and Linux, see Provided Adaptors (see page 16).

Enrichment Policy
Enrichment adds value to an event by using its data to extract additional information from an external source. An example of enrichment is using the node name defined in an event to add node-related information to the event from external sources, such as other management products.

Selecting Enrichment Policies from the Show drop-down list on the View Policies page displays all existing enrichment policy. CA Event Integration provides the following enrichment policy:

CA NSM WorldView Managed Objects (nsm-enrich.xml) (Windows only)
    Extracts an event node's WorldView managed object properties, such as WorldView severity and contact information, from CA NSM and adds them to the event.

CA CMDB (cmdb-enrich.xml)
    Extracts an event node's configuration item information from CA CMDB and adds these properties to the event. You can enrich based on any configuration item property to align event processing with configuration item attributes and changes.

Custom Database (mssql-enrich.xml, oracle-enrich.xml, mysql-enrich.xml)
    Extracts rows of data from a custom database and assigns them to the event as properties. For example, you can configure this enrichment to extract contact information from a database according to an event's device address and assign this information to the event's Source property. Use the policy file that corresponds to the type of database from which you are performing the enrichment.


CA Spectrum (spectrum-enrich.xml)
    Extracts an event node's CA Spectrum model attributes and adds them to the event. You can enrich based on any model attribute.

Internet Search (internet-enrich.xml)
    Enriches an event with a URL that uses a search engine to search based on an event property. For example, you can enrich events with a custom URL that searches a knowledge base for solutions related to the event message text.

To apply enrichments to destination events, you must assign an identifier property to the enrichment data in the enrichment policy file and assign that property to an external property in the destination policy file. This process lets you define where in the destination event the enrichment should appear. For more information about configuring enrichments, see the Policy Configuration topic for each enrichment and destination policy file.

Note: Although you can perform most enrichment configuration using policy attributes in the administrative interface, you must edit the XML files directly to configure advanced enrichments (such as multi-value enrichments or those based on an event value other than the default).

You can also create and apply customized enrichment policy to enrich events with information from external sources, such as an executable or another management product. For more information, see Policy Creation and Customization.

Note: The policy files default-filter.xml and default-consolidate.xml are also listed as enrichments in the administrative interface. These files let you define filter and consolidation policy that you can add to catalogs as enrichments to enact the policy on all events. For more information about editing these files, see Filter Policy Configuration and Consolidation Policy Configuration.

Configure Policy Attributes


Policy files contain configurable attributes, such as monitoring intervals and connection settings, for their event source. These settings are maintained in each individual policy file using <Configure> tags. Edit these configurable attributes from the View Policies page of the administrative interface. Most policy files require you to configure attributes before using them in a catalog.

Also, the pages for enrichment and destination policy files contain fields for configuring where enrichment data appears in a destination event. You must complete these fields to correctly implement enrichments.

Note: You can also configure enrichments directly in the XML policy files. Some advanced enrichments require direct XML manipulation.
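As a rough sketch, the <Configure> section of a policy file might hold attribute entries like the following. The attribute names (tracein, interval) are documented elsewhere in this guide, but the element layout shown here is an illustrative assumption, not the product's exact schema:

```xml
<!-- Hypothetical layout: policy files keep configurable attributes
     inside <Configure> tags, but the element and attribute syntax
     shown here is illustrative only. -->
<Configure>
  <!-- Event tracing for the adaptor; "off" is the documented default -->
  <param name="tracein">off</param>
  <!-- Collection interval in seconds (valid range: 10-120) -->
  <param name="interval">10</param>
</Configure>
```

In practice, you would normally edit these values from the View Policies page rather than in the XML directly; direct XML edits are required only for advanced enrichment configuration.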


Any changes that you make to policy attributes are not reflected in a currently deployed catalog. You must re-deploy the catalog for policy changes to take effect. Therefore, you should verify that all policy attributes are configured correctly before deploying a catalog. For more information about the attributes in each policy file, see each Policy Configuration topic.

To configure policy attributes

1. Click the Policies tab.

   The View Policies page opens.

2. Click the link on the policy file that you want to edit.

   The Policy Configuration page opens for the file you selected.

3. Edit the attributes and click Save.

   Your changes are saved.

   Note: If the page contains multiple tables, you must click Save on each table that you edited to save all changes.

CA Spectrum Policy Configuration


CA Spectrum source and destination policies require you to define the following information about the SpectroSERVERs with which to integrate:

- The source and destination landscape host names to enable the collection of alarms from and the dispatching of events to CA Spectrum
- The CA Spectrum user defined for CA Event Integration interaction
- The CA Spectrum version that you are using

The configurable attributes in the spectrum-src.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

landscapein
    Specifies the landscapes from which to collect alarms. You must use the landscape host names, not the hex codes. This value is case-sensitive. Enter a comma-delimited landscape list to collect alarms from multiple SpectroSERVERs in a Distributed SpectroSERVER environment.
    Note: All listed landscapes must be part of a single Distributed SpectroSERVER environment.


vbrokeragentaddr (CA Spectrum 8.1 only)
    Specifies the fully qualified domain name or IP address of the SpectroSERVER landscape or Main Location Server (for Distributed SpectroSERVER operation) from which to collect alarms. If you leave this field blank, CA Event Integration uses the appropriate landscapein value as the vbrokeragentaddr.
    Note: If spectrum-src.xml and spectrum-dest.xml are deployed in the same catalog, the vbrokeragentaddr value must be the same for the deployment to work.

landscapeuser
    Specifies the CA Spectrum user defined for pushing alarms to CA Event Integration. Change the default if you have defined a user other than ca_eis_user in CA Spectrum for this purpose.
    Note: The user must have an Administrator license in CA Spectrum.
    Default: ca_eis_user

plugin_version
    Specifies the version of CA Spectrum with which you are integrating. Select 81 if you are integrating with CA Spectrum 8.1, 90 if you are integrating with CA Spectrum 9.0 or 9.1, or 92 if you are integrating with CA Spectrum 9.2.
    Default: 90

In CA Spectrum destination policy, you can also configure the content and location of enrichments in destination events or alarms and define custom alarm attributes that you have created in CA Spectrum for use with enrichments. You must configure enrichment variables in enrichment policy before you add enrichments in destination policy.

The configurable attributes in the spectrum-dest.xml file are as follows:

traceout
    Controls event tracing and debugging output for destination adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

landscapeout
    Specifies the landscape to which to send events. Specify a Main Location Server to send alarms to a SpectroSERVER operating in a Distributed SpectroSERVER environment.


landscapeuser
    Specifies the CA Spectrum user defined for receiving events from CA Event Integration. Change the default if you have defined a user other than ca_eis_user in CA Spectrum for this purpose. This user name must be the same as landscapeuser defined in the source policy if you are collecting alarms from and sending events and alarms to the same landscape.
    Default: ca_eis_user

lostfoundlandscape
    (Optional) Specifies the landscape in which to create a lost and found module for collecting alarms and events that are not reconciled to a specific model. Enter the case-sensitive host name, not the hex value. A blank entry assumes the landscapeout value as the lost and found landscape.

vbrokeragentaddr (CA Spectrum 8.1 only)
    Specifies the fully qualified domain name or IP address of the SpectroSERVER landscape or Main Location Server (for Distributed SpectroSERVER operation) to which to dispatch alarms and events. If you leave this field blank, CA Event Integration uses the appropriate landscapeout value as the vbrokeragentaddr.
    Note: If spectrum-src.xml and spectrum-dest.xml are deployed in the same catalog, the vbrokeragentaddr value must be the same for the deployment to work.

ei_lostfound
    (Optional) Specifies whether you want to track unreconciled events in a Lost and Found module in OneClick.
    Default: on

modellookup_method
    Defines the property that CA Event Integration uses to reconcile events to models. Select by_ip to use the event's IP address, or select by_name to use the event's model name.

plugin_version
    Specifies the version of CA Spectrum with which you are integrating. Select 81 if you are integrating with CA Spectrum 8.1, 90 if you are integrating with CA Spectrum 9.0 or 9.1, or 92 if you are integrating with CA Spectrum 9.2.
    Default: 90

spectrum_Alarm_Custom1
    (Optional) Specifies the hex code for a custom alarm attribute that you have created. After defining a custom alarm attribute, you can select this attribute in the assigned_to_spectrumtag field to assign an enrichment value to appear in this attribute in a destination alarm. You can define up to four custom alarm attributes on this page.


enrichment_variable
    (Optional) Specifies a returned enrichment value stored in a variable. Include an enrichment variable to assign its value to CA Spectrum destination events and alarms. You must first define an enrichment variable for a returned value in enrichment policy before assigning it in destination policy.
    Note: You can only assign one single-value enrichment on this page. If you want to assign multiple enrichments or a complex multi-value enrichment, you must do so directly in the XML policy file.

assigned_to_spectrumtag
    (Optional) Specifies the CA Spectrum destination event variable or alarm attribute in which you want the assigned enrichment variable's data to appear. For example, if you select spectrum_Alarm_TroubleShooter, the enrichment data appears in the Assignment field of the alarm in CA Spectrum. Several attributes are available for assignment in the drop-down list, including the alarm Status, Troubleshooter, and Trouble Ticket ID fields and any custom alarm attributes that you define in the Spectrum_CustomAlarm_Attributes table.
    Note: If you want to assign enrichment data to a property not provided in the drop-down list, you must do so directly in the XML policy file.
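To make the relationship between the CA Spectrum source and destination attributes concrete, the following sketch shows a matched pair of settings. The attribute names come from the descriptions above; the XML layout and the values (SPECSRV01, SPECSRV02) are placeholders and assumptions, not the actual policy schema:

```xml
<!-- Illustrative only: spectrum-src.xml settings -->
<Configure>
  <!-- Case-sensitive landscape host names; comma-delimited for a
       Distributed SpectroSERVER environment -->
  <param name="landscapein">SPECSRV01,SPECSRV02</param>
  <param name="landscapeuser">ca_eis_user</param>
  <param name="plugin_version">90</param>
</Configure>

<!-- Illustrative only: spectrum-dest.xml settings. When both files are
     deployed in one catalog, landscapeuser (and, on CA Spectrum 8.1,
     vbrokeragentaddr) must match the source values. -->
<Configure>
  <param name="landscapeout">SPECSRV01</param>
  <param name="landscapeuser">ca_eis_user</param>
  <param name="plugin_version">90</param>
  <param name="modellookup_method">by_ip</param>
</Configure>
```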

CA Spectrum Enrichment Policy Configuration


CA Spectrum enrichment policy requires you to specify SpectroSERVER connection settings, the CA Spectrum version, and the model attributes to extract as enrichment data. The configurable attributes in the spectrum-enrich.xml file are as follows:

traceout
    Controls event tracing and debugging output for destination adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

landscapeout
    Specifies the landscape from which to extract enrichment data. Specify a Main Location Server in a Distributed SpectroSERVER environment.

landscapeuser
    Specifies the CA Spectrum user defined for access by CA Event Integration. Change the default if you have defined a user other than ca_eis_user in CA Spectrum for this purpose.
    Default: ca_eis_user


lostfoundlandscape
    (Optional) Specifies the landscape in which to create a lost and found module for alarms and events that are not reconciled to a specific model. Enter the case-sensitive host name, not the hex value. A blank entry assumes the landscapeout value as the lost and found landscape.

vbrokeragentaddr (CA Spectrum 8.1 only)
    Specifies the fully qualified domain name or IP address of the SpectroSERVER landscape or Main Location Server (for Distributed SpectroSERVER operation) from which you are extracting enrichment data. If you leave this field blank, CA Event Integration uses the appropriate landscapeout value as the vbrokeragentaddr.

ei_lostfound
    (Optional) Specifies whether you want to track unreconciled events in a Lost and Found module in OneClick.
    Default: on

modellookup_method
    Defines the property that CA Event Integration uses to reconcile events to models. Select by_ip to use the event's IP address, or select by_name to use the event's model name.

attribute_hexcodes
    Specifies the CA Spectrum hex codes of the model attributes to extract for the enrichment. By default, the tag, id, owner, and organization attributes are used for the enrichment. You can modify this field with the hex codes for any model attributes that you want to use for the enrichment. Find the hex code for an attribute using the Spectrum Model Attribute Editor.
    Default: 12bfb,12bfc,12bfd,12bfe

plugin_version
    Specifies the version of CA Spectrum with which you are integrating. Select 81 if you are integrating with CA Spectrum 8.1, 90 if you are integrating with CA Spectrum 9.0 or 9.1, or 92 if you are integrating with CA Spectrum 9.2.
    Default: 90

ssenrich_tagname
    Specifies an XML property name to assign to the returned enrichment value. The enrichment value requires a property name for insertion into a destination event. Enter a name that contains a hex code that you are using for the enrichment.
    Default: ssenrich_12bfb

136 Product Guide


assigned_to_enrichment_variable
    Assigns the defined enrichment property to a variable that you can reference in destination policy. You must assign the enrichment variable to a property in destination policy to add the variable's enrichment value to a specific area of a destination's events. Select a variable from the drop-down list.

CA NSM Policy Configuration


CA NSM source and destination policies do not require any configuration before using them in a catalog. All attributes are populated with default values. However, if you want to send events to a remote CA NSM node (a node without a connector installation), you must add this node name to the destnode parameter in the nsmevent-dest.xml file.

The configurable attributes in the nsmevent-src.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

interval
    Specifies the frequency in seconds with which events are collected from the event source. Valid intervals are between 10 and 120 seconds.
    Default: 10

seektype
    Specifies where in the console log file to start reading and collecting events. Specify bottom to start from the bottom of the file, and specify top to start from the top.
    Default: bottom

In CA NSM destination policy, you can also configure the content and location of enrichments in destination events. You must configure enrichment variables in enrichment policy before you add enrichments in destination policy. The configurable attributes in the nsmevent-dest.xml file are as follows:

traceout
    Controls event tracing and debugging output for destination adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off
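The interval and seektype attributes describe a polling reader over the console log. The following is a minimal sketch of one polling pass, assuming a plain text log file; the real adaptor reads the CA NSM console log and repeats the pass every interval seconds:

```python
def collect(logfile, seektype="bottom"):
    """One polling pass over a log file.

    seektype="bottom" starts at the end of the file, so only lines
    appended after this point are collected on later passes;
    seektype="top" replays the existing contents first.
    """
    with open(logfile) as f:
        if seektype == "bottom":
            f.seek(0, 2)  # whence=2: seek relative to end of file
        return [line.rstrip("\n") for line in f]
```

With seektype set to top an existing file is read in full; with bottom the same call returns nothing until new lines arrive.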


destnode
    Specifies the node to which events are dispatched. By default, this field is blank, and a blank setting sends events to the local CA NSM node on each connector. Only use this field if you want to send events to a remote CA NSM node without a connector installation.

enrichment_variable
    (Optional) Specifies a returned enrichment value stored in a variable. Include an enrichment variable to assign its value to CA NSM destination events. You must first define an enrichment variable for a returned value in enrichment policy before assigning it in destination policy.
    Note: You can only assign one single-value enrichment on this page. If you want to assign multiple enrichments or a complex multi-value enrichment, you must do so directly in the XML policy file.

assigned_to_unieventtag
    (Optional) Specifies the CA NSM destination event property in which you want the assigned enrichment variable's data to appear. For example, if you select evtlog_udata, the enrichment data appears in the User Data field of the event in CA NSM. The user data and category properties are available for assignment in the drop-down list.
    Note: If you want to assign enrichment data to a property not provided in the drop-down list, you must do so directly in the XML policy file.

CA NSM Enrichment Policy Configuration


CA NSM enrichment policy requires you to specify connection settings for extracting WorldView information for enrichment and an enrichment tag and variable value for assigning the WorldView enrichment data to destinations. The configurable attributes in the nsm-enrich.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: on

repository
    Specifies the repository name of the CA NSM WorldView server from which you want to extract data.

userid
    Specifies a user name for connecting to the WorldView repository.

password
    Specifies the password for the user name entered in the userid field.


propertyname
    Specifies the WorldView property name to extract based on the event's resource address.
    Default: name
    Note: By default, WorldView enrichment uses the internal_resourceaddr source event property value to extract a node name associated with the property defined in this field. If you want to define a different source event property to use for the enrichment, you must do so directly in the XML policy file.

wv_tagname
    Specifies an XML property name to assign to the returned enrichment value. The enrichment value requires a property name for insertion into a destination event. Select a property name in the drop-down list that corresponds to the property you are extracting.
    Default: wv_name

assigned_to_enrichment_variable
    Assigns the defined enrichment property to a variable that you can reference in destination policy. You must assign the enrichment variable to a property in destination policy to add the variable's enrichment value to a specific area of a destination's events. Select a variable from the drop-down list.

CA Spectrum SA Policy Configuration


CA Spectrum SA source and destination policies require you to define the following information:

- The source and destination ActiveMQ server host name and TCP port for JMS messaging communications
- User credentials to access the ActiveMQ server
- An event topic to subscribe to on the CA Spectrum SA IFW bus

The sam-src.xml file contains the following configurable attributes:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

hostin
    Specifies the name of the CA Spectrum SA ActiveMQ host from which to collect alerts.


portin
    Specifies the port number for TCP communication over the ActiveMQ host.
    Default: 61616

userin
    Specifies the CA Spectrum SA administrator user name for connecting to the CA Spectrum SA IFW bus on the ActiveMQ host.

passwdin
    Specifies the password for the user name.

topicin
    Specifies the topic from which to subscribe and collect alerts on the IFW bus.
    Default: CA_IFW_EVENT_TOPIC

The sam-dest.xml file contains the following configurable attributes:

traceout
    Controls event tracing and debugging output for destination adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

hostout
    Specifies the name of the CA Spectrum SA ActiveMQ host to which to dispatch alerts.

portout
    Specifies the port number for TCP communication over the ActiveMQ host.
    Default: 61616

userout
    Specifies the CA Spectrum SA administrator user name for connecting to the CA Spectrum SA IFW bus on the ActiveMQ host.

passwdout
    Specifies the password for the user name.

topicout
    Specifies the publication topic to which to dispatch alerts on the IFW bus. Change this topic only if you have created a new topic to handle CA Event Integration events.
    Default: CA_IFW_EVENT_TOPIC


reconcile_level
    Specifies whether to reconcile alerts at the containing device level or to specific objects within a device.
    Note: This property does not apply with the samec-dest.xml policy file, which is deployed when integrating with CA Spectrum SA through the event enrichment feature. For more information about event enrichment, see the CA Spectrum SA documentation.
    Default: device

enrichment_variable
    (Optional) Specifies a returned enrichment value stored in a variable. Include an enrichment variable to assign its value to CA Spectrum SA destination alerts. You must first define an enrichment variable for a returned value in enrichment policy before assigning it in destination policy.
    Note: You can only assign one single-value enrichment on this page. If you want to assign multiple enrichments or a complex multi-value enrichment, you must do so directly in the XML policy file.

append_to_samtag
    (Optional) Specifies the CA Spectrum SA destination alert attribute in which you want the assigned enrichment variable data to appear. The sam_userAttribute1-5 properties are available for assignment, which places enrichment data in one of the custom User Attribute alert fields.
    Note: If you want to assign enrichment data to a property not provided in the drop-down list, you must do so directly in the XML policy file.

CA OPS/MVS EMA Policy Configuration


CA OPS/MVS EMA source policy requires you to enter the port information in the administrative interface to receive all traps on the configured port. The configurable attributes in the mfopsmvs-src.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off


port
    Specifies the port number you want to use for receiving traps. The SNMP source sends all traps to the CA Event Integration host on this port.
    Default: 9999
    Note: If multiple SNMP sources are deployed together, make sure that the port number is the same in all of the policy files. Also ensure that the port is not restricted and is always available for receiving traps.

You must also customize this policy file to specify how to collect and process alarms from your CA OPS/MVS EMA environment.
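Trap reception on the configured port amounts to listening on a UDP socket. The sketch below shows the port requirement in simplified form; real traps are BER-encoded SNMP PDUs, which this sketch does not parse:

```python
import socket

def receive_one_trap(port, timeout=5.0):
    """Receive a single UDP datagram on the trap port.

    Returns the raw payload and the sender address. The port (9999 by
    default) must be unrestricted -- no firewall block and no competing
    listener -- for the source adaptor to receive traps.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("", port))
        data, (sender, _) = sock.recvfrom(65535)
        return data, sender
```

This is also why all SNMP-based source policies deployed together must agree on one port number: only one listener can bind it.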

CA SYSVIEW PM Policy Configuration


CA SYSVIEW PM source policy requires you to enter the port information in the administrative interface to receive all traps on the configured port. The configurable attributes in the mfsysview-src.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

port
    Specifies the port number you want to use for receiving traps. The SNMP source sends all traps to the CA Event Integration host on this port.
    Default: 9999
    Note: If multiple SNMP sources are deployed together, make sure that the port number is the same in all of the policy files. Also ensure that the port is not restricted and is always available for receiving traps.

You must also customize this policy file to specify how to collect and process alerts from your CA SYSVIEW PM environment.

HP Business Availability Center Policy Configuration


HP Business Availability Center (HP BAC) source policy requires you to enter the port information in the administrative interface to receive all traps on the configured port. The configurable attributes in the hpbac-src.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off


port
    Specifies the port number you want to use for receiving traps. The SNMP source sends all traps to the CA Event Integration host on this port.
    Default: 9999
    Note: If multiple SNMP sources are deployed together, make sure that the port number is the same in all of the policy files. Also ensure that the port is not restricted and is always available for receiving traps.

You must also customize this policy file to specify how to collect and process alarms from your HP BAC environment.

CA Event Integration Forwarding Policy Configuration


CA Event Integration forwarding policy requires you to enter information about the connector to which to forward processed events. The configurable attributes in the ei-dest.xml file are as follows:

traceout
    Controls event tracing and debugging output for destination adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

host
    Specifies the host name of the connector to which to forward events.

port
    Specifies the Axis2 port number of the connector to which to forward events.
    Note: The default Axis2 port number of all connectors is 8083.

CA CMDB Enrichment Policy Configuration


CA CMDB enrichment policy requires you to specify information for connecting to and extracting configuration item information from the CA CMDB. You must also specify an enrichment property and variable value for assigning the CA CMDB enrichment data to destinations. The configurable attributes in the cmdb-enrich.xml file are as follows:

endpointref
    Specifies the URL for the CA CMDB web services.
    Default: http://localhost:8080/axis/services/USD_R11_WebService?wsdl


userid
    Specifies a user name for connecting to the CA CMDB instance.
    Default: CMDBAdmin

password
    Specifies the password for the user name entered in the userid field.

selectquery
    Specifies the clause for locating the configuration item associated with the event resource address. The query can return multiple values, but you must customize the XML policy file directly to handle the values returned from complex queries.
    Default: dns_name like '%s'
    Note: By default, CA CMDB enrichment uses the internal_resourceaddr source event property value to extract CA CMDB information.

propertylist
    Specifies the CA CMDB properties for the enrichment module to extract from the configuration item. You can extract the following types of information (all of which are represented in the default value):
    Standard properties
        Extracts standard single CI properties. For example, the name attribute returns the name property of the associated CI.
    Embedded list properties
        Extracts properties that are part of a list. For example, location.address returns the CI address location property.
    Custom attributes
        Extracts properties defined in a linked table. For example, assoc_har_serx[proc_speed,disk_type] returns the processor speed and disk type properties of the associated CI hardware table. In general, use the associated_table[property1,property2...] convention to extract custom attributes from tables.
    Default: name,location.address1,location.city,assoc_har_serx[proc_speed,disk_type]

ciurl
    Specifies the Visualizer URL to access the CA CMDB CI definitions. By default, this property produces a URL that accesses the CA CMDB Visualizer page for the CI represented by the event resource address.
    Default: http://localhost:8080/CAisd/pdmweb.exe?OP=SEARCH+FACTORY=nr+SKIPLIST=1+QBE.EQ.id={0}
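The '%s' in selectquery and the {0} in ciurl are substitution slots that the enrichment module fills with the event's resource address and the located CI id, respectively. The sketch below shows that substitution in isolation; the values are illustrative, not live CA CMDB data:

```python
# Placeholder substitution as used by the selectquery and ciurl defaults.
selectquery = "dns_name like '%s'"
ciurl = ("http://localhost:8080/CAisd/pdmweb.exe?OP=SEARCH+FACTORY=nr"
         "+SKIPLIST=1+QBE.EQ.id={0}")

def build_where_clause(resource_addr):
    """Fill '%s' with the event's internal_resourceaddr value."""
    return selectquery % resource_addr

def build_ci_link(ci_id):
    """Fill {0} with the id of the CI located by the query."""
    return ciurl.format(ci_id)
```

For example, build_where_clause("server01.example.com") yields the clause dns_name like 'server01.example.com'.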


cmdb_tagname
    Specifies an XML property name to assign to the returned enrichment value. The enrichment value requires a property name for insertion into a destination event. Use the default or enter another property name.
    Default: cmdb_name

assigned_to_enrichment_variable
    Assigns the defined enrichment property to a variable that you can reference in destination policy. You must assign the enrichment variable to a property in destination policy to add the variable's enrichment value to a specific area of a destination's events. Select a variable from the drop-down list.

Custom Database Enrichment Policy Configuration


Custom database enrichment policy requires you to specify connection properties for the database you are using for the enrichment, a query to serve as the basis for enrichment, and an enrichment property and variable value for assigning the database enrichment data to destination events.

Important! CA Event Integration only provides a facility to connect, execute, and apply the returned data. You must provide a proper SQL statement to return the correct data in the correct format from your database. See the singleresultquery parameter description for an example of a situation where you must customize the SQL statement to return usable data.

Select the custom database enrichment file that corresponds with the type of database that contains the information to extract. Each policy file is tuned to connect with its specific database type. The configurable attributes of the mssql-enrich.xml, oracle-enrich.xml, and mysql-enrich.xml files are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: on

jdbc_jarpath
    Specifies the classpath of the database's JDBC and associated jar files. Separate each jar path with a semi-colon.

dbinstance
    Specifies the instance or server name of the database that you want to extract information from. Use the following syntax for named instances:
dbserver\instance:port


dbname
    Specifies the name of the database from which to extract information.

dbuser
    Specifies the user name for connecting to the database. Specify an internal database user, or leave the field blank if you are using trusted security.
    Default: sa (mssql-enrich), root (mysql-enrich), system (oracle-enrich)

dbpassword
    Specifies the password for the user name entered in the dbuser field. Leave this field blank if you are using trusted security.

singleresultquery
    Specifies the database query to use to extract the appropriate information from the database for enrichment. Use this query to link event information with the information that you want to extract from a database table. The query can return multiple values, but you must customize the policy to handle the values returned from complex queries.
    Default: select contact from Table1 where hostname=?
    Querying a database table that uses fixed columns may require you to pad the key value in the WHERE clause. Use a SQL function such as Rpad to add the additional spaces to ensure the queried value will be found. For example, you would need to modify the default query as follows if the hostname column has a fixed width of 64 characters: select contact from Table1 where hostname=rpad(?,64).
    Note: By default, custom database enrichment uses the internal_resourceaddr source event property value to extract the desired information from the database. If you want to define a different source event property to use for the enrichment query, you must do so directly in the XML policy file.

mssql_tagname, oracle_tagname, mysql_tagname
    Specifies an XML property name to assign to the returned enrichment value. The enrichment value requires a property name for insertion into a destination event. Use the default or enter another property name.
    Default: mssql_0, oracle_0, mysql_0

assigned_to_enrichment_variable
    Assigns the defined enrichment property to a variable that you can reference in destination policy. You must assign the enrichment variable to a property in destination policy to add the variable's enrichment value to a specific area of a destination's events. Select a variable from the drop-down list.
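The fixed-column padding described for singleresultquery is easy to see with a Python equivalent of SQL Rpad: a fixed-width CHAR-style column stores the value padded with trailing spaces, so an unpadded key never compares equal to the stored value:

```python
def rpad(value, width):
    """Python equivalent of SQL Rpad(value, width): pad with spaces."""
    return value.ljust(width)

# A hostname stored in a fixed-width CHAR(64) column keeps its padding:
stored = rpad("server01", 64)

rpad("server01", 64) == stored  # padded key matches the stored value
"server01" == stored            # raw, unpadded key does not
```

This is why the example query wraps the parameter as rpad(?,64) rather than comparing the bare value.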


Internet Search Enrichment Policy Configuration


Internet search enrichment policy requires you to define the URL from which to perform a search using event properties and an enrichment variable value for assigning the URL to destination events. The configurable attributes in the internet-enrich.xml file are as follows:

searchurl
    Specifies the URL for performing the Internet search. Use substitution characters to add the event properties to search for in the URL. By default, the enrichment uses the internal_resourceaddr property as search criteria. The default URL searches Google using these values. You can also link to an internal knowledge base.
    Note: If you want to search based on event properties other than the default, you must directly edit the XML file.
    Default: http://www.google.com/search?hl=en&q={0}

assigned_to_enrichment_variable
    Assigns the defined search URL to a variable that you can reference in destination policy. You must assign the enrichment variable to a property in destination policy to add the variable's enrichment value to a specific area of a destination's events. Select a variable from the drop-down list.
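The {0} in searchurl is the substitution slot for the event property used as search criteria. A sketch of that substitution follows; the URL-encoding step is a precaution added in this illustration (the product's own encoding behavior is not documented here):

```python
from urllib.parse import quote_plus

searchurl = "http://www.google.com/search?hl=en&q={0}"

def build_search_link(resource_addr):
    """Fill the {0} slot with a URL-encoded event property value."""
    return searchurl.format(quote_plus(resource_addr))
```

The same pattern works for an internal knowledge base: replace searchurl with your knowledge-base search URL and keep the {0} slot.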

Application Log Policy Configuration


Application log source policy requires you to specify a log file from which to collect event information. The configurable attributes in the applog-src.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

interval
    Specifies the frequency in seconds with which events are collected from the event source. Valid intervals are between 10 and 120 seconds.
    Default: 10

startposition
    Specifies where to begin reading the log file. Specify top to begin at the top of the file or bottom to begin at the bottom.
    Default: bottom


logfile
    Specifies the log file that you want the log reader adaptor to read. Specify the path to the file as a part of the file name.
    Default: ..\\Logs\\axis2.log
    Note: The default axis2.log file logs all of the product's web services activity.

You must also customize this policy file to correctly classify, parse, and format messages collected from the specified file.

Web Services Eventing Policy Configuration


Web services eventing source policy requires you to define an event publisher to collect web service notifications from and the event details of that publisher.

Note: The default values represent a connection to the product's web service notifications test publisher and the event details associated with that publisher.

The configurable attributes in the wsevent-src.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

publisher
    Specifies the URL for the web service publisher from which to collect notifications.
    Default: http://localhost:8083/axis2/services/PublisherService

eventroot
    Specifies the XML element name where the embedded event begins.
    Default: //tns:getServiceResourceInstanceHealthStateResponse

nsPrefix
    Specifies the namespace prefix for the embedded event.
    Default: tns

nspace
    Specifies the namespace definition for nsPrefix.
    Default: http://tid.es/AgendaService/schema

You must also customize this policy file to correctly process the collected web services notifications. By default, the file is configured to process notifications sent from Microsoft Live Meetings servers.
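Together, eventroot, nsPrefix, and nspace locate the embedded event inside a notification document. The sketch below applies the default values to a hypothetical notification payload; the element contents are invented for illustration, only the element name and namespace mirror the policy defaults:

```python
import xml.etree.ElementTree as ET

nspace = "http://tid.es/AgendaService/schema"  # the nspace default
notification = (
    '<soap:Body xmlns:soap="http://www.w3.org/2003/05/soap-envelope" '
    f'xmlns:tns="{nspace}">'
    "<tns:getServiceResourceInstanceHealthStateResponse>"
    "<tns:state>DEGRADED</tns:state>"
    "</tns:getServiceResourceInstanceHealthStateResponse>"
    "</soap:Body>"
)

root = ET.fromstring(notification)
# eventroot names the element where the embedded event begins; the
# nsPrefix "tns" is resolved through its nspace definition:
event = root.find(".//tns:getServiceResourceInstanceHealthStateResponse",
                  {"tns": nspace})
```

If nsPrefix and nspace do not match the publisher's actual namespace, the lookup finds nothing, which is why all three settings must agree with the publisher's payload.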


Windows Event Log Policy Configuration


Windows Event Log source policy does not require any configuration before using it in a catalog. All attributes are populated with valid default values. Event Log events are collected using three separate adaptors, one each for the System, Application, and Security logs, and the settings in the source policy file are separated for each adaptor. The configurable attributes in the syslog-src.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

interval
    Specifies the frequency in seconds with which events are collected from the event source. Valid intervals are between 10 and 120 seconds.
    Default: 10

logname
    Specifies the name of each Windows event log. By default, these fields are populated with the event log name assigned to each adaptor (Application, Security, or System).

In Windows Event Log destination policy, you configure the content and location of enrichments in destination events. You must configure enrichment variables in enrichment policy before you add enrichments in destination policy. The configurable attributes in the syslog-dest.xml file are as follows:

traceout
    Controls event tracing and debugging output for destination adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

enrichment_variable
    (Optional) Specifies a returned enrichment value stored in a variable. Include an enrichment variable to assign its value to Windows Event Log destination events. You must first define an enrichment variable for a returned value in enrichment policy before assigning it in destination policy.
    Note: You can only assign one single-value enrichment on this page. If you want to assign multiple enrichments or a complex multi-value enrichment, you must do so directly in the XML policy file.


assigned_to_syslogtag
    (Optional) Specifies the Windows Event Log destination event property in which you want the assigned enrichment variable's data to appear. For example, if you select syslog_msg, the enrichment data appears appended to the message text of the event in the Windows Event Log. Only the message text property is available for assignment in the drop-down list.
    Note: If you want to assign enrichment data to a property not provided in the drop-down list, you must do so directly in the XML policy file.

SNMP Policy Configuration


SNMP policy requires you to enter the port information in the administrative interface to receive all traps on the configured port. The configurable attributes in the snmp-src.xml file are as follows:

tracein
    Controls event tracing and debugging output for source adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off

port
    Specifies the port number you want to use for receiving traps. The SNMP source sends all traps to the CA Event Integration host on this port.
    Default: 9999
    Note: If multiple SNMP sources are deployed together, make sure that the port number is the same in all of the policy files. Also ensure that the port is not restricted and is always available for receiving traps.

You must also customize this policy file to specify how to collect and process traps to suit your environment.

Database Policy Configuration


Database destination policy does not require any configuration before using it in a catalog. The connection information for the manager database is entered in this file automatically according to the database information you provided during installation. You should only edit this file if you want to change any of the database settings. The configurable attributes in the database-dest.xml file are as follows:

traceout
    Controls event tracing and debugging output for destination adaptors. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    Default: off


hostname
    Specifies the database server name. Use the following syntax for named instances:
dbserver\instance:port

dbname
    Specifies the database name.

identity
    Specifies the user name for connecting to the database. Only an internal SQL Server user appears in this field, and you can only enter a SQL Server user name. If you are using trusted security, this field is blank.

password
    Specifies the password for the user name defined in the identity field. This field is blank if you are using trusted security.

Filter Policy Configuration


Configure specific filtering policy to use in a catalog in the default-filter.xml file from the administrative interface. This file lets you specify include or exclude criteria for filtering events after normalization. Add this file during catalog creation as an enrichment to apply the policy to all collected events converted to the internal event format.

The order that you enter filter criteria is important, because the core evaluates filter criteria in this order for each event. If an event matches exclude criteria, the core immediately discards the event without evaluating other entries. If an event matches include criteria, the core immediately keeps the event and does not evaluate subsequent entries. If an event does not match an entry, the core continues to evaluate the filter entries in order.

The Policy Configuration: Filter page lets you enter up to five filter entries. If you require more entries, you must manually edit the XML file. The configurable attributes in the default-filter.xml file are as follows:

field
    Specifies the internal event property on which to filter. Select a property from the drop-down list. Filtering occurs after event normalization, so all events should be normalized to this set of properties.
    Note: If you want to filter based on fields other than the internal properties, such as source or destination event properties, or include property and pattern combinations in a single entry, you must edit the XML file manually or enter filter policy in specific source or destination policy files.

type
    Specifies whether to include or exclude events that match the pattern.


pattern
    Specifies the regular expression pattern that the field property must match to trigger the filtering action. When an event property specified in the field of the entry matches the pattern, the core carries out the defined filtering action, either include or exclude. You can use this field on the last entry to create a default filter for all events not filtered by other entries. For example, you can enter the regular expression ^.* to exclude all events not filtered or included by the preceding entries.

More information: Filter Operation (see page 265)
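The first-match ordering of filter entries can be sketched as follows. The entries and property names below are illustrative (your internal property names and patterns will differ), but the evaluation logic mirrors the description: the first matching entry decides, and a trailing ^.* exclude acts as a default drop:

```python
import re

def apply_filters(event, entries):
    """Evaluate (field, type, pattern) entries in order; first match wins.

    Returns True if the event is kept. An event matching no entry
    passes through unfiltered.
    """
    for field, ftype, pattern in entries:
        if re.match(pattern, event.get(field, "")):
            return ftype == "include"
    return True

entries = [
    ("internal_resourceclass", "exclude", "^Printer"),   # drop printer events
    ("internal_severity", "include", "^Critical"),       # keep criticals
    ("internal_severity", "exclude", "^.*"),             # default: drop the rest
]
```

Reordering the entries changes the outcome: if the catch-all exclude were first, every event would be dropped before the include entry was ever evaluated.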

Consolidation Policy Configuration


Configure specific consolidation policy to use in a catalog in the default-consolidate.xml file from the administrative interface. Add this file during catalog creation as an enrichment to apply the policy to all collected events converted to the internal event format. The core consolidates duplicate events based on duplicate values in defined event properties and sends one event with a count attribute to indicate the number of duplicates.

The Policy Configuration: Consolidate page lets you specify an entry for consolidating events that can include one internal property or common combinations of internal properties. If the available combinations do not meet your needs, you must manually edit the XML file. The configurable attributes in the default-consolidate.xml file are as follows:

deactivate
    Specifies how long to wait in minutes without another duplicate event before deactivating event consolidation for a particular event type.
    Default: 1

field
    Specifies the internal event properties that trigger consolidation when duplicate values occur. Select a property or property combination from the drop-down list. If you select a property combination, all of the property values must be duplicates for consolidation to occur.


    Note: If you want to consolidate based on fields other than the internal properties, such as source or destination event properties, you must edit the XML file manually or enter consolidation policy in specific source or destination policy files. You also must edit the XML directly to set advanced options, such as an events per minute rate at which consolidation is activated.

More information: Consolidate Operation (see page 267)
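Consolidation keyed on a property combination can be sketched as follows. The event data is illustrative, and the sketch omits the time-based deactivate window that the real core applies; it only shows how duplicates collapse into one event with a count:

```python
def consolidate(events, fields):
    """Collapse duplicate events into one event plus a duplicate count.

    Events whose values for all properties in `fields` match are
    duplicates; one event is kept per key, carrying a count attribute.
    """
    merged = {}
    for event in events:
        key = tuple(event.get(f) for f in fields)
        if key in merged:
            merged[key]["count"] += 1
        else:
            merged[key] = dict(event, count=1)
    return list(merged.values())

events = [
    {"node": "srv1", "msg": "disk full"},
    {"node": "srv1", "msg": "disk full"},
    {"node": "srv2", "msg": "disk full"},
]
consolidate(events, ["node", "msg"])
# one srv1 event with count=2, one srv2 event with count=1
```

Note that with the combination ["node", "msg"], the two srv1 events collapse but the srv2 event stays separate; keying on ["msg"] alone would collapse all three.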

Test Policy in Catalogs


Each policy file contains instructions for event processing specific to source, destination, or enrichment. You can test policy in the context of catalogs to view how the product uses policy to transform a sample event.

To test policy in catalogs

1. Create a catalog (see page 158) containing the policy to test. You do not have to assign the catalog to a connector or deploy the catalog.
2. Select the catalog on the View Catalogs page and click Preview.
   The Select Sample Event page of the Preview Catalog wizard opens.
3. Select a sample event that originated from the source whose policy you want to test and click Next. For example, if you are testing CA Spectrum source policy, select a CA Spectrum alarm to view how the core modules use the CA Spectrum source policy file to transform an alarm from CA Spectrum.
   The Modify Sample Event page opens.
4. Modify the event that you selected if necessary and click Next.
   The View Transformation page opens. This page shows all event fields before and after the complete transformation.
5. (Optional) Click Show all phases to view the transformation in more detail.
   The View Transformation page displays expandable tabs representing each processing module. You can expand each module to view how the event appears before the processing module begins and after it finishes. The event fields after the Reprocess module represent the complete processed event.
   Note: For more information about each module, see Previewed Modules (see page 154).

Chapter 4: Configuration and Administration 153


Processed events on the View Transformation page represent events that have been transformed into a common internal format. After this transformation, further processing is required to fit events into an external destination's internal schema. Therefore, you can only test source and enrichment policy using this feature. Also, the preview function transforms a single event, so you cannot test consolidation policy (which requires multiple events) with this feature.

Previewed Modules
The core event processing engine is broken up into modules, where each module performs specific processing operations on events according to the policy in a catalog. An event is passed from one module to the next during the transformation process. When you preview a catalog, you are shown how each module transforms a sample event. The modules are displayed on the View Transformation page in order of processing after you click Show all phases, and you can expand each module's tab to view the event when the module receives it and after the module finishes processing and passes the event to the next module.

The modules interact with events through event properties, or attributes that contain specific event values. The modules transform the properties according to the assigned policy to adhere to a uniform format by adding, deleting, and editing the names of properties. They also configure how the values are displayed for each property to enforce a uniform language and formatting. The View Transformation page displays event properties in the Name column and property values in the Value column before and after the transformation, or before and after each module when you click Show all phases.

Note: The Preview feature represents the transformation of events into a unified internal format. The displayed output does not necessarily represent how events will appear in an external destination, such as CA Spectrum or CA NSM. Further processing occurs after transformation to fit events into the schema of each external event destination.

The following list presents the core transformation modules, in order of processing. The referenced topics describe how each module transforms events, event properties, and their values:

- Classify (see page 155)
- Parse (see page 155)
- Normalize (see page 155)
- Filter (see page 156)
- Enrich (see page 156)
- Format (see page 156)



Classify Module
The Classify module classifies events into specific types, allowing for more specialized processing in ensuing modules. The eventtype event property denotes the general event type, usually the originating source. The Classify module refines the eventtype property to a more specific, meaningful classification, dividing events from a source into separate categories. For example, an event received from CA NSM and originally generated by an A3 agent is classified as UniEvent in the eventtype property when received. The Classify module discerns that the event is a DSM event generated by the caiWinA3 agent, for example, and refines the eventtype property to read OPR-NSMA3AGENT. As a result, ensuing modules can enact policy specific to A3 agent events.
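The refinement described above can be pictured with a small sketch. This is illustrative Python, not product code; the actual classification rules live in XML policy, and the matching condition used here is an assumption:

```python
# Illustrative sketch (not product code): refining a generic eventtype into a
# more specific classification so later modules can apply specialized policy.
def classify(event):
    # An event received from CA NSM arrives with the generic type "UniEvent".
    if event.get("eventtype") == "UniEvent":
        # Assumed rule: events reported by the caiWinA3 agent are reclassified
        # so that A3-agent-specific policy can match them.
        if "caiWinA3" in event.get("evtlog_text", ""):
            event["eventtype"] = "OPR-NSMA3AGENT"
    return event

event = {"eventtype": "UniEvent",
         "evtlog_text": "Host:Windows Windows caiWinA3 Trap WinA3_CPUTotal "
                        "Ok Critical none Prop TotalLoad"}
print(classify(event)["eventtype"])  # OPR-NSMA3AGENT
```

Events that do not match any refinement rule keep their original eventtype.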

Parse Module
The Parse module splits event properties and their property values into additional properties, providing more specialized categories for describing events. For example, CA NSM source policy instructs the Parse module to parse the evtlog_text event property, which specifies the event message text, into several more specialized and understandable properties. Following is a sample CA NSM event text in the evtlog_text property before parsing:
Host:Windows Windows caiWinA3 Trap WinA3_CPUTotal Ok Critical none Prop TotalLoad

The Parse module breaks this example message text into multiple fields, including the following:

- Windows is assigned to a "platform" property.
- caiWinA3 is assigned to a "reportingagent" property.
- Critical is assigned to a "new severity" property.
- Ok is assigned to an "old severity" property.
- Other values in the message are assigned to separate properties.

In this example, the Parse module breaks up the cumbersome message text in the evtlog_text property into several specific event properties that can be used individually for further processing and also as metrics for detailed reporting.
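A sketch of this split, using the sample message above, could look like the following. This is illustrative Python, not product code; the exact property keys (platform, reportingagent, oldseverity, newseverity) and the token positions are assumptions based on the example:

```python
# Illustrative sketch (not product code): splitting the evtlog_text message
# into specialized properties, as the Parse module does under CA NSM policy.
def parse_evtlog_text(event):
    # Token positions are assumed from the sample message layout:
    # Host:Windows Windows caiWinA3 Trap WinA3_CPUTotal Ok Critical ...
    tokens = event["evtlog_text"].split()
    event["platform"] = tokens[1]        # "Windows"
    event["reportingagent"] = tokens[2]  # "caiWinA3"
    event["oldseverity"] = tokens[5]     # "Ok"
    event["newseverity"] = tokens[6]     # "Critical"
    return event

event = {"evtlog_text": "Host:Windows Windows caiWinA3 Trap WinA3_CPUTotal "
                        "Ok Critical none Prop TotalLoad"}
parse_evtlog_text(event)
print(event["reportingagent"], event["newseverity"])  # caiWinA3 Critical
```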

Normalize Module
The Normalize module transforms the syntax of event property values into a uniform language. Instead of events retaining disparate terminology for communicating similar information (for example, using different terms for similar resource states), normalization causes events from all sources to use a common naming convention across management systems.



For example, new properties are created in the Normalize module for CA NSM events that normalize how the event resource class is named and organized. The module defines an internal property to identify the class of a resource, such as DaemonProcess, Memory, Application, and so on. Normalizing these resource classes enables better organization, recognition, and filtering. In reports, you can select event data by resource class to view all events belonging to the Processor, Memory, Application, or other event classes, and isolate these areas of your enterprise across nodes and event sources.
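A minimal sketch of this kind of normalization follows. This is illustrative Python, not product code; the mapping table, the source property key, and the internal property name are invented for the example:

```python
# Illustrative sketch (not product code): mapping source-specific resource
# class names onto a common internal vocabulary, as the Normalize module does.
# The mapping table and property names below are invented for illustration.
CLASS_MAP = {
    "nt-process": "DaemonProcess",
    "mem": "Memory",
    "app": "Application",
}

def normalize(event):
    raw = event.get("resourceclass", "")
    # Unmapped classes pass through unchanged rather than being dropped.
    event["internal_resourceclass"] = CLASS_MAP.get(raw, raw)
    return event

print(normalize({"resourceclass": "mem"})["internal_resourceclass"])  # Memory
```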

Filter Module
The Filter module filters events from further processing and dispatching to destinations. It can exclude or include events with a specified property value. Excluded events are removed from further processing and are not dispatched to the configured destinations. Included events are immediately forwarded to the next processing module. For example, CA NSM source policy filters out all events that are not classified in the eventtype property as UniEvent (or an expression containing UniEvent). In this case, the Filter module makes sure that non-CA NSM events are not processed using CA NSM source policy. Note: In the Preview tab, a filtered (excluded) event is still displayed in subsequent modules, but is unchanged.
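The CA NSM example above can be sketched as a simple include filter. This is illustrative Python, not product code; real filter conditions are expressed in XML policy:

```python
# Illustrative sketch (not product code): an include filter that passes only
# events whose eventtype contains "UniEvent", mirroring the CA NSM source
# policy example. Excluded events are dropped from further processing.
def filter_event(event):
    # Return the event to forward it to the next module, or None to exclude it.
    if "UniEvent" in event.get("eventtype", ""):
        return event
    return None

print(filter_event({"eventtype": "UniEvent"}) is not None)   # True
print(filter_event({"eventtype": "SpectrumAlarm"}) is None)  # True
```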

Enrich Module
The Enrich module acquires additional information related to an event from an external source and adds the information to the event in new event properties. Enriching an event adds information not previously available or puts available information in a new context that makes the event easier to understand and resolve. Note: Enrichment policy can be included in source, destination, and enrichment policy files. The Enrich module enacts enrichment policy found in every type of policy file. For example, events being sent to CA Spectrum must have an associated model handle that ties the event to a representative object in CA Spectrum. The CA Spectrum destination policy enriches outgoing events with this information.
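The model handle example can be pictured as a lookup against an external source. This is illustrative Python, not product code; the lookup table, its contents, and the spectrum_modelhandle property name are invented for the example:

```python
# Illustrative sketch (not product code): enriching an outgoing event with a
# CA Spectrum model handle looked up from an external source. The table and
# property names below are invented for illustration.
MODEL_HANDLES = {"server01.example.com": "0x10002f"}

def enrich(event):
    handle = MODEL_HANDLES.get(event.get("internal_resourceaddr"))
    if handle is not None:
        # Add the looked-up value as a new event property.
        event["spectrum_modelhandle"] = handle
    return event

event = enrich({"internal_resourceaddr": "server01.example.com"})
print(event["spectrum_modelhandle"])  # 0x10002f
```

Events with no match in the external source pass through unenriched.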

Format Module
The Format module combines event property values into a new or existing property using a specified format. This module combines information from multiple tags or re-stages property information to create new properties that state information in a different context or display combined information as a new attribute.



For example, CA Spectrum source policy formats several new properties that provide information in a new property or a new context. Following are some examples of Format module operations defined in CA Spectrum source policy:

- The Format module formats the CA Spectrum date and time, which is a string of numbers in the spectrum_datetime property, using a date formatting function to output a more descriptive date in the internal_gentime property.
- The Format module takes the event device IP address from the spectrum_NetAddr property and converts it to a fully qualified domain name in the internal_resourceaddr property. The module also creates the internal_resourceaddrtype property, stating that the address type is a fully qualified domain name. In reports, you can sort events by their address type.
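The first operation above, converting the numeric spectrum_datetime string into a readable internal_gentime value, can be sketched as follows. This is illustrative Python, not product code; the property names follow the text, but the interpretation of the number as a Unix timestamp and the output format are assumptions:

```python
# Illustrative sketch (not product code): formatting the numeric CA Spectrum
# timestamp string into a descriptive date in internal_gentime. Interpreting
# the value as seconds since the Unix epoch (UTC) is an assumption.
import time

def format_gentime(event):
    secs = int(event["spectrum_datetime"])
    event["internal_gentime"] = time.strftime(
        "%Y-%m-%d %H:%M:%S", time.gmtime(secs))
    return event

event = format_gentime({"spectrum_datetime": "0"})
print(event["internal_gentime"])  # 1970-01-01 00:00:00
```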

Policy Creation and Customization


All event sources and destinations require policy to drive the processing of events received from the source or being sent to the destination. All provided adaptors come with policy that defines how to process the associated events. Without policy, the core receives no instructions for how to process an event, and no processing takes place. Therefore, if you write a new source or destination adaptor to integrate with an event source, you must also write new policy for processing events from this source. Every source and destination must have a corresponding policy file.

You can also modify the provided policy files if you want to alter how events received from or sent to the provided integrations are processed. Some policy files, such as application log and web services eventing source policy, require customization before you can use them effectively in catalogs.

You perform policy creation and customization outside of the administrative interface, directly in the corresponding policy files. You either edit the existing policy files or create a new XML file for new policy.

More information: Writing and Customizing Policy (see page 233)



Catalogs
Catalogs are collections of policy that define event integrations and processing instructions. They are the tools that let you configure how CA Event Integration operates on each server in your enterprise. Catalogs contain the following processing instructions:

- The sources from which to collect events
- Catalog policy that defines how events from each source are processed and integrated into each destination
- Enrichments that define external sources from which to extract additional event information
- The destinations to which to dispatch processed events

Compile these instructions into a catalog by adding policy. Source policy contains all of the necessary information for receiving and processing events from a source, and destination policy contains all of the necessary information for dispatching processed events to a destination. Catalogs can contain any combination of sources, destinations, and enrichments, as long as you include at least one source and one destination.

You assign catalogs to a connector to define the event integration and processing configuration for the connector to apply. You can assign a catalog to any number of connectors requiring the same policy configuration. When you deploy an assigned catalog, the connectors enact the catalog policy on their servers.

The View Catalogs page of the Catalogs tab contains a table that lists all existing catalogs, the policies in each catalog, the policy types, and any descriptions associated with each policy file, and lets you create, edit, preview, and delete catalogs.

Note: To view catalogs in the context of their assigned connectors and deployment status, click the Connectors tab.

Create a Catalog
Create a catalog to compile a set of event processing policies into one entity that you can deploy to connector servers in your environment. When you create a catalog, you add the following policies:

- Source policy to define the sources to collect events from and how to process events from each source.
- Destination policy to define the destinations to dispatch processed events to and how to process events to fit into each destination's internal schema.
- Enrichment policy to define external sources to query for additional information to add to events.

You can apply any number of sources, destinations, and enrichments to a catalog. You must include at least one source and one destination.

The only catalog policy restriction relates to multiple policies associated with the same source or destination adaptor. If you apply more than one policy file associated with the same adaptor (for example, two customized CA Spectrum source policy files) to a catalog, the catalog deployment fails. This scenario is possible only if you create customized policy files for an adaptor for which a policy file already exists. This restriction does not apply to policy files for the same source and destination (for example, adding CA NSM source and destination policies) or to separate catalogs that you deploy at the same time.

Note: Many policies that you add to a catalog require you to configure specific policy attributes before using them in a catalog. For more information, see Configure Policy Attributes.

To create a catalog

1. Click New Catalog from the Catalogs tab.

   The New Catalog wizard opens with the Select Source Policies page displayed.

2. Select the sources to collect events from in the Available Sources pane and click the arrow button.

   The selected sources appear in the Selected Sources pane. Applying source policy collects events from that source and uses the policy to process them.

3. Click Next.

   The Select Destination Policies page opens.

4. Select the destinations to dispatch events to in the Available Destinations pane and click the arrow button.

   The selected destinations appear in the Selected Destinations pane. Applying destination policy processes events to fit them into the destination's event schema and dispatches processed events to the event destination.

5. Click Next.

   The Select Enrichment Policies page opens.

6. Select the enrichments to apply to received events in the Available Enrichments pane and click the arrow button.

   The selected enrichments appear in the Selected Enrichments pane. Applying enrichments queries the external source defined by each enrichment for information to add to events.



7. Click Next.

   The Save page opens.

8. Enter a name for the catalog in the Catalog Name field, optionally enter a description in the Description field, and click Finish.

   Note: Catalog names cannot contain special characters or spaces. Use only the characters A-Z, a-z, 0-9, hyphen, and underscore. Other characters cause an error message to appear.

   A dialog specifies the catalog status and prompts you to assign it to connectors.

9. Do one of the following:

   - Click Cancel. The View Catalogs page appears with the new catalog in the Catalog List table.
   - Click OK. The Assign Catalog page opens. Assign the catalog to connectors using the Assign Catalog wizard (see page 162).

   Note: You can also assign a catalog in a separate operation.

Edit a Catalog
You can edit the policies included in an existing catalog. When you edit a catalog, any changes are not applied on connectors where the catalog is already deployed. For the catalog changes to take effect, you must re-deploy the catalog after editing.

To edit a catalog

1. Do one of the following:

   - Select a catalog from the View Catalogs page of the Catalogs tab and click Edit.
   - Click an assigned catalog in the Catalogs column on the View Connectors page of the Connectors tab.

   The Select Source Policies page of the Edit Catalog wizard opens.

2. Edit the source policies assigned to the catalog if necessary, and click Next.

   The Select Destination Policies page opens.

3. Edit the destination policies assigned to the catalog if necessary, and click Next.

   The Select Enrichment Policies page opens.

4. Edit the enrichment policies assigned to the catalog if necessary, and click Finish.

   The changes are applied, and the View Catalogs page reopens.

Note: To edit a catalog's external configuration, such as its assigned connectors, use the Assign Catalog button on the View Connectors page.



Delete a Catalog
You can delete a catalog if its policy configuration is no longer useful in your environment and it is not assigned to a connector.

Note: You can retain a catalog even if it is not assigned to any connectors.

To delete a catalog

1. Select a catalog from the View Catalogs page of the Catalogs tab and click Delete.

   A dialog prompts you to confirm the deletion.

2. Click OK.

   The catalog is removed, and a message appears confirming the deletion.

Preview a Catalog
After you create a catalog, you may want to preview how the catalog's policy configuration transforms events before you deploy the catalog on a connector. Previewing a catalog transforms sample events related to the catalog's policies and displays a view of the event's properties before and after the transformation. You can also break down the preview to see how the event is transformed in each core processing module.

To preview a catalog

1. Select a catalog on the View Catalogs page of the Catalogs tab and click Preview.

   The Select Sample Event page of the Preview Catalog wizard opens. This page lists several sample events that you can preview. The events correspond to those you would receive from the sources associated with the catalog.

2. Select the event that you want to transform and click Next.

   The Modify Sample Event page opens. You can modify any of the sample event's values to emulate the type of event you expect to receive in your enterprise.

3. Modify the event if necessary using the provided fields and click Next.

   The View Transformation page opens. This page shows all event properties and tag values before and after the event transformation.

4. (Optional) Click Show all phases if you want to view the transformation in more detail.

   The View Transformation page displays expandable tabs representing each processing module that the event passes through during transformation. You can expand each module to view how the event appears before the processing module begins and after it finishes. The event properties and values after the Reprocess module represent the complete processed event.



   Note: For more information about each transformation module, see Previewed Modules.

5. Click Finish.

   The View Catalogs page reopens.

More information: Previewed Modules (see page 154)

Assign a Catalog to Connectors


You must assign a catalog to connectors to define where in your enterprise to apply the catalog configuration. When you assign a catalog to a connector, the connector uses the policy in the catalog to collect, process, and dispatch events on the server on which it is installed. Assign a catalog to multiple connectors to apply the same configuration across many servers in your environment. However, you can assign only one catalog to each connector, so if one connector requires specialized policy, you must create a separate catalog containing all pertinent policy to assign to that connector.

Note: After assignment, the connector does not begin enacting the policy in the assigned catalog. You must deploy the assigned catalog for processing to begin.

To assign a catalog to connectors

1. Click Assign Catalog on the View Connectors page of the Connectors tab.

   The Select Catalog page of the Assign Catalog wizard opens.

2. Select the catalog to assign in the Select Catalog drop-down list and click Next.

   Note: Click New Catalog to create the catalog to assign. After you create the catalog, a prompt returns you to the Assign Catalog wizard, where you can select the created catalog for assignment.

   The Select Connectors page opens.

3. Select the connectors to assign the catalog to in the Available Connectors pane and click the arrow button. Select as many connectors as necessary.

   The selected connectors appear in the Selected Connectors pane.

4. Click Next.

   The Confirm page opens.



5. Review the connector catalog assignment and click Finish.

   The table on the Confirm page displays the new assignment and the previous catalog assignment, if applicable. When you confirm the assignment, the previous assignment is replaced.

   A dialog specifies the assignment status and prompts you to deploy the catalog on the assigned connectors, which begins enacting the catalog policy. You do not have to deploy the catalog immediately.

6. Do one of the following:

   - Click OK. The assignments are complete, and the catalog deploys on the assigned connectors. The View Connectors page appears and shows all catalog assignments and a deployment status message.
   - Click Cancel. The assignments are complete, and the View Connectors page opens. This page shows all catalog assignments. You can deploy assigned catalogs from this page.

Deploy Assigned Catalogs


Deploying a catalog that you have assigned to a connector begins enacting the catalog's policy configuration on the connector's server. After deployment, the connector begins receiving events from the defined sources, processing received events, and dispatching them to their destinations. If a catalog is assigned to multiple connectors, you can deploy the catalog on each connector individually or deploy all assigned catalogs simultaneously (see page 164).

Note: You can also deploy a catalog immediately after assigning it to a connector. For more information, see Assign a Catalog to Connectors.

To deploy an assigned catalog

1. Select the connector whose catalog you want to deploy on the View Connectors page of the Connectors tab and click Deploy.

   A dialog prompts you to confirm the deployment.

2. Click OK.

   A message appears stating that the deployment has been initiated, and the Deployment Status column displays Pending. Refresh the interface or click the Connectors tab to confirm that the deployment was successful.



Deploy All Catalogs


You can deploy all catalogs that are assigned to connectors in one operation. This option is useful if you have configured and assigned catalogs for dozens of connectors in your environment and want to deploy them all at the same time.

To deploy all assigned catalogs

1. Click Deploy All on the View Connectors page of the Connectors tab.

   A dialog prompts you to confirm the deployment.

2. Click OK.

   A message appears stating that the deployments have been initiated, and the Deployment Status column displays Pending. Refresh the interface or click the Connectors tab to confirm that the deployments were successful.

Connectors
A connector is the software that integrates with the servers that it collects events from and dispatches events to. You deploy a catalog to a connector to define how to process events on its server. A catalog pushes the following information to the connector:

- The sources from which to collect events
- How to process events from each source
- The destinations to which to dispatch processed events

When you deploy an assigned catalog on its connector, the connector begins processing events on its server according to the policy in the catalog.

Connectors are available in the administrative interface if you registered the connector with the manager node of the interface during installation. Connectors must remain connected to the manager node. You can monitor these connections on the Dashboard tab to verify that processing is not interrupted.

View Connector Configuration


You can view the current configuration of your environment based on connectors and their catalog assignments and deployments. When a catalog is deployed on a connector, the connector is collecting, processing, and dispatching events based on the catalog's policy. The View Connectors page of the Connectors tab shows all registered connectors and lets you assign and deploy catalogs to connectors.



To view the connector configuration, click the Connectors tab.

The View Connectors page opens. This page displays all registered connectors in a table view with the following information:

Connector

    Displays the connector name, which corresponds to the server name on which it is installed.

Catalog

    Displays the connector's assigned catalog.

Deployed Status

    Specifies whether the connector's catalog is deployed and the status of the deployment.

    Note: A catalog can be assigned to a connector without being deployed. The connector begins processing the catalog policy on its server only after deployment.

Deployed On

    Specifies when the connector's catalog was deployed, if applicable.

When you select a connector, you can perform the following actions:

- Assign a catalog to the connector (see page 162)
- Edit its catalog assignment (see page 165)
- Deploy its assigned catalog (see page 163)

You can also deploy all assigned catalogs on connectors simultaneously by clicking Deploy All.

Edit Connector Configuration


You can edit a connector's configuration by changing its catalog assignment. After changing a catalog assignment, you must re-deploy the catalog on the connector to begin enacting the newly assigned catalog's policy.

To edit connector configuration

1. Select a connector on the View Connectors page of the Connectors tab and click Edit.

   The Edit Catalog Assignment page opens.

2. Select the catalog to assign in the Select Catalog drop-down list and click OK.

   Note: To add a new catalog for assignment to the Select Catalog drop-down list, click New Catalog.

   The catalog assignment changes, and a dialog prompts you to deploy the catalog on the connector.

3. Do one of the following:

   - Click OK. The catalog is deployed on the connector.
   - Click Cancel. The View Connectors page opens with the new assignment displayed. You must deploy the catalog before the connector begins enacting the catalog's policy on its server.

Connectors Pane
The Connectors pane of the dashboard lets you view the highest severity metrics and deployment status of every registered connector in your manager environment. You can also control connector status by stopping or restarting any registered connector. The Connectors pane contains the following columns:

Name

    Specifies the connector name, which is the server name on which it was installed. Click the connector name to view detailed status information.

Status

    Specifies the status of the connector's highest severity metric displayed in the Status Description column. The status can be one of the following:

    - Normal
    - Idle
    - Warning
    - Down
    - Critical
    - Failed

    Note: For more information about what each status value indicates, see View Connector Details.



Status Description

    Displays the component, adaptor or module, and metric with the highest severity status. For example, if the event queue in the Classifier module has the highest severity status on the connector, Ifw - Classifier - QueueLength displays. To view details about the Status Description metric and all other metrics, click the connector name.

Deployed Catalog

    Specifies the catalog deployed on the connector. Click the catalog name to view and edit the catalog's policy.

Stop

    Stops the event integration and processing services on the connector.

Restart

    Recycles the event integration and processing services on the connector.

More information:

Stop Connectors (see page 167)
Restart Connectors (see page 167)
View Connector Details (see page 168)

Stop Connectors
Stopping a connector stops the event integration and processing services (CA EI IFW and CA EI CORE) on its server. Once stopped, you must restart these services to begin receiving and processing events on the connector server.

To stop a connector, access the Connectors pane on the Dashboard tab and click Stop for the connector that you want to stop. The event integration and processing services stop on the connector's server, and a confirmation message appears stating that the connector stopped successfully.

Restart Connectors
Restarting a connector recycles the event integration and processing services (CA EI IFW and CA EI CORE) on its server. Restart a connector if you change its deployment settings, so that the new settings are applied.

To restart a connector, access the Connectors pane on the Dashboard tab and click Restart for the connector that you want to restart. The event integration and processing services are stopped and restarted on the connector server, and a confirmation appears stating that the connector recycled successfully.



View Connector Details


You can view the detailed status of all components of a connector to pinpoint performance problems for specific integrations or processing modules.

To view connector details, click the connector name on the Connectors pane of the Dashboard tab or on the View Connectors page of the Connectors tab.

The View Connector Details page opens. This page shows the status of all components of a connector. Use this information to verify that all integrations and processing modules are operating correctly and efficiently. The information on this page is collected every thirty seconds. The highest severity metric on this page is propagated to the Connectors pane on the Dashboard tab.

Each component belongs to one of the following two types:

Core

    Includes all of the core event processing modules, such as EventPlusReader, Classifier, and so on. Use these metrics to assess the health and performance of each module.

Ifw

    Includes all running integration adaptors, such as UniEvent and Spectrum. Use these metrics to assess the health and performance of each integration on the connector.

The following list describes all of the assessed metrics:

Note: These metrics are collected for both the Core and Ifw types unless otherwise noted in the description.

ProcessTime

    Specifies the time the module has been processing events, in seconds.

TotalEvents

    Specifies the total number of events processed since the module was last started.

AvgTput

    Specifies the average event throughput, in events per second, since the module was last started.

MaxTput

    Specifies the maximum event throughput, in events per second, since the module was last started.



MinTput

    Specifies the minimum event throughput, in events per second, since the module was last started.

LastTput

    Specifies the most recent event throughput value, in events per second.

Activity (IFW only)

    Specifies the date of the last update of each adaptor's statistics files. This metric is Normal if the files were last updated less than two minutes ago and Idle if the files were last updated more than two minutes ago. An Idle status indicates that events are not flowing.

FilteredEvents (Core only)

    Specifies the number of events that have been filtered since the module was last started. This metric covers events filtered explicitly according to the Filter operation in policy and implicitly because of their inability to be classified.

QueueLength (Core only)

    Specifies the number of events queued to be processed by a given module since the module was last started. Longer queues indicate a backlog of events. A status of Warning indicates more than 300 queued events, and a status of Critical indicates more than 500 queued events.

ExceptionCount (Core only)

    Specifies the number of code exceptions since the core was last started. See the EI_HOME\Logs\Core.log file for more information about these exceptions.

ThreadCount (Core only)

    Specifies the number of active processing threads since the core was last started. Additional threads are created for performing normalization and enrichments. A status of Warning indicates more than 100 active threads, and a status of Critical indicates more than 200 active threads.

The Core and Ifw types also include a general Health listing with the following metrics:

Activity

    Specifies the date of the last update of the core metrics. This metric is Normal if the metrics were last updated less than two minutes ago and Idle if they were last updated more than two minutes ago. An Idle status indicates that events are not flowing.

Availability

    Specifies the availability of the Ifw and Core functionality based on dependent services and processes. This metric is Down if the associated Windows service is not running and Failed if the associated processes are not running.


Chapter 5: Reporting
This section contains the following topics:

Reports (see page 171)
Report Types (see page 171)
Destination Database Event Reports (see page 173)
Administrative Reports (see page 185)
Schedule a Report (see page 198)
Publish a Report (see page 200)
Delete a Published Report (see page 202)
Export a Report to a PDF or CSV File (see page 202)

Reports
CA Event Integration offers a variety of predefined reports that you can run on events collected in the database and the administrative functions of the product. The reporting functionality is driven by CA Web Reporting Server (WRS), which is embedded with the manager installation. You can run reports to isolate a problem area in your enterprise, view event trends based on any criteria, and verify that the administrative functions of the product are running as expected. The reports give you a structured view of your event management environment and facilitate problem resolution. The Reports tab of the administrative interface provides access to all predefined reports grouped into categories, report templates for creating customized reports, and any published or user defined reports in the folder tree on the left pane.

Report Types
CA Event Integration contains the following folders that offer four basic types of reports:
Configured Reports
Contains predefined reports. These reports provide data about events collected in the database and product-specific metrics such as catalog deployment and configuration. They are organized by their sub-types under the Reports folder. The following types of configured reports are available:


Destination Database Event Reports
Provide data about events sent to the manager database. You can run the predefined Top Active and Top Critical event reports to isolate problem areas that are generating the most events or most critical events, or you can run a custom report using any event metric as criteria. These reports are in the Destination Database Events folder.
Administrative Reports
Provide data about product-specific entities and operations, such as catalogs, policy, deployment, and connectors. Run administrative reports for an overall view of the configuration of your environment. These reports are in the following sub-folders:

Audit Reports
Configuration Reports
Status Reports

Report Templates
Contains templates that let you create custom database event and administrative reports using criteria that you specify based on settings taken from the configured reports.
Published Reports
Contains static reports that you have configured and published based on an existing report or a report template. A published report can run on a schedule, or it can be a static HTML page representing the data returned at the original time of execution and publishing. If you set an execution schedule, you can specify to save a history of previously run instances of the report.
Note: This folder is not available until a published report is created.
User Defined Reports
Contains ad-hoc reports that you have configured and published based on an existing report or a report template. User-defined reports are similar to configured reports in that they generate a report based on their settings whenever you run them.
Note: This folder is not available until a user-defined report is created.


Destination Database Event Reports


Destination database event reports are reports that you can run on events sent to the manager database. Events appear in the database when you dispatch them to the database destination in a catalog. Run these reports to present useful data about these database events in a meaningful context.

You can use destination database event reports to isolate problem areas by grouping important events to discern event trends. You can group report data by the source of the event, the event node, resource types, platforms, and more. You can drill down into event reports to view specific event details.

The following predefined destination database event reports are provided:
Top N
Returns the areas of your enterprise with the most critical or highest amount of event activity in the database. The following two sub-types are available:
Top Critical
Returns the areas of your enterprise containing the most events with a critical severity in the database. The following Top Critical reports are available:

Top 10 Critical Nodes (12h)
Top 10 Criticals (12h)
Top 5 Critical IT Services
Top 5 Critical Nodes
Top 5 Critical Resource Types
Top 5 Critical Sources

Top Active
Returns the areas of your enterprise with the highest event count in the database in the last hour. The following Top Active reports are available:

Top 5 Most Active IT Services
Top 5 Most Active Nodes
Top 5 Most Active Resource Types
Top 5 Most Active Sources

The following destination database event report templates are provided:
Event Report Configuration
Lets you create a report that displays database events meeting the specified criteria. You can specify the data source, columns to display, and filtering criteria such as severity and interval.


Top N Event Report Configuration
Lets you create a customized Top N report that displays the top database event producers according to filtering criteria that you specify. You can create a report that filters by any data type, such as services, nodes, and resources, and by any severity.

You can configure predefined or created reports by clicking Configure, and you can drill down into most reports for a detailed event view. You can also export destination database event reports to CSV or PDF format.

Run a Top N Report


Top N reports display the areas of your enterprise that have produced the most critical events (Top Critical reports) or the highest volume of events (Most Active reports) within a predefined time frame according to the events collected in the manager database.

The following predefined Top Critical reports are provided:
Top 10 Critical Nodes (12h)
Displays the top ten nodes that are producing the most critical events in the last twelve hours.
Top 10 Criticals (12h)
Displays the top ten critical event producers in the last twelve hours broken up into the following six panes: Nodes, Instance, Users, Class, Source, and Platform. This report displays an overview of all the important areas of your environment that are producing the most critical events. For each pane, it displays the number of events, the metric details (node, class, and so on), and a link for drilling down into an hourly breakdown for each specific area.

The following predefined Top N reports are provided in Top Critical and Most Active configurations:
Top 5 Critical/Most Active IT Services
Displays the top five IT services that are producing the most critical events or the highest event count in the last hour. This report lets you pinpoint which services, such as a specific department in your organization, are producing the most critical or total event activity and drill down into a more detailed view of the message count for a specific service or a list of the events for a service.
Top 5 Critical/Most Active Nodes
Displays the top five nodes that are producing the most critical events or the highest event count in the last hour. This report lets you focus on specific servers that are generating an abnormal amount of critical or total events and drill down into a detailed event view for any node.


Top 5 Critical/Most Active Resource Types
Displays the top five resource types, such as devices, servers, or applications, that are producing the most critical events or the highest event count in the last hour and lets you drill down into a detailed event view for any resource.
Top 5 Critical/Most Active Sources
Displays the top five event sources, such as CA NSM, CA Spectrum, or the Windows Event Log, that are producing the most critical events or the highest event count in the last hour and lets you drill down into a detailed event view for any source.

To run a Top N report
1. Click the Reports tab.
2. Expand the Destination Database Events folder.
All predefined Top N reports appear.
3. Click the Top N report that you want to run.
The report you selected opens in the right pane.

Top N reports contain the following columns:
Details
Displays a column number and a button for opening a separate window showing a list of the database events for a resource.
Note: The Top 5 Critical/Most Active IT Services report contains two adjacent buttons in this column that let you drill down into a more detailed view of the message count for a specific service and a list of the database events for a service.
Events
Displays the number of critical or total database events associated with the metric (sources, services, nodes, and so on).
Metric
Displays the value for the metric that you are running the report on. This column is Service for the Services report, Node for the Nodes report, and so on.
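Conceptually, a Top N report counts events per metric value and keeps the highest counts. The following sketch illustrates the idea; the event records and field names are invented for illustration and are not the product's schema:

```python
from collections import Counter

# Hypothetical event rows; the field names mirror the report columns but
# are invented for illustration only.
events = [
    {"node": "srv01", "severity": "Critical"},
    {"node": "srv01", "severity": "Critical"},
    {"node": "srv02", "severity": "Critical"},
    {"node": "srv03", "severity": "Warning"},
]

def top_n_critical(events, metric, n=5):
    """Count Critical events per metric value and keep the top n rows."""
    counts = Counter(e[metric] for e in events if e["severity"] == "Critical")
    return counts.most_common(n)

print(top_n_critical(events, "node"))  # [('srv01', 2), ('srv02', 1)]
```

The same grouping applies whether the metric is a node, source, resource type, or IT service.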

View Event Details


The Details column of most Top N reports provides a button that launches an Event Details report. This report provides a list of the database events included in the report row on which you clicked the button.


To view event details
1. Run a Top N report that is not reporting on services.
2. Click the button in the Details column for the row of event data that you want to view.
The Event Details report opens displaying the database events for the report row. For example, if you are viewing the Top 5 Critical Nodes report, and you see that 25 critical events were generated on a specific node, clicking the Details button for the node's row in the report opens the Event Details Report showing a list containing all 25 critical events generated on the node.

The Event Details Report contains the following columns:
ID
Specifies a row number for the event.
Generated Time
Specifies when the event was generated.
New Severity
Specifies the new severity of the event.
Old Severity
Specifies the previous severity of the event.
Priority
Specifies the event priority.
Source
Specifies the source where the event came from. For example, if it is a CA NSM DSM event, the DSM agent that generated the event appears in this column.
Node
Specifies the node where the event came from.
Class
Specifies the resource type that caused the event generation.
Instance
Specifies the instance of the resource type that caused the event generation.
Platform
Specifies the platform of the node where the event came from.
User
Specifies the user that generated the event, if applicable.


Vendor
Specifies the platform vendor.
Message Text
Displays the text of the event message.

View Service Event Details


The Details column of most Top N reports provides a button that launches an Event Details Report. This button on Top N reports for services launches a list of events in the database affecting a specific service.

To view service event details
1. Run the Top 5 Critical/Most Active IT Services report or a customized Top N report that you created based on services.
2. Click the second button in the Details column for the service whose related events you want to view.
The Service Event Report opens in a separate window displaying the events in the database affecting the selected service.

The Service Event Report contains the following columns:
ID
Specifies a row number for the event.
Generated Time
Specifies when the event was generated.
New Severity
Specifies the new severity of the event.
Old Severity
Specifies the previous severity of the event.
Priority
Specifies the event priority.
Source
Specifies the source where the event came from. For example, if it is a CA NSM DSM event, the DSM agent that generated the event appears in this column.
Node
Specifies the node where the event came from.
Class
Specifies the resource type that caused the event generation.


Instance
Specifies the instance of the resource type that caused the event generation.
Platform
Specifies the platform of the node where the event came from.
User
Specifies the user that generated the event, if applicable.
Vendor
Specifies the platform vendor.
Message Text
Specifies the text of the event message.

View Service Message Count Details


When you run a Top N event report for services, you can view further details about a service's message count using a launch button from the Details column. Basic Top N services reports show the number of events (or critical events) in the database associated with a service. Launching a Service Messages Count Report from a service report displays the number of events affecting the selected service sorted by message text. From the Service Messages Count Report, you can also launch a list of all database events with a specific message text.

To view service message count details
1. Run the Top 5 Critical/Most Active IT Services report or a customized Top N report that you created based on services.
2. Click the first button in the Details column for the service whose message count you want to view.
The Service Messages Count Report page opens in a separate window. This report displays the number of events affecting the service according to their message text. For example, you can view how many events exist in the database with the text "Database connection has failed" that affect the selected service.

The Service Messages Count Report contains the following columns:
Details
Contains a button that launches the Service Messages Report, which is a list of all events in the database with the message text displayed in the Message Text column.


Note: The Service Messages Report contains the same columns as the Service Event Report without the Message Text column. The message text is displayed in the report information, because the text is the same for all events in the report.
Events
Specifies the number of events for each message text.
Message Text
Specifies the message text.

View Hourly Event Breakdown


The configured Top 10 Criticals (12h) report displays the top ten critical event producers according to the following six filtering criteria:

Node
Instance
User
Class
Source
Platform

The Top 10 Critical Nodes (12h) report displays the top ten critical event producers by node. The main page of these reports lists the total number of critical database events produced for each displayed metric in the last twelve hours. The launch buttons on these reports provide a more detailed breakdown of the number of events sent to the database by hour.

To view an hourly event breakdown
1. Run the Top 10 Criticals (12h) or Top 10 Critical Nodes (12h) report.
2. Click the launch button in the ID column for the resource for which you want to view an hourly breakdown.
The Last 12 Hours Report page opens in a separate window. This report breaks the data into hours. Each hour when an event was sent to the database has a row in the report table.

The Last 12 Hours Report contains the following columns:
Details
Contains a button that launches a list of the events in the selected row.


Hour
Specifies a one-hour period in the last twelve hours in which events were generated.
Events
Specifies the number of events generated in a one-hour period.

You can configure this report to display only the interval you are interested in, and you can also change the report format to display as a chart, a chart and table, or a workspace, in addition to the default table format.
Note: The chart format requires Java 1.5.1 to display properly.
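The row-per-hour layout of the Last 12 Hours Report amounts to grouping events by the hour of their generated time. The sketch below illustrates this; the timestamp values are invented for illustration:

```python
from collections import Counter
from datetime import datetime

# Invented Generated Time values; grouping by hour mirrors the
# row-per-hour layout of the Last 12 Hours Report.
generated = [
    datetime(2010, 5, 1, 9, 15),
    datetime(2010, 5, 1, 9, 40),
    datetime(2010, 5, 1, 11, 5),
]

# Truncate each timestamp to its hour and count events per hour.
events_per_hour = Counter(t.strftime("%H:00") for t in generated)
for hour, count in sorted(events_per_hour.items()):
    print(hour, count)
```

Hours with no events produce no row, which matches the report behavior described above.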

Create a Top N Report


The Top N Event Report Configuration template lets you create, run, and publish a Top N report according to criteria that you specify. You can also create a new Top N report based on an existing configured, published, or user-defined report.

To create a Top N report
1. Do one of the following:

Expand the Report Templates folder and click Top N Event Report Configuration.
Run an existing Top N report (either configured, user-defined, or published) and click Configure.

The Top N Event Report Configuration page opens. If you accessed the page from an existing report, it contains the values from that report.
2. Complete the Presentation pane using the following fields and drop-down lists:
Title
Specifies the report title.
Display
Specifies how many rows to display in the report.
Default: 10
Refresh
Specifies how often to refresh the report, if at all.
Default: None


3. Complete the Select Time Period pane using the following fields and drop-down lists:
Relative
Specifies what relative time period of data to include in the report. Use the Last field to select the last days or hours to include and the Rounded by drop-down list to specify how to round the time period.
Absolute
Specifies the absolute time period of data to include in the report. Use the Starting Date and Ending Date fields to specify an exact starting and ending date for report data from years to hours and minutes.

4. Complete the Select Filters pane using the following drop-down lists:
Severity
Specifies the minimum severity of events to include in the report. For example, if you select Warning, all database events of Warning severity or higher appear in the report. Select from Unknown, Normal, Warning, Critical, and Fatal.
Summarize by
Specifies in what context you want the report to summarize events. For example, if you select Service, the report displays the number of events in the database meeting the specified severity or above for each service (up to the specified number of rows). Select from Service, Node, Class, Source, Platform, User, and Vendor.
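The minimum-severity behavior of the Severity filter can be pictured as an ordered rank lookup. The sketch below is illustrative; only the ordering of the severity names comes from the product, and the numeric ranks are assumptions:

```python
# Assumed numeric ranks; only the ordering Unknown < Normal < Warning <
# Critical < Fatal comes from the documented severity scale.
SEVERITY_RANK = {"Unknown": 0, "Normal": 1, "Warning": 2, "Critical": 3, "Fatal": 4}

def at_least(events, minimum):
    """Keep events whose severity is the selected minimum or higher."""
    floor = SEVERITY_RANK[minimum]
    return [e for e in events if SEVERITY_RANK[e["severity"]] >= floor]

sample = [{"severity": "Normal"}, {"severity": "Warning"}, {"severity": "Fatal"}]
print(at_least(sample, "Warning"))  # keeps the Warning and Fatal events
```

Selecting Unknown as the minimum therefore includes every event.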

5. (Optional) Click Schedule if you want to schedule the report.
The Schedule page opens.

6. (Optional) Create a schedule for the report (see page 198).
7. Click Publish.
The Publish page opens.
Note: You can click Execute on the Configuration page to run the report, but the report is not saved or published.

8. Enter publishing settings (see page 200) and click Finish.
The report is created and published to User Defined Reports or Published Reports, depending on your publishing settings. A confirmation message appears with the published destination.
Note: Ad-hoc reports are published to the User Defined Reports folder, and static reports are published to the Published Reports folder. For more information about ad-hoc and static report types, see Publish a Report.

You may need to refresh the administrative interface or close and re-expand the report's folder or the top-level folder (if the containing folder does not yet exist) for the created report to appear in the report tree.


Configure a Top N Report


You can configure an existing Top N report, whether it is predefined, user-defined, or published, to run the report applying any changes that you specify without saving the changes. To save any configuration changes, you must publish a new report with these changes.
Note: Not all event reports are configurable.

To configure a Top N event report
1. Run a Top N event report.
2. Click Configure on the report.
The Top N Event Report Configuration page opens.
3. Make changes to the Top N report using the Top N Event Report Configuration page (see page 180).
4. Click Execute.
The report executes and applies any changes you made to the configuration. You can schedule and publish a new report based on the changes you made using the Schedule and Publish buttons.

Create an Event Report


Destination database event reports provide a list of events collected in the database according to criteria that you specify. You can specify the columns to include in the report and filtering criteria such as the originating data source, severity, service, and others. The Event Report Configuration template lets you create and publish customized destination database event reports. You can also create a new event report based on an event report that you previously created.

To create an event report
1. Do one of the following:

Expand the Report Templates folder and click Event Report Configuration.
Run a previously created event report (either user-defined or published) and click Configure.

The Event Report Configuration wizard opens with the Select Presentation page displayed. If you accessed the page from an existing report, it contains the values from that report.


2. Complete the Presentation pane using the following fields and drop-down lists:
Title
Specifies the report title.
Display
Specifies how many rows to display in the report.
Default: 10
Refresh
Specifies how often to refresh the report, if at all.
Default: None

3. Select the columns that you want to appear in the report from the Available Columns field and click the right arrow button.
The columns you selected appear in the Selected Columns field.
Note: You must add at least one column to the report. The following columns appear by default: ID, Type, Association, and Message Text.

4. Click Next.
The Select Filters page opens.

5. Complete the Select Time Period pane using the following fields and drop-down lists:
Relative
Specifies what relative time period of data to include in the report. Use the Last field to select the last days or hours to include and the Rounded by drop-down list to specify how to round the time period.
Absolute
Specifies the absolute time period of data to include in the report. Use the Starting Date and Ending Date fields to specify an exact starting and ending date for report data from years to hours and minutes.

6. Complete the Select Filters pane using the following drop-down lists:
Severity
Specifies the minimum severity of events to include in the report. For example, if you select Warning, all database events of Warning severity or higher appear in the report. Select from Unknown, Normal, Warning, Critical, and Fatal.
Sort by
Specifies the column to sort the events by.
Order
Specifies whether to sort events in ascending or descending order.


7. (Optional) Click Schedule if you want to schedule the report.
The Schedule page opens.

8. (Optional) Create a schedule for the report (see page 198).
9. Click Publish.
The Publish page opens.
Note: You can click Execute on the Configuration page to run the report, but the report is not saved or published.

10. Enter publishing settings (see page 200) and click Finish.
The report is created and published to User Defined Reports or Published Reports, depending on your publishing settings. A confirmation message appears with the published destination.
Note: Ad-hoc reports are published to the User Defined Reports folder, and static reports are published to the Published Reports folder. For more information about ad-hoc and static report types, see Publish a Report.

You may need to refresh the administrative interface or close and re-expand the report's folder or the top-level folder (if the containing folder does not yet exist) for the created report to appear in the report tree.

Configure an Event Report


You can configure an existing destination database event report, whether it is user-defined or published, to run the report applying any changes that you specify without saving the changes. To save any configuration changes, you must publish a new report with these changes.

To configure an event report
1. Run a user-defined or published event report.
2. Click Configure on the report.
The Event Report Configuration wizard opens with the Select Presentation page displayed.
3. Make changes to the event report using the Event Report Configuration wizard (see page 182).
4. Click Execute.
The report runs and applies any changes you made to the configuration. You can schedule and publish a new report based on the changes you made using the Schedule and Publish buttons.


Administrative Reports
Administrative reports provide data about important entities such as catalogs, connectors, and policy files and administrative functions such as deployment and configuration change. You can use these reports to keep track of important product-specific data to verify the accurate implementation and solid health of your CA Event Integration environment.

The following configured administrative report categories are provided, represented as folders in the report tree:
Audit Reports
Provide information about operations performed related to policy files, catalogs, and deployment.
Policy Files Audit Report
Displays all policy files that have been modified, when the change occurred, and who made the change.
Catalog Files Audit Report
Displays all catalogs that have been created, modified, or deleted, when the change occurred, and who made the change.
Deployment Audit Report
Displays all deployment activity, when the activity occurred, and who initiated the activity.
Configuration Reports
Provide detailed information about catalogs and connectors.
Catalog Configuration Report
Displays all catalogs and their assigned policy.
Connector Configuration Report
Displays all connectors and their catalog assignments and deployment status.
Status Reports
Provide status information about connectors.
Connector Summary Report
Displays basic connector information.
Connector Detail Report
Displays the status of each component in a connector. This report resembles the content of the View Connector Details (see page 168) page.


The following administrative report templates are provided:
Catalog Audit Report Configuration
Lets you create a report that displays any changes related to catalogs. You can specify a catalog name and user to filter by, and you can view only certain operations, such as additions or deletions.
Deployment Audit Report Configuration
Lets you create a report that displays any deployment activity. You can specify a catalog, connector, and user name to filter the results by.
Policy Audit Report Configuration
Lets you create a report that displays any changes to policy files. You can specify a policy file and user name to filter by.

You can configure most predefined or created reports by clicking Configure, and you can export administrative reports into CSV or PDF format.

Run a Catalog Files Audit Report


The Catalog Files Audit report displays all catalogs that have been created, modified, or deleted, when the change occurred, and who made the change. Use this report for a view of all catalog operations in your environment.

To run a Catalog Files Audit report
1. Click the Reports tab.
2. Expand the Audit Reports folder.
The predefined audit reports appear.
3. Click Catalog Files Audit Report.
The Catalog Files Audit Report opens in the right pane.

The Catalog Files Audit Report contains the following columns:
ID
Specifies an ID number for each row in the report.
Catalog
Specifies the catalog name.
Operation
Specifies what operation was performed on the catalog. Valid values are insert, delete, and edit.


Modified
Specifies when the catalog was modified.
Modified by
Specifies the user who modified the catalog.

Create a Catalog Audit Report


The Catalog Audit Report Configuration template lets you create, run, and publish a catalog files audit report according to criteria that you specify. You can also create a new catalog files audit report based on an existing configured, published, or user-defined catalog files audit report. You can filter the report by catalog, user, or operation.

To create a Catalog Audit report
1. Do one of the following:

Expand the Report Templates folder and click Catalog Audit Report Configuration.
Run an existing catalog files audit report (either configured, user-defined, or published) and click Configure.

The Catalog Audit Report Configuration page opens. If you accessed the page from an existing report, it contains the values from that report.
2. Complete the Presentation pane using the following fields and drop-down lists:
Title
Specifies the report title.
Display
Specifies how many rows to display in the report.
Default: 10
Refresh
Specifies how often to refresh the report, if at all.
Default: None
3. Complete the Select Time Period pane using the following fields and drop-down lists:
Relative
Specifies what relative time period of data to include in the report. Use the Last field to select the last days or hours to include and the Rounded by drop-down list to specify how to round the time period.


Absolute
Specifies the absolute time period of data to include in the report. Use the Starting Date and Ending Date fields to specify an exact starting and ending date for report data from years to hours and minutes.
4. Complete the Select Filters pane using the following fields:
Catalog Name
Specifies a catalog name or expression that you want to filter the report by. Wild card characters are supported.
User
Specifies a user name to filter the report by. Only operations performed by this user are displayed in the report. Wild card characters are supported.
Operation
Specifies a catalog operation to filter the report by. For example, you can specify to display only changes to existing catalogs. Select from All, Add, Change, and Delete.
5. (Optional) Click Schedule if you want to schedule the report.
The Schedule page opens.
6. (Optional) Create a schedule for the report (see page 198).
7. Click Publish.
The Publish page opens.
Note: You can click Execute on the Configuration page to run the report, but the report is not saved or published.
8. Enter publishing settings (see page 200) and click Finish.
The report is created and published to User Defined Reports or Published Reports, depending on your publishing settings. A confirmation message appears with the published destination.
Note: Ad-hoc reports are published to the User Defined Reports folder, and static reports are published to the Published Reports folder. For more information about ad-hoc and static report types, see Publish a Report.

You may need to refresh the administrative interface or close and re-expand the report's folder or the top-level folder (if the containing folder does not yet exist) for the created report to appear in the report tree.
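The wild card support in the Catalog Name and User fields behaves like shell-style pattern matching. The sketch below illustrates the idea with Python's fnmatch; the catalog names are invented, and the exact wild card syntax the product accepts may differ:

```python
import fnmatch

# Invented catalog names; "nsm-*" matches any name beginning with "nsm-",
# analogous to a wild card expression in the Catalog Name filter field.
catalogs = ["nsm-default", "nsm-custom", "spectrum-default"]
matching = [name for name in catalogs if fnmatch.fnmatch(name, "nsm-*")]
print(matching)  # ['nsm-default', 'nsm-custom']
```

A pattern without wild cards simply matches the exact catalog name.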


Run a Deployment Audit Report


The Deployment Audit report displays all deployment activity, the connectors and catalogs involved, when the deployments occurred, and who initiated the activity. Use this report for a view of all deployment changes and details about each operation.

To run a Deployment Audit report
1. Click the Reports tab.
2. Expand the Audit Reports folder.
The predefined audit reports appear.
3. Click Deployment Audit Report.
The Deployment Audit Report opens in the right pane.

The Deployment Audit Report contains the following columns:
ID
Specifies an ID number for each row in the report.
Connector
Specifies the connector on which a deployment occurred.
Catalog
Specifies the name of the deployed catalog.
Deployed by
Specifies the user who deployed the catalog.
Deployed
Specifies when the deployment occurred.
Status
Specifies the current status of the deployment.

Create a Deployment Audit Report


The Deployment Audit Report Configuration template lets you create, run, and publish a deployment audit report according to criteria that you specify. You can also create a new deployment audit report based on an existing configured, published, or user-defined deployment audit report.


You can filter the customized report by catalog, connector, and user.

To create a deployment audit report
1. Do one of the following:

Expand the Report Templates folder and click Deployment Audit Report Configuration.
Run an existing deployment audit report (either configured, user-defined, or published) and click Configure.

The Deployment Audit Report Configuration page opens. If you accessed the page from an existing report, it contains the values from that report. 2. Complete the Presentation pane using the following fields and drop-down lists: Title Specifies the report title. Display Specifies how many rows to display in the report. Default: 10 Refresh Specifies how often to refresh the report, if at all. Default: None 3. Complete the Select Time Period pane using the following fields and drop-down lists: Relative Specifies what relative time period of data to include in the report. Use the Last field to select the last days or hours to include and the Rounded by drop-down list to specify how to round the time period. Absolute Specifies the absolute time period of data to include in the report. Use the Starting Date and Ending Date fields to specify an exact starting and ending date for report data from years to hours and minutes. 4. Complete the Select Filters pane using the following fields: Catalog Name Specifies a catalog name or expression that you want to filter the report by. Wild card characters are supported. Only deployment operations involving the specified catalogs are included in the report.


    Connector
        Specifies a connector name to filter the report by. Wild card characters are supported. Only deployment operations affecting the specified connectors are included in the report.
    User
        Specifies a user name to filter the report by. Only operations performed by this user are displayed in the report. Wild card characters are supported.
5.  (Optional) Click Schedule if you want to schedule the report.
    The Schedule page opens.
6.  (Optional) Create a schedule for the report (see page 198).
7.  Click Publish.
    The Publish page opens.
    Note: You can click Execute on the Configuration page to run the report, but the report is not saved or published.
8.  Enter publishing settings (see page 200) and click Finish.
    The report is created and published to User Defined Reports or Published Reports, depending on your publishing settings. A confirmation message appears with the published destination.

Note: Ad-hoc reports are published to the User Defined Reports folder, and static reports are published to the Published Reports folder. For more information about ad-hoc and static report types, see Publish a Report. You may need to refresh the administrative interface, or close and re-expand the report's folder or the top-level folder (if the containing folder does not yet exist), for the created report to appear in the report tree.

Run a Policy Files Audit Report


The Policy Files Audit report displays all policy files that have been modified through the administrative interface, when each change occurred, and who made the change.

To run a Policy Files Audit report

1.  Click the Reports tab.
2.  Expand the Audit Reports folder.
    All predefined audit reports appear.
3.  Click Policy Files Audit Report.
    The Policy Files Audit Report opens in the right pane.


The Policy Files Audit Report contains the following columns:

ID
    Specifies an ID number for each row in the report.
Policy File
    Specifies the name of the policy file.
Operation
    Specifies the operation performed on the policy file.
Modified
    Specifies when the policy file was modified.
Modified by
    Specifies the user who modified the policy file.
Created
    Specifies when the policy file was created, if applicable.
Created by
    Specifies the user who created the policy file, if applicable.

Create a Policy Audit Report


The Policy Audit Report Configuration template lets you create, run, and publish a policy files audit report according to criteria that you specify. You can also create a new policy files audit report based on an existing configured, published, or user-defined policy files audit report. You can filter the customized report by policy file name, user, policy type, and operation.

To create a policy audit report

1.  Do one of the following:
    -  Expand the Report Templates folder and click Policy Audit Report Configuration.
    -  Run an existing policy files audit report (either configured, user-defined, or published) and click Configure.
    The Policy Audit Report Configuration page opens. If you accessed the page from an existing report, it contains the values from that report.
2.  Complete the Presentation pane using the following fields and drop-down lists:
    Title
        Specifies the report title.


    Display
        Specifies how many rows to display in the report.
        Default: 10
    Refresh
        Specifies how often to refresh the report, if at all.
        Default: None
3.  Complete the Select Time Period pane using the following fields:
    Relative
        Specifies the relative time period of data to include in the report. Use the Last field to select the last days or hours to include and the Rounded by drop-down list to specify how to round the time period.
    Absolute
        Specifies the absolute time period of data to include in the report. Use the Starting Date and Ending Date fields to specify an exact starting and ending date for report data, from years down to hours and minutes.
4.  Complete the Select Filters pane using the following fields:
    Policy Name
        Specifies a policy file name to filter the report by. Wild card characters are supported. Only operations on policy files defined in this field are included in the report.
    Policy Type
        Specifies a type of policy to filter the report by. For example, you can include only source policy files in the report. Select from Source, Destination, and Enrichment.
    User
        Specifies a user name to filter the report by. Only operations performed by this user are displayed in the report. Wild card characters are supported.
    Operation
        Specifies an operation to filter the report by. Only the Change operation applies to this report.
5.  (Optional) Click Schedule if you want to schedule the report.
    The Schedule page opens.
6.  (Optional) Create a schedule for the report (see page 198).


7.  Click Publish.
    The Publish page opens.
    Note: You can click Execute on the Configuration page to run the report, but the report is not saved or published.
8.  Enter publishing settings (see page 200) and click Finish.
    The report is created and published to User Defined Reports or Published Reports, depending on your publishing settings. A confirmation message appears with the published destination.

Note: Ad-hoc reports are published to the User Defined Reports folder, and static reports are published to the Published Reports folder. For more information about ad-hoc and static report types, see Publish a Report. You may need to refresh the administrative interface, or close and re-expand the report's folder or the top-level folder (if the containing folder does not yet exist), for the created report to appear in the report tree.

Configure an Audit Report


You can configure an existing catalog files, deployment, or policy files audit report, whether it is predefined, user-defined, or published, to run the report with any changes you specify without saving those changes. To save configuration changes, you must publish a new report that includes them.

To configure an audit report

1.  Run an audit report.
2.  Click Configure.
    The Configuration page opens for the appropriate type of audit report.
3.  Make changes to the audit report using one of the following configuration pages:
    -  Catalog Audit Report Configuration (see page 187)
    -  Deployment Audit Report Configuration (see page 189)
    -  Policy Audit Report Configuration (see page 192)
4.  Click Execute.
    The report runs and applies any changes you made to the configuration.

You can schedule and publish a new report based on the changes you made using the Schedule and Publish buttons.


Run a Catalog Configuration Report


The Catalog Configuration report displays all catalogs and their assigned policy. Use this report to verify that each catalog has the correct policy assigned for the event processing that you want to perform.

To run a Catalog Configuration report

1.  Click the Reports tab.
2.  Expand the Configuration Reports folder.
    All predefined configuration reports appear.
3.  Click Catalog Configuration Report.
    The Catalog Configuration Report opens in the right pane.

The Catalog Configuration Report contains the following columns:

ID
    Specifies an ID number for each row in the report.
Catalog
    Specifies the catalog name.
Policy Type
    Specifies the policy type for each policy file.
Policy File
    Specifies the name of each individual policy file assigned to the catalog.

Run a Connector Configuration Report


The Connector Configuration report displays all existing connectors, the catalogs assigned to each connector, and each assigned catalog's deployment status. Use this report to keep track of every connector's catalog configuration.

To run a Connector Configuration report

1.  Click the Reports tab.
2.  Expand the Configuration Reports folder.
    All predefined configuration reports appear.
3.  Click Connector Configuration Report.
    The Connector Configuration Report opens in the right pane.


The Connector Configuration Report contains the following columns:

ID
    Specifies an ID number for each row in the report.
Connector
    Specifies the connector name, which corresponds to the name of the server on which it was installed.
Catalog
    Specifies the catalog assigned to the connector, if applicable.
Deployed
    Specifies whether the assigned catalog is deployed on the connector. If the catalog is deployed, the time of deployment appears in this column.

Run a Connector Detail Report


The Connector Detail report displays the status of each component on a connector. Use this report to view how CA Event Integration is performing on a connector server according to each piece of the configuration, such as integrations, destinations, and specific transformation modules.

To run a Connector Detail report

1.  Click the Reports tab.
2.  Expand the Status Reports folder.
    All predefined status reports appear.
3.  Click Connector Detail Report.
    The Connector Detail Report opens in the right pane.

The Connector Detail Report contains the following columns:

ID
    Specifies an ID number for each row in the report.
Name
    Specifies the component name. For example, Spectrum denotes that the CA Spectrum adaptor is being detailed. One component may have several rows to cover multiple metrics.
Status
    Specifies the status of the described metric. Valid values depend on the metric class being detailed.


Value
    Specifies the value of each metric. The units of each value depend on the class being detailed. For example, time-related classes such as ProcessorTime are listed in seconds, while event-related classes such as TotalEvents are listed in number of events.
Type
    Specifies the type of the component. Each policy module is of the Core type, and each integration is of the Ifw type.
Connector
    Specifies the connector being detailed.
Class
    Specifies the metric being detailed. Several metrics may be detailed for each component. For more information about each metric, see View Connector Details (see page 168).

Run a Connector Summary Report


The Connector Summary report provides basic connector status information. The data in this report is similar to what is displayed in the Connectors pane of the dashboard. Use this report to view each connector's current highest severity status and its assigned catalog, if applicable.

To run a Connector Summary report

1.  Click the Reports tab.
2.  Expand the Status Reports folder.
    All predefined status reports appear.
3.  Click Connector Summary Report.
    The Connector Summary Report opens in the right pane.

The Connector Summary Report contains the following columns:

ID
    Specifies an ID number for each row in the report.
Name
    Specifies the connector name, which corresponds to the name of the server on which it was installed.


Status
    Specifies the status of the connector's highest severity metric displayed in the Failed column. The status can be one of the following:
    -  Normal
    -  Idle
    -  Warning
    -  Down
    -  Critical
    -  Failed
Failed
    Displays the component, adaptor or module, and metric with the highest severity status. For example, if the event queue in the Classifier module has the highest severity status on the connector, Ifw - Classifier - QueueLength displays.

Schedule a Report
You can create a schedule for a report to run at specified intervals. Scheduling a report automates execution to provide up-to-date report data at a specified interval without having to manually run the report each time. You can specify an exact start date, time of day, a high-level interval for execution, and a schedule expiration date. You can schedule any existing report or a report you are creating from a template. Scheduled reports are published as static reports to the Published Reports folder.

To schedule a report

1.  Do one of the following:
    -  Run an existing report and click Configure.
    -  Expand the Report Templates folder and click a template.
    The appropriate configuration page opens.
2.  Configure the report settings as necessary and click Schedule when finished.
    The Schedule page opens.
    Note: Complete only the necessary fields on the Schedule page to create a schedule that uses the appropriate level of detail. None of the fields are required.


3.  Select a report schedule from one of the following options:
    Daily
        Runs the report at a daily interval. When you select this option, a days drop-down list appears for you to specify the number of days between executions. For example, if you select Daily and specify 3 in the drop-down, the report runs every three days.
    Weekly
        Runs the report at a weekly interval. When you select this option, check boxes appear for you to specify on which days of the week to run the report. For example, if you select Weekly and select the Mo and Fr check boxes, the report runs every Monday and Friday.
    Monthly
        Runs the report at a monthly interval. When you select this option, drop-downs appear for you to specify the number of months between executions and the day of the month on which to run. For example, if you select Monthly and specify 2 in the 'Every months' drop-down and 1 in the Day of Month drop-down, the report runs every two months on the first day of the month.
    By month
        Runs the report in specified months. When you select this option, check boxes appear for you to select the months in which to run the report, and a drop-down appears for specifying the day of execution. For example, if you select By month, select the Apr and Aug check boxes, and specify 2 in the Day of Month drop-down, the report runs in April and August on the second day of the month.
4.  Specify a start time and execution interval in the Time of Execution pane using the following drop-down lists:
    Start time
        Specifies the time of day to begin running the report. Use both drop-down lists to specify the hour and minute to begin execution.
    Execute every
        Specifies an execution interval within a day. For example, if you specify a start time of 2:00 AM and select 2 hours in the Execute every drop-down list, the report runs at 2:00 AM and every two hours afterward until the current day ends.
5.  Specify a schedule range using the following fields in the Range of Schedule Recurrence pane:
    Task Start Date
        Specifies the date to begin executing the report schedule.


    Task Expiration Date
        Specifies the date to stop executing the report schedule.
6.  Click Publish when finished.
    The Publish page opens.
7.  Enter publishing settings (see page 200) and click Finish.
    The report schedule is created. When the report runs for the first time, it publishes to the Published Reports folder.
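The interaction between the Start time and Execute every settings can be sketched in a few lines. This is a toy illustration of the scheduling arithmetic described above, not product code; the function name is invented:

```python
from datetime import datetime, timedelta

def intra_day_runs(start_hour, start_minute, every_hours):
    """List the times of day a report executes for one scheduled day,
    given a start time and an 'Execute every' interval in hours."""
    if every_hours <= 0:
        raise ValueError("interval must be positive")
    runs = []
    t = datetime(2000, 1, 1, start_hour, start_minute)  # dummy anchor date
    while t.day == 1:  # stop once the next run crosses into the next day
        runs.append(t.strftime("%H:%M"))
        t += timedelta(hours=every_hours)
    return runs

# The example from the text: start at 2:00 AM, execute every 2 hours.
print(intra_day_runs(2, 0, 2))  # 02:00, 04:00, ... through 22:00
```

As the text notes, execution stops when the current day ends, which is why a 2:00 AM start with a 2-hour interval yields a final run at 10:00 PM.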

Publish a Report
You can publish a report to save it to a location on the report tree. You can publish reports that you are creating from a template, existing reports that you are editing, and reports that you want to create from an existing report's configuration. When you publish a report, you specify its name, report type, expiration date, control permissions, and any email addresses for notifications. You can view published reports from the Reports tab and the WRS interface.

To publish a report

1.  Do one of the following:
    -  Run an existing report and click Configure.
    -  Expand the Report Templates folder and click a template.
    The appropriate configuration page opens.
2.  Configure the report settings as necessary.
3.  (Optional) Click Schedule.
    The Schedule page opens.
4.  (Optional) Create a report schedule (see page 198).
5.  Click Publish.
    The Publish page opens.
6.  Specify the report name and type using the following fields:
    Name as
        Specifies the report name.


    Save as Ad-hoc report
        Saves the report definition and publishes these settings in the User Defined Reports folder. When a user clicks the report name, a report is generated from these settings using current data.
        Note: This option is not available for a report that you have scheduled. Scheduled reports must be saved as static reports.
    Save as Static report
        Runs the report and saves the generated data to the Published Reports folder. When a user clicks a static report, the previously generated data appears. If the report is on a schedule, the data from the last execution appears. Select one or both of the following 'Using rules' options to define how static reports are maintained:
        -  Select Refresh Latest Copy to overwrite the previous report data every time the static report executes.
        -  Select Keep History to save a new instance of the report data every time the static report runs. Each new instance is time stamped and placed as a sub-heading under the report name in the report tree with "History" after its name.

7.  Complete the publishing settings using the following panes as necessary:
    Time of Expiration
        Specifies when the report expires, if at all. The report disappears from the tree when it expires. Select Never if you do not want the report to expire. Select On and specify an expiration date if you want the report to expire, or if you want it to expire a specific period of time after generation.
    Control Permissions
        Lists operations that can be performed on the report. You can specify whether to allow or prohibit configuring, scheduling, and publishing.
        Note: Allow configuring is not available for reports for which you have created a schedule, because scheduled reports contain static data.
    Email Notification
        Specifies the email addresses to notify about the published report. You can specify multiple email addresses by separating them with a comma or space. When the report is published, a notification is sent to all included addresses.
        Note: The WRS email notification functionality must be configured in Unicenter Management Portal. If you do not have UMP installed to configure an email server, this field does not work.
8.  Click Finish.
    A confirmation page opens, and the report publishes. Ad-hoc reports publish to the User Defined Reports folder, and static reports publish to the Published Reports folder.
    Note: If you created a report schedule, the report publishes at the first scheduled execution.

Delete a Published Report


Published reports are stored as files in the WRS repository folder. To delete a published report, navigate to EI_HOME\ThirdParty\CA\WRS\webpages\repository\Enterprise Management\EMAA Reports\Published Reports and delete the file that represents the report.
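Because deletion is simply removing a file from the repository folder named above, it can be scripted. The following is a hypothetical sketch (the function name and the ei_home argument are ours; substitute your installation's EI_HOME and the actual report file name):

```python
import os

# Subfolders of EI_HOME holding published reports, per the path above.
PUBLISHED_SUBDIRS = ["ThirdParty", "CA", "WRS", "webpages", "repository",
                     "Enterprise Management", "EMAA Reports",
                     "Published Reports"]

def delete_published_report(ei_home, report_file):
    """Delete one published report file from the WRS repository.
    Returns True if the file existed and was removed."""
    path = os.path.join(ei_home, *PUBLISHED_SUBDIRS, report_file)
    if os.path.isfile(path):
        os.remove(path)
        return True
    return False
```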

Export a Report to a PDF or CSV File


You can export any configured, published, or user-defined report to a PDF or CSV file to save it in these formats outside the context of the product.

To export a report to a PDF file

1.  Run a report.
2.  Click PDF Format.
    The report opens in PDF format in the interface. You can print or save the file to disk from this page.
    Note: You must have Acrobat Reader installed for the PDF to appear in the interface. If Acrobat Reader is not installed, the PDF does not appear in the interface, but you can save the file.

To export a report to a CSV file

1.  Open a report.
2.  Click CSV Format.
    A dialog opens asking whether you want to save the file.


    Note: You must have Microsoft Excel installed for the file to appear in the interface. If Excel is not installed, the file does not appear in the interface, but you can save the file.
3.  Click Save, select the save destination for the file, and click Save again.
    The file downloads, and a download complete dialog opens. Click Open if you want to view the file.


Chapter 6: Troubleshooting and Verification


This chapter describes troubleshooting and verification techniques using log files, trace files, test tools, and other methods.

This section contains the following topics:

-  Log Files (see page 205)
-  Event Flow (see page 206)
-  Deployment Troubleshooting (see page 208)
-  View Unclassified Events (see page 212)

Log Files
The EI_HOME\Logs directory contains log files that record the operation of the product's components, including installation, event processing, reporting, and web services activity. The following log files are available for verification and troubleshooting:

axis2.log
    Logs web services activity, connector registration, and web services eventing subscriptions. Use this log file to troubleshoot errors or verify the correct operation of any of these functions. This file is created during manager and connector installations.
core.log
    Logs event flow from the Inbox to the Outbox. Use this file to verify the correct processing of events in the core and to troubleshoot processing problems. This file is created during a connector installation.
eplusd.log
    Logs event flow for native adaptors from the source to the Inbox and from the Outbox to the destination. Use this file to verify the correct collection and transmission of events performed by native adaptors. This file is created during a connector installation.
ifw.log
    Logs event flow for Java framework adaptors from the source to the Inbox and from the Outbox to the destination. Use this file to verify the correct collection and transmission of events performed by Java adaptors. This file is created during a connector installation.


install.log
    Logs any post-file-copy installation activity, including errors such as database creation failures, connector registration problems, and so on. Use this file for installation troubleshooting. This file is created during manager and connector installations.
TNDPortal.log
    Logs reporting activity. Use this file to troubleshoot problems with the reports in the administrative interface. This file is created during a manager installation.
tomcat.log
    Logs administrative interface servlet engine activities. This file is created during a manager installation.
unclassified-events.log
    Logs events that are implicitly filtered because the core could not classify them, or explicitly filtered due to filtering policy. Use this file to view the events that are not being processed.

Five generations of each log file are retained at one time (except for install.log). The core, ifw, eplusd, and axis2 logs roll when you start their corresponding services.
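Because the core, ifw, eplusd, and axis2 logs roll when their services start, a quick look at each file's size and modification time can confirm that a log actually rolled after a restart. A small sketch (ours, not shipped with the product):

```python
import os
import time

def summarize_logs(logs_dir):
    """Map each .log file in logs_dir to its (size, last-modified) pair,
    so you can confirm a log rolled when its service restarted."""
    summary = {}
    for name in sorted(os.listdir(logs_dir)):
        if name.endswith(".log"):
            path = os.path.join(logs_dir, name)
            info = os.stat(path)
            summary[name] = (info.st_size, time.ctime(info.st_mtime))
    return summary

# Usage: print(summarize_logs(os.path.join(ei_home, "Logs")))
# where ei_home is your EI_HOME installation directory.
```

A freshly rolled log shows a small size and a modification time close to the service restart.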

Event Flow
After you configure and deploy a catalog, events are collected, processed, and dispatched by the following process:

1.  Source adaptors collect events from the defined event sources. Some events are pulled or queried from the event source (such as CA NSM and Windows Event Log events), whereas others are pushed from the event source (such as CA Spectrum and SNMP traps).
2.  The integration framework converts the collected events into property and value pairs that the core uses for processing, sorts the events into files (.in) based on the originating source, and puts the files in a buffered input directory, named the Inbox.
3.  The core processing modules pick up .in files from the Inbox and begin processing by reading the events.
4.  The events pass through each module in the core as they are classified, parsed, normalized, filtered, consolidated, enriched, evaluated, and formatted to transform the event syntax into a normalized format with uniform event grammar and then transform the events into the destination schema.
5.  The core writes the reformatted events (using property and value pairs) to an output directory, named the Outbox. The events are sorted into files (.out) based on their destination.
6.  The destination adaptors pick up .out files from the Outbox and deliver the events to their specified destinations.

How to Enable Event Flow Tracing


Use the eplusd, ifw, and core log files to track the event flow in CA Event Integration. Tracking the event flow lets you pinpoint the source of problems, view how events are processed, see implicitly filtered events, and verify event collection and transmission.

Event processing logs trace the flow of events from collection to processing to transmission as follows:

1.  The eplusd.log file logs events collected in the Inbox through native source adaptors, and the ifw.log file logs events collected in the Inbox through Java framework source adaptors. These files record adaptor initialization and connection information, event collection, and input properties and values for collected events.
2.  The core.log file logs the event flow from the Inbox to the Outbox. This file records catalog deployment, event transmission from each module to the Outbox, and the event processing operations performed by each core module.
3.  The eplusd.log file logs events transmitted from the Outbox to their destinations through native adaptors, and the ifw.log file logs events transmitted from the Outbox to their destinations through Java framework adaptors. These files record how the event fits into the destination and the actual event transmission.

For example, you would track an event collected from CA NSM and dispatched to CA Spectrum as follows:

-  Check the eplusd.log file to verify that the event reached the Inbox (because CA NSM events are collected using a native source adaptor).
-  Check the core.log file to verify that the event was processed according to the assigned policy and sent to the Outbox.
-  Check the ifw.log file to verify that the event was sent from the Outbox to CA Spectrum (because events are sent to CA Spectrum using a Java destination adaptor).

Each log file requires specific settings to trace event collection, processing, and transmission. Complete the following process to verify that your system is configured to enable event flow tracing:

1.  Verify that the tracein and traceout fields are set to "on" for each policy file whose adaptor you want to trace. These values enable event tracing in the eplusd.log and ifw.log files for source (tracein) and destination (traceout) adaptors. Edit or verify this setting by clicking each policy file on the Policies tab of the administrative interface.


    Note: If you want to trace events from specific sources only, switch the tracein or traceout setting to "off" to disable tracing for event sources or destinations that you do not need to trace.
2.  Restart the CA EI IFW service or caeiifw daemon and redeploy catalogs whose policy you modified.
3.  Open log4j.properties at EI_HOME\Core\conf and change the value of the rootLogger property as follows:
log4j.rootLogger=debug, stdout

    This enables detailed tracing for all modules in the core.log file.
4.  Save and close the file, and restart the CA EI CORE service or caeicore daemon.
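With tracing enabled, the three-log walk described above (eplusd.log or ifw.log for collection, core.log for processing, eplusd.log or ifw.log for dispatch) can be mechanized. The following is a hypothetical sketch; the marker is whatever string uniquely identifies your event, such as its message text:

```python
def trace_event(marker, logs):
    """Given a mapping of log file name -> log text, return the names
    of the logs in which the event marker appears, in pipeline order,
    to show how far the event made it through processing."""
    order = ["eplusd.log", "ifw.log", "core.log"]
    return [name for name in order if marker in logs.get(name, "")]

# Example: an event that was collected and processed but never dispatched
# appears in eplusd.log and core.log only.
```

If the marker appears in a collection log but not in core.log, the event reached the Inbox and was then filtered or mishandled by the core; if it never appears at all, the source adaptor did not collect it.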

Deployment Troubleshooting
Deploying a catalog merges all of the catalog's policy into one catalog.xml file, restarts the core and IFW, and starts adaptors collecting events from sources and sending them to the Inbox for core processing. The deployment status appears on the administrative interface as Pending after you deploy a catalog. You must refresh the Connectors tab to view the updated deployment status, which should be Completed if the deployment was successful. Use the following methods to troubleshoot issues, such as a failure message or no events appearing at the specified destination after deployment:

Service restart errors

The CA EI IFW and CA EI CORE services should restart after each deployment. Open the EI_HOME\Logs\eplusd.log and core.log files and verify that the IFW and Core services were recycled after catalog deployment. Both files should be new versions, because the files roll each time the services restart. Run the deployment again if the services did not recycle properly.

Connector registration errors

The IFW re-registers the connector with the manager after each deployment. Check the eplusd.log file to verify that the connector successfully registered with the manager. If the registration fails, the deployment cannot run. You can manually register the connector (see page 312) if necessary.

Catalog and policy errors

In a successful deployment, catalog.xml is properly merged and compiled to guide event processing. To verify the existence of the catalog.xml file, navigate to EI_HOME\core\CatalogPolicy. Check the size, timestamp, and contents of the file to verify that it was created at the deployment time and contains the appropriate policy. If no file or an old file appears, redeploy the catalog.


One potential cause of a catalog merge failure is the presence of two or more policy files associated with the same source or destination adaptor. For example, you cannot include two CA NSM source policy files in the same catalog. This scenario is only possible if you create custom policies based on existing policy files. Remove one of the policies from the catalog and redeploy. Even if the catalog exists in the appropriate place, corrupted catalog policy can prevent the core from processing events. For example, if you create a new policy file or customize existing policy and you leave out a close (</>) XML tag, the entire catalog file is corrupted. To check for policy errors, enable tracing (see page 207) in the core.log file, open the core.log file, and search for an error similar to 'schema validation failed for catalog.xml.' The file also provides detailed information about the exact policy error. Fix the policy and redeploy to begin processing events.
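A missing close tag of the kind described above makes catalog.xml ill-formed XML, which any standard XML parser can detect before you redeploy. A quick sketch (our helper, not a replacement for the core's own schema validation):

```python
import xml.etree.ElementTree as ET

def xml_error(path):
    """Return None if the file at path is well-formed XML, otherwise
    a string describing the parse error (including line and column)."""
    try:
        ET.parse(path)
        return None
    except ET.ParseError as exc:
        return str(exc)

# Usage: err = xml_error(os.path.join(ei_home, "core", "CatalogPolicy",
#                                     "catalog.xml"))
# where ei_home is your EI_HOME installation directory.
```

Note that well-formedness is a weaker check than schema validity: a file can pass this check and still fail the core's 'schema validation failed for catalog.xml' error, so treat this only as a first-pass test for broken tags.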

Integration errors

The IFW adaptors control event collection and dispatching. Adaptor connection problems with the event sources could prevent events from entering or leaving the core. To verify event collection and transmission, stop the CA EI CORE service or caeicore daemon and check the EI_HOME\core\Inbox directory for .in files that contain collected events. The eplusd.log file should also contain tracing information for all collected events. If you see no .in files with collected events in the Inbox, generate events using the test suite (see page 209) to see whether they are collected. If events are not being collected, check eplusd.log or ifw.log for adaptor initialization errors. An external source may be down, or you may have incorrectly configured policy attributes required to connect with a source. Note: If you configured your catalog to send events to the manager database, you can also check the reports in the administrative interface to verify that events are flowing correctly.
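With the CA EI CORE service stopped, the Inbox check described above can be scripted. A sketch (our helper) that lists any waiting .in files and their sizes:

```python
import glob
import os

def pending_inbox_files(inbox_dir):
    """Return (file name, size in bytes) for each .in file waiting in
    the Inbox; an empty list suggests no events are being collected."""
    paths = sorted(glob.glob(os.path.join(inbox_dir, "*.in")))
    return [(os.path.basename(p), os.path.getsize(p)) for p in paths]

# Usage: print(pending_inbox_files(os.path.join(ei_home, "core", "Inbox")))
# where ei_home is your EI_HOME installation directory.
```

Run this after generating events with the test suite: nonzero-size .in files confirm that the adaptors are collecting; an empty result points to an adaptor initialization or connection problem in eplusd.log or ifw.log.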

Performance or component errors

Check the connector detail metrics (see page 168) from the administrative interface if you notice a slow event flow or missing events. If a specific adaptor or core module is causing problems, the status of its metrics will be abnormal, or there will be no collected metrics. Check the log files for more information if you find that a component is performing poorly.

Generate Events Using the IFW Test Suite


The IFW test suite lets you test adaptor connections and event collection in your environment. You can use the IFW test suite to verify event collection and transmission and troubleshoot adaptor problems.

Chapter 6: Troubleshooting and Verification 209

Deployment Troubleshooting

The IFW test suite located at EI_HOME\Ifw\TestSuite lets you generate events for the following sources:

- CA NSM
- Windows Event Log
- SNMP traps
- Application logs
- Web services eventing

There are sample .in and .out files for generating events and commands that use these files to place events into CA Event Integration. Generating events lets you test whether a deployed catalog configuration is collecting events from a source.

Note: The IFW test suite is available on Windows only.

To generate events using the test suite

1. (Optional) Stop the CA EI CORE service if you want to verify that CA Event Integration is collecting events from a source.
2. Open a command prompt and navigate to EI_HOME\Ifw\TestSuite. Run the dir command to see the files and programs available from this directory.
3. Enter a command name for a source followed by the corresponding .in file name. For example, to generate CA NSM events, you could enter the following:
unieventsrctest UniEvent.in

The test suite generates ten CA NSM events that are listed in the UniEvent.in file. The events should appear in the CA NSM Event Console, and if you configured a catalog to collect CA NSM events, the CA NSM adaptor should collect the events and place them in the Inbox for processing. If you stopped the core, the events will remain in the Inbox in .in files for you to examine. After you start the core, it processes the events according to catalog policy.

Test Catalog Configurations Using the Core Test Suite


The core test suite lets you test catalog configurations before you deploy them in your environment. Use the core test suite to verify that the core correctly processes events from certain sources and dispatches them to certain destinations using the appropriate policy.

The core test suite located at EI_HOME\Core\TestSuite contains a Sample catalog, .in file, and .out file. You can copy or modify these files to test various scenarios. For example, you can configure a test that deploys a test Windows Event Log to CA Spectrum catalog so that you can view how these events are processed before actually collecting events from Windows and sending them to CA Spectrum.

Note: The core test suite is available on Windows only.

210 Product Guide


To test catalog configurations using the core test suite

1. Open a command prompt and navigate to EI_HOME\Core\TestSuite.
2. Make a copy of (or modify) the Samp.cat, Samp.in, and Samp.out files.
   Samp.cat specifies which policies to deploy. For example, to deploy a Windows Event Log source with CA Spectrum destination, make the following changes to Samp.cat:
Spectrum filter=
..\..\Manager\PolicyStore\sources\syslog-src.xml
..\..\Manager\PolicyStore\destinations\spectrum-dest.xml

Samp.in specifies which events to send through the Core. For the above example, you may include a few sample Windows Event Log events, such as the following:
syslog_eventid=256
syslog_category=
syslog_msg=eventid=1234; timegen=1200418651; category=category; msg=This is a test message 1; node=server1; source=source
syslog_node=server1
syslog_severity=
syslog_source=CA Event Integration
syslog_timegen=1200418651
syslog_user=N/A
eventtype=WinSysLog

Samp.out represents the expected results. For the above example, you could modify this file to include expected CA Spectrum result events.

3. Enter runcoretest followed by a .cat file name for the catalog configuration to test. For example, run the following command to test the Windows Event Log to CA Spectrum configuration above:
runcoretest Samp.cat

A catalog with Windows Event Log source and CA Spectrum destination policy is deployed, and the core processes the example .in file according to the policy.


View Unclassified Events



When the core is unable to classify a received event, it cannot process the event and does not dispatch it to its destination. Events are unclassified if they are not classified correctly in the event source's policy file. CA Event Integration stores all unclassified events in a log file, so that you can review these events and refine the source policy to ensure that all important events are processed and dispatched appropriately.

To view unclassified events, navigate to EI_HOME\Logs and open the unclassified-events.log file. This file lists all unclassified and filtered events.


Appendix A: Upgrades and Migration


This appendix contains information about upgrade scenarios, how to perform an upgrade, and migration considerations.

This section contains the following topics:

Supported Upgrades (see page 213)
Perform an Upgrade (see page 214)
Upgrade a Connector on Solaris or Linux (see page 215)
Migration Considerations (see page 216)

Supported Upgrades
The following upgrade scenarios are supported between releases and versions of CA Event Integration:

- CA Event Integration r1.2 without a CA Spectrum license to CA Event Integration r1.2 with a CA Spectrum license
- Any version of CA Event Integration r1.2 to CA Event Integration r1.3, r1.3.1, r2.0, and r2.5
- CA Event Integration r1.3 to CA Event Integration r2.0 and r2.5
- CA Event Integration r1.3.1 to CA Event Integration r2.0 and r2.5 (connector only)
- CA Event Integration r2.0 to CA Event Integration r2.5

When you upgrade from any previous CA Event Integration version, you must enter a license key to obtain the CA Spectrum functionality. Upgrades from a previous CA Event Integration for CA Spectrum version to CA Event Integration without a CA Spectrum license are not supported. The installer forces you to enter a license key for CA Spectrum functionality in this scenario.

Use the installer to perform all upgrades. If the installer detects an old release or different version of the product, it installs the new version over the old version while retaining all existing configuration settings. Refreshing an installation of the same version and release number is also supported, and you can add a license for CA Spectrum functionality during a refresh.

Note: For CA Event Integration migration considerations when installed as a component of CA Spectrum SA, see the CA Spectrum SA documentation.


Perform an Upgrade

When you upgrade CA Event Integration, you must upgrade the manager and all connectors. The installation program detects the software present on the server on which you run the installation and upgrades that software only. Upgrade the manager before you upgrade any remote connectors. If a connector exists on the manager server, it is upgraded at the same time.

All custom adaptors and policy are exported from your existing installation and imported into the new version, and you have the option of re-deploying all currently deployed catalogs with updated policy. When you add a license key to an unlicensed version during a refresh, the CA Spectrum functionality is enabled while retaining all previous configuration settings.

Note: You cannot change any installation settings during an upgrade. If you want to change the installation location, database settings, install set, or any other setting, you must uninstall the current version before installing the new version.

To perform an upgrade

1. Double-click InstallEI.exe from the root directory of the installation image.
   The Introduction page of the installation wizard opens.
2. Click Next.
   The License Agreement page opens.
3. Scroll to the bottom of the agreement, select 'I accept the terms of the License Agreement', and click Next.
   The Previous Install Detected dialog opens. This dialog lists the existing version detected on your server.
4. Click Continue to proceed with the upgrade.
   The installer stops all running CA Event Integration services and exports all policies from the previous release. The Check License Key page opens.
5. Do one of the following:

- Enter the license key required to enable integration with CA Spectrum and click Next.
- Leave the field blank and click Next to install all standard functionality without CA Spectrum integration.

Note: If you are upgrading from a previous CA Event Integration for CA Spectrum version and you do not enter a license key, a dialog appears that prevents you from proceeding with the upgrade until you enter a license key for CA Spectrum functionality.

The Pre-Installation Summary page opens.



6. Review the summary and click Install.
   The upgrade initializes. All custom policy and adaptors are imported. The Redeploy All Catalogs dialog opens.
7. Click Redeploy to re-deploy all previously deployed catalogs with the updated policy, or click Skip to not re-deploy and continue the upgrade.
   The Install Complete page opens.

Upgrade a Connector on Solaris or Linux


You can upgrade a connector installed on Solaris or Linux. Upgrade the manager before upgrading the Solaris or Linux connector. You must run the installer from an xterm.

Note: You cannot change any installation settings during an upgrade.

To upgrade a connector on Solaris or Linux

1. Copy the connector installation file (InstallEI.bin.Solaris or InstallEI.bin.Linux) from the installation image to a local temporary folder.
2. Verify that the file possesses the appropriate root ownership and execute permissions as follows:
chown root ./InstallEI.bin.Linux
chmod 555 ./InstallEI.bin.Linux

3. Start the installer from an xterm as follows:

   Linux
   ./InstallEI.bin.Linux

   Solaris
   ./InstallEI.bin.Solaris

   Preface the command with the appropriate file path for the installer location on the Solaris or Linux system, if necessary.
   The Introduction page of the installation wizard opens.
4. Click Next.
   The License Agreement page opens.
5. Scroll to the bottom of the agreement, select 'I accept the terms of the License Agreement', and click Next.
   The Previous Install Detected dialog opens. This dialog lists the existing version detected on your server.



6. Click Continue to proceed with the upgrade.
   The installer stops all running CA Event Integration services and exports all policies from the previous release. The Check License Key page opens.
7. Do one of the following:
   - Enter the license key required to enable integration with CA Spectrum and click Next.
   - Leave the field blank and click Next to install all standard functionality without CA Spectrum integration.
   Note: If you are upgrading from a previous CA Event Integration for CA Spectrum version and you do not enter a license key, a dialog appears that prevents you from proceeding with the upgrade until you enter a license key for CA Spectrum functionality.
   The Pre-Installation Summary page opens.
8. Review the summary and click Install.
   The upgrade initializes.

Migration Considerations
When you migrate from one release of CA Event Integration to another, the following items are automatically retained:

- Custom policy and adaptors
- Policy attribute settings
- Database user and connection settings
- Services user settings
- Created catalogs
- Catalog deployments
- Events in the manager database
- Administrative interface user name

During the upgrade, you have the option of redeploying all currently deployed catalogs with updated policy. If you do not re-deploy during the upgrade, you must do so manually to apply any policy updates to catalogs. Check the policy attribute settings on the Policies tab before re-deploying: although all settings are maintained from the previous installation, new attributes may be present that require action.



How to Migrate from a Windows Connector to a Solaris or Linux Connector


You cannot directly upgrade from a previous version of a Windows connector to a Solaris or Linux connector. You can, however, migrate the operations currently performed by the Windows connector to the Solaris or Linux connector as follows:

1. Install a connector on a Solaris or Linux system, and register it with the same manager as the Windows connector that you want to replace.
2. Open the CA Event Integration administrative interface, and assign and deploy the catalog currently deployed on the Windows connector to the new Solaris or Linux connector.
   Note that connectors support only the following integrations on Solaris and Linux:

- CA Spectrum source and destination
- CA Spectrum SA source and destination
- Web services eventing source
- SNMP traps source
- CA Event Integration forwarding destination

If the catalog contains policy for integrations not supported on Solaris and Linux, you cannot deploy it on the Solaris or Linux connector. In this case, you must either remove the Windows-only integrations from the catalog or create a new catalog with supported integrations only.

3. Uninstall the Windows connector if all integrations can be migrated to the Solaris or Linux connector and you no longer want to operate the Windows connector. You can retain the Windows connector if you need to use integrations that are only supported on Windows.

Migrate from CA Spectrum 8.1 to 9


CA Event Integration uses one set of CA Spectrum policy files to integrate with all supported versions of CA Spectrum. However, if you have created or deployed catalogs using CA Spectrum policy files with CA Spectrum 8.1 and you upgrade to 9.0 or later, you must specify in the policy files which version of CA Spectrum you are integrating with, so that CA Event Integration uses the appropriate CA Spectrum adaptor.

To migrate CA Event Integration from CA Spectrum 8.1 to 9.0 or later

1. Open the administrative interface and click the Policies tab.
   The View Policies page opens.
2. Click spectrum-src.xml.
   The Policy Configuration: Spectrum page opens.



3. Set the plugin_version field to one of the following and click Save:
   90
   Integrates with CA Spectrum 9.0 or 9.1.
   92
   Integrates with CA Spectrum 9.2.
   The CA Spectrum source policy file will integrate with the adaptor for the appropriate version of CA Spectrum.
4. Return to the View Policies page, click spectrum-dest.xml, and repeat Step 3.
   The CA Spectrum destination policy file will integrate with the adaptor for the appropriate version of CA Spectrum.
5. Click the Connectors tab.
   The View Connectors page opens.
6. Re-deploy all currently deployed catalogs that contain CA Spectrum policy.
   After successful deployment, all connectors will integrate correctly with the appropriate version of CA Spectrum.

How to Migrate SNMP Policies to Java SNMP Adaptor


Previous releases of CA Event Integration used an SNMP adaptor based on C++ named SNMPplugin.dll. The current SNMP adaptor is Java-based, which lets you configure the port through which the adaptor receives SNMP traps. The ability to configure the SNMP port helps you avoid conflicts with other SNMP managers that may also be using the Windows SNMP listener port (which SNMPplugin.dll also uses).

The default policy files that use the SNMP adaptor migrate to the Java-based adaptor by default. However, if you created any custom policy, you must manually switch the policy to use the Java adaptor.

Complete the following process to migrate SNMP policies to the Java SNMP adaptor:

1. Stop the CA EI CORE and CA EI IFW services.
2. Open the custom policy file from EI_HOME\Manager\PolicyStore\sources.
3. Change the Plugin name attribute value to SNMPAdapter.
4. Add the following entry in the <Configure> section:
<entry name="port" type="readWriteText" value="9999"/>

5. Change the value attribute value in the <entry name="plugin" entry to SNMPAdapter.jar.



6. Replace the entire <enventry name="" entry in the <Environment> section under <EnvComponent name="Ifw"> with the following entry:
<enventry name="wrapper.java.classpath.3" value="../ThirdParty/Apache/snmp/snmp4j-1.10.2.jar;" />

7. Save and close the file.
8. Restart the CA EI Tomcat and CA EI AXIS2 services.
9. Open the administrative interface, open the policy file on the Policies tab, and configure the port number through which to receive traps in the port field.

10. Redeploy any currently deployed catalog with the policy assigned.
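If you have many custom policy files, the XML edits in steps 3 through 5 above can be scripted. The following sketch applies those edits with Python's xml.etree.ElementTree; the policy fragment is a hypothetical minimal example rather than a complete CA Event Integration policy file, so adapt it to the structure of your actual custom policy.

```python
# Sketch: apply the SNMP policy edits from steps 3-5 with ElementTree.
# The policy fragment below is a hypothetical, minimal example.
import xml.etree.ElementTree as ET

policy = """<Policy>
  <Configure>
    <Plugin name="SNMPplugin">
      <entry name="plugin" type="readOnlyText" value="SNMPplugin.dll"/>
    </Plugin>
  </Configure>
</Policy>"""

root = ET.fromstring(policy)
plugin = root.find("./Configure/Plugin")
plugin.set("name", "SNMPAdapter")               # step 3: rename the plugin
ET.SubElement(plugin, "entry",                  # step 4: add the port entry
              {"name": "port", "type": "readWriteText", "value": "9999"})
for entry in plugin.findall("entry"):           # step 5: point at the jar file
    if entry.get("name") == "plugin":
        entry.set("value", "SNMPAdapter.jar")

print(ET.tostring(root, encoding="unicode"))
```

Step 6 (the <Environment> classpath entry) can be handled the same way, and the file then written back with ElementTree's write method before redeploying.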


Appendix B: Writing Adaptors


This section contains the following topics: Adaptor Overview (see page 221) Adaptors Provided (see page 221) Adaptor Creation (see page 222) Adaptor Internals (see page 223) Adaptor Coding and Implementation (see page 227) Adaptors and Policy (see page 231)

Adaptor Overview
Adaptors establish the entry and exit points for events processed by CA Event Integration. They are responsible for extracting events from their sources and sending those events to the core for processing. When the core is finished processing the events, separate adaptors dispatch them to their destinations.

Source adaptors, also called In adaptors, extract events from their data sources, and the integration framework transfers events to and from the core, where events are processed, using a file cache. Destination adaptors, also called Out adaptors, pick up processed events from an output file cache and dispatch them to their destinations.

Adaptors Provided
CA Event Integration provides the following adaptors: Source adaptors:

- CA NSM (Windows only)
- CA Spectrum
- CA Spectrum SA
- Windows Event Log (Windows only)
- Log Reader (Windows only)



- SNMP traps
  The SNMP adaptor facilitates the following specific SNMP-based product integrations:
  - CA OPS/MVS Event Management and Automation
  - CA SYSVIEW Performance Management
  - HP Business Availability Center
- Web Services Eventing
- CA Catalyst connector framework

Destination adaptors:

- CA NSM (Windows only)
- CA Spectrum
- CA Spectrum SA
- CA Event Integration (for forwarding events to other connectors)
- Windows Event Log (Windows only)
- Manager Database (Windows only)

These adaptors integrate seamlessly with their sources and require no advanced customization. You need not interact with the provided adaptors at all; when you assign policy to a catalog, that policy automatically triggers the appropriate adaptors to establish integrations.

Adaptor Creation
You can write new adaptors for any event source that CA Event Integration does not provide an adaptor for. Events of interest reside in various data sources, and there is value in exposing these events for processing. When an event is delivered to the core, it can be processed and made available to the destination of choice.

You can write a source adaptor, or In adaptor, for any event source, such as a proprietary application, to extract events from that source into the core for processing. From there, you can route the events to any mapped destinations. Similarly, you can write a destination adaptor, or Out adaptor, to dispatch events from any source to an event destination that CA Event Integration does not provide a destination adaptor for.

Writing adaptors can increase the flexibility of the product to help you create a complete, unified event management environment tailored to the specific needs of your enterprise. Write adaptors using the provided SDK, which includes a class that contains the adaptor coding framework.



Adaptor Internals
Before writing adaptors, you must understand the details about how adaptors work internally with the rest of the event processing engine. You write an adaptor in C++ or Java in the form of a loadable library. An In and Out adaptor for the same source can reside in the same library file, or a library can contain only an In or Out adaptor.

After the adaptor is executed, data from the adaptor passes to the framework, which organizes events into files and puts them into a file cache, where they are picked up by the core.

How Adaptors are Located and Executed


After you write, build, and compile your adaptor, you must put it on the manager server under the EI_HOME\Manager\AdaptorStore directory in the form of a library file. When a deployment occurs, the adaptors in this directory (as determined by the deployed catalog) are sent to the connector and stored in the EI_HOME\Ifw\Plugins directory.

Both C++ and Java adaptors are supported and run in their own daemons on the connector server. Java.exe houses the Java adaptors, and C++ adaptors are run by the eplusd.exe process. CA Event Integration installs a Java Runtime Environment and the eplusd daemon (eplusd.exe).

The eplusd process locates, processes, and executes C++ adaptors on connector startup as follows:

1. When started, eplusd locates C++ adaptors by looking in the EI_HOME\Ifw\Plugins directory for file names that include ".dll". These library files are loadable.
2. After all files are located, the process attempts to get the procedure addresses of PlugInMainIN() and PlugInMainOUT() from each library file. If PlugInMainIN() is found, the file is treated as an In adaptor, and if PlugInMainOUT() is found, the file is treated as an Out adaptor. An adaptor may function as both an In and Out adaptor. If eplusd finds neither of these procedures, the library is ignored.
3. After determining the adaptor type, eplusd launches the PlugInMainIN() procedure, the PlugInMainOUT() procedure, or both, each in its own process thread to execute the adaptors and begin collecting events from their sources and sending them to their destinations.



The Java Ifw daemon locates, processes, and executes Java adaptors on connector startup as follows:

1. When started, the Java Ifw process locates Java adaptors by looking in the EI_HOME\Ifw\Plugins directory for file names that include .jar. These files are loadable.
2. After all files are located, the process attempts to load classes named com.ca.eventplus.ifw.plugin.Adaptor.PlugInMainIn and com.ca.eventplus.ifw.plugin.Adaptor.PlugInMainOut from each jar file (where Adaptor is the name of the adaptor jar file without the jar extension). If PlugInMainIn is found, the file is treated as an In adaptor, and if PlugInMainOut is found, the file is treated as an Out adaptor. An adaptor may function as both an In and Out adaptor. If the Java Ifw process finds neither of these classes, the file is ignored.
3. After determining the adaptor type, the Java Ifw process launches the PlugInMainIn class, the PlugInMainOut class, or both, each in its own process thread to execute the adaptors and begin collecting events from their sources and sending them to their destinations.

Adaptor Configuration
The name you give the adaptor library file is meaningful to the framework. When the framework initializes an adaptor, it derives the event type and adaptor name by removing the library file's .dll extension. Therefore, you should give the file a name that you want to see as the event type and the adaptor name for the source or destination that the adaptor points to.

All adaptors in the system must have policy, and that policy must be connected to the adaptor through configuration policy. Configure operations appear at the top of a policy file within <Configure> properties. Adaptors and policy exchange information through the adaptor attributes defined in configure operations to enable adaptors to connect to their sources and collect events in an appropriate manner. See the following topic for additional adaptor attribute information.

Note: For more information about writing policy for adaptors, see Adaptors and Policy (see page 231) and the appendix "Writing and Customizing Policy."



Adaptor Attributes
The configure operations in policy files let you define adaptor-specific attributes that help the adaptor integrate with the specified event source. The GetPlugInAttribute() method in the adaptor's code retrieves the entries for attributes that you have defined in association with an adaptor. Following is an example <Configure> entry in a policy file:
<Configure>
  <Plugin name="Sample">
    <entry name="tracein" type="list" value="on" selection="on,off"/>
    <entry name="traceout" type="list" value="on" selection="on,off"/>
    <entry name="interval" type="readWriteText" value="10"/>
    <entry name="plugin" type="readOnlyText" value="Sample.dll"/>
  </Plugin>
</Configure>

The following list describes some of the attributes that you can enter for use by the framework when processing the adaptor:

tracein
   Traces the event flow in the framework for In adaptors when marked "on." Find this information in the eplusd.log and ifw.log files.

traceout
   Traces the event flow in the framework for Out adaptors when marked "on." Find this information in the eplusd.log and ifw.log files.

interval
   Specifies the interval in seconds for the framework to release incoming events. Valid intervals are between 10 and 120 seconds.

plugin
   Specifies the name of the associated adaptor library file.

Other attributes may be necessary to define connection information (for example, database credentials) or customize where events are picked up (such as a log file) and how they should be read (for example, from the bottom or top of a file).

Note: For more information about writing configure operations, see Configure Operation (see page 252). For more information about the policy you must write for adaptors, see Adaptors and Policy (see page 231).
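For illustration, the example <Configure> entry above can be read with a short script. This is a hypothetical inspection tool, not the product's own policy parser; it assumes only the XML fragment shown above.

```python
# Sketch: read the adaptor attributes from the example <Configure> section,
# the way a small policy-inspection script might (not the product's parser).
import xml.etree.ElementTree as ET

configure = """<Configure>
  <Plugin name="Sample">
    <entry name="tracein" type="list" value="on" selection="on,off"/>
    <entry name="traceout" type="list" value="on" selection="on,off"/>
    <entry name="interval" type="readWriteText" value="10"/>
    <entry name="plugin" type="readOnlyText" value="Sample.dll"/>
  </Plugin>
</Configure>"""

plugin = ET.fromstring(configure).find("Plugin")
attrs = {e.get("name"): e.get("value") for e in plugin.findall("entry")}

print(plugin.get("name"))  # adaptor name: Sample
print(attrs["interval"])   # release interval in seconds: 10
print(attrs["plugin"])     # library file: Sample.dll
```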

Event Inbox and Outbox Files


The adaptor name is also used to identify file names in the event disk buffers. The In adaptors extract events from their event sources, and these events are stored in files in the EI_HOME\Core directory.



The IFW converts events received from In adaptors to a series of properties and values that the core uses to process events. Adaptors define the event properties for each event source. The IFW places the property and value pairs for each event in a file in the Inbox folder with the following naming convention:
adaptor name-<YYYY>-<MM>-<DD>-<Timestamp>

When a file is waiting for event input, the file name ends in ".wr". However, it is renamed with an ".in" extension when it contains events that are ready for processing. Each .in file contains the events collected by the adaptor within a specified interval of time. By default, the interval is ten seconds. You can define this property in the adaptor's configure operations.

The core picks up .in files from the Inbox folder, processes the events in the files, and outputs processed events (in property and value pairs) in adaptor-specific files in the Outbox folder. These file names are the same as the Inbox files, but the extension changes to ".out". The framework reads the events from these files one at a time and routes them to the SendEventToDestination() method for the associated Out adaptor.

Understanding the workings of the Inbox and Outbox is important when testing to see if an adaptor is working. If an In adaptor is working, you should see a .wr file starting with the adaptor name in the Inbox folder when the core is turned off. To test an Out adaptor, you can make your own file with test data and drop it into the Outbox folder. If the data is transferred to the appropriate destination, then the Out adaptor is working.

Note: For more information about testing adaptors, see How to Configure and Test an In Adaptor (see page 229) and How to Configure and Test an Out Adaptor (see page 230).

Inbox and outbox files are identical except for their location and extension. The files contain events with their event type defined and divided into event properties, which are taken from a source adaptor and modified by policy. An example of how events are formatted in these files is as follows:
eventtype=type of event
tag1=tag value
tag2=tag value

Event files use the following conventions:

- The first property must identify the eventtype.
- Properties may not contain spaces. Values may contain spaces.
- A blank line is an event delimiter.
- Files are written using wide character strings.
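The conventions above can be exercised with a small sketch that writes two events to a .wr file, renames the file to .in, and parses it back. The adaptor name, date, and events are hypothetical examples; in the product, the framework itself writes and renames these files.

```python
# Sketch: write and parse an Inbox-style event file following the conventions
# above. The adaptor name and events are hypothetical examples.
import os, tempfile

events = [
    {"eventtype": "Sample", "node": "server1", "msg": "This is a test"},
    {"eventtype": "Sample", "node": "server2", "msg": "Another event"},
]

tmpdir = tempfile.mkdtemp()
wr_path = os.path.join(tmpdir, "Sample-2010-01-15-120000.wr")

# Write: one property=value pair per line, eventtype first, blank line
# between events. (Real files use wide character strings; utf-16 stands in.)
with open(wr_path, "w", encoding="utf-16") as f:
    for ev in events:
        f.write("eventtype=%s\n" % ev["eventtype"])
        for k, v in ev.items():
            if k != "eventtype":
                f.write("%s=%s\n" % (k, v))
        f.write("\n")

# The framework renames .wr to .in when the events are ready for processing.
in_path = wr_path[:-3] + ".in"
os.rename(wr_path, in_path)

# Parse it back: a blank line delimits events; the first '=' splits the
# property from its value (values may contain spaces and further '=' signs).
parsed, current = [], {}
for line in open(in_path, encoding="utf-16"):
    line = line.rstrip("\n")
    if not line:
        if current:
            parsed.append(current)
            current = {}
    else:
        key, _, value = line.partition("=")
        current[key] = value
if current:
    parsed.append(current)

print(len(parsed))             # 2
print(parsed[0]["eventtype"])  # Sample
```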



Adaptor Coding and Implementation


The following section lists the code you must write for C++ and Java In and Out adaptors. Each type of adaptor requires different methods. If you want to write an In and Out adaptor for the same source, you can combine the In and Out adaptor methods in one adaptor file.

The CA Event Integration SDK, located at EI_HOME\sdk, contains all the necessary materials to write adaptors in C++ or Java. Note that if you want the adaptor to be compatible with Solaris and Linux connectors, you must write the adaptor in Java.

After writing the adaptor, you must build and compile the code and verify that the adaptor file is in the correct location. This section also contains information about how to compile, set up, and test a new adaptor.

Adaptor Processing Model


The IFW uses a callback model in the library file AbstractPlugin.lib (or AbstractPlugin.jar) to drive the adaptors. LoopIN() is employed for In adaptors, and LoopOUT() is employed for Out adaptors.

Following is a simplified source listing of the framework in C++. You must code the procedures that each loop calls in your adaptor file.

Framework for In adaptors:
void AbstractPlugin::LoopIN()
{
    InitIN();
    while(true)
    {
        if (ReceiveEventFromSource())
            WriteInternalEvent();
    }
    TermIN();
}



Framework for Out adaptors:


void AbstractPlugin::LoopOUT(void)
{
    InitOUT();
    while(true)
    {
        ReadInternalEvent();
        SendEventToDestination();
    }
    TermOUT();
}
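For orientation only, the callback shape can be sketched in a higher-level language. The following Python analogue illustrates the In-adaptor loop; a real adaptor implements these procedures in C++ or Java against AbstractPlugin, and the real loop runs until the adaptor terminates rather than until the source is empty.

```python
# Sketch: the LoopIN callback shape in Python, for illustration only.
# Real adaptors implement these procedures in C++ or Java via AbstractPlugin.
class SketchPlugin:
    def __init__(self, source_events):
        self.source = list(source_events)  # stands in for the event source
        self.inbox = []                    # stands in for the Inbox file cache

    # --- procedures an In adaptor would supply ---
    def receive_event_from_source(self):
        return self.source.pop(0) if self.source else None

    def write_internal_event(self, event):
        self.inbox.append(event)

    # --- the framework's loop (simplified: stops when the source is empty) ---
    def loop_in(self):
        while True:
            event = self.receive_event_from_source()
            if event is None:
                break
            self.write_internal_event(event)

plugin = SketchPlugin([{"eventtype": "Sample", "msg": "hello"}])
plugin.loop_in()
print(len(plugin.inbox))  # 1
```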

Sample Adaptor Files


A sample adaptor named SampPlugin is provided with CA Event Integration. You can find the C++ source code for this adaptor in the SampPlugIn.cpp file at the following location:
EI_HOME\sdk\ifw\cplus\SampPlugin

You can find the Java source code for this adaptor in the Sample.java file at the following location:
EI_HOME\sdk\ifw\java

This adaptor is fully functional as contained, and there is sample source for each of the procedures that you must write. The file contains both In and Out adaptor code. In the sample, the In adaptor code allocates, packages, and sends a sample event using helper methods from the AbstractPlugin class. The Out adaptor code unpacks and prints the received event to stdout.

To link properly with AbstractPlugin.lib (or AbstractPlugin.jar if using Java), set the compiler option "Treat wchar_t as Built-in Type" to No (/Zc:wchar_t-).

You can use this sample file as a template for creating a new adaptor. Reference the file for descriptions of all the In and Out methods that you must write to create a new adaptor. If you want to create only an In adaptor, comment out the PlugInMainOUT() procedure, and if you want to create only an Out adaptor, comment out the PlugInMainIN() procedure.



How to Build and Compile a C++ Adaptor


After you write the code for a C++ adaptor, you must build the adaptor using Microsoft Visual Studio so that it links with AbstractPlugin.h and AbstractPlugin.lib. Use the following process to build and compile a C++ adaptor with Microsoft Visual Studio:

1. Open the adaptor project using Microsoft Visual Studio.
   The SampPlugin project is located in EI_HOME\sdk\ifw\cplus\SampPlugin.
2. Adjust the C/C++ Additional Include directories to ..\. so that the project can find AbstractPlugin.h.
3. Adjust the linker Include directory to ..\debug or ..\release to link with the correct version of AbstractPlugin.lib.

How to Build and Compile a Java Adaptor


Use a Java IDE to build a created Java adaptor. Use the following process to build and compile a Java adaptor using the Eclipse IDE:

1. Create a Java project in Eclipse (or an equivalent IDE) from an existing source: EI_HOME\sdk\ifw\java.
   This directory structure enforces the naming convention necessary so that the Java Ifw daemon recognizes the adaptor on startup and loads it into the framework.
2. Add EI_HOME\sdk\ifw\java\AbstractPlugin.jar to the project as an external jar.
3. Add the following jar files to the project:

ThirdParty\Xalan-J\serializer.jar ThirdParty\Xalan-J\xalan.jar ThirdParty\Xalan-J\xercesImpl.jar ThirdParty\Xalan-J\xml-apis.jar

These files must be known to the classpath. 4. 5. Put the sample.class file into jar files named TestDriver.jar and Sample.jar. These class files are located in the java sdk. Put the adaptor jar file in the EI_HOME\Ifw\Plugins directory.

How to Configure and Test an In Adaptor


After writing the code for a new In adaptor, you must configure your directories so that the adaptor files and entries are in the right place. After configuration, you should test the In adaptor to make sure that the adaptor correctly integrates with and receives events from the event source.

Appendix B: Writing Adaptors 229


The following process describes how to configure and test an In adaptor:

1. Stop the CA EI IFW and CA EI CORE services.
2. Add adaptor attributes in configuration policy to the adaptor's policy files. All adaptors must have these attributes defined in their policy files. The framework uses these attributes to extract and process user-defined attributes for the adaptor. The policy files are located at EI_HOME\Manager\PolicyStore on the manager server.
   Note: For more information, see Adaptor Attributes. For more information about policy, see Adaptors and Policy.
3. Put the adaptor library in the EI_HOME\Manager\AdaptorStore directory on the manager server.
4. Create, configure, and deploy a catalog associated with the new adaptor on the manager. This action sends both the catalog and the adaptor from the manager's Manager\AdaptorStore and Manager\PolicyStore directories to the connector server.
5. Check the EI_HOME\Core\Inbox directory for files with names matching the adaptor library name.
6. Open Inbox files for the adaptor and verify that the contents are correct.

How to Configure and Test an Out Adaptor


After writing the code for a new Out adaptor, you must configure your directories so that the adaptor files and entries are in the right place. After configuration, you should test the Out adaptor to make sure that the adaptor correctly integrates with and receives processed events from the core.

Note: If the Out adaptor shares a file with an In adaptor for the same source, you only need to put the library folder in the appropriate directory once.

The following process describes how to configure and test an Out adaptor:

1. Verify that the adaptor library is in the EI_HOME\Manager\AdaptorStore directory on the manager server.
2. Add adaptor attributes in configuration policy to the adaptor's policy files. All adaptors must have attributes defined in their policy files. The framework uses these attributes to extract and process user-defined attributes for the adaptor.
   Note: For more information, see Adaptor Attributes. For more information about policy, see Adaptors and Policy.
3. Create, configure, and deploy a catalog associated with the new adaptor on the manager server. This action sends both the catalog and the adaptor from the manager's Manager\AdaptorStore and Manager\PolicyStore directories to the connector server.
4. Verify that the core is processing the .in file and producing an .out event file that you have directed to the new Out adaptor.
5. (Optional) If necessary, create a file with test data and put it into the EI_HOME\Core\Outbox directory (on the connector) and verify that it is being processed.
   Note: For more information about event outbox files, see Event Inbox and Outbox Files.
6. Check the adaptor's destination to verify that the events are arriving as expected.

Adaptor Log Files


The eplusd.log file logs all adaptor-related activity initiated by the eplusd process. The ifw.log file logs all adaptor-related activity initiated by the Java Ifw daemon. Find these files at EI_HOME\Logs. Use these files to troubleshoot adaptor errors and to monitor whether your adaptor is initializing and operating correctly. The LOG and LOGFORCE macros, which you specify in the adaptor files, control whether tracing information is recorded in the adaptor log files. The LOG macro logs eplusd or Java Ifw daemon messages only if you have set the tracing attribute to "on" in the adaptor's policy file. LOGFORCE writes messages to the log file regardless of the tracing configuration policy setting. Note: For more information about setting configuration policy attributes, see Adaptor Attributes and Configuration Policy. View sample usage of these macros in the SamplePlugin.cpp file.
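The LOG versus LOGFORCE behavior described above can be sketched in a few lines of Java. The class and method names here (TraceLog, log, logForce) are invented for illustration only; they are not the product macros:

```java
import java.util.ArrayList;
import java.util.List;

public class TraceLog {
    private final boolean tracing;                        // mirrors the adaptor's "tracing" policy attribute
    private final List<String> sink = new ArrayList<>();  // stands in for eplusd.log or ifw.log

    public TraceLog(boolean tracing) { this.tracing = tracing; }

    // LOG: records the message only when tracing is set to "on"
    public void log(String msg) { if (tracing) sink.add(msg); }

    // LOGFORCE: records the message regardless of the tracing setting
    public void logForce(String msg) { sink.add(msg); }

    public List<String> messages() { return sink; }

    public static void main(String[] args) {
        TraceLog off = new TraceLog(false);
        off.log("adaptor initialized");      // dropped: tracing is off
        off.logForce("adaptor load failed"); // always recorded
        System.out.println(off.messages()); // prints [adaptor load failed]
    }
}
```

With tracing on, both calls would be recorded; LOGFORCE is reserved for messages that must always reach the log, such as load failures.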

Adaptors and Policy


In and Out adaptors require catalog policy to drive the processing of events received from an event source or being sent to an event destination. All provided adaptors come with policy that defines how the core processes events associated with the adaptors. Policy contains instructions for event classification, filtering, parsing, normalization, enrichment, evaluation, and formatting. Each module in the core uses the appropriate policy file to obtain instructions for its specific function (parsing, normalization, and so on). Policy also contains configuration attributes that you must specify for adaptors to work.


When you create a new adaptor, you must also create policy for that adaptor. In adaptors require policy to define how to process events retrieved from the event source, and Out adaptors require policy to define how to fit the processed events into the event destination's schema. In the absence of policy for an adaptor, events are not extracted into CA Event Integration or processed. Catalog policy is stored in separate XML files for each event source and destination. Therefore, if you have created an adaptor for an event source that contains code for an In and Out adaptor, you must create separate source and destination policy XML files. Policy files are stored in the EI_HOME\Manager\PolicyStore directory. For a detailed discussion about how to write policy for each core transformation module and example source and destination policy files that you can use as a reference for writing policy, see the appendix "Writing and Customizing Policy."


Appendix C: Writing and Customizing Policy


This appendix describes how to write policy for new adaptors and customize existing policy. This section contains the following topics:

Policy Overview (see page 233)
Policy Structure and Deployment (see page 235)
Policy File Conventions (see page 237)
Policy Operations (see page 252)
Sample Policies (see page 287)
Policy Customization Scenario: Application Log Source Policy (see page 287)
How to Configure and Implement Policy Files (see page 290)
CA Catalyst Connector Policy (see page 291)

Policy Overview
Policy defines the conventions used by the core processing modules to classify, filter, parse, normalize, enrich, evaluate, and format events received from an event source and write those events in a specified format to the event's defined destination.

Note: For more information about each core processing module, see the chapter "Configuration and Administration."

When an event enters the core from an event source, the core uses the policy defined for that source to accurately process the event. The core uses the policy defined for the specified event destination to put the event in the correct output format and prepare the event to be sent to its destination. Without policy for a source, the core cannot perform any processing on received events.

Policy is provided for all provided event sources and destinations. This policy is fully functional as provided and only requires minor configuration.

Note: The application log, web services eventing, and all source policy files based on the SNMP adaptor do require some customization before you can use them to collect meaningful data.

If you understand how policy is written, you can customize existing policy or write new policy.


New Policy
All event sources and destinations require policy for the core to process events received from sources and prepare events to be dispatched to destinations. Without policy, the core does not receive instructions for how to process an event, and no processing takes place. Therefore, if you create a new source or destination adaptor, you must also write its policy for processing events from these sources. Every adaptor file must have a corresponding policy file. Policy is broken up into sources and destinations, so if you have written a new adaptor that contains code for an In and Out adaptor, you must write separate policy files for the source and destination adaptors. Adaptors can also have multiple policy files. For example, you can write new policy that uses the SNMP adaptor to handle traps from a specific product.

Policy Customization
While policy is included for all provided sources and destinations, the following are scenarios that may require you to edit or customize the provided policy:

You want to change settings in the existing policy. You may want to change certain policy settings for the processing of events from provided sources. For example, classification policy may require new event classification categories or changes to the names or policy for each existing class to accommodate an updated integrated product release.

You want to configure complex enrichments. Some policy requires customization to fit enrichment data into a destination's external schema. You can also customize the enrichment policy files to configure how to extract the enrichment data. Although you can perform most enrichment customization by modifying the policy attributes in the administrative interface, complex (multi-value) enrichments require you to edit the policy files directly.

You want to create new policy based on existing policy. You may want to keep the provided policy intact, but customize a new version of that policy to fit the needs of specific situations. In this case, you would copy the existing policy file, make changes to the file, and put it into the policy store under a different name. After recycling the product services, the new policy file should be available for catalog assignment in the administrative interface. For example, a server in your environment may be responsible for managing only a subset of the events typically appearing in the CA NSM Event Console. You could create new policy for this server based on the existing CA NSM source policy that filters out all event classes except for the ones that pertain to the server.


Important! An adaptor supports deployment of only one associated policy file at a time; you cannot deploy multiple policy files that are based on the same adaptor in one catalog. For example, you cannot deploy a catalog with two custom policy files associated with the Log Reader adaptor for collecting events from separate log files. The only exception to this rule is the SNMP adaptor. You can successfully deploy a catalog with multiple policies that use the SNMP adaptor. Note: For testing purposes, as a best practice, you may want to work from a copy of the existing policy file even if intending to edit and replace the existing policy. Some policy, such as the application log source policy, serves as a template that requires customization before you can collect meaningful data. For a complete customization example, see Policy Customization Scenario: Application Log Source Policy (see page 287).

Policy Structure and Deployment


Policy is stored in separate XML files for every available event source and destination. Policy is installed with the manager in the following location: EI_HOME\Manager\PolicyStore

The PolicyStore directory contains the following folders:

sources
    Contains the policy files for each event source.
destinations
    Contains the policy files for each event destination.
enrichments
    Contains the policy files for each enrichment module.

You must place all created policy files in the appropriate location. From the administrative interface, when you are compiling catalogs, you select the policy to apply to a catalog (sources, destinations, and enrichments). When you deploy a catalog to a connector, all policy that you include in that catalog is combined into one catalog file and pushed to the connector so that the core knows how to process events from all defined sources to all defined destinations on that server.

Note: You can preview how assigned policy transforms sample events in the administrative interface before you deploy a catalog to a connector. For more information, see the chapter "Configuration and Administration."


Configure Core Modules


XML properties in the policy files instruct each core transformation module how to process events. For example, the Classify module takes its instructions from the <Classify> properties in the policy files. The core processes events one module at a time in a specific order. The eplus-plugins.cfg file contains a list of the core modules to be loaded when processing events and the order in which to run the modules. If necessary, you can modify which modules are included in processing and the order in which the modules process events. Following is the default syntax of the eplus-plugins.cfg file:
# Plugin class for eventplus
# These must be in the order of event processing
#
# First, the event is read from the .in file
com.ca.eventplus.catalog.plugin.EventPlusReader
#
# Second, the event is transformed from source to EI internal schema
com.ca.eventplus.catalog.plugin.Classifier Classify
com.ca.eventplus.catalog.plugin.Parser Parse
com.ca.eventplus.catalog.plugin.Normalizer Normalize
com.ca.eventplus.catalog.plugin.Filter Filter
com.ca.eventplus.catalog.plugin.Enricher Enrich
com.ca.eventplus.catalog.plugin.Formatter Format
#
# Third, the event is prepared for filtering, enrichment, consolidation on normalized event
com.ca.eventplus.catalog.plugin.Reprocessor Reprocess
com.ca.eventplus.catalog.plugin.Classifier Classify
com.ca.eventplus.catalog.plugin.Parser Parse
com.ca.eventplus.catalog.plugin.Normalizer Normalize
com.ca.eventplus.catalog.plugin.Filter Filter
com.ca.eventplus.catalog.plugin.Consolidator Consolidate
com.ca.eventplus.catalog.plugin.Enricher Enrich
com.ca.eventplus.catalog.plugin.Evaluator Evaluate
#
# Fourth, the event is transformed from internal to destination schema
com.ca.eventplus.catalog.plugin.Formatter Format
com.ca.eventplus.catalog.plugin.Writer Write
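Because the processing order comes straight from this file, it can help to compute the effective pipeline with comments and blank lines stripped. A minimal sketch, assuming the simple line format shown above (this helper is not part of the product):

```java
import java.util.ArrayList;
import java.util.List;

public class PluginConfig {
    // Returns the effective module lines: comment (#) and blank lines removed,
    // preserving the order in which the core will run the modules.
    public static List<String> effectiveModules(String cfgText) {
        List<String> modules = new ArrayList<>();
        for (String line : cfgText.split("\\R")) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("#")) continue;
            modules.add(trimmed);
        }
        return modules;
    }

    public static void main(String[] args) {
        String cfg = String.join("\n",
            "# First, the event is read from the .in file",
            "com.ca.eventplus.catalog.plugin.EventPlusReader",
            "com.ca.eventplus.catalog.plugin.Classifier Classify",
            "# com.ca.eventplus.catalog.plugin.Filter Filter",   // commented out: removed from pipeline
            "com.ca.eventplus.catalog.plugin.Writer Write");
        System.out.println(effectiveModules(cfg).size()); // prints 3
    }
}
```

Commenting out a module line, as described in the procedure below, simply drops it from this effective list.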

To configure core modules

1. Open the eplus-plugins.cfg file at EI_HOME\Core\Bin.
2. Change the configuration of the file in any of the following ways:
   - To remove a module, either prefix the line with a # or delete the line. The EventPlusReader and Writer modules must always exist, but you can comment out other modules to remove that processing step from the core.
   - The EventPlusReader and Writer modules must always be first and last, but you can arrange the other modules in any order. For example, you can rearrange the modules so that the core consolidates events before it filters them.
   - To repeat a module multiple times during the processing, copy the line into each appropriate place in the sequence. For example, if you want to filter events after normalization and enrichment, you can copy the filter line after the enrichment line.
   Save the file when finished.
   Note: Any changes you make to this file are propagated to all catalogs. You cannot have separate settings for separate catalogs.
3. Restart the core service (CA EI CORE or caeicore). The changes take effect.

Policy File Conventions


Policy files instruct the core how to convert received events from their source format to a common format (source policy) and how to fit processed events into the schema of event destinations (destination policy). The files use the basic conventions discussed in this section to interpret, process, and output events correctly. You must be aware of these conventions before writing new policy or editing existing policy.

Event Properties and Values


Source adaptors deliver events to CA Event Integration as sets of properties and values. Properties represent the schema fields for the event source, and values represent the values of those properties. Event properties and property values are carried with the event through each core module and are used in policy as the basis for transformation. You must be familiar with the event properties for a given source to write policy for that source. Properties are defined within the code for a source adaptor. If you are writing source or destination policy for a provided event source, you must use the properties from the provided adaptor, and if you are writing new source or destination policy for a new event source, you must define the event properties in the source adaptor code and use these properties in the policy. The core modules convert a source's event properties into a set of internal event properties during event processing. The internal event properties represent a common internal event schema that makes it possible to create a unified event format and to transform events from any source so that they can fit into any destination. You must be familiar with the internal event properties to write policy for any source or destination. The core uses the internal event properties in destination policy to fit events into the event properties of the defined destination.


New properties may be created by modules during processing and may also be deleted by certain modules. For example, the parsing module may parse a property into three pieces, thus creating three additional properties. Properties can only be removed by the final Write module, which includes an exclusion filter for removing properties containing a certain pattern.

Internal Event Properties


When writing new or customizing existing source policy, you must be familiar with the internal event properties that the core uses to transform events into a common internal format. You must use these properties to transform source event properties into the internal format and to transform the internal properties of the processed event into the external event property schema of a destination. The following list describes all of the internal event properties:

internal_resourceclass
    Specifies the category of the resource represented by the event, such as Application, Router, and so on. Valid internal_resourceclass values are as follows:
    ComputerSystem
    Group
    Application
    MailServer
    Router
    InterfaceCard
    GenericITResource
    Switch
    Processor
    Printer
    Memory
    Port
    Network
    BackgroundProcess
    File
    Person


internal_resourceinstance
    Specifies the instance name of the resource. Together with internal_resourceclass, this value defines the unique instance of the resource, such as ComputerSystem.mynode (where mynode is the internal_resourceinstance) or Processor.CPU1 (where CPU1 is the specific processor name).
internal_resourcevendor
    Specifies the vendor associated with the resource, such as Dell for the resource class ComputerSystem.
internal_resourceplatform
    Specifies the platform associated with the resource. This value is usually the operating system platform, such as Windows, Solaris, Linux, and so on. It may also be an application platform such as WebSphere.
internal_resourceaddrtype
    Specifies the resource address type. This tag describes the format of the internal_resourceaddr field. Currently, this value is always set to FQDN or Fully Qualified Domain Name.
internal_resourceaddr
    Specifies the address of the resource as a fully qualified domain name. For example, Processor.CPU1 is contained within server test123.ca.com. The internal_resourceaddr is test123.ca.com.
internal_resourceuser
    Specifies the user associated with a resource.
internal_oldseverity
    Specifies the previous severity of the resource. The following values are possible:
    10 (unknown)
    30 (normal)
    50 (warning)
    70 (critical)
    90 (fatal)

internal_newseverity
    Specifies the latest severity of the resource. The possible values are the same as those for internal_oldseverity.
internal_gentime
    Specifies the date and time when the event was generated by its source.


internal_logtime
    Specifies the date and time when the event was logged into its management system. If a management system does not support a log time, or if CA Event Integration is acting as the management system directly (such as with the applog and SNMP adaptors), the log time represents the date and time the event was collected by CA Event Integration.
internal_repeatcount
    Specifies the count for a consolidated event.
internal_lastoccurrence
    Specifies the last event occurrence for a consolidated event (whereas the internal_gentime and internal_logtime specify the first occurrence).
internal_elapsedtime
    Currently unused.
internal_reportingagent
    Specifies the monitoring agent from which CA Event Integration collects or listens for events. In some cases, CA Event Integration directly queries or subscribes to events without using an interim management product such as CA Spectrum. In these cases, the internal_reportingagent may reflect the adaptor used to collect the events.
internal_priority
    Specifies the priority of the event. The following values are possible:
    10 (low)
    50 (medium)
    80 (high)

    If the event source or reportingagent does not provide this information, CA Event Integration policy typically defaults to 50.
internal_msgid, internal_msgtag, internal_msgvalue, internal_msgtype
    Represent extensible event data. For example, you could capture a Processor type and event reason using the following syntax for each property:

    internal_msgid: [xxxx-xx-xxx,xxxx-xx-xxxxxxxx-xxx]
    internal_msgtag: [proctype,reason]
    internal_msgvalue: [duocore,exceeded threshold]
    internal_msgtype: [text,text]

The brackets ([]) let you list an array of values, allowing some event destinations (such as the database destination) to act individually on each item.
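The bracketed arrays pair up positionally: the first item in internal_msgtag names the first item in internal_msgvalue, and so on. A sketch of how a destination might unpack such arrays (the MsgArrays helper is hypothetical, and it assumes items contain no embedded commas):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MsgArrays {
    // Splits a "[a,b,c]" policy array into items and pairs tags with values by position.
    public static Map<String, String> pair(String msgtag, String msgvalue) {
        String[] tags = strip(msgtag).split(",");
        String[] vals = strip(msgvalue).split(",");
        Map<String, String> out = new LinkedHashMap<>();
        for (int i = 0; i < Math.min(tags.length, vals.length); i++)
            out.put(tags[i].trim(), vals[i].trim());
        return out;
    }

    // Removes the surrounding [ ] brackets.
    private static String strip(String s) {
        s = s.trim();
        return s.substring(1, s.length() - 1);
    }

    public static void main(String[] args) {
        Map<String, String> m = pair("[proctype,reason]", "[duocore,exceeded threshold]");
        System.out.println(m.get("reason")); // prints exceeded threshold
    }
}
```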


internal_alarmid
    Specifies a unique ID representative of the event type. Typically, the alarmid does not consider generation time and severity, but rather the key event fields that constitute a given type of event.
internal_message
    Specifies the detailed message of the event, usually describing a reason why the resource is in a particular state. For example, 'threshold exceeded' or 'logon attempt failed' might be reasons why a resource is critical.
internal_eventid
    Specifies a universal ID uniquely defining the event. This tag is required by some event destinations (such as the database destination) to differentiate between similar events.
internal_ciowner
    Specifies whether the CI was created by CA Event Integration or an external source. For example, CA Event Integration creates CIs for events collected from non-manager sources like SNMP, the Windows Event Log, and so on. This property applies only when dispatching events to CA Spectrum SA.
internal_mdrproduct
    Specifies the product from which the event originated. This property applies only when dispatching events to CA Spectrum SA.

Source Event Properties


When customizing existing policy or writing new policy for provided sources, you must be familiar with the event properties of a source so that your policy can interact with events pulled from the source. The following lists define the event properties for every provided event source:

CA Spectrum

eventtype
spectrum_AlarmId
spectrum_AlarmType
spectrum_DomainName
spectrum_Datetime
spectrum_Model
spectrum_ModelName
spectrum_MTypeName
spectrum_MType
spectrum_DeviceType
spectrum_ModelClass
spectrum_Condition
spectrum_NetAddr
spectrum_Creator
spectrum_Severity
spectrum_SysDesc
spectrum_VendorName
spectrum_Cause

CA NSM

eventtype
evtlog_recid
evtlog_color
evtlog_attrib
evtlog_time
evtlog_type
evtlog_flag
evtlog_annotation
evtlog_timegen
evtlog_msgnum
evtlog_severity
evtlog_node
evtlog_user
evtlog_text
evtlog_pinfo
evtlog_source
evtlog_tag
evtlog_device
evtlog_category
evtlog_station
evtlog_udata

CA Spectrum SA

eventtype
usm_entitytype
usm_MdrProduct
usm_UrlParams
usm_MdrProdInstance
usm_OccurrenceTimestamp
usm_ReportTimestamp
usm_ElapsedTime
usm_Message
usm_Summary
usm_RepeatCount
usm_AlertType
usm_ciowner
usm_MdrElementID
usm_Severity
usm_AlertedMdrProduct
usm_connectorName
usm_siloName

Windows Event Log

eventtype
syslog_eventid
syslog_category
syslog_msg
syslog_node
syslog_severity
syslog_source
syslog_timegen
syslog_user

Application Log

eventtype
logentry

Web Service Eventing

eventtype
mslive_Vendor
mslive_ServiceResourceInstanceID
mslive_ServiceResourceInstanceType
mslive_timestamp
mslive_Id
mslive_Name
mslive_Value
mslive_Category

SNMP Traps

Note: These tags are valid for all integrations that use the SNMP adaptor: CA OPS/MVS EMA, CA SYSVIEW PM, and HP BAC.

eventtype
snmp_community
snmp_agent
snmp_ticks
snmp_generictrap
snmp_specifictrap
snmp_enterprise
snmp_varbindoids
snmp_varbindvals
snmp_msgid

Destination Event Properties


When customizing existing policy or writing new policy for provided destinations, you must be familiar with the event properties of a destination so that your policy can fit events into the destination's event schema.


You can also use these properties as destinations for enrichment data. For example, you can configure the CA NSM destination policy so that an enrichment value from the CA CMDB is added to an event as part of the evtlog_source property. The following lists define the event properties for every provided event destination:

Note: The database destination and CA Event Integration forwarding destination event properties are the internal event properties (see page 238).

CA Spectrum

spectrum_MTypeName
    Provides information used to resolve the CA Spectrum model. For more information about resolving models, see the chapter "Implementation and Deployment Scenarios."
spectrum_Condition
    Specifies a mapped condition value based on the spectrum_EventCode value. For example, the spectrum_EventCode 0x00010fa6 dispatches a critical alarm in CA Spectrum with a Critical Condition value displayed in the event written to OneClick. Valid values are Info, Normal, Major, Minor, and Critical.
spectrum_EventCode
    Specifies the CA Spectrum event code. CA Event Integration embeds EvtFormat files with specific event codes in CA Spectrum, which generate events. Some of these event codes dispose alarms. For example, the event code 0x00010fa5 generates a Major alarm (code 0x00010fa0) and logs the event.
spectrum_ModelHandle
    Specifies the CA Spectrum model ID associated with the event.
spectrum_EventVar_gentime
    Displays the internal_gentime value in the event written to OneClick.
spectrum_EventVar_resourceclass
    Displays the internal_resourceclass value in the event written to OneClick.
spectrum_EventVar_resourceinstance
    Displays the internal_resourceinstance value in the event written to OneClick.
spectrum_EventVar_resourcevendor
    Displays the internal_resourcevendor value in the event written to OneClick.
spectrum_EventVar_resourceplatform
    Displays the internal_resourceplatform value in the event written to OneClick.


spectrum_EventVar_resourceuser
    Displays the internal_resourceuser value in the event written to OneClick.
spectrum_EventVar_reportingagent
    Displays the internal_reportingagent value in the event written to OneClick.
spectrum_EventVar_msgtag, spectrum_EventVar_msgvalue
    Represent extensible event data. For example, you could capture a service stopping message and event ID using the following syntax for each property:

    spectrum_EventVar_msgtag: [eventid,message]
    spectrum_EventVar_msgvalue: [7035,The ASP .NET Service service was successfully sent a stop control.]

    The brackets ([]) let you list an array of values, allowing the destination to act individually on each item.

The CA NSM, Windows Event Log, and CA Spectrum SA destination event properties are identical to their corresponding source event properties (see page 241).

Event Classes
The event class represents a container for all processing operations related to a certain type of event. The event type is determined by a special property named eventtype. Each source adaptor generates a default eventtype, which classifies events on a broad level by their source. For example, the CA Spectrum adaptor generates an eventtype of Spectrum.

The name attribute of each <EventClass> property in the catalog is matched to the eventtype property in the event, and the operations of that class are carried out on the event. One such operation, <Classify>, changes the eventtype to reference a more specific <EventClass>. Event classes are hierarchical, so you can create subclasses of a base event class to further classify events. For more information, see Classify Operation (see page 258).

You can modify the eventtype property through policy just like any other property in policy files. Changing the eventtype in policy files changes the event class to the new eventtype value.

Note: Switching event classes takes effect only after the current core module is finished processing.


Hierarchy and Inheritance


Policy is hierarchical, meaning that a child event class inherits all policy operations from a parent class. The following code fragment shows the OPR-DSMEVENT class inheriting all policy operations defined in the parent OPR-BASE class:
<EventClass name="OPR-BASE">
    <Classify ...../>
    <Filter ...../>
</EventClass>

<EventClass name="OPR-DSMEVENT" extends="OPR-BASE">
    <Parse ...../>
</EventClass>

In cases where a parent class and child class have similar operations, the parent operations are enacted first, followed by the child operations. For example, if a parent and child include a parse operation, the parent parsing occurs first, followed by the child parsing. You must understand this rule so that the policy you write is processed in the intended order. The following example shows this rule:
<EventClass name="OPR-BASE">
    <Parse>
        <Field input="tagA" pattern="^(\w+)-(\w+)$" output="tagA1,tagA2"/>
    </Parse>
</EventClass>

<EventClass name="OPR-DSMEVENT" extends="OPR-BASE">
    <Parse>
        <Field input="tagA1" pattern="^(\d\d)(\w+)$" output="tagA1a,tagA1b"/>
    </Parse>
</EventClass>

In this example, assume that tagA="12Buckle-Shoe". The property is parsed by the parent operation, which transforms the property into two properties, tagA1 (12Buckle) and tagA2 (Shoe). Afterwards, the tagA1 value, "12Buckle", is parsed by the child operation into two more separate properties, tagA1a (12) and tagA1b (Buckle). There are no limits on the levels of inheritance or the number of inheriting children. All policy operations support inheritance except for classification.
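The two-stage parse in this example can be checked directly with the regular expressions from the policy. The property map below stands in for the event, and the parse helper is a simplified sketch of the Parse operation, not the product implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class InheritedParse {
    // Applies one policy Parse field: match the input property against the
    // pattern and write each capture group to the corresponding output property.
    static void parse(Map<String, String> evt, String input, String pattern, String... outputs) {
        Matcher m = Pattern.compile(pattern).matcher(evt.get(input));
        if (!m.matches()) return;               // no match: properties unchanged
        for (int i = 0; i < outputs.length; i++)
            evt.put(outputs[i], m.group(i + 1));
    }

    public static void main(String[] args) {
        Map<String, String> evt = new LinkedHashMap<>();
        evt.put("tagA", "12Buckle-Shoe");
        // Parent class (OPR-BASE) parse runs first...
        parse(evt, "tagA", "^(\\w+)-(\\w+)$", "tagA1", "tagA2");
        // ...then the child class (OPR-DSMEVENT) parse refines tagA1.
        parse(evt, "tagA1", "^(\\d\\d)(\\w+)$", "tagA1a", "tagA1b");
        System.out.println(evt.get("tagA1a") + " " + evt.get("tagA1b")); // prints 12 Buckle
    }
}
```

Reversing the two parse calls would break the second one, which is exactly why the parent-before-child ordering matters.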

Property Functions
In policy operations, various functions help transform or generate event properties such as date, time, and server name. Use any functions in the function library in policy to convert properties to a specific output. Write these functions within curly brackets {} as part of the input attribute in a Field element. See the examples below for syntax.


You can use these functions in the input attribute of any operation. For example:
<Format>
    <Field output= format={0} input={somefunction(param1)} />
</Format>

Functions with no parameters use the following syntax:


{function}

Functions with additional parameters use a different syntax as follows:


{function([param1,param2,param3])}

The following function types and functions are available:

Host functions

{localhost}
    Returns the fully qualified local host name.

{ip(propname)}
    Dereferences the property (usually a host name) and converts it to an IPv4 address.

{fqdn(propname)}
    Dereferences the property (usually an IPv4 address) and converts it to a fully qualified domain name.

{convertHexToMac([propname,-])}
    Dereferences the property (a hexadecimal code for a MAC address) and converts it to a delimited string using the second parameter as the delimiter character.

DateTime functions

{xsdateTime(now)}
    Returns a datetime stamp formatted as xs:dateTime (yyyy-MM-dd'T'HH:mm:ss-Z).

{xsdateTime(propname)}
    Dereferences the property (epoch time in seconds or milliseconds) and converts it to xs:dateTime format.

{convertxsdateTime([propname,MMM d yyyy K:mm:ss a])}
    Dereferences the property (a datetime-formatted string), parses the property according to the second parameter, and converts it to xs:dateTime format.


{datetime(now)}
    Generates a date and time string for the current date and time. This function results in the following output structure: March 12, 2008 1:31:00PM.

{datetime(propname)}
    Generates a date and time string for the date and time represented by an event property value, where the property is a long integer representing some number of seconds since January 1, 1970.

{timet(now)}
    Generates a long integer representing the current number of seconds since January 1, 1970.

{timet(propname)}
    Generates a long integer representing the number of seconds since January 1, 1970, using the event property value, which must be a date and time string.

Date functions

{xsDate(now)}
    Returns a date stamp formatted as xs:date (yyyy-MM-dd-Z).

{xsDate(propname)}
    Dereferences the property (epoch time in seconds or milliseconds) and converts it to xs:date format.

{convertxsDate([propname,MMM d yyyy K:mm:ss a])}
    Dereferences the property (a datetime-formatted string), parses the property according to the second parameter, and converts it to xs:date format.

Time functions

{xsTime(now)}
    Returns a time stamp formatted as xs:time (hh:mm:ss-Z).

{xsTime(propname)}
    Dereferences the property (epoch time in seconds or milliseconds) and converts it to xs:time format.

{convertxsTime([propname,MMM d yyyy K:mm:ss a])}
    Dereferences the property (a datetime-formatted string), parses the property according to the second parameter, and converts it to xs:time format.
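The epoch conversions that functions such as {xsdateTime(propname)} perform can be approximated with standard library calls. The sketch below is an illustration, not the product's implementation; it assumes UTC output and uses the seconds-versus-milliseconds distinction described above:

```python
from datetime import datetime, timezone

def xs_datetime(epoch):
    # Accept epoch seconds or milliseconds (values this large must be ms).
    if epoch > 10**11:
        epoch /= 1000.0
    # xs:dateTime shape: yyyy-MM-dd'T'HH:mm:ss plus a zone offset
    return datetime.fromtimestamp(epoch, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%S%z")

print(xs_datetime(1205328660))     # epoch seconds
print(xs_datetime(1205328660000))  # epoch milliseconds, same instant
# both print 2008-03-12T13:31:00+0000
```

This is the same instant that the {datetime(now)} example above renders as March 12, 2008 1:31:00PM.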


Array functions

{entry(propname)}
    References an entry in an array and returns the first property value in a list.

{entry(propname, index)}
    References the values of an entry in an array at the specified index position in a comma-separated list.

{getarrayval([propname1,propname2,literal1])}
    Returns a string where propname1 and propname2 are properties whose values are comma-delimited lists of the same size, literal1 specifies one of the entries in propname1, and the return string is the corresponding entry in propname2. Use this function to return matching values in related lists.

String functions

{replace([propname,ch1,ch2])}
    Replaces any character in a specified property with another character.
    propname
        Specifies the property in which to search for the character to replace.
    ch1
        Specifies the character to search for and replace in the specified property.
    ch2
        Specifies the replacement character.

{toLower(propname)}
    Converts the uppercase characters in the specified property to lowercase.

{toUpper(propname)}
    Converts the lowercase characters in the specified property to uppercase.

Other functions

{uniqueidentifier}
    Generates a SQL unique identifier, such as 61CD55D1-F142-2E04-8A2A-9667118CF65E.

{prepareconsolidatefield}
    Parses combinations of event fields entered in consolidation operations.

{encodeurl(propname)}
    Returns a string that replaces problematic characters in the specified property value with a hex value. For example, you can replace embedded spaces in a URL.
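The string and array helpers correspond to common one-line operations. A rough Python equivalent, shown for illustration only (the helper names and sample values are assumptions, not product code):

```python
def replace_chars(value, ch1, ch2):
    # {replace([propname,ch1,ch2])}: swap every occurrence of ch1 with ch2
    return value.replace(ch1, ch2)

def get_array_val(list1, list2, literal1):
    # {getarrayval([propname1,propname2,literal1])}: both properties hold
    # comma-delimited lists of the same size; return the entry in list2
    # at the position where literal1 appears in list1.
    keys = list1.split(",")
    vals = list2.split(",")
    return vals[keys.index(literal1)]

print(replace_chars("my page.html", " ", "_"))                     # my_page.html
print(get_array_val("oid1,oid2,oid3", "Up,Down,Unknown", "oid2"))  # Down
print("MiXeD".lower(), "MiXeD".upper())  # {toLower}/{toUpper} equivalents
```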


Example: Formatting the event log time

The following example uses the datetime function to convert the evtlog_time property value into a date and time string used for the internal_logtime output property:
<Format>
    <Field output="internal_logtime" format="{0}" input="{datetime(evtlog_time)}" />
</Format>

Example: Formatting the event log node

The following example uses the fqdn function to convert the evtlog_node property into a fully qualified domain name used for the internal_resourceaddr output property:
<Format>
    <Field output="internal_resourceaddr" format="{0}" input="{fqdn(evtlog_node)}" />
</Format>

How to Add a Function


The provided functions are stored in a properties file in the EI_HOME\Core\bin\EplusCore.jar file. You can create and add new functions to the function library for use in policy files. Complete the following process to add a new function to the function library:

1. Create a Java class or method for the function to create.
   Note: The method must return a string and accept an array of strings.
2. Package the Java class into a .jar file and drop the file into the EI_HOME\Core\bin directory. The file is automatically added to the class path.
3. Extract the tagfunctions.properties file from the EI_HOME\Core\bin\EplusCore.jar file.
4. Add the method or class to the tagfunctions.properties file following the conventions of the other entries.
5. Add the tagfunctions.properties file back to the EplusCore.jar file.
6. Restart the CA EI CORE service.


Policy Operations
Policy consists of operations that each provide processing and transformation instructions for a specific core transformation module. You can write policy for each of the following operations in the policy file for any source or destination:

- Configure
- Sample Event
- Classify
- Parse
- Normalize
- Filter
- Consolidate
- Enrich
- Evaluate
- Format
- Write

Configure Operation
Configure operations define basic settings such as connection information, the adaptor to associate with, and monitoring preferences. Adaptors use this operation to establish integrations. You must include configure operations at the top of all policy files using the <Configure> property. Other policy operations may need to reference attributes in configure operations to collect the information they require. For more information, see & Reference Operator (see page 256).

Note: Attributes for configure operations are also configurable from the administrative interface by clicking the policy file on the View Policies page.

Configure operations begin with the <Configure> property, which has the following basic syntax:
<Configure>
    <Plugin name=>
        <entry name= type= value= selection=/>
    </Plugin>
</Configure>


Plugin name
    Defines the name of the adaptor associated with the policy file. This is a required field.

entry name
    Defines the name of the attribute that you want to define. These attributes can be anything from basic preferences to database credentials to adaptor files. Following are some common examples of policy attributes:
    tracein
        Traces the event flow in the framework for In adaptors when marked on. Find this information in the eplusd.log and ifw.log files at EI_HOME\Logs.
    interval
        Defines the interval in seconds for the framework to release incoming events. Valid intervals are between 10 and 120 seconds.
    logname
        Defines a log file name from which to pull events. This attribute applies to policy for log readers, such as the Windows Event Log. Enter the log file name in the value attribute.
    identity
        Defines a login name for a database (used with the database destination adaptor and database enrichment policies). Use the password attribute with this attribute. Enter the ID in the value attribute.
    plugin
        Defines the file name for the adaptor associated with the policy file. Specify the adaptor file name in the value attribute.

type
    Defines the output type of the attribute you are defining. The following types are available:
    readWriteText
        Indicates a configurable text value. For example, the interval attribute uses this type.
    readOnlyText
        Indicates a nonconfigurable text value. For example, the plugin attribute uses this type.
    list
        Indicates a comma-delimited list of possible values, of which one can be specified in the value attribute. The list of possible values is contained in the selection attribute.


    password
        Indicates a password or any kind of sensitive data. When you use this output type, the value of the attribute is encrypted in the policy file.

value
    Defines the value associated with the attribute you are defining.

selection
    Defines selection options for an attribute. This option applies to attributes of list type, such as those that can be turned on or off.

Example: CA Spectrum source configure operations

The following example displays the configure operations defined for CA Spectrum source policy:
<Configure>
    <Plugin name="Spectrum">
        <entry name="tracein" type="list" value="off" selection="on,off"/>
        <entry name="landscapein" type="readWriteText" value=""/>
        <entry name="vbrokeragentaddr" type="readWriteText" value=""/>
        <entry name="landscapeuser" type="readWriteText" value="ca_eis_user"/>
        <entry name="plugin_version" selection="81,90" type="list" value="81"/>
        <entry name="plugin" type="readOnlyText" value="Spectrum.jar"/>
    </Plugin>
</Configure>

tracein
    Defines whether the framework will be traced for event flow and debugging output from the CA Spectrum source adaptor.

landscapein
    Defines the landscape host name from which to pull alarms. This value is blank by default.

vbrokeragentaddr
    (CA Spectrum 8.1 only) Specifies the fully qualified domain name or IP address of the SpectroSERVER landscape or Main Location Server (for Distributed SpectroSERVER operation) from which to collect alarms. If you leave this field blank, CA Event Integration uses the appropriate landscapein value as the vbrokeragentaddr.

landscapeuser
    Defines the CA Spectrum user created for interacting with CA Event Integration.

plugin_version
    Defines the version of CA Spectrum with which to integrate.


plugin
    Specifies that Spectrum.jar is the name of the CA Spectrum source adaptor.

Example: Database destination configure operations

The following example displays the configure operations defined for the database destination policy:
<Configure>
    <Plugin name="Dbplugin">
        <entry name="traceout" type="list" value="off" selection="on,off"/>
        <entry name="hostname" type="readWriteText" value="server1"/>
        <entry name="dbname" type="readWriteText" value="EMAAODBC"/>
        <entry name="identity" type="readWriteText" value="sa"/>
        <entry name="password" type="password" value="dbaccess"/>
        <entry name="tablename" type="readOnlyText" value="Event"/>
        <entry name="plugin" type="readOnlyText" value="DBplugin.dll"/>
    </Plugin>
</Configure>

traceout
    Defines whether the framework will be traced for event flow and debugging output from the database destination adaptor.

hostname
    Defines the host name of the database server. This value is populated by the database information you define during installation.

dbname
    Specifies that EMAAODBC is the name of the manager database.

identity
    Specifies the user id for accessing the database. This value is populated by the database information you define during installation.

password
    Specifies the password associated with the user id defined in the identity field. This value is populated by the database information you define during installation. The password is encrypted in the file.

tablename
    Specifies that Event is the name of the table in which to collect events.

plugin
    Specifies that DBplugin.dll is the name of the database destination adaptor.
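Because configure operations are plain XML, external tooling can read them with any XML parser. The following hypothetical helper (not part of the product) extracts entry values from a <Configure> section, which is one way to sanity-check a hand-edited policy file:

```python
import xml.etree.ElementTree as ET

POLICY = """
<Configure>
  <Plugin name="Dbplugin">
    <entry name="traceout" type="list" value="off" selection="on,off"/>
    <entry name="hostname" type="readWriteText" value="server1"/>
    <entry name="tablename" type="readOnlyText" value="Event"/>
  </Plugin>
</Configure>
"""

def read_configure(xml_text):
    # Map each Plugin name to a dict of its entry name/value pairs.
    root = ET.fromstring(xml_text)
    return {p.get("name"): {e.get("name"): e.get("value") for e in p.findall("entry")}
            for p in root.findall("Plugin")}

settings = read_configure(POLICY)
print(settings["Dbplugin"]["hostname"])  # server1
```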


& Reference Operator--Reference Configure Operation Settings in Other Policy


Other policy operations, most often normalize and enrich, can require the information defined in configure operations to carry out their processing instructions. For example, connection information is necessary to extract information from CA NSM for enrichment purposes. You can use the & operator in policy to reference settings in the <Configure> operations. The & reference operator has the following basic syntax:
&amp;(Plugin name. entry name)

Plugin name
    Defines the Plugin name attribute from the <Configure> section of the policy file from which you want to reference settings.

entry name
    Defines the entry name attribute of the <Configure> section defined above containing a value that you want to reference.

If you need to reference multiple settings from configure operations, repeat the & operator and plugin name for each additional entry name. Use this operator within the context of a Field element for the appropriate attribute. Following is an example usage within the input attribute:
<Field format="{0}" input="&amp;(UniEvent_Assignment.enrichment_variable)" output="&amp;(UniEvent_Assignment.assigned_to_unieventtag)" />

This example uses the enrichment_variable and assigned_to_unieventtag entries of the <Configure> section with the CA NSM plugin name (CA NSM destination policy) to insert enrichment data into a CA NSM destination event in a specified location.
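After the policy XML is parsed, the escaped &amp; becomes a literal &, so a reference reads as &(Plugin name.entry name). The resolution step can be sketched as a simple substitution against the configure settings (a hypothetical illustration with assumed entry values, not product code):

```python
import re

# Assumed settings, as if parsed from a <Configure> section.
settings = {"UniEvent_Assignment": {"enrichment_variable": "assigned_engineer"}}

def resolve_refs(text, settings):
    # Replace each &(Plugin.entry) token with the configured entry value.
    def lookup(match):
        plugin, entry = (part.strip() for part in match.group(1, 2))
        return settings[plugin][entry]
    return re.sub(r"&\(([^.()]+)\.([^()]+)\)", lookup, text)

print(resolve_refs("&(UniEvent_Assignment.enrichment_variable)", settings))
# prints: assigned_engineer
```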

Environment Operation
Environment operations are required for many sources to properly run the adaptor and connector environment. This operation lets you define the Java classpath entries and Java additional parameters necessary for an adaptor to run in a specific environment. All environment operations begin with an <Environment> property. When the policy is merged into a catalog and deployed to a connector, the environment operation creates service wrapper extensions to the wrapper.conf file that configures the adaptor environment. CA Spectrum source and destination policy uses environment operations to include all of the proper jar files in the classpath and point the connector to the appropriate landscape in a remote CA Spectrum scenario.


Example: MySql custom enrichment environment operations The environment operation in the mysql-enrich.xml file is as follows:
<Environment>
    <EnvComponent name="Ifw">
        <enventry name="wrapper.java.unused.99" value="unused;" />
    </EnvComponent>
    <EnvComponent name="Core">
        <enventry name="wrapper.java.classpath.3" value="&amp;(MysqlEnrich.jdbc_jarpath);" />
    </EnvComponent>
</Environment>

This operation tells the core to add the MySql jdbc jar path to the core Java classpath on the connector by referencing the setting you entered in the policy's configure operation.

Sample Event Operation


You can enter event properties and values for a sample event to use when previewing how the policy transforms events in the administrative interface. Define sample events in policy using the <SampleEvents> property. When you preview a catalog, the sample events made available for selection are extracted from the policy files attached to the catalog. You can add several sample events to a policy file to represent common types of events generated by a source, and all of these events will be available when you preview a catalog with the policy file assigned. The sample event operation has the following basic syntax:
<SampleEvents>
    <Event>
        <property tag="eventtype" value="" />
        <property tag="" value="" />
    </Event>
</SampleEvents>

tag
    Defines the event property name. Include all event properties in the sample event that an event from the source contains.

value
    Defines the value for each event property.

Make sure to include the internal eventtype property in every sample event so that the event is correctly classified during transformation.


See the source policy files at EI_HOME\Manager\PolicyStore\sources for example sample events that are defined in each source file.

Classify Operation
Classify operations refine an event class from the generic eventtype to more specific classifications, enabling specific policy to be enacted on different types of events from the same source. The eventtype value searches for a matching <EventClass> property in the classify operation to classify an event, and each <EventClass> can contain several subclasses. For example, you may want to classify events coming from CA NSM into more specific classes according to where the CA NSM Event Manager or Event Agent received the event from. Classify operations do not support inheritance, because classification makes an eventtype more specific, whereas inheritance extends general eventtypes. The core classification module traverses all field elements in order until a field is matched, after which no other fields are considered. Classify operations begin with a <Classify> property, which has the following basic syntax:
<EventClass name=>
    <Classify>
        <Field input= pattern= output= outval= />
    </Classify>
</EventClass>

name
    Defines the name of the event class that you are using to create the policy.

input
    Defines the event property whose value is being used for the classification.

pattern
    Defines the regular expression value that the input attribute value must match for an event to be matched.

output
    Defines the property whose value is assigned the outval attribute. The output is usually the eventtype property.

outval
    Defines the value assigned to the output attribute. This is usually a more specific eventtype value.


Example: Classify Windows Event Log events into specific subgroups The following example divides events received from the Windows Event Log event source that match the specified patterns into two subgroups: SYSLOG-SEC and SYSLOG-APP.
<EventClass name="SYSLOG">
    <Classify>
        <Field input="msg" pattern="^Sec.*$" output="eventtype" outval="SYSLOG-SEC" />
        <Field input="msg" pattern="^App.*$" output="eventtype" outval="SYSLOG-APP" />
    </Classify>
</EventClass>

This operation searches the message text (defined by the "msg" attribute) of events received from the Windows Event Log for words or strings beginning with Sec or App and classifies events that qualify into the more specific SYSLOG-SEC and SYSLOG-APP eventtypes. This operation creates new Windows Event Log subgroups for events received from the application and security logs.
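The first-match-wins traversal that the classification module performs can be sketched as follows (illustrative code, not the core module itself):

```python
import re

SYSLOG_CLASSIFY = [
    # (input, pattern, output, outval), in policy order
    ("msg", r"^Sec.*$", "eventtype", "SYSLOG-SEC"),
    ("msg", r"^App.*$", "eventtype", "SYSLOG-APP"),
]

def classify(event, fields):
    # Traverse the fields in order; stop at the first match.
    for inp, pattern, output, outval in fields:
        if re.match(pattern, event.get(inp, "")):
            event[output] = outval
            break
    return event

print(classify({"msg": "Security log cleared", "eventtype": "SYSLOG"},
               SYSLOG_CLASSIFY)["eventtype"])  # SYSLOG-SEC
```

An event whose message matches neither pattern simply keeps its original eventtype.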

Parse Operation
Parse operations split event properties into additional properties using regular expression subgroups. For example, if an event source groups the old and new severity of a metric into one property, you can parse the severities into separate properties to make the information easier to understand. Parse operations fully support inheritance, so you can parse properties that were created by parsing operations in higher levels. The core parsing module traverses all parsing operation field elements in order from top to bottom until all field elements are processed. Any matches are recorded and processed. Parse operations begin with a <Parse> property, which has the following basic syntax:
<EventClass name=>
    <Parse>
        <Field input= pattern= output= />
    </Parse>
</EventClass>

name
    Defines the name of the event class that you are using to create the policy.

input
    Defines the event property that you want to parse into subgroups.


pattern
    Defines the regular expression pattern that the input event property must match for the event to be parsed. The pattern is divided into subgroups designated by parentheses. If the input event property matches the pattern, it is separated into one property for each subgroup.

output
    Defines the output property to assign to the parsed input event property subgroups. The output properties correspond to the regular expression subgroups in the pattern. Therefore, the first output property is assigned to the first subgroup, and so on. Output property values may be new properties or existing properties.

Example: Parse CA NSM DSM events into additional properties

The following example creates three new properties from one specific event property of events received from CA NSM (through the DSM component) and also moves the trap description in the event to the trapdef output property:
<EventClass name="OPR-DSMEVENT">
    <Parse>
        <Field input="tagA" pattern="^(.*)-(.*)$" output="tagA1,tagA2" />
        <Field input="tagA1" pattern="^(\d\d)(\w+)$" output="tagA1a,tagA1b" />
        <Field input="tagQ" pattern="^TRAP:(.*)$" output="trapdef" />
    </Parse>
</EventClass>

This Parse operation searches the event class OPR-DSMEVENT for events containing the input property "tagA" and parses this property into two separate properties (tagA1, tagA2) by separating the values on either side of a dash. The value parsed into tagA1 is then further parsed into two more separate properties (tagA1a, tagA1b). The policy also takes the TRAP description from events with the "tagQ" property and puts it into the trapdef output property.

Normalize Operation
Normalize operations transform the syntax of event property values to give values from all sources a uniform nomenclature. For example, you may want to map several similar severity property values (Ok, Success, Good, and so on) to Normal for uniformity purposes. You can write normalize operations based on the following types of transformation:

- Regular expression
- Jdbc query
- Java method call
- Command line executable


Normalize operations fully support inheritance. The core normalization module traverses all normalization policy field elements in order from top to bottom until all field elements are processed. Any matches are recorded and processed. Normalize operations begin with a <Normalize> property, which has the following basic syntax:
<Normalize>
    <Field input= type=
           [inputtype= connectionstring= jdbcdriver= query= returntype=] |
           [jclass= method=] | [cmdline=]
           output= />
        [<mapentry mapin= mapout=>]
</Normalize>

Note: Only the input, type, and output attributes are required for all normalization types. The other attributes you must enter depend on the type of normalize operation you are writing. See the type definition for the specific requirements for each normalization type.

input
    Defines the list of properties to normalize.

type
    Defines the type of normalization to perform. You can have multiple fields of the same or different types within a single Normalize property, with no restrictions. The following types are available:
    map
        Matches and normalizes properties against regular expressions. Map normalization uses mapentry elements where each element represents an expression and an output to assign to the property if the expression is matched. These elements are read from top to bottom until a property matches an element, after which additional mapentries are not considered. This type requires you to use the following attributes:
        - mapin
        - mapout
    jdbc
        Uses property values as input parameters in a jdbc query to determine the normalized value for the properties. This type requires you to use the following attributes:
        - inputtype
        - connectionstring
        - jdbcdriver
        - query
        - returntype


    methodcall
        Uses property values as input parameters in a Java method call to determine the normalized value for the properties. Note that the properties are treated as strings using this option, and the Java method must accept a string array as its only parameter. This type requires you to use the following attributes:
        - jclass
        - method
    exe
        Uses property values as input parameters in an executable to determine the normalized value for the properties. This type requires the following attribute:
        - cmdline

mapin (map only)
    Defines a regular expression pattern that is compared against the input property value.

mapout (map only)
    Defines the value assigned to the output property if the input property matches the mapin regular expression.

inputtype (jdbc only)
    Defines the value types for the input properties. Valid values are any Java primitive types such as int, string, long, and bool.

connectionstring (jdbc only)
    Defines a JDBC connection string to a database instance. This string must include the database instance, name, user name, and password. The subsequent JDBC example shows use of a string.

jdbcdriver (jdbc only)
    Defines the JDBC driver Java class. Write the class for this attribute without the .class extension.

query (jdbc only)
    Defines a SQL SELECT query that returns the value to use for the input property.

returntype (jdbc only)
    Defines the value type of the value returned from the JDBC query. Valid types are any Java primitive type such as int, string, and bool.

jclass (methodcall only)
    Defines the Java class full name where you run a method.


method (methodcall only)
    Defines the name of the Java method that returns the value used for the input property.

cmdline (exe only)
    Defines the command line that includes the full pathname and returns the value for the input property. Use substitution markers ({0}, {1}, {2}) that are replaced with the input property values.

output
    Defines the property that is assigned the output value of the normalization operation. The output property is assigned the following value for each normalization type:
    - For map normalization, the output property is assigned the value of the mapout attribute.
    - For jdbc normalization, the output property is assigned the return value of the jdbc query.
    - For methodcall normalization, the output property is assigned the return value of the method call.
    - For exe normalization, the output property is assigned the stdout value of the executable.

For all normalization types, the output property can be a new or existing property.

Example: Normalizing city output by mapping with regular expressions

The following example maps the city and state input properties to one city output property according to regular expressions:
<Normalize>
    <Field input="city,state" type="map" output="city">
        <mapentry mapin="^Cin.*,IA$" mapout="Cincinnati" />
        <mapentry mapin="^Cin.*,OH$" mapout="Cincinnati" />
    </Field>
</Normalize>

This operation searches for city properties that begin with Cin and have a state property of either IA or OH and normalizes these input properties to a city output that reads Cincinnati.
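The map type can be mimicked by joining the input property values with commas and testing each mapentry in order, as the ^Cin.*,OH$ patterns suggest (this joined-matching reading is an assumption made for illustration; the sketch is not product code):

```python
import re

def normalize_map(event, inputs, mapentries, output):
    # Join the input property values and apply the first matching mapentry.
    joined = ",".join(event[name] for name in inputs)
    for mapin, mapout in mapentries:
        if re.match(mapin, joined):
            event[output] = mapout
            break
    return event

entries = [(r"^Cin.*,IA$", "Cincinnati"), (r"^Cin.*,OH$", "Cincinnati")]
event = {"city": "Cinti", "state": "OH"}
print(normalize_map(event, ["city", "state"], entries, "city")["city"])  # Cincinnati
```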


Example: Normalizing vendor output using a jdbc query

The following example finds the vendorid input property and normalizes the property to display the vendor's name using a jdbc query:
<Normalize>
    <Field input="vendorid" inputtype="string" type="jdbc"
           connectionstring="jdbc:sqlserver://server01;databaseName=trapdb;user=sa;password=sa;"
           jdbcdriver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
           query="select vendorname from traptable where vendorid=?"
           returntype="string" output="vendorname" />
</Normalize>

This operation searches for vendorid properties and normalizes these properties by running a jdbc query to find the vendor name for each vendorid in a database that contains this information. The operation then displays the vendor name in place of the vendorid in a new vendorname output property.

Example: Normalizing zip code output using a Java method

The following example finds the city, state, and zip input properties and normalizes this information into one "ninedigitzip" property by running a Java method:
<Normalize>
    <Field input="city,state,zip" type="methodcall"
           jclass="com.ca.eventplus.catalog.methods.ZipCode"
           method="ConvertZip" output="ninedigitzip" />
</Normalize>

This operation normalizes the city, state, and zip input properties into the value of the zip code expressed in nine digits by running a Java method to obtain this value. The operation then displays the nine digit zip code in a new output property in place of the input properties.

Example: Normalizing zip code output using a command line executable

The following example finds the city, state, and zip input properties and normalizes this information into one "ninedigitzip" property by running a command line executable:
<Normalize>
    <Field input="city,state,zip" type="exe"
           cmdline="c:\\normzip.exe {0} {1} {2}" output="ninedigitzip" />
</Normalize>


This operation normalizes the city, state, and zip input properties into the value of the zip code expressed in nine digits by running a command line executable to obtain this value. The executable contains substitution markers that are replaced with each input property value to calculate the nine digit zip code using this information. The operation then displays the nine digit zip code in a new output property in place of the input properties.
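The substitution markers behave like positional placeholders, so the command line the core would launch can be previewed with a small helper (a hypothetical illustration, not product code):

```python
def build_cmdline(template, values):
    # Replace {0}, {1}, {2}, ... with the corresponding input property values.
    for i, value in enumerate(values):
        template = template.replace("{%d}" % i, value)
    return template

print(build_cmdline(r"c:\normzip.exe {0} {1} {2}", ["Dayton", "OH", "45402"]))
# c:\normzip.exe Dayton OH 45402
```

The executable's stdout then becomes the value of the output property, per the exe normalization rules above.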

Filter Operation
Filter operations exclude certain events from further processing and dispatching to destinations. These operations can exclude or include events with a certain event property value or combinations of property values. For example, you may want to exclude the processing of events with a Normal severity. The core evaluates filter operations in the order entered in the file. It adheres to the following conventions:

- If an event matches exclude criteria, the core immediately discards the event without evaluating ensuing filter entries.
- If an event matches include criteria, the core keeps the event and does not evaluate ensuing entries.
- If an event does not match an entry's criteria, the core continues to evaluate the filter operations in order through the end of the filter operations.

Assembling ordered combinations of filter criteria gives you the flexibility to create complex filters. Create filter operations in any of the following ways:

- Directly in source or destination policy files using the syntax described in this topic. This method applies specific filters to events received from a specific source or events being sent to a specific destination.
- Directly or from the administrative interface in the default-filter.xml file. This file lets you define filter criteria that the core uses to evaluate all events converted into its internal schema. The same filtering logic applies when using this method. You add this file to catalogs as an enrichment. For more information about configuring default-filter.xml from the administrative interface, see Filter Policy Configuration (see page 151).

Filter operations begin with a <Filter> property, which has the following basic syntax:
<EventClass name=>
    <Filter>
        <Field type= input= pattern= />
    </Filter>
</EventClass>


name
    Defines the name of the event class that you are using to create the policy.

type
    Defines whether you want to include or exclude events matching the pattern. Specify exclude to exclude all events matching the pattern, and specify include to include all events matching the pattern.

input
    Defines the event property value that is being compared against the pattern attribute for filtering. You can combine multiple properties into one entry using a comma-delimited list. In this scenario, both properties must match their corresponding patterns for the filter criteria to be met.

pattern
    Defines the regular expression pattern that the input property must match to trigger the filtering action. Use comma-delimited patterns to correspond to multiple input values. If the pattern is matched for an exclude filter type, the entire event is excluded from further processing and is directly sent to the core Write module for output. If matched for an include filter type, the core includes the event in processing and dispatching. You can use this attribute on the last filter entry to create a default filter for all events not filtered by other entries. For example, you can enter the regular expression ^.*$ to exclude all events not filtered or included by the preceding entries. For more information, see the examples below.

Example: Exclude low severity events

The following example excludes events received from the Windows Event Log with a severity of OK or Normal:
<EventClass name="SYSLOG">
    <Filter>
        <Field input="internal_newseverity" pattern="^30$" type="exclude" />
    </Filter>
</EventClass>

Example: Include high severity events

The following example includes events received from the Windows Event Log with a severity of Critical and excludes all other severities:
<EventClass name="SYSLOG">
    <Filter>
        <Field input="internal_newseverity" pattern="^(70|90)$" type="include" />
        <Field input="internal_newseverity" pattern="^.*$" type="exclude" />
    </Filter>
</EventClass>

266 Product Guide

Example: Create a complex filter

The following example filters Windows Event Log events by including high severity events of a certain class and excluding all other events:
<EventClass name="SYSLOG">
    <Filter>
        <Field input="internal_newseverity,internal_resourceclass" pattern="^(70|90),Application$" type="include" />
        <Field input="internal_newseverity" pattern="^.*$" type="exclude" />
    </Filter>
</EventClass>

This policy filters events received from the Windows Event Log by doing the following in order:

- Including all events with a Critical severity and a resourceclass of "Application"
- Excluding all events not explicitly included by a previous entry
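
Filters are not limited to severity properties; any property in the internal schema can serve as input. The following sketch is illustrative only — the property name internal_message is assumed here to hold the event text and is not taken from the preceding examples. It would discard any event whose message contains the word "test":

```
<EventClass name="SYSLOG">
    <Filter>
        <Field input="internal_message" pattern="^.*[Tt]est.*$" type="exclude" />
    </Filter>
</EventClass>
```

Because this is an exclude entry, matching events are dropped from further processing, while all other events continue through the core unchanged.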

Consolidate Operation
Consolidate operations consolidate duplicate events into one output event with an attribute defining the number of duplicates. This operation reduces event volume so that destinations receive a consolidated set of quality events. Consolidate operations use the following criteria to detect and consolidate duplicate events:

- A list of specified event properties that must have the same value across events
- A duplicate events per minute threshold that enables consolidation
- A deactivate threshold that disables consolidation when no duplicates occur after a certain period of time
- A release interval that releases duplicate events to their destinations after a certain period of time

You configure all of these values in the policy file to control when consolidation takes place. Consolidation occurs as follows when configured and activated:

- The first event is always sent to the destination, because the core cannot hold the event to wait for duplicates.
- When the core receives an event that meets the consolidation criteria for an event processed within the deactivate threshold, it holds the event and waits for duplicates.
- The core collects duplicates and releases one event with a duplicate count attribute after the deactivate threshold is reached (no duplicates occur for a specified amount of time) or the release interval is reached, whichever comes first. The duplicate count does not take the initial event into account.

Create consolidate operations in any of the following ways:

- Directly in source or destination policy files using the syntax described in this topic. This method applies consolidate operations to events received from a specific source or events being sent to a specific destination.
- Directly or from the administrative interface in the default-consolidate.xml file. This file lets you define consolidate operations that the core uses to evaluate all events converted into its internal schema. You add this file to catalogs as an enrichment. For more information about configuring default-consolidate.xml from the administrative interface, see Consolidation Policy Configuration (see page 152).

Consolidate operations support class hierarchy, but consolidation must be mutually exclusive. You cannot consolidate twice; a child consolidation would fully override the base consolidation. Consolidate operations begin with a <Consolidate> property, which has the following basic syntax:
<EventClass name=>
    <Consolidate activate= deactivate= releaseinterval=>
        <Field input= pattern=>
    </Consolidate>
</EventClass>

activate
    Defines the duplicate event rate per minute at which to begin consolidating. At any rate below this setting, the core does not consolidate duplicate events. Set this value to zero to always activate consolidation.

deactivate
    Defines how long to wait in minutes without another duplicate event before deactivating event consolidation.

releaseinterval
    Defines how long to wait in minutes before releasing duplicate events to the destination if the deactivate threshold is not reached. This setting prevents a situation where duplicate events keep occurring consistently for a long period of time and are never sent because the deactivate threshold is never reached.

input
    Defines the event properties to evaluate for consolidation. You can enter multiple properties in a comma-delimited list. If you enter a property combination, all of the property values must be duplicates for consolidation to occur.

pattern
    (Optional) Defines a regular expression pattern to identify inconsequential pieces of a field that might change. For example, a message may contain a timestamp that is different in each event and should not be considered if consolidating based on the message content.

Example: Consolidate events with similar internal tag values

The following example consolidates events based on three internal event properties:
<EventClass name="any">
    <Consolidate activate="10" deactivate="5" releaseinterval="5">
        <Field input="internal_resourceclass" pattern="" />
        <Field input="internal_resourceaddress" pattern="" />
        <Field input="internal_resourceinstance" pattern="" />
    </Consolidate>
</EventClass>

This example consolidates all events from any source that have the same values for the internal_resourceclass, internal_resourceaddress, and internal_resourceinstance properties. The consolidation policy takes effect only if events are occurring at a rate of at least ten per minute, and the consolidation stops (if it is currently active) when five minutes pass without a duplicate event. If duplicate events occur consistently for five minutes without reaching the deactivate threshold, the duplicate is released to the destination and the consolidation starts over. This is an example of an operation you could configure using default-consolidate.xml, because it applies to events from all sources in the internal event format. You can configure an operation similar to this from the administrative interface and add the default-consolidate.xml file to a catalog as an enrichment.
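
The pattern attribute is most useful for masking volatile text, such as timestamps, so that otherwise identical events are recognized as duplicates. The following sketch is hypothetical — the internal_message property name and the timestamp expression are assumptions rather than shipped policy. It consolidates events from the same address whose messages differ only in a leading HH:MM:SS timestamp:

```
<EventClass name="any">
    <Consolidate activate="10" deactivate="5" releaseinterval="5">
        <Field input="internal_resourceaddress" pattern="" />
        <!-- The pattern marks the leading timestamp as inconsequential,
             so it is ignored when comparing message content. -->
        <Field input="internal_message" pattern="^[0-9]{2}:[0-9]{2}:[0-9]{2}" />
    </Consolidate>
</EventClass>
```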

Enrich Operation
Enrich operations look up additional properties from an external source using current event property values and create new event properties from the retrieved properties. For example, you may want to enrich an event received from CA NSM with contact information for the resource in CA NSM WorldView or CA CMDB Configuration Item (CI) information. You can include enrich operations in any type of policy file (source, destination, and enrichment). You can write enrich operations based on the following types of transformation:

- Regular expression
- JDBC query
- Java method call
- Command line executable

Enrich operations fully support inheritance. The core enrichment module traverses all enrich operation field elements in order from top to bottom until all field elements are processed. Any matches are recorded and processed. Enrich operations begin with an <Enrich> property, which has the following basic syntax:
<Enrich>
    <Field input= type= outputtype=
           [inputtype= connectionstring= jdbcdriver= query= returntype= column=] |
           [jclass= method=] | [cmdline=]
           output= />
    [<mapentry mapin= mapout=>]
</Enrich>

Note: Only the input, type, outputtype, and output attributes are required for all enrichment types. The other attributes you must enter depend on the type of enrich operation you are writing. See the type definition for the specific requirements for each enrichment type.

input
    Defines the list of properties to enrich with information from external sources. If you are using a single multiple-column property as input, enter the column values in a comma-delimited list using the following format:
property_column_order

property
    Specifies the event property name.

column
    Specifies a column value from the column attribute.

order
    Specifies an integer, starting with 0, indicating the order in which the specific column value is returned.

For example, if you are using a user property as input that is made up of column values firstname and lastname, the input property would read as follows:
user_firstname_0,user_lastname_1

type
    Defines the type of enrichment to perform. You can have multiple fields of the same or different types within a single enrich operation, with no restrictions. The following are the available types:

map
    Matches and enriches properties using regular expressions. Map enrichment uses mapentry elements, where each element represents an expression and an output to assign to the property if the expression is matched. These elements are read from top to bottom until a property matches an element, after which additional mapentries are not considered. This type requires you to use the following attributes:
    - mapin
    - mapout

jdbc
    Uses property values as input parameters in a jdbc query to determine an enriched value for the properties. This type requires you to use the following attributes:
    - inputtype
    - connectionstring
    - jdbcdriver
    - query
    - returntype

    You can return multiple columns and rows from a complex jdbc query and use the column attribute in the policy to identify each of these returned columns uniquely. See the complex jdbc example below for details.

methodcall
    Uses property values as input parameters in a Java method call to determine an enriched value for the properties. Note that the properties are treated as strings using this option, and the Java method must accept a string array as its only parameter. This type requires you to use the following attributes:
    - jclass
    - method

exe
    Uses property values as input parameters in an executable to determine an enriched value for the properties. This type requires the following attribute:
    - cmdline

outputtype
    Defines the type of output to return. Enrichment processing can return standard output, a list output, or a paired list output. The following are valid values for this attribute:

    std
        Indicates that the returned output is a singular value.

    ref
        Indicates that the given mapout value is a variable that contains the output to return.

    list
        Indicates that the returned output value is a comma-delimited list of values, such as red,blue,green. The resulting properties are referenced as "xxxxxx_y", where xxxxxx is the name of the output property (as designated by the output attribute) and y is the index of the list value (where 0 is the first element in the list). For example, if the output attribute is color, blue in the above list would be referenced as color_1.

    pairedlist
        Indicates that the returned output value is a paired comma-delimited list of values, such as color,red,size,large,name,fido. The resulting properties are referenced as "xxxxxx_zzzzz", where xxxxxx is the name of the output property (as designated by the output attribute) and zzzzz is the name of the returned property in the list (color, size, and name in the above example). For example, if the output attribute is myenrichsource, size in the above example (whose value is large) would be referenced as myenrichsource_size.

mapin
    (map only) Defines a regular expression pattern that is compared against the input property value.

mapout
    (map only) Defines the value assigned to the output property if the input property matches the mapin regular expression. Specify an event property for this attribute to map to the event property's value.

inputtype
    (jdbc only) Defines the value types for the input properties. Valid values are any Java primitive types such as int, string, long, and bool.

connectionstring
    (jdbc only) Defines a JDBC connection string to a database instance. This string must include the database instance, name, user name, and password. The subsequent JDBC example shows a sample connection string.

jdbcdriver
    (jdbc only) Defines the JDBC driver Java class. Write the class for this attribute without the .class extension.

query
    (jdbc only) Defines a SQL SELECT query that returns the value to use for the input property.

returntype
    (jdbc only) Defines the value type of the value returned from the JDBC query. Valid types are any Java primitive type such as int, string, and bool. If using a complex jdbc query that returns multiple values, you must specify the return type for each value in a comma-delimited list. The order of the list should correspond to the values defined in the column attribute.

column
    Defines aliases for returned columns when multiple columns of data are being returned from a jdbc query. Specify a comma-delimited list of identifiers for each column value so that each piece of data is uniquely referenced in separate properties. This attribute is only required with a multiple column jdbc query. For more information, see the complex jdbc query example.

jclass
    (methodcall only) Defines the full name of the Java class where you run a method.

method
    (methodcall only) Defines the name of the Java method that returns the value used for the input property.

cmdline
    (exe only) Defines the command line, including the full pathname, that returns the value for the input property. Use substitution markers ({0}, {1}, {2}) that are replaced with the input property values.

output
    Specifies the property that is assigned the output value of the enrich operation. The output property is assigned the following value for each enrichment type:

- For map enrichment, the output property is assigned the value of the mapout attribute.
- For jdbc enrichment, the output property is assigned the return value of the jdbc query.

- For methodcall enrichment, the output property is assigned the return value of the method call.
- For exe enrichment, the output property is assigned the stdout value of the executable.

For all enrichment types, the output property can be a new or existing property.

Example: Enrich city and state output with region information

The following example maps the city and state input properties to one region output property according to regular expressions:
<Enrich>
    <Field input="city,state" type="map" output="region" outputtype="std">
        <mapentry mapin="^Cin.*,OH$" mapout="Midwest" />
        <mapentry mapin="^New York.*,NY$" mapout="East" />
    </Field>
</Enrich>

This operation searches for events whose city property begins with Cin with a state property of OH, or whose city property begins with New York with a state property of NY. It enriches these events by adding the appropriate region for each of these locations.

Example: Enrich resource output with department information using a jdbc query

The following example searches for the host name associated with an event and enriches the event with the department the host belongs to using a jdbc query:
<EventClass name="OPR">
    <Enrich>
        <Field input="internal_resourceaddr" inputtype="string" type="jdbc" outputtype="std"
               connectionstring="jdbc:sqlserver://server01;databaseName=mdb;user=nsmadmin;password=admin;"
               jdbcdriver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
               query="select organization_uuid from ca_resource_department where id=?"
               returntype="string" output="department" />
    </Enrich>
</EventClass>

This operation uses the internal_resourceaddr value as the input to the query (where id=?) and runs a jdbc query on the mdb table that contains department information for each resource. The operation then displays the host's department in a new department output property.
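
A jdbc enrichment can also bind multiple input properties to a query, with each property filling the query's ? placeholders in order. The following sketch is hypothetical — the resource_owners table and owner_contact column do not exist in the mdb, and it assumes that inputtype accepts a comma-delimited list matching the inputs. It is meant only to show the shape of a two-parameter query:

```
<EventClass name="OPR">
    <Enrich>
        <!-- Hypothetical lookup: both bind parameters are filled from the event,
             in the order given in the input attribute. -->
        <Field input="internal_resourceaddr,internal_resourceclass" inputtype="string,string"
               type="jdbc" outputtype="std"
               connectionstring="jdbc:sqlserver://server01;databaseName=mdb;user=nsmadmin;password=admin;"
               jdbcdriver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
               query="select owner_contact from resource_owners where address=? and class=?"
               returntype="string" output="contact" />
    </Enrich>
</EventClass>
```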

Example: Enrich resource output with a complex jdbc query

The following example enriches an event with multiple columns and rows from a database table. This scenario is similar to the prior example, with the name and description also being queried from the ca_resource_department table:
<EventClass name="OPR">
    <Enrich>
        <Field input="internal_resourceaddr" inputtype="string" type="jdbc" outputtype="pairedlist"
               column="name,desc,org"
               connectionstring="jdbc:sqlserver://server01;databaseName=mdb;user=nsmadmin;password=admin;"
               jdbcdriver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
               query="select name,description,organization_uuid from ca_resource_department where id=?"
               returntype="string,string,string" output="department" />
    </Enrich>
</EventClass>

This operation uses the internal_resourceaddr value as in the prior example. The query returns the corresponding name, description, and organization_uuid and assigns this information to new properties using the defined column attributes as follows:

- The returned name is assigned to department_name_0
- The returned description is assigned to department_desc_0
- The returned organization is assigned to department_org_0

When a query returns multiple rows, the trailing number on the property represents the row number. For example, a second returned row would be labeled as department_name_1, department_desc_1, and department_org_1.

Example: Enrich CA Spectrum events with CA Spectrum model handle

The following example extracts the CA Spectrum model handle from CA Spectrum and enriches the event with this information using a Java method:
<Enrich>
    <Field output="spectrum_ModelHandle" outputtype="std" type="methodcall"
           input="&amp;(Spectrum.landscape),&amp;(Spectrum.landscapeuser),spectrum_MTypeName,{ip(internal_resourceaddr)},&amp;(Spectrum.ei_lostfound)"
           jclass="com.ca.eventplus.ifw.plugin.spectrum&amp;(Spectrum.plugin_version).EMAAGetModelIDFromIP"
           method="getModelID">
    </Field>
</Enrich>

This operation extracts the CA Spectrum model handle from the CA Spectrum Managed Object associated with this event using the getModelID Java method and enriches the event with this information in the spectrum_ModelHandle property. This method uses the &amp; reference operator to obtain the CA Spectrum landscape and user defined in the <Configure> section of the spectrum-dest policy file. The method also uses the spectrum_MTypeName and internal_resourceaddr input values from the event so that it can extract the correct model handle.

Example: Enrich CA NSM events with WorldView object properties

The following example extracts CA NSM WorldView properties from CA NSM and enriches the event with this information using a command line executable:
<Enrich>
    <Field cmdline="./Enrich/NSMWV/Ace_GetObjectProperties /r &amp;(NsmEnrich.repository) /u &amp;(NsmEnrich.userid) /p &amp;(NsmEnrich.password) /pf &amp;(NsmEnrich.propertyname) /pv {0}"
           input="internal_resourceaddr" output="wv" outputtype="pairedlist" type="exe" />
</Enrich>

This operation extracts WorldView properties from the CA NSM WorldView object associated with this event using a provided command line executable and enriches the event with this information in a paired list format in the wv tag. This executable uses the &amp; reference operator to obtain the WorldView repository connection information defined in the <Configure> section of the nsm-enrich policy file. The executable also uses the internal_resourceaddr input value from the event so that it can extract the correct object information.
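
The cmdline substitution markers map to the input list in order: {0} is replaced by the first input property, {1} by the second, and so on. As a hypothetical sketch (the script path and its output are illustrative, not a shipped utility), the following entry passes two event properties to a lookup script and stores its stdout in a new owner property:

```
<Enrich>
    <!-- Hypothetical script: {0} receives internal_resourceaddr,
         {1} receives internal_resourceclass. -->
    <Field cmdline="/opt/scripts/lookup_owner.sh {0} {1}"
           input="internal_resourceaddr,internal_resourceclass"
           output="owner" outputtype="std" type="exe" />
</Enrich>
```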

Evaluate Operation
Evaluate operations evaluate streams of events against defined rules and run workflow actions when the rules are met. You can write evaluate operations to automate intelligent actions in response to one or more event conditions that require more acknowledgment or action than a simple resolution. This functionality can provide many of the capabilities of CA NSM Event Management in other products such as CA Spectrum SA. Consider the following examples:

- You can write a rule that detects if a source (SNMP, application log, and so on) generates three events reporting poor response time in a ten minute interval and creates a new event with a higher severity than the previous events that indicates a consistently poor response time.
- You can write rules to infer a clear event for low-level event sources that do not contain this mechanism. For example, an exception from an application log may not have the ability to clear a previously logged exception. You can write a rule for these exceptions to detect symptoms that the exception has been resolved and send a true clear alert to the event destination.
- You can write rules to associate events being sent to CA Spectrum SA as infrastructure alerts with the appropriate CI type. For example, you can write a rule to detect multiple events that indicate a database problem (where one event would not make this clear) and map the events to a Database CI.

Evaluation occurs after the final enrichment in the core. You write evaluate operations in policy, but the operations rely on the Drools language, which adheres to a different format and must be inserted in the evaluate operation. For more information about writing Drools event-based rules and workflow actions, see the following page for Drools documentation (version 5): http://www.jboss.org/drools/documentation.html. Evaluate operations begin with an <Evaluate> property, which has the following basic syntax:
<Evaluate>
    <Field input="rule name" output="DRL">
        <![CDATA[
        <Drools rule>
        ]]>
    </Field>
    <Field input="action name" output="DRF">
        <![CDATA[
        <Drools action>
        ]]>
    </Field>
</Evaluate>

input
    Defines the name of the event rule in the rule section and the name of the corresponding action in the action section.

output
    Defines the type of Drools language to output. Use DRL for rules and DRF for actions.

Drools rule
    Defines the event rule criteria in the Drools language. See the example below for a full rule with explanations.

Drools action
    Defines the event workflow action to run if the rule criteria are met. See the example below for a full action with explanations. A workflow action is not required for every rule if the rule itself can perform the appropriate action.

Example: Detect immediate service shutdown, create a higher severity event, and write to a CSV file

The following example detects when a Windows service shuts down within 30 seconds after starting. These operations are tracked in separate events, so an event rule is required to correlate the events and trigger an appropriate action. In this case, the operation creates a new event to replace the other events with a message and severity that reflects the more serious nature of the situation, and also prints the message to a CSV file. The output event is sent to CA Spectrum SA. This evaluate operation contains a rule and an action.

Note: You should always create a new event, not update an existing one.

The rule for this example is as follows:


<Evaluate>
<Field input="Eval1" output="DRL">
<![CDATA[
package com.ca.eventplus.catalog;

import com.ca.eventplus.catalog.util.EPEvent;
import java.util.HashMap;

declare EPEvent
    @role(event)
end

rule "Correlation Test"
no-loop true
duration 30
when
    up : EPEvent( resourceclass == "DaemonProcess",
                  message matches "^.*entered the running state.*$" )
    dn : EPEvent( resourceclass == "DaemonProcess",
                  resourceinstance == up.resourceinstance,
                  message matches "^.*entered the stopped state.*$",
                  this after[0s,30s] up )
then
    System.out.println( "drools:rule matches " );
    HashMap hm = new HashMap();
    hm.put("time", dn.getGentime());
    hm.put("msg", dn.getResourceinstance() + " is cycling down immediately after startup");
    hm.put("eventtype", "any");
    hm.put("sam_className", dn.getSiloproperty("temp_samclass"));
    hm.put("sam_connectorName", "");
    hm.put("sam_deviceID", dn.getResourceaddr().toLowerCase());
    hm.put("sam_entitytype", "EVENT");
    hm.put("sam_eventDetail", dn.getResourceinstance() + " is cycling down immediately after startup");
    hm.put("sam_reportTime", dn.getGentime());
    hm.put("sam_resourceID", dn.getSiloproperty("temp_samclass")+": "+dn.getResourceaddr().toLowerCase());
    hm.put("sam_severity", "3");
    hm.put("sam_siloAlarmID", dn.getAlarmid()+": "+dn.getResourceaddr().toLowerCase()+": "+dn.getResourceinstance().toLowerCase());
    hm.put("sam_siloID", "Unknown");
    hm.put("sam_siloName", "");
    hm.put("sam_situationCategory", "StatusReport");
    hm.put("sam_situationMessage", dn.getAlarmid()+" "+dn.getResourceaddr()+": "+dn.getResourceinstance()+" (cycling)");
    hm.put("sam_situationType", "Risk");

    hm.put("usm_AlertType", "Risk");
    hm.put("usm_AlertedMdrElementID", dn.getResourceaddr().toLowerCase());
    hm.put("usm_AlertedMdrProduct", "CA:00037");
    hm.put("usm_AlertedMdrProdInstance", dn.getResourceaddr());
    hm.put("usm_MdrProduct", "CA:00037");
    hm.put("usm_MdrProdInstance", dn.getResourceaddr());
    hm.put("usm_MdrElementID", dn.getAlarmid()+": "+dn.getSiloproperty("temp_subresourceID").toLowerCase());
    hm.put("usm_Summary", dn.getAlarmid()+" "+dn.getResourceaddr()+": "+dn.getResourceinstance()+" (cycling)");
    hm.put("usm_Message", dn.getResourceinstance() + " is cycling down immediately after startup");
    hm.put("usm_OccurrenceTimestamp", dn.getGentime());
    hm.put("usm_severity", "3");
    hm.put("usm_entitytype", "Alert");
    retract(up);
    retract(dn);
    kcontext.getKnowledgeRuntime().startProcess("com.ca.eventplus.catalog.ruleflow", hm);
end
]]>
</Field>

The input and output properties define the rule name and output. The Drools rule is embedded in the <![CDATA[ section. The Drools rule contains the following sections:

import
    Defines Java classes to import for use in the rule. This declaration must include the EPEvent class, which describes the event properties that the Drools engine can use.

declare EPEvent
    Declares EPEvent as an event role, enabling correlation between events.

rule "Correlation Test"
    Starts the event rule.

when
    Defines the rule criteria. The when clause in this example looks for the following events occurring within 30 seconds of one another:

- An event with an internal_resourceclass value of DaemonProcess and a message that contains the text 'entered the running state'
- An event with an internal_resourceclass value of DaemonProcess, an internal_resourceinstance that matches the resource instance of the 'up' event, and a message that contains the text 'entered the stopped state'

Note the format of the clause, specifically how it uses the EPEvent method to retrieve and evaluate the internal event properties. Also note the syntax of the clause that defines the time interval between events. In general, use the following conventions to reference event properties in the when clause:

- Reference CA Event Integration internal event properties using the property name with the 'internal_' prefix removed. For example, the when clause uses resourceclass to reference internal_resourceclass.
- Reference source or destination event properties (such as CA Spectrum SA properties in the USM format) using the accessor method getSiloProperty() with the full property name in parentheses. For example, getSiloProperty("usm_Severity") references an event's usm_Severity property.

then
    Defines the action to run when the criteria in the when clause are met. The then clause in this example defines the following actions:

- Sends the properties for a new CA Spectrum SA alert to a Java Hash Map, including the properties of the down event with a new event message (including "is cycling down immediately after startup") and a new severity to reflect the more serious nature of the problem
- Puts the new message and event gentime in a Java Hash Map for separate use
- Calls a workflow interface

Note the format of the clause, specifically how it sets the event properties to new values, populates the necessary CA Spectrum SA destination event tags, and uses a Java Hash Map to store the new event properties. In general, use the following conventions to reference and set event property values in the then clause:

- Get and set the values of internal event properties using the accessor methods getProperty() and setProperty("value"), where Property is the capitalized internal event property name with the 'internal_' prefix removed. For example, getResourceclass() gets the value of the event's internal_resourceclass property, and setResourceclass("DaemonProcess") sets the value of the event's internal_resourceclass property to DaemonProcess.
- Get and set the values of source or destination event properties using the accessor methods getSiloProperty("property") with the full property name in parentheses and setSiloProperty("property", "value") with the full property name followed by the new value in parentheses. For example, setSiloProperty("usm_Message", "System failure") sets the event's usm_Message property value to System failure.

The action workflow for this example is as follows:


<Field input="Action1" output="DRF">
<![CDATA[
<?xml version="1.0" encoding="UTF-8"?>
<process xmlns="http://drools.org/drools-5.0/process"
         xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
         xs:schemaLocation="http://drools.org/drools-5.0/process drools-processes-5.0.xsd"
         type="RuleFlow" name="ruleflow"
         id="com.ca.eventplus.catalog.ruleflow"
         package-name="com.ca.eventplus.catalog" >
  <header>
    <imports>
      <import name="java.io.PrintWriter" />
      <import name="java.io.FileOutputStream" />
      <import name="java.util.Date" />
      <import name="java.sql.Timestamp" />
    </imports>
  </header>
  <nodes>
    <start id="1" name="Start" x="16" y="16" />
    <actionNode id="2" name="WriteToFile" >
      <action type="expression" dialect="java" >
try {
    System.out.println( "drools:in RuleFlow action" );
    Timestamp tstamp = new Timestamp(new Date().getTime());
    PrintWriter pw = new PrintWriter(new FileOutputStream
        ("../Inbox/CorrelationEvent-CorrTest-"+tstamp.getTime()+".in", true));
    pw.println("eventtype=" + (String)kcontext.getVariable("eventtype"));
    pw.println("usm_AlertType=" + (String)kcontext.getVariable("usm_AlertType"));
    pw.println("usm_AlertedMdrElementID=" + (String)kcontext.getVariable("usm_AlertedMdrElementID"));
    pw.println("usm_AlertedMdrProduct=" + (String)kcontext.getVariable("usm_AlertedMdrProduct"));
    pw.println("usm_AlertedMdrProdInstance=" + (String)kcontext.getVariable("usm_AlertedMdrProdInstance"));
    pw.println("usm_MdrElementID=" + (String)kcontext.getVariable("usm_MdrElementID"));
    pw.println("usm_MdrProduct=" + (String)kcontext.getVariable("usm_MdrProduct"));
    pw.println("usm_MdrProdInstance=" + (String)kcontext.getVariable("usm_MdrProdInstance"));
    pw.println("usm_Summary=" + (String)kcontext.getVariable("usm_Summary"));
    pw.println("usm_Message=" + (String)kcontext.getVariable("usm_Message"));

    pw.println("usm_OccurrenceTimestamp=" + (String)kcontext.getVariable("usm_OccurrenceTimestamp"));
    pw.println("usm_severity=" + (String)kcontext.getVariable("usm_severity"));
    pw.println("usm_entitytype=" + (String)kcontext.getVariable("usm_entitytype"));
    pw.close();
} catch(Exception e) {
    e.printStackTrace();
}
      </action>
    </actionNode>
    <actionNode id="3" name="LogIt" x="128" y="16" >
      <action type="expression" dialect="java" >
try {
    PrintWriter pw = new PrintWriter(new FileOutputStream("c:/daemoncycle.csv", true));
    pw.print("user," + (String)kcontext.getVariable("time") + "," + (String)kcontext.getVariable("msg"));
    pw.println();
    pw.close();
} catch(Exception e) {
    e.printStackTrace();
}
      </action>
    </actionNode>
    <end id="4" name="End" x="240" y="16" />
  </nodes>
  <connections>
    <connection from="1" to="2" />
    <connection from="2" to="3" />
    <connection from="3" to="4" />
  </connections>
</process>
]]>
</Field>
</Evaluate>

The input and output properties define the action name and output. The Drools workflow is embedded in the <![CDATA[ section. The Drools workflow contains the following sections:

process
    Defines the Drools materials to use. You must define the workflow that you defined in the rule in the id property.

imports
    Imports the Java classes required to complete the workflow action.

nodes
    Completes the following actions:

Writes the new CA Spectrum SA event properties from the hash map to a file in the core Inbox folder, which results in a new CA Spectrum SA alert. Writes the time and message event properties (that were added separately to the hash map in the rule) to a c:\daemoncycle.csv file.

For additional examples and information about the syntax and requirements of the Drools language version 5, see the following page: http://www.jboss.org/drools/documentation.html.

Format Operation
Format operations combine property values into a new or existing property using a specified format. You can use format operations to present information from events received from event sources in a new property that adheres to a specified format. Format operations fully support inheritance.

The core formatting module traverses all format operation fields in order from top to bottom until all Field elements are processed. Any matches are recorded and processed.

Format operations begin with a <Format> property, which has the following basic syntax:
<Format>
    <Field conditional= input= format= output= />
</Format>

conditional

(Optional) References an event property whose existence or non-existence determines whether the format operation is performed.

input

Defines an event input property or list of event properties that you want output in another property with a new format.


format

Defines the format to give the specified input properties. Use substitution markers to indicate each property according to their order in the input attribute. For more information, see the formatting example.

output

Defines a single output property that is assigned the value of the reformatted input properties.

Example: Format severity event input into a meaningful description

The following example searches for events indicating a severity change for a resource and reformats this information into a short description:
<Format> <Field input="resource,hostwork,severity" format="The {0} on machine {1} is {2}" output="description" /> </Format>

This operation searches for the resource, hostwork, and severity properties, which usually indicate a change in severity for a resource on a host system. The policy reformats these three input values into a short description and outputs this description to the description property. The format attribute uses substitution markers according to the input properties' order in the input attribute to put the property values correctly in the description. An example output description property would look like the following: "The CPU on machine server01 is Critical."

Example: Format an assigned property with conditional criteria

The following example assigns the property referenced by township to the output property, but only if township exists. If township does not exist, the property referenced by city is assigned.
<Format>
    <Field conditional="township" input="township" format="{0}" output="municipality" />
    <Field conditional="!township" input="city" format="{0}" output="municipality" />
</Format>
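The {0}-style substitution markers behave like java.text.MessageFormat patterns. The following standalone Java sketch is illustrative only — the core formatting module, not MessageFormat, performs the real substitution — but it reproduces the severity example above:

```java
import java.text.MessageFormat;

public class FormatDemo {
    public static void main(String[] args) {
        // Input property values, in the same order as the input attribute:
        // resource, hostwork, severity
        String resource = "CPU";
        String hostwork = "server01";
        String severity = "Critical";

        // The format attribute from the severity example above
        String description = MessageFormat.format(
                "The {0} on machine {1} is {2}", resource, hostwork, severity);

        System.out.println(description);
        // prints: The CPU on machine server01 is Critical
    }
}
```

Each marker is replaced positionally, so reordering the input list without updating the markers silently swaps values in the output property.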

Write Operation
The write operation puts the final event, with all existing and new properties, into an internal buffer for subsequent access by the destination adaptors. Write operations define where an event is sent after processing. To send an event to multiple destinations, you can copy the event into several internal buffers, one for each destination.


Write operations fully support inheritance. The core write module processes only a single Write element that defines where to copy the event. Write operations begin with a <Write> property, which has the following basic syntax:
<Write tagfilter= output= />

tagfilter

Defines a regular expression that, if matched against an event property, removes that property from the event. For most destination adaptors, removing properties is not an issue, because the adaptors automatically remove any properties that do not match the destination schema. However, some destination adaptors include every property in the destination output. Use this attribute to remove unnecessary properties in such situations.

output

Specifies the destinations to send events from the event class under which you are writing the policy. Following are the provided destination adaptors and their appropriate syntax:

CA NSM Event Console: UniEvent
Database: DBEvent
Windows Event Log: SysLogEvent
CA Spectrum: SpectrumEvent
CA Event Integration: EIForwardAdapterEvent
CA Spectrum SA: SAMAdapter

Example: Send received events to CA NSM and the database The following example copies all received events to the internal buffers for CA NSM and the database so that each of these destination adaptors will pick them up, and removes properties that match the filtering criteria:
<Write tagfilter="^wv.*$" output="UniEvent,DBEvent" />

This operation writes all received events under the contained event class to the internal buffers for CA NSM and the database destination adaptors. The operation also filters out all properties that begin with wv, so that these properties are not copied to the destinations.
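The tagfilter attribute is matched against each property name with ordinary regular expression semantics. A standalone Java sketch of the filtering step (the property names are hypothetical, and java.util.regex stands in for the core's matcher):

```java
import java.util.ArrayList;
import java.util.List;

public class TagFilterDemo {
    public static void main(String[] args) {
        String tagfilter = "^wv.*$";
        // Hypothetical event property names for illustration
        String[] tags = { "wvSource", "wvInternalId", "severity", "description" };

        List<String> kept = new ArrayList<String>();
        for (String tag : tags) {
            if (!tag.matches(tagfilter)) { // properties matching the filter are removed
                kept.add(tag);
            }
        }
        System.out.println(kept); // prints: [severity, description]
    }
}
```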


Sample Policies
You can use the existing policy files as a reference for creating your own policies. View complete policy files at EI_HOME\Manager\PolicyStore. Find other policy examples at EI_HOME\Core\TestSuite. Specifically, the Samp.cat file is a standalone catalog that illustrates many of the concepts discussed in this appendix.

Policy Customization Scenario: Application Log Source Policy


Some of the existing policy files require customization before you can use them in a catalog. For example, the application log source policy is a generic policy template for reading events matching a specified pattern in a specified log file. You must customize this policy file to specify the log file to read, the pattern that must be matched for event retrieval, and how to parse and format collected events.

In the following scenario, the applog-src.xml file is copied and customized to collect error events from the Windows Management Instrumentation (WMI) log file, wbemcore.log, that is present on every Windows computer.

To customize application log source policy to read the WMI log file

1. Make a copy of the applog-src.xml file located at EI_HOME\Manager\PolicyStore\sources and give it a meaningful name, like applog-wbemcore-src.xml. Put the new file in the same directory.

You should copy the existing policy to retain the applog-src.xml policy template for use in future customizations and catalog assignments.

Note: If you retain the applog-src.xml policy, it and the new applog-wbemcore-src.xml will share the same adaptor. You cannot deploy both of these policy files in the same catalog, or the catalog deployment will fail. You can only assign one policy file associated with the same source or destination to a catalog.

2. Open the applog-wbemcore-src.xml file and modify the following line in the <Configure> section as follows (changes are marked in bold):
<entry name="logfile" type="readWriteText" value="WINDOWS\system32\wbem\logs\wbemcore.log"/>

Note: The log file may be in a different location on different versions of Windows.

These changes specify to read the wbemcore.log file.

3. Delete all content in the <SampleEvents> section, as this sample event will not apply to the wbemcore.log file.

Note: You can specify a new sample event in this section.


4. Replace the <Classify> and <Filter> sections within the base event class with the following:
<Classify> <Field input="logentry" pattern="^.* : Error .*$" output="eventtype" outval="WbemError" /> </Classify> <Filter> <!-- filter everything not classified --> <Field input="eventtype" pattern="^LogReader$" type="exclude" /> </Filter>
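Conceptually, the two sections work as a pair: any log entry containing " : Error " is reclassified as WbemError, and anything still classified LogReader is excluded. A standalone Java sketch of that logic (java.util.regex stands in for the core's pattern matching; the sample lines are taken from step 14 later in this scenario):

```java
public class ClassifyDemo {
    public static void main(String[] args) {
        String[] entries = {
            "(Wed Jun 25 12:19:50 2008) : Error 80041002 occurred executing queued request",
            "(Wed Jun 25 12:19:50 2008) : CAsyncReq_GetObjectAsync, Path= Blah"
        };
        for (String logentry : entries) {
            // Classify: entries matching the Error pattern become WbemError
            String eventtype = logentry.matches("^.* : Error .*$") ? "WbemError" : "LogReader";
            // Filter: exclude anything still classified as LogReader
            boolean excluded = eventtype.matches("^LogReader$");
            System.out.println(eventtype + " excluded=" + excluded);
        }
    }
}
```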

This section classifies the events from the wbemcore.log file into Error events, which will be collected, and non-Error events, which will not be collected.

5. Edit the Axis2Log class to create the WbemError class as follows:
<EventClass name="WbemError" extends="LogReader">

You can now use the ensuing code to define processing rules for the WbemError event class.

6. Add the following beneath the </Enrich> tag to explain how to parse the file:
<Parse>
    <Field input="logentry"
           output="temp_mon,temp_dom,temp_time,temp_year,temp_errcode,internal_msgtext"
           pattern="^\(\w+ (\w+) (\d+) (\d\d:\d\d:\d\d) (\d+)\) : Error (\d+) (.*)$"/>
</Parse>
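Applied to a wbemcore.log error line (the sample from step 14 later in this scenario), the pattern yields six capture groups, one per output property. A standalone Java sketch using java.util.regex, which stands in for the core's parser:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParseDemo {
    public static void main(String[] args) {
        String logentry =
            "(Wed Jun 25 12:19:50 2008) : Error 80041002 occurred executing queued request";
        // Same pattern as the <Parse> field, with backslashes escaped for Java
        Pattern p = Pattern.compile(
            "^\\(\\w+ (\\w+) (\\d+) (\\d\\d:\\d\\d:\\d\\d) (\\d+)\\) : Error (\\d+) (.*)$");
        Matcher m = p.matcher(logentry);
        if (m.matches()) {
            System.out.println("temp_mon=" + m.group(1));         // Jun
            System.out.println("temp_dom=" + m.group(2));         // 25
            System.out.println("temp_time=" + m.group(3));        // 12:19:50
            System.out.println("temp_year=" + m.group(4));        // 2008
            System.out.println("temp_errcode=" + m.group(5));     // 80041002
            System.out.println("internal_msgtext=" + m.group(6)); // occurred executing queued request
        }
    }
}
```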

The parsing section explains how to delineate the log record into its constituent pieces. This example parses the error message into three pieces: the datetime stamp (captured as month, day, time, and year fields), the error code, and the error message.

7. Replace the <Format> section of the file with the following:
<Field format="{0} {1}, {2} {3}" input="temp_mon,temp_dom,temp_year,temp_time" output="internal_gentime"/>
<Field output="internal_logtime" format="{0}" input="{datetime(now)}" />
<Field output="internal_repeatcount" format="1" input="" />
<Field output="internal_elapsedtime" format="" input="" />
<Field output="internal_resourceclass" format="Application" input="" />
<Field output="internal_resourceinstance" format="wbemcore" input="" />

<Field output="internal_resourcevendor" format="Microsoft" input="" />
<Field output="internal_resourceplatform" format="Windows" input="" />
<Field output="internal_resourceaddrtype" format="FQDN" input="" />
<Field output="internal_resourceaddr" format="Unknown" input="" />
<Field output="internal_resourceuser" format="Unknown" input="" />
<Field output="internal_reportingagent" format="EIS-LOGRDR" input="" />


<Field output="internal_priority" format="50" input="" />
<Field output="internal_oldseverity" format="10" input="" />
<Field output="internal_newseverity" format="10" input="" />
<Field output="internal_msgid" format="[{0},{1}]" input="{uniqueidentifier},{uniqueidentifier}" />
<Field output="internal_msgtag" format="[alarmid,message]" input="" />
<Field output="internal_msgvalue" format="[{0},Error {1}]" input="temp_errcode,internal_msgtext" />
<Field output="internal_msgtype" format="text" input="" />
<Field output="internal_alarmid" format="{0}" input="temp_errcode" />
<Field output="internal_message" format="Error {0}" input="internal_msgtext" />

This section describes how to map the event content into a normalized event structure. Many fields are hard-coded based on the log file to be read, such as internal_resourceinstance (wbemcore) and internal_resourceplatform (Windows). Other fields are based on parsed values, such as alarmid (temp_errcode) and internal_message (internal_msgtext).

8. Delete the rest of the event sub-classes beneath the WbemError class.

9. Save and close the file.

The policy customization is complete for the wbemcore.log file. The LogReader adaptor should be able to collect and process error messages from this file when you use the policy in a catalog. To test the new policy, complete Steps 10-15.

10. Restart the CA EI Tomcat service.

The applog-wbemcore-src.xml file should appear in the Policies tab of the administrative interface and be available to assign to a catalog.

11. Create a catalog with the applog-wbemcore-src.xml source policy and the database as a destination and deploy it on the connector where you want to collect events from the log file.

Note: Verify that the existing applog-src.xml file is not deployed in any catalogs, or the catalog creation will fail.

12. Stop the CA EI CORE service.

When the core is stopped, you can view collected events in the core Inbox folder before they are processed.

13. Run wbemtest.exe from the command line, connect to the root\cimv2 namespace, click Open Class, and enter an invalid class name.

An error dialog opens, and an error message is generated in the wbemcore.log file.


14. Navigate to the EI_HOME\Core\Inbox folder, and you should see a .in file for LogReader. This file contains events collected by the LogReader adaptor that are ready to be processed by the core. Open the file in Notepad, and you should see entries similar to the following:
eventtype=LogReader logentry=(Wed Jun 25 12:19:50 2008) : Error 80041002 occurred executing queued request eventtype=LogReader logentry=(Wed Jun 25 12:19:50 2008) : CAsyncReq_GetObjectAsync, Path= Blah in namespace root\cimv2 using flags 0x0

These are the events that you generated in the wbemcore.log file. 15. Start the CA EI CORE service. The events are processed and sent to the database destination. You should be able to see the events in the database and when you run event reports in the administrative interface.

How to Configure and Implement Policy Files


After writing new policy, you must associate the policy with its corresponding adaptor. Policy depends on adaptors to establish the integration with the source from which the policy is designed to process events. Policy and adaptor files must also be in the right place for the integration and processing to work and the policy to be available for use in a catalog.

Use the following process to configure and implement new policy files:

Note: This process assumes that the policy's associated adaptor is already available.

1. Verify that the following line is included in the <Configure> section of the policy:
<entry name="plugin" type="readOnlyText" value="adaptorname"/>

adaptorname

Specifies the full adaptor file name (including extension) of the adaptor with which to associate the policy file. If multiple files are required, specify them in a comma-separated list.

2. Put the policy file in the following location on the manager server:
EI_HOME\Manager\PolicyStore\

From the PolicyStore directory, put the file in the folder that corresponds to its policy type: sources, destinations, or enrichments.

3. Put the adaptor file in the following location on the manager server:
EI_HOME\Manager\AdaptorStore


After completing this process, the policy will be represented on the administrative interface on the Policies tab and is available for inclusion in a catalog. When you deploy a catalog containing the policy, the policy and its associated adaptor are sent to each of the selected connectors.

CA Catalyst Connector Policy


CA Event Integration contains the CA Catalyst connector framework, which it can use to run CA Catalyst connectors. You must configure a connector to fit into the CA Event Integration framework as described in How to Implement a CA Catalyst Connector in CA Event Integration (see page 115) and the 'Integrating UCF Compliant Connectors' document provided in the EI_HOME\Docs\Tutorials directory. Most of this configuration is required because CA Catalyst connector policy is compiled with different conventions than CA Event Integration policy. You must convert the CA Catalyst connector policy so that it acts as a CA Event Integration source policy file.

CA Catalyst connector policy is contained in two separate files: a configuration file typically named connector_server.xml and a policy file named connector_policy.xml. You must merge the content of these files into one file that adheres to the conventions of CA Event Integration source policy as follows, and place the file in the EI_HOME\Manager\PolicyStore\sources directory:

Copy information from the '<Silo' and '<ImplementationClass' lines of the connector configuration policy file (connector_server.xml) into a valid CA Event Integration <Configure> section.

Copy information from the '<ConnectionInfo' line of the connector configuration policy file into a valid CA Event Integration <Configure> section.

Add required supporting jar files to a CA Event Integration <Environment> section of the policy.

Copy the new <Configure> and <Environment> sections to the top of the connector policy file (connector_policy.xml).

Customize the connector policy file so that it only processes alerts.

Add the prefix 'usm_' to all output properties so that CA Event Integration can correctly interpret the data.
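The last item — prefixing output properties with usm_ — is a key rename that you make by editing the output attributes in the connector policy file. As a sketch of the effect, with hypothetical property names and values:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class UsmPrefixDemo {
    public static void main(String[] args) {
        // Hypothetical connector output properties before renaming
        Map<String, String> props = new LinkedHashMap<String, String>();
        props.put("AlertType", "Risk");
        props.put("Summary", "CPU threshold exceeded");

        // Prefix every key with usm_ so CA Event Integration can interpret the data
        Map<String, String> usmProps = new LinkedHashMap<String, String>();
        for (Map.Entry<String, String> e : props.entrySet()) {
            usmProps.put("usm_" + e.getKey(), e.getValue());
        }
        System.out.println(usmProps.keySet()); // prints: [usm_AlertType, usm_Summary]
    }
}
```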

For more information about specific properties to set and examples of all described operations, see the 'Integrating UCF Compliant Connectors' document provided in the EI_HOME\Docs\Tutorials directory. For assistance, contact CA Services.


Appendix D: Web Services and Command Line Utilities


This section contains the following topics: Web Services (see page 293) Command Line Utilities (see page 311)

Web Services
CA Event Integration uses web services (SOAP over HTTP) to populate the administrative interface with data retrieved from the connectors in your enterprise. Web service calls form the underlying architecture of the interface, enabling maximum flexibility and driving the user-level functionality of the product.

You can view a list of the available web service calls by entering the following URL:

http://localhost:port/axis2

localhost

Specifies the host where the manager is installed.

port

Specifies the communication port number that you specified during installation.

This URL opens the Deployed Services web page, which lists each web service call broken up by available services. You can click the link for each service to view the XML code for each web service in a WSDL file. This appendix describes each web service call in general. The web services are broken up into the following services:

AssemblyOpsService
AgentInstanceService
TransformEventService
AgentControlService
PolicyControlService
Version


Web Services Scripting


Exposing the web services architecture of CA Event Integration gives you the opportunity to interact with these services. It is useful to be aware of the available web services and how they operate, because you can use these web service calls to do the following:

Deploy CA Event Integration services remotely from its infrastructure
Integrate web services functionality with other applications
Enact functionality at specific times using specific criteria and information by including the web services in customized scripts

View a list of the web services in this appendix or from the axis2 URL: http://localhost:port/axis2. This appendix describes what functionality each web service call provides and the operators to use when invoking them, while the axis2 URL provides the detailed XML code for each call in a WSDL file broken up by service. Note: The syntax in this appendix does not represent exact syntax to use in operations, but is instead a representation of the web service calls and their operators available for usage. Exact syntax depends on the usage context.

Scripting Materials
After you are familiar with the web services, you can use the webserviceTest.java file in the CA Event Integration SDK to include the web service calls in customized Java scripts. This file contains a template for scripting web service calls. Find the webserviceTest.java file at EI_HOME\sdk\webservices. The following .jar files containing WSDL compiled stubs and implementation tools that provide simple interfaces to each of the web services are available in EI_HOME\lib\Webservices: agentcontrolclient.jar Contains the AgentControlServiceClient class, which contains the following methods:

getAgentDetailStatus
agentCoreControl
agentIFWControl

agentinstanceclient.jar Contains the GetAgentsClient class, which contains the following methods:

getAgents
getAgentEndPoint
registerAgent


assemblyopsclient.jar Contains the AssemblyOpsClient class, which contains the following methods:

getAssemblyList
deployAssembly
getAssemblyDetails
getPolicyFileByAssemblyNameAndPolicyType
removeAssembly
createAssembly
updateAssembly
getAssemblyAgent
setAssemblyForAgent

policycontrolclient.jar Contains the PolicyControlClient class, which contains the following methods:

getPolicyTypeList
getPolicyList
getPolicyProperties
setPolicyProperties

transformeventclient.jar Contains the TransformEventServiceClient class, which contains the following methods:

transformTest
getSampleEvents
getCoreOperationClasses

Scripting Example
The webserviceTest.java file contains sample Java code for including the web services in customized scripts. This file contains template code for creating classes to interface with any of the web service calls. You can create scripts to pass parameters to classes, enabling more flexibility in using different data sets. The following example uses sections from the webserviceTest.java file to create a script that returns the policy properties for CA NSM source policy (nsmevent-src.xml), creates a new catalog, and creates a sample event that can be transformed by the new catalog.


Note: For more information about and examples for scripting each of the web services, see the webserviceTest.java file.
import com.ca.PolicyControlClient.*;
import com.ca.AgentInstanceClient.*;
import com.ca.AgentControlClient.*;
import com.ca.AssemblyOpsClient.*;
import com.ca.TransformEventClient.*;
import com.ca.stub.AssemblyOpsServiceStub;
import com.ca.stub.TransformEventServiceStub;
import com.ca.stub.PolicyControlServiceStub;
import java.net.*;
import java.util.*;

import java.io.*;

public class webserviceTest {

public static void main(String[] args) {

String hostUrl = null;

try {

// create instances of each client -- pass in the hosturl
// read the emaa property file to obtain port numbers

Properties properties = new Properties();
String portstring = null;
try {
    properties.load(new FileInputStream("..\\config\\emaa.properties"));
    portstring = properties.getProperty("mgraxisport");
} catch (IOException e) {
    System.out.println(e.getMessage());
    System.out.println("Error reading property file - exiting");
    System.exit(0);
}


hostUrl = "http://localhost:" + portstring + "/axis2/services/PolicyControlService";
PolicyControlClient pcc = new PolicyControlClient(hostUrl);

// get policy properties from nsmevent-src.xml file for sources - displays on screen

pcc.getPolicyProperties("nsmevent-src.xml", "sources");

hostUrl = "http://localhost:" + portstring + "/axis2/services/TransformEventService";
TransformEventServiceClient transEv = new TransformEventServiceClient(hostUrl);

System.out.println("Transform Event");
System.out.println("Create new assembly transformtest for testing");

// create an AssemblyOps client for catalog creation
hostUrl = "http://localhost:" + portstring + "/axis2/services/AssemblyOpsService";
AssemblyOpsClient aops = new AssemblyOpsClient(hostUrl);

AssemblyOpsServiceStub.PolicyFileType[] transForm = new AssemblyOpsServiceStub.PolicyFileType[3];
transForm[0] = new AssemblyOpsServiceStub.PolicyFileType();
transForm[0].setPolicyFile("database-dest.xml");
transForm[0].setPolicyType("destinations");
transForm[1] = new AssemblyOpsServiceStub.PolicyFileType();
transForm[1].setPolicyFile("nsm-enrich.xml");
transForm[1].setPolicyType("enrichments");
transForm[2] = new AssemblyOpsServiceStub.PolicyFileType();
transForm[2].setPolicyFile("nsmevent-src.xml");
transForm[2].setPolicyType("sources");
aops.createAssembly("transformtest", "test assembly", transForm);

TransformEventServiceStub.Event[] tEvent = new TransformEventServiceStub.Event[17];
for(int r = 0; r < tEvent.length; r++){
    tEvent[r] = new TransformEventServiceStub.Event();
}


tEvent[0].setEventTag("eventtype"); tEvent[0].setEventValue("UniEvent");
tEvent[1].setEventTag("evtlog_recid"); tEvent[1].setEventValue("2433608");
tEvent[2].setEventTag("evtlog_color"); tEvent[2].setEventValue("0");
tEvent[3].setEventTag("evtlog_attrib"); tEvent[3].setEventValue("0");
tEvent[4].setEventTag("evtlog_time"); tEvent[4].setEventValue("1187730612");
tEvent[5].setEventTag("evtlog_type"); tEvent[5].setEventValue("1");
tEvent[6].setEventTag("evtlog_flag"); tEvent[6].setEventValue("0");
tEvent[7].setEventTag("evtlog_annotation"); tEvent[7].setEventValue("0");
tEvent[8].setEventTag("evtlog_timegen"); tEvent[8].setEventValue("1075266723");
tEvent[9].setEventTag("evtlog_msgnum"); tEvent[9].setEventValue("0");
tEvent[10].setEventTag("evtlog_severity"); tEvent[10].setEventValue("");
tEvent[11].setEventTag("evtlog_node"); tEvent[11].setEventValue("usildsmc");
tEvent[12].setEventTag("evtlog_user"); tEvent[12].setEventValue("NT AUTHORITY\\SYSTEM");
tEvent[13].setEventTag("evtlog_text"); tEvent[13].setEventValue("Host:Windows Windows caiWinA3 Trap WinA3_CPUTotal Ok Critical none Prop TotalLoad");
tEvent[14].setEventTag("evtlog_pinfo"); tEvent[14].setEventValue("4124,emevtlogplayer.exe");
tEvent[15].setEventTag("evtlog_source"); tEvent[15].setEventValue("w2kProcInst");
tEvent[16].setEventTag("evtlog_tag"); tEvent[16].setEventValue("WNT");

// displays results on screen

transEv.transformEvent("transformtest", "com.ca.eventplus.catalog.plugin.Enricher", tEvent);


        } catch(Exception e) {
            System.out.println("Main Exception: " + e.getMessage());
        }
    }
}

Following is the script you would need to run for the example, assuming that the webserviceTest class file is at EI_HOME\webservices:
..\jre\bin\java -classpath .;..\lib\WebServices\*;..\ThirdParty\Axis2-1.3\lib\* webserviceTest

AssemblyOpsService Web Services


The AssemblyOpsService service contains web service calls for working with catalogs. These calls provide the functionality for viewing, creating, updating, removing, deploying, and verifying catalogs. Note: Assembly is the term used for catalogs in the web services. The AssemblyOpsService service contains the following web service calls:

removeAssembly
getAssemblyList
verifyAssembly
deployAssembly
createAssembly
updateAssembly
setAssemblyForAgent
getAssemblyDetails
getAssemblyAgent
getPolicyFileByAssemblyNameAndPolicyType


removeAssembly Call--Delete a Catalog


The removeAssembly web service call deletes the specified catalog. When interacting with the removeAssembly call, you must include the following basic syntax:
removeAssembly(assemblyName, userid)

assemblyName

Specifies the name of the catalog that you want to delete.

userid

Specifies the user name under which to run the operation.

getAssemblyList Call--View Catalogs


The getAssemblyList web service call returns a list of existing catalogs. When interacting with the getAssemblyList call, you must include the following basic syntax:
getAssemblyList()

verifyAssembly Call--Verify a Catalog


The verifyAssembly web service call verifies that a compiled catalog deployed to a connector contains accurate and up-to-date policy. The call recompiles the specified catalog and compares it to the catalog currently deployed on the specified connector. When interacting with the verifyAssembly call, you must include the following basic syntax:
verifyAssembly(assemblyName, agtendpoint)

assemblyName

Specifies the name of the catalog that you want to verify.

agtendpoint

Specifies the connector on which you want to verify the catalog.


deployAssembly Call--Deploy a Catalog


The deployAssembly web service call deploys a catalog on a connector. Deploying a catalog starts the processing of events on the connector's node according to the policy defined in the catalog. When interacting with the deployAssembly call, you must include the following basic syntax:
deployAssembly(assemblyName, agtendpoint)

assemblyName

Specifies the name of the catalog that you want to deploy.

agtendpoint

Specifies the name of the connector whose assigned catalog you want to deploy.

createAssembly Call--Create a Catalog


The createAssembly web service call lets you create a new catalog. Using this call, you can name and describe the catalog and assemble all needed policy files. When interacting with the createAssembly call, you must include the following basic syntax:
createAssembly(assemblyName, desc, policyFileType[policyfile,policytype], userid)

assemblyName

Specifies the name of the catalog you are creating.

desc

Specifies a description of the catalog.

policyfile

Specifies the name of the policy file that you want to add to the catalog.

policytype

Specifies the type of the policy file that you are adding to the catalog.

userid

Specifies the user name under which to run the operation.

You can specify as many pairs of policyfile,policytype as needed for your catalog. For a detailed example using this call in a script, see Scripting Example (see page 295).


updateAssembly Call--Edit a Catalog


The updateAssembly web service call lets you edit an existing catalog by changing the catalog's policy file assignments. The call replaces the catalog's assignments with the set that you specify, so include every policy file that you want to add or retain, and leave out any currently included policy files that you want to remove. When interacting with the updateAssembly call, you must include the following basic syntax:
updateAssembly(assemblyName, desc, policyfileType[policyfile,policytype], userid)

assemblyName

Specifies the name of the catalog that you want to edit.

desc

Specifies a description of the catalog.

policyfile

Specifies the name of the policy file that you want to include in the catalog.

policytype

Specifies the type of the policy file that you are including in the catalog.

userid

Specifies the user name under which to run the operation.

For examples using this call in a script, see the webserviceTest.java file.

setAssemblyForAgent Call--Assign a Catalog to a Connector


The setAssemblyForAgent web service call lets you associate a catalog with one or more connectors. This call does not deploy the catalog, but instead assigns the catalog to the connector so that it can be deployed in a later operation. When interacting with the setAssemblyForAgent call, you must include the following basic syntax:
setAssemblyForAgent(assemblyName, instance)

assemblyName

Specifies the name of the catalog you are assigning.

instance

Specifies the name of the connector to which you want to assign the catalog. You can specify as many connectors as necessary.


getAssemblyDetails Call--View Catalog Policy Details


The getAssemblyDetails web service call returns all policy assigned to a catalog. The policy information is returned by policy type and policy file. When interacting with the getAssemblyDetails call, you must include the following basic syntax:
getAssemblyDetails(assemblyName)

assemblyName Specifies the catalog name.

getAssemblyAgent Call--View Catalog Connector Assignments


The getAssemblyAgent web service call returns a list of connectors that a specific catalog is assigned to. When interacting with the getAssemblyAgent call, you must include the following basic syntax:
getAssemblyAgent(assemblyName)

assemblyName Specifies the name of the catalog whose connector assignments you want to return.

getPolicyFileByAssemblyNameAndPolicyType Call--View Policy by Catalog


The getPolicyFileByAssemblyNameAndPolicyType web service call returns policy files for a specific catalog and policy type. For example, you can return all source policy files for a catalog using this call. When interacting with the getPolicyFileByAssemblyNameAndPolicyType call, you must include the following basic syntax:
getPolicyFileByAssemblyNameAndPolicyType(assemblyName, policytype)

assemblyName

Specifies the name of the catalog that you want to return policy files for.

policytype

Specifies the type of policy file that you want to return. Valid values are sources, destinations, and enrichments.


AgentInstanceService Web Services


The AgentInstanceService service contains web services for obtaining connector information. These calls provide the functionality for viewing and registering connectors. The AgentInstanceService service contains the following web service calls:

getAgents
registerAgent
getAgentEndPoint

getAgents Call--View Available Connectors


The getAgents web service call returns a list of all available registered connectors. When interacting with the getAgents call, you must include the following basic syntax:
getAgents()

registerAgent Call--Register a Connector with the Manager


The registerAgent web service call registers a connector with the manager. This call can register a connector or de-register a registered connector, depending on the specified operation. When interacting with the registerAgent call, you must include the following basic syntax:
registerAgent(agentnode, agentEndpoint, emaapath, register|deregister)

agentnode
    Specifies the node of the connector that you want to register. You can specify the server name or IP address.
agentEndpoint
    Specifies the connector port number.
emaapath
    Specifies the location of the root installation path. For example, using the default installation path, this value is C:\Program Files\CA\Event Integration.
register
    Registers the connector with the manager.
deregister
    Deregisters the connector with the manager.


getAgentEndPoint Call--View Node and Port Number for a Connector


The getAgentEndPoint web service call displays the endpoint, or node and port number used to communicate with the manager, for any registered connector. The call returns null if no registered connectors are found matching the node that you enter. When interacting with the getAgentEndPoint call, you must include the following basic syntax:
getAgentEndPoint(instanceNode)

instanceNode
    Specifies the node of the connector whose endpoint you want to view.

TransformEventService Web Services


The TransformEventService service contains web services for previewing event transformation. These calls provide the functionality for using sample events to preview how a specified configuration transforms events. The TransformEventService service contains the following web service calls:

transformTest
getSampleEvents
getCoreOperationClasses

transformTest Call--Preview Event Transformation


The transformTest web service call transforms a sample event using the policy defined in a specified catalog. Use this call to preview how a catalog will process events in your environment. When interacting with the transformTest call, you must include the following basic syntax:
transformTest(assemblyName, operationClass, originalEvent)

assemblyName
    Specifies the name of the catalog that you want to preview.
operationClass
    (Optional) Specifies the specific core module that you want to preview. Return a list of core modules using the getCoreOperationClasses call. Following is an example of a valid operationClass entry:
com.ca.eventplus.catalog.plugin.Parser


originalEvent
    Specifies the sample event that you want to transform. Generate sample events using the getSampleEvents call.

getSampleEvents Call--Produce Sample Events for a Catalog


The getSampleEvents web service call creates sample events for a defined catalog for use in previewing how the catalog processes events. When interacting with the getSampleEvents call, you must include the following basic syntax:
getSampleEvents(aName)

aName
    Specifies the name of the catalog that you want to create sample events for.
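getSampleEvents and transformTest are typically used together: fetch the sample events for a catalog, then feed each one through transformTest to preview the result. A minimal sketch of that sequence follows; the client object is an assumption standing in for whatever SOAP stub or wrapper you generate against these services.

```python
# Sketch of the catalog preview workflow. `client` is an assumption:
# any object exposing the TransformEventService operations (for example,
# a generated SOAP stub) fits this shape.
def preview_catalog(client, assembly_name, operation_class=None):
    """Run every sample event for assembly_name through transformTest."""
    results = []
    for event in client.getSampleEvents(assembly_name):
        results.append(client.transformTest(assembly_name, operation_class, event))
    return results
```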

getCoreOperationClasses Call--View Core Modules


The getCoreOperationClasses web service call returns the names of all classes, or modules, of the core. These modules define which processing steps are performed on events and in what order. This information is also provided in the eplus-plugins.cfg file at EI_HOME\Core\bin. When interacting with the getCoreOperationClasses call, you must include the following basic syntax:
getCoreOperationClasses()

The call returns a list similar to the following:


com.ca.eventplus.catalog.plugin.EventPlusReader
com.ca.eventplus.catalog.plugin.Classifier
com.ca.eventplus.catalog.plugin.Filter
com.ca.eventplus.catalog.plugin.Parser
com.ca.eventplus.catalog.plugin.Normalizer
com.ca.eventplus.catalog.plugin.Enricher
com.ca.eventplus.catalog.plugin.Formatter
com.ca.eventplus.catalog.plugin.Writer
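The order of this list is the order events move through the core. Conceptually, the pipeline applies each stage to an event in sequence, and a stage such as Filter can discard the event. The sketch below is a pure illustration of that flow, not the product's actual implementation:

```python
# Conceptual illustration of the core pipeline order (not product code).
STAGES = ["EventPlusReader", "Classifier", "Filter", "Parser",
          "Normalizer", "Enricher", "Formatter", "Writer"]

def run_pipeline(event, stage_fns):
    """Apply each stage to the event in order.

    stage_fns maps a stage name to a function; missing stages pass the
    event through unchanged. A stage returning None drops the event,
    mirroring what the Filter module does.
    """
    for name in STAGES:
        event = stage_fns.get(name, lambda e: e)(event)
        if event is None:
            return None
    return event
```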


AgentControlService Web Services


The AgentControlService service contains web services for controlling connector status. These calls provide the functionality for viewing connector and manager status and controlling the services (IFW and the core) that power the connector. The AgentControlService service contains the following web service calls:

getAgentDetailStatus
agentIFWControl
agentCoreControl

getAgentDetailStatus Call--View Detailed Connector Status


The getAgentDetailStatus web service call returns aggregate status information about a connector. When interacting with the getAgentDetailStatus call, you must include the following basic syntax:
getAgentDetailStatus()

agentIFWControl Call--Control the IFW Service


The agentIFWControl web service call lets you control the status of the CA EI IFW service (Windows) or caeiifw system daemon (Solaris and Linux), which establishes integrations with defined event sources. This call can start, stop, pause, or resume the service, depending on the operation used. When interacting with the agentIFWControl call, you must include the following basic syntax:
agentIFWControl(start|stop|pause|resume)

agentCoreControl Call--Control the Core Service


The agentCoreControl web service call lets you control the status of the CA EI CORE service (Windows) or caeicore system daemon (Solaris and Linux), which establishes and controls event processing. This call can start, stop, pause, or resume the service, depending on the operation used. When interacting with the agentCoreControl call, you must include the following basic syntax:
agentCoreControl(start|stop|pause|resume)


PolicyControlService Web Services


The PolicyControlService service contains web services for interacting with policy. These calls provide the functionality for viewing available policy by different criteria and setting policy properties. The PolicyControlService service contains the following web service calls:

getPolicyTypeList
getPolicyList
getPolicyProperties
setPolicyProperties

getPolicyTypeList Call--View Policy Types


The getPolicyTypeList web service call returns a list of all available policy types. Sources and destinations are examples of available policy types. When interacting with the getPolicyTypeList call, you must include the following basic syntax:
getPolicyTypeList()

getPolicyList Call--View Policy by Event Class and Policy Type


The getPolicyList web service call returns a list of policy file subclasses that meet the specified criteria. Narrow the results by entering a policy type and event class to return policy subclasses for. When interacting with the getPolicyList call, you must include the following basic syntax:
getPolicyList(evtClass, policyType)

evtClass
    Specifies the event class whose subclasses you want to return.
policyType
    Specifies the type of policy you want to search for subclasses matching the specified event class.
Example: Returning CA NSM event subclasses
The following example returns subclasses for source policy under the NSMEVENT-BASE event class:
getPolicyList(NSMEVENT-BASE, source)


getPolicyProperties Call--View Policy Attributes


The getPolicyProperties web service call returns the properties of a policy file. These properties control functions such as connection settings. When interacting with the getPolicyProperties call, you must include the following basic syntax:
getPolicyProperties(fileName, policyType)

fileName
    Specifies the name of the policy file that contains the attributes you want to view.
policyType
    Specifies the type of policy you want to view. Valid types are sources, destinations, and enrichments.
Example: View attributes for CA Spectrum source policy
The following example displays configurable properties for CA Spectrum source policy:
getPolicyProperties(spectrum-src.xml, sources)

setPolicyProperties Call--Set Policy Attributes


The setPolicyProperties web service call lets you set values for configurable policy attributes for a specified policy file. This call lets you configure attributes such as connection and display information. You can also set properties for multiple adaptors in a single call. When interacting with the setPolicyProperties call, you must include the following basic syntax:
setPolicyProperties(fileName, policyType, PluginObject[] pluginObjects, Userid)

fileName
    Specifies the name of the policy file whose attributes you want to set.
policyType
    Specifies the type of policy that you want to edit.
PluginObject
    Specifies the name of the adaptor that you are setting attributes for.


pluginObjects
    Specifies a collection of properties for a given adaptor. Use multiple PluginObject[] pluginObjects strings to set attributes for as many adaptors as necessary. You compile the pluginObject properties as propertyObject entries, which contain the following attributes:
    name
        Specifies the name of the property that you want to set.
    value
        Specifies the new value of the property.
    type
        Specifies the type of the property.
    selection
        Specifies the currently selected value.
Userid
    Specifies a user name under which to set policy attributes.
Note: For a detailed example using this call in a script, see the webserviceTest.java file.
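The nested PluginObject/propertyObject structure can be modeled as plain data before it is marshaled into the SOAP request. The sketch below shows one way to shape that payload; the adaptor and property names are made up for the example and do not come from a real policy file.

```python
# Illustrative shape of a setPolicyProperties payload. Field names follow
# the propertyObject attributes described above; the adaptor name and
# property names are hypothetical.
def make_property(name, value, type_, selection=None):
    """Build one propertyObject entry; selection defaults to the new value."""
    return {"name": name, "value": value, "type": type_,
            "selection": selection if selection is not None else value}

plugin_objects = [
    {   # one PluginObject per adaptor being configured
        "adaptor": "example-adaptor",            # hypothetical adaptor name
        "properties": [
            make_property("hostname", "nsm01.example.com", "string"),
            make_property("port", "8080", "string"),
        ],
    },
]
```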

Version Web Services


The Version service contains web services that provide the version of the product. The Version service contains the following web service call:

getVersion

getVersion Call--View Product Version


The getVersion web service call returns the version of CA Event Integration being used. When interacting with the getVersion call, you must include the following basic syntax:
getVersion()


Command Line Utilities


The following command line utilities are available for manually registering a connector with or de-registering a connector from the manager and for controlling the product services or system daemons:

unregister_agent (see page 311)
register_agent (see page 312)
control-axis2 (see page 312)
control-core (see page 313)
control-ifw (see page 314)
control-tomcat (see page 315)

unregister_agent Command--De-Register a Connector from a Manager


The unregister_agent command lets you de-register a connector from the manager to which it is currently registered. You may need to de-register a connector if its manager no longer exists or if you want to register it with a new manager installation. You must run this command from the following directory: EI_HOME\Core\bin. You can only run this command on servers with a connector or manager and connector installation. The unregister_agent command has the following syntax:
unregister_agent manager_host manager_axisport connector_host connector_axisport

manager_host
    Specifies the host name of the manager from which you are de-registering the connector.
manager_axisport
    Specifies the communication port number of the manager.
connector_host
    Specifies the host name of the connector.
connector_axisport
    Specifies the communication port number of the connector.


register_agent Command--Register a Connector with a Manager


The register_agent command lets you register a connector with a manager. A connector registers automatically with the manager node that you supply during installation. You only need to use this command if you are registering the connector with a different manager or if a connector failed to register during installation. Before registering a connector, you must de-register it from its previous manager using the unregister_agent command. You must run this command from the following directory: EI_HOME\Core\bin. You can only run this command on servers with a connector or manager and connector installation. The register_agent command has the following syntax:
register_agent manager_host manager_axisport connector_host connector_axisport

manager_host
    Specifies the host name of the manager that you want to register the connector with.
manager_axisport
    Specifies the communication port number of the manager.
connector_host
    Specifies the host name of the connector.
connector_axisport
    Specifies the communication port number of the connector.
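Re-pointing a connector at a new manager is therefore a two-step sequence: unregister_agent against the old manager, then register_agent against the new one. The sketch below echoes the commands instead of executing them; the hostnames, port numbers, and install root are placeholders, and you would drop the run wrapper's echo to execute for real.

```shell
#!/bin/sh
# Dry-run sketch of moving a connector from one manager to another.
# Hostnames, ports, and EI_HOME below are placeholders.
EI_HOME=/opt/CA/EventIntegration
run() { echo "$@"; }    # dry-run wrapper; change the body to "$@" to execute

# Both commands must be run from EI_HOME/Core/bin.
cd "$EI_HOME/Core/bin" 2>/dev/null || true
run ./unregister_agent oldmgr.example.com 9090 connector01.example.com 9090
run ./register_agent newmgr.example.com 9090 connector01.example.com 9090
```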

control-axis2 Command--Control the Axis2 Service


The control-axis2 command lets you maintain and control the CA EI AXIS2 service (Windows) or caeiaxis2 system daemon (Solaris and Linux). You can install, uninstall, start, stop, debug, restart, or view the status of this service. You must run this command from the EI_HOME\thirdParty\Axis2-1.3\bin directory on Windows systems. The control-axis2 command has the following syntax: Windows
control-axis2 debug|install|uninstall|start|stop|status|help

Solaris and Linux


/etc/init.d/control-axis2 install|uninstall|start|stop|status|restart


debug
    (Windows only) Starts the service and logs trace messages to standard output.
install
    Makes the service or daemon start automatically on system startup and stop automatically on system shutdown.
uninstall
    Removes the capability enabled by the install parameter.
start
    Starts the service or daemon.
stop
    Stops the service or daemon.
status
    Displays status information such as whether the service or daemon is installed and running.
restart
    (Solaris and Linux only) Stops and starts the service or daemon.
help
    (Windows only) Displays usage details.

control-core Command--Control the Core


The control-core command lets you maintain and control the CA EI CORE service (Windows) or caeicore system daemon (Solaris and Linux). You can install, uninstall, start, stop, debug, restart, or view the status of this service. You must run this command from the EI_HOME\Core\bin directory on Windows systems. The control-core command has the following syntax: Windows
control-core debug|install|uninstall|start|stop|status|help

Solaris and Linux


/etc/init.d/control-core install|uninstall|start|stop|status|restart


debug
    (Windows only) Starts the service and logs trace messages to standard output.
install
    Makes the service or daemon start automatically on system startup and stop automatically on system shutdown.
uninstall
    Removes the capability enabled by the install parameter.
start
    Starts the service or daemon.
stop
    Stops the service or daemon.
status
    Displays status information such as whether the service or daemon is installed and running.
restart
    (Solaris and Linux only) Stops and starts the service or daemon.
help
    (Windows only) Displays usage details.

control-ifw Command--Control the IFW Service


The control-ifw command lets you maintain and control the CA EI IFW service (Windows) or caeiifw system daemon (Solaris and Linux). You can install, uninstall, start, stop, debug, restart, or view the status of this service. You must run this command from the EI_HOME\Ifw directory on Windows systems. The control-ifw command has the following syntax: Windows
control-ifw debug|install|uninstall|start|stop|status|help

Solaris and Linux


/etc/init.d/control-ifw install|uninstall|start|stop|status|restart


debug
    (Windows only) Starts the service and logs trace messages to standard output.
install
    Makes the service or daemon start automatically on system startup and stop automatically on system shutdown.
uninstall
    Removes the capability enabled by the install parameter.
start
    Starts the service or daemon.
stop
    Stops the service or daemon.
status
    Displays status information such as whether the service or daemon is installed and running.
restart
    (Solaris and Linux only) Stops and starts the service or daemon.
help
    (Windows only) Displays usage details.

control-tomcat Command--Control Tomcat Service


The control-tomcat command lets you maintain and control the CA EI Tomcat service. You can install, uninstall, start, stop, debug, or view the status of this service. You must run this command from the EI_HOME\ThirdParty\tomcat\bin directory. The control-tomcat command has the following syntax:
control-tomcat debug|install|uninstall|start|stop|status|help

debug
    Starts the service and logs trace messages to standard output.
install
    Installs the service in the Windows Service Manager.
uninstall
    Uninstalls the service in the Windows Service Manager.
start
    Starts the service.


stop
    Stops the service.
status
    Displays status information such as whether the service is installed and running.
help
    Displays usage details.


Appendix E: Manager Database


This section contains the following topics:
Manager Database (see page 317)
Schema Overview (see page 318)
Tables (see page 318)
Database Maintenance (see page 319)

Manager Database
During the manager installation, CA Event Integration creates a database for storing events that are dispatched to the database destination. The administrative interface uses events stored in the database to provide information for event reports. The event data is stored in multiple tables, and you can perform several advanced search and query operations on these events using the reporting feature. If you are dispatching events to another event source or systems management product that does not have equivalent querying functionality, you can also dispatch these events to the database in case you need to perform an advanced query, filter, or search.


Schema Overview
The EMAADB schema contains five tables and two linking tables. The following graphic illustrates the structure of the EMAADB schema:

Tables
The five main tables included in the database are as follows:
Event
    Contains event information common to all events, such as resource class, instance, address, vendor, and so on. The table is ordered by eventid.
Message
    Contains event properties specific to certain events, ordered by msgid. The Event table has a many:many relationship with this table.


Priority
    Converts Priority numeric data to text equivalents. This lookup table is ordered by priority. The Event table has an n:1 relationship with this table.
Severity
    Converts Severity numeric data to text equivalents. This lookup table is ordered by severity. The Event table has an n:1 relationship with this table.
Association
    Contains event information about business services associated with an event, ordered by associationid. The Event table has an m:n relationship with this table. The tables are linked through the EventAssociation linking table, and their relationship provides a means to represent such relationships as service associations to an event.

Database Maintenance
The manager database does not contain a built-in mechanism for automatically archiving or purging collected events after a certain period of time. All events sent to the database remain there until they are deleted or archived manually. The database has the potential to grow to an unmanageable size without a periodic archive or purge. You should implement a database maintenance policy for archiving or purging events to prevent overloading the database. When implementing the policy, consider the following:

The number of events you expect to receive during specific intervals (daily, weekly, and so on)
The maximum number of events that you want to retain in the database at a time
The maximum size to which you want the database to grow
How to handle events that are periodically removed from the database (archive in a separate location, purge, or a combination of both)

For information about how to configure and implement a database maintenance policy, see the SQL Server documentation.
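One common policy is a rolling window: archive rows older than N days, then delete them. The helper below only generates the T-SQL text for such a job. The timestamp column name (event_time) and the EventArchive table are assumptions for illustration; substitute the actual names from your schema and use the SQL Server documentation to schedule the job.

```python
# Generate archive-then-purge T-SQL for a rolling retention window (sketch).
# "Event" is the main table documented above; the timestamp column name
# and the EventArchive table are assumptions -- verify the real schema
# before running anything against the manager database.
def retention_sql(retention_days, time_column="event_time"):
    """Return (archive_stmt, purge_stmt) for events older than the window."""
    cutoff = f"DATEADD(day, -{int(retention_days)}, GETDATE())"
    archive = (f"INSERT INTO EventArchive SELECT * FROM Event "
               f"WHERE {time_column} < {cutoff};")
    purge = f"DELETE FROM Event WHERE {time_column} < {cutoff};"
    return archive, purge

archive, purge = retention_sql(30)
print(archive)
print(purge)
```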


Appendix F: High Availability


This appendix covers the high availability functionality built into CA Event Integration, including resiliency and cluster awareness. This section contains the following topics:
High Availability Overview (see page 321)
Connector Resiliency (see page 321)
Cluster Awareness (see page 322)
Non-Cluster High Availability with CA XOsoft (see page 324)

High Availability Overview


High availability (also known as fault tolerance or failover) is a common architectural requirement that focuses on ensuring business continuity in the event of an interruption of IT resource availability. The main objective of implementing an HA solution is zero downtime for IT resources. High availability support for CA Event Integration provides failover capabilities in the following ways:

Resiliency to outages
Cluster awareness
Non-cluster replication and failover

Cluster awareness and non-cluster high availability are not supported for connectors on Solaris and Linux.

Connector Resiliency
The CA Event Integration connector is resilient to product outages, event publisher and consumer failures, and CA Event Integration service failures. CA Event Integration connector resiliency results in zero loss of events during failure and restart periods for the following internal and external components:

CA Event Integration source adaptors
Event sources
CA Event Integration destination adaptors
Event destinations


CA Event Integration services and system daemons
CA Event Integration enrichment modules
Enrichment sources

The following CA Event Integration components do not support resiliency:


Windows Event Log destination adaptor on Windows 2003 and 2008
Windows Event Log source adaptor on Windows 2003
Database destination adaptor
CA Event Integration manager

Cluster Awareness
CA Event Integration supports installation in a cluster environment. Cluster-aware CA Event Integration components fail over correctly using the connector resiliency architecture and a cluster shared disk for temporary data storage. Cluster installations are supported on Microsoft Cluster Server 2003. You add CA Event Integration to a cluster resource group during installation to enable cluster awareness. Resource groups must contain the following for a CA Event Integration cluster installation:

Physical disk
Network resources

When you install CA Event Integration on a cluster node, it automatically selects a cluster group that it determines to be the best fit based on cluster resource requirements. You can retain the default selection or select another group as long as it adheres to the requirements. When you install CA Event Integration on a cluster node with a cluster-aware CA Spectrum SA already installed, it automatically selects the group in which the CA Spectrum SA resources exist. Installations with CA Spectrum SA should always exist on the same resource group if you plan to use the product with CA Spectrum SA.

How to Implement CA Event Integration in a MSCS Environment


Complete the following process to install and configure CA Event Integration components in a Microsoft Cluster Server environment:
1. Log on to the active node.
2. Run the CA Event Integration installer on the active node. When the installer prompts you for a services user, you must enter a domain user.


   The installer detects the cluster node and selects the cluster resource group perceived to be the best fit based on CA Event Integration resource group requirements on the Enter Cluster Resource Group dialog. If a cluster-aware CA Spectrum SA is installed, the installer always selects the resource group in which the CA Spectrum SA resources exist.
   Note: If you are installing CA Event Integration as a component of CA Spectrum SA (the event enrichment feature), CA Event Integration uses the resource group with SQL Server resources by default, which is the same group that you should use for CA Spectrum SA.
3. Retain the default selection or select a different resource group that meets all requirements, and complete the installation. The following CA Event Integration resources appear in the cluster resource group after the installation completes:
   CA EI AXIS2
   CA EI CORE
   CA EI IFW
   CA EI Tomcat
   Note: Certain resources may not appear if you are installing only the manager or only the connector.
4. Take all resources in the cluster group offline.
5. Move the group to the second cluster node.
6. Bring all resources online on the second cluster node except for the CA Event Integration resources and CA Spectrum SA resources (if applicable).
7. Run the CA Event Integration installer on the second cluster node. When the installer prompts you for a services user, you must enter a domain user. The installer detects the cluster node and selects the cluster resource group perceived to be the best fit based on CA Event Integration resource group requirements on the Enter Cluster Resource Group dialog.
8. Ensure that the same resource group is selected that you used on the first cluster node, and complete the installation.
   Note: If you need to install CA Event Integration on more than two cluster nodes, repeat Steps 4-8 for each cluster node before completing the process.
9. Bring the entire group online after installation completes.


Uninstall CA Event Integration HA Manager


When you uninstall a highly available CA Event Integration manager implementation, do not destroy the manager database until the installation has been removed from the last cluster node.
To uninstall the CA Event Integration HA Manager
1. Offline the CA Event Integration cluster resources.
2. Uninstall CA Event Integration (see page 40) as you would in a nonclustered uninstallation, but do not select the Destroy Database check box.
3. Move the cluster group to the next node.
4. Repeat the uninstallation on all cluster nodes.
5. Do the following on the last cluster node only:

   Destroy the database by selecting the Destroy Database check box.
   Delete CA Event Integration cluster resources after the uninstallation completes.
   Delete all files in the EI_HOME directory on the shared drive.

Non-Cluster High Availability with CA XOsoft


CA Event Integration uses CA XOsoft Replication and High Availability (CA XOsoft) to provide a high availability and disaster recovery solution for implementations that do not use Microsoft Cluster Server (MSCS). Unlike cluster high availability, which depends on a shared disk, CA XOsoft replicates components between a master (primary) server and a replica (secondary) server connected by a LAN or WAN. CA XOsoft provides high availability for CA Event Integration connectors by replicating all files and directories that enable the connector to collect events from event sources and dispatch them to destinations. CA XOsoft uses an active-passive failover mode in which components on the secondary server are passive (or stopped) until failover occurs, providing business continuity in the event of scheduled or unscheduled outages. CA XOsoft uses the concept of scenarios as a process definition that includes details of the master and replica servers and their connectivity, report and event handling rules, node properties, and the directories, sub-directories, and files that are included in the replication process. Each scenario is saved as an XML file.


The CA XOsoft HA scenario is configured with a custom script that serves as a network redirection script and determines which component is the master component. The script is also configured to check the health of an active service every 30 seconds. If this script returns a bad code, CA XOsoft starts the switchover process. After the service on the secondary computer becomes active, it is considered the master, and the reverse replication process starts (either automatically or manually), making the former primary computer the secondary and the former secondary computer the primary (master). This section describes how to enable non-cluster high availability for a standalone installation of CA Event Integration. For information about how to enable CA Event Integration non-cluster high availability when installed as a component of CA Spectrum SA, see the CA Spectrum SA Implementation Guide. The following events trigger the failover from the primary to the secondary computer:

Failure of connector services (CA EI CORE and CA EI IFW)
Failure of connector host

Replicated Information
The following directories must be replicated when you configure non-cluster high availability:

EI_HOME\Core\CatalogPolicy
EI_HOME\Core\conf
EI_HOME\Core\Errbox
EI_HOME\Core\Inbox
EI_HOME\Core\Kpi
EI_HOME\Core\Outbox
EI_HOME\Core\Wipbox
EI_HOME\Ifw\conf
EI_HOME\Ifw\Kpi
EI_HOME\Ifw\Plugins


How to Implement CA Event Integration in a Non-Cluster High Availability Environment


You implement CA Event Integration in a non-cluster high availability environment by installing CA Event Integration connectors and CA XOsoft components on the primary and secondary server and configuring failover scenarios in CA XOsoft. If your CA Event Integration environment contains multiple connectors, repeat this process for each separate connector:
1. Install the CA XOsoft Control Service on a separate server (not the primary or secondary).
2. Install the CA XOsoft Engine on the primary and secondary server.
3. Install the CA Event Integration connector on the primary and secondary server.
4. Configure the connector logical node name (see page 326) on the primary and secondary server.
5. Verify that all CA Event Integration services are in Manual startup mode on both servers.
6. Create and configure a scenario in CA XOsoft for CA Event Integration (see page 327).
7. Start the CA EI IFW and CA EI CORE services on the primary server.
8. Run the created scenario.

Configure Connector Logical Node Name


Before you create the scenario in CA XOsoft for replicating CA Event Integration connector information, you must configure all connector servers to report the same logical node name to an integrated installation of CA Spectrum SA. The use of a single logical node name ensures that the CA Spectrum SA user interface displays only one CA Event Integration connector entry. If you do not perform this configuration, the primary and secondary servers register with their fully qualified host name, and when failover occurs, two CA Event Integration connector entries will appear in the CA Spectrum SA user interface, one as online and one as offline. This procedure is only required if you are using the CA Event Integration connector as an integrated component of CA Spectrum SA.


To configure connector logical node name
1. Open the sampc-dest.xml file, or (if the connector and manager are on the same system) the samec-dest.xml file, located at EI_HOME\Manager\PolicyStore\destinations on the primary manager server. The policy file that you edit must be the file that is in the deployed connector catalog.
2. Locate the following line, give the -Dsamsiloregistration property a common logical node name for all connector servers, and save and close the file:
<enventry name="wrapper.java.additional.4" value="-Dsamsiloregistration=<logicalnodename>" />

3. Restart the CA EI IFW service on the primary server.
4. Repeat Steps 1 and 2 on each secondary connector server.
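Editing the enventry value by hand on each server is error-prone. The sketch below rewrites it with ElementTree instead; it is demonstrated against a minimal, invented XML fragment, since the real sampc-dest.xml has more surrounding structure. Point it at the actual policy file path in practice.

```python
# Rewrite the -Dsamsiloregistration value in a policy file (sketch).
# Demonstrated on a minimal fragment; the <policy> wrapper element is
# invented for the example, but the enventry line matches the one
# documented above.
import xml.etree.ElementTree as ET

def set_logical_node(xml_text, node_name):
    """Set the logical node name on the wrapper.java.additional.4 entry."""
    root = ET.fromstring(xml_text)
    for entry in root.iter("enventry"):
        if entry.get("name") == "wrapper.java.additional.4":
            entry.set("value", f"-Dsamsiloregistration={node_name}")
    return ET.tostring(root, encoding="unicode")

fragment = ('<policy><enventry name="wrapper.java.additional.4" '
            'value="-Dsamsiloregistration=CHANGEME" /></policy>')
print(set_logical_node(fragment, "ei-connector"))
```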

Create and Test the CA Event Integration Scenario in CA XOsoft


After you install the CA Event Integration connector and CA XOsoft components on the primary and secondary servers, you must configure and run a scenario in CA XOsoft to enable the active-passive failover.
To create the CA Event Integration scenario in CA XOsoft
1. Log in to the CA XOsoft Replication and High Availability user interface.
2. Click Scenario Management. The CA XOsoft Manager opens.
3. Select Scenario, New. The Scenario Creation Wizard opens.
4. Select Create a New Scenario, assign the scenario to the default Scenarios group, and click Next. The Select Server and Product Type page opens.
5. Select File Server in the Select Server Type pane, select High Availability Scenario in the Select Product Type pane, and click Next. The Master and Replica Hosts page opens.
6. Enter a scenario name and the master and replica host names and port numbers and click Next. The CA XOsoft Engine Verification page opens.
7. Verify that the CA XOsoft Engine is correctly installed on the primary and secondary servers and click Next. Note: Drive sharing must be enabled for the engines. The Master Root Directories page opens.

Appendix F: High Availability 327


8.  Select Include Files, enter * in the filter field, select all directories listed in Replicated Information (see page 325), and click Next. The files must be in the same directories on all servers.

    The Replica Root Directories page opens.

9.  Click Next.

    The Scenario Properties page opens.

10. Expand Replication, set 'Run after Reboot' to On, set Synchronization Type to Block Synchronization, and click Next.

    The Master and Replica Properties page opens.

11. Expand Host Connection, verify the IP address and port numbers for the master and replica, and click Next.

    The Switchover Properties page opens.

12. Expand all selections, do the following, and click Next:

    - Set Perform Switchover Automatically to On in the Switchover section.

    - Set MoveIP, Redirect DNS, and Switch Computer Name to Off in the Network Traffic Redirection section.

    - Set the user-defined scripts in the User-Defined Scripts section to On, set the script name for each script to EI_HOME\config\installer\xoseiha.bat, and set the argument for each script as follows:

      Active to Standby Redirection Script: stop
      Standby to Active Redirection Script: start
      Identify Network Traffic Direction Script: status

    - Set Is Alive Timeout and Heartbeat Frequency in the Is Alive section to appropriate intervals.

    - Set Send Ping Request in the Check Method section to On, and configure the IP addresses to ping.

    - Set Check Script on Active Host in the User-Defined Scripts sub-section to On, define the script name as EI_HOME\config\installer\xoseiha.bat, and define the argument as master.

    The Switchover and Reverse Replication Initiation page opens.

13. Select Switchover automatically in the Switchover Initiation pane, select Start automatically in the Reverse Replication Initiation pane, and click Next.

    The Scenario Verification page opens.

14. Click Next.

    The Scenario Run page opens.



15. Start the CA EI services on the primary server, and click Run Now.

    The Run dialog opens.

16. Select Block synchronization and click OK.

    The scenario displays as Running in the Scenarios pane. In the Statistics pane, you can view the switchover from the primary to the secondary server when the primary services fail.
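All four script hooks in the scenario point at the same file, EI_HOME\config\installer\xoseiha.bat, and CA XOsoft distinguishes them only by the argument it passes (stop, start, status, or master). This guide does not document the batch script's internals, so the Python sketch below models only that argument-dispatch pattern; the action strings are hypothetical stand-ins for the real Windows service-control commands.

```python
# Hypothetical actions; the real xoseiha.bat controls the CA EI
# services on Windows, and its internals are not documented here.
ACTIONS = {
    "stop":   "stop CA EI services (active -> standby)",
    "start":  "start CA EI services (standby -> active)",
    "status": "report network traffic direction",
    "master": "check that CA EI services are alive on the active host",
}

def dispatch(arg):
    """Resolve the single argument CA XOsoft passes into an action."""
    if arg not in ACTIONS:
        raise ValueError("usage: xoseiha (stop|start|status|master)")
    return ACTIONS[arg]

print(dispatch("status"))  # prints "report network traffic direction"
```

Routing every hook through one script keeps the failover logic in a single place: the same file can be deployed unchanged to the primary and all secondary connector servers.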


Index
A

accessing the administrative interface 124
adaptor creation
    adaptor processing model 227
    adding configuration policy attributes 225, 252
    build and compile C++ adaptors 229
    build and compile java adaptors 229
    coding 227, 228
    configuring and testing 229, 230
    creating policy for adaptors 231
    inbox and outbox files 225
    naming and configuration 224
    overview 222
    procedures to code 227
    sample files 228
adaptors
    about 16, 221
    and policy 231
    attributes 225
    configuration 224
    destination 16
    execution 223
    extracting and storing events 225
    internals 223
    log file for troubleshooting 231
    processes 223
    provided 16, 221
    source 16
    writing new 222
administration tabs 127
administrative interface
    about 14, 19
    accessing 124
    administrative tools 125
    configuring enrichments from 55, 82
    dashboard 20, 125
    defining log in name and password 24
    refreshing 125
    reports 171
    troubleshooting 205
administrative reports 185
agentCoreControl web service 307
agentIFWControl web service 307
alarms
    collecting and processing 53
    enrichment 54, 55, 59
    enrichment scenario 66
    interacting with 52, 54
    using custom alarm attributes with enrichment 58
application log files
    integrating with 16, 120
    policy 128
    policy configuration 147
    policy customization scenario 287
architecture 14
attributes, configuring in policy 131
audit reports
    catalog files 186
    configuring 194
    creating 187, 189, 192
    deployment 189
    policy files 191
axis2
    service 33, 34
    web services view 294
axis2.log file 205

C

CA Catalyst connectors
    configuration 114
    how to implement 115
    integrating with 113
    merging policy into CA Event Integration policy 291
CA NSM
    add-on benefits 13
    configuring policy for remote destination 79
    configuring remote CA NSM destination node 77
    deployment scenarios 64, 85, 86, 88
    enrichment 81, 82
    enrichment policy configuration 138
    getting started 80
    implementation on 77
    integrating with 76
    policy 128, 129
    policy configuration 137
    sample policy 269, 285
    usage 79
CA OPS/MVS EMA
    configuration requirements 108
    configuring policy 141
    deployment scenario 109
    integrating with 108
CA Spectrum
    add-on benefits 13
    configuration requirements 44
    configuring a remote installation 46
    deployment scenarios 63, 64, 66, 70
    distributed SpectroSERVER implementation 44
    EI lost and found module 62
    implementation on 44
    integrating with 16, 43
    migration 217
    policy 128, 129
    policy configuration 132
    resolving models 72, 74, 75
    sample policy 252, 269
    specifying version 132
    use cases 52
    user configuration 45
CA Spectrum configuration
    configuring event message format 45
    configuring policy 49
    copying custom CsPCause files 49
    creating CA Spectrum user 45
    getting started 53
    remote connection 46
    requirements 44
    trap collection 51
CA Spectrum SA
    add-on benefits 13
    alarm enrichment 98, 100
    configuration requirements 91
    configuring policy 92, 139
    configuring proxy connector 97
    configuring reconciliation 93
    deployment scenarios 101, 102, 104
    getting started with 96
    implementation on 91
    integrating with 90
    use cases 13, 94
CA Spectrum usage
    about 52
    alarm enrichment 54, 55
    configuring unreconciled events module 62
    custom alarm attributes 58
    custom event codes 63
    event rules and procedures 60, 61, 70
    event variables 60, 61
CA SYSVIEW PM
    configuration requirements 108
    configuring policy 142
    integrating with 108
ca_eis_user database user 21, 22, 33
catalog audit report configuration template 187
catalog policy 233
catalog reports
    catalog configuration report 195
    catalog files audit report 186
    configuring a catalog files audit report 194
    creating a catalog files audit report 187
catalogs
    about 18, 158
    adding policy 158, 160
    assigning to connectors 162
    configuration 123, 124
    creating 158
    deleting 161
    deploying 163, 164
    editing 160
    previewing 161
    reports 186, 195
catalogs tab 127
classification module
    description 155
    policy 258
cluster awareness 321, 322
CMDB enrichment 18, 130
    policy configuration 143
command line executable, using in policy 260, 269
command line utilities 311
commands
    control-axis2 312
    control-core 313
    control-ifw 314
    control-tomcat 315
    register_agent 312
    unregister_agent 311
components
    connector 22
    included 14
    installable 21
    installing 24
    manager 21
configuration
    basics 123
    catalogs 124
    entities 123
    examples 13
configuration policy
    editing from administrative interface 131
    referencing 256
    syntax 252
configured reports 171
connector reports
    connector configuration report 195
    connector detail report 196
    connector summary report 197
connectors
    about 18, 22, 164
    assigning catalogs to 162
    configuration 123
    deploying catalogs on 163
    de-registering from manager 311
    forwarding policy configuration 143
    installing 27, 29, 31
    monitoring 166
    registering manually 312
    registering with manager 27, 29
    reports 195, 196, 197
    resiliency 321
    restarting 167
    stopping 167
    tiered architecture 116, 117
    viewing configuration 164
    viewing details 168
    where to install for CA NSM 77
    where to install for CA Spectrum 44
    where to install for CA Spectrum SA 91
    where to install for HP BAC 112
    where to install for mainframe products 108
connectors pane 166
connectors tab 127
consolidation
    policy 267
    policy configuration 152
control-axis2 command 312
control-core command 313
control-ifw command 314
control-tomcat command 315
core
    about 18
    configuring modules 236
    modules 18
    stopping and starting 33, 34, 313
    test suite 210
    tracing events in 207
    troubleshooting 205
core.log file 205
create catalog wizard 158
createAssembly web service 301
CsPCause files, in CA Spectrum 49
custom alarm attributes 58
custom database enrichment
    about 130
    policy configuration 145
custom event codes 63
custom event variables 60, 61

D

dashboard
    about 20, 125
    connectors pane 166
    restarting connectors 167
    shortcuts pane 125
    stopping connectors 167
    tasks 125
dashboard tasks overview 125
database
    about 317
    as a destination 16, 118
    authentication types 23
    custom database enrichment 130
    EMAADB schema 318
    installation and configuration requirements 22, 23
    maintenance 319
    policy 129
    policy configuration 150
    sample policy 252
    tables 318
    user requirements 22, 23
    user security 23
    viewing destination events 173
deployAssembly web service 301
deployment
    all catalogs 164
    catalogs 163
    troubleshooting 208
deployment audit report 189, 194
deployment audit report configuration template 189
deployment scenarios
    CA NSM 85
    CA NSM to CA NSM 86, 88
    CA NSM to CA Spectrum 64
    CA OPS/MVS EMA to CA Spectrum SA 109
    CA Spectrum 63
    CA Spectrum SA 101, 102, 104
    CA Spectrum SA proxy connector 104
    CA Spectrum to CA Spectrum with enrichment 66
    tutorials 121
    Windows Event Log to CA Spectrum 70
    Windows Event Log to CA Spectrum SA 102
destination adaptors
    about 16
    available 16
    configuring and testing 230
    creating 222, 227
destination policy
    about 129
    configuring CA NSM for remote destination node 77
detailed connector status page 168
domain name, resolving 247

E

edit catalog wizard 160
EI Connector 22
EI Manager
    about 21
    installing 24, 31
EMAADB 317, 318
enrichment
    about 18
    CA CMDB 143
    configuring from ui 55, 82
    consolidation 152
    custom alarm attributes as destination 58
    enriching alarms 54, 55, 66
    enriching CA NSM events 81, 82
    enriching infrastructure alarms 98, 100
    filter 151
    in example catalog 88
    internet search 147
    manual configuration 59, 84, 100
    module description 156
    policy 130
    policy syntax 269
    using a custom database 130
    using complex queries 269
environment policy 256
eplusd process 223
eplusd.log file 205, 207, 231
eplus-plugins.cfg file 236
evaluate operation 277
Event Agent, installing on 77
event collection 11
event details report 175
event flow
    tracing 207
    understanding 206
Event Integration (EI)
    about 11
    administrative interface 19
    architecture 14
    components 14, 21
    configuring 123
    features 11
    use cases 13
event management, simplifying 11, 52, 79
Event Manager, installing on 77
event message format in CA Spectrum 45
event processing
    about 14, 15
    changing processing order 236
    core 18
    integration framework 16
    internal workflow 206
    tracing 207
    transformation 18
    troubleshooting 205
event properties
    about 237
    destination 244
    internal 238
    source 241
event report configuration template 182
event reports
    about 173
    configuring 184
    creating 182
    top n event reports 174
event rules and procedures, in CA Spectrum 60, 61, 70
event variables, in CA Spectrum 60
events
    adding sample events to policy 257
    classes 246
    classifying in policy 258
    collecting 16
    enriching with external information 82, 130
    filtering in policy 265
    forwarding to another connector 116, 117
    previewing transformation 161
    reports 173
    tags and tag values 154, 237
    transforming 18
    viewing event messages in reports 175, 182
    viewing unclassified 212
eventtype property 246

F

filter
    configuring policy 151
    module description 156
    policy 265
formatting
    module description 156
    policy 284
functions
    adding to function library 251
    using in policy 247

G

getAgentEndPoint web service 305
getAgents web service 304
getAgentStatus web service 307
getAssemblyAgent web service 303
getAssemblyDetails web service 303
getAssemblyList web service 300
getCoreOperationClasses web service 306
getPolicyFileByAssemblyNameAndPolicyType web service 303
getPolicyList web service 308
getPolicyProperties web service 309
getPolicyTypeList web service 308
getSampleEvents web service 306
getting started
    with CA NSM 80
    with CA Spectrum 53
    with CA Spectrum SA 96
getVersion web service 310

H

high availability
    cluster awareness 322
    connector resiliency 321
    implementing with CA XOsoft 326, 327
    installing in MSCS environment 322
    non-cluster 324, 325
    overview 321
HP Business Availability Center
    configuration requirements 112
    configuring policy 142
    integrating with 111, 112

I

ifw.log file 205
implementation scenarios 44, 77
inbox and outbox event files 225
infrastructure alarms, interacting with 94, 96, 97, 98
install.log file 205
installation considerations 22
installing
    connector 27, 29
    in a cluster environment 322
    manager 24
    manager and connector 31
    performing an upgrade 214
    silent installation 35, 37, 39
    troubleshooting 32
    with CA NSM 77
    with CA Spectrum 44
    with CA Spectrum SA 91
integrating with 47
integration adaptors 16
integration framework 16
integrations
    application log files 120
    CA NSM 77, 79
    CA OPS/MVS EMA 108
    CA Spectrum 44
    CA Spectrum SA 91
    CA SYSVIEW PM 108
    database 118
    HP BAC 111
    snmp traps 119
    troubleshooting 208
    web services eventing 121
    Windows Event Log 119
internal event properties 238
internal security 23
internet search enrichment
    about 130
    configuring policy 147

J

java method call, using in policy 260, 269
Java process daemon 223
jdbc query, using in policy 260, 269

L

last 12 hours report 179
lost and found module, CA Spectrum 62

M

mainframe
    configuration requirements 108
    configuring policy 141, 142
    deployment scenario 109
    integrating with 108
managed objects, enrichment 130
migration
    about 216
    to CA Spectrum 9.0 217
model resolution
    about 72
    configuring in CA Spectrum SA 93
    configuring model lookup method 75
    customizing 74
MySql enrichment policy configuration 145

N

normalization
    module description 155
    policy 260

O

Oracle enrichment policy configuration 145

P

packaging 12
parsing
    module description 155
    policy 259
password encryption 42
policies tab 127
policy
    about 128, 233
    adding to catalogs 158
    configuration 123
    configuration examples 64, 88
    configuring attributes 131
    creation and customization 157
    customization scenario 287
    destination 129
    enrichment 130
    environment 256
    relationship with adaptors 231
    reports 191, 192, 194
    source 128
    structure 235
    testing 153
    troubleshooting 208
    types 128
    viewing details 128
    writing or editing 234
policy audit report configuration template 192
policy configuration
    about 131
    application log 147
    CA CMDB 143
    CA NSM 137
    CA OPS/MVS EMA 141
    CA Spectrum 132
    CA Spectrum SA 139
    CA SYSVIEW PM 142
    consolidation 152
    custom database enrichment 145
    database 150
    EI forwarding 143
    filtering 151
    HP BAC 142
    internet search enrichment 147
    required 131
    snmp 150
    web services eventing 148
    Windows Event Log 149
policy syntax
    application log scenario 287
    classification 258
    configuration 252
    configuring and implementing 290
    configuring execution 236
    consolidation 267
    conventions 237
    customizing 234
    enrichment 269
    evaluation 277
    filter 265
    formatting 284
    normalization 260
    parsing 259
    sample event 257
    sections 252
    writing 285
    writing new 234
policy syntax conventions
    about 237
    event classes 246
    event properties and values 237
    functions 247
    hierarchy and inheritance 247
previewing transformation
    adding sample events for 257
    modules previewed 154
    performing 161
proxy connector, configuring 97
published reports 171
publishing a report 200

R

reconciliation.xml file 72, 74
reference operator, policy 256
register_agent command 312
registerAgent web service 304
remote CA NSM destination, configuring 77
remote CA Spectrum installation, configuring 46
removeAssembly web service 300
report templates 171
reports
    about 20, 171
    accessing 171
    administrative 185
    audit 185, 186, 189, 191
    configuration 195
    configured 171
    connector 185, 195, 196, 197
    deleting 202
    event reports 173
    exporting to pdf or csv file 202
    publishing 200
    scheduling 198
    templates 171
    troubleshooting 205
    types 171
response files, provided 35

S

sample event policy 257
Sample.java adaptor file 228
SampPlugin.cpp adaptor file 228
scheduling a report 198
scripting web services 294, 295
sdk 227, 294
service event report 177
service messages count report 178
services
    running reports for 174
    viewing service event details 177
    viewing service message count details 178
services user, creating 24, 27, 29
services, controlling 33, 34
setAssemblyForAgent web service 302
setPolicyProperties web service 309
silent installation
    about 35
    creating a response file 37
    with a created response file 39
    with a provided response file 35
snmp traps
    configuring collection on a CA Spectrum server 51
    integrating with 16, 119
    policy 128
    policy configuration 150
Solaris and Linux
    installing connectors on 22, 29
    migrating to from Windows connector 217
    silent installation on 35, 37, 39
    uninstalling connectors on 41
    upgrading connectors on 215
source adaptors
    about 16
    available 16
    configuring and testing 229
    creating 222, 227
source policy
    about 128
SpectroSERVER, installing on 44
SQL Server enrichment policy configuration 145
sql server requirements 22

T

tags and tag values 237
test suite
    core 210
    generating events with 209
    ifw 209
    testing catalog configurations with 210
tiered connector architecture 116, 117
time, resolving 247
tomcat service 33, 34
top n event report configuration template 180
top n event reports
    about 174
    configuring 182
    creating 180
    viewing event details from 175
    viewing hourly event breakdown from 179
    viewing service event details from 177
    viewing service message count details from 178
transformation modules
    classify 155
    configuring 236
    enrich 156
    filter 156
    format 156
    normalize 155
    parse 155
    previewed 154
transforming events 18
transformTest web service 305
troubleshooting
    connector registration 32
    deployment 208
    installation 32
    integrations 208
    log files 205
    performance 208
    policy errors 208
    tracing event flow 207
    viewing unclassified events 212
trusted authentication 23
tutorials 121

U

unclassified events, viewing 212
unclassified-events.log file 205
uninstallation 40, 41, 324
unregister_agent command 311
updateAssembly web service 302
upgrades
    about 213
    migration considerations 216
    performing 214, 215
    supported 213
use cases 44, 77, 91
user defined reports 171

V

verifyAssembly web service 300

W

web service calls
    agentCoreControl 307
    agentIFWControl 307
    createAssembly 301
    deployAssembly 301
    getAgentEndPoint 305
    getAgents 304
    getAgentStatus 307
    getAssemblyAgent 303
    getAssemblyDetails 303
    getAssemblyList 300
    getCoreOperationClasses 306
    getPolicyFileByAssemblyNameAndPolicyType 303
    getPolicyList 308
    getPolicyProperties 309
    getPolicyTypeList 308
    getSampleEvents 306
    getVersion 310
    registerAgent 304
    removeAssembly 300
    setAssemblyForAgent 302
    setPolicyProperties 309
    transformTest 305
    updateAssembly 302
    verifyAssembly 300
web services
    about 127, 293
    AgentControlService 307
    AgentInstanceService 304
    AssemblyOpsService 299
    PolicyControlService 308
    scripting example 295
    scripting materials 294
    TransformEventService 305
    troubleshooting 205
    using in scripts 294
    Version 310
web services eventing
    integrating with 121
    Microsoft Live Meeting events 121
    policy configuration 148
webserviceTest.java file 294, 295
Windows Event Log
    deployment scenario 70
    integrating with 16, 119
    policy 128, 129
    policy configuration 149
    sample policy 258, 265
workflow 206
worldview enrichment
    about 130
    policy configuration 138
write policy 285
