Version 5.2.1
Case Analyzer is a FileNet P8 process component that monitors and analyzes case and
workflow business processes. Case Analyzer collects data from event logs and audit logs and
stores the data in the Case Analyzer store. OLAP cubes are generated from this data, and
business process analytic reports are produced from the multidimensional information in the
OLAP cubes.
In the Content Platform Engine environment, multiple object stores in a single database are
supported. Similarly, multiple Case Analyzer stores are allowed, with each Case Analyzer store
dedicated to an object store. The Case Analyzer store and object store are identified and defined
by database connection and schema name.
The database connection is an object that represents the JDBC data source connection to the
database. The database connection enables object stores and isolated regions to share a database.
Hence, within a single database, you can have multiple Case Analyzer stores with corresponding
object stores.
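For illustration only, the following sketch shows how two Case Analyzer stores can share a single database: both are reached through the same JDBC connection, and each store is addressed by its own schema name. The JDBC URL, credentials, schema names, and table name used here are hypothetical placeholders, not actual product identifiers.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch only: two Case Analyzer stores in one database, distinguished by schema name.
// The URL, credentials, schema names, and table name are hypothetical placeholders.
public class SchemaPerStoreSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=SHAREDDB";
        try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpassword");
             Statement stmt = conn.createStatement()) {
            // Same connection, different schemas: each schema holds one Case Analyzer store.
            for (String schema : new String[] {"CASTORE1", "CASTORE2"}) {
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT COUNT(*) FROM " + schema + ".F_EVENTS")) {
                    if (rs.next()) {
                        System.out.println(schema + " fact rows: " + rs.getLong(1));
                    }
                }
            }
        }
    }
}

The same pattern applies to the object stores and isolated regions that share the database: the connection is common, and the schema name keeps each store's tables separate.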
Use the administration console to do the following tasks:
Create a Case Analyzer store and specify the store name, store type, database connection,
and schema name.
Configure Case Analyzer for processing of production or simulation data
Schedule event pruning and publishing intervals
Configure OLAP database integration to specify OLAP database host, name, user name,
and OLAP connector host
Use the Process Task Manager to do the following tasks:
Specify and configure Case Analyzer settings to process workflow event logs for isolated
regions
Define data fields so that you can use the values of data fields in Case Analyzer reports.
Manage the Case Analyzer store by processing OLAP cubes.
Manage the Case Analyzer store by initializing the store, pruning a region, and pruning
events.
Case Analyzer
Version 5.1.0
You can use Case Analyzer to monitor and analyze case and business processes. Case Analyzer collects
events from the Process Engine event logs and Content Engine audit log. Case Analyzer generates chart-based
statistical reports from active case and workflow data, as well as historical data.
Case Analyzer uses OLAP (On-Line Analytical Processing) technology for fast analysis of multi-dimensional
information, which enables you to drill down from a summary view to details and to interactively explore case and
business process data from different perspectives.
The following diagram shows the architecture of Case Analyzer and how data flows from the Process
Engine and Content Engine servers to Case Analyzer.
Event dispatchers retrieve events from the Process Engine event logs and the Content Engine audit log. A
dispatcher thread is created for each Process Engine event log and for the Content Engine audit log.
Publisher threads process events from the logs and publish the analytical results to the Case
Analyzer database. You can configure the number of publisher threads.
At the end of the publishing interval, the fact tables in the Case Analyzer database are updated with the raw
statistical data from the processed events. The statistical data in the fact tables are used to build the OLAP
cubes in the Case Analyzer OLAP database. The OLAP cubes provide the data required to generate Cognos
Business Intelligence reports or Excel charts for the user.
The Case Monitor Dashboard uses the data from the Case Analyzer database to monitor case and workflow
events.
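The following sketch illustrates the flow just described; it is not the actual Case Analyzer implementation, and all class, log, and variable names are hypothetical. One dispatcher thread per event log (plus one for the audit log) places events on a queue, a configurable pool of publisher threads consumes them, and the accumulated counts stand in for the fact-table update that happens at the end of a publishing interval.

import java.util.List;
import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch of the dispatcher/publisher flow described above.
// It is not the Case Analyzer implementation; names are hypothetical.
public class DispatcherPublisherSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> eventQueue = new LinkedBlockingQueue<>();
        Map<String, LongAdder> factCounts = new ConcurrentHashMap<>();

        // One dispatcher thread per event log (plus one for the audit log).
        List<String> logs = List.of("EventLog_Region1", "EventLog_Region2", "AuditLog");
        ExecutorService dispatchers = Executors.newFixedThreadPool(logs.size());
        for (String log : logs) {
            dispatchers.submit(() -> {
                for (int i = 0; i < 100; i++) {
                    eventQueue.add(log + ":event" + i); // stand-in for a retrieved event
                }
            });
        }

        // A configurable number of publisher threads consume events and
        // accumulate the statistics that would go into the fact tables.
        int publisherThreads = 3;
        ExecutorService publishers = Executors.newFixedThreadPool(publisherThreads);
        for (int p = 0; p < publisherThreads; p++) {
            publishers.submit(() -> {
                try {
                    String event;
                    while ((event = eventQueue.poll(1, TimeUnit.SECONDS)) != null) {
                        String sourceLog = event.split(":")[0];
                        factCounts.computeIfAbsent(sourceLog, k -> new LongAdder()).increment();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        dispatchers.shutdown();
        publishers.shutdown();
        publishers.awaitTermination(10, TimeUnit.SECONDS);

        // "End of publishing interval": the accumulated statistics would be
        // written to the fact tables that are used to build the OLAP cubes.
        factCounts.forEach((log, count) -> System.out.println(log + " -> " + count.sum()));
    }
}

Increasing the number of publisher threads, which the product lets you configure, increases how quickly queued events are turned into statistical rows.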
Create the Case Analyzer store and specify the store properties.
Procedure
To create the Case Analyzer store and specify the properties:
1. Start the Administration Console for Content Platform Engine and log in as the gcd_admin user.
2. Right-click the Case Analyzer folder and click New to start the wizard.
Configure the Case Analyzer store properties to specify the source for workflow event data.
Procedure
To access and configure the Case Analyzer store properties:
1. Start the Administration Console for Content Platform Engine and log in as the gcd_admin user.
2. In the domain navigation pane, select Case Analyzer. To access the Case Analyzer store properties, expand Case Analyzer and select a Case Analyzer store.
3. Continue through the Case Analyzer store tabs to complete the configuration of the Case Analyzer store.
Use the Case Analyzer compression wizard to compress the Case Analyzer store. You can use the Case
Analyzer compression wizard only on Microsoft SQL Server databases.
As the fact tables in the Case Analyzer store grow over time, they require more disk space and longer cube-processing times. Compressing the Case Analyzer store reduces the fact table sizes by aggregating, or "rolling up," measures across common dimensional values, with a resulting loss of information.
For example, consider a system that is composed of insurance claim data.
Date       Claim Type    Claim Number
1-2-12     Homeowners    18275
1-5-12     Auto          67251
1-26-12    Auto          36185
2-22-12    Auto          47477
4-13-12    Auto          92487
4-28-12    Auto          37530
5-15-12    Homeowners    88357
Compressed by month and claim type, the data is aggregated by month. The precise date and actual claim
number information is lost.
Month        Claim Type    Claim Number
Jan 2012     Homeowners    <unknown>
Jan 2012     Auto          <unknown>
Feb 2012     Auto          <unknown>
April 2012   Auto          <unknown>
May 2012     Homeowners    <unknown>
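The following sketch, with hypothetical class and field names, performs the same roll-up as the example above: claim rows are grouped by month and claim type, only the count per group is kept, and the individual dates and claim numbers can no longer be recovered from the result. It is illustrative only, not the Case Analyzer compression code.

import java.time.LocalDate;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Sketch of "rolling up" rows by month and claim type. Class and field
// names are hypothetical; this is not the Case Analyzer compression code.
public class RollUpSketch {
    record Claim(LocalDate date, String type, long claimNumber) {}

    public static void main(String[] args) {
        List<Claim> claims = List.of(
                new Claim(LocalDate.of(2012, 1, 2), "Homeowners", 18275),
                new Claim(LocalDate.of(2012, 1, 5), "Auto", 67251),
                new Claim(LocalDate.of(2012, 1, 26), "Auto", 36185),
                new Claim(LocalDate.of(2012, 2, 22), "Auto", 47477),
                new Claim(LocalDate.of(2012, 4, 13), "Auto", 92487),
                new Claim(LocalDate.of(2012, 4, 28), "Auto", 37530),
                new Claim(LocalDate.of(2012, 5, 15), "Homeowners", 88357));

        // Group by (month, claim type); only the aggregated count survives.
        Map<String, Long> rolledUp = claims.stream().collect(Collectors.groupingBy(
                c -> c.date().getYear() + "-" + c.date().getMonthValue() + " " + c.type(),
                TreeMap::new,
                Collectors.counting()));

        // The precise dates and claim numbers are no longer recoverable here.
        rolledUp.forEach((key, count) -> System.out.println(key + ": " + count + " claim(s)"));
    }
}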
Procedure
1. Before you start the Case Analyzer store compression, do the following steps:
a. Back up the Case Analyzer store. Compressing the Case Analyzer store cannot be undone or canceled.
2. Set the file path for the JDBC driver in the cacompression.bat file. Edit the cacompression.bat file, and set the JDBC driver JAR file path:
set JDBC_DRIVER=installation_directory\sqljdbc_4.0\enu\sqljdbc.jar
For example:
set JDBC_DRIVER=c:\sqljdbc_4.0\enu\sqljdbc.jar
3. Run the cacompression.bat file to start the Case Analyzer compression wizard.
4. Log in as gcd_admin.
5. Specify the Case Analyzer store database values:
Field               Description
Database Type
Database Server
Database Instance   Case Analyzer database instance name. For the default instance, leave this field blank.
Database Name
Database Port
Schema Name
6. Select one or more time intervals, and specify the start and end dates. You cannot compress the current month data.
You can choose to compress data by monthly or daily intervals, or a combination of the two. For example, if you have data that ranges from the date of July 16, 2008, to the current date of November 12, 2012, you can compress the older data (from 2008 through 2011) into monthly intervals. More recent data (from January 2012 to October 2012) can be compressed into daily intervals. The current month data (November 1, 2012, to November 12, 2012) remains uncompressed. The following table demonstrates the example time intervals.
Interval    Start Date       End Date
Month       July 16, 2008    December 31, 2011
Day         Jan 1, 2012      October 31, 2012
7. Modify the Temp Directory if necessary. The temporary directory must have available space equivalent to the size of the current Case Analyzer store. Click Next.
8. Select the dimensions to be compressed. You can compress dimensions for user-defined data fields only. By default, only dimensions that have more than 10,000 rows are shown. Click Next.
Important: You can stop the compression process after it is started; however, if you do so, you must restore the Case Analyzer store from a backup.
Version 5.2.1
You can configure the Case Analyzer store to process specific isolated regions or event logs. Configure the
settings on the Workflow Event Log Source tab. The default setting is to process all events from all isolated
regions.
To specify which isolated regions or event logs are to be processed by the Case Analyzer store, do the
following steps:
1. On the Workflow Event Log Source tab, choose one of the two options:
Option: Process all events
Action: All of the event logs from all isolated regions for the Case Analyzer store are processed.
Option: Process events from specific isolated regions
Action:
o Select one or more isolated regions.
o Select <ALL EVENT LOGS> to process all of the event logs for an isolated region.
o To specify individual event logs for a region, clear <ALL EVENT LOGS> and select the event logs to be processed.
Important: After you clear <ALL EVENT LOGS> and click Apply, you cannot return and select <ALL EVENT LOGS>. If you later decide that you want all of the event logs for the region to be processed, you can return and select each event log individually.
If you change a region or event log from not selected to selected, only workflows that are run after the
change are processed by Case Analyzer for that region or event log.
If you configure a new isolated region and you want Case Analyzer to process events for that region,
use the Event Log Configuration window to specify processing for that isolated region.
If you configure a new event log for an isolated region that is configured for Case Analyzer processing
and <ALL EVENT LOGS> is selected, the events in the new event log are processed. Otherwise, if
specific event logs are selected, use the Event Log Configuration window to specify the new event log.
Important: After you configure Case Analyzer to process events from specific regions, you cannot undo this
selection by selecting Process all events. However, you can configure Case Analyzer to process all events by
selecting <ALL EVENT LOGS> for every region.
GCD administrator
Version 5.2.1
A directory service account that has Full Control access to the Content Platform Engine domain object.
GCD administrator
Unique identifier: gcd_admin
Description: The gcd_admin is able to create, modify, and delete Content Platform Engine domain resources. The gcd_admin account must reside in the directory service realm specified in Configuration Manager's Configure LDAP task.
A GCD administrator can grant Full Control rights to additional users and groups, thereby making them
GCD administrators as well. Being a GCD administrator does not automatically make you
an object_store_admin, which is assigned on the object store's own property sheet.
Log on to IBM Administration Console for Content Platform Engine as gcd_admin in order to:
Create the GCD by launching the Configure New Domain Permissions wizard the first time
you start IBM Administration Console for Content Platform Engine to establish the FileNet
P8 domain.
Carry out administrative tasks for the FileNet P8 domain.
Minimum required permissions: Use IBM Administration Console for Content Platform Engine to grant Full Control access to the Content Platform Engine domain object.
You can perform the following actions on the Case Analyzer store: initialize a store, process the cubes, prune
events, prune isolated region data, and stop and restart a Case Analyzer store.
Table 1. Actions you can perform to manage the Case Analyzer store
Action                       Description
Process cubes                Update the OLAP cubes with the latest information stored in the fact tables.
Prune events
Prune isolated region data   Remove all data that are related to a specific isolated region from the Case Analyzer store.
Remember: The Case Analyzer store files can grow large over time. You can reduce the size of the Case
Analyzer store by aggregating measures across common dimensional values. For more information,
see Compress the Case Analyzer store.
To use the values of case and workflow data fields in Case Analyzer reports, you must identify which data fields
will be exposed, specify their properties as dimensions or measures, and specify the appropriate OLAP cube to
store the data. The values for case and workflow data fields are stored in the Case Analyzer store.
Before you make a case or task property or a data field available to those who will use this information to
analyze workflows and cases, the property or field must be captured in the workflow system event log or object
store audit log.
To make a data field available from workflows, do the following steps:
In Process Designer, define a data field in a workflow definition. For more information, see Workflow
properties - data fields.
In the administration console, create a database field in the event log for that data field. For more
information, see Managing user database fields.
To make case or task properties available from Case Manager systems, see Integrating IBM case analytics tools.
If a data field that you want to make available occurs in both the event log and audit log, then you need to
create only a single Case Analyzer data field to retrieve the data values from both logs. To enable this, the data
field name and type must be the same in the event log and audit log. For example, if a data field named
LoanAmt of data type float is exposed in the event log and audit log, then you will create a single Case
Analyzer data field of type float named LoanAmt to pull the data from both sources to the Case
Analyzer database.
Data fields can be created as dimensions or measures:
Dimensions provide meaningful statistical information about an item of business significance. A large
dimension (a dimension with many members) is hard for a user to comprehend unless the dimension
provides meaningful data. For example, defining the social security number (SSN) as a dimension
results in a large number of dimension members, with little or no statistical value per member. On the
other hand, defining a dimension as the first three numbers of the SSN, which indicate the issuing
state, can provide meaningful groupings of statistical information where there are many workflow
events with different SSNs. Statistical analysis can then be performed on the resulting groups.
Any data field type can be a dimension. For data fields of type float, integer, and time, you have the option of aggregating the data. For example, if a data field is an amount, you can categorize the amount field into ranges of 0-10, 10-100, 100-1000, and above 1000 (a sketch follows this list). Aggregating dimension data saves on storage space; if you choose not to aggregate the data, all the values are stored as members in the dimension, which yields large dimensions.
Important: Large dimensions (even those with fewer than 64,000 members) can be problematic for Excel. Consider a third-party application if Excel does not serve your purpose with large dimensions. Large dimensions also increase the memory footprint of Analysis Services.
Measures provide an aggregate value for a data field, such as a sum or average. Because measures
are used for aggregation functions, only data fields of type integer or float can be created as
measures. The default aggregation function for the measure is Sum.
You can use load balancers to manage client requests across all of the nodes in a FileNet P8 server farm.
Farming requires a mechanism to balance the load across all the nodes in a farm, and to redirect client
connections to surviving nodes in case of failure. This section summarizes the available load balancing options.
A number of hardware and software load-balancing products are available for server farm configurations,
including IBM, Oracle, F5 Big IP, and JBoss.
F5 Big IP
Important: Layer 7 load balancers are supported in FileNet P8, but the header or packet modification
capabilities of layer 7 load balancers have not been tested and should be used with caution.
The Case Analyzer preconfigured reports are organized to focus on the following areas of your system:
Case - In-progress and historical information about cases, such as the current number of cases, and
the average time to complete cases during a specified time period.
Task - In-progress and historical information about tasks, such as the current number of tasks, and the
average time to complete tasks during a specified time period.
Workflow - In-progress and historical information about workflows in your system, such as the current
number of workflows in the system, and the average time to complete workflows during a specified
time period.
Queue - In-progress and historical information about work items in various queues, such as the current
number of work items in each queue, and the number of work items completed during a specified time
period.
Step - In-progress and historical information about work items at various steps, such as the average
time spent to complete work at a step, and the percentage of work items taking each route from a step.
User - In-progress and historical information about each user, such as the average time to complete
work during a specified time period.
Case-related reports
Case-related reports provide you with information about the status and processing of cases in your
system. A case can be comprised of tasks, content, processes, and views.
Task-related reports
Task-related reports provide you with information about the status and processing of tasks in your
system. The distinction between a case and a task is that a case can comprise one or more tasks. A
task is a process fragment, which can be a set of items to be completed.
Workflow-related reports
Workflow-related reports provide you with information about the status and processing of workflows in
your system. To make sense of the reports, it helps to understand the distinction between a workflow
and a work item. A workflow is a single instance of a workflow definition. It consists of one or more
work items. A work item is the smallest individual piece of a workflow.
Queue-related reports
Queue-related reports provide you with information about the status of work items in specific queues in
your system.
Step-related reports
Step-related reports provide you with information about the status of work items at specific steps in
your system.
User-related reports
User-related reports provide you with information about the status and processing of work items by
specific users in your system.
Case-related reports
Version 5.2.1
Case-related reports provide you with information about the status and processing of cases in your system. A
case can be comprised of tasks, content, processes, and views.
The chart-based reports show the following case status information.

The average age of all cases currently in the system. Age is computed from the time the case was launched.

The number of cases that were created in the system during the specified time period. The information is grouped by time and case definition.

Number Of Cases In Progress During Time Period: The number of cases in progress for the specified time period. The information is grouped by time and case definition.

Number Of Cases Completed During Time Period: The number of cases that were completed during the specified time period. The information is grouped by time and case definition.
Task-related reports
Version 5.2.1
Task-related reports provide you with information about the status and processing of tasks in your system. The
distinction between a case and a task is that a case can comprise one or more tasks. A task is a process
fragment, which can be a set of items to be completed.
The following preconfigured reports provide the task data in chart form.

The number of tasks currently in the system. The information is grouped by task definition.

The average age of all tasks currently in the system. Age is computed from the time the task was launched.

The average amount of time, in hours, that it took to complete tasks during the specified time period. The information is grouped by time and task definition.

The number of tasks that entered the system during the specified time period. The information is grouped by time and task definition.

Number Of Tasks In Progress During Time Period: The number of tasks in the system at the end of the specified time period. The information is grouped by time and task definition.

Number Of Tasks Completed During Time Period: The number of tasks that were completed during the specified time period. The information is grouped by time and task definition.

The average amount of time that current tasks are in each state.

The average amount of time that tasks have been in each state for a specified time period.

Wait state indicates that the task was dependent on another item or task to complete before the task can continue.

Ready state indicates that the task was ready to begin; however, the task is waiting for either a process to start it automatically or for user intervention to manually start the task.

Failed state indicates that the processing of the task stopped and required user intervention for the task to continue towards completion.
Known issue: Case Analyzer is out of sync with IBM Case Manager after a test environment has
been reset.
Release notes
Abstract
Case Analyzer might not pick up new events after a reset of the Case Manager test
environment.
Content
In IBM Case Manager you can reset a test environment by using Case Manager Builder.
When you reset the test environment, the target object store in Content Engine and the
region data in Process Engine will be reinitialized. However, Case Analyzer will still
contain object store data and region data from events that occurred before the
environment reset. Therefore, Case Analyzer might not pick up new events after the
reset.
To correct this, you must reset the Case Analyzer database after the test environment is
reset in Case Manager. In Case Analyzer Version 5.0, the only way to reset the
database is to use a command line script.
Complete these steps to reset the Case Analyzer database:
1. From the command line, navigate to your system's equivalent of the following
directory:
C:\Program Files\IBM\FileNet\Case Analyzer Engine\jpa\scripts\sqlserver
The command will rebuild the Case Analyzer relational and OLAP database.
Technote (troubleshooting)
Problem (Abstract)
There are several layers to Case Analyzer, so when problems occur, this guide provides several places to check.
Symptom
Case Analyzer is not processing events, or OLAP cubes cannot be processed.
Cause
1. Verify the host name along with other CA and OLAP configuration parameters in CA
datastore configuration in Administrative Console for the Content Platform Engine
(ACCE).
2. Ensure that the Case Analyzer service is running on the Case Analyzer SSAS
(typically OLAP) server and that you can connect (telnet) to the Case Analyzer SSAS
server on port 32771 (a connectivity sketch follows this list).
3. Verify that the Case Analyzer database user exists and has all roles except the DENY
roles selected.
4. Verify that the Case Analyzer OLAP user is a domain account, has administrative
rights on the MSSQL Analysis server, and has a matching domain account on Case
Analyzer DB with all except DENY roles.
5. If the OLAP database does not exist, create a new one. Check the OLAP database
for the out-of-the-box cubes (e.g. Work In Progress). If they do not exist, run the
setupolap batch script and specify the CA datasource name. After you enter the
command, it might appear to hang (no output); it is actually prompting for
credentials, so enter the CA user name and password.
6. Once the OLAP database exists and has the cubes, check the VMAE datasource
credentials and if needed, change them to use the correct account. Manually process
the cubes to verify things are working.
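As a quick alternative to telnet for the connectivity check in item 2, a small sketch like the following can confirm that the Case Analyzer SSAS server is reachable on port 32771. The default host name here is a placeholder; pass your SSAS server name as the first argument.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Sketch: check TCP connectivity to the Case Analyzer SSAS server on
// port 32771 (the default host name is a placeholder).
public class SsasPortCheck {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "ssas-server.example.com";
        int port = 32771;
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("Connected to " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("Cannot connect to " + host + ":" + port + " - " + e.getMessage());
        }
    }
}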