
All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.

Author and Publisher
MD AZIZUDDIN AAMER
azizaamer@gmail.com

Dedicated to my parents and to my girlfriend Kanika

PREFACE

This book covers Component, Product, and Operational Readiness Testing with respect to Siebel. It also covers the testing life cycle, including both White Box and Black Box Testing. It covers Unit Testing (UT), Integration Testing (IT), and User Acceptance Testing (UAT), along with the entry and exit criteria and the deliverables for each type of testing covered.

It includes a sample full regression Siebel test plan and a comprehensive glossary of testing terms and terminology. It also covers Siebel testing best practices.

It also provides an overview of all Siebel applications, Configuration, Scripting, Workflows, EIM, EAI, HaleyAuthority, Data Validation Manager (DVM), State Model, and Visibility & Access Control Mechanisms. It also covers Siebel-recommended best practices for Configuration, Scripting, and EIM, as well as configuration guidelines.

It covers the following new features in the Siebel 8.0 UI:

Task Based UI

InkData Control

A new API called Siebel Test Optimizer (formerly known as Siebel Test Express)

It covers Siebel Test Automation using QTP, including how to enable Siebel Test Automation.

It also covers Siebel Script Views, Siebel Step Groups, Siebel Init Action, Siebel Page Tabs, Siebel Text Fields, Siebel Buttons, Siebel Picklists, Siebel Menus, Search Buttons, parameterization, Object Identification, Object Test, and Table Test.

It also covers recording a Siebel load test script, the Siebel Correlation Library, validation, the Text Matching Test, the Server Response Test, and parameterization.

ABOUT THE AUTHOR

Md Azizuddin Aamer is an MBA currently working as a Project Manager for a large MNC.

He has more than 8 years of Siebel CRM experience and has worked on all versions of Siebel ranging from Siebel 5.5 to Siebel 8.1.

He is PMP certified by PMI, PRINCE2 certified, ITIL certified, and a Stanford Certified Project Manager (SCPM) from Stanford University. He is a Siebel 7.7 Certified Consultant as well as certified as a Siebel 7.7 Analytics Server Architect.

TABLE OF CONTENTS

Test Concept Overview
Test Framework
Test Stages: Application Team
Functional Testing
Technical Testing
Component Testing: What is it?
Component Test: Plan
Component Test: Prepare & Execute
Component Test: Key Considerations
Assembly Testing: What is it?
Assembly Testing: Plan
Assembly Testing: Prepare & Execute
Assembly Testing: Key Considerations
Product Test: What is it?
Product Test: Plan
Product Test: Prepare & Execute
Product Test: Key Considerations
User Acceptance Test: What is it?
User Acceptance Test: Plan
User Acceptance Test: Prepare and Execute
User Acceptance Test: Key Considerations
Performance Test: What is it?
Performance Test: Performance Factors
Performance Test: Plan
Performance Test: Prepare & Execute
Performance Test: Key Considerations
Operational Readiness Test: What is it?
Operational Readiness Test: Plan
Operational Readiness Test: Prepare & Execute
Operational Readiness Test: Key Considerations
Testing Life Cycle
A systematic approach to testing that normally includes these phases:
1. Risk Analysis
2. Planning Process
3. Test Design
4. Performing Test
5. Defect Tracking and Management
6. Quantitative Measurement
7. Test Reporting
A. Risk Identification
1. Software Risks - Knowledge of the most common risks associated with software development, and the platform you are working on.
2. Testing Risks - Knowledge of the most common risks associated with software testing for the platform you are working on, tools being used, and test methods being applied.
3. Premature Release Risk - Ability to determine the risk associated with releasing unsatisfactory or untested software products.
4. Business Risks - Most common risks associated with the business using the software.
5. Risk Methods - Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products, and processes; assessing their likelihood, and initiating strategies to test for those risks.
B. Managing Risks
1. Risk Magnitude - Ability to rank the severity of a risk categorically or quantitatively.
2. Risk Reduction Methods - The strategies and approaches that can be used to minimize the magnitude of a risk.
3. Contingency Planning - Plans to reduce the magnitude of a known risk should the risk event occur.
CSTE Body of Knowledge Skill Category 6
A. Pre-Planning Activities
1. Success Criteria/Acceptance Criteria - The criteria that must be validated through testing to provide user management with the information needed to make an acceptance decision.
2. Test Objectives - Objectives to be accomplished through testing.
3. Assumptions - Establishing those conditions that must exist for testing to be comprehensive and on schedule; for example, software must be available for testing on a given date, hardware configurations available for testing must include XYZ, etc.
4. Entrance Criteria/Exit Criteria - The criteria that must be met prior to moving to the next level of testing, or into production.
B. Test Planning
1. Test Plan - The deliverables to meet the test's objectives; the activities to produce the test deliverables; and the schedule and resources to complete the activities.
2. Requirements/Traceability - Defines the tests needed and relates those tests to the requirements to be validated.
3. Estimating - Determines the amount of resources required to accomplish the planned activities.
4. Scheduling - Establishes milestones for completing the testing effort.
5. Staffing - Selecting the size and competency of staff needed to achieve the test plan objectives.
6. Approach - Methods, tools, and techniques used to accomplish test objectives.
7. Test Check Procedures (i.e., test quality control) - Set of procedures based on the test plan and test design, incorporating test cases that ensure that tests are performed correctly and completely.
C. Post-Planning Activities
1. Change Management - Modifies and controls the plan in relationship to actual progress and scope of the system development.
2. Versioning (change control/change management/configuration management) - Methods to control, monitor, and achieve change.
75
A. Design Preparation
1. Test Bed/Test Lab - Adaptation or development of the approach to be used for test design and test execution.
2. Test Coverage - Adaptation of the coverage objectives in the test plan to specific system components.
B. Design Execution
1. Specifications - Creation of test design requirements, including purpose, preparation and usage.
2. Cases - Development of test objectives, including techniques and approaches for validation of the product. Determination of the expected result for each test case.
3. Scripts - Documentation of the steps to be performed in testing, focusing on the purpose and preparation of procedures; emphasizing entrance and exit criteria.
4. Data - Development of test inputs, use of data generation tools. Determination of the data set or sub-set needed to ensure a comprehensive test of the system. The ability to determine data that suits boundary value analysis and stress testing requirements.
A. Execute Tests - Perform the activities necessary to execute tests in accordance with the test plan and test design (including setting up tests, preparing database(s), obtaining technical support, and scheduling resources).
B. Compare Actual versus Expected Results - Determine if the actual results met expectations (note: comparisons may be automated).
C. Test Log - Logging tests in a desired form. This includes incidents not related to testing, but still stopping testing from occurring.
D. Record Discrepancies - Documenting defects as they happen, including supporting evidence.
75
A. Defect Tracking
1. Defect Recording - Defect recording is used to describe and quantify deviations from requirements.
2. Defect Reporting - Report the status of defects, including severity and location.
3. Defect Tracking - Monitoring defects from the time of recording until satisfactory resolution has been determined.
B. Testing Defect Correction
1. Validation - Evaluating changed code and associated documentation at the end of the change process to ensure compliance with software requirements.
2. Regression Testing - Testing the whole product to ensure that unchanged functionality performs as it did prior to implementing a change.
3. Verification - Reviewing requirements, design, and associated documentation to ensure they are updated correctly as a result of a defect correction.
A. Concepts of Acceptance Testing - Acceptance testing is a formal testing process conducted under the direction of the software users to determine if the operational software system meets their needs, and is usable by their staff.
B. Roles and Responsibilities - The software testers need to work with users in developing an effective acceptance plan, and to ensure the plan is properly integrated into the overall test plan.
C. Acceptance Test Process - The acceptance test process should incorporate these phases:
1. Define the acceptance test criteria
2. Develop an acceptance test plan
3. Execute the acceptance test plan
A. Test Completion Criteria
1. Code Coverage - Purpose, methods, and test coverage tools used for monitoring the execution of software and reporting on the degree of coverage at the statement, branch, or path level.
2. Requirement Coverage - Monitoring and reporting on the number of requirements exercised, and/or tested to be correctly implemented.
B. Test Metrics
1. Metrics Unique to Test - Includes metrics such as Defect Removal Efficiency, Defect Density, and Mean Time to Last Failure.
2. Complexity Measurements - Quantitative values accumulated by a predetermined method, which measure the complexity of a software product.
3. Size Measurements - Methods primarily developed for measuring the software size of information systems, such as lines of code, function points, and tokens. These are also effective in measuring software testing productivity.
4. Defect Measurements - Values associated with numbers or types of defects, usually related to system size, such as 'defects/1000 lines of code' or 'defects/100 function points'.
5. Product Measures - Measures of a product's attributes such as performance, reliability, failure, and usability.
A. Reporting Tools - Use of word processing, database, defect tracking, and graphic tools to prepare test reports.
B. Test Report Standards - Defining the components that should be included in a test report.
C. Statistical Analysis - Ability to draw statistically valid conclusions from quantitative test results.
Black Box Testing and Its Methods
The Test Cycle
Analyze Phase
Analyze: Testing Terms
Design Phase
Design: Testing Terms
Design Phase: Templates
Implement Phase
Implement: Test Terms
Designing Test Cases: Terms
Execute Phase
Execute Phase: Templates
Execute: Test Tools
Siebel Testing
Unit Testing (UT)
Interface Testing (IT)
Functional Testing (FT)
System Integration Testing (SIT)
Performance Testing (PT)

Regression Testing (RT)
User Acceptance Testing (UAT)
Entry/Exit Criteria for the Testing Phase
White Box Testing
White Box Testing Techniques
Black Box Testing
Black Box Testing Techniques
Testware
Structured Testing Challenges
Making Test a First Class Citizen
Use Cases for UCBT
UCBT
Advantages of Use-Case Based Modeling
Using RUP
RUP Processes
Test Activities According to RUP
Siebel eRoadmap
Siebel Testing Process Overview
Testing SOA
Risk Mitigation Strategies
Cornerstones of T-Map: Test Management Approach
Lifecycle
Techniques
Organization
Infrastructure
How Can You Test Better?
Introducing Siebel Applications
Siebel Customer Relationship Management (CRM)
Siebel CRM Enterprise
Siebel CRM Professional Edition
Siebel CRM OnDemand
Siebel Business Entities
Accounts
Types of User Interfaces (UI)

Implementing Siebel Applications
Successful Siebel Product Implementations
Use a Standardized Implementation Methodology
Implementation Methodology Characteristics
Advantages of a Multi-Phased Approach
Two Ways to Satisfy User Requirements
Impacts of the Implementation Approaches
The Siebel Data Model
Prominent Data Tables
Siebel Scripting Terms
Siebel Object Model
Siebel Scripts
Browser Scripts
Server Scripts
Siebel Workflow Architecture
Workflow Step
Events Invoking Siebel Workflow Processes

Business Logic and Decision Rules
Decision Rules: Decision Step Details
Actions: Business Service Step
Actions: Siebel Operation Step
Process Properties
A Word about Process Properties
Developing Workflows in Siebel Tools

Note: Developers can also develop or modify workflows using Siebel Tools connected to the development database by locking the project in the master repository. This way, they do not need to make sure that all the lists of values are made available to the local database.
Event Logs
Migrate to Production
Using Siebel State Model
Business Challenge and Solution
Siebel State Model
Siebel Business Rules Using HaleyAuthority
About HaleyAuthority
Advantages of HaleyAuthority
Haley Architecture

Siebel Business Rules Features
High Level Rules Architecture
Components of the Rules Architecture
Runtime Inference Engine
Steps for Using HaleyAuthority
Enterprise Integration Manager (EIM)
Why Not SQL?
Siebel Base Tables
Interface Tables
User Keys
Interface Table Structure
Interface Table Temporary Columns
Note: Temporary columns start with T_ and are used to hold temporary values and status used during a processing step.

EIM Process Flow
Process Flow Between Siebel Database and Customer Master
Data Mapping
SQL Ancillary Program Utility: SQL*Loader Control Files
Prepare the Interface Tables
Prepare the EIM Configuration File
Run EIM
Verify Results
Process Flow Between Siebel Database and Other Databases
Data Mapping
An interface table may populate more than one base table.
A base table may be populated by more than one interface table.
Identify:
- Which interface table maps to which Siebel base table.
- Which Siebel base table maps to which interface table.
- Which interface table columns map to which base table columns.
Note: Some base tables may not be mapped to a corresponding interface table. In such cases, use Siebel VB to load data. Siebel VB works on the business object layer, while EIM works on the data layer.
What is Upgrade?
How to Upgrade
Why to Upgrade
Benefits of Upgrading
Upgrade Considerations
Siebel Upgrade Framework
Upgrade/Migration Assessment
Assessment Deliverables
Upgrade/Migration Rollout
Post Migration Support
Flow of the Upgrade Process
Upgrade Flow
What's New in Siebel 8/8.1: Generic Functionality
Task Based User Interface
Overview
Task UI Features
Task UI Concepts
Transient Business Component
How Transient BCs Differ from Standard BCs
Task Applet
Task View
Task Chapter
Task Group
Task Playbar
iHelp
Why?
Using iHelp to Complete Tasks
Using iHelp Map
Process of iHelp Administration
SIEBEL SMART SCRIPT
Overview
Siebel SmartScript Offers the Following Benefits
SmartScript Terminology
Procedure for Creating SmartScript
Siebel Data Validation Manager
Introduction
The DVM Features
Roadmap for Implementing Data Validation Processing
A. Process of Administering Data Validation Rules
Access Control Mechanisms
Access Control Requirements
What is Access Control?
Single Party Model
Categorization
Organizational Entities
Siebel Access Control Mechanisms
Siebel Organization Implementation
Organization Enabled Objects

Sub-Organization Visibility
Access Group
Why Another Visibility Control Mechanism?
What are Access Groups?
How Access Groups Address Complex Requirements
Behavior of Community and Catalog - Catalog
Overview of EAI
Need for Integration
Basic Integration Tasks
Identify the Data to Integrate in Each Application
Map and Transform the Data from Each Application
Transport the Data Between Applications
Traditional Application Integration
Features of an Integrated Environment
Integration Approaches
Siebel Integration Strategies
Workflow for EAI
Elements of Workflow for EAI
EAI Dispatch Service
Virtual Business Components
eBusiness Connectors
Enterprise Integration Manager (EIM)
Object Interfaces
Other EAI Strategies
Comparing EAI Strategies
Matching EAI Strategies to Integration Approaches
1. Synchronize Siebel Data with External Data

2. Display External Data in Siebel Applets
3. Display Siebel Data in an External System
4. Control a Siebel Application from an External System
5. Export Siebel Data to an External System
Business Services
Business Service Method Arguments
Property Set
Hierarchical Property Set
What Can Invoke a Business Service
Prebuilt EAI Business Services
Prebuilt Data Transformation Adapters
Prebuilt Data Transport Adapters
Custom Business Services
Configuration Best Practices
Desc Sort Specification
Force Active and Link Specification
Non-Indexed Search/Sort Specifications
Primary ID Field & Primary Join Property
Check No Match Property
Setting No Delete, No Insert, No Update Properties at BC Level
Outer Join Flag
Redundant SearchSpecs on Applets
Defining Ancestry of Custom Objects
Required Property
Update a Field When Another Field is Updated
BC Read Only Field & Field Read Only Field:fieldname & Parent Read Only Field
Comment Configuration Changes
Cloning Objects
Siebel Scripting Best Practices
When to Use Scripting
Follow Standard Naming Conventions
Comment Code
Place Code in the Correct Event Handler
Know When to Use Browser versus Server Script
Use Fast Script in Event Handlers That Fire Frequently
Use Option Explicit
Leverage Appropriate Debugging Techniques
Remove Unused Code from the Repository
Include Error Handling in All the Scripts
Use RaiseError and RaiseErrorText Properly
Use Exception Information
Place Return Statements Correctly: eScript
Centralize Browser Script Using the Top Object
Know When to Use Current Context versus a New Context in Server Script
Use Smallest Possible Scope for Variables
Instantiate Objects Only as Needed
Destroy Object Variables When No Longer Needed
Use Conditional Blocks to Run Code
Verify Objects Returned
Verify Field is Active Before Use
Use Proper View Mode for Queries
Use ForwardOnly Cursor Mode
Verify Existence of Valid Record After Querying
Use Switch or Select Case Statements
Call Methods Once If Results Do Not Change
Script the BusComp_PreGetFieldValue Event Carefully
Use the Required Field Property
Call Custom Methods Only As Necessary
Use Conditional Blocks Example
Conditional Statements - Example
Creating Methods As Wrappers for Simple Method Calls
Remove Debugging Code
Use ActivateMultipleFields
Use the Associate Method
Nested Query Loops
Cache Data
Use a Join Instead of a Scripted Query
Reuse Objects
EIM General Recommendations
Five Ways to Tune the EIM Process
Recommended Order of Tuning
GLOSSARY
REGRESSION TEST PLAN
Introduction
Document Purpose
Document Scope
Test Focus
Test Objectives
Dependencies and Assumptions
Testing Coverage & Traceability
Approach
Test Collateral
Data Requirements
Confidence Testing
Regression in System Test Phase
Regression in System Integration Test Phase
Performance Testing
Regression Test Scope
Location of Test Scripts
System Test Regression Summary
Core Siebel CRM 4.3 Functions/Features to be Tested During ST

System Integration Test Regression Summary
Core Siebel CRM 4.3 Functions/Features to be Tested During SIT
Legacy Interface Functions/Features to be Tested
Performance Testing
Functions/Features Not to be Tested
Entry and Exit Criteria
Entry Criteria
Exit Criteria
Suspension and Resumption Criteria
Suspension
Resumption
Testing Tools and Techniques
Test Plan and Schedule
Resource Requirements
Test Plan
Risks and Contingencies
Appendix
CRM 4.3 Test Scripts
Legacy Interface Explanation
Confidence Test Checklist
Performance KPIs
Best Practices for Siebel Functional Test Script Development
Siebel 8.0 Test Automation
Siebel 8.0 Test Automation New and Enhanced Features
Task Based UI
SiebTask Control Specification
SiebTaskStep Control Specification
SiebTaskUIPane Control Specification
SiebTaskLink Control Specification
InkData Control
SiebInkData Control Specification
Siebel 8.0 Test Automation Benefits
Siebel Test Optimizer Introduction
Siebel Test Optimizer Benefits
Siebel Test Optimizer Installation and Setup Considerations
Siebel Test Optimizer Configuration Steps
Siebel Test Optimizer Usage
Siebel Functional Testing Overview
Creating a Siebel Functional Test Script
Enabling Siebel Test Automation
Verify Siebel Script Recording

Configuring Internet Explorer
Siebel Script Views
Siebel Step Groups
Siebel Init Action
Siebel Page Tabs
Siebel Text Fields
Siebel Buttons
Siebel Picklists
Confirmation Dialogs
Siebel Menus
Search Button
Other Siebel Actions
Parameterization
Object Identification
Siebel Object Identification Preferences
Using Object Library for Siebel Scripts
Object Test
Select an Object to Test
Select Which Property or Properties to Test
Table Test
Select Individual Cells to Test
Recording a Siebel Load Test Script
Siebel Correlation Library
Validation
Text Matching Test
Server Response Test
Parameterization
Script Step Groups

Test Concept Overview

The V-Model is a proven, industry-standard framework that defines the standard development life cycle. It is shown below.

Test Framework

Within the development life cycle, each standard test stage involves specific testing tasks during the testing life cycle. These tasks are illustrated relative to the Testing Framework:

Define the Approach
Plan the Test
Prepare the Test
Execute the Test
Close the Test

Test Stages: Application Team

A Stage refers to the major development process steps in a project's lifecycle: Analyze, Design, Build, Test, etc. It also refers to the different stages of tests. Here are the testing stages for the Application team:

Component Test
Assembly Test
Product Test
User Acceptance Test
Performance Test

Test Stages: Technical Architecture Team

Here are the testing stages for the Technical Architecture team:

Technical Architecture Component and Assembly Test
Product Test
Performance Test
Operational Readiness Test

Functional Testing

The test stages can be categorized into two types: functional and technical testing. Here are the different functional tests:

Component Test
Assembly Test
Product Test
- Application product test
- Integration product test
User Acceptance Test

Technical Testing

Here are the different technical tests:

Performance Test - This test is carried out to ensure that a release is capable of operating at the load levels specified in the business performance requirements and any agreed-on Service Level Agreements (SLAs).

Operational Readiness Test (ORT) - Tests the readiness of the production environment to handle the new system.
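To make the SLA check described above concrete, here is a minimal sketch in Python: a load run passes if the 95th percentile response time stays within an agreed threshold. The sample timings and the 2-second SLA are hypothetical, not taken from any Siebel requirement.

```python
# Minimal sketch of an SLA check over measured response times.
# The sample timings and the 2.0-second threshold are hypothetical.

def percentile(samples, pct):
    """Return the pct-th percentile of a list of numbers (nearest-rank method)."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

def meets_sla(response_times_sec, sla_sec=2.0, pct=95):
    """The run passes if the pct-th percentile response time is within the SLA."""
    return percentile(response_times_sec, pct) <= sla_sec

# Timings collected from a hypothetical load run: one outlier at 3.5 s.
timings = [0.4, 0.6, 0.7, 0.9, 1.1, 1.2, 1.4, 1.6, 1.8, 3.5]
print(meets_sla(timings))  # the 95th percentile here is 3.5 s, so this run fails
```

A percentile target is used rather than the mean because SLAs are usually stated for the slowest acceptable fraction of requests, and a single outlier should be visible rather than averaged away.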

Component Testing: What is it?

Component Test:

Component test ensures that the logic implemented in the module scope satisfies the module's requirements.

The component test stage spans a number of individual component tests that prove the low-level functionality of all end-system components.

The component test also validates that a module's code reflects the appropriate detailed logic set forth in the design specification.

A component test condition denotes a unique path through the module's logic. The scope of the test condition encompasses logical branches, limits, etc.
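To illustrate what a test condition looks like in practice, the sketch below exercises every branch and boundary of a small discount routine. The function and its thresholds are invented for illustration only; they are not part of any Siebel module.

```python
# Hypothetical module under test: each branch and boundary below is one
# component test condition (a unique path through the module's logic).
def discount_rate(order_total):
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if order_total >= 1000:      # upper branch
        return 0.10
    if order_total >= 500:       # middle branch
        return 0.05
    return 0.0                   # default branch

# One check per test condition, including the limit values 500 and 1000.
assert discount_rate(0) == 0.0
assert discount_rate(499.99) == 0.0
assert discount_rate(500) == 0.05      # lower limit of the middle branch
assert discount_rate(999.99) == 0.05
assert discount_rate(1000) == 0.10     # lower limit of the upper branch
try:
    discount_rate(-1)
    assert False, "expected ValueError for the error branch"
except ValueError:
    pass
```

Note how the boundary values (500 and 1000) get their own conditions: off-by-one defects cluster at exactly these limits, which is why the text calls out "logical branches, limits, etc."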

Component Test: Plan

The first task is to Plan the Component Test:

Process steps for component test:
Define Component Test Approach
Define Component Test Conditions
Define Component Test Cycles

Inputs:
Data Conversion Design
Technical Design (Customization/Integration)
Testing Strategy
Application Development Standards

Deliverables:
Test Plan

Checklists:
Component Test Entry Criteria
Component Test Exit Criteria

Component Test: Prepare & Execute

The next task is to Prepare and Execute the Component Test:

Process steps for component test:
Conduct Component Test

Inputs:
Test Plan

Deliverables:
Test Plan
Test Data
Test Closure Reports

Checklists:
Component Test Entry Criteria
Component Test Exit Criteria
Code Review Checklist

Component Test: Key Considerations

Key Considerations:

Define the boundaries between component and assembly testing
Limit the scope of component testing
Address performance concerns during component test
Avoid maverick programmers
Create the test plan in parallel with the detailed design
Plan for regression testing
Pay attention to the efficiency of the component test
Live data vs. prepared component test
Other Component Test Considerations

Assembly Testing: What is it?

Assembly Test:

Assembly test ensures that related components function properly when assembled.

Testing the application component interfaces helps verify that correct data is passed between components.

At the completion of assembly testing, all component interfaces in the application are executed and proven to work according to specifications.

It is imperative that the assembled components be fully tested before they are migrated to product test.
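The interface focus described above can be sketched as follows: two toy components are assembled, and the test checks that the data handed from one to the other arrives intact. Both components and their field names are hypothetical, invented purely to show the shape of an assembly test.

```python
# Hypothetical producer component: builds an order record.
def create_order(account_id, amount):
    return {"account_id": account_id, "amount": amount, "status": "New"}

# Hypothetical consumer component: posts the order to a ledger.
def post_to_ledger(order, ledger):
    ledger.append({"account_id": order["account_id"], "posted": order["amount"]})
    order["status"] = "Posted"
    return order

# Assembly test: exercise the interface between the two components and
# verify that the correct data crossed it, not just that no error occurred.
ledger = []
order = post_to_ledger(create_order("ACC-1", 250.0), ledger)
assert ledger == [{"account_id": "ACC-1", "posted": 250.0}]
assert order["status"] == "Posted"
```

The assertions deliberately inspect the data on the far side of the interface; an assembly test that only checks the return status would miss a field mapped to the wrong column.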

Assembly Testing: Plan

The first task is to Plan Assembly Test:

Process steps for assembly test:
Define Assembly Test Approach
Define Assembly Test Conditions and Expected Results
Define Assembly Test Cycles
Validate Assembly Test Plan

Inputs:
User Scenario
Data Conversion Design
Requirements
Testing Strategy
Metrics
Technical Architecture Specifications
Development Environment Design
Test Plan

Deliverables:
Test Approach
Test Conditions & Expected Results
Test Cycle Control Sheet

Assembly Testing: Prepare & Execute

The next task is to Prepare and Execute Assembly Test:

Process steps for assembly test:
Confirm Assembly Test Cycles
Establish Assembly Test Environment
Create Assembly Test Scripts
Update Common Test Data
Execute Assembly Test
Perform Fixes

Inputs:
Test Plan
Common Test Data

Deliverables:
Test Closure Memo
Common Test Data
Test Plan

Assembly Testing: Key Considerations

Key Considerations:

Clearly define the boundary between component testing and assembly testing
Focus this test stage on the interfaces between components
Pay attention to the efficiency of the assembly test
Plan for regression testing
Ensure adequate assembly testing
Consider the impact of object development
Align the assembly testing schedule with the delivery of assemblies
Creating assembly test plans for object-oriented applications requires more effort
Prioritize the testing efforts
Plan for migration
Refine test plans after running test scripts once
Foster collaboration between testers and developers
Set up the common test data for testing multiple applications

Product Test: What is it?

Product Test:

Sometimes called system test, the product test verifies that the system meets all functional and business requirements. Break it into two separate tests for systems that involve multiple applications:

The application product test tests business requirements met by each individual application.
The integration product test is an end-to-end test of the business requirements across all applications and platforms. Upon successful completion of the application product test, the integration product test can occur.
It may be necessary to separate the application and integration product test environments (to reduce the risk of clashes, etc.). If so, do not combine the product and integration tests.

Product testing ensures that the following requirements have been met:

Business requirements are fulfilled
Functional specifications are correctly implemented
Complex business processes execute correctly
Business process errors and exception logic are handled correctly
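As a minimal illustration of checking exception logic end to end, the sketch below drives a tiny order process through both its happy path and an error path, and verifies that the failure is reported rather than silently swallowed. The process, field names, and validation rules are hypothetical.

```python
# Hypothetical business process: validate an order, then fulfil it.
def process_order(order):
    errors = []
    if order.get("amount", 0) <= 0:
        errors.append("amount must be positive")
    if not order.get("account_id"):
        errors.append("missing account")
    if errors:
        # Exception logic: reject with a reason rather than failing silently.
        return {"status": "Rejected", "errors": errors}
    return {"status": "Fulfilled", "errors": []}

# Product-test style checks: the happy path and the exception path.
ok = process_order({"account_id": "ACC-1", "amount": 100})
assert ok["status"] == "Fulfilled"

bad = process_order({"amount": -5})
assert bad["status"] == "Rejected"
assert "missing account" in bad["errors"]
```

The error-path assertions matter as much as the happy-path ones: a product test that never feeds the process bad data cannot confirm that "business process errors and exception logic are handled correctly."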

Product Test: Plan

The first task is to Plan Product Test:

Process steps for product test:
Define Product Test Approach
Define Product Test Conditions & Expected Results
Define Product Test Cycles
Validate Product Test Plan

Inputs:
Fit/Gap Analysis
Application Inventories
Integration Design
Business Process Design
Requirements
Testing Strategy
Metrics

Deliverables:
Test Approach
Test Conditions & Expected Results
Test Cycle Control Sheet

Checklists:
Application Product Test Entry Criteria
Application Product Test Exit Criteria
Tech Arch Product Test Entry Criteria
Tech Arch Product Test Exit Criteria

Product Test: Prepare & Execute

The next task is to Prepare and Execute Product Test:

Process steps for product test:
Confirm Product Test Cycles
Establish Product Test Environment
Create Product Test Scripts
Update Common Test Data
Execute Product Test
Perform Fixes

Inputs:
Test Plan
Common Test Data

Deliverables:
Common Test Data
Test Plan
Test Closure Memo

Checklists:
Application Product Test Entry Criteria
Application Product Test Exit Criteria
Product-Tested Application Checklist

Product Test: Key Considerations

Key Considerations:

Use this task to plan technical architecture product tests
Develop the test suite to support automated regression testing
Account for additional levels of product testing
Test the application's compatibility
Emphasize security testing
Conduct a technical architecture product test, if necessary
Define and execute communication procedures
Ensure the test environment is set up properly
Keep the application documentation in sync with the application
Refine the test plans after running the test scripts
Set up common test data for testing multiple applications
Budget extra time to resolve defects

User Acceptance Test: What is it?

User Acceptance Test:

User acceptance test ensures that the solution meets the original functional and
business requirements and is acceptable to the client.

UAT may span the same areas as application and integration product testing, but
design it with help from the business sponsors and end users.

User Acceptance Test: Plan


The first task is to Plan User Acceptance Test:

Process steps for User Acceptance test:


Define User Acceptance Test Approach
Define User Acceptance Test Conditions & Expected Results
Define User Acceptance Test Cycles
Validate User Acceptance Test Plan
Inputs:
Fit/Gap analysis
User Scenario
Application Inventories
Integration Conceptual Design
Business Process Design
Requirements
Testing Strategy
Metrics

Deliverables:

Test Approach
Test Conditions & Expected Results
Test Cycle Control Sheet

User Acceptance Test: Prepare and Execute


The next task is to Prepare and Execute User Acceptance Test:

Process steps for User Acceptance test:

Inputs:

Confirm User Acceptance Test Cycles


Prepare and Train Targeted Users
Establish User Acceptance Test Environment
Create User Acceptance Test Scripts
Schedule Test
Execute User Acceptance Test
Gather User Feedback
Perform Fixes
Communicate Results

Test Plan
Common Test Data

Deliverables:
Test Closure Memo
Test Plan
Checklists:
User-Accepted Application Checklist

User Acceptance Test: Key Considerations


Key Considerations

Involve the users early and often in the development process


Begin planning for user participation in the User Acceptance Test
Use the UAT as more than a user evaluation
Allow more time to resolve defects
Repeat the tests conducted in earlier test stages, if necessary

Performance Test: What is it?


Performance Test:

Performance testing identifies and fixes system performance issues before the system
goes live. It generally includes load testing, stress testing, stability testing,
throughput testing, and ongoing performance monitoring.

Performance testing ensures that a system is capable of operating at realistic and
peak workloads in a fully integrated environment. The analysis and design must
explicitly determine suitable business performance requirements.

Performance testing should test the following parameters:


System performance at normal volume levels
System performance at peak volume levels
System performance at twice peak volume levels
Volume level at which performance degradation begins
Nature of performance degradation
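As a rough sketch of how these load levels might be exercised, the snippet below times a simulated transaction at increasing concurrency. The process_request function and the user counts are invented for illustration; a real performance test would drive the deployed system with a load tool such as LoadRunner.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Sketch: measuring response times at increasing load levels.
# process_request stands in for a real transaction against the
# system under test; the sleep simulates its service time.

def process_request(_):
    start = time.perf_counter()
    time.sleep(0.01)                     # simulated transaction work
    return time.perf_counter() - start

for users in (1, 5, 10):                 # normal, peak, twice-peak (invented levels)
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(process_request, range(users * 5)))
    avg_ms = 1000 * sum(timings) / len(timings)
    print(f"{users:>2} concurrent users: avg {avg_ms:.1f} ms over {len(timings)} requests")
```

Comparing the average response time across the three levels shows the volume level at which degradation begins and what its nature is.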

Performance Test: Performance Factors


Performance Factors

The performance (e.g., transactional response time) is influenced by the core
system components: the client, the server, and the network. There are delays
associated with each component. These delays aggregate to ultimately influence
the end user application performance levels.

Application factors are associated with application architecture, hardware,
platform, design, and content.
User factors are associated with user location or connection method to the
application.
Network factors are associated with the network infrastructure that connects
the user to the application and the various nodes of the application
architecture to one another.

Performance Test: Plan

The first task is to Plan Performance Test:

Process steps for performance test:

Define Performance Test Approach


Define Performance Test Conditions & Expected Results
Define Performance Test Cycles
Validate Performance Test Plan

Inputs:

Fit/Gap Analysis
User Scenario
Application Inventories

Integration Conceptual Design


Business Process Design
Requirements
Testing Strategy
Metrics

Deliverables:

Test Approach
Test Conditions & Expected Results
Test Cycle Control Sheet

Performance Test: Prepare & Execute


The next step is to Prepare and Execute Performance Test:

Process steps for performance test:


Confirm Performance Test Cycles
Establish Performance Test Environment
Create Performance Test Scripts
Update Common Test Data
Execute Performance Test
Perform Fixes
Inputs:

Test Plan
Common Test Data

Deliverables:

Test Closure Memo


Test Plan
Common Test Data

Checklists:

Product-tested Application Checklist

Performance Test: Key Considerations

Key Considerations
Plan technical architecture performance tests
Link technical infrastructure sizing activities with performance test planning
Focus on quality requirements
Model the test environments after the production environment
Conduct technical architecture performance test prior to application performance
test
Refine test plans after running the first test script set
Use performance testing tools
Return performance test modifications to product test for functional verification
Prepare to handle performance problems with conversion programs
Trace back to design defects when fixing performance problems
Obtain live data input files for the performance test
Tune the application rather than the infrastructure
Budget extra time to resolve defects

Begin performance test as early as possible

Operational Readiness Test: What is it?

Operational Readiness Test:

The Operational Readiness Test (ORT) verifies the production environment's
ability to handle the new system. It consists of three components:
The operations test verifies that the correct functionality, architecture, and
procedures are defined and implemented to allow production support teams
to run, maintain, and support the system.
The deployment test verifies that all system components can correctly deploy
to the production environment in the time required.
The deployment verification test verifies that the system is correctly
installed and configured in the production environment.

Operational Readiness Test: Plan


The first task is to Plan for the Operational Readiness Test:

Process steps for operational readiness test:


Plan for Deployment Test
Plan for Deployment Verification Test
Plan for Operations Test
Inputs:
Data Conversion Design
Requirements
Testing Strategy
Test Plan

Deliverables:

Test Plan
Migration Procedures
Migration Verification Scripts
Migration Request Form

Checklists:

Application Operability Checklist

Operational Readiness Test: Prepare & Execute


Prepare and Execute Operational Readiness Test:

Process steps for operational readiness test:

Execute Deployment Test


Execute Deployment Verification Test
Execute Operations Test
Verify Result
Perform Fixes
Enable Administrators and Support
Confirm Operational Readiness

Notify Deployment, Development, and Operating Groups

Inputs:
Metrics
Service Introduction Plan
Service Level Agreement
Test Plan
Training Materials
Performance Support Materials
Deliverables:
Test Closure Memo
Checklists:
Application Operability Checklist
Operations Test Entry Criteria
Operations Test Exit Criteria
Service Level Test Entry Criteria
Service Level Test Exit Criteria

Operational Readiness Test: Key Considerations

Key Considerations:

Understand and define the operations test scope


Ensure the operations test scripts follow the actual operations procedures
Design the operations test with deployment and support in mind
Consider operations test execution issues
Evaluate the test's effectiveness
Ensure the application and technical architecture is stable

Testing Life Cycle


Just as there is an SDLC (software development life cycle), there is an STLC
(software testing life cycle). Different organizations use different names for
the phases, but broadly it can be summarized as follows.
A systematic approach to testing that normally includes these phases:
1. Risk Analysis
2. Test Planning
3. Test Design
4. Performing Tests
5. Defect Tracking and Correction
6. Acceptance Testing
7. Status of Testing (Quantitative Measurement)
8. Test Reporting

1. Risk Analysis
A. Risk Identification
1. Software Risks - Knowledge of the most common risks associated with
software development, and the platform you are working on.
2. Testing Risks - Knowledge of the most common risks associated with
software testing for the platform you are working on, tools being used, and
test methods being applied.
3. Premature Release Risk - Ability to determine the risk associated with
releasing unsatisfactory or untested software products.
4. Business Risks - Most common risks associated with the business using
the software.
5. Risk Methods - Strategies and approaches for identifying risks or
problems associated with implementing and operating information
technology, products, and processes; assessing their likelihood, and
initiating strategies to test for those risks.
B. Managing Risks
1. Risk Magnitude - Ability to rank the severity of a risk categorically or
quantitatively.
2. Risk Reduction Methods - The strategies and approaches that can be
used to minimize the magnitude of a risk.
3. Contingency Planning - Plans to reduce the magnitude of a known risk
should the risk event occur.
2. Test Planning Process
A. Pre-Planning Activities
1. Success Criteria/Acceptance Criteria - The criteria that must be
validated through testing to provide user management with the
information needed to make an acceptance decision.
2. Test Objectives - Objectives to be accomplished through testing.
3. Assumptions - Establishing those conditions that must exist for testing to
be comprehensive and on schedule; for example, software must be
available for testing on a given date, hardware configurations available for
testing must include XYZ, etc.
4. Entrance Criteria/Exit Criteria - The criteria that must be met prior to
moving to the next level of testing, or into production.
B. Test Planning
1. Test Plan - The deliverables to meet the test's objectives; the activities to
produce the test deliverables; and the schedule and resources to complete
the activities.
2. Requirements/Traceability - Defines the tests needed and relates those
tests to the requirements to be validated.
3. Estimating - Determines the amount of resources required to accomplish
the planned activities.
4. Scheduling - Establishes milestones for completing the testing effort.

5. Staffing - Selecting the size and competency of staff needed to achieve


the test plan objectives.
6. Approach - Methods, tools, and techniques used to accomplish test
objectives.
7. Test Check Procedures (i.e., test quality control) - Set of procedures
based on the test plan and test design, incorporating test cases that ensure
that tests are performed correctly and completely.
C. Post-Planning Activities
1. Change Management - Modifies and controls the plan in relationship to
actual progress and scope of the system development.
2. Versioning (change control/change management/configuration
management) - Methods to control, monitor, and achieve change.
3. Test Design
A. Design Preparation
1. Test Bed/Test Lab - Adaptation or development of the approach to be
used for test design and test execution.
2. Test Coverage - Adaptation of the coverage objectives in the test plan to
specific system components.
B. Design Execution
1. Specifications - Creation of test design requirements, including purpose,
preparation and usage.
2. Cases - Development of test objectives, including techniques and
approaches for validation of the product. Determination of the expected
result for each test case.
3. Scripts - Documentation of the steps to be performed in testing, focusing
on the purpose and preparation of procedures; emphasizing entrance and
exit criteria.
4. Data - Development of test inputs, use of data generation tools.
Determination of the data set or sub-set needed to ensure a comprehensive
test of the system. The ability to determine data that suits boundary value
analysis and stress testing requirements.

4. Performing Tests
A. Execute Tests - Perform the activities necessary to execute tests in accordance
with the test plan and test design (including setting up tests, preparing data
base(s), obtaining technical support, and scheduling resources).
B. Compare Actual versus Expected Results - Determine if the actual results met
expectations (note: comparisons may be automated).
C. Test Log - Logging tests in a desired form. This includes incidents not related to
testing, but which still stop testing from occurring.
D. Record Discrepancies - Documenting defects as they happen including
supporting evidence.
5. Defect Tracking and Correction

A. Defect Tracking
1. Defect Recording - Defect recording is used to describe and quantify
deviations from requirements.
2. Defect Reporting - Report the status of defects; including severity and
location.
3. Defect Tracking - Monitoring defects from the time of recording until
satisfactory resolution has been determined.
B. Testing Defect Correction
1. Validation - Evaluating changed code and associated documentation at
the end of the change process to ensure compliance with software
requirements.
2. Regression Testing - Testing the whole product to ensure that unchanged
functionality performs as it did prior to implementing a change.
3. Verification - Reviewing requirements, design, and associated
documentation to ensure they are updated correctly as a result of a defect
correction.
6. Acceptance Testing
A. Concepts of Acceptance Testing - Acceptance testing is a formal testing process
conducted under the direction of the software users to determine if the
operational software system meets their needs, and is usable by their staff.
B. Roles and Responsibilities - The software testers need to work with users in
developing an effective acceptance plan, and to ensure the plan is properly
integrated into the overall test plan.
C. Acceptance Test Process - The acceptance test process should incorporate these
phases:
1. Define the acceptance test criteria
2. Develop an acceptance test plan
3. Execute the acceptance test plan

7. Status of Testing
Metrics specific to testing include data collected regarding testing, defect tracking,
and software performance. Use quantitative measures and metrics to manage the
planning, execution, and reporting of software testing, with focus on whether goals
are being reached.
A. Test Completion Criteria
1. Code Coverage - Purpose, methods, and test coverage tools used for
monitoring the execution of software and reporting on the degree of
coverage at the statement, branch, or path level.
2. Requirement Coverage - Monitoring and reporting on the number of
requirements exercised, and/or tested to be correctly implemented.
B. Test Metrics
1. Metrics Unique to Test - Includes metrics such as Defect Removal
Efficiency, Defect Density, and Mean Time to Last Failure.
2. Complexity Measurements - Quantitative values accumulated by a
predetermined method, which measure the complexity of a software
product.
3. Size Measurements - Methods primarily developed for measuring the
software size of information systems, such as lines of code, function
points, and tokens. These are also effective in measuring software testing
productivity.
4. Defect Measurements - Values associated with numbers or types of
defects, usually related to system size, such as 'defects/1000 lines of code'
or 'defects/100 function points'.
5. Product Measures - Measures of a product's attributes such as
performance, reliability, failure, and usability.
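As a sketch of how two of these metrics are computed (the formulas for Defect Removal Efficiency and defect density are the standard ones; the sample figures are invented):

```python
# Illustrative calculations for two common test metrics.
# The input figures below are made up for the example.

def defect_removal_efficiency(found_in_test: int, found_after_release: int) -> float:
    """DRE = defects removed before release / total defects found."""
    total = found_in_test + found_after_release
    return found_in_test / total

def defect_density(defects: int, kloc: float) -> float:
    """Defects per 1000 lines of code (KLOC)."""
    return defects / kloc

dre = defect_removal_efficiency(found_in_test=95, found_after_release=5)  # 0.95
density = defect_density(defects=120, kloc=60.0)                          # 2.0 defects/KLOC

print(f"DRE: {dre:.0%}, density: {density} defects/KLOC")
```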

8. Test Reporting
A. Reporting Tools - Use of word processing, database, defect tracking, and
graphic tools to prepare test reports.
B. Test Report Standards - Defining the components that should be included in a
test report.
C. Statistical Analysis - Ability to draw statistically valid conclusions from
quantitative test results.

Black Box Testing and Its Methods


Black Box Testing: Also called behavioural testing. It is function-based testing,
which focuses on testing the functional requirements. Test cases are designed from
the viewpoint that a particular set of input conditions will produce a particular
set of output values. It is mostly done at later stages of testing.
Black Box Testing Methods: Following are the various ways in which Black Box
testing is carried out:

1. Graph-Based Testing: Identifying the objects and the relationships between them,
then testing whether the relationships behave as expected. Graphs are used to
prepare the test cases, in which objects are represented as nodes and relationships
as links.
2. Equivalence Class Testing: The input domain is partitioned into different classes,
and test data is selected from each class. The equivalence classes represent the
valid and invalid states for input conditions.
3. Boundary Value Testing: Carried out by selecting test cases that exercise bounding
values. For example, if a field accepts values in the range (a to d), test the
behaviour at a and d.
4. Comparison Testing: Also called back-to-back testing. In this method, different
software teams build a product from the same specification but with different
technologies and methodologies. All the versions are then tested and their output
is compared. It is not a foolproof method: if all versions give the same incorrect
result, the defect goes undetected.
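The equivalence class and boundary value methods can be sketched in a few lines. The valid range 18..65 below is an invented requirement, used only to show how the test data is derived:

```python
# Sketch: deriving black-box test data for a field that accepts
# integers in the valid range 18..65 (an invented requirement).

VALID_MIN, VALID_MAX = 18, 65

# Equivalence classes: one representative value per class is enough.
equivalence_classes = {
    "below range (invalid)": 10,
    "in range (valid)": 40,
    "above range (invalid)": 90,
}

# Boundary value analysis: each boundary and its neighbours.
boundary_values = [VALID_MIN - 1, VALID_MIN, VALID_MIN + 1,
                   VALID_MAX - 1, VALID_MAX, VALID_MAX + 1]

def is_valid(age: int) -> bool:
    """The validation rule under test."""
    return VALID_MIN <= age <= VALID_MAX

for label, value in equivalence_classes.items():
    print(label, value, is_valid(value))

for value in boundary_values:
    print("boundary", value, is_valid(value))
```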

[Figure: black-box testing model - input test data (including inputs causing
anomalous behaviour) is fed to the system; the output test results include
outputs (Oe) which reveal the presence of defects.]

The Test Cycle

Analyze Phase

Identification of overall Test Approach through discussion with project


stakeholders

Review of existing project documents

Knowledge transfer from business users on overall business functionality and
criticality

[Figure: the test cycle - Analyze, Design, Implement, Execute.]

Analyze: Testing Terms

Test basis:

All documents from which the requirements of an information system can be
inferred. It is the documentation on which the test is based.

Design Phase


Preparation of Test Strategy and Test Plans


Preparation of templates for UT, FT, IT, RT, PT
Sign-off on templates
Review of Definition Study output documents to check for adequacy and
testability

Design: Testing Terms

Test Strategy: The distribution of the test effort and coverage aimed at finding the
most important defects as soon as possible

Test Plan: Description of the test project, activities and planning

Design Phase: Templates

A Test Case Template must contain:

Test Case#/Title
Scenario/Description:
Prerequisites
Access Path
Test Case Author/date
Test Case Actor(s)/Role
Environment
Step #/Description
Expected Results
Actual Results
Pass/Fail
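Teams sometimes hold this template as structured data so test cases can be validated and reported on consistently. The sketch below mirrors the fields above; the record values are invented:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A test case record mirroring the template fields above."""
    case_id: str
    title: str
    description: str
    prerequisites: list = field(default_factory=list)
    access_path: str = ""
    author: str = ""
    actors: list = field(default_factory=list)
    environment: str = ""
    steps: list = field(default_factory=list)   # (step #, description, expected result)
    actual_results: str = ""
    status: str = "Not Run"                     # Pass / Fail / Not Run

# An invented sample record.
tc = TestCase(
    case_id="TC-001",
    title="Create Service Request",
    description="Verify a service request can be created for an account",
    steps=[(1, "Navigate to the Service screen", "Service view is displayed"),
           (2, "Click New and save a record", "Record is saved with a new SR number")],
)
print(tc.case_id, tc.status)
```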

A Review Log must contain:

Test Case#/Title with version#
Reviewer's name/date
Comment #
Location of error
Comment Description
Severity - H/M/L
Class - Missing/Extra/Wrong
Phase to which the artifact under review belongs - Design/Build/Test
Status - Open/Closed/Under Review

Implement Phase


Test Case preparation, review and sign-off


Updating test cases based on changes or modifications
Preparation of Test Data
Checking the Test Environment for readiness

Implement: Test Terms

Test case: A description of a test to be executed, focused on a specific aim


Test Script: Sequence of related actions and checks related to a test case
Test Scenario: A scheme or context for execution of tests
Test action: An action in a previously defined start situation that produces a result.
It is part of a test case
Test object: The Application under test
Testware: All output documents generated during the test phase

Designing Test Cases: Terms

Logical Test case: A series of situations to be tested from start to finish for
running the test object.
Physical Test case: Detailed description of the logical test case containing a
starting situation, actions and result checks.

Execute Phase


Test Execution
Defect management: Logging, tracking, retesting
Test result reporting: Pass/Fail report

Execute Phase: Templates


A Defect Log must contain:

Functional Area Impacted


Screen Name
View Name
Applet Name
Ref. Test Case#/Title
Description
Test data supplied
Login used
Environment on which tested
Raised by - name/date
Severity - Showstopper/H/M/L
Priority - H/M/L
Status - Open/Closed/Reopened/Ready for Retest
Resolved by - name/date

Execute: Test Tools

Test Management Tools:


Mercury Test Director, Rational Test Manager, Silk Central Test Manager
Functional Test Automation Tools:
Mercury Quick Test Pro, Rational Functional Tester, Silk Test
Performance Test Automation Tools:
Mercury Load Runner, Rational Performance Tester, Silk Performer
Configuration Management Tools:
Rational ClearCase, VSS, PVCS
Defect Management Tools:
Mercury Quality Center, Rational ClearQuest

Siebel Testing
Two main areas of focus:

Functionality
Does the application function properly?
Performance
Does the application perform properly under load/stress?

This can be achieved through various levels of testing:

Unit Test (UT)


Integration Test (IT)
Functional Test (FT)
System Integration Test (SIT)
Regression Test (RT)
Performance Test (PT)
User Acceptance Test (UAT)

Unit Testing (UT)

Unit Testing is the first level of testing


Occurs following development of a Siebel component or an interface.
Unit Test uses Low Level Design as the guide to test the control paths within the
boundary of the module
Each developer prepares UT scripts, unit tests the code to ensure that it functions,
within the constraints of the environment, as described in the Technical
specification.
White-box testing techniques are used
UT includes testing of:
Configuration (GUI + Functional flow testing)
Interfaces

Interface Testing (IT)

This refers to interface integration testing


Includes testing of new real-time and batch Interfaces.
Interfaces are specially tested at the data level.
Includes Siebel inbound and outbound interfaces
For unidirectional interfaces: Each interface is tested by supplying the data in the
required format at the source system. The output is checked at the destination
system for correctness and accuracy.
For bidirectional interfaces: the same procedure is treated as a roundtrip of data
from the source -> destination -> source
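A bidirectional interface check reduces to a roundtrip assertion: the data returned to the source must match what was sent. In this sketch, JSON serialization stands in for the outbound and inbound legs, and the record fields are invented:

```python
import json

# Sketch: a bidirectional interface test as a roundtrip assertion.
# json.dumps/json.loads stand in for the outbound and inbound legs;
# a real test would pass data from source -> destination -> source.

source_record = {"account": "ACME Corp", "sr_number": "1-1001", "status": "Open"}

outbound_payload = json.dumps(source_record)    # source -> destination
returned_record = json.loads(outbound_payload)  # destination -> source

# The roundtrip must preserve the data exactly.
assert returned_record == source_record
print("roundtrip OK")
```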

Functional Testing (FT)

Functional Testing is initiated following compilation and check-in of the unit-tested work package.
Comprehensive test of configurations, modifications or customizations made to a
component.
Includes UI testing in case of any modifications to the GUI.
It does not focus on end-to-end system functionality.
Follows Black box testing techniques

System Integration Testing (SIT)

Integrated testing of functionality and interfaces together

Functionality will be tested end to end and Interfaces will be tested implicitly
through functionality

Includes testing of reports and e-mail functionality, if any

Performance Testing (PT)


PT ensures the run-time performance of the software within the context of an
integrated system. It includes:

Volume Testing: attempts to verify the maximum resources designed for a
system.
Stress Testing: demands resources in abnormal quantity, frequency, or
volume, to the point where the system breaks down
Recovery Testing: forces the software to fail in a variety of ways and
verifies that recovery is properly performed.
Security Testing: Verifies the protection mechanisms built into the system

Regression testing (RT)

RT is done to ensure existing functionality is not impacted by:


changes or modifications or
new functionality or
defect fixes introduced into the system
Test cases from previous releases are re-run

User Acceptance Testing (UAT)

Very similar to SIT in content

However, it is conducted by the Business

Testing is done from the perspective of End Users

Entry/Exit Criteria for the Testing Phase


Entry Criteria: What's required to
begin the test phase?
Build Tasks are complete
Configuration / development /
unit test is complete.
Functional Documents are available
Full Compiled Siebel repository is
available
The latest SRF file loaded in the test
environment
Test scripts are developed,
reviewed and signed off

Exit Criteria: When do we say the test phase is complete?
All candidate test cases have been
executed
Pass / Fail criteria are verified and
validated
Outputs/Expected Results conform to the
functionality specified
Defects reported and prioritized by
severity
Fixed defects retested for functionality
Regression testing performed to validate
original functional requirement
Handover of test scripts to the next phase

White Box Testing

UT will utilize White Box Testing techniques


Refers to conducting tests by knowing the internal workings of a product
Includes exercising:
Independent paths of a module
All logical decisions on their true and false sides
Loops at their boundaries
Internal Data Structures

White Box Testing techniques

Local Data Structure: Ensures that data stored temporarily maintains its
integrity during all steps in an algorithm's execution

Statement testing

Branch testing - tested by probes inserted at points in the program
that represent arcs from branch points in the flow graph
Conditional testing - each clause in every condition is forced to take
each of its possible values in combination with those of the other clauses
Expression testing: This requires that every expression assume a variety of
values during a test in such a way that no expression can be replaced by a
simpler expression and still pass the test
Error Handling including exceptional errors
Boundary Conditions: Ensure that the module operates properly at
boundaries established to limit or restrict processing
Independent Paths: Data is selected to ensure that all paths of the program
have been executed
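A minimal illustration of branch, condition, boundary, and error-handling coverage: classify() is an invented function under test, and each check below is chosen from its internal structure rather than from its specification:

```python
# Sketch: white-box test cases derived from the code's structure.
# classify() is an invented function with one compound condition,
# one error-handling branch, and a boundary at 100.

def classify(amount: float, is_member: bool) -> str:
    if amount > 100 and is_member:      # compound condition
        return "discount"
    if amount < 0:                      # error-handling branch
        raise ValueError("negative amount")
    return "full price"

# Branch testing: each decision is taken both ways.
assert classify(200, True) == "discount"      # first if: True
assert classify(50, True) == "full price"     # first if: False

# Condition testing: each clause takes both truth values.
assert classify(200, False) == "full price"   # amount > 100 True, is_member False
assert classify(50, False) == "full price"    # both clauses False

# Boundary condition: the limit itself (100 is not > 100).
assert classify(100, True) == "full price"

# Error handling: the exceptional path is forced to execute.
try:
    classify(-1, False)
except ValueError:
    print("error path covered")
```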

Black Box Testing

FT, IT, SIT, and RT will use Black Box Testing techniques

Refers to conducting tests by knowing the specified function that a product has
been designed to perform

Includes checking for:


Incorrect or missing functions
Interface errors
Initialization and termination errors
Positive and negative test conditions

Black Box Testing Techniques

Boundary Value Analysis: Test the boundary value itself and a significant value
on either side of the boundary
State Transition: Test the states the system can be in, the transitions between those
states, the actions that cause the transitions, and the actions that may result from
the transitions.
Equivalence Partitioning: Identify the partition of values and select representative
values from within this partition to test
Thread testing: test the business logic in the same way a user or an operator might
interact with the system during its normal use
Error Handling: determines the ability of the system to properly process incorrect
transactions.
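A state transition test can be sketched as a transition table plus checks for both legal paths and rejected transitions. The service request status model below is invented for the example:

```python
# Sketch: state transition testing against an invented service
# request status model (Open -> Assigned -> Closed, with reopen).

TRANSITIONS = {
    ("Open", "assign"): "Assigned",
    ("Assigned", "resolve"): "Closed",
    ("Closed", "reopen"): "Open",
}

def apply(state: str, action: str) -> str:
    """Return the next state, or raise for an illegal transition."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"illegal transition: {action} from {state}")

# Valid path: every defined transition is exercised once.
state = "Open"
for action in ("assign", "resolve", "reopen"):
    state = apply(state, action)
assert state == "Open"

# Invalid transition: the system must reject it (error handling).
try:
    apply("Open", "resolve")
except ValueError as e:
    print("rejected:", e)
```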

Testware

Test Strategy & Test Plan


Test Cases & Test Scripts
Test Case Review Logs
Test Data Files
Test Problem / Defect Reports
Test Status Reports
Final Test Report

Structured Testing challenges

Time Pressure
Lack of Planning
Insufficient resources
Unclear specifications & documentation
Lack of management and control
Conflicting interests

Making Test a First Class Citizen

Requirements: UCBT**
Processes: Using RUP
Roadmap: Using Siebel eRoadmap
Architecture: Testing SOA
Risks: using T-MAP
Skills: Developing foundational skills

UCBT: Use Case Based Testing


RUP: Rational Unified Process
SOA: Service Oriented Architecture
T-Map: Test Management Approach

Use Cases for UCBT

What is a Use Case?

A Use Case defines a sequence of steps which will yield a result of
measurable value to an actor of the system.
Describes how an actor interacts with a system
Made up of a number of scenarios which describe a sequence of steps to
be performed.

An Actor is a person or another system which interacts with the system.

Primary actor - the actor which initiates a use case to satisfy a goal
Secondary actor - an actor which collaborates to support the completion
of the Use Case.

UCBT

Use Cases are used as input by the test team


Use Cases contain name, description, flow of events, special requirements, preconditions and post-conditions
Introduces clarity and traceability since the sequence of actions is documented in the
UC
Sample Use case: Course Registration
Actor: Student, course registration database
TC scenarios based on UC
Unidentified Student
Student Quits
Registration closed
Course pre-requisite not fulfilled

Advantages of Use-case based modeling

Focus on what, not how - allows a focus on requirements by taking the
external user's view of a system.
Easily understood by all stakeholders
Allows disciplined approach
Provides for prioritization by allowing use cases to be rated by importance and
assigned to development iterations
Helps to define the scope of the project
Acts as input to both Development and Test teams

Using RUP
What is RUP?

It is a software engineering process which has:


A well-defined underlying set of philosophies and practices for successful
software development
A process model and associated content library.

RUP Processes
RUP processes are organized as follows:

For every Process the following elements are defined:


Role(s)
Activities
Tools
Artifacts
Checkpoints
Templates
Reports

Test Activities according to RUP

Siebel eRoadmap

Define goals -> Discover detailed business requirements -> Design solution ->
Perform configuration -> Validate application -> Deploy application ->
Monitor progress

Siebel Testing Process Overview

Testing SOA
The Overall goal of testing remains the same
However, some of the risks have to do with
Loose Coupling of services
Heterogeneous technologies within a solution
Lack of total control over all elements of a solution
Example: Third Party services

Risk Mitigation strategies

Follow disciplined methods of test case scripting and documentation


Build test cases that can run on different environments
Get a thorough understanding of all the services involved
Determine the right test cases and provide adequate test coverage since SOA is
flexible

Cornerstones of T-Map: Test Management Approach

Lifecycle

First and foremost cornerstone


Common thread running through the test process
Within a test lifecycle, we can:
Determine the phases
Map the sequence of activities
Identify interdependencies


Lifecycle Phases
Planning and Control (P&C)
Preparation (P)
Specification (S)
Execution (E)
Completion (C)


Techniques

A Test Technique is a system of actions aimed at providing a universally
applicable method of creating a test product

Organization

Test organization is the representation of effective relations between test
functions, test facilities and test activities to allow good quality in the given time

Organization Areas

Structural Test Organization: includes preconditions and regulations for
people, resources and methods to achieve strategic goals.
Test Management and Control: of test process, test infrastructure and test
deliverables
Staff and Training: adequate staff and suitable skills for optimum results
Structuring of the Test Process

Infrastructure

Includes all facilities and resources required for structured testing namely:

Test Environment
Test Tools
Office Environment

How can you test better?

See the big picture, but keep an eye on the details


Actively ask questions and document answers
Highlight risks and issues
Work on your foundational skills
Client Focus
Communication skills - written and spoken
Negotiation
Managing interpersonal relationships
Resolve pain-points and learn from experience
Learn functionality, processes, techniques and tools

Introducing Siebel Applications

Siebel Customer Relationship Management (CRM)


Enables you to manage all customer touchpoints through email, telephone, fax,
the Web, or in the field
Synchronizes all touchpoints through one central information repository,
one database, one tool set, and one architecture
Provides your customers with a consistent view of the company and your
company with a consistent view of the customers
Includes installed and hosted applications to align with your current and future
business requirements
Extends your CRM solution to everyone in your employee and partner
organizations

Siebel CRM Enterprise


An installed solution that provides an integrated product suite with functionality
tailored to more than 20 specific industries

Siebel CRM Professional Edition

An installed solution designed for companies with fewer than 100 users

Provides a family of multichannel sales, customer service, and marketing applications

Siebel CRM OnDemand


A hosted solution that provides core functionality to casual users, business
partners, and remote divisions
Available on a per-user basis through a monthly subscription

Siebel Business Entities

Accounts
Contacts
Opportunities
Orders
Service requests
Activities
Assets

Accounts
Are businesses external to your company

Represent a current or potential client, a business partner, or a competitor

Contacts
Are people with whom you do business
Have the following characteristics
A name
A job title
An email address and phone number

Opportunities
Are potential revenue-generating events
Have the following characteristics
A possible association with an account
An identified potential revenue
A probability of completion
A close date

Orders
Are products or services purchased by your customers
Have the following characteristics
An order number
A status and priority
An associated account

Service Requests
Are requests from customers for information or assistance with a problem related
to products or services purchased from your company
Have the following characteristics
A status
A severity level
A priority level

Activities
Are specific tasks or events to be completed
Have the following characteristics
A start date and due date
A priority level
Assigned employees

Assets
Are instances of purchased products
Have the following characteristics
An asset number
A product and part number
A status level

Types of Siebel Enterprise Applications


Employee applications
Are used by internal employees
Examples include:
Siebel Call Center
Siebel Sales

Customer and partner applications


Are used by customers and partners
Examples include:
Siebel Customer Order Management
Siebel Partner Relationship Management (PRM)

Types of User Interfaces (UI)


High-interactivity (HI) mode
Is available for employee applications, supporting highly interactive
enterprise users
Requires Internet Explorer 5.5 SP2 or 6.0 with SP1 and supports
additional usability features such as drag-and-drop for setting column
widths and positions

Standard-interactivity (SI) mode


Is available for customer applications
Uses a wide variety of browsers and behaves like traditional Web
applications, requiring frequent page refreshes

Employee Application: Siebel Sales

Allows your sales force to manage accounts, sales opportunities, and contacts

Helps identify top opportunities and specific actions to better manage


those opportunities to a more rapid closure

Siebel Sales
Opportunities view

Employee Application: Siebel Call Center


Enables customer service and telesales representatives to:
Provide customer support
Generate customer loyalty
Increase revenues through effective campaign execution, cross-selling,
and up-selling

Customer Application: Sales Catalog

Allows companies to develop, manage, and deliver dynamic product catalogs across all
customer channels

Customer Application: Siebel eSales


Allows your customers to purchase products over the Web
Includes an interactive product catalog, search and product comparison
mechanisms, and online ordering capabilities

eSales Catalog
screen

Advisor

Browse products

Quick Add to
shopping cart

Partner Application: Siebel Partner Portal


Allows partners to communicate, collaborate, and conduct business with a Web-based interface
Includes product information, training, sales tools, transaction data, and
performance analysis reports

Partner Portal
Opportunities screen

Implementing Siebel Applications

Successful Siebel Product Implementations


Are achieved by project teams that:
Adhere to a standardized implementation methodology that uses a multi-phased deployment approach
Minimize configuration by addressing user requirements with Siebel
application functionality and business processes

Use a Standardized Implementation Methodology


To ensure the organization's business processes are supported and requirements
are met by the Siebel application
To ensure the implementation activities are based on industry best practices
Identify metrics to establish return on investment (ROI)
Develop clear acceptance criteria
Identify project scope on key business drivers
Define project roles and responsibilities

Implementation Methodology Characteristics


Your project methodology should:
Define each implementation stage, the deliverables for each stage, and the
time frames for the stages
Identify who is responsible for which components of the plan and how the
plan is to be implemented

For example, the Siebel eRoadmap methodology:


Is a phased project rollout method
Helps identify and address key strategic and tactical issues
Helps develop an outline for the progress of the project
Prescribes activity stages that are iterative in nature
Enables the implementation team to bring the system up in phases so
employees and customers can begin to use it quickly

Advantages of a Multi-Phased Approach


Allows for manageable project size and scope
Helps achieve implementation benefits sooner
Applies knowledge and experience from earlier phases

[Diagram: the phased rollout runs Plan, then Define, Discover, Design, Configure, Validate, and Deploy repeated for each of multiple implementation phases, ending with Sustain]

Siebel Business Processes


Are modeled on industry-specific best practices
Provide a basis for the functionality embedded within the entire suite of
Siebel applications
Are defined for all Siebel applications
Depict the work flow typically followed by users or systems to
accommodate the standard application
For example, Manage Order business process
Helps create, validate, and manage the order across the entire order
lifecycle

Siebel Business Process Solutions


Identify and explain the business processes supported by Siebel applications
Can be used as an aid during the discovery, design, and deployment phases of a
Siebel implementation
Are accessed from the Siebel Business Process Solutions Library (BPSL)
Can be obtained by contacting your District Manager, TAM, or
Engagement Manager

Business Process Models


Provide step-by-step work details for individual roles
Steps correspond to applicable Siebel functional areas
Are useful during implementation because they:
Encapsulate best practices and leverage Siebel application standard
functionality
Demonstrate the flow of the user experience through the application
Provide a basis for creating test scripts

Example BP: Create Order

Siebel Product Implementations


Should be:
Based on supporting new and existing business processes with a broad set
of predefined functionality
Existing Siebel capabilities can satisfy a wide range of
implementation requirements
Focused on leveraging existing product capabilities and minimizing
custom configuration

Should not be:


Approached as a software development project
Initiated as an opportunity to gather and develop software features based
on user wish lists
Attempted until you are familiar with the standard functionality for all
entities being implemented

Two Ways to Satisfy User Requirements


Modify the purchased application through configuration to:
Match current legacy systems and processes
Meet all pertinent user requests that drive day-to-day business operations

Modify current business processes to:


Leverage application functionality
Align with best-practice business processes

Impacts of the Implementation Approaches


Modifying the application
Should only be considered as a last resort
Should not exceed 20% of the purchased application functionality
Modifying the business processes
Requires minimal modifications to application
Increases productivity and reduces implementation time

The Siebel Data Model


Defines how the data used by Siebel applications is stored in a standard third-party relational database
Specifies the tables and relationships
Is designed to support the data requirements across Siebel eBusiness Applications

Understanding the Data Model


The pieces to understand:
Tables
Columns
Indexes
User keys
Primary and foreign keys

Siebel Data
Is organized and stored in normalized tables in a relational database

Each table has multiple columns storing single value data


The data schema is organized to eliminate repeated storage of data

Table: S_PROD_INT
Columns (store single values only): ROW_ID | NAME | PART_NUM | UOM_CD

Primary Key
Is a column that uniquely identifies each row in a table

ROW_ID serves as the primary key for Siebel database tables

S_PROD_INT: ROW_ID (Primary Key) | NAME | PART_NUM | UOM_CD

ROW_ID
Is a column in every table
Contains a Siebel application-generated identifier that is unique across all
tables and mobile users
Is the means by which Siebel applications maintain referential integrity
Database referential integrity constraints are not used
Is managed by Siebel applications and must not be modified by users

Can be viewed by right-clicking the record or by navigating to Help > About Record

Tables
Approximately 3,000 tables in the database

Three major types: Data, Interface, and Repository

Data: S_PROD_INT (ROW_ID | NAME | PART_NUM | UOM_CD)
Interface: EIM_PROD_INT (ROW_ID | NAME | PART_NUM | UOM_CD)
Repository: S_TABLE (ROW_ID | NAME | DESC_TEXT | ALIAS | TYPE)

Data Tables
Store the user data
Business data
Administrative data
Seed data

Are populated and updated:


By the users through the Siebel eBusiness Applications
By server processes such as: Enterprise Integration Manager (EIM) and
Assignment Manager

Have names prefixed with S_


Are documented in the Siebel Data Model Reference

Prominent Data Tables


Prominent tables storing data for the major business entities

Internal Product: S_PROD_INT (ROW_ID | NAME | PART_NUM | UOM_CD)
Contact: S_CONTACT (ROW_ID | LAST_NAME | FST_NAME | MID_NAME)
Service Request: S_SRV_REQ (ROW_ID | SR_NUM | DESC_TEXT | OWNER_EMP_ID | RESOLUTION_CD)
Opportunity: S_OPTY (ROW_ID | NAME | BDGT_AMT | PROG_NAME | STG_NAME)

Interface Tables
Are a staging area for importing and exporting data
Are used only by the Enterprise Integration Manager server component
Are named with prefix EIM_

Repository Tables
Contain the object definitions that specify one or more Siebel applications
Client application configuration
UI, business, and object definitions
Mappings used for importing and exporting data
Rules for transferring data to mobile clients
Are updated using Siebel Tools

Columns
Each table has multiple columns to store user and system data
Defined by the Column child object definitions
Columns determine the data that can be stored in that table

Column Properties
Important properties of columns
Properties of existing tables and columns should not be edited
Understanding these properties is important
Determines the size and type of data that can be stored in a column
Limits proposed modifications to a standard application

Identifies the type and size of data

System Columns
Exist for all tables to store system data
Are maintained by Siebel applications and tasks
Can be viewed by right-clicking the record or from Help > About Record

User Key
Specifies columns that must contain a unique set of values
Prevents users from entering duplicate records

Is used to determine the uniqueness of records when importing and integrating data
Is predefined and cannot be changed
Not all columns in a user key may be required
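The uniqueness check that a user key enforces can be sketched as follows. This is a toy simulation in plain JavaScript, not Siebel code: the table, its user-key columns (NAME, PART_NUM), and the insert-versus-update rule are illustrative stand-ins for the matching that EIM performs against real user keys.

```javascript
// Sketch: deciding insert vs. update during an import using a user key.
// The user key for this hypothetical table is (NAME, PART_NUM); Siebel's
// real EIM logic is more involved -- this only illustrates the idea.
function importRow(table, userKeyCols, row) {
  const keyOf = (r) => userKeyCols.map((c) => r[c] ?? "").join("|");
  const existing = table.find((r) => keyOf(r) === keyOf(row));
  if (existing) {
    Object.assign(existing, row);   // user key matched: update in place
    return "updated";
  }
  table.push({ ...row });           // no match: insert as a new record
  return "inserted";
}
```

Importing a row whose user-key values already exist updates the record instead of creating a duplicate; a row with a new key combination is inserted.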

Index
Is a separate data structure that stores a data value for a column and a pointer to
the corresponding row
Are used to retrieve and sort data rapidly
Can be created or upgraded through Siebel Tools
Should be inspected to assess performance issues for query and sort operations

_P: index based on the primary key
_U: index based on a user key
Column sequence affects the sort order in business components

Relationships Between Tables


Siebel database tables are related to one another
Understanding the relationships between tables is important to implementing your
business logic

Product Line: S_PROD_LN (ROW_ID | NAME | DESC_TEXT)
   | M:M relationship
Internal Product: S_PROD_INT (ROW_ID | NAME | PART_NUM | UOM_CD)
   | 1:M relationship
Asset: S_ASSET (ROW_ID | ASSET_NUM | MFGD_DT | SERIAL_NUM)

1:M Relationships
Are captured using foreign key table columns in the table on the many side of the
relationship

S_PROD_INT: ROW_ID | NAME | PART_NUM | UOM_CD
S_ASSET: ROW_ID | ASSET_NUM | MFGD_DT | PROD_ID
PROD_ID is the foreign key column for the 1:M Product-Asset relationship

Foreign Key Table Columns


Are columns in a table that refer to the primary key column of a related (parent)
table
Are named with suffix _ID
Capture relationships between Siebel database tables
Are maintained by Siebel applications and tasks to ensure referential integrity and
should never be updated directly using SQL

Foreign Key Table

Finding Foreign Keys for 1:M Relationships


Inspect the Foreign Key Table property in a Column object definition to determine the
column that serves as the foreign key

Foreign key column for the 1:M Asset-Product relationship

M:M Relationships
Are captured using foreign key table columns in a third table called the
intersection table
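The two relationship patterns above can be sketched with plain JavaScript objects standing in for rows. The table and column names follow the diagrams in this section, but the rows and helper functions are invented for illustration; Siebel resolves these joins internally and the foreign keys must never be manipulated directly.

```javascript
// Sketch: resolving a 1:M relationship (product -> assets via PROD_ID)
// and an M:M relationship (product line <-> product via an intersection
// table). Row data is invented; only the join pattern mirrors Siebel.
const products = [{ ROW_ID: "1-P1", NAME: "Router" }];
const assets = [
  { ROW_ID: "1-A1", ASSET_NUM: "A-001", PROD_ID: "1-P1" }, // FK -> product
  { ROW_ID: "1-A2", ASSET_NUM: "A-002", PROD_ID: "1-P1" },
];
// Each intersection-table row carries one FK to each side of the M:M.
const prodLnProd = [{ PROD_LN_ID: "1-L1", PROD_ID: "1-P1" }];

const assetsForProduct = (prodId) => assets.filter((a) => a.PROD_ID === prodId);
const productsInLine = (lineId) =>
  prodLnProd
    .filter((x) => x.PROD_LN_ID === lineId)
    .map((x) => products.find((p) => p.ROW_ID === x.PROD_ID));
```

The 1:M side needs only the child's foreign key; the M:M side needs the extra intersection rows because neither base table can hold a single-valued foreign key to the other.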

1:1 Extension Table


Is a special table that has a 1:1 relationship with a base table
Foreign key for the relationship:
Is located in the extension table
Is named PAR_ROW_ID

Provides additional columns for business components referencing the base table
A base and extension table can be considered as a single logical table

Base table: S_PROD_INT (ROW_ID | NAME | PART_NUM | UOM_CD)
Extension table: S_PROD_INT_X (ROW_ID | PAR_ROW_ID | ATTRIB_39); ATTRIB_39 stores the Stock Level field

Is used:
To provide flexibility for both Siebel engineering and customer use
To support multiple business components referencing the S_PARTY table
(discussed in the next module)

Standard 1:1 Extension Tables


Prebuilt for many major tables
Have the name of the base table with suffix _X
Contain 40-plus generic columns of varying types
Store additional fields for business components beyond those mapped to
the base table

1:M Extension Table


Is a special table for storing child data related to an existing parent table

Allows you to track entities that do not exist in the standard Siebel applications

S_CONTACT (ROW_ID | FST_NAME | LAST_NAME | EMAIL_ADDR)
S_CONTACT_XM (ROW_ID | PAR_ROW_ID | TYPE | NAME | ATTRIB_01)

Standard 1:M Extension Tables


Contain more than 20 predefined tables that have one-to-many relationships with
base tables

Has the name of the main table appended with _XM
Contains many predefined ATTRIB columns of varying types
NAME column stores the name of the child entity
PAR_ROW_ID column stores the foreign key to ROW_ID in the main table
TYPE column distinguishes between different types of data in the table
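A minimal sketch of how a 1:M extension table holds different child entities for one parent. The row data is hypothetical; the column roles (PAR_ROW_ID, TYPE, NAME, ATTRIB_01) are the ones described above.

```javascript
// Sketch: two different custom child entities for one contact stored in
// the same 1:M extension table. PAR_ROW_ID points at the parent contact;
// TYPE keeps the different child entities apart.
const contactXM = [
  { ROW_ID: "1-X1", PAR_ROW_ID: "1-C1", TYPE: "Hobby",   NAME: "Sailing", ATTRIB_01: "Weekend" },
  { ROW_ID: "1-X2", PAR_ROW_ID: "1-C1", TYPE: "Vehicle", NAME: "Sedan",   ATTRIB_01: "2006" },
];

// A business component mapped to this table filters on PAR_ROW_ID + TYPE.
function childRows(parId, type) {
  return contactXM.filter((r) => r.PAR_ROW_ID === parId && r.TYPE === type);
}
```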

Overview of Scripting
Siebel Scripting Terms

Basic elements involved in scripting for Siebel applications

Object Type: Siebel-defined structure that groups entities into categories with common properties and methods
Object Definition: An instance of an object type that specifies values for properties and scripts for event handlers
Property: Siebel-defined characteristic of an object type
Event: User or system action in a Siebel application
Event Handler: User-defined script for an object definition that responds to an event
Method: Routine that causes an object instance to behave in a certain way
Inheritance: Deriving a property from a parent object
Script Editor: The user interface for entering and editing event handler scripts

Siebel Object Model


The Siebel object interface Application Programming Interface (API) exposes
four object types
Application
Applet
Business Component
Business Service

Siebel Scripts
Are procedures that enable configuration of Siebel applications to extend standard
behavior
Are added using the Script Editor or by importing text files
Use a common syntax and commands
Are written in one of the following
Siebel Visual Basic (SVB)
Similar to Visual Basic for Applications (VBA)
Used only in Windows platforms
Siebel eScript
Similar to JavaScript
Case-sensitive, including method names
Used in Windows and UNIX platforms
Are executed by event handlers when specified events occur
An event handler is the Siebel code that executes in response to the event
Example: When the user steps off a record being edited (the event),
the application responds by committing the record to the database
(the event handler)

An event is a user or system action to which the Siebel application might


respond
Select events are exposed through the Application Programming
Interface (API)
Examples: Updating a record, updating a field, and deleting a
record
Are added to exposed object type definitions
16K script size limit per event handler
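The event/event-handler relationship can be illustrated with a toy dispatcher in plain JavaScript. Real Siebel server scripts live in handlers such as BusComp_PreWriteRecord running inside the Object Manager; the dispatcher below is an invented stand-in, and only the ContinueOperation/CancelOperation return convention mirrors Siebel eScript.

```javascript
// Sketch: an event handler responding to an event. A toy dispatcher
// stands in for the Object Manager; in Siebel, the handler name itself
// (e.g. BusComp_PreWriteRecord) binds the script to the event.
const handlers = {};
function on(event, fn) { handlers[event] = fn; }
function raise(event, record) {
  return handlers[event] ? handlers[event](record) : "ContinueOperation";
}

// Handler: veto the commit if a required field is empty.
on("PreWriteRecord", (rec) => (rec.NAME ? "ContinueOperation" : "CancelOperation"));
```

A Pre event handler returning the cancel value stops the operation; returning the continue value lets the default behavior (committing the record) proceed.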

Browser Scripts
Execute in and are interpreted by the Web browser
Are written in eScript (JavaScript)
Interact with the Document Object Model (DOM)
Interact with the Siebel Object Model in the browser via the Browser Interaction
Manager
Enable developers to script the behavior of:
Siebel events
Browser events exposed by the DOM

Server Scripts
Execute within the Object Manager
Are written in eScript or Siebel Visual Basic
Enable developers to script the behavior of:
Business components via business component scripts
Business services via business service scripts
Applications via application scripts
Applets via applet Web scripts
Use event handlers for the various events exposed by the scripting model

Siebel Workflow Architecture


Here is how Siebel Workflow elements relate to each other

Workflow Process

Is a sequence of activities to be executed programmatically or triggered by a defined set of conditions
Example: When a new employee is added to the Siebel database, export the
information to the personnel application

Workflow Step
Is an operation that begins,
performs its function, and ends
Is dragged and dropped into a
workflow from the Palette

Start defines the beginning of a process


Decision defines branching in the workflow

Business Service performs custom or pre-built actions


Sub-process calls another process
Siebel Operation updates the Siebel database
Wait pauses processing
User Interact navigates to a Siebel view
Stop stops the process and displays a message
End defines the end of a process
Connector defines sequence, can define conditions associated with it
Exception defines a branch taken outside of the normal flow, when an exception occurs

Events: Workflow processes can be invoked from events in the Siebel application or from external systems. Events can pass context from the caller (the user session, for example) to a workflow using a row ID.
Rules: Flow control is based on rules. Rules can be based on business component data or on local variables, known as process properties. Rule expressions are defined using Siebel Query Language.
Actions: An action can perform database record operations or invoke Business Services. An action can also cause a pause in the workflow.
Data: Data is created or updated as the process executes. There are three types of data a workflow process operates on: business component data, process properties, and Siebel Common Object data. Think of process properties as local variables that are active during the process; they are used as inputs and outputs to the various steps in a process.

Events Invoking Siebel Workflow Processes

There are basically three ways to invoke a workflow process: through workflow policies, run-time events, and Siebel Tools object events.

Workflow policies allow you to define policies, or rules that can act as triggers to execute a
workflow process. The basic construct of a policy is a rule. If all the conditions of a rule are true,
then an action occurs. Typical usages of a workflow policy are EIM batch, EAI inserts and
updates, manual changes from the user interface, assignment manager assignments and Siebel
remote synchronization.

When deciding whether to implement a workflow policy versus a workflow process, there are some additional things to consider. Data coming into the Siebel application via the data layer, such as EIM and MQ channels, and events that cannot be captured via the business layer, are typically good candidates for a workflow policy. Some features not supported by workflow processes, such as eMail Consolidation, Duration, and Quantity, are also candidates for workflow policies. However, workflow processes provide a better platform for development and deployment, support complex comparison logic and flow management (if/then/else or case), leverage business layer logic, can invoke Business Services, and offer pause/stop/error-handling capability.

Workflow Policy
Monitors the server database
Invokes a workflow process
when a condition occurs
Runs the server components
Workflow Process Manager
(WfProcMgr) and Workflow
Process Batch Manager
(WfProcBatchMgr)
To invoke a workflow with
steps that call EAI adapters
from a workflow policy, create
a workflow policy action based on
the Run Integration Process program

Workflow Process and Runtime events ensure most events are captured at the business
layer logic level. However there are business scenarios where the Workflow Policy
Manager would be the best alternative. Workflow Policy Manager ensures business logic
is captured at the data layer of Siebel architecture. Some examples of such scenarios
would be when bulk data uploads happen via EIM or Data Quality cleaning happens in
the data layer.
When using Workflow Policy, the data layer business policy enforcement is
done via database triggers.
When a particular policy is violated, underlying triggers capture database events
into a Workflow Policy Manager's queuing table (S_ESCL_REQ).
Workflow Policy Manager component (Workflow Monitor Agent) polls this table
and processes requests by taking appropriate actions defined. In some cases,
actions might be to invoke Workflow Process Manager.
Workflow Policy Manager provides additional scalability by using an additional
component called Workflow Action Agent that can be executed on a different
application server within the Siebel Enterprise.
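The trigger-to-queue-to-agent flow just described can be sketched as a toy simulation. The queue array stands in for S_ESCL_REQ and the polling function for Workflow Monitor Agent; the policy and action shapes are invented for illustration and do not reflect the real schema.

```javascript
// Sketch: the trigger -> S_ESCL_REQ -> Workflow Monitor Agent flow.
// A database trigger enqueues a request when a policy condition matches;
// the monitor agent drains the queue and fires the defined action.
const esclReq = [];                                // stands in for S_ESCL_REQ
function trigger(row, policy) {
  if (policy.condition(row)) esclReq.push({ rowId: row.ROW_ID, policy: policy.name });
}
function monitorAgentPoll(actions) {
  const fired = [];
  while (esclReq.length) {
    const req = esclReq.shift();
    fired.push(actions[req.policy](req.rowId));    // e.g. invoke a workflow
  }
  return fired;
}
```

The key point mirrored here is the decoupling: the trigger only records that the condition held, and the agent decides later (possibly on another server) what action to run.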

Business Logic and Decision Rules

There are several different ways to implement business rules in a Workflow process. The
following chart shows the major ways and a comparison on when to use them.

Decision Rules: Decision Step Details

Decision steps exit with multiple branches. For each branch a conditional statement is evaluated.
A conditional statement compares any two of the following:
process properties,
business component fields or
literal values.

The terms of comparison include:


two values are equivalent,
one value exists among a series of others, for example, child record values, one or all
must match,
greater than or less than,
between or not between, and
null or not null.

The Compose Condition Criteria dialog box is shown in Figure 7. This example shows a branch in
a workflow where the branch would be followed if the Severity field from a Service Request
matched the value 1-Critical.
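A condition-driven branch choice like the one in this example can be sketched in plain JavaScript. Real workflows express their conditions in Siebel Query Language; the operator set and record shape below are simplified assumptions covering only the comparison kinds listed above.

```javascript
// Sketch: evaluating decision-step branch conditions against a record.
function evalCondition(cond, rec) {
  const v = rec[cond.field];
  switch (cond.op) {
    case "equals":  return v === cond.value;
    case "oneOf":   return cond.values.includes(v);
    case "between": return v >= cond.low && v <= cond.high;
    case "isNull":  return v == null;
    default:        throw new Error(`unsupported operator: ${cond.op}`);
  }
}

// First branch whose condition holds wins; a branch without a condition
// acts as the default.
function chooseBranch(branches, rec) {
  const hit = branches.find((b) => !b.condition || evalCondition(b.condition, rec));
  return hit ? hit.name : null;
}
```

With a branch conditioned on Severity equals "1-Critical" and a default branch after it, a critical service request follows the first branch and everything else falls through to the default.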

Actions
There are several ways to effect actions in a workflow; in other words, data is taken as input, a transformation takes place, and data is produced as output. Table 4 below shows the major ways to cause a transformation, with some explanation of how to make design decisions about using them.

Actions in Siebel Workflow

Actions: Business Service Step


Business Service steps execute predefined or custom methods. Typical predefined business
services used include Assignment Manager requests, Notification through the Communications
Server, server requests and integration requests (EAI). Custom Business Services can be written
in Siebel VB or eScript. When defining a Business Service step you must specify the business

service, the business service method, input arguments (pass in Process Property, BusComp
data, or literal value) and output arguments.

Some commonly used Business Service in Workflow processes include:


1. FINS Data Transfer Utilities
Description: Allows you to transfer data from source BC to a destination BC without script.
Available Methods:

DataTransfer
GetActiveViewProp
QueueMethod
TryMockMethod.

2. FINS Validator
Description: Validate data based on predefined rules. It is developed through Application
Administration and not script. Also, supports custom messages.
Available Methods:
Validate
3. FINS Dynamic UI Service
Description: Allows creating and rendering of read-only views with a single read-only applet
based on user input. Administered through admin views and not script.
Available Methods:
AddRow
DeleteRow
SetViewName

4. Outbound Communications Manager


Description: Automates sending notifications via fax and emails to contacts and employees.
Available Methods:
CreateRequest
SendMessage
SendSmtpMessage
SubmitRequest
5. Synchronous Assignment Manager Requests
Description: Automates assigning objects by using Assignment Manager rules.
Available Methods:
Assign
6. Server Requests
Description: Allows sending of generic requests to the request broker. It can send them in three
different modes: asynchronous, synchronous, or schedule mode. (For example, calling a
workflow that must run as an asynchronous request.)
Available Methods:
SubmitRequest
CancelRequest
7. Report Business Service
Description: Automates sending, scheduling, printing, saving, emailing reports. Also,
automates administrative tasks such as synching new users.
Available Methods:
ExecuteReport
DelOne
DownloadReport
GrantRoleAccess2Report
GrantUserAccess2Report
PrintReport
RunAndEmailReport

ScheduleReport
SyncOne

Actions: Siebel Operation Step

Siebel Operation steps allow you to perform database operations of Insert, Update and Query.
These steps are performed on business components. Once you have defined the Operation step,
you can use the Fields child object to define any field values for the step. Also, for an Update you
can use the Search Specification child object to define the records you want to update.

Examples of Operations steps include creating an Activity record when a new SR is opened or
updating a comment field if an SR has been open too long.

Actions: Wait Step


Wait steps allow you to suspend process execution for a specified period of time or until a specific
event occurs.
The example below shows how you can define a timeout based on time defined as literal values
in input arguments.

A Word about Process Properties

Process properties are used as input and output arguments for the entire process. Process
properties are used to store values that a workflow process retrieves from the database or
derives before or during processing. Decision branches can use the values in a process property
and pass properties as step arguments. When a workflow process completes, the final results of
the process properties are available as output arguments. Process property values can be used
in expressions.

With every workflow there is a set of predefined process properties that are automatically
generated when you define the workflow. These are:

Error Code: Populated by a step should an error occur


Error Message: Populated by a step should an error occur
Object Id: Row ID of the record against which the process is invoked
Process Instance ID: Unique number assigned to the currently running instance of
the process
Siebel Operation Object ID: Row ID of the record inserted by the Siebel Operation
step
As an example of using process properties, you could define three new process properties for a workflow. These properties are of type String, as shown below, with the values "Welcome", "to", and "Siebel".
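Treating process properties as the workflow's local variables, the example above can be simulated in a few lines of plain JavaScript. The step and property names are invented; only the idea of steps reading and writing a shared property set, with the final values available as output arguments, mirrors Siebel Workflow.

```javascript
// Sketch: process properties as the workflow's shared local variables.
// Each step reads the current properties and returns updates; the final
// property values are the process's output arguments.
function runProcess(steps, props) {
  for (const step of steps) Object.assign(props, step(props));
  return props;
}

// A step that concatenates the three example string properties.
const concatStep = (p) => ({ Greeting: [p.Prop1, p.Prop2, p.Prop3].join(" ") });
```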

Developing Workflows in Siebel Tools

The following section brings out some points with Siebel 7.7 Workflow in Siebel Tools and
developing on a local database.

Workflow is a repository object. Workflow belongs to a project. In Siebel 7.7, workflow does not
participate in the following behaviours that are standard for other repository objects

SRF: workflow has its own deployment mechanism, the details of which can be found in the Business Process Designer Administration Guide
Merge: workflow does not participate in the 3-way merge; when workflow definitions are imported into the repository, they maintain the versioning provided by workflow
Object Comparison: disabled for Siebel 7.7
Archive: workflows do not participate in .sif archives; instead, workflows can be archived as XML files using the workflow export utility
Typically, developers use a local database to develop workflows. When using a local database, workflow definitions need to be checked out from the master repository.
Developing workflows in a local database requires the local database to have all the referenced data objects. Data objects that are not docked, and hence not packaged as part of the database extract, need to be imported into the local database. The following objects are not docked and are referenced by workflow:

Data Maps
Message tables
To import data maps to the local database, use the dedicated client connected to the local database and the client-side import utility. Message tables can be copied over to the local database. Alternatively, developers can define messages using an unbounded picklist. This allows the creation of the messages but does not check the validity of the message at definition time.

Developers can also develop or modify workflows using Siebel Tools connected to the
development database by locking the project in the master repository. This way, they do
not need to make sure that all the list-of-values are made available to the local database

Event Logs
More detailed information on the execution can be viewed in log files by setting event logs.
Events used for logging are as follows:

Event: Workflow Engine Invoked (EngInv)


Description: Trace methods invoked and arguments passed

Event: Workflow Definition Loading (DfnLoad)


Description: Trace process and step definitions loaded into memory

Event: Workflow Process Execution (PrcExec)


Description: Trace process instance creation/completion. Trace process property get/set.

Event: Workflow Step Execution (StpExec)

Description: Trace step creation/completion, branch condition evaluation, business service


invocation, business component insert/update.

Migrate to Production
Once the workflows are tested, they are marked for deployment by clicking the Deploy button and
then checked into the master repository. Deployment of workflow is a two-step process:
1. Using Siebel Tools, workflow definitions are marked for deployment. This is done by clicking
the Deploy button in Siebel Tools.
2. Using the Siebel Client, workflows are activated from the Business Process Administration
view. The process of activation, writes the definitions from the repository tables to the runtime
tables for the workflow engine to execute.
Workflow definitions can be migrated across environments, from Development to Production for
example, using one of the following migration utilities:
1. Repository Migration Utility: allows export/import of all repository objects. This utility is best used to migrate workflow definitions when the business is ready to roll out the release (for example, to migrate all repository objects).

2. Workflow export-import utility this utility allows incremental migration of workflow


definitions. Using Siebel Tools, one would export the workflow from one environment and import
the workflow to another environment. Import of workflows can be done in one of the following
ways:
a. Using Siebel Tools, you would import the definition into the repository of the target
environment. Then the definitions are ready to be activated. This approach ensures
that the version of the workflow definition that exists in the repository tables and the
runtime tables are the same.

Troubleshooting Common Workflow Errors

The following lists some commonly encountered errors for Workflow Process Manager.
1. Problem: You activated your workflow, but it is not executing.
Solution: Verify that <Reload Runtime Events> was performed. To tell whether a process has been
triggered, turn workflow logging (EngInv, StpExec, PrcExec) on. See the Business Process
Administration Guide on Siebel Bookshelf 7.7 for procedures on how to do this.

2. Problem: You revised the workflow process and reactivated it, but the previous workflow
definition was read.

Solution: For workflows running in the Workflow Process Manager server component, reset the
parameter <Workflow Version Checking Interval>. By default it is 60 minutes.
3. Problem: When a workflow is triggered by the runtime event Display Applet, the workflow is
triggered the first time but not subsequently. Why?

Solution: DisplayApplet is a UI event, and the default web UI framework design caches views, so
the event fires only the first time a non-cached view is accessed. The workflow is triggered
whenever the event fires, and works correctly. To make this still work in this scenario, you can
explicitly set EnableViewCache to FALSE in the .cfg file.

4. Problem: If a buscomp has code on WriteRecord and the runtime event fires on WriteRecord,
which occurs first? Solution: The WriteRecord runtime event is in essence a Post-WriteRecord
event and is fired AFTER the buscomp code is executed.
5. Problem: After you trigger a workflow from a runtime event, you do not get the row-id of the
record on which the event occurred. Solution: The runtime event passes the row-id of the object
BO (that is, the primary BC), not the row-id of the BC. Retrieve the row-id of the active BC using
a search specification (for example, Active_row-id (process property) = [Id], defined with Type =
Expression and BC = the BC name).
6. Problem: Encountered the error <Cannot resume Process <x-xxxxx> for Object-id <x-xxxxx>.
Please verify that the process exists and has a waiting status.>
Solution: This error typically occurs in the following scenario:

(1) A workflow instance is started and paused, waiting for a runtime event
(2) The runtime event fires. The workflow instance is resumed and run to completion.
(3) The runtime event fires for a second time. Workflow engine tries to resume the workflow
instance and fails, since the workflow instance is no longer in a Waiting state.
Deleting existing instances will not help; ignore the error message and proceed.
Steps (1)-(3) need to occur, in that order, in the same user session for the error message to be
reported. As a result, the error message disappears when the application is restarted.

Also, Purge only works on stopped/completed instances. To delete persisted/incomplete
instances, you must manually stop the instances first.
7. Problem: How do you access a different business object (BO) from a workflow process?
Solution: Workflow architecture restricts the use of 1 BO to a workflow. Use a sub-process step to
access a different BO.
8. Cannot initiate process definition <process name>

Solution: Verify that the workflow process exists, process status is set to Active, and the process
has not expired.
9. Problem: OMS-00107: (: 0) error code = 4300107, system error = 27869, msg1 = Could not
find 'Class' named 'Test Order Part A'
OMS-00107: (: 0) error code = 4300107, system error = 28078, msg1 = Unable to create the
Business Service 'Test Order Part A'

Solution: Make sure at least one .srf file is copied to the SIEBEL_INSTALL\objects\<lang> directory.

Using Siebel State Model


Business Challenge and Solution

Business challenge
Need to control field value changes for opportunities, service requests, and
activities
Need to allow only certain positions to change field values
Need to prevent certain positions from changing the status, or to disallow the
status change altogether
Solution
Use State Model

Siebel State Model


Data-driven method that controls the value of a static picklist field
Uses a set of rules and conditions that define the transitions of a field from
one state to the next
Can control which positions can change field values

Ideal for controlling progression of records through an approval process
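The mechanism described above amounts to a transition table plus a position check. A minimal Python sketch of the idea (the states, transitions, and position names are invented for illustration; this is not Siebel code or schema):

```python
# Illustrative state model for a "Status" field: which state changes are
# defined, and which positions are authorized to make them.

# Allowed transitions: from_state -> set of reachable to_states
TRANSITIONS = {
    "Open":             {"In Progress", "Closed"},
    "In Progress":      {"Pending Approval", "Closed"},
    "Pending Approval": {"Approved", "In Progress"},
}

# Positions allowed to perform a given transition; absent = anyone may.
AUTHORIZED = {
    ("Pending Approval", "Approved"): {"Manager"},
}

def can_transition(old, new, position):
    """Return True if this state change is allowed for this position."""
    if new not in TRANSITIONS.get(old, set()):
        return False                      # transition not defined at all
    allowed = AUTHORIZED.get((old, new))
    return allowed is None or position in allowed
```

For example, `can_transition("Pending Approval", "Approved", "Agent")` is rejected because only the Manager position may approve, which mirrors the position-based control the State Model provides.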

Siebel Business Rules Using HaleyAuthority

About HaleyAuthority

Haley Systems, Inc. has been a recognized global leader in rule-based
programming, as well as a leading expert in automating managed knowledge,
since 1989.

A set of tools for modeling business policies as rule statements in English, without
the need to employ programming languages.

Allows you to test, implement, and deploy rule statements. The statements are
executed by Haley's inference engine.

Advantages of HaleyAuthority
Advantage of Rules

A rule system avoids the flow-of-control tangle by using small, independent
statements of tests and actions (the rules).

Each policy is implemented as a single, separate, independent rule.

To add a new policy to the system, write a single new rule.

To remove an old policy, delete a rule. You can change a policy freely without
fear of damaging the rest of the program.
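The rules-as-independent-statements idea can be sketched in a few lines of Python; the policies shown are invented examples, not HaleyAuthority syntax:

```python
# Each policy is one small, independent (test, action) rule.
# Adding a policy means appending one rule; removing one means deleting it.

rules = []

def rule(test):
    """Decorator that registers an independent test/action pair as a rule."""
    def register(action):
        rules.append((test, action))
        return action
    return register

@rule(lambda order: order["amount"] > 10000)
def require_approval(order):
    order["needs_approval"] = True       # policy 1: large orders need approval

@rule(lambda order: order["customer_tier"] == "gold")
def apply_discount(order):
    order["discount"] = 0.05             # policy 2: gold customers get 5% off

def run_rules(order):
    for test, action in rules:           # rules fire independently of each other
        if test(order):
            action(order)
    return order
```

Because each rule stands alone, deleting `apply_discount` cannot break `require_approval`, which is the maintainability advantage the text describes.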

Advantage of Natural Language

Write a statement, which is a simple description of the rule in plain English.

HaleyAuthority creates the code behind the scenes. It programs itself.

Haley Architecture

Siebel Business Rules Features

This tool is available only from the Siebel 8.0 release.

Rules are centrally developed and administered.

Rules are enforced globally throughout the Siebel Application.

Uses client side configuration rather than repository based configuration and
compilation.

Reviewable by non-implementers such as business analysts.

Appropriate for business logic that changes frequently.

Appropriate for complex Business Logic

Can replace large amounts of custom scripts.

High Level Rules Architecture

Components of the Rules Architecture

Haley Enterprise's HaleyAuthority

Siebel Object Importer

Siebel Deployment plug-in

Rules Runtime Administration screen

HaleyAuthority knowledge base

Haley inference engine

Proxy Service

Runtime Inference Engine

Is a third party rules engine used to evaluate and execute the business rules at
runtime.

Installed automatically in the Siebel client during standard client installation.

Accessed by calling the Business Rules Service business service. It serves as the
interface to the inference engine.

Invoked using:

An action set in a runtime event
A business service step in a Siebel workflow or task
A business service call in script

Steps for using HaleyAuthority

Creating a knowledge base

Importing Siebel objects into HaleyAuthority

Creating rule modules
To add modules and submodules
To add a statement to a module or submodule

Deploying a rule module

Activating a deployed rules module

Configuring a runtime event to invoke rules

Testing the deployed rule module

Enterprise Integration Manager (EIM)


Is a server task that manages exchange of data between Siebel applications and
external applications via Siebel base tables and interface tables
Uses a configuration file to determine whether data should be imported, merged,
deleted, or exported

Why not SQL?

Relationships between Siebel tables are complex.

Referential integrity is maintained through ROW_IDs, not through constraints on
the database.

SQL statements cannot generate Siebel ROW_IDs.

Siebel Base Tables

Contain user data that can be exported to an external application

[Figure: the Siebel application reads and writes the Siebel base tables; the
external application reads and writes external data tables.]

Interface Tables
Store data for export outside the Siebel database
Data brought together to represent one or more base tables
Staging area for data

User Keys

Based on multiple columns, user keys are used to uniquely identify a row for
EIM.
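The user-key idea reduces to an insert-or-update match on several columns rather than on ROW_ID. A minimal Python sketch (the column names are illustrative, not actual interface table columns):

```python
# EIM-style matching on a user key: several columns that together uniquely
# identify a row. If the key matches an existing row, update; else insert.

USER_KEY = ("name", "location")   # columns that jointly identify a row

base_table = []                   # list of dicts standing in for a base table

def eim_upsert(row):
    """Insert the row, or update the existing row with the same user key."""
    key = tuple(row[c] for c in USER_KEY)
    for existing in base_table:
        if tuple(existing[c] for c in USER_KEY) == key:
            existing.update(row)          # user key matched: update in place
            return "UPDATED"
    base_table.append(dict(row))          # no match: new row
    return "INSERTED"
```

Loading the same account name at a second location inserts a new row, while re-loading the same name and location updates the existing one.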

Interface Table Structure

All interface tables have three columns that must be populated in addition to the
mapped data: IF_ROW_BATCH_NUM, ROW_ID, and IF_ROW_STAT.

IF_ROW_BATCH_NUM  ROW_ID  IF_ROW_STAT  NAME
100               1       FOR_EXPORT   Pen
100               2       FOR_EXPORT   Staple
200               1       FOR_EXPORT   Notebook
200               2       FOR_EXPORT   Copier

Note: ROW_ID here is not the generated ROW_ID used on base tables

Interface Tables Temporary Columns

Start with T_
Used to hold temporary values and status during the processing step

EIM Process Flow

Process Flow between Siebel Database and Customer Master

Data mapping
Prepare the interface tables (for example, load them with an ancillary SQL
utility such as SQL*Loader, driven by control files)
Prepare the EIM configuration file
Run EIM
Verify results

Process Flow between Siebel Database and Other Databases

Data Mapping

An interface table may populate more than one base table.

A base table may be populated by more than one interface table.

Identify:
Which interface table maps to which Siebel base table
Which Siebel base table maps to which interface table
Which interface table columns map to which base table columns

Note: Some base tables may not be mapped to a corresponding interface table. In
such cases, use Siebel VB to load the data. Siebel VB works on the business object
layer, while EIM works on the data layer.
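The Run EIM step is driven by a configuration (.ifb) file. A minimal sketch of what such a file can look like for an account import; the process name and parameter values are placeholders, so check them against your own environment:

```
[Siebel Interface Manager]
USER NAME        = "SADMIN"
PASSWORD         = "********"
PROCESS          = Import Accounts

[Import Accounts]
TYPE             = IMPORT
BATCH            = 100
TABLE            = EIM_ACCOUNT
ONLY BASE TABLES = S_ORG_EXT
```

The BATCH value selects the rows whose IF_ROW_BATCH_NUM matches, which is how the batch column shown earlier partitions a load.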

What is an Upgrade?
Migrating the application from one version to another

Migrating the data along with the application

Migrating or redesigning client customizations

How to Upgrade
Upgrade the application

This is the usual route for most upgrades, typically when the client wants
essentially the same system as before, with minor enhancements.

Redo the customization

This is suggested when the client wants to redesign some business
processes to take advantage of new functionality.

Why Upgrade?
To take advantage of additional functionality provided by the new version

If Siebel discontinues customer support of old versions

Benefits of Upgrading
New Functionality

Opportunity to rework the system

Moving to supported systems

Ongoing cost reductions

Simpler Development

Upgrade Considerations

There are several areas to consider as you examine your upgrade options,
including application functionality, technological enhancements, operational
considerations, user support and change management, and ongoing support
availability.

Functional Capabilities: In considering an upgrade, most organizations begin
with a critical assessment of the new capabilities and enhancements against
existing features, to assess the value to be gained through the investment of time
and resources.

Technological Enhancements: Technical infrastructure requirements are
legitimate points of consideration as you evolve your application upgrade strategy,
including: client architecture, application server, web services, customizations,
database options, and various hosting options.

User Support and Change Management: Change management is a critical
component of the upgrade process, ensuring that your project obtains optimal user
adoption as it is upgraded and deployed.

Siebel Upgrade Framework

Upgrade / Migration Assessment

Assessment Deliverables

Upgrade / Migration Roll out

Post Migration Support

Flow of the Upgrade Process

Upgrade Flow

What's New in Siebel 8/8.1: Generic Functionality

Self-Service Applications

The self-service applications (for example, eCommerce) have been redeveloped
in Oracle ADF and are now standalone applications that call Siebel
through web services.

Developers have much greater control of the UI than previously.

It is easier to combine Siebel with other applications in the presentation
layer.

BI Publisher

From 8.1.1, BI Publisher replaces Actuate for pixel-perfect reporting.

Report layout creation is no longer a technical development task.

Report deployment is decoupled from the Siebel release cycle.


Task Based User Interface

This major piece of functionality redefines the way users complete tasks.

Views are arranged in a flow through which users must navigate.

Consistency of process and completeness of data entry are enforced.

Microsoft Integration

Plug-ins and features greatly improve integration with key applications:

Siebel toolbars added to Word and Excel

Enhanced Exchange information

Improved communication between Siebel and Microsoft SharePoint

Both Siebel 8 and 8.1 contain significant amounts of business functionality. Some of
this is applicable to many industries, while some of it is highly specific, deep
vertical functionality.

The ability to easily define and reassign sales territories.

Major Enhancements to Siebel Marketing

Additional Order Management functionality

Substantial improvements to case management for public sector customers,
and support for cross-agency processing

Deepened sample management for the pharmaceutical industry

More control over trade promotion budgets in the CPG industry

Specific account origination functionality in Siebel eFinance

Task Based User Interface

Overview

Task UI extends business process automation to the level of user interaction.

Task UI is a wizard-like user interface

A task can be incorporated as a step within one or more broader-based Siebel
workflows

A task can help define integration with external systems

Task UIs features

Supports forward and backward navigation through a sequence of views.

Incorporating decision processes requiring complex business logic

Supports pausing and resuming tasks if users are interrupted:
- An instance of the partially completed task is saved in the user's inbox
- Context and all data are maintained
- The task is resumed from the Universal Inbox

Supports transfer of paused tasks to other users.

Supports coordinating multiple actions comprising a logical transaction that must
either finish successfully or be completely rolled back.

Supports integration to external data or services.

Task UI Concepts

Transient Business Component

Task Applet

Task View

Task Chapter

Task Group

Task Playbar

Transient Business Component

Transient business components (TBCs) provide a way to create task-specific data
that can be displayed and edited in the user interface and accessed from within the
task-flow logic.

During task execution, transient data is stored in temporary storage (the TBCs).

When the task is cancelled, the transient data related to the task is removed from
temporary storage.

When the task is submitted, the transient data can be committed to the Siebel
database using a Commit step or a Siebel Operation step.
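The lifecycle above (edit transient data, discard on cancel, commit on submit) can be sketched outside Siebel as follows; the class and field names are invented for illustration:

```python
# Transient task data lives in temporary storage while the task runs.
# Cancel discards it; submit writes it through to the database.

class TaskInstance:
    def __init__(self, database):
        self.database = database      # stands in for the Siebel database
        self.transient = {}           # stands in for TBC temporary storage

    def set_field(self, name, value):
        self.transient[name] = value  # edits touch only transient storage

    def cancel(self):
        self.transient.clear()        # transient data removed, nothing persisted

    def submit(self):
        self.database.update(self.transient)   # commit-style step persists it
        self.transient.clear()
```

Until `submit()` runs, nothing the user typed reaches the database, which is exactly what makes pausing and cancelling a task safe.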

How Transient BCs Differ from Standard BCs

A transient business component is always based on the S_TU_LOG table.

Changes to transient business components are immediately committed.

Transient business components do not allow multi-value fields.

The Column and Join properties of a TBC are auto-populated and are not editable:

Explicit joins are not allowed within a TBC.

All columns are forced active at run time, to avoid field activation problems.

LOV-bound data is stored using language-independent code, if the associated
LOV type is marked Multilingual.

Task Applet

Task Applet is designed to interact with transient data in fields of TBCs, rather
than standard fields in a regular BC.

How a task applet differs from Standard Applet


- A task applet is based on a TBC.
- A task applet has a specialized frame class, CSSSWEFrameTask
- A task applet is always a form applet.
- A task applet can be based only on grid Web templates.

Note:
- These grid templates differ from standard Web templates in that they do
not display the applet title or title menu.
- The applets in a task view can be either standard applets or task applets.

Task View

A task view is a view made up of task applets and/or standard applets, and
contains a playbar for the user to navigate forward and backward
through a task.

The task view is based on transient business components.

Applets in a task view do not have:
An applet menu
The standard record controls such as New, Delete, and Query

Task Chapter

A task chapter is a list of task steps, grouped under a common display name (the
chapter name).

The task chapter allows you to define a logical grouping of task steps, and
displays the chapter name alongside the task view names in the current task
pane.

Task Group

Represents a collection of related tasks that can be displayed as a set in the task
pane

Task Playbar

The Task Playbar is an applet containing buttons that allow the end user to control
the execution of the task

iHelp

Why?

Set up to provide real-time task assistance to users.

Best used to provide instruction to first-time or occasional users.

Allows you to embed view navigation links and highlight important fields and buttons.

Using iHelp to complete Tasks

From the application-level menu, choose View > Open iHelp

Click the How Do I button on the application toolbar

The iHelp pane appears in the left side of the application window

iHelp Pane lists all the iHelp items related to the current screen

The iHelp pane remains open until you close it, even if you navigate to another
screen.

Using iHelp Map

It is a view that displays all the iHelp items available

Choose Navigate > Go to iHelp Map

Choose Navigate > Site Map > iHelp Map > iHelp Map

To launch an iHelp item, click the iHelp item name

Process of iHelp Administration

Creating iHelp Item record

Designing iHelp

Clearing the iHelp List Cache

Activating, Revising and Deactivating iHelp Items

Importing and Exporting iHelp Items.

SIEBEL SMART SCRIPT

Overview

Siebel SmartScript allows business analysts, call center managers, and Siebel
developers to create scripts that define the application workflow for interactive
customer communications.

The script determines the flow of the interaction, not the agent or customer.

Examples of such communications include marketing applications where
customers interact with Web surveys or email questionnaires, and customers using
interactive troubleshooting guides on service Web sites such as Siebel eService.

Siebel SmartScript offers the following benefits

Software-controlled workflow

Reduced training time.

Simple workflow design and implementation.

Graphical user interface.

Personalized interaction.

Dynamic updating.

Dashboard.

Efficient modification and reuse of scripts.

Smart Script Terminology

Script

Page

Question

Answer

Translation

Branch

Script Designer and Page Designer

Procedure for creating Smart Script

Understand Business Scenario and prepare a design

Create Questions and Translations

Create Answers

Create Pages

Create the Script

Add Questions to pages

Add Pages to the script
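The script-driven flow described above can be sketched as a page graph whose branches are chosen by the answers given; the pages, questions, and answers here are invented examples:

```python
# SmartScript-style flow: the script, not the agent, decides the next page
# based on the answer to the current question.

SCRIPT = {
    "welcome": {
        "question": "Is this a new issue?",
        "branches": {"yes": "collect_details", "no": "lookup_case"},
    },
    "collect_details": {"question": "Describe the problem", "branches": {}},
    "lookup_case":     {"question": "What is the case number?", "branches": {}},
}

def next_page(current, answer):
    """Return the page the script branches to, or None if the path ends here."""
    return SCRIPT[current]["branches"].get(answer)
```

Answering "yes" on the welcome page routes the agent to detail collection, while "no" routes to case lookup; the branching lives in the script definition, mirroring how pages, questions, answers, and branches fit together.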

Siebel Data Validation Manager

Introduction

The Data Validation Manager business service can validate business component
data based on a set of rules.

In the case of a rule violation, a custom error message appears or a user-defined
error code is returned.

The validation rules are defined using the Siebel Query Language and can be
conveniently constructed using the Personalization Business Rules Designer.

The business service centralizes the creation and management of data validation
rules without requiring extensive Siebel Tools configuration and does not require
the deployment of a new SRF.

The Data Validation Manager business service reduces the need for custom
scripts, decreases implementation costs, and increases application performance.

The DVM features

Search automatically for the proper rule set to execute based on active business
objects and views.

Write validation rules based on fields from multiple business components.

Apply a validation rule to a set of child business component records to see if a
violation occurs in one or more records.

Invoke specific actions to be performed as a result of a validation.

Write validation rules that operate on dynamic data supplied at run time together
with data from business component fields.

Automatic logging of data validation events.

Roadmap for Implementing Data Validation Processing


To automate data validation processing, perform the following processes:

A. Process of Administering Data Validation Rules

B. Process of Invoking the Data Validation Manager Business Service

A. Process of Administering Data Validation Rules


To administer data validation rules, perform the following tasks:
1. Defining Error Messages for Data Validation

2. Defining a Data Validation Rule Set

3. Defining Rule Set Arguments

4. Defining Validation Rules

5. Defining Validation Rule Actions

6. Activating a Data Validation Rule Set
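The overall behavior — a rule set of expressions with custom error messages, evaluated against record data — can be sketched in Python. The rules and field names are invented, and real DVM rules are written in Siebel Query Language, not lambdas:

```python
# DVM-style validation: an ordered rule set of (expression, error message)
# pairs evaluated against a record; the first violation returns its message.

RULE_SET = [
    (lambda rec: rec["Status"] != "Closed" or rec["Resolution"] != "",
     "A closed service request must have a resolution."),
    (lambda rec: rec["Severity"] in ("1", "2", "3", "4"),
     "Severity must be between 1 and 4."),
]

def validate(record):
    """Return None if all rules pass, else the custom message of the violation."""
    for rule, message in RULE_SET:
        if not rule(record):
            return message
    return None
```

A record that passes every expression validates silently; any violation surfaces the rule's own message, which is the custom-error behavior the feature list describes.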

Access Control Mechanisms

Access Control Requirements

Security

Resources

Secure sensitive data

Personal

Competitive

Segregate data

Show only data that user needs

Target specific data to individual users

Sharing data

Management Process

What is Access Control?

All mechanisms that control data access

Personal

Position

Responsibility

Organization

Access Group

Authorizations to resources

Relationships between people and data

Single Party Model

Party entities represent people and collections of people:

Person
Contact: any individual person
User: a Contact with an application login
Employee: a User who is associated with an internal position
Partner User: a User who is associated with an external position

Position
A job title; drives reporting and management

Role
A job function that a user performs

Organization
Division: maps to a company's physical structure
Organization: drives data visibility and the company reporting process

Account
An external company

User List
An ad hoc group of people

Access Group
An ad hoc group of parties

Household
A group of people

Categorization

Catalog
The topmost level within a categorized hierarchy
Contains references only to categories, not data

Category
Organizes data items
Can contain categories and master data
Created on an ad hoc basis
Hierarchical; belongs to only one hierarchy

[Figure: example catalog hierarchy. A Product Catalog contains a Hardware
Category (Data Items 1 and 2) and a Software Category; the Software Category
contains a CRM Category and an ERM Category, each with Data Items 1-3.]

Organizational Entities

Position
Has a single parent position
Depicts ownership
Hierarchy drives manager visibility

Division
Has no direct effect on visibility
Rolls up to organizations
Used to map a company's physical structure (positions), record addresses and
country of operation, and maintain default currencies

Organization
Partitions data and people into logical groups
Hierarchy drives Sub-Organization visibility

Access Group
Can exist in multiple group hierarchies
Hierarchy drives Catalog/Group visibility

[Figure: example hierarchies. Positions: Director > Mgr > Product Specialist.
Organizations: WW Org > NA Org and EMEA Org. Divisions: Sales, Marketing.]

Siebel Access Control Mechanisms

Responsibility
Determines which views a user can access
Unchanged from version 6 to version 7.x

Position
Determines which records a user can access
Represents a person's job title
Drives manager visibility
Unchanged from version 6 to version 7.x

Role
Drives visibility to application functionality such as tasks, screens, and views
By default, users only see the screen tabs and view tabs for their roles
New in 7.5
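The two checks can be sketched side by side: responsibilities gate views, positions gate records. The view names, responsibilities, and positions below are invented examples:

```python
# Responsibility -> which views a user can access.
# Position       -> which records a user can access.

RESPONSIBILITIES = {
    "Call Center Agent": {"Service Request View", "Contacts View"},
}

def can_see_view(user, view):
    """View-level check: any of the user's responsibilities grants the view."""
    return any(view in RESPONSIBILITIES.get(r, set())
               for r in user["responsibilities"])

def visible_records(user, records):
    """Record-level check: only records owned by the user's position."""
    return [r for r in records if r["position"] == user["position"]]
```

The same user can therefore reach a view through a responsibility yet still see only the subset of records that their position owns.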

Siebel Organization Implementation

Internal and external organizations


Facilitates delegated admin
Increases accuracy
Reduces maintenance
Improved back office integration
Improved mobile client synchronization
Enhanced Assignment Manager
Increased control of data access
Facilitates call center multi-tenancy

Organization enabled Objects

Sub-Organization Visibility


Based upon hierarchical organizations


Displays data for an organization and all of its sub-organizations

Benefits

Increases flexibility of data visibility

Allows more realistic modeling of data
Partner users have access to transactional data throughout their
organization hierarchy, rather than just their location
Employees who manage partners can view opportunities for their
organization and all their partners' organizations
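Sub-Organization visibility is essentially a recursive walk down the organization hierarchy. A minimal sketch; the organization names are invented examples:

```python
# Sub-Organization visibility: a user sees data for an organization and,
# recursively, for every organization below it in the hierarchy.

CHILDREN = {
    "WW Org": ["NA Org", "EMEA Org"],
    "NA Org": ["US Partners"],
}

def visible_orgs(org):
    """The organization plus every sub-organization beneath it."""
    orgs = [org]
    for child in CHILDREN.get(org, []):
        orgs.extend(visible_orgs(child))    # recurse into sub-organizations
    return orgs
```

A user anchored at the top of the hierarchy sees every organization's data, while one anchored at a leaf sees only their own, matching the partner-visibility benefit described above.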

Access Group

Logical grouping of parties

Possible members: Organization, Position, User List, Household, Role

Ad hoc

Can be hierarchical

Members have the same access

Why Another Visibility Control Mechanism?

What Are Access Groups?


How Access Group Addresses Complex Requirements

Behavior of Community and Catalog - Catalog

Overview of EAI

Need for Integration


Different applications control business data
Companies purchase best-of-class applications in each domain
Each application has a different user interface
Each application uses a different data source

Users want:
Seamless access to all business data
A consistent, known user interface
Reliable data
To avoid reentering data in multiple applications

Basic Integration Tasks


Identify the data to integrate in each application
Map and transform the data from each application
Transport the data between applications

Identify the Data to Integrate in Each Application


Identify the external data to bring into the Siebel application
Identify the Siebel data to send to the external application

Map and Transform the Data from Each Application


Using a common exchange format such as XML

Map: Match Siebel field names with external field names


Transform: Match Siebel data structures to external data structures

Transport the Data Between Applications


Move data to and from the Siebel application and the external application using:
IBM MQSeries
IBM MQSeries AMI
Microsoft MSMQ
Microsoft BizTalk Server
File transport
HTTP

Traditional Application Integration


Two main traditional techniques
Batch synchronization with custom transfer routines on each side
Integration of legacy screens into the new application's GUI

Serious problems with traditional techniques


Integration points often are poorly defined
There is no consistent integration architecture across applications
As the number of applications increases, the number of integration points
grows rapidly, with a rising cost for each new initiative
Implementations are hard to support, since they are sensitive to small
changes in schema, file format, and screen layout
A new access channel (like the Web) requires re-implementing all the
business logic embedded in existing applications
Connected applications are frozen in old releases because their
integrations are too expensive to upgrade

Features of an Integrated Environment


Open object interfaces with robust, extensible, reusable objects
Interfaces easy to modify and extend, with automatic upgrades
Multi-tier architecture with open interfaces at each tier
Serial interfaces for integration through messaging
Choice of application synchronization techniques
Data replication
Online access to external shared objects, with no data replication
Support for mobile clients
Cross-application process integration capabilities
Ability to time-out and act on asynchronous interaction failures
Trigger internal activity upon asynchronous interaction successes
Workflow automation across applications

Integration Approaches

1. Synchronize Siebel data with external data
2. Display external data in Siebel applets
3. Display Siebel data in an external application
4. Control a Siebel application from an external application
5. Export Siebel data

[Figure: the Siebel application and the external application each consist of a
user interface, business logic, and raw data layer; integration can occur at
each layer.]

Siebel Integration Strategies

Workflow for EAI
EAI Dispatch Service
Virtual Business Components
eBusiness Connectors
Enterprise Integration Manager (EIM)
Object Interfaces

Workflow for EAI


Provides bidirectional data replication (synchronization) between a Siebel
application and an external application using standard transports
Example: Synchronize Siebel account data with customer data on a
mainframe using the MQSeries transport

Elements of Workflow for EAI


Integration objects
Siebel (internal) integration objects define Siebel data to be replicated in
an external application
External integration objects define external data to be replicated in a
Siebel application

Business services:
Map, transform, and transport data between applications
Implement both pre-built methods and custom scripting
Workflows:
Connect business services and other elements in sequences
Run in the object manager

EAI Dispatch Service


Uses rules to evaluate the structure and contents of property sets (instances); data
that matches a rule is sent to a specified workflow or business service
Optionally transforms the data before sending it
Example: Dispatch rules scan incoming documents for various patterns, then send
each document to the proper workflow
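The dispatch behavior can be sketched as an ordered rule list over incoming documents; the rule conditions and workflow names are invented examples:

```python
# EAI Dispatch-style routing: rules inspect an incoming document (standing in
# for a property set); the first matching rule decides the target workflow.

DISPATCH_RULES = [
    (lambda doc: doc.get("type") == "order",   "Process Order Workflow"),
    (lambda doc: doc.get("type") == "invoice", "Process Invoice Workflow"),
]

def dispatch(doc, default="Error Handling Workflow"):
    """Return the workflow that should receive this document."""
    for rule, target in DISPATCH_RULES:
        if rule(doc):
            return target
    return default      # nothing matched: route to a fallback
```

Because rules are evaluated in order, more specific patterns can be listed first, and unrecognized documents fall through to the default target.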

Virtual Business Components


Enable the display and manipulation of external data from within Siebel applets
without storing it in the Siebel database
Example: Display Siebel contact data with contact details
from an external source in the same view

eBusiness Connectors
Provide end-to-end integration between Siebel applications and other
applications such as Oracle and SAP R/3
Example: Exchange orders between Siebel front-office and SAP R/3
back-office applications

Enterprise Integration Manager (EIM)


Exchanges large volumes of data between the Siebel database and external
sources through interface tables in batch mode

Example: Each week the application captures mainframe updates and runs
a batch job to synchronize the Siebel account data

Object Interfaces
Expose Siebel objects to programmatic access from Siebel Visual Basic scripts,
eScripts, or external applications
Enable external applications to control the Siebel application or access the Siebel
database using:
COM Servers: Automation Server, Data Server
CORBA Object Manager
Java Data Bean
Example: A button in an Excel spreadsheet calls the Siebel COM Data Server
to update Siebel contact data from Excel values

Other EAI Strategies


ActiveX:
Displays the Siebel UI in an external application, or displays an external
application UI in a Siebel application
Client-side import/export:
Exchanges account and contact data between a Siebel database and files
Siebel Sync:
Synchronizes data between a Siebel database and a mobile Web client,
Microsoft Outlook, Palm, or Windows CE handheld

Comparing EAI Strategies

Basic decision: will you replicate data in each application?
If NO, consider using virtual business components
If YES, data must be synchronized between the applications

Level of abstraction: EIM and Object Interfaces are less abstract; Workflow is
more abstract.

Data volume: Object Interfaces suit a few data pieces; Workflow suits on the
order of 100 records; EIM suits on the order of 1,000,000 records.

Immediacy: EIM runs overnight in batch; Workflow responds in seconds; Object
Interfaces are very fast.

Matching EAI Strategies to Integration Approaches

Which EAI strategies apply to which integration approaches? What should you
consider?


1. Synchronize Siebel Data with External Data


Possible solutions
eBusiness Connector
Workflow for EAI
EAI Dispatch Service
Enterprise Integration Manager (EIM)
Object Interface
Consider:
What is the source of the non-Siebel data?
Do you really want to store the data in two systems?
How quickly must changes in one system appear in the other?
How will you convert and move the data?
Should script rules apply?
What is the volume of data being synchronized?

2. Display External Data in Siebel Applets


Possible solutions
Virtual Business Component (VBC)
Workflow for EAI
EAI Dispatch Service
ActiveX

Consider:

Do you want the Siebel look-and-feel?


Does the data need to be linked to the Siebel data?
Does the data need to be stored in the Siebel database?
How complex is the data from the external application?

3. Display Siebel Data in an External System

(Without the Siebel User Interface)


Possible solutions
COM Data Server Object Interface
ActiveX
Java Data Bean
Consider:
Where does the request for Siebel data come from?
What is the purpose of displaying the Siebel data?
Does the Siebel data need to be linked with non-Siebel data?

4. Control a Siebel Application from an External System


Possible solutions
Workflow for EAI
Object Interface

Consider:
How you want to control the application
Do you want to initiate processing or direct the user interface?
The importance of the Siebel application user context
Which view is active?

5. Export Siebel Data to an External System


Possible solutions
Enterprise Integration Manager (EIM)
Object Interface
Workflow for EAI (outbound)
EAI Dispatch Service (outbound)

Consider:
How often does this need to occur?
What is the volume of data?
Is this outbound only?
Is real-time or batch processing preferred?

Business Services
Are the main building blocks of Siebel workflows
Contain prebuilt Siebel methods (global procedures) written in C++
Can also contain custom scripts written in eScript or Siebel Visual Basic
Analogy: a calculator

Business Service Method Arguments


Input arguments:
Present data to a business service method
Output arguments:
Receive data from a business service method

Property Set

Business service methods receive and send data instances in property sets: each method takes an input property set and returns an output property set.

A property set:

Is instantiated to pass data in and out of a business service
Represents data in strings using name-value pairs
Has two predefined properties: Type and Value
Has an array for creating custom property names and values
Can contain an array of child-level property sets
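The shape described above can be sketched in plain JavaScript (eScript is a JavaScript dialect). This is not the Siebel PropertySet API; the class and method names below are invented to mimic it, showing the Type property, the name-value pairs, and child property sets:

```javascript
// Invented sketch of a Siebel-style property set. NOT the Siebel API.
function PropertySet() {
  this.type = "";       // predefined Type property
  this.value = "";      // predefined Value property
  this.props = {};      // name-value pairs (all values stored as strings)
  this.children = [];   // array of child-level property sets
}
PropertySet.prototype.SetType = function (t) { this.type = t; };
PropertySet.prototype.GetType = function () { return this.type; };
PropertySet.prototype.SetProperty = function (n, v) { this.props[n] = String(v); };
PropertySet.prototype.GetProperty = function (n) { return this.props[n] || ""; };
PropertySet.prototype.AddChild = function (ps) { this.children.push(ps); };
PropertySet.prototype.GetChildCount = function () { return this.children.length; };
PropertySet.prototype.GetChild = function (i) { return this.children[i]; };

// Usage: a parent "Contact" set with one child "Address" set
var psContact = new PropertySet();
psContact.SetType("Contact");
psContact.SetProperty("Last Name", "Smith");

var psAddr = new PropertySet();
psAddr.SetType("Address");
psAddr.SetProperty("City", "Hyderabad");
psContact.AddChild(psAddr);
```

In a real Siebel script the equivalent object would come from TheApplication().NewPropertySet().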

Hierarchical Property Set


Represents structured data (business logic):

Within a Siebel application (business objects and business components)
Within an external application (structure of tables, views, or files)

Conveys the structured data of an integration object, an XML document, or other data stream

What Can Invoke a Business Service

A workflow process
A method from another business service
A user interface event
A Siebel object interface (COM, CORBA, Java)
A built-in script
An external program

Prebuilt EAI Business Services


Change data from one form into another form
Data transformation adapters (1-5): map and transform data
Data transport adapters (6-11): move data among applications

#   Business Service                 Source          Destination       Direction
1   EAI Siebel Adapter               Property Set    Siebel Database   2-Way
2   EAI Data Mapper                  Property Set    Property Set      2-Way
3   XML Converter                    Property Set    XML (Stream)      2-Way
4   XML Hierarchy Converter          Property Set    XML (Stream)      2-Way
5   EAI XML Converter                Property Set    XML (Stream)      2-Way
6   EAI XML Read From File           XML (File)      Property Set      Inbound
7   EAI XML Write To File            Property Set    XML (File)        Outbound
8   EAI File Transport Adapter       XML (File)      XML (File)        2-Way
9   EAI MQSeries Transport Adapter   XML (Stream)    MQSeries Queue    2-Way
10  EAI MSMQ Transport Adapter       XML (Stream)    MSMQ Queue        2-Way
11  EAI HTTP Transport Adapter       XML (Stream)    HTTP Port         Outbound

Prebuilt Data Transformation Adapters


Transform data to and from:
Integration objects in property sets and
Siebel XML in property sets

Example: The XML Converter transforms Siebel data into XML that an external
application can process
Pre-built adapters include:
EAI XML Converter
XML Hierarchy Converter
XML Converter
EAI Siebel Adapter
EAI Data Mapping Engine
XML Gateway business service

Prebuilt Data Transport Adapters


Send data to, and receive data from, external applications
Pre-built transports include:
EAI XML Read From File
EAI XML Write To File
EAI File Transport Adapter
EAI MQSeries Transport Adapter
EAI MQSeries AMI Transport Adapter
EAI MSMQ Transport Adapter
EAI DLL Transport Adapter
Microsoft BizTalk Server Adapter
EAI HTTP Transport Adapter

Custom Business Services


Example: A custom business service to transform data from an external
application into XML that a Siebel application can process
Created with Siebel eScript or Siebel Visual Basic

Using Siebel Tools; stored in the Siebel repository

Using the Siebel client; stored in the application database

Configuration Best Practices

Desc Sort Specification


All the indices in the Siebel data model are implemented in ascending (ASC) order. BCs
and PDQs should never be configured for descending sort order.

Force Active and Link Specification

Always set these properties to FALSE unless the field is used in script, a join, a workflow, or EAI, and the field is not present in the UI.

Non-Indexed Search / Sort Specifications

Use fields mapped to indexed columns only

Similarly, PDQ filters and sorts should not be based on non-indexed columns

Match the exact index column sequence

Search specs affect the WHERE clause and sort specs affect the ORDER BY clause of the SQL

A custom index may help

Primary ID Field & Primary Join Property


Always identify a Primary ID field and set the Primary Join Property to TRUE

Check No Match Property

FALSE - When the primary foreign key in the master record is NULL or invalid, the application performs a secondary query and sets the primary ID to NoMatchRowId or to the detail record's row ID.

TRUE - When the application encounters a master record whose primary foreign key is NULL, NoMatchRowId, or invalid, it performs a secondary query.

Primary ID Field & Primary Join Property & Check No Match Property

Setting No Delete, No Insert, No Update Properties at BC Level

Outer Join Flag

Redundant SearchSpecs on Applets

Example: an applet search spec of [Member Flag] = 'Y' is redundant when the BC already has ([Member Flag] <> 'N' AND [Member Flag] IS NOT NULL).

Such additional or redundant search specs can delay query execution.

Defining Ancestry of Custom Objects

Cloned objects should have their Upgrade Ancestor property set so that they gain the benefit of new functionality that may be applied, during a Siebel application upgrade, to the original object from which they were cloned.

Required property

Set the Required field property to TRUE for required fields instead of writing code.

Ancestor and Required property

Update a field when another field is updated


Use the On Field Update Set user property instead of script like:

function BusComp_SetFieldValue (FieldName)
{
   if ((FieldName == "Status") && (this.GetFieldValue("Status") == "Closed"))
   {
      this.SetFieldValue("Done", TheApplication().Today());
   }
}
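The declarative equivalent is the On Field Update Set business component user property. A sketch of the entry (hedged: the conditional IIf expression and its exact quoting are assumptions; verify against the user property documentation for your Siebel version):

```
Name:  On Field Update Set 1
Value: "Status", "Done", "IIf([Status] = 'Closed', Today(), [Done])"
```

The first token names the field whose update triggers the rule, the second the field to set, and the third the expression that supplies the new value.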

Update a field when another field is updated

BC Read Only Field & Field Read Only Field:fieldname & Parent
Read Only Field

Comment configuration changes

Cloning objects

Is strongly discouraged!

Can create upgrade problems which are difficult to debug.

Errors can occur after an upgrade because C++ code in the object's class refers to a column that does not exist in the custom object.

Creates redundancy in the configuration for little or no benefit. For example, copying a BC also forces you to make copies of related applets, views, screens, and so on.

Siebel Scripting Best Practices

When to use Scripting


Do not write script if there is a way to implement the required functionality through configuration. Declarative configuration is easier to maintain and upgrade, leading to lower total cost of ownership.
Preferred Alternatives to Scripting

Field Validation

User Properties

Workflow

Personalization

Run-time Events

State Model

Follow standard naming conventions

Have the project team agree upon a standard way of naming variables so that scope and data type are identified easily. This significantly simplifies maintenance and troubleshooting efforts.

Comment Code

Commenting code is a very good practice to explain the business purpose behind the code. At the onset of the project, the team should agree upon a standard commenting practice to ensure consistency and simplify maintenance.

Include a comment header at the top of each method with an explanation of the code and its revision history. Strictly maintain these headers so that they accurately reflect the script they describe. If you do not maintain them along with the code, they eventually become confusing, misleading, or incorrect.
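A sketch of such a header in eScript (all names, dates, and the return value are placeholders; a real Pre-event handler returns the ContinueOperation constant, represented here by a string so the sketch is self-contained):

```javascript
/*====================================================================
  Method:      BusComp_PreWriteRecord
  Author:      <name>, <date>
  Updated By:  <name and date>
  Description: Validates the Status field before the record commits.
  Revision History:
    1.0  <date>  Initial version
    1.1  <date>  Added check for closed-status records
====================================================================*/
function BusComp_PreWriteRecord () {
  // ...validation logic would go here...
  return "ContinueOperation"; // placeholder for the eScript constant
}
```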

Place code in the correct Event Handler

One of the most common issues identified during script reviews is the inappropriate use of object events. Placing code in the wrong event handler can lead to altered data and may negatively impact performance.

Do not use Siebel application Pre-events (such as PreQuery, PreSetFieldValue, and PreWriteRecord) to manipulate data in objects other than the one hosting the script. The companion events, such as Query, SetFieldValue, and WriteRecord, occur after the internal and field-level validations succeed, and are therefore the appropriate events for such manipulation.

Know when to use Browser versus Server Script


Browser Script is recommended for:

Communication with the user

Interaction with Desktop Applications

Data Validation and manipulation limited to the current record.

Server Script is recommended for:

Query, Insert, Update, and Delete operations

Access to data beyond the current record.

Use Fast Script In Event Handlers that fire Frequently


Avoid placing complex code in event handlers that fire frequently, such as BusComp_PreGetFieldValue, as it may degrade application performance. The PreGetFieldValue event fires at least once for every field that is retrieved; this can amount to hundreds of calls to the event in rapid succession, so complex script in this event handler significantly degrades application performance. Developers typically use script in the PreGetFieldValue event to return a value other than the one in the database. As an alternative, developers can use a calculated field: the calculated field holds the display value and the calculation performs the logic. In this case, the Siebel application exposes the calculated field, not the actual field, to the user. For other frequently fired events, look for a configuration alternative, and if none is available, make the script as simple as possible. One alternative for complex calculations is to create a calculated field that uses the InvokeServiceMethod function. This function allows you to call a business service through a calculated field and use the output value of the business service as the calculated field value. Note that you should not display this type of calculated field in a list applet, due to the potential performance impact.

Use Option Explicit

Include Option Explicit in the <general> <declarations> section of every object containing Siebel VB code.

Option Explicit requires that you explicitly declare every variable. The compiler returns an error if the Siebel application uses a variable before declaration, thus simplifying debugging efforts. Without Option Explicit, the Siebel application defines and dimensions a variable whenever it is used in a script.

Leverage Appropriate Debugging Techniques


It is essential that you understand how to debug Siebel Applications. There are four basic
techniques:

Use Alerts or RaiseErrorText methods to pop up message boxes

Write to a file using Trace or custom methods

Use the Siebel Debugger

Use Object Manager Level Logging

Remove Unused Code from the Repository


Remove code that is :

Commented Out

Set to Inactive

Never Called

If you want to keep a record of obsolete code before removing it, you can do an export from Siebel Tools to save a copy of the script. To export a script to a text file, open the script editing window for the object in question, then choose File > Export. The script for all methods on that object will be exported to a file of type .js if the script is written in eScript, or .sbl if it is written in Siebel VB.
Alternatively, you can create an archive file, of file type .sif, with the object containing the script. Archive files contain all property definitions for the object, whereas a .js or .sbl file contains only the script.

Include Error Handling in all the Scripts

Proper Error Handling

Error Handling in eScript

Error Handling in Siebel VB
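In eScript, proper handling typically wraps risky work in try/catch and reads the exception's errText (populated by the Siebel runtime and RaiseErrorText), falling back to message for developer-thrown errors. A minimal sketch, runnable as plain JavaScript; here an ordinary Error stands in for a Siebel exception, so the errText branch is absent and the fallback fires:

```javascript
// Sketch of the eScript catch pattern. NOT Siebel-specific code:
// doWork() and Error stand in for a Siebel operation and exception.
function doWork() {
  throw new Error("lookup failed"); // simulate a failing operation
}

var logged = "";
try {
  doWork();
} catch (e) {
  // In eScript, e.errText is populated for runtime errors and
  // RaiseErrorText; fall back to e.message for thrown exceptions.
  var sMsg = (typeof (e.errText) != "undefined") ? e.errText : e.message;
  // Stabilize, then log or notify; optionally re-throw.
  logged = "Caught: " + sMsg;
}
```

In real server script the catch block would typically call RaiseErrorText(sMsg) or write the message to a trace file.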

Use RaiseError and RaiseErrorText Properly


In Siebel 7, developers sometimes use RaiseError and RaiseErrorText to display message boxes via script; however, these methods do not serve the same purpose as the MsgBox method, which was available in version 6.x and earlier. The RaiseError and RaiseErrorText methods generate a server script exception, causing the script to stop executing at that point. Therefore, it is important to place any code that must execute before the calls to these methods.

Use Exception Information


Do not ignore exceptions. Ignoring them can cause other runtime errors to occur.

It is the duty of the exception handling to:

Catch exceptions

Stabilize the application

Log exception information or notify the users of what happened

Possibly re-throw the exception

Possibly set a return code

In eScript, the exception object stores information in the:

errText attribute: exception.errText

Populated when the Siebel application encounters an error during runtime or when the developer raises an exception using RaiseErrorText.

toString() method: exception.toString()

Provides exception information from a COM object or from an exception thrown by the developer using the throw statement.

Place Return Statements Correctly : eScript

A return statement in the finally clause of a try/catch/finally block suppresses any exceptions generated in the method or thrown to the method. These exceptions will not make it out of the method. When code in the finally clause causes an exception, that exception information makes it out of the method, but the original exception information is lost. Ensure that the method itself contains the return statement. A method takes input, does something, and then returns a value, null, or an exception.
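The pitfall is easy to demonstrate in plain JavaScript, which shares eScript's behavior in this respect (the method names are invented):

```javascript
// A return in the finally clause swallows the pending exception.
function badMethod() {
  try {
    throw new Error("original failure");
  } finally {
    return "ok"; // suppresses the exception above; caller never sees it
  }
}

// Correct placement: finally does cleanup only, return stays in the body.
function goodMethod() {
  try {
    throw new Error("original failure");
  } finally {
    // cleanup only; no return here
  }
  return "ok"; // unreachable here; the exception propagates to the caller
}

var result = badMethod(); // returns "ok", no exception reaches the caller!
var caught = "";
try { goodMethod(); } catch (e) { caught = e.message; }
```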

Centralize Browser Script using the Top Object

In Browser Script, top is a shortcut to the top-level document. Using the top object, developers can write a browser script function once and call it from anywhere within the browser aspect of objects.

Note: Scripted objects have a server-side aspect which can only call server script, and a browser script aspect which can only call browser script. Thus the top object, being a browser script object, can only be referenced from browser script. This is useful for any function which needs to interact programmatically with a client application or desktop and that also needs to be called from multiple places in the application.

Know when to use Current Context versus a New Context in


Server Script
Difference between Current and New Context

Current (or UI) context deals with objects that the Siebel application created to support data that is currently available to the user.

New (or non-UI) context is a set of objects instantiated in script that have no tie to any objects or data that the user is currently viewing. Keeping these two straight is important, because the user may see programmatic manipulations of data if you use the wrong context. For example, consider a script running in any event of the Contact business component that needs to get a reference to the Contact business component to do a comparison or lookup.

Guidelines for Choosing Context

Use the current context to


Access data with which the user is currently working
Perform processing that should be visible to the user

Use a New Context to


Invisibly query a new business component
Use a business component in a different business object context

Use Smallest Possible Scope for Variables


Developers frequently create instance-level variables that can be accessed by many methods within an object. This is done at the Application level so that the script can access the objects. While this is good for the purpose of instantiating an object only once, it is a bad practice for four reasons:
1. Rules for encapsulation are usually violated
2. Scripts often step on each other
3. There is no event that guarantees that the objects are destroyed
4. It is difficult to understand the scope or state of variables, because they are instantiated in one method and accessed in others
It is far better to declare and use objects where they are needed and pass them as parameters to other methods.
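A plain JavaScript sketch of the recommendation (the function and field names are invented for illustration): declare the object where it is needed, pass it as a parameter, and release it before leaving the method:

```javascript
// Receives its dependency as a parameter instead of reading a
// shared instance-level variable.
function buildGreeting(oContact) {
  return "Hello, " + oContact.name;
}

function processContact() {
  var oContact = { name: "Smith" };        // smallest possible scope
  var sGreeting = buildGreeting(oContact); // pass as parameter
  oContact = null;                         // release when done
  return sGreeting;
}
```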

Instantiate Objects only as needed


Create object instances on an as-needed basis. Create object instances that depend on the outcome of an evaluation only after that evaluation. Otherwise, your code may create unused object instances that can negatively impact performance.

Correct order:
1. Evaluate condition
2. Create objects
3. Use objects

Incorrect order:
1. Create objects
2. Evaluate condition
3. Use objects

Destroy Object Variables when no longer needed


Memory leaks are a common problem, so when a script creates an object reference, that same script must destroy the object reference before leaving the method. Object references include:
COM objects
Property sets
Business services
Business components
Business objects
Applets
In eScript, destroy an object by setting it to null (oBC = null). In Siebel VB, destroy an object by setting it to Nothing (Set oBC = Nothing).

Use conditional Blocks to Run Code


Just as objects should be instantiated only as needed, code should be run only as needed. A typical example is in the BusComp_PreSetFieldValue event. Code in this event is usually associated with a specific field, not all fields. Check which field the code is modifying before going any further.

Example:

function BusComp_PreSetFieldValue (FieldName, FieldValue)
{
   switch (FieldName)
   {
      case "Status":
         // do something
         break;
      // ...
   }
}

Verify Objects Returned


Always verify that the object returned is the one expected, especially when calling methods such as ActiveBusObject, ActiveBusComp, and ParentBusComp.
ActiveBusObject returns the business object for the business component of the active applet. When a business component can be the child of more than one business object, check which object is actually returned from a call to this method.
ActiveBusComp returns the business component associated with the active applet. When running script outside of a business component, verify the active business component with a call to this method.

Verify field is active before use


Calling GetFieldValue or SetFieldValue on an inactive field may lead to lost data or logic going astray.

A Server Script field is active if it is:

A system field (Id, Created, Created By, Updated, Updated By)

A field whose Link Spec property on the BC is set to TRUE

A field whose Force Active property on the BC is set to TRUE

Included in the applet definition on the active view

Used in the calculation of a calculated field on the active applet

Explicitly activated using BusComp.ActivateField(strFldName)

A Browser Script field is active if it is:

The Id field

A field visible in the UI

Only use ActivateField if an ExecuteQuery statement follows it. As a standalone statement, ActivateField will not implicitly activate a non-activated field. ActivateField tells the Siebel application to include this database column in the next SQL statement it executes on the business component which just had a field activated.

Use Proper View Mode for Queries

View mode settings control the formation of the SQL WHERE clause that the Siebel application sends to the database, by using team or position visibility to limit the records available in the business component queried in the script.

Setting a query to AllView visibility mode gives the user access to all records, which may differ from the view mode of the current view in the UI. For example, a user may have SalesRep visibility in the UI whereas the script gives the user All visibility. This would give the user access to records the user might not need to access or should not be able to access.

Setting the view mode is especially important in an environment with mobile Web client users. Mobile users have a subset of data in their local databases. If you do not set the view mode correctly for limited-visibility objects, unexpected behaviour can occur, such as resetting foreign keys. For example, the application may set the Primary ID to No Match Row ID if a child record does not exist in the local database. This update has the potential to synchronize back up to the server, causing a data integrity problem.

Use ForwardOnly Cursor Mode


If you do not specify a cursor mode when querying with Siebel eScript or Siebel VB, the Siebel application uses the default cursor mode of ForwardBackward. To support this cursor mode, the system creates a cache to maintain the entire record set.
If you traverse through the record set from FirstRecord using NextRecord and will not return to a previous record, use ForwardOnly cursor mode. The system will not need to create the cache, improving performance. This is particularly true if you perform a lookup or if you access a pick list.
Example: bcAccount.ExecuteQuery(ForwardOnly);

Verify Existence of Valid Record After Querying


When performing a query, always check that a record is returned, through the use of the FirstRecord, NextRecord, or LastRecord methods, before attempting to retrieve or set a field value for the record. Do this even if it seems impossible that a record will not return.

Example:

bcContact.ClearToQuery();
bcContact.SetSearchSpec("Id", sContactId);
bcContact.ExecuteQuery(ForwardOnly);
// Check that a record was actually returned
// by examining the return value of FirstRecord()
if (bcContact.FirstRecord())
{
   // OK to perform data processing
}

Use Switch or Select Case Statements


When you need to evaluate and compare a single expression with many different possibilities, the fastest and most readable way of doing this is to use switch (eScript) or Select Case (Siebel VB).
It is more efficient because the expression is evaluated once, then compared to different values.
It is easier to read than a series of nested if...else if statements. Using switch or Select Case statements can frequently compact multiple pages of script into a single page.
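A minimal eScript-style sketch, runnable as plain JavaScript (the status values and labels are invented for illustration):

```javascript
// One expression evaluated once, compared against many values,
// instead of a chain of nested if/else if statements.
function priorityLabel(sStatus) {
  switch (sStatus) {
    case "Open":
      return "Needs attention";
    case "Pending":
      return "Waiting on customer";
    case "Closed":
      return "Done";
    default:
      return "Unknown status";
  }
}
```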

Call Methods Once If Results Do Not Change

In a loop, be careful not to call a method more times than is necessary, or the number of method calls increases linearly with the number of loop iterations.

The number of method calls grows from 1 to n, where n is the number of iterations!

In the script below the number of method calls in the loop is 3n, where n is the number of iterations. Imagine where n = 100.

Here is an example from the GetChild method of the application object.


var iCnt = psInputs.GetChildCount();
for (var i = 0; i < iCnt; i++)
{
if (psInputs.GetChild(i).GetType() == sType)
{
psChild = psInputs.GetChild(i); break;
}
}

Here is how to make this loop more efficient.

Now the number of method calls in the loop is 2n.

var iCnt = psInputs.GetChildCount();


var lPS_child;

for (var i = 0; i < iCnt; i++)


{
lPS_child = psInputs.GetChild(i);
if (lPS_child.GetType() == sType)
{
psChild = lPS_child;
break;
}
}

Here is another example, from the BusComp_SetFieldValue event of the Contact business component. Here, if the first condition is satisfied, the last check is not necessary at all. The logic says: if the EnableIntegration value is TRUE, or anything else, continue.

Since a conditional AND fails as soon as any condition does not evaluate to true, if the value is not TRUE then the entire if fails and the last condition is never evaluated. If it is TRUE, then it automatically cannot be an empty string.

if (TheApplication().GetProfileAttr("IntegrationUser") != "Y" &&
    TheApplication().GetProfileAttr("EnableIntegration") == "TRUE" &&
    TheApplication().GetProfileAttr("EnableIntegration") != "")
{

Here is how to fix this to be more efficient.

if (TheApplication().GetProfileAttr("IntegrationUser") != "Y" &&
    TheApplication().GetProfileAttr("EnableIntegration") == "TRUE")
{

Here is another example from the GetChild method of the Integration


business service.

for (var i = 0; i < psInputs.GetChildCount(); i++)


{
if (psInputs.GetChild(i).GetType() == sType)
{psChild = psInputs.GetChild(i); break;}
}

Here is how to fix this script to be more efficient. This goes from 4n method
calls to 2n method calls where n is the number of iterations. A savings of
50%!

var li_child_count = psInputs.GetChildCount();


for (var i = 0; i < li_child_count; i++)
{
var lPS_child = psInputs.GetChild(i);
if (lPS_child.GetType() == sType)
{
psChild = lPS_child;
break;
}
}

Script The BusComp_PreGetFieldValue Event Carefully

The BusComp_PreGetFieldValue event is called for every field that is retrieved in the SQL SELECT statement, so it fires very frequently. Any script at all in this event causes a severe performance penalty.

For instance, here is an example of how many times the PreGetFieldValue event fires on a simple query of the Account business component.

Fields are queried multiple times for calculated fields, fields that have Force Active set to TRUE, fields that have their Link Spec property set to TRUE, and fields that are in the applet definitions.

The Order Entry Orders and Sync Period business components currently have script in this event!

fieldname: Parent Account Name


fieldname: Main Phone Number
fieldname: Account Status
fieldname: Type
fieldname: Sales Rep
fieldname: Account Type Code
fieldname: Street Address
fieldname: City
fieldname: Postal Code
fieldname: Street Address 2
fieldname: State
fieldname: Country
fieldname: Home Page

fieldname: Industry
fieldname: Main Fax Number
fieldname: Id
fieldname: Id

Other events that fire frequently are:

PreSetFieldValue
ChangeRecord
ShowControl
PreCanInvokeMethod

Use care when scripting these events.

Use The Required Field Property

Setting a field's Required property to TRUE should replace any script that does this. In the ValidateUpdateContactTransaction method of the Contact business component, the following script is superfluous, since the fields are already set to required and the message given does not add to the message the Siebel application already delivers by default.

if (LastName == "")
{
TheApplication().SetProfileAttr("AM05Pending", "N");
sMessageText += ((sMessageText=="") ? "" : "\n") + "\'Last Name\' is a required
field. Please enter a value for the field. ";
TheApplication().RaiseErrorText(sMessageText);
}
if (FirstName == "")
{
TheApplication().SetProfileAttr("AM05Pending", "N");
sMessageText += ((sMessageText=="") ? "" : "\n") + "\'First Name\' is a required
field. Please enter a value for the field. ";
TheApplication().RaiseErrorText(sMessageText);
}

The Siebel application already displays a default message for a required field, and since both the Last Name and First Name fields come out of the box as required, script like this can be deleted.

Call Custom Methods Only As Necessary

To avoid the overhead of having, maintaining, and calling a custom method, call one only when it makes sense. For a very involved piece of custom logic that should be reusable, or that does not make sense to put in the calling method, a separate custom method makes sense.

In the BusComp_PreNewRecord event of the Contact business component, the method simply calls the DataSync_PreNewRecord method. This is unnecessary overhead.

All the DataSync_PreNewRecord method does is set a flag, so we can go from 120 lines of interpreted code to 32 lines by simply putting this line in the BusComp_PreNewRecord event and getting rid of the unnecessary lines.

function BusComp_PreNewRecord ()
{
/*=================================================================
-- Comments Section
Author:
Updated By: <name and date>
Description: Calls the DataSync_PreNewRecord() custom function.
=================================================================*/
   try {
      // Local variable declarations
      var iReturn = ContinueOperation;
      var sExceptionMsg = "";

      // Data Synch code separation - only for BusComp_PreSetFieldValue
      // and BusComp_PreWriteRecord
      if (TheApplication().GetProfileAttr("AllowDataSyncIntegration") != "N") {
         // Data Synch code
         sDelFlag = "Y";
      }
   }
   catch (e) {
      TheApplication().ApplicationErrorHandling(e, this.Name() + "Contact_PreNewRecord()");
      // Throw the error message to the browser or the calling object
      sExceptionMsg = (typeof(e.errText) != 'undefined') ? e.errText : e.message;
      TheApplication().RaiseErrorText(sExceptionMsg);
   }
}

Use Conditional Blocks To Run Code

To ensure that code is executed only as needed

BusComp_PreSetFieldValue Example:
Code typically applies to a specific field
Perform field check before executing

function BusComp_PreSetFieldValue (FieldName, FieldValue)
{
   switch (FieldName)
   {
      case "Status":
         // do something
         break;
      // ...
   }
}

Use Conditional Blocks Example

Here is some script from the BusComp_PreSetFieldValue event of the AccountEAI business component. Code runs before the actual field being updated is even determined.

var sPrimAddrId = this.GetFieldValue("Primary Address Id");

if (TheApplication().GetProfileAttr("AllowDataSyncIntegration") != "N") {
   try {
      iReturn = DataSync_PreSetFieldValue(FieldName, FieldValue);
   }
   catch (e) {
      sMessageText = (typeof(e.errText) != 'undefined') ? e.errText : e.message;
   }
   finally {
      goto end_of_try;
   }
}
// Application Code
var sStatus = GetFormattedFieldValue("Account Status");
var input = TheApplication().NewPropertySet();

Conditional Blocks Another Example

The following is another example of code executing on every invocation of the method without necessity. It is from the ExplicWriteRecord method of the Account Entry Applet - no buttons object.

var psIn, psOut, sStatus;
var sText = "", sCode = "", sTransNumber;
try
{
   // Template - variable declarations
   psIn = TheApplication().NewPropertySet();
   psOut = TheApplication().NewPropertySet();
   sStatus = this.BusComp().GetFieldValue("Account Status");
   // Justin Kraus 4/5/02 Added RunAccountTransaction Code
   var RunAccountTransaction =
      TheApplication().GetProfileAttr("RunAccountTransaction");
   if (RunAccountTransaction == "YES")
   {
      if (sStatus == "New")
         sTransNumber = "AM02";
      else
         sTransNumber = "AM03";
      this.InvokeMethod(sTransNumber, psIn, psOut); // Custom - Transaction Number
      sText = TheApplication().GetProfileAttr("DisplayText");
      sCode = TheApplication().GetProfileAttr("DisplayCode");

Conditional statements - Example

orderType = lineItemBC.GetFieldValue("Order Type");
serviceAccountId = lineItemBC.GetFieldValue("Service Account Id");
if ((orderType == "New Activation") || (orderType == "Change MSISDN") ||
    (orderType == "Migration"))
{
   if (serviceAccountId != null && serviceAccountId != "")
   {
      heldFlag = lineItemBC.GetFieldValue("AW MSISDN Held Flag");
      action = lineItemBC.GetFieldValue("Action Code");
      if ((action != null && action == "Add") && (heldFlag != null && heldFlag == 'Y'))
      {
         CanInvoke = "TRUE";
      }
   }
}

The conditional evaluation logic above can be moved to a calculated BC field.
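For instance, the checks in the script above might be replaced by a calculated field on the line item BC. A sketch (the field names are taken from the script above; the field name and the exact IIf quoting are assumptions to be verified in Siebel Tools):

```
Field:            AW Can Invoke Held Check (Calculated = TRUE)
Calculated Value: IIf(([Order Type] = "New Activation" OR
                       [Order Type] = "Change MSISDN" OR
                       [Order Type] = "Migration")
                      AND [Service Account Id] IS NOT NULL
                      AND [Action Code] = "Add"
                      AND [AW MSISDN Held Flag] = "Y", "TRUE", "FALSE")
```

The PreCanInvokeMethod logic can then simply read this field instead of re-running the comparisons in script.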

Creating Methods As Wrappers For Simple Method Calls

There is no need to create a method that wraps a single method call such as GetProfileAttr. The call can be made wherever needed, and wrapping it provides no benefit; it just introduces more code and slows performance.

A good example is the AreValueOffersOnOrOff method in the Account Entry Applet - Read Only object. This method uses 44 lines to wrap a call to GetProfileAttr:

TheApplication().GetProfileAttr("Are Offers On Or Off");

Remove Debugging Code

Debugging code should be removed before going into production.

Here is an example from the CreateCommEvent event of the Account Admin Entry Applet:

TheApplication().Trace("BUTTONCODE: Invoking the creation of a comm event with the business service.");

Use ActivateMultipleFields

ActivateMultipleFields, GetMultipleFieldValues, and SetMultipleFieldValues are new in Siebel 7 and can greatly reduce redundant lines.

var lbc_account = this;


var lPS_FieldNames = TheApplication().NewPropertySet();
var lPS_FieldValues = TheApplication().NewPropertySet();
var ls_account_products;
var ls_agreement_name;
var ls_project_name;
var ls_description;
var ls_name;
//set up the property set which will be used in all three methods
//to hold the field names.
lPS_FieldNames.SetProperty("Account Products", "");
lPS_FieldNames.SetProperty("Agreement Name", "");
lPS_FieldNames.SetProperty("Project Name", "");
lPS_FieldNames.SetProperty("Description", "");
lPS_FieldNames.SetProperty("Name", "");

//activate the fields using the property set which has the field names
lbc_account.ActivateMultipleFields(lPS_FieldNames);
lbc_account.ExecuteQuery(ForwardOnly);
if (lbc_account.FirstRecord())
{
//retrieve the values. This method acts sort of like a business service
//in that there is an input property set and an output property set. The
//field values will be in the second property set passed in.
lbc_account.GetMultipleFieldValues(lPS_FieldNames, lPS_FieldValues);
//loop through property set to get values.
ls_account_products = lPS_FieldValues.GetProperty("Account Products");
ls_agreement_name = lPS_FieldValues.GetProperty("Agreement Name");
ls_project_name = lPS_FieldValues.GetProperty("Project Name");
ls_description = lPS_FieldValues.GetProperty("Description");
ls_name = lPS_FieldValues.GetProperty("Name");

//now set new values in the property set


lPS_FieldNames.SetProperty("Account Products", "All My Products");
lPS_FieldNames.SetProperty("Agreement Name", "Siebel Agreement");
lPS_FieldNames.SetProperty("Project Name", "Siebel Project #2");
lPS_FieldNames.SetProperty("Description", "This is the description");
lPS_FieldNames.SetProperty("Name", "Joey Joe Joe Junior Shabbidoo");
//set the field values
lbc_account.SetMultipleFieldValues(lPS_FieldNames);
//commit the data
lbc_account.WriteRecord();
}
This has its greatest benefits when interacting with Siebel
Object managers from the COM, Java, C++, and CORBA
interfaces since SISNAPI calls are reduced.

Use The Associate Method

Use the Associate method of the association business component returned from
GetAssocBusComp. This method automatically and correctly creates a row in the
intersection table. Developers often script this insert by hand when the
Associate method is the right way to do it.

var lBC_mvg = this.GetMVGBusComp("Sales Rep");
var lBC_associate = lBC_mvg.GetAssocBusComp();
with (lBC_associate)
{
    ClearToQuery();
    SetSearchSpec("Id", sSomeRowId);   // sSomeRowId holds the row to associate
    ExecuteQuery(ForwardOnly);
    if (FirstRecord())
        Associate(NewAfter);
}

Nested query loops

Problem
A parent business component is queried, iterated through, and for each record a child
business component is queried
Solution
Justify the business requirement for it and avoid it as much as possible. Also, when a
parent-child pair of business components is taken from the same business object,
querying the parent automatically queries the child, so no separate query is necessary for
the child business component.
Example
bcOrder.ExecuteQuery( ForwardOnly );
if ( bcOrder.FirstRecord() )
{
do {
// ...
// Query "Order Entry - Orders/Order Entry - Line Items" in the current BO
}
while ( bcOrder.NextRecord() );
}

Cache Data
Problem
Often customers execute the exact same SQL statements from various locations in script. This
generates an excessive number of script API calls and redundant business
component queries.
Solution
Cache a limited set of data within your script
Gain
1) Many script API calls are removed.
2) Redundant BusComp and SQL executions are removed.
(!) Exception: do not cache data that is too complex, too large, or too
dynamic.
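The caching pattern can be sketched as below. This is illustrative only: the expensive BusComp query is replaced by a stub with a call counter so the sketch is runnable here; in real eScript the loader would use GetBusObject/GetBusComp/ExecuteQuery, and the function names are hypothetical.

```javascript
// Sketch: cache values already fetched so the same lookup is never run twice.
var gLookupCount = 0;

function queryAccountName(id) {      // stand-in for the real BusComp query
    gLookupCount++;
    return "Account-" + id;
}

var gNameCache = {};

function getAccountName(id) {
    if (!(id in gNameCache)) {
        gNameCache[id] = queryAccountName(id);   // query once per distinct Id
    }
    return gNameCache[id];                       // repeat calls hit the cache
}

getAccountName("1-ABC");
getAccountName("1-ABC");   // served from the cache, no second query
getAccountName("1-XYZ");
```

Two distinct Ids produce exactly two real lookups, however many times the values are read.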

Use a Join Instead Of A Scripted Query


Problem
eScripts might be written to retrieve field values from some other business components.
Solution
Think Join. Very often, a simple Join can replace a piece of script.
Example
If, from a BC, you want to get the value of ChildBC.Field1, you can implement a join instead.
Note: This recommendation is valid only if the data model change makes sense

Reuse objects
Problem
Some objects might be instantiated repeatedly in the same eScripts
Solution
Avoid excessive instantiation of Objects. Try to reuse existing instances of objects
instead of creating new ones
Exception
It is sometimes not possible to reuse objects, for instance UI related objects that could
result in UI refresh

Example
DON'T:
var boAcc1 = TheApplication().GetBusObject( "Account" );
var boAcc2 = TheApplication().GetBusObject( "Account" );
DO:
var boAcc1 = TheApplication().GetBusObject( "Account" );
var boAcc2 = boAcc1;
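The cost difference can be made visible with a counter. This sketch is illustrative only: the object factory is a stub standing in for TheApplication().GetBusObject, and the names are hypothetical.

```javascript
// Sketch: count instantiations to show why reusing an instance is cheaper.
var gInstantiations = 0;

function getBusObjectStub(name) {   // stand-in for TheApplication().GetBusObject
    gInstantiations++;
    return { name: name };
}

// DON'T: two separate instantiations of the same business object
var boAcc1 = getBusObjectStub("Account");
var boAcc2 = getBusObjectStub("Account");
var wastefulCount = gInstantiations;   // two expensive calls

// DO: reuse the existing instance
gInstantiations = 0;
var boAcc3 = getBusObjectStub("Account");
var boAcc4 = boAcc3;                   // no second call
var frugalCount = gInstantiations;
```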

EIM General Recommendations


Some general recommendations for running EIM are:

Always run EIM processes during off-peak hours, if possible. This ensures that maximum
processing capacity is available for the EIM processes and also reduces the load on
connected users.

After every EIM run, check the status of the records processed in the EIM tables using the following query:

select count(*), IF_ROW_STAT from <EIM Table>
where IF_ROW_BATCH_NUM = ?
group by IF_ROW_STAT;
This will indicate if there are tables or columns that are being processed unnecessarily.
If many rows have a status of PARTIALLY IMPORTED, it is likely that further tuning
can be done by excluding base tables and columns that are not necessary.

Set-based operations are faster than row-based operations because they process data
in sets of records rather than on a row-by-row basis. Therefore, for an initial load,
set-based processing should be selected. This can be done by setting SET BASED
LOGGING to TRUE.

Always delete batches from interface tables upon completion of EIM process. Leaving old
batches in the interface table wastes space and can adversely affect performance.

Complete testing is recommended to fine-tune your EIM process. Run a large number of
identical EIM jobs with similar data. This helps uncover incorrect data mappings and
also provides insight into the optimal sizing of the EIM batches.

Avoid activating SQL Trace flags in a production environment unless absolutely
necessary.

Five ways to tune the EIM process


Broadly speaking, fine-tuning of the EIM process can be categorized into five different but
inter-related steps:
1. Database Server Optimization
2. Table Level Optimization
3. Configuration File Parameters Optimization
4. Batch Processing Optimization
5. Run Time Optimization

1. DATABASE SERVER OPTIMIZATION


Optimal database server performance can be achieved in the following ways.

OPTIMAL UTILIZATION OF DATABASE SPACE


EIM uses database server space for base tables, secondary tables, interface
tables, indexes, the database manager transaction logging area, and transaction
rollback areas. Judicious use of database space requires careful planning. This
involves:
- Determining the total number, and types, of users of the Siebel applications.
- Determining the entities required to support the functionality provided by the application.
- Estimating the average number of entities per user and the total number of records per
entity for the total user base.
- Determining space for Siebel application data.

These calculations help determine the optimal database server space.

IMPROVING INPUT/OUTPUT (I/O) PERFORMANCE


Input/output (I/O) performance can be improved by evenly distributing the I/O load
across several disk devices, assigning the database objects used to different disks.
Most RDBMSs have the ability to assign a given database object, such as a table, an
index, a database log file, or temporary workspace, to a specific disk.

Redundant Array of Independent Disks (RAID) technology can then be used to provide an
abstraction layer above these disk devices, making them appear as a single logical
disk to the operating system and the RDBMS.

2. Table level Optimization


Table level optimization can be done in the following ways:

Ensure indexes exist for all the tables involved in EIM

Distribute the more heavily used tables and indexes across disk devices to improve
input/output (I/O) performance.

Based on the business requirements, organizations should put the most heavily used
EIM tables and their corresponding indexes on a different physical disk from the Siebel
base tables and indexes, because all of them are accessed simultaneously during EIM
operations.

Identify the most time-intensive SQL statements and create any additional indexes
necessary to improve the performance of this long-running SQL.

During the initial EIM load, unnecessary indexes can be dropped, saving a significant
amount of time. Typically, for a target base table or parent table (such as
S_ORG_EXT) only the primary index and the unique indexes are needed, and for a
non-target base table or child table (such as S_ADDR_ORG) only the primary index,
the unique indexes, and the foreign key indexes are needed. All the remaining indexes
can be dropped for the duration of the EIM import.


3. CONFIGURATION FILE PARAMETERS OPTIMIZATION


This is the easiest and fastest way of tuning the EIM process. It involves editing the EIM
configuration (.ifb) file with one or more specific parameters.

ONLY BASE TABLES or IGNORE BASE TABLES. A single EIM table is
normally mapped to multiple base tables. For example, EIM_ACCOUNT is mapped to
S_PARTY, S_ORG_EXT, and S_ADDR_ORG, as well as others. The default
configuration is to process all base tables for each EIM table. These parameters limit
the affected base tables to those that are relevant for a particular EIM task.

Excerpt from an IFB file using the ONLY BASE TABLES or IGNORE BASE TABLES parameter:

[Siebel Interface Manager]
USER NAME = "SADMIN"
PASSWORD = "SADMIN"
PROCESS = IMPORT EIM Type

[IMPORT EIM Type]
TYPE = IMPORT
BATCH = $batchnum
TABLE = EIM_ACCOUNT
ONLY BASE TABLES = S_ORG_EXT, S_ADDR_ORG
IGNORE BASE TABLES = S_ACCNT_POSTN, S_ORG_TYPE

ONLY BASE COLUMNS or IGNORE BASE COLUMNS. By default, EIM
processes all base columns for each base table specified, but by using this parameter we
can limit the affected base columns in a particular base table to those that are relevant
for a particular EIM task. An additional performance increase can be seen by excluding
unutilized foreign key columns, as EIM then does not need to perform the interim
processing (via SQL statements) to resolve them.

Excerpt from an IFB file using the ONLY BASE COLUMNS or IGNORE BASE COLUMNS parameter:

[Siebel Interface Manager]
USER NAME = "SADMIN"
PASSWORD = "SADMIN"
PROCESS = IMPORT EIM Type

[IMPORT EIM Type]
TYPE = IMPORT
BATCH = $batchnum
TABLE = EIM_ACCOUNT
ONLY BASE TABLES = S_ORG_EXT
ONLY BASE COLUMNS = S_ORG_EXT.NAME, S_ORG_EXT.LOC

TRIM SPACES. This option is used for the IMPORT process only. It specifies whether
the character columns in the interface tables should have trailing spaces removed before
importing. The default value is TRUE. This setting saves disk space and buffer
pool space for the tablespace data.

NUM_IFTABLE_LOAD_CUTOFF EXTENDED PARAMETER. This option is mostly
used for the MERGE process. When set to a positive value, this parameter reduces the
amount of time taken to load repository information, by loading repository information
for the required interface tables only.

INSERT ROWS. This option allows inserts to be suppressed when the base table is already
fully loaded and the table is the primary table for an EIM interface table used to load and
update other tables. The command format is INSERT ROWS = <table name>, FALSE.

UPDATE ROWS. This option allows updates to be suppressed when the base table is
already fully loaded and does not require updates such as foreign key additions, but the
table is the primary table for an EIM interface table used to load and update other tables.
The command format is UPDATE ROWS = <table name>, FALSE.
Excerpt from an IFB file using the INSERT ROWS and UPDATE ROWS parameters:

[Siebel Interface Manager]
USER NAME = "SADMIN"
PASSWORD = "SADMIN"
PROCESS = IMPORT EIM Type

[IMPORT EIM Type]
TYPE = IMPORT
BATCH = $batchnum
TABLE = EIM_ACCOUNT
ONLY BASE TABLES = S_PARTY, S_ORG_EXT, S_BU
INSERT ROWS = S_PARTY, TRUE
UPDATE ROWS = S_PARTY, FALSE
INSERT ROWS = S_ORG_EXT, TRUE
UPDATE ROWS = S_ORG_EXT, FALSE
INSERT ROWS = S_BU, TRUE
UPDATE ROWS = S_BU, FALSE

SQLPROFILE. This option greatly simplifies the task of identifying the most
time-intensive SQL statements. It places the most time-intensive SQL statements in the
file specified.

Excerpt from an IFB file using the SQLPROFILE parameter:

[Siebel Interface Manager]
USER NAME = "SADMIN"
PASSWORD = "SADMIN"
PROCESS = IMPORT EIM Type
SQLPROFILE = c:\temp\eimsql.sql

[IMPORT EIM Type]
TYPE = IMPORT
BATCH = $batchnum
TABLE = EIM_ACCOUNT

This places the most time-intensive SQL statements in the file eimsql.sql.

TRACE LEVEL SETTINGS. Tracing information during an Enterprise Integration
Manager (EIM) process is immensely helpful for diagnosing common performance-related
problems. The parameters used for this purpose are SQL Trace Flags, Error Flags, and
Trace Flags. The EIM log file can contain different levels of detail depending on the
values of these parameters.
Listed below are the different levels of tracing that can be set, and their purposes:
Flag        Value   Information
SQL Trace   8       Shows all SQL statements that make up the EIM task.
SQL Trace   1,2,4   Used for logging at the ODBC level.
SQL Trace   2       Additional information about the number of parse, execute,
                    and fetch calls and timing information about each call.
Error       1       Detailed explanation of rows that were not successfully
                    exported.
Trace       1       Determines the amount of time EIM spends on each step of
                    the EIM task.
Trace       2       Traces all substitutions of user parameters.
Trace       4       Traces all user key overrides.
Trace       8       Traces all interface mapping warnings.
Trace       32      Traces all file attachment status.

Listed below are different permutations of the above parameters, which can be helpful in
tuning the EIM process:
- Set the Error flag = 1, the SQL flag = 1, and the Trace flag = 1 to start.
This setting will show errors and unused foreign keys.
- Set the Error flag = 1, the SQL flag = 8, and the Trace flag = 3.
These settings will produce a log file with SQL statements that include how long each
statement took, which is very useful for optimizing SQL performance.
- Set the Error flag = 0, the SQL flag = 0, and the Trace flag = 1.
These settings will produce a log file showing how long each EIM step took, which is
useful when working out the optimal batch size, as well as for monitoring deterioration.
- Set the Error flag = 1, the SQL flag = 8, and the Trace flag = 1.
These values are recommended for all normal diagnostic purposes during an EIM process.

FILTER QUERY. This option is used for the IMPORT process only. It specifies a filter
query that runs before the import process and eliminates from further processing all rows
that fail it. The query expression should be a self-contained WHERE clause expression
without the WHERE keyword, and should use only unqualified column names from the
interface table, or literal values, such as NAME IS NOT NULL.
An example of such a query is FILTER QUERY = (ACCNT_NUM 1500)

UPDATE STATISTICS. This parameter is applicable to the DB2 database platform
only. The default setting is TRUE. It enables EIM to dynamically update the statistics
on the physical characteristics of the interface table and the associated indexes. It can
be used to arrive at a set of statistics on the EIM tables that can be saved using the
db2look utility and then reapplied on subsequent runs.
The default setting should be used when an interface table has had many updates. Once
an optimal set of statistics has been determined, UPDATE STATISTICS should be set to
FALSE to save time during the EIM runs.
Excerpt from an IFB file using the UPDATE STATISTICS parameter:

[Siebel Interface Manager]
USER NAME = "SADMIN"
PASSWORD = "SADMIN"
PROCESS = IMPORT EIM Type

[IMPORT EIM Type]
TYPE = IMPORT
BATCH = $batchnum
SET BASED LOGGING = TRUE
UPDATE STATISTICS = FALSE
TABLE = EIM_CONTACT
INSERT ROWS = S_PARTY, FALSE
UPDATE ROWS = S_PARTY, FALSE
INSERT ROWS = S_CONTACT, FALSE
UPDATE ROWS = S_CONTACT, TRUE
USING SYNONYMS = FALSE
ONLY BASE TABLES = S_PARTY, S_CONTACT
ONLY BASE COLUMNS = S_PARTY.PARTY_UID, S_PARTY.PARTY_TYPE_CD, \
S_CONTACT.PERSON_UID, S_CONTACT.BU_ID, \
S_CONTACT.CONSUMER_FLG

USING SYNONYMS. This parameter is used for checking account synonyms during the
IMPORT process. The default setting is TRUE. When account synonyms are not needed,
set this parameter to FALSE. This saves processing time because the queries that look
up account synonyms in the S_ORG_SYN table are not run.

USE INDEX HINTS. This parameter is applicable to the Microsoft SQL Server and
Oracle database platforms only. The default setting is FALSE. It controls whether EIM
issues hints to the underlying database to improve performance and throughput. Test
EIM processing with both TRUE and FALSE to determine which setting provides better
performance for each of the respective EIM jobs.

4. Batch Processing Optimization

BATCH SIZE. The number of rows processed in a single batch is called the batch size.
The optimal batch size may vary depending upon the amount of buffer cache available.
To reduce demands on resources and improve performance, smaller batch sizes should
be preferred. The following points should be kept in mind when deciding the batch size:
1. It should not be more than 100,000 rows.
2. For an initial load, it should be between 25,000 and 30,000 rows. For ongoing loads, it
should be between 2,500 and 10,000 rows with Transaction Logging, or 10,000 to
15,000 rows without Transaction Logging.
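As a runnable illustration of the batch-size guidance above, the sketch below assigns batch numbers to staged rows so that each batch stays within a chosen size. Only the IF_ROW_BATCH_NUM column name comes from the EIM tables discussed in this chapter; the row objects and helper function are hypothetical.

```javascript
// Sketch: group consecutive staged rows into fixed-size EIM batches by
// computing IF_ROW_BATCH_NUM with integer division.
function assignBatchNumbers(rows, batchSize, firstBatch) {
    for (var i = 0; i < rows.length; i++) {
        rows[i].IF_ROW_BATCH_NUM = firstBatch + Math.floor(i / batchSize);
    }
    return rows;
}

// 25,000 staged rows at a batch size of 10,000 (an ongoing-load size from
// the guidance above) yields three batches: 1, 2, and 3.
var staged = [];
for (var n = 0; n < 25000; n++) staged.push({});
assignBatchNumbers(staged, 10000, 1);
```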

NUMBER OF RECORDS IN THE EIM TABLE. Limit the number of records in the interface
tables to those that are being processed. For example, if the batch size is 19,000 rows
and there are 8 EIM processes running in parallel, then there should be 152,000 rows in
the interface table. Under no circumstances should there be more than 250,000 rows in
any single interface table. Increasing the number of records in an EIM table causes
object fragmentation, full table scans, and large index range scans, resulting in reduced
performance of the EIM jobs.

As far as possible, try to divide EIM batches into insert-only transactions and update-only
transactions. The following two IFB excerpts demonstrate dividing EIM batches into
insert-only and update-only transactions.

Excerpt from old IFB for mixed transactions:


[Weekly Accounts]
TYPE = IMPORT
BATCH = 1-10
TABLE = EIM_ACCOUNT
ONLY BASE TABLES = S_ORG_EXT
IGNORE BASE COLUMNS = S_ORG_EXT.?

Excerpt from modified IFB for separate insert/update transactions:

[Weekly Accounts New]


TYPE = IMPORT
BATCH = 1-2
TABLE = EIM_ACCOUNT
ONLY BASE TABLES = S_ORG_EXT
IGNORE BASE COLUMNS = S_ORG_EXT.?
INSERT ROWS = TRUE
UPDATE ROWS = FALSE
[Weekly Accounts Existing]
TYPE = IMPORT
BATCH = 3-10
TABLE = EIM_ACCOUNT
ONLY BASE TABLES = S_ORG_EXT
ONLY BASE COLUMNS = S_ORG_EXT.NAME, S_ORG_EXT.LOC, S_ORG_EXT.?
INSERT ROWS = FALSE
UPDATE ROWS = TRUE

BATCH RANGE. Use batch ranges in the form BATCH = x-y. This enables running with
smaller batch sizes while avoiding the startup overhead on each batch. Note that there is
a limit to the number of batches that can be run in a single EIM process: 1,000 batches.
Excerpt from IFB using batch range
[Siebel Interface Manager]
USER NAME = "SADMIN"
PASSWORD = "SADMIN"
PROCESS = IMPORT EIM Type
[IMPORT EIM Type]
TYPE = IMPORT
BATCH = 1-10
TABLE = EIM_ACCOUNT
ONLY BASE TABLES = S_ORG_EXT

Import parents and children separately. Wherever possible, load data such as
Accounts, Addresses and Teams at the same time, using the same interface
table.

5. Run Time Optimization

PARALLEL PROCESSING. For EIM jobs that have no interface or base tables in
common, running them in parallel can help increase the EIM throughput rate. No special
setup is required for this. But for concurrent EIM processes that run against the same
interface table, the parallel EIM jobs must use different batch numbers, or the database
used must support row-level locking.
Running EIM tasks in parallel should be the last option for EIM optimization, as it may
cause a deadlock when multiple EIM processes access the same interface table
simultaneously. So before running tasks in parallel, check the value of the Maximum
Tasks parameter. This parameter specifies the maximum number of tasks that can be
run at a time for the EIM process.
This parameter can be found on the Server Administration screen, under Server
Component Parameters.

DISABLE TRIGGERS. Disabling database triggers can also help increase the EIM
throughput rate. This can be done by running the Generate Triggers server task with
both the REMOVE and EXEC parameters set to TRUE from the Server Administration
screen. Note that Workflow Manager and Assignment Manager will not function for new
or updated data after the triggers are disabled.

DOCKING TRANSACTION LOGGING. This parameter controls the logging of
transactions into the S_DOCK_TXN_LOG table for the purpose of routing data to mobile
web clients. The default value is FALSE. If there are no mobile web clients, the default
setting should remain. Even if there are mobile web clients, set this parameter to FALSE
during initial data loads: with a large volume of data, it may take quite a long time for the
Transaction Processor and Transaction Router tasks to process the changes for each of
the Remote clients. It is faster to extract the mobile clients after the data has been
loaded and assigned.
The value of this parameter can be changed from the System Preferences screen, or by
setting the LOG TRANSACTIONS parameter in the IFB file.

Recommended Order of Tuning

EIM tuning should be approached in the following order:

1. Database Server Optimization
2. Table Level Optimization
3. Configuration File Parameters Optimization
4. Batch Processing Optimization
5. Run Time Optimization

Precautions

Running EIM tasks in parallel should be last option for tuning EIM as it may cause a
deadlock when multiple EIM processes access the same interface table simultaneously

If you disable database triggers to tune EIM, reapply the triggers after completing the
EIM load.

If you dropped unnecessary indexes during the initial EIM load, remember to recreate
these indexes later in batch mode, utilizing the parallel execution strategies available for
the respective database platform.

When running large EIM processes with mobile clients, turn transaction logging back on
after the data has been loaded and assigned, and re-extract the mobile clients.

After initial data loading is complete, if the architecture uses mobile web clients, EIM
should be run with row-by-row operations. This can be done by setting SET BASED
LOGGING to FALSE.

Remove any PRIMARY KEYS ONLY parameters from the EIM configuration file, and
avoid using the UPDATE PRIMARY KEYS parameter.

Set the UPDATE STATISTICS parameter to FALSE when running parallel EIM
processes on a DB2 database.

If you plan to use multiple addresses for accounts, do not set the USING SYNONYMS
parameter to FALSE; otherwise EIM will not attach addresses to the appropriate accounts.

GLOSSARY
Acceptance Testing
Formal testing conducted to enable a user, customer or other authorized entity to determine whether to accept a
system or component.
Actual Outcome
The behavior actually produced when the object is tested under specified conditions.
Ad hoc testing
Testing carried out using no recognized test case design technique
Agents
Self contained processes that run in the background on a client or server and that perform useful functions for a
specific user/owner. Agents may monitor exceptions based on criteria or execute automated tasks.
Aggregated Data
Data that results from applying a process to combine data elements. Data that is summarized.
Algorithm
A sequence of steps for solving a problem.
Alpha Testing
Simulated or actual operational testing at an in-house site not otherwise involved with the software developers
Application Portfolio
An information system containing key attributes of applications deployed in a company. Application portfolios are
used as tools to manage the business value of an application throughout its lifecycle.
Arc Testing
A test case design technique for a component in which test cases are designed to execute branch condition
outcomes.
Artificial Intelligence (AI)
The science of making machines do things which would require intelligence if they were done by humans.
Asset
Component of a business process. Assets can include people, accommodation, computer systems, networks,
paper records, fax machines, etc.
Attribute
A variable that takes on values that might be numeric, text, or logical (true/false). Attributes store the factual
knowledge in a knowledge base.
Automation
The use of software to perform or support test activities, e.g. test management, test design, test execution and
results checking.

There are many factors to consider when planning for software test automation. Automation changes the
complexion of testing and the test organization from design through implementation and test execution. There are
tangible and intangible elements and widely held myths about benefits and capabilities of test automation.
Availability
Ability of a component or service to perform its required function at a stated instant or over a stated period of time.
It is usually expressed as the availability ratio, i.e. the proportion of time that the service is actually available for
use by the Customers within the agreed service hours.

B
Backward chaining
The process of determining the value of a goal by looking for rules that can conclude the goal. Attributes in the
premise of such rules may be made sub goals for further search if necessary.
Balanced Scorecard
An aid to organizational performance management. It helps to focus, not only on the financial targets but also on
the internal processes, customers and learning and growth issues.
Baseline
A snapshot or a position which is recorded. Although the position may be updated later, the baseline remains
unchanged and available as a reference of the original state and as a comparison against the current position.
Basic Block
A sequence of one or more consecutive, executable statements containing no branches
Basic test set
A set of test cases derived from the code logic which ensure that 100% branch coverage is achieved
Behavior
The combination of input values and preconditions and the required response for a function of a system. The full
specification of a function would normally comprise one or more behaviors
Beta Testing
Operational testing at a site not otherwise involved with the software developers
Big-bang testing
Integration testing where no incremental testing takes place prior to all the system's components being combined
to form the system
Black Box Testing
Test case selection based on an analysis of the specification of the component without reference to its internal
workings
Blackboard
A hierarchically organized database which allows information to flow both in and out from the knowledge sources.
Bottom up testing

An approach to integration testing where the lowest level components are tested first, then used to facilitate the
testing of higher level components.
Branch
A conditional transfer of control from any statement to any other statement in a component; or an unconditional
transfer of control from any statement to any other statement in the component except the next statement; or,
when a component has more than one entry point, a transfer of control to an entry point of the component.
Branch Condition Combination Testing
A test case design technique in which test cases are designed to execute combinations of branch condition outcomes.

Branch Condition Coverage


The percentage of branch condition outcomes in every decision that have been exercised by a test case suite
Branch Condition Testing
A test case design technique in which test cases are designed to execute branch condition outcomes
Branch Coverage
The percentage of branches that have been exercised by a test case suite
Branch Testing
A test case design technique for a component in which test cases are designed to execute branch outcomes
Breadth first search
A search strategy that examines all rules that could determine the value of the current goal or sub goal before
backtracking through other rules to determine the value of an unknown attribute in the current rule.
Bridge
Equipment and techniques used to match circuits to each other ensuring minimum transmission impairment.
Business Recovery Plans
Documents describing the roles, responsibilities and actions necessary to resume business processes following a
business disruption.

C
CAST
Computer Aided Software Testing
Capture / Playback Tool
A test tool which records test input as it is sent to the software under test. The input cases stored can then be used
to design test cases.
Case-Based Reasoning (CBR)
A problem-solving system that relies on stored representations of previously solved problems and their solutions.
Cause Effect Graph
A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used
to design test cases.
Cause Effect Graphing
A test case design technique in which test cases are designed by consideration of cause effect graphs
Certainty processing
Allowing confidence levels obtained from user input and rule conclusions to be combined to increase the overall
confidence in the value assigned to an attribute.
Certification
The process of confirming that a system or component complies with its specified requirements and is acceptable
for operational use
Change Control
The procedure to ensure that all changes are controlled, including the submission, analysis, decision making,
approval, implementation and post implementation of the change.
Charter
A statement of test objectives, and possibly test ideas. Test charters are amongst other used in exploratory testing.
Classes
A category that generally describes a group of more specific items or objects.
Clause
One expression in the If (premise) or Then (consequent) part of a rule. Often consists of an attribute name followed
by a relational operator and an attribute value.
Code Coverage
An analysis method that determines which parts have not been executed and therefore may require additional
attention
Code based testing
Designing tests based on objectives derived from implementation such as tests that execute specific control flow
paths or use specific data items
Compatibility testing
Testing whether the system is compatible with other systems with which it should communicate
Component
A minimal software item for which a separate specification is available
Component Testing
The testing of individual software components
Conclusion / Consequent
The Then part of a rule, or one clause or expression in this part of the rule.
Condition
A Boolean expression containing no Boolean operators.

Condition Outcome
The evaluation of a condition to TRUE or FALSE
Confidence / Certainty factor
A measure of the confidence assigned to the value of an attribute. Often expressed as a percentage (0 to 100%) or
probability (0 to 1.0). 100% or 1.0 implies that the attribute's value is known with certainty.
Configuration Item (CI)
Component of an infrastructure - or an item, such as a Request For Change, associated with an infrastructure that is (or is to be) under the control of Configuration Management. It may vary widely in complexity, size and type,
from an entire system (including all hardware, software and documentation) to a single module or a minor
hardware component.
Configuration Management
The process of identifying and defining Configuration Items in a system.
E.g. recording and reporting the status of Configuration Items and Requests For Change, and verifying the
completeness and correctness of Configuration Items.
Conformance Criterion
Some method of judging whether or not the component's action on a particular specified input value conforms to
the specification
Conformance Testing
The process of testing that an implementation conforms to the specification on which it is based
Contingency Planning
Planning to address unwanted occurrences that may happen at a later time. Traditionally, the term has been used
to refer to planning for the recovery of IT systems rather than entire business processes.
Control Flow
An abstract representation of all possible sequences of events in a program's execution
Control Flow Graph
The diagrammatic representation of the possible alternative control flow paths through a component
Control information
Elements of a knowledge base other than the attributes and rules that control the user interface, operation of the
inference engine and general strategies employed in implementing a consultation with an expert system.
Conversion Testing
Testing of programs or procedures used to convert data from existing systems for use in replacement systems
Correctness
The degree to which software conforms to its specification
Coverage
The degree, expressed as a percentage, to which a specific coverage item has been exercised by a test case suite
Coverage Item
An entity or property used as a basis for testing
Critical Success Factor (CSF)
A measure of success or maturity of a project or process. It can be a state, a deliverable or a milestone. An
example of a CSF would be 'the production of an overall technology strategy'. Key areas of business activity in
which favorable results are necessary for a company to reach its goals.

D
Data Definition
An executable statement where a variable is assigned a value
Data Definition C-use coverage
The percentage of data definition C-use pairs in a component exercised by a test case suite
Data Definition C-use pair
A data definition and computation data use, where the data use uses the value defined in the data definition
Data Dictionary
A database about data and database structures. A catalog of all data elements, containing their names, structures,
and information about their usage.
Data Flow Testing
Testing in which test cases are designed based on variable usage within the code
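As an illustration, the invented `average` function below is annotated with its data definitions and uses; a data-flow-oriented suite would cover both the empty-input path (a definition of `count` reaching its predicate use) and the loop path (redefinitions reaching the computation use).

```python
def average(values):
    total = 0                  # data definition of total
    count = 0                  # data definition of count
    for v in values:
        total = total + v      # use of total, then redefinition
        count = count + 1      # use of count, then redefinition
    if count == 0:             # p-use of count (in a predicate)
        return 0.0
    return total / count       # c-use of total and count (in a computation)

print(average([]))             # exercises the definition -> p-use path: 0.0
print(average([2, 4]))         # exercises the definition -> c-use path: 3.0
```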
Data Mining
Extraction of useful information from data sets. Data mining serves to find information that is hidden within the
available data.

Data Use
An executable statement where the value of a variable is accessed
Debugging
The process of finding and removing the causes of failures in software
Decision
A program point at which the control flow has two or more alternative routes. The choice of one from among a
number of alternatives; a statement indicating a commitment to a specific course of action.
Decision Condition
A condition within a decision
Decision Coverage
The percentage of decision outcomes exercised by a test case suite
Decision Outcome
The result of a decision which therefore determines the control flow alternative taken
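These decision-related terms can be illustrated with a hand-instrumented sketch: the hypothetical `is_adult` function records each outcome of its single decision, and decision coverage is the fraction of possible outcomes exercised.

```python
outcomes = set()                # records (decision id, outcome) pairs

def is_adult(age):
    result = age >= 18          # the decision, labelled "d1" for illustration
    outcomes.add(("d1", result))
    return result

is_adult(30)                    # exercises the TRUE outcome only
print(len(outcomes) / 2)        # 0.5 -> 50% decision coverage

is_adult(10)                    # now the FALSE outcome as well
print(len(outcomes) / 2)        # 1.0 -> 100% decision coverage
```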
Delta Release

A delta, or partial, Release is one that includes only those CIs within the Release unit that have actually changed
or are new since the last full or delta Release. For example, if the Release unit is the program, a Delta Release
contains only those modules that have changed, or are new, since the last Full Release of the program or the last
delta Release of certain modules.
Depth first search
A search strategy that backtracks through all of the rules in a knowledge base that could lead to determining the
value of the attribute that is the current goal or sub goal.
Descriptive Model
Physical, conceptual or mathematical models that describe situations as they are or as they actually appear.
Design Based Testing
Designing tests based on objectives derived from the architectural or detail design of the software. This is excellent
for testing the worst case behavior of algorithms
Design specification
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and
identifying the associated high level test cases.
Desk Checking
The testing of software by the manual simulation of its execution
Deterministic Model
Mathematical models that are constructed for a condition of assumed certainty. The models assume there is only
one possible result (which is known) for each alternative course or action.
Dirty Testing
Testing which demonstrates that the system under test does not work
Domain Expert
A person who has expertise in a specific problem domain.

Downtime
Total period that a service or component is not operational within agreed service times.

E
Error
A human action producing an incorrect result
Error Guessing
A test case design technique where the experience of the tester is used to postulate which faults might occur and
to design tests specifically to expose them
Evaluation report

A document produced at the end of the test process summarizing all testing activities and results. It also contains
an evaluation of the test process and lessons learned.
Executable statement
A statement which, when compiled, is translated into object code, which will be executed procedurally when the
program is running and may perform an action on program data.
Exhaustive testing
A test case design technique in which the test case suite comprises all combinations of input values and preconditions.
Exit point
The last executable statement within a component.
Expected outcome
The behavior predicted by the specification of an object under specified conditions.
Expert system
A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the
knowledge base to respond to a user's request for advice.
Expertise
Specialized domain knowledge, skills, tricks, shortcuts and rules-of-thumb that provide an ability to rapidly and
effectively solve problems in the problem domain.

F
Failure
A fault, if encountered, may cause a failure, which is a deviation of the software from its expected delivery or
service
Fault
A manifestation of an error in software (also known as a defect or a bug)
Firing a rule
A rule fires when the if part (premise) is proven to be true. If the rule incorporates an else component, the rule also
fires when the if part is proven to be false.
Fit for purpose testing
Validation carried out to demonstrate that the delivered system can be used to carry out the tasks for which it was
acquired.

Forward chaining
Applying a set of previously determined facts to the rules in a knowledge base to see if any of them will fire.
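A minimal forward-chaining loop might look like the sketch below; the rules and facts are invented for illustration.

```python
# Each rule is a (premises, conclusion) pair; facts are applied to the
# rules repeatedly until no more rules fire.
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]

def forward_chain(facts):
    facts = set(facts)
    fired = True
    while fired:
        fired = False
        for premises, conclusion in rules:
            # a rule fires when its whole premise is satisfied by the facts
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                fired = True
    return facts

print(forward_chain({"has_feathers", "can_fly"}))
# derives is_bird, which in turn lets the migration rule fire
```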
Full Release
All components of the Release unit that are built, tested, distributed and implemented together. See also delta
Release.

Functional specification
The document that describes in detail the characteristics of the product with regard to its intended capability.
Fuzzy variables and fuzzy logic
Variables that take on multiple values with various levels of certainty and the techniques for reasoning with such
variables.

G
Genetic Algorithms
Search procedures that use the mechanics of natural selection and natural genetics. They use evolutionary
techniques, based on function optimization and artificial intelligence, to develop a solution.
Geographic Information Systems (GIS)
A support system which represents data using maps.
Glass box testing
Testing based on an analysis of the internal structure of the component or system.
Goal
A designated attribute: determining the values of one or more goal attributes is the objective of interacting with a
rule based expert system.
The solution that the program is trying to reach.
Goal directed
The process of determining the value of a goal by looking for rules that can conclude the goal. Attributes in the
premise of such rules may be made sub goals for further search if necessary.
Graphical User Interface (GUI)
A type of display format that enables the user to choose commands, start programs, and see lists of files and other
options by pointing to pictorial representations (icons) and lists of menu items on the screen.

H
Harness
A test environment comprised of stubs and drivers needed to conduct a test.
Heuristics
The informal, judgmental knowledge of an application area that constitutes the "rules of good judgment" in the
field.
Heuristics also encompass the knowledge of how to solve problems efficiently and effectively, how to plan steps in
solving a complex problem, how to improve performance, etc

I
ITIL

IT Infrastructure Library (ITIL) is a consistent and comprehensive documentation of best practice for IT Service
Management.
ITIL consists of a series of books giving guidance on the provision of quality IT services, and on the
accommodation and environmental facilities needed to support IT.
Inference
New knowledge inferred from existing facts.
Inference engine
Software that provides the reasoning mechanism in an expert system. In a rule based expert system, typically
implements forward chaining and backward chaining strategies.
Infrastructure
The organizational artifacts needed to perform testing, consisting of test environments, test tools, office
environment and procedures.
Inheritance
The ability of a class to pass on characteristics and data to its descendants.
Input
A variable (whether stored within a component or outside it) which is read by the component
Input Domain
The set of all possible inputs
Input Value
An instance of an input
Integration testing
Testing performed to expose faults in the interfaces and in the interaction between integrated components.
Intelligent Agent
Software that is given a particular mission, carries out that mission, and then reports back to the user.
Interface testing
Integration testing where the interfaces between system components are tested.
Isolation testing
Component testing of individual components in isolation from surrounding components, with surrounding
components being simulated by stubs.
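A sketch of the idea in Python: the component under test (`apply_discount`) depends on a price lookup component, which is simulated by a stub returning canned values. All names here are invented for illustration.

```python
class PriceServiceStub:
    """Stands in for the real (possibly remote) price lookup component."""
    def get_price(self, item_id):
        return {"A1": 100.0}.get(item_id, 0.0)   # canned responses

def apply_discount(service, item_id, percent):
    price = service.get_price(item_id)
    return price * (1 - percent / 100)

# The surrounding component is simulated, so the test runs in isolation.
print(apply_discount(PriceServiceStub(), "A1", 10))   # 10% off the canned 100.0
```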
Item
The individual element to be tested. There usually is one test object and many test items.

K
KBS
Knowledge-Based System.
Key Performance Indicator

The measurable quantities against which specific Performance Criteria can be set.
Knowledge
Knowledge refers to what one knows and understands.
Knowledge Acquisition
The gathering of expertise from a human expert for entry into an expert system.
Knowledge Representation
The notation or formalism used for coding the knowledge to be stored in a knowledge-based system.
Knowledge base
The encoded knowledge for an expert system. In a rule-based expert system, a knowledge base typically
incorporates definitions of attributes and rules along with control information.
Knowledge engineering
The process of codifying an expert's knowledge in a form that can be accessed through an expert system.
Knowledge-Based System
A domain specific knowledge base combined with an inference engine that processes knowledge encoded in the
knowledge base to respond to a user's request for advice.
Known Error
An incident or problem for which the root cause is known and for which a temporary Work-around or a permanent
alternative has been identified

L
Lifecycle
A series of states connected by allowable transitions.
Linear Programming
A mathematical model for optimal solution of resource allocation problems.
Log
A chronological record of relevant details about the execution of tests.
Logging
The process of recording information about tests executed into a test log.

M
Maintainability
The ease with which the system/software can be modified to correct faults, modified to meet new requirements,
modified to make future maintenance easier, or adapted to a changed environment.
Maintainability testing
Testing to determine whether the system/software meets the specified maintainability requirements.
Management

The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.
Metric
Measurable element of a service process or function.
Mutation analysis
A method to determine test case suite thoroughness by measuring the extent to which a test case suite can
discriminate the program from slight variants (mutants) of the program.
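For example, in the sketch below a mutant differs from the original by a single changed operator. A suite without the boundary value cannot discriminate the mutant; adding the boundary value kills it. The functions and suites are invented for illustration.

```python
def is_adult(age):            # original program
    return age >= 18

def is_adult_mutant(age):     # mutant: >= changed to >
    return age > 18

def run_suite(f, cases):
    # True if f passes every (input, expected) test case
    return all(f(x) == expected for x, expected in cases)

weak_suite = [(30, True), (10, False)]        # no boundary value
strong_suite = weak_suite + [(18, True)]      # adds the boundary value

print(run_suite(is_adult_mutant, weak_suite))    # True  -> mutant survives
print(run_suite(is_adult_mutant, strong_suite))  # False -> mutant is killed
```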

N
N-Transitions
A sequence of N+1 transitions
Natural Language Processing (NLP)
A computer system that can analyze, understand and generate natural human languages.
Negative Testing
Testing which demonstrates that the system under test does not work
Neural Network
A system modeled after the neurons (nerve cells) in a biological nervous system. A neural network is designed as
an interconnected system of processing elements, each with a limited number of inputs and outputs. Rather than
being programmed, these systems learn to recognize patterns.
Non Functional Requirements Testing
Testing of those requirements that do not relate to functionality, e.g. performance or usability.
Normalization
The process of reducing a complex data structure into its simplest, most stable structure. In general, the process
entails the removal of redundant attributes, keys, and relationships from a conceptual data model.

O
Object
A software structure which represents an identifiable item that has a well-defined role in a problem domain.
Object Oriented
An adjective applied to any system or language that supports the use of objects.
Objective
A reason or purpose for designing and executing a test.
Operational Testing
Testing conducted to evaluate a system or component in its operational environment
Oracle
A mechanism to produce the predicted outcomes to compare with the actual outcomes of the software under test
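One common oracle is a trusted reference implementation. In the sketch below, a hand-written insertion sort is checked against Python's built-in `sorted()`, which predicts the expected outcome for any input.

```python
import random

def insertion_sort(items):
    result = list(items)
    for i in range(1, len(result)):
        j = i
        while j > 0 and result[j - 1] > result[j]:
            result[j - 1], result[j] = result[j], result[j - 1]
            j -= 1
    return result

rng = random.Random(0)            # seeded for repeatability
for _ in range(100):
    data = [rng.randint(0, 99) for _ in range(10)]
    assert insertion_sort(data) == sorted(data)   # sorted() acts as the oracle
print("all oracle checks passed")
```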

Outcome
The actual or predicted outcome of a test
Output
A variable (whether stored within a component or outside it) which is written to by the component
Output Domain
The set of all possible outputs
Output Value
An instance of an output

P
P-use
A data use in a predicate
PRINCE2
Projects in Controlled Environments, is a project management method covering the organization, management and
control of projects. PRINCE2 is often used in the UK for all types of projects.

Page Fault
A program interruption that occurs when a page that is marked "not in real memory" is referred to by an active
program.
Pair programming
A software development approach whereby lines of code (production and/or test) of a component are written by
two programmers sitting at a single computer. This implicitly means ongoing real-time code reviews are performed.
Pair testing
Two testers work together to find defects. Typically, they share one computer and trade control of it while testing.
Partition Testing
A test case design technique for a component in which test cases are designed to execute representatives from
equivalence classes
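For instance, an invented age validator has three obvious equivalence classes (below range, in range, above range), and one representative is tested from each class rather than every possible value.

```python
def valid_voting_age(age):
    # hypothetical component under test
    return 18 <= age <= 120

representatives = {
    "below range": (10, False),
    "in range":    (40, True),
    "above range": (150, False),
}

for label, (value, expected) in representatives.items():
    assert valid_voting_age(value) == expected, label
print("one representative per equivalence class passed")
```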
Pass
A test is deemed to pass if its actual result matches its expected result.
Pass/fail criteria
Decision rules used to determine whether a test item (function) or feature has passed or failed a test.
Path
A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.
Path coverage
The percentage of paths that have been exercised by a test suite.
Path sensitizing
Choosing a set of input values to force the execution of a given path.
Path testing
A white box test design technique in which test cases are designed to execute paths.
Peer Review
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A
technical review is also known as a peer review.
Performance
The degree to which a system or component accomplishes its designated functions within given constraints
regarding processing time and throughput rate.
Performance indicator
A metric, in general high level, indicating to what extent a certain target value or criterion is met. Often related to
test process improvement objectives, e.g. Defect Detection Percentage (DDP).
Performance testing
The process of testing to determine the performance of a software product.

Performance testing tool
A tool to support performance testing and that usually has two main facilities: load generation and test transaction
measurement.
Load generation can simulate either multiple users or high volumes of input data. During execution, response time
measurements are taken from selected transactions and these are logged. Performance testing tools normally
provide reports based on test logs and graphs of load against response times.
Phase
A distinct set of test activities collected into a manageable phase of a project, e.g. the execution activities of a test
level.
Plan
A document describing the scope, approach, resources and schedule of intended test activities.
Policy
A high level document describing the principles, approach and major objectives of the organization regarding
testing.
Portability testing
The process of testing to determine the portability of a software product.
Post condition
Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.
Precondition
Environmental and state conditions that must be fulfilled before the component or system can be executed with a
particular test or test procedure.

Predicted outcome
The behavior predicted by the specification of an object under specified conditions.
Priority
The level of (business) importance assigned to an item, e.g. defect.
Problem Domain
A specific problem environment for which knowledge is captured in a knowledge base.
Problem Management
Process that minimizes the effect on Customer(s) of defects in services and within the infrastructure, human errors
and external events.
Process
A set of interrelated activities, which transform inputs into outputs.
Process cycle test
A black box test design technique in which test cases are designed to execute business procedures and
processes.
Production rule
Rules are called production rules because new information is produced when the rule fires.
Project
A project is a unique set of coordinated and controlled activities with start and finish dates, undertaken to achieve an
objective conforming to specific requirements, including the constraints of time, cost and resources.
Project test plan
A test plan that typically addresses multiple test levels.
Prototyping
A strategy in system development in which a scaled down system or portion of a system is constructed in a short
time, tested, and improved in several iterations.
Pseudo-random
A series which appears to be random but is in fact generated according to some prearranged sequence.
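Python's `random` module is an example: a seeded generator replays the same prearranged sequence every time, which is what makes random testing repeatable.

```python
import random

a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(0, 9) for _ in range(5)]
seq_b = [b.randint(0, 9) for _ in range(5)]

print(seq_a == seq_b)   # True -- same seed, same "random" sequence
```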

Q
Quality
The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied
needs.
Quality assurance
Part of quality management focused on providing confidence that quality requirements will be fulfilled.
Quality attribute
A feature or characteristic that affects an item's quality.
Quality management

Coordinated activities to direct and control an organization with regard to quality. Direction and control with regard
to quality generally includes the establishment of the quality policy and quality objectives, quality planning, quality
control, quality assurance and quality improvement.
Query
Generically, a query means a question. Usually it refers to a complex SQL SELECT statement.
Queuing Time
Queuing time is incurred when the device which a program wishes to use is already busy. The program therefore
has to wait in a queue to obtain service from that device.

R
ROI
The return on investment (ROI) is usually computed as the benefits derived divided by the investments made. If we
are starting a fresh project, we might compute the value of testing and divide by the cost of the testing to compute
the return.
Random testing
A black box test design technique where test cases are selected, possibly using a pseudo-random generation
algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as
reliability and performance.
Re-testing
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective
actions.
Recoverability
The capability of the software product to re-establish a specified level of performance and recover the data directly
affected in case of failure.
Regression testing
Testing of a previously tested program following modification to ensure that defects have not been introduced or
uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software
or its environment is changed.
Relational operator
Conditions such as is equal to or is less than that link an attribute name with an attribute value in a rule's premise
to form logical expressions that can be evaluated as true or false.
Release note
A document identifying test items, their configuration, current status and other delivery information delivered by
development to testing, and possibly other stakeholders, at the start of a test execution phase.
Reliability

The probability that software will not cause the failure of a system for a specified time under specified conditions
Repeatability
An attribute of a test indicating whether the same results are produced each time the test is executed.
Replaceability
The capability of the software product to be used in place of another specified software product for the same
purpose in the same environment.
Requirements testability
The degree to which a requirement is stated in terms that permit establishment of test designs (and subsequently
test cases) and execution of tests to determine whether the requirements have been met.
Requirements-based testing
An approach to testing in which test cases are designed based on test objectives and test conditions derived from
requirements.
e.g. tests that exercise specific functions or probe non-functional attributes such as reliability or usability.
Resource utilization
The capability of the software product to use appropriate amounts and types of resources. For example the
amounts of main and secondary memory used by the program and the sizes of required temporary or overflow
files, when the software performs its function under stated conditions.
Result
The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and
communication messages sent out.
Resumption criteria
The testing activities that must be repeated when testing is re-started after a suspension.
Review
A detailed check of the test basis to determine whether the test basis is at an adequate quality level to act as an
input document for the test process.
Reviewer
The person involved in the review who shall identify and describe anomalies in the product or project under review.
Reviewers can be chosen to represent different viewpoints and roles in the review process.
Risk
A factor that could result in future negative consequences; usually expressed as impact and likelihood.
Risk management
Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling
risk.
Robustness
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful
environmental conditions.

Roll in roll out (RIRO)
Used on some systems to describe swapping.

Root cause
An underlying factor that caused a non-conformance and possibly should be permanently eliminated through
process improvement.
Rule
A statement of the form: if X then Y else Z. The if part is the rule premise, and the then part is the consequent. The
else component of the consequent is optional. The rule fires when the if part is determined to be true; if an else
component is present, the rule also fires when the if part is determined to be false.
Rule Base
The encoded knowledge for an expert system. In a rule-based expert system, a knowledge base typically
incorporates definitions of attributes and rules along with control information.

S
Safety testing
The process of testing to determine the safety of a software product.
Scalability
The capability of the software product to be upgraded to accommodate increased loads.
Schedule
A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in
their context and in the order in which they are to be executed.
Scribe
The person who has to record each defect mentioned and any suggestions for improvement during a review
meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.
Script
Commonly used to refer to a test procedure specification, especially an automated one.
Security
Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or
deliberate, to programs and data.
Severity
The degree of impact that a defect has on the development or operation of a component or system.
Simulation
The representation of selected behavioral characteristics of one physical or abstract system by another system.
Using a model to mimic a process.
Simulator

A device, computer program or system used during testing, which behaves or operates like a given system when
provided with a set of controlled inputs.
Smoke test
A subset of all defined/planned test cases that cover the main functionality of a component or system, ascertaining
that the most crucial functions of a program work without bothering with finer details. A daily build and
smoke test is among industry best practices.
Stability
The capability of the software product to avoid unexpected effects from modifications in the software.

State diagram
A diagram that depicts the states that a component or system can assume, and shows the events or
circumstances that cause and/or result from a change from one state to another.
State table
A grid showing the resulting transitions for each state combined with each possible event, showing both valid and
invalid transitions.
State transition testing
A black box test design technique in which test cases are designed to execute valid and invalid state transitions.
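A sketch for a hypothetical defect lifecycle: a state table drives the component, and the tests exercise one valid and one invalid transition. The states and events are invented for illustration.

```python
TRANSITIONS = {
    ("open", "fix"):      "fixed",
    ("fixed", "verify"):  "closed",
    ("closed", "reopen"): "open",
}

def next_state(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        # any (state, event) pair not in the table is an invalid transition
        raise ValueError(f"invalid transition: {state} --{event}-->")

print(next_state("open", "fix"))    # valid transition, yields "fixed"
try:
    next_state("open", "verify")    # invalid: an open defect cannot be verified
except ValueError as e:
    print(e)
```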
Statement
An entity in a programming language, which is typically the smallest indivisible unit of execution.
Statement coverage
The percentage of executable statements that have been exercised by a test suite.
Statement testing
A white box test design technique in which test cases are designed to execute statements.
Static analysis
Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.
Statistical testing
A test design technique in which a model of the statistical distribution of the input is used to construct
representative test cases.
Strategy
A high-level document defining the test levels to be performed and the testing within those levels for a programme
(one or more projects).
Stress testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
Structural coverage
Coverage measures based on the internal structure of the component.
Stub

A skeletal or special-purpose implementation of a software component, used to develop or test a component that
calls or is otherwise dependent on it. It replaces a called component.
Sub goal
An attribute which becomes a temporary intermediate goal for the inference engine. Subgoal values need to be
determined because they are used in the premise of rules that can determine higher level goals.
Suitability
The capability of the software product to provide an appropriate set of functions for specified tasks and user
objectives
Suspension criteria
The criteria used to (temporarily) stop all or a portion of the testing activities on the test items.
Symbolic Processing
Use of symbols, rather than numbers, combined with rules-of-thumb (or heuristics), in order to process information
and solve problems.
Syntax testing
A black box test design technique in which test cases are designed based upon the definition of the input domain
and/or output domain.

System
A collection of components organized to accomplish a specific function or set of functions.
System integration testing
Testing the integration of systems and packages; testing interfaces to external organizations (e.g. Electronic Data
Interchange, Internet).

T
Technical Review
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken. A
technical review is also known as a peer review.
Test Case
A specific set of test data along with expected results for a particular test condition
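In its simplest form a test case is just data: an input paired with an expected result for one condition. The sketch below runs a table of such cases against Python's built-in `round()`, which rounds ties to the even neighbour.

```python
cases = [
    # (input, expected result) -- one test case per line
    (2.4, 2),
    (2.5, 2),    # a tie rounds to the even neighbour
    (3.5, 4),
    (-2.5, -2),
]

for value, expected in cases:
    actual = round(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"round({value}) -> {actual} (expected {expected}): {status}")
```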
Test Maturity Model (TMM)
A five level staged framework for test process improvement, related to the Capability Maturity Model (CMM) that
describes the key elements of an effective test process.
Test Process Improvement (TPI)
A continuous framework for test process improvement that describes the key elements of an effective test process,
especially targeted at system testing and acceptance testing.
Test approach

The implementation of the test strategy for a specific project. It typically includes the decisions made that follow
based on the (test) projects goal and the risk assessment carried out, starting points regarding the test process
and the test design techniques to be applied.
Test automation
The use of software to perform or support test activities, e.g. test management, test design, test execution and
results checking.
There are many factors to consider when planning for software test automation. Automation changes the
complexion of testing and the test organization from design through implementation and test execution.
There are tangible and intangible elements and widely held myths about benefits and capabilities of test
automation.
Test case specification
A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution
preconditions) for a test item.
Test charter
A statement of test objectives, and possibly test ideas. Test charters are used, amongst other things, in exploratory testing.
Test comparator
A test tool to perform automated test comparison.
Test comparison
The process of identifying differences between the actual results produced by the component or system under test
and the expected results for a test. Test comparison can be performed during test execution (dynamic comparison)
or after test execution.
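A dynamic comparison can be as simple as checking the actual output against the expected output field by field and reporting the differences; the record fields below are invented for illustration.

```python
def compare(expected, actual):
    """Return a list of (field, expected value, actual value) mismatches."""
    return [
        (field, expected[field], actual.get(field))
        for field in expected
        if expected[field] != actual.get(field)
    ]

expected = {"status": "Closed", "owner": "QA", "priority": "High"}
actual   = {"status": "Closed", "owner": "Dev", "priority": "High"}

print(compare(expected, actual))   # [('owner', 'QA', 'Dev')]
```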
Test condition
An item or event of a component or system that could be verified by one or more test cases, e.g. a function,
transaction, quality attribute, or structural element.
Test data
Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the
component or system under test.
Test data preparation tool
A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and
edited for use in testing.
Test design specification
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and
identifying the associated high level test cases.
Test design tool
A tool that supports the test design activity by generating test inputs from a specification that may be held in a
CASE tool repository (e.g. a requirements management tool), or from specified test conditions held in the tool itself.
Test environment
An environment containing hardware, instrumentation, simulators, software tools, and other support elements
needed to conduct a test.
Test evaluation report
A document produced at the end of the test process summarizing all testing activities and results. It also contains
an evaluation of the test process and lessons learned.
Test execution
The process of running a test on the component or system under test, producing actual result(s).
Test execution phase
The period of time in a software development life cycle during which the components of a software product are
executed, and the software product is evaluated to determine whether or not requirements have been satisfied.
Test execution schedule
A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in
their context and in the order in which they are to be executed.
Test execution technique
The method used to perform the actual test execution, either manually or automated.
Test execution tool
A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback.
Test harness
A test environment comprised of stubs and drivers needed to conduct a test.
Test infrastructure
The organizational artifacts needed to perform testing, consisting of test environments, test tools, office
environment and procedures.
Test item
The individual element to be tested. There usually is one test object and many test items.
Test level
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a
project.
Examples of test levels are component test, integration test, system test and acceptance test.
Test log
A chronological record of relevant details about the execution of tests.

Test manager
The person responsible for testing and evaluating a test object. The individual who directs, controls, administers,
plans and regulates the evaluation of a test object.

Test object
The component or system to be tested.
Test point analysis (TPA)
A formula-based test estimation method based on function point analysis.
Test procedure specification
A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test
script.
Test process
The fundamental test process comprises planning, specification, execution, recording and checking for completion.
Test run
Execution of a test on a specific version of the test object.
Test specification
A document that consists of a test design specification, test case specification and/or test procedure specification.
Test strategy
A high-level document defining the test levels to be performed and the testing within those levels for a programme
(one or more projects).
Test suite
A set of several test cases for a component or system under test, where the post condition of one test is often used
as the precondition for the next one.
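The chaining of one test's post condition into the next test's precondition can be sketched as follows; the record store and test names are illustrative, not drawn from any particular tool:

```python
# Each test leaves the system in a state the next test relies on: the post
# condition of test_create (a record exists) is the precondition of test_update.
database = {}

def test_create():
    database["SR-1"] = {"status": "Open"}
    assert "SR-1" in database              # post condition: record exists

def test_update():
    assert "SR-1" in database              # precondition: record exists
    database["SR-1"]["status"] = "Closed"
    assert database["SR-1"]["status"] == "Closed"

def run_suite():
    # Order matters: running test_update first would fail its precondition.
    for test in (test_create, test_update):
        test()
    return True

print(run_suite())  # True
```

This is why a test suite is usually executed as an ordered whole rather than as independent cases.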
Test type
A group of test activities aimed at testing a component or system regarding one or more interrelated quality
attributes.
A test type is focused on a specific test objective, e.g. reliability test, usability test, regression test, and may
take place on one or more test levels or test phases.
Testability
The capability of the software product to enable modified software to be tested.
Tester
A technically skilled professional who is involved in the testing of a component or system.
Testing
The process of exercising software to verify that it satisfies specified requirements and to detect faults.
Thread testing
A version of component integration testing where the progressive integration of components follows the
implementation of subsets of the requirements, as opposed to the integration of components by levels of a
hierarchy.
Top-down testing

An incremental approach to integration testing where the component at the top of the component hierarchy is
tested first, with lower level components being simulated by stubs.
Tested components are then used to test lower level components.
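For example, a top-level component can be tested first with its lower-level dependency simulated by a stub, which is later swapped for the real component once that has been tested itself. The component names below are invented for illustration:

```python
# Top-level component: formats a report using a lower-level summariser.
def report(data, summarise):
    return "Report: " + summarise(data)

# Stub simulating the not-yet-integrated lower-level component.
def stub_summarise(data):
    return "SUMMARY"

# Real lower-level component, integrated after it has been tested.
def real_summarise(data):
    return f"{len(data)} items"

# Step 1: test the top level against the stub.
assert report([1, 2, 3], stub_summarise) == "Report: SUMMARY"
# Step 2: integrate downwards by substituting the real component.
assert report([1, 2, 3], real_summarise) == "Report: 3 items"
```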
Traceability
The ability to identify related items in documentation and software, such as requirements with associated tests.
See also horizontal traceability, vertical traceability

U
Understandability
The capability of the software product to enable the user to understand whether the software is suitable, and how it
can be used for particular tasks and conditions of use.
Unit testing
The testing of individual software components.
Unreachable code
Code that cannot be reached and therefore is impossible to execute.
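A small example of unreachable code (the function is hypothetical):

```python
def classify(n):
    if n >= 0:
        return "non-negative"
    else:
        return "negative"
    return "unknown"   # unreachable: every input returns from a branch above

assert classify(5) == "non-negative"
assert classify(-1) == "negative"
```

No test case can ever execute the final return, which is why unreachable code defeats structural coverage targets.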
Unstructured Decisions
This type of decision situation is complex and no standard solutions exist for resolving the situation. Some or all of
the structural elements of the decision situation are undefined, ill-defined or unknown.
Usability
The capability of the software to be understood, learned, used and attractive to the user when used under specified
conditions.
Use case testing
A black box test design technique in which test cases are designed to execute user scenarios.
User acceptance testing
Formal testing conducted to enable a user, customer or other authorised entity to determine whether to accept a
system or component.
User test
A test whereby real-life users are involved to evaluate the usability of a component or system.
User-Friendly
An evaluative term for a system's user interface. The phrase indicates that users judge the user interface as
easy to learn, understand, and use.

V
V model
Describes how inspection and testing activities can occur in parallel with other activities.
Validation

Correctness: determination of the correctness of the products of software development with respect to the user
needs and requirements.
Verification
Completeness: the process of evaluating a system or component to determine whether the products of the given
development phase satisfy the conditions imposed at the start of that phase.
Version Identifier
A version number; version date, or version date and time stamp.
Volume Testing
Testing where the system is subjected to large volumes of data.

W
Walkthrough
A step-by-step presentation by the author of a document in order to gather information and to establish a common
understanding of its content.
Waterline
The lowest level of detail relevant to the Customer.
What If Analysis
The capability of "asking" the software package what the effect will be of changing some of the input data or
independent variables.
White box test design technique
Documented procedure to derive and select test cases based on an analysis of the internal structure of a
component or system.
White box testing
Testing based on an analysis of the internal structure of the component or system.
Workaround
Method of avoiding an incident or problem, either from a temporary fix or from a technique that means the
Customer is not reliant on a particular aspect of a service that is known to have a problem.

X
XML
Extensible Markup Language. XML is a set of rules for designing text formats that let you structure your data.
XML makes it easy for a computer to generate data, read data, and ensure that the data structure is unambiguous.
XML avoids common pitfalls in language design: it is extensible, platform-independent, and it supports
internationalization and localization.
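A small illustration of XML's unambiguous structure, parsed with Python's standard library; the element and attribute names are invented for the example:

```python
import xml.etree.ElementTree as ET

# A well-formed XML fragment: every opening tag has a matching close,
# so any conforming parser recovers the same structure.
document = """
<testplan>
  <case id="TC001" priority="High">Create Opportunity</case>
  <case id="TC002" priority="Low">Merge Contacts</case>
</testplan>
"""

root = ET.fromstring(document)
ids = [case.get("id") for case in root.findall("case")]
print(ids)  # ['TC001', 'TC002']
```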

REGRESSION TEST PLAN


Amendment History
Date

Version

Name

Reason for Change

Sign-Off List
Name

Position

Review List
In addition to the above:
Name

Position

Distribution List
Name

Position

Related Documentation
Ref

Title

Author

Version

1.
2.
3.
4.

Open Issues
Ref

Title

Author

Ref

Title

Author

Outstanding Areas
Title

Person responsible

CONTENTS
1
Introduction 441
1.1 Document Purpose 441
1.2 Document Scope
441
1.3 Test Focus
441
1.4 Test Objectives
441
1.5 Dependencies and Assumptions
442
1.6 Testing Coverage & Traceability
442
2
Approach
443
2.1 Test Collateral 443
2.2 Data Requirements 443
2.3 Confidence Testing 443
2.4 Regression in System Test phase
444
2.5 Regression in System Integration Test phase 444
2.6 Performance Testing 445
3
Regression Test scope 446
3.1 Location of Test Scripts
446
3.2 System Test Regression Summary 446
3.3 Core Siebel CRM 4.3 Functions/Features to be tested during ST
3.4 System Integration Test Regression Summary
447
3.5 Core Siebel CRM 4.3 Functions/Features to be tested during SIT
3.6 Legacy Interface Functions/Features to be tested
450
3.7 Performance Testing 451
3.8 Functions/Features not to be tested 451
3.9 Entry and Exit Criteria
451
3.10 Suspension and Resumption Criteria 452
4
Testing tools and Techniques 453
5
Test Plan and Schedule
454
5.1 Resource Requirements
454
5.2 Test Plan
454
6
Risks and Contingencies
455
7
Appendix
456
7.1 Legacy Interface Explanation 456
7.2 Confidence Test Checklist 460
7.3 Performance KPIs
461


Introduction
Document Purpose
The purpose of this document is to provide a comprehensive description of the scope,
tasks and responsibilities of Regression Testing during System (ST) and System
Integration (SIT) phases of CRM 4.4.

Document Scope
The scope of this document is the regression testing activities to be planned and executed
by the IBM/IGSI CRM 4.4 test team. The document will describe the existing (CRM 4.3)
functionality that will be proved by the regression test pack.
The document will not cover the areas of the CRM 4.3 solution that have been changed
as part of CRM 4.4 as it is concerned with validating areas of existing functionality and
not testing new areas. These areas will be covered by the ST and SIT Detailed Test Plans.
The regression plan supports both the CRM 4.4 Quality Plan and the overall GCAP Test
Strategy by providing 'Touch' and 'Pipe' test coverage of Siebel and Legacy Interface
functionality during System Test and System Integration Test phases, thereby providing a
robust platform for detailed regression and End to End testing performed by Allied in the
post-SIT test phase.

Test Focus
The regression phase for CRM 4.4 will be focussed on proving the 4.3 functionality that
has not been changed as part of the latest release. The focus of the 4.4 regression pack
will be on three groups of tests: Core 4.3 Siebel functionality; Legacy Interface tests;
Basic Performance Tests.

Test Objectives
The test objectives for the regression test phases can be described as follows:
To verify that existing functionality continues to work as specified. More
specifically, to:
o Prove unchanged 4.3 functionality works and is stable
o Prove integration with legacy systems works and is stable
o Provide a robust basis for subsequent regression testing

Avoid costly defect resolution further through the testing phase

Dependencies and Assumptions

Dependency 1 The System Test environment will be available for limited Touch
Testing targeted at key regression areas.

Dependency 2 - The Assembly Blue environment will be available for SIT with the
requisite levels of integration between Siebel and legacy, in order to run all the regression
tests.

Dependency 3 - Test data will have been set up in the system to allow the
execution of the regression tests.

Dependency 4 - The environment will be based on a subset of Siebel production data
which will have been subject to the requisite Data Migration scripts in order for legacy
interface and data quality tests to be carried out.

Testing Coverage & Traceability


The basis of the regression pack will be the set of tests prepared for the CRM 4.3 release,
plus a subset of test cases created for proving enhancements introduced in 4.3. The
resulting pack is designed to cover all Siebel functionality that could have been impacted
by 4.4, plus all integration points with other systems. This is a good starting point for the
regression pack as there are few areas of functionality that are not covered by the above
criteria. Such areas that aren't covered have been identified and are detailed in Section
3.8.
The tests are split into three areas: CRM 4.3 Core Siebel tests; Legacy Interface tests; and
Basic Performance Tests. The CRM 4.3 Core Siebel tests cover all CRM areas that have
not been altered; the Legacy System Tests cover the integration of 4.4 with legacy
systems; and the Basic Performance Tests provide an early assessment of System
Performance against predetermined benchmarks.
Other than the scripts testing enhancements made in CRM 4.3, the test cases are the
legacy of numerous Reuters CRM releases, with the coverage determined by iterations of
post-testing defect analysis. Furthermore, in many cases tests have been created by
breaking down existing End to End tests, making them more modular in order that testers
with no prior knowledge of the Reuters solution could run them. As a consequence, the
majority of test cases are not directly traceable to a specific set of requirements.

Approach
Test Collateral
The initial set of regression tests has been drawn from those prepared for the SIT phase of the CRM
4.3 release, which themselves were based on previous Reuters releases. This pack is
supplemented by a subset of tests covering enhancements introduced in CRM 4.3, as prioritised
by IGSI.

For the build of the regression pack a summary of CRM functional areas was used to confirm that
tests covered all major areas of functionality. See Sections 3.5 & 3.6 for functional areas. In total
around 45% of the enhancement tests for CRM 4.3 are being used to create the Regression test
pack for CRM 4.4. See Appendix for matrix of included/excluded tests.

In addition to the full regression set, a subset of high priority, rapidly executable confidence tests
will be identified to be run after every release to the test environments to ensure that basic
processes have not been impacted by the new code.

Eventually the regression test pack will incorporate tests which cover new areas of functionality
that have been tested during SIT to build a pack that can be used as the basis for a regression in
subsequent phases.

Data Requirements
The plan is for initial limited regression testing to be performed during the System Test
phase in the System Test environment, followed by broader regression coverage during
System Integration Test phase in the Assembly Blue environment. A recent cut of
production data will need to exist in both environments in order to run these tests. Some
tests will require new data to be created by the tester. Those regression tests which
require this will have specific instructions within the test regarding the creation of data.
It is essential that in both environments a selection of User Ids are unlocked and available
for testing. It would be advantageous if these were dedicated test Ids with a variety of
Service, Sales and Client Training responsibilities. At least one Administrator Id and one
Id that has been added to the State Model are also required.

Confidence Testing
Experience has shown that confidence testing of each release has great benefits in terms
of identifying any problems with a drop of new code at the earliest opportunity. As well
as reducing turnaround times for defect resolution, the impact on other environment users
is minimised. Confidence Testing will consist of a checklist (see appendix) of

streamlined, rapidly executable Siebel and Integration tests developed during previous
Reuters releases. Simplified versions of tests for new functionality can be added to this as
the development cycle progresses to broaden the coverage of the confidence test pack as
required.

Regression in System Test phase


The overall emphasis during the System Test phase will be on touch testing as much of
Siebel as possible to ensure that enhancements haven't caused disruption, rather than
providing a rigorous regression of the system.
To this end a core subset of the regression pack has been selected, focussing on
functional areas identified as being of critical importance or at higher risk of being
impacted by 4.4 enhancements. Due to environment restrictions the tests are limited to
functionality contained entirely within Siebel.
Test execution will require sufficient flexibility to cover as much Siebel functionality as
possible given environmental and time restrictions whilst doing so in a timely manner.
For example, unchanged SR functionality should not be verified until enhancements in
this area have been released. As such, tests will be identified and prioritised in advance,
testers will need to understand existing and new functionality properly and release notes
need to be circulated to testers in sufficient time to allow selection and execution of
relevant tests. Once the full release is available in the System Test environment, one full
iteration of the core subset will be executed.
In addition, the set of confidence tests described above will be run with each release of
new functionality into the System Test environment. As this new functionality is released
and proven it is suggested that, where appropriate, 4.4 enhancement tests are incorporated
into the confidence test set.

Regression in System Integration Test phase


System Integration Testing (SIT) will be performed in the Assembly Blue environment.
In accordance with the GCAP Test Strategy, the approach during this phase will be more
comprehensive than that proposed for System Test, with the entire regression pack of 192
Functional Test Cases being run at least once.
Repeated iterations of regression test cases are a matter of good practice. Whilst it will
not be possible to run a full iteration of the regression pack with each drop of new
functionality, it is proposed that, after the first full run-through of the pack, a subset of
test cases is identified of a size executable at each drop given time and resource constraints.
Again, high degrees of flexibility and co-ordination are required for this in order that the
slimmed-down regression pack can be targeted at areas more at risk of being impacted by
the new code being released, as indicated by the Release Notes.

The Assembly Blue environment enables testing of the complete set of legacy Siebel
Interfaces. Regression testing of these during SIT will focus on basic connectivity tests
and some basic End to End scenarios. This exercise requires a good deal of forward
planning and co-ordination with third parties within the Reuters organisation and as such
is identified as a risk area / high priority for the early stages of SIT.
The principle of confidence testing each release established during ST will be continued
during SIT. By this stage it is anticipated that the Confidence Test pack will incorporate
core 4.4 enhancements as well as core regression, whilst remaining a streamlined and
rapidly executable resource. In this way a reliable overview of the system can be made
quickly at each release.

Performance Testing
The GCAP Test Strategy provides for Limited Performance Testing during both ST
and SIT, which is most logically incorporated into the Regression Test Plan. It is
suggested however that performance testing of this kind is only of true value in the fully
integrated Assembly Blue environment with the full build present. As such, testing will
consist of timed touch tests of various operations against benchmarks run as a single
iteration in SIT when regression testing is complete.

Regression Test scope


Location of Test Scripts
Regression test scripts can be found in the following location on the shared drive:
Z:\XXXXX

System Test Regression Summary


In total 39 regression scripts will be run comprising

Core Siebel CRM 4.3 Functions/Features to be tested during ST


In detail the regression pack for the core CRM 4.3 functionality will cover the following
areas:
Functional Area
Opportunity
Quote
Order

Accounts & Contacts

Function
Ability to create & manage an Opportunity
Ability to create a Quote
Ability to validate & approve a Quote
Ability to manage a Quote
Ability to create & populate Order
Ability to submit Order
Ability to create Account & Account
hierarchy
Ability to manage Account
Ability to create Contact
Ability to promote Prospect to Contact
Ability to create & manage SRs

Service

Ability to create & manage Service Activities

Ability to create & Manage a Course


Client Training

Ability to create & manage a Training Class

Ability to create & send emails


Communications
Ability to create & manage Message Broadcasts

Script ref
Sales 1.0 Sales Opportunity
SPA 1.0
SPA 1.0
SPA 1.0
SPA 1.0
Confidence Tests
Confidence Tests
CR_FT_CR422_TC002
List Management 3.0
Closed SR 1.0
Complaint SR 1.0
Full SR 1.0 E2E
Full SR 2.0 DC Refer
GSLA SR 1.0
GSLA SR 2.0
Hot Topics 1.0
PRT_FT_PRT01_TC001
PRT_FT_PRT02_TC002
PRT_FT_PRT02_TC004
PRT_FT_PRT04_TC001
Engineer Dispatch 1.0
FS 1.0
FS 2.0
Client Training 1.0
Client Training 2.0
Client Training 3.0
Client Training 4.0
Client Training 5.0
Confidence Tests
Email Outbound Forward
Email Reply
Confidence Tests

Functional Area
Marketing

Function
Ability to create, manage and execute Campaigns

Script ref
Campaign Management 1.0
Campaign Management 2.0
Campaign Management 3.0
Campaign Management 4.0

Data Quality
Management

Ability to merge Account Records


Ability to merge Contact records
Ability to create & manage Correspondence
Document Management

DQM 1.0
DQM 3.0
Confidence Tests
Product Literature 1.0

Ability to create & manage Lists of contacts


Ability to create & manage audit trails
Ability to create & manage calendar items
Ability to export to excel from Siebel
Ability to use Search Centre

List Management 1.0


Audit Trail 1.0
Calendar 1.0
Confidence Tests
Confidence Tests

Misc

Ability to create and modify Pricelists


Ability to create and modify Products
Ability to create Actuate reports

Global Price Lists 1.0


Product Admin 1.0
Actuate 1.0

Ability to create Analytics reports

Enabling Services

Products &
Pricing
Reporting

System Integration Test Regression Summary


In total 192 regression scripts will be run comprising
141 CRM 4.3 Functions

51 Legacy Interface Functions

In addition a total of 21 timed Performance Tests against KPIs will be executed.

Core Siebel CRM 4.3 Functions/Features to be tested during SIT


In detail the regression pack for the core CRM 4.3 functionality will cover the following
areas:
Functional Area
Opportunity

Function
Ability to create & manage an Opportunity

Ability to create a Quote


Quote

Ability to validate & approve a Quote


Ability to manage a Quote
Ability to create & populate Order

Order

Accounts & Contacts

Ability to submit Order


Ability to create Account & Account hierarchy

Script ref
Sales 1.0 Sales Opportunity
Sales 2.0 Cancellation Opportunity
Sales 4.0 Opportunity TAS
Sales 5.0 New Business
SPA 1.0
SPA 2.0
SPA 1.0
SPA 2.0
FT_CR962_TC001
SPA 1.0
SPA 2.0
SPA 1.0
SPA 2.0
Confidence Tests

Functional Area

Function
Ability to manage Account

Ability to create Contact

Ability to promote Prospect to Contact


Ability to create & manage SRs

Service

Ability to create & manage Service Activities

Ability to create & Manage a Course

Client Training
Ability to create & manage a Training Class

Script ref
FT_CR677_TC001
FT_CR677_TC002
Sales 3.0 Location Prospect (4.3 Env)
Confidence Tests
CR_FT_CR422_TC002
FT_CR422_TC001
FT_CR936_TC001
List Management 3.0
Assignment Rules 1.0
Closed SR 1.0
Complaint SR 1.0
CSS 1.0
Full SR 1.0 E2E
Full SR 2.0 DC Refer
Full SR 3.0 FS Refer
Full SR 4.0 FLO Change
Full SR 5.0 Entitlements
GSLA SR 1.0
GSLA SR 2.0
GSLA SR 3.0
GSLA SR 4.0
GSLA SR 5.0
HDB 1.0
Hot Topics 1.0
Hot Topics 2.0
Hot Topics 3.0
PRT_FT_PRT01_TC001
PRT_FT_PRT01_TC002
PRT_FT_PRT01_TC003
PRT_FT_PRT01_TC004
PRT_FT_PRT01_TC005
PRT_FT_PRT01_TC006
PRT_FT_PRT01_TC007
PRT_FT_PRT01_TC008
PRT_FT_PRT01_TC009
PRT_FT_PRT01_TC0010
PRT_FT_PRT02_TC002
PRT_FT_PRT02_TC003
PRT_FT_PRT02_TC004
PRT_FT_PRT02_TC005
PRT_FT_PRT02_TC005
PRT_FT_PRT03_TC001
PRT_FT_PRT04_TC001
PRT_FT_PRT04_TC002
PRT_FT_PRT04_TC003
Light SR 1.0
eChannel_FT_ES_44_TC001
Engineer Dispatch 1.0
Engineer Dispatch 2.0
Engineer Dispatch 3.0
FS 1.0
FS 2.0
FS 3.0
FS 4.0
Misc 4.3 Tests 1.0
Client Training 1.0
Client Training 2.0
Client Training 6.0
Client Training 7.0
Client Training 8.0
Client Training 9.0
Client Training 10.0
Client Training 11.0
Client Training 3.0
Client Training 4.0
Client Training 5.0
Client Training 12.0
Client Training 13.0
FT_CR722_TC001

Functional Area

Function
Ability to create & send emails

Communications

Marketing

Data Quality
Management

Ability to create & manage Message Broadcasts
Ability to create, manage and execute Campaigns

Ability to merge Account Records


Ability to merge Contact records

Ability to create & manage Correspondence


Document Management
Ability to create & manage Lists of contacts

Enabling Services

Ability to create & manage audit trails


Ability to create & manage calendar items
Ability to export to excel from Siebel
Ability to use Search Centre

Misc

Products &
Pricing

Ability to create and modify Pricelists

Ability to create and modify Products


Ability to create Actuate reports
Reporting

Ability to create Analytics reports

Script ref
CPM Email 1.0 - Unsolicited mail
CPM Email 2.0 - Activity Reply
CPM Email 3.0 - SR Reply
CRMC Support Email 1.0 - Unsolicited mail
CRMC Support Email 2.0 - Activity Reply
CRMC Support Email 3.0 - SR Reply
Data Helpdesk Email 1.0 - Unsolicited mail
Data Helpdesk Email 2.0 - Activity Reply
Data Helpdesk Email 3.0 - SR Reply
Email Outbound Forward
Email Reply
eSupport Email 1.0 - Unsolicited mail from
unregistered account
eSupport Email 2.0 - Unsolicited mail from
registered account
eSupport Email 2.1 - Unsolicited mail from
registered account
eSupport Email 2.2 - Unsolicited mail from
registered account
eSupport Email 2.3 - Unsolicited mail from Active
eSpresso User
eSupport Email 3.0 - SR Reply
eSupport Email 4.0 - Activity reply
Resolver Group Email 1.0 - Unsolicited mail
Resolver Group Email 2.0 - Activity Reply
Resolver Group Email 3.0 - SR Reply
Misc 4.3 Tests 5.0
Campaign Management 1.0
Campaign Management 2.0
Campaign Management 3.0
Campaign Management 4.0
CT Products 1.0
FT_CR896_TC001
DQM 1.0
DQM 2.0
DQM 3.0
DQM 4.0
DQM 5.0
Correspondence 1.0
Product Literature 1.0
List Management 1.0
List Management 4.0
List Management 5.0
Audit Trail 1.0
Calendar 1.0
Business Direct 1.0 Basic Tests
Export Functionality 1.0
FT_CR717_TC001
FT_CR811_TC001
FT_CR926_TC001
Misc 4.3 Tests 4.0
Homepage 1.0
Misc 4.3 Tests 2.0
Misc 4.3 Tests 3.0
FT_CR963_TC001
FT_CR983_TC001
Global Price Lists 1.0
Pricing Administration 1.0
Pricing Administration 2.0
Product Admin 1.0
Actuate 1.0
Actuate 2.0
Actuate 3.0
Analytics Scenarios 1.0
Analytics Scenarios 2.0
Analytics Scenarios 3.0

Functional Area

Function

Script ref
Analytics Scenarios 4.0
Analytics Scenarios 5.0
Analytics Scenarios 6.0

Legacy Interface Functions/Features to be tested


See Appendix for an explanation of legacy interfaces.
In detail the legacy interface tests will cover the following functionality:
Functional Area

Function

Script ref

Ability to generate Interface Request from Siebel
Ability to initiate RQ Interface from RQ

RQ Interface 1.0

Reuters Q Interface

Customer Zone

Ability to create SR via Contact Us in CZ


Ability to manage Training Class

3000 Xtra

Ability to create SR via Contact Us function

Customer Zone 1.0


Client Training Existing Contact Enrols via CZ
Client Training New Contact Enrols via CZ
3000 Xtra 1.0 eSupport

Ability to create SR via Contact Us function

3000 Xtra Companion 1.0

RQ Interface 2.0

3000 Xtra Companion


PRM 1.0

PRM Portal
Factiva

Ability to launch Factiva Portal from Siebel

Factiva Portal 1.0

ECCO Interface

Ability to create and manage ECCO request

ECCO 1.0

Ability to create Assets in Siebel via RISC

RISC 1.0

Ability to troubleshoot Asset & run diagnostics
Ability to launch and query in Portal from
Siebel
Ability to create and manage Skipper Alert

RISC 2.0
RISC 3.0
CSS & Sales Portal 1.0

RISC
CSS / Sales Portal
Skipper

TPASS 1.0

TPASS
Venus

Alerts & Notifications

Ability to create SR from Venus alert

Venus 1.0

Ability to create ANSS Profile

Alerts & Notifications 1.0

Ability to generate alerts from Siebel triggers

Alerts & Notifications 2.0


Alerts & Notifications 3.0
Alerts & Notifications 4.0
Alerts & Notifications 5.0
Alerts & Notifications 6.0
eSpresso 1.0
eSpresso 2.0
eSpresso 3.0
eSpresso 4.0
eSpresso 5.0
eSpresso 6.0
eSpresso 9.0
eSpresso 10.0
eSpresso 11.0
eSpresso 11.1
eSpresso 11.2
eSpresso 11.3
SPA 1.0
SPA 2.0
No Interface to be invoked with assistance of Ops

Ability to create and manage Organisations in eSpresso

Espresso Interface

Ability to create and manage Users in eSpresso
Ability to generate Siebel SRs and activities
from Contact Us function

Oracle Projects
SAI

Skipper 1.0

Ability to submit Solutions Orders to Oracle Projects
Ability to create CORE accounts in Siebel
via SAI

Functional Area

Function

Script ref

Ability to create & manage SRs from eService

eChannel_BC01_TC001
eChannel_BC02_TC001
eChannel_BC03_TC004
eChannel_BC07_TC001
eChannel_FT_BC_05_TC001
eChannel_FT_ES_41_TC001
eChannel_FT_ES_65_TC001
eService_ES24_TC001
eService_ES53_TC001
eService_FT_ES_8_TC001-TC003
eService_FT_ES_8_TC004
eService_FT_ES_20,55,56_TC001
eService_FT_ES_25_TC001
eService_FT_ES_53_TC002

eService

Performance Testing
The 21 KPIs to be tested during SIT are detailed in the appendix.

Functions/Features not to be tested


The tests that comprise this regression pack were designed to test the core CRM 4.3
functionality and legacy interfaces. Due to skill constraints within the Test Team,
Incentive Compensation and the CTI interface are not included in this definition.

Entry and Exit Criteria


Entry Criteria
The table below describes the preconditions for execution of the regression test plan.
No

Criteria description

Owner

Regression Test Plan has been completed, approved and distributed to the
Solution Delivery Test Manager.

Regression test lead

Regression test scripts, analysis, design and preparation complete

Regression test lead

Test environments ready

Operations team

Regression Test Resources in place

Regression test lead

Development fix support ready

Development Manager

Exit Criteria
The table below describes the regression component of the exit criteria for SIT.
No

Criteria description

Owner

100% of Regression test cases have been executed.

Regression test lead

All Regression defects are raised in Test Director and meet defect exit
criteria.

Regression test lead

Regression element of the SIT Test Report has been delivered

Suspension and Resumption Criteria


Suspension
No

Criteria description

Owner

High incidence of software, data, environment or other test issues

Regression test lead

Environment confidence tests fail

Regression test lead

Resumption
No

Criteria description

Owner

Evidence that environment build is valid and confidence tests are passed
successfully

Regression test lead

Testing tools and Techniques


The regression test phase will use the same test tools as all the other phases for test script
creation, execution and defect tracking, namely Test Director.

Test Plan and Schedule


Resource Requirements
Onshore tester
Offshore IGSI tester
External Legacy system support resource
Infrastructure support resources

Test Plan
The detailed test execution sequence is not yet finalised.

Risks and Contingencies


No

Risk

Likelihood

Impact

Mitigation

???

High

Environments are not ready early enough for regression tests to be carried out

Environment team to deliver integrated environments as scheduled

Medium

Medium

Sufficient training is provided by the UK resource.

Test resources are not familiar enough with the system to execute the regression tests,
potentially resulting in incomplete execution of the regression pack, the raising of a large
percentage of invalid defects and genuine defects going undetected.

Medium

Medium

???

When defects are raised around legacy interface testing, there is not sufficient knowledge
within the development team to allow the timely turnaround of these defects.

Appendix
CRM 4.3 Test Scripts

Module | Test Script Name | Priority (High/Medium/Low) | Included in regression pack (Y/N) | Comment

eChannel | CRM4.3_FT_BC01_TC001 | High | Y |
eChannel | CRM4.3_FT_BC02_TC001 | High | Y |
eChannel | CRM4.3_FT_BC03_TC001 | High | Y |
eChannel | CRM4.3_FT_BC04_TC001 | Low | N | Low priority
eChannel | CRM4.3_FT_BC05_TC001 | Medium | Y |
eChannel | CRM4.3_FT_BC07_TC001 | High | Y |
eChannel | CRM4.3_FT_ES41_TC001 | Medium | Y |
eChannel | CRM4.3_FT_ES44_TC001 | Medium | Y |
eChannel | CRM4.3_FT_ES65_TC001 | Medium | N |
eChannel | CRM4.3_FT_ISQ1212231 | Low | N | Low priority
eService | CRM4.3_FT_ES25_TC001 | Medium | Y |
eService | CRM4.3_FT_ES24_TC001 | High | Y |
eService | CRM4.3_FT_ES8_TC004 | Medium | Y |
eService | CRM4.3_FT_ES53_TC001 | High | Y |
eService | CRM4.3_FT_ES53_TC002 | Medium | Y |
eService | CRM4.3_FT_ES8_TC001-TC003 | Medium | Y |
eService | CRM4.3_FT_ES07_TC001 | Medium | Y |
eService | CRM4.3_FT_ES20,55,56_TC001 | Medium | Y |
eService | CRM4.3_FT_ES101_TC001 | High | N |
Web Training | CRM4.3_FT_WT_UC01_TC001-TC006 | High | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC01_TC008 | Low | | Low priority
Web Training | CRM4.3_FT_WT_UC02_TC001-TC005 | High | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC02_TC008 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC03_TC001-TC005 | High | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC03_TC007 | Low | | Low priority
Web Training | CRM4.3_FT_WT_UC04_TC001-TC003 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC05_TC001-TC004 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC06_TC001-TC004 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC07_TC001 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC08_TC001 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC09_TC001 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC21_TC001-TC004 | High | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC21_TC006 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC21_TC007 | High | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC21_TC008 | Medium | N | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC21_TC009 | Medium | N | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC22_TC001-TC004 | High | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC22_TC005 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC23_TC001-TC004 | High | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC24_TC001 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC25_TC001 | Medium | | Incorporated in existing Test Materials
Web Training | CRM4.3_FT_WT_UC29_TC001-TC002 | High | N |
PRT | CRM4.3_FT_PRT01_TC001 | High | Y |
PRT | CRM4.3_FT_PRT01_TC002 | Medium | Y |
PRT | CRM4.3_FT_PRT01_TC003 | Medium | Y |
PRT | CRM4.3_FT_PRT01_TC004 | Medium | Y |
PRT | CRM4.3_FT_PRT01_TC005 | Medium | Y |
PRT | CRM4.3_FT_PRT01_TC006 | Medium | Y |
PRT | CRM4.3_FT_PRT01_TC007 | Medium | Y |
PRT | CRM4.3_FT_PRT01_TC008 | Medium | Y |
PRT | CRM4.3_FT_PRT01_TC009 | Medium | Y |
PRT | CRM4.3_FT_PRT01_TC010 | Medium | Y |
PRT | CRM4.3_FT_PRT02_TC002 | High | Y |
PRT | CRM4.3_FT_PRT02_TC003 | Medium | Y |
PRT | CRM4.3_FT_PRT02_TC004 | High | Y |
PRT | CRM4.3_FT_PRT02_TC005 | Medium | Y |
PRT | CRM4.3_FT_PRT03_TC001 | Medium | Y |
PRT | CRM4.3_FT_PRT04_TC001 | High | Y |
PRT | CRM4.3_FT_PRT04_TC002 | Medium | Y |
PRT | CRM4.3_FT_PRT04_TC003 | Medium | Y |
IC | CRM4.3_FT_IC030_TC001 | High | N | Out of Regression Scope
IC | CRM4.3_FT_IC030_TC002 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_IC030_TC003 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_IC034_TC001 | High | N | Out of Regression Scope
IC | CRM4.3_FT_IC034_TC002-TC004 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_IC037_TC001 | High | N | Out of Regression Scope
IC | CRM4.3_FT_IC038_TC001 | High | N | Out of Regression Scope
IC | CRM4.3_FT_IC085_TC001 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_IC085_TC002 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_IC087_TC001 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_IC088_TC001 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_IC089_TC001 | High | N | Out of Regression Scope
IC | CRM4.3_FT_IC089_TC002 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_IC091_TC001 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_IC35_TC001 | Medium | N | Out of Regression Scope
IC | CRM4.3_FT_ISQ 1129067_TC001 | Medium | Y |
CR | CRM4.3_FT_CR983_TC001 | Medium | Y |
CR | CRM4.3_FT_CR963_TC001 | Medium | Y |
CR | CRM4.3_FT_CR962_TC001 | Medium | Y |
CR | CRM4.3_FT_CR961_TC001 | Low | N | Low priority
CR | CRM4.3_FT_CR960_TC001 | Low | N | Low priority
CR | CRM4.3_FT_CR936_TC001 | Medium | Y |
CR | CRM4.3_FT_CR926_TC001 | Medium | Y |
CR | CRM4.3_FT_CR903_TC001 | Medium | N |
CR | CRM4.3_FT_CR902a_TC001 | Medium | N |
CR | CRM4.3_FT_CR896_TC001 | Medium | Y |
CR | CRM4.3_FT_CR829_TC001 | Medium | N |
CR | CRM4.3_FT_CR811_TC003 | Low | N | Low priority
CR | CRM4.3_FT_CR811_TC002 | Low | N | Low priority
CR | CRM4.3_FT_CR811_TC001 | Medium | Y |
CR | CRM4.3_FT_CR722_TC001 | Medium | Y |
CR | CRM4.3_FT_CR717_TC001 | Medium | Y |
CR | CRM4.3_FT_CR677_TC002 | Medium | Y |
CR | CRM4.3_FT_CR677_TC001 | Medium | Y |
CR | CRM4.3_FT_CR519_TC001 | Medium | N |
CR | CRM4.3_FT_CR422_TC002 | High | Y |
CR | CRM4.3_FT_CR422_TC001 | Medium | Y |
CR | CRM4.3_FT_CR1172_TC001 | High | N |
CR | CRM4.3_FT_CR1113_TC001 | High | N |
CR | CRM4.3_FT_ CR172_TC001 | Medium | N |

Legacy Interface Explanation

Application: A&N (ANSS)
Description: Provides Clients with the facility to receive notifications when SRs are raised against subscribed Accounts / Products. Subscription is by logging onto a dedicated site. Service Requests trigger an SR message in Siebel that is sent to the ANSS system. The ANSS system processes the messages and sends the alerts to the subscribed users by email, SMS etc.
Interface point with CRM 4.4: Through Siebel
Direction of communication: One way, Siebel to ANSS
Type of information passed: XML triggers generated by various CRM processes (eg SR creation / update on specified products / accounts). Triggers are sent to ANSS, which generates alerts to subscribed clients.

Application: CTI
Description: Call Centre tool that allocates incoming calls to available agents, voice mail or different Call Centres using direct computer access, using the incoming call number or a Personal Identification Number (PIN). Not covered in the past by the Test Team as it requires specialised equipment / knowledge. Matt Cunningham is the Reuters contact for CTI Testing.
Interface point with CRM 4.4: Through Siebel
Direction of communication: Two way, Siebel and CTI
Type of information passed: When calls are connected to agents, details about the caller or service required are presented on screen as the call is connected, saving time and optimising service. CTI can also help with outgoing calls by automatically driving calls through the telephone system with names and numbers extracted from the company's database.

Application: Customer Zone (CZ)
Description: Customer Support & Training portal used to facilitate customer interactions. The 'Contact Us' facility allows clients to create a one-off message that creates an SR / Activity in Siebel, which is then assigned appropriately (via Assignment Manager) according to the Query parameters.
Interface point with CRM 4.4: Through Siebel
Direction of communication: One way, CZ to Siebel
Type of information passed: XML passes the CZ query parameters, which generates an SR accordingly. Assignment Manager takes over to assign the SR correctly. Also sends email notification direct to the user informing them of SR details.

Application: ECCO (IBM)
Description: Used to send messages to the ECI system in IBM in the US only, for certain Service functions. Two-way interface with Siebel, using MQ as the middleware for sending and receiving messages.
Interface point with CRM 4.4: Through Siebel via MQ
Direction of communication: Two way, Siebel and ECCO
Type of information passed: XML. Information relates to the resolution of specific Service issues. The interface is initiated from a Siebel SR / interface activity.

Application: eSpresso
Description: Account management / permissioning portal for Reuters web-based products. New or updated Account, User and Subscription data entered by CSRs / Users in eSpresso is propagated to Siebel via the CUF file to ensure synchronisation across systems. The 'Contact Us' function allows users to generate queries in eSpresso that are passed to Siebel and handled in the form of Activities and SRs.
Interface point with CRM 4.4: Through Siebel / data migration
Direction of communication: One way, eSpresso to Siebel
Type of information passed: Manually uploaded (during testing) delimited files generated from the "old" PPD. Two types of files are generated, containing new or updated Organisation and User / Subscription data respectively. 'Contact Us' functionality is routed via eSupport and creates an Activity or SR record in Siebel depending on query criteria.

Application: Factiva
Description: Provides the User with a direct link from an Account selected in Siebel to related stories on Factiva (web-based news and business information service).
Interface point with CRM 4.4: Launches from Siebel
Direction of communication: One way, Siebel to Factiva
Type of information passed: Launch from Siebel opens the Factiva Portal and automatically generates a Factiva search using the Account details as search criteria. The Portal thereby automatically returns new stories relating to the selected account, most recent first.

Application: Oracle Projects (SPA Orders)
Description: SPA (Solutions) order information created in Siebel is submitted to Oracle Projects for billing.
Interface point with CRM 4.4: Through Siebel via MQ
Direction of communication: One way, Siebel to Oracle Projects
Type of information passed: XML sent via MQ series. Order Line Item information for Hardware, Software, Consulting and Bespoke Devt Orders of Revenue type - Once-off is passed by the interface, together with Project Id information. Free trial, recurring revenue or maintenance items are not passed.

Application: RISC (Tivoli / Vantive)
Description: Reuters Integrated Service Console - web portal from which CSRs can run technical tasks remotely against specific assets on client sites. Launched from Siebel, the interface shows event, hierarchy and diagnostic tools in a location / server specific manner, ensuring that relationships between delivery chain devices are accurately reflected.
Interface point with CRM 4.4: Through Siebel, via MQ
Direction of communication: Two way, Siebel and RISC
Type of information passed: XML. Launch of the RISC Portal from a specific Account in Siebel automatically generates a query in the Portal for Assets at that location. Create / Update Asset information generated in Legacy systems is sent between Tivoli and Siebel via the RISC portal. Event / diagnostic information initiated by engineers in the RISC portal is sent between Tivoli and Siebel via the RISC portal.

Application: RQ
Description: ReutersQ is an internal Reuters problem handling / tracking system. New records can be initiated either from Siebel or from the RQ application itself.
Interface point with CRM 4.4: Through Siebel
Direction of communication: Two way, Siebel and RQ
Type of information passed: XML. Create / Update record information is passed both ways between Siebel and RQ. For a query initiated in Siebel, an Interface Activity is created manually in Siebel and the information is passed to RQ via the interface. Any subsequent correspondence across the interface appends to this Activity. A query initiated in RQ creates an Interface Activity in Siebel automatically. Any subsequent correspondence across the interface appends to this Activity.

Application: SAI (Compass / ISIS)
Description: The Subscriber Accounts Interface (also called the COMPASS / ISIS Account Interface) creates and maintains Siebel records for Accounts, Account Addresses and Account Teams. The interface compares previously extracted data with the current data and only sends transactions for data that has been modified.
Interface point with CRM 4.4: Through Siebel
Direction of communication: One way, COMPASS / ISIS to Siebel
Type of information passed: Operation has two stages. The first is the extraction of data from legacy accounting systems (ISIS and COMPASS / Area CDB) and the generation of an XML file based on differences between the current run's and the last run's data. The second stage is the processing of the XML file containing the changes into Siebel via a VB application. Thus updates are made to the Siebel Subscriber Accounts records and synchronisation is achieved across systems. NB - it is the responsibility of the Bridge IT Groups to generate XML files sourced from the Bridge Account Systems.

Application: Skipper
Description: Used by Reuters Financial Software (RFS) to manage the development of software to resolve bugs. RFS Support handle the client calls referred to them on Siebel and therefore need an interface between Siebel and Skipper to automate the creation of bug records in Skipper.
Interface point with CRM 4.4: Through Siebel
Direction of communication: Two way, Siebel and Skipper
Type of information passed: XML. Outbound messages: RFS receive notification of client issues in the form of SRs. An interface-initiated Activity is created in Siebel using information from the SR and sent to Skipper, creating a new bug record. Subsequent updates made in Siebel are passed to Skipper. Inbound messages: notifications are sent from Skipper to Siebel on creation of the original bug record and on completion of development in Skipper.

Application: TPASS
Description: The TPASS interface is a weekly asset interface that is used to import TCID assets into Siebel. TPASS was previously an automatic interface. The function is now executed in Production by means of a manual migration and import from the Transaction Vault into Siebel as a report.
Interface point with CRM 4.4: Through Siebel
Direction of communication: One way, TPASS to Siebel
Type of information passed: Text file. A report is generated in the Transaction Vault and exported as a text file. Data is manipulated and then manually attached to an activity for processing via a weekly workflow process.

Application: Venus
Description: Interface to import Venus Alert information into Siebel (Venus is the tool by which the Service Support Help Desks issue and manage service information, keeping customers up to date with the status of Reuters Products / Networks).
Interface point with CRM 4.4: Through Siebel
Direction of communication: Two way, Siebel and Venus
Type of information passed: XML. An Alert is generated in Venus and sent into Siebel, creating an Interface Activity. From this an SR is generated in Siebel. The interface passes the Siebel Activity reference back to Venus. Subsequent Venus updates are sent into Siebel and update the Activity.

Confidence Test Checklist

Siebel Basics
For each Functional Area, record the Status (Working / Not Working / Not Tested) and any Comments / Actions.

Functional Areas:
Create Account Hierarchy
Create Contact
Assignment Manager
Comms Inbound (incl attachment)
Comms Outbound (incl attachment)
Forward / Reply Function (incl attachment)
List Import
Dispatch Board incl. refresh Service Region
Message Broadcast
RQ
Search Centre
Actuate reports
ANSS
Customer Zone
eService
General Navigation / Query / Sort etc
Create / copy / delete records
Opportunities / Quotes / Approvals / Orders
Correspondence
State Model
eSpresso

Performance KPIs

Each CSS measure has up to two targets (Target 1 / Target 2), in seconds.

Service
1. Contact: search for a contact using the Search Centre (Parameters: Contact Last Name, City). Measure: time taken for results to appear. Target 1: 30.
2. Contact: from the results contact list applet located beneath the search criteria on the Search Centre, drill down on surname (View: Service Contact Detail View). Measure: time taken for the contact / SR screen to appear. Target 1: 10.
3. Contact: on the Contact Service List Applet, click the Product MVG button. Measure: time taken for the product pick applet to appear. Target 1: 10.
4. Activity: Applet: RCRM Activity List Applet - Basic. Click New to create a new activity. Measure: time taken for the new record to appear. Target 1: 10.
5. SR: View: All Service Request Across Organizations; Applet: Service Request List Applet. Enter the SR # in the SR # field and execute the query. Measure: time for the query to be loaded and the record to be returned. Target 1: 10.

Sales
6. Accounts: Search Centre Accounts Location (Parameters: City, Account Name). Measure: time for the query result to complete. Target 1: 10.
7. Contacts: View: All Contacts Contact List View; Applet: Contact List Applet (Parameters: First Name = [Value], Last Name = [Value]). Measure: time for the query result to complete. Target 1: 10.
8. Activities: View: RCRM All Activity List View; Applet: RCRM Activity List Applet Basic No Toggle (Parameters: Account Manager = <User Login>, Activity Type = Action Item). Measure: time for the query to be loaded.
9. Opportunities: View: All Opportunities Across Organizations; Applet: RCRM Opportunity List Applet - Primary (Parameters: Account Team = [User login], Close Date <= [Value]). Measure: time for the query to be loaded.
10. Order: View: Quote Orders View. Press the Auto Order button. Measure: time for the message to pop up. Targets: 30 / 50.
11. Quote Approval: View: RCRM Solutions View. Press the Approval button. Measure: time for the screen to complete. Targets: 10 / 20.
12. Quote: View: All Quote List View; Applet: Quote List Applet (Parameters: Quote Number). Measure: time to get the quote loaded in the screen.
13. Quote: View: Quote Detail View. Press the Validate button. Measure: time for the OK message to pop up. Targets: 10 / 20.

Client Training / Marketing
14. Contacts: View: Account Detail Contacts View; Applet: RCRM Contact Form Applet Large (with Toggle). Create a new contact. Measure: time for the record to be ready for edit. Target 1: 20.
15. Activity: View: RCRM Activity List View (My Activities); Applet: RCRM Activity List Applet Basic No Toggle. Copy an activity. Measure: time till the record is ready to be completed.

Business Direct & Marketing
16. Campaign Administration: View: Campaign List; Applet: Campaign Administration List Applet. On the list applet, query for Campaigns in the UK (Parameters: Campaign Name = *UK*). Measure: time for the query result to complete loading. Targets: 10 / 20.

eService
17. SR: query for the SR # (Parameters: SR Number). Measure: time for the query to be loaded. Targets: N/A / 10.
18. SR: save a new SR. Measure: time for the SR list to reappear after save. Targets: N/A / 10.

All
19. Login: browser set with the Siebel Login Screen as the default home page; enter User Id / pwd and click login. Measure: time for the Siebel home page to load. Targets: 20 / 50.
20. Home Page: click on the Home Page Tab from anywhere in the system. Measure: time for the Siebel Home Page to load. Targets: 10 / 60.
21. Email: email sent using the Send Communication applet. Measure: time between send and the Email Outbound being set to status Queued. Target 1: 10.

Best Practices for Siebel Functional Test Script Development

Purpose
This document describes some of the best practices to follow when developing a functional test script.
Below are guidelines that can be applied to make a test script more valuable:

1. Comment Test Scripts


Use commented text in your scripts to describe the test steps. A test script that is thoroughly explained is
much easier to maintain. When sharing test scripts among a large team of testers and developers, it is often
helpful to document conventions and guidelines for providing comments in test scripts.

2. Scope Script Variables and Data Tables for Script Modules
Make sure to scope script variables and data tables appropriately. The scope, either modular or global,
determines when a variable or data table is available. By scoping them appropriately, you can be sure that
the global variables are available across multiple scripts, and modular variables maintain their state within a
specific script module.
The test tool typically allows you to store data values in tables and variables, but often these tables and
variables have a scope that is defined by the test tool. You might need to override the scoping that is
predefined by the test tool. Identify when the variables are used within the script, and construct your test so
that variables and data values are available when needed.
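The modular/global distinction is realized differently in each test tool; as a tool-agnostic sketch (the names `GLOBALS`, `run_module` and the step function are illustrative, not part of any test tool's API), the idea looks like this in Python:

```python
# Tool-agnostic sketch of variable scoping in test scripts.
# Entries in GLOBALS survive across script modules; modular data does not.

GLOBALS = {"siebel_url": "http://siebel/callcenter_enu/start.swe"}  # shared by all scripts

def run_module(module_steps):
    """Run one script module with its own modular data table."""
    modular = {"new_account_name": "Test Account 001"}  # visible only inside this module
    for step in module_steps:
        step(GLOBALS, modular)

def step_create_account(glob, mod):
    # Both scopes are available at the moment the step executes.
    return f"Navigating to {glob['siebel_url']} and creating {mod['new_account_name']}"

run_module([step_create_account])
```

The point of the sketch is the lifetime difference: `GLOBALS` would be read by every script in a suite, while `modular` is recreated for each module run.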

3. Include Multiple Validation Conditions


Use verification procedures to perform the function of the visual monitoring that the tester does during
manual testing:
Checkpoints. Include object checkpoints to verify that properties of an object
are correct.

Verification routines. Include routines that verify that the actions requested
were performed, expected values were displayed, and known states were reached.
Make sure that these routines have appropriate comments and log tracing.

Negative and boundary testing. Include routines that perform negative tests. For example, attempt to insert and commit invalid characters and numbers into a date field that should only accept date values.

Also include routines that perform boundary testing in fields that should accept only a
specific range of values. Test values above, below, and within the specified limits. For
example, attempt to insert and commit the values of 0, 11, and 99 in a field that
should accept values only between 1 and 31. Boundary tests also include field length
testing. For example a field that accepts 10 characters should be tested with string
lengths that are greater than, less than, and equal to 10.
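The boundary cases described above can be generated systematically. A small, tool-agnostic Python sketch (the helper names are illustrative, not part of any Siebel or test-tool API):

```python
# Boundary-value selection: test below, at, and above each limit.

def numeric_boundaries(low, high):
    """Values just outside, on, and just inside the valid range [low, high]."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def length_boundaries(max_len, fill="x"):
    """Strings shorter than, equal to, and longer than the field length."""
    return [fill * (max_len - 1), fill * max_len, fill * (max_len + 1)]

# A day-of-month field that should accept only 1..31:
print(numeric_boundaries(1, 31))                 # [0, 1, 2, 30, 31, 32]
# A 10-character field:
print([len(s) for s in length_boundaries(10)])   # [9, 10, 11]
```

Feeding these generated values into the field-entry step keeps the boundary coverage explicit and repeatable rather than hand-picked per script.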

4. Handle Error Conditions


For critical scripts that validate key application functionality, insert validation conditions and error event
handling to decide whether to proceed with the script or abort the script. If error events are not available in
the test tool, you can write script logic to address error conditions. Set a global or environment variable on or
off at the end of the script module, and then use a separate script module to check the variable before
proceeding. Construct your test scripts so that for every significant defect in the product, only one test will
fail. This is commonly called the one bug, one fail approach.
When an error condition is encountered, the script should report errors to the test tool's error logging system.
When a script aborts, the error routine should clean up test data in the application before exiting.
Construct your test scripts so that individual test modules use global setup modules to initialize the testing
environment. This allows you to design tests that can restart the application being tested and continue with
script execution (for example, in the event of a crash).
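The guard-variable pattern described above can be sketched in Python (the `env` dictionary and function names are illustrative stand-ins for a test tool's environment variables and script modules):

```python
# "One bug, one fail": a setup module sets a flag, and each dependent module
# checks it before running, so one defect produces one failure, not a cascade.

env = {"setup_ok": False}  # stands in for a global/environment variable

def log_error(msg):
    print(msg)             # stands in for the tool's error logging system

def cleanup_test_data():
    pass                   # stands in for cleaning up test data before exiting

def setup_module():
    try:
        # ... launch the application, log in, create prerequisite data ...
        env["setup_ok"] = True
    except Exception as err:
        log_error(f"setup failed: {err}")
        cleanup_test_data()

def dependent_module():
    if not env["setup_ok"]:
        return "skipped"   # abort instead of failing on the same root cause
    return "executed"

setup_module()
print(dependent_module())
```

Because the dependent module checks the flag first, a setup failure is reported once by the setup module and downstream modules simply skip.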

5. Define Data Values with Structured Format


Some fields in Siebel applications require data values that have a defined format. Therefore, you must use
data values that are formatted as the fields are configured in the Siebel application.
For example, a date field that requires a value in the format 4/28/2003 02:00:00 PM causes an error if the
data value supplied by the test script is 28 Apr 2003 2:00 PM. Test automation checkpoints should also use
data that has been formatted correctly, or use regular expressions to do pattern matching.
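The date-format mismatch above can be avoided by formatting test data programmatically. A Python illustration using standard `strftime`/`strptime` codes (note `%m`/`%d` print leading zeros, e.g. `04/28/2003`):

```python
# Format test data to match how the Siebel field is configured, instead of
# typing a free-form date like "28 Apr 2003 2:00 PM".
from datetime import datetime

SIEBEL_DATE_FORMAT = "%m/%d/%Y %I:%M:%S %p"   # e.g. 04/28/2003 02:00:00 PM

value = datetime(2003, 4, 28, 14, 0, 0)
formatted = value.strftime(SIEBEL_DATE_FORMAT)
print(formatted)  # 04/28/2003 02:00:00 PM

# Round-trip check: the script can validate its own test data before entry.
assert datetime.strptime(formatted, SIEBEL_DATE_FORMAT) == value
```

Keeping the format string in one constant means a change to the field's configured format is a one-line fix across the test suite.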

6. Use Variables and Expressions When Working with Calculated Fields
Some fields in Siebel applications are calculated automatically and are not directly modifiable by the user
(for example, Today's Date, Total Income). Construct your test scripts so that they remember calculated
values in a local variable or in an output value if the calculated value needs to be used later in the script. For
example, you might need to use a calculated value to run a Siebel query.
When you set a checkpoint for a calculated value, you might not know the value ahead of time. Use a
regular expression in your checkpoint such as an asterisk (*) to verify that the field is not blank. When you
are using a tabular checkpoint, you might want to omit the calculated field from the checkpoint.
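The regular-expression checkpoint idea can be illustrated in Python (the checkpoint function names and the currency pattern are illustrative; real checkpoints would use the test tool's own mechanism):

```python
# When a calculated value is not known in advance, verify its shape with a
# regular expression rather than an exact match.
import re

def checkpoint_not_blank(field_value):
    """The asterisk-style 'anything non-empty' check, as a regex."""
    return re.fullmatch(r".+", field_value or "") is not None

def checkpoint_currency(field_value):
    """e.g. a Total Income field rendered like '$1,234.56' (pattern is illustrative)."""
    return re.fullmatch(r"\$[\d,]+\.\d{2}", field_value) is not None

print(checkpoint_not_blank("04/28/2003"))  # True
print(checkpoint_not_blank(""))            # False
print(checkpoint_currency("$1,234.56"))    # True
```

A pattern-based checkpoint stays stable across runs even though the calculated value itself (today's date, a computed total) changes every time.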

7. Import Generalized Functions and Subroutines


Store generalized script functions and routines in a separate file. This allows you to maintain these pieces of
script separately from specialized test code. In your test tool, use the import functionality (if available) to
access the generalized scripts stored in the external file.
NOTE: When developing and debugging generalized functions, keep them in the specialized
test script until they are ready to be extracted. This is because you might not be able to debug
external files due to test tool limitations.

8. Run a Query Before Adding a Record or Accessing Data


Before creating data that could potentially cause a conflict, run a query to verify that no record with the same
information already exists. If a matching record is found, the script should delete it, rename it, or otherwise
modify the record to mitigate the conflict condition.

9. Run a Query Before Accessing Application Data


Before accessing an existing record or record set, run a query to narrow the records that are available. Do
not assume that the desired record is in the same place in a list applet because the test database can
change over time.
You should also query for data ahead of time when you are in the process of developing test scripts. Check
for data that should not be in the database, or that was left in the database by a previous test pass, and
delete it before proceeding.

10. Manage Test Data from Within the Test Script


Create all test data necessary for running a script within the script itself. Avoid creating scripts that are
dependent on preexisting data in a shared test database. Manage test data using setup scripts and script
data tables, rather than database snapshots.

11. Remove New Records Created During a Test


At the end of the script, remove all records created by the test. This should be done at the beginning also, in
case a previous test failed to complete the clean-up process.
You can implement the clean-up process as a reusable script module. For each module in the test, you can
create a corresponding clean-up module and run it before and after the test module.
The general approach is to have the clean-up script perform a query for the records in a list applet and iterate through them until all of the associated test records are deleted or renamed. When records need to be renamed, the initial query should be repeated after each record is renamed, until the row count
is 0.
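The query-and-iterate clean-up loop can be sketched as follows (`query` and `delete_record` are illustrative stand-ins for the test tool's applet operations, and `FakeApplet` exists only to make the sketch runnable):

```python
# Clean-up loop: query for test records and repeat until the row count is 0.

def cleanup(applet, name_pattern):
    rows = applet.query(name_pattern)
    while rows:                          # re-run the query after each change
        applet.delete_record(rows[0])
        rows = applet.query(name_pattern)

class FakeApplet:
    """Minimal stand-in for a Siebel list applet, for illustration only."""
    def __init__(self, records):
        self.records = records
    def query(self, pattern):
        return [r for r in self.records if pattern in r]
    def delete_record(self, rec):
        self.records.remove(rec)

applet = FakeApplet(["QA Test Account 1", "QA Test Account 2", "Real Account"])
cleanup(applet, "QA Test")
print(applet.records)  # ['Real Account']
```

Re-querying after every deletion (rather than iterating over a stale result set) is what makes the loop safe when the list repositions rows as records disappear.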

12. Exercise UI Components with Basic Mouse Clicks


When recording a test script, perform all actions using the visual components as if you were a beginning
user. This requires clicking on the UI components rather than using keyboard accelerators and other
shortcuts.
Most shortcuts in Siebel applications are supported for test automation. However, the Tab key shortcut is not supported: pressing the Tab key typically moves the focus from one control to another based on a preconfigured tab order. Click the mouse to move focus rather than using the Tab key.

Siebel 8.0 Test Automation


New and Enhanced Features

Test Automation support for the following new features in the Siebel
8.0 UI

Task Based UI

InkData Control

New API called Siebel Test Optimizer (formerly known as Siebel Test
Express)

Task Based UI

The Task Based UI allows users to perform process-centric work by means of a wizard-like sequence of UI Views

Siebel Test Automation supports recording and playback of the new Task Based UI components

The following controls are supported

SiebTask
SiebTaskStep
SiebTaskUIPane
SiebTaskLink

SiebTask
Control Specification

SiebTask

Enables Test Automation on the Active Task object

Parent: SiebApplication

Sample script line

SiebApplication("Siebel Call Center").SiebTask("Create Opportunity").GetRepositoryName()

SiebTaskStep
Control Specification

SiebTaskStep

Enables Test Automation on the Active Task Step object

Parent: SiebTask

Sample script line

var = SiebApplication("Siebel Call Center").SiebTask("Create Opportunity").SiebTaskStep("Step 1").TaskStepTitle

SiebTaskUIPane
Control Specification

SiebTaskUIPane

Enables Test Automation on the Task Pane

Parent: SiebApplication

Sample script line

SiebApplication("Siebel Call Center").SiebTaskUIPane("TaskUIPane").Close

SiebTaskLink
Control Specification

SiebTaskLink

Enables Test Automation on the Task Link object

Parent: SiebTaskUIPane

Sample script line

SiebApplication("Siebel Call Center").SiebTaskUIPane("TaskUIPane").SiebTaskLink("TaskLink").Click

InkData Control

Siebel Test Automation supports recording and playback of the new Ink Data Control component

The InkData control is available in Siebel 8.0 applications for Tablet PC to accept handwritten input content (e.g. signature)

The following control is supported

SiebInkData

SiebInkData
Control Specification

SiebInkData

Enables Test Automation on the InkData Control

Parent: SiebApplet

Sample script line

bVar = SiebApplication("Siebel Call Center").SiebScreen("Expense Reports").SiebView("My Expense Reports").SiebApplet("Expense Reports").SiebInkData("Signature").IsEnabled

Siebel 8.0 Test Automation


Benefits

Support for new UI components in Siebel 8.0

Increased coverage of UI components for test automation

Enables automated test plans to verify quality of new UI features

Siebel Test Optimizer


Introduction

Siebel Test Optimizer aims to address high-priority customer requirements for functional testing

It is a new feature in Siebel 8.0 that allows users to:
Author test scripts in an offline mode
Identify changes made to the Siebel Repository objects used by test scripts since the last synchronization point

Siebel Test Optimizer is a joint feature with Mercury QTP
Siebel provides an API for QTP to directly access the Siebel Repository (without having to record via the UI)
QTP provides the UI features that interact with the Siebel-provided API
Only available with QTP 9.0

Siebel Test Optimizer


Benefits

Enable offline test authoring much earlier in the software development lifecycle
Reduce script maintenance costs by keeping tests in sync with ongoing updates in the Siebel repository
Make offline scripting easy by providing a specialized API, script validation, and Siebel repository integration

Siebel Test Optimizer


Installation and Setup Considerations

The Siebel Test Optimizer API is J2EE Web Services based
It is supported on the following J2EE Application Servers: WebSphere 6.0, WebLogic 9.0 and JBoss 4.0.2
The API is packaged as an EAR file that can be deployed on one of the above application servers

Siebel Test Optimizer


Configuration Steps

Create JDBC data sources to connect to the Siebel Repository
Can be done via the application server's administration console

Configure connection pools

Enable Siebel Add-in while launching QTP 9.0

Test Optimizer is enabled in QTP via the Siebel Add-in

Siebel Test Optimizer


Usage

Launch QTP 9.0 with Siebel Add-in enabled

Navigate to Resources->Object Repository Manager

In the Object Repository Manager, navigate to Tools->Siebel Test Optimizer->Create Object Repository

Fill out the connection and login parameters on the first page of the
wizard and press Next.

Now you will see a Screen selection page

Select the desired screens and press Next

QTP starts communicating with the Siebel Test Optimizer API to fetch the specified screens and populates the QTP Object Repository (OR)

Once a QTP OR has been created and saved, you can start authoring test
scripts

Since the OR is pre-populated with Siebel metadata objects, intellisense is available during scripting
An Update Object Repository option is also available to fetch metadata changes from the Siebel Repository

Siebel Functional Testing Overview

Siebel Functional Testing module enables automated testing of Siebel 7.7, 7.8 and
8.x applications

Records and plays back browser actions in Internet Explorer 6.x, 7.x & 8.x
Firefox not supported for Siebel
Integrates with Siebel Test Automation CAS Library (must be enabled on the Siebel Server)

Provides several key features, such as:

Siebel Object identification (including Siebel HI & SI controls)


Siebel Validation (including Siebel HI & SI controls)
Parameterization
Rich tree UI for visual scripting
Extensible code interface

Creating a Siebel Functional Test Script

Click File > New

Select Siebel in the Functional Testing folder

Enabling Siebel Test Automation


Siebel Test Automation CAS must be enabled on Siebel Server before recording (may
require System Administration rights)
To configure Siebel Test Automation on Siebel 7.7 / 7.8

Open the .CFG file for the Siebel application you are testing on the Siebel server
Set the EnableAutomation and AllowAnonUsers switches to TRUE in the [SWE]
section as follows:

[SWE]
EnableAutomation = TRUE
AllowAnonUsers = TRUE
Restart the Siebel Server

To configure Siebel Test Automation on Siebel 8.0 / 8.1

Log into Siebel as Administrator


Go to Site Map
Go to Administration - Server Configuration
Click Components under Server node
Select Call Center Object Manager (provided you want to enable automation
for Call Center, otherwise select desired app)
Under list of Components, click Parameters tab
Find EnableAutomation and AllowAnonUsers parameters and set both to
TRUE
Restart Siebel Server

Verify Siebel Script Recording

If Siebel Test Automation is not enabled properly on the Siebel server side, it will
prevent OpenScript from capturing Siebel HI actions

When you navigate to Siebel with the ?SWECmd=AutoOn URL for the first
time (even just in IE), you should be prompted to download Siebel Test
Automation plugin to your browser like this:

Click Record button in OpenScript toolbar to start recording through IE browser


Append the ?SWECmd=AutoOn token to the URL to enable Siebel Test Automation
during recording:

http://siebel/callcenter_enu/start.swe?SWECmd=AutoOn

Then when you look into the Downloaded Program Files in IE you will see that Siebel
Test Automation got downloaded to your browser like this:

Then when you record scripts that load and click on Siebel HI controls, you will see a SiebelFTInit command and SiebelFT actions (instead of just Web actions) in your scripts like this:

If you do not see this, then check whether Siebel Test Automation is properly enabled on
the Siebel server

Configuring Internet Explorer


Ensure Siebel pages are not being requested from browser cache

Change Internet Explorer cache settings to check for new versions: "Every time I visit the webpage"
(Tools > Internet Options > General > Browsing History > Settings)

Log in to Siebel at least once on the machine before recording, to load all ActiveX controls

Remove any duplicate Siebel Test Automation programs in Internet Explorer

Under C:\WINDOWS\Downloaded Program Files\, delete older unused Siebel
Test Automation files

Siebel Script Views


Script commands are recorded to the Tree View, with a corresponding Java Code View

Siebel Step Groups

Script commands in Tree View automatically combined into Step Groups

Manually re-apply by selecting Script > Create Step Groups

Can be disabled globally through OpenScript Preferences

Siebel Init Action

Used only once, after login to the Siebel application

Connects OpenScript to the Siebel Test Automation CAS framework running
inside the browser

Siebel Page Tabs

Siebel Text Fields

Siebel Buttons

Siebel Picklists

Confirmation Dialogs

Siebel Menus

Search Button

Other Siebel Actions


Right-click on the tree view and choose Add > Other

Siebel Functional Test folder

Not all actions are recorded; some are only intended to be manually inserted

All have corresponding code representation

Parameterization
Use the Properties dialog in tree view to edit parameter values

Substitute variable
Connect to Databank

Edit the Value field to submit a different value during playback

Click the Substitute Variable button to get values from a script Variable or
Databank file
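The Databank substitution above can be illustrated with a plain CSV sketch. The databank contents, column names, and template syntax here are hypothetical; OpenScript's real databank handling is richer, but the idea is the same: one substituted field value per record.

```python
# Sketch: iterate records from a CSV "databank" and substitute them into a
# recorded field value. File contents and column names are illustrative.
import csv, io

# Hypothetical databank; in OpenScript this would live in a CSV file
DATABANK = "FirstName,LastName\nAlice,Smith\nBob,Jones\n"

def substituted_values(template, databank_text):
    # Produce one substituted value per databank record
    records = csv.DictReader(io.StringIO(databank_text))
    return [template.format(**rec) for rec in records]
```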

Object Identification
Objects are identified using a Siebel Test Automation CAS XPath

Edit the Path field to specify a different object

Capture object path through browser

Select from Object Library or Save to library

Siebel Object Identification Preferences


Object Identification settings used to specify which default attributes are used to identify
Siebel Web objects during script recording

For each web object type, one or more attributes specified


Corresponding object path (XPath) is captured during script recording
Modify default object attributes to change how new scripts are recorded
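As a rough illustration of how per-type attributes turn into a recorded path, the sketch below composes an XPath-style predicate from an attribute map. The attribute names and the exact Siebel Test Automation CAS path syntax are assumptions, not the real recorded format.

```python
# Sketch: compose an XPath-style object path from the attributes configured
# for a web object type. Attribute names and syntax are illustrative only.
def object_path(obj_type, attrs):
    preds = " and ".join('@{}="{}"'.format(k, v) for k, v in sorted(attrs.items()))
    return "//{}[{}]".format(obj_type, preds)
```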

Using Object Library for Siebel Scripts


Object Library can be enabled during script recording

Stores all Siebel object library names/descriptions and object paths


Can be script specific (Script Libraries) or shared (Repository Libraries)
Siebel objects will be added to library during record

Object Test
Used to verify an object's properties

Lets you validate Siebel HI components

Select Add Object Test from the toolbar

Select an object to test

Select which property or properties to test

Table Test

Used to verify data in a table

Perform table tests on SiebList components

Select individual cells to test

Recording a Siebel Load Test Script

Record using the same Proxy Recorder as HTTP module

Also records correlation data from Siebel CorrLib API

CorrLib requests are treated as Navigations

Stored in recordedData

Generates HTTP Test Elements


PostData (SWE)

Generates CORRLIB variables

Siebel CorrLib variables map to siebel.solve() statements


Similar to http.solve()
Operates on CorrLib requests

Effectively the same as HTTP variables

Siebel Correlation Library

OpenScript ships with a default Siebel Correlation Library

Selection of HTTP Variable Substitution Rules

Replaces various Post Data parameters

The SWETS parameter inserts a timestamp
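For example, the SWETS value can be regenerated at playback time. The sketch below assumes SWETS is a millisecond epoch timestamp, which is an assumption about the encoding, not a documented Siebel format.

```python
# Sketch: replace a recorded SWETS post-data value with a fresh timestamp.
# Treating SWETS as milliseconds since the epoch is an assumption.
import time, re

def refresh_swets(post_data):
    now_ms = str(int(time.time() * 1000))
    # Substitute the new timestamp for whatever value was recorded
    return re.sub(r"(SWETS=)\d*", r"\g<1>" + now_ms, post_data)
```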

Siebel Correlation Rule

Siebel Test Automation Load Correlation Library is a Win32 DLL that integrates
with OpenScript

Plugs into a Siebel application and gets additional information used for
correlating the script

Collects HTTP session state information

Automatically updates dynamic session data in HTTP requests
Note that Siebel recording and the Correlation API work only in Internet Explorer

Siebel Correlation Rule

Extracts all SWEC variables and RowIDs from CORRLIB requests


Substitutes them into the parameters for all navigations
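The rule's extract-and-substitute behaviour can be sketched for the SWEC counter alone; the real library also tracks RowIDs and other session state, and the parameter layout below is simplified.

```python
# Sketch: pull the current SWEC (sequence counter) out of a captured
# CorrLib/response payload and substitute it into the next navigation's
# post data. Simplified; the real rule handles more session variables.
import re

def correlate_swec(response_body, next_post_data):
    m = re.search(r"SWEC=(\d+)", response_body)
    if not m:
        return next_post_data   # nothing to correlate; leave the request as-is
    return re.sub(r"SWEC=\d+", "SWEC=" + m.group(1), next_post_data)
```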

Validation

Validation of Siebel scripts uses the standard HTTP-based validations

Validation cannot be done via Text-Matching Test through the UI for Siebel HI
controls

Text Matching Tests


Server Response Time

The Siebel HI ActiveX applet is not downloaded during playback, therefore you
can't validate its contents

Siebel Correlation library also adds extra Siebel error checking to watch for
common Siebel error conditions

Text Matching Test

Tests search for text in the given navigation's response

Can also search using a Regular Expression

Emits a custom error message to the console when it fails
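A text matching test boils down to a plain or regular-expression search over the navigation's response body, with a custom message logged on failure. A minimal sketch, not the OpenScript API itself:

```python
# Sketch: a text-matching validation against a navigation's response body,
# supporting plain text or a regular expression.
import re

def text_matching_test(body, pattern, is_regex=False, message="text match failed"):
    found = re.search(pattern, body) if is_regex else pattern in body
    if not found:
        print(message)   # OpenScript emits a custom error to the console
        return False
    return True
```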

Server Response Test


Validate server response time of a page against upper/lower limits
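The check itself is a simple bounds comparison on the measured response time; the limits below (in seconds) are illustrative defaults, not OpenScript's.

```python
# Sketch: measure a navigation's elapsed time and validate it against
# configured lower/upper limits, as a Server Response test does.
import time

def timed(fn):
    # Wall-clock elapsed time for one navigation, in seconds
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def server_response_test(elapsed, lower=0.0, upper=5.0):
    # Pass only if the response time falls within the configured limits
    return lower <= elapsed <= upper
```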

Parameterization
Siebel scripts use standard OpenScript parameterization / Data Banking

Script Step Groups

Siebel Step Groups

Can create groups by Siebel URL pattern

Can also use HTTP rules based on page title
Can be disabled entirely, or grouped after the fact
