
RR ITEC

RR ITEC ODI 12C Classroom Notes

By Ram Reddy
1/1/2014
Version 1.0

Confidential

Contents

1. Data Warehouse Concepts
2. Introduction to Oracle Data Integrator
3. ODI Installation
4. ODI Component Architecture
5. Creating Master and Work Repository
   5.1 Create the Master Repository
   5.2 Create the Work Repository
6. Configuring RRITEC Database
   6.1 Configuring Source Database
   6.2 Configuring Target Database
7. Creating and Managing Topology
   7.1 Creating Physical Architecture
   7.2 Creating Logical Architecture
   7.3 Creating Contexts
8. Creating Model
9. Creating Project
10. Components
   10.1 Projector Components
   10.2 Selector Components
11. Create a Mapping using Expression
12. Create a Mapping using Flat File and Filter
13. Create a Mapping using Split
14. Create a Mapping using Joiner
15. Create a Mapping using Lookup
16. Create a Mapping using Sort
17. Create a Mapping using Aggregate
18. Create a Mapping using Distinct
19. Create a Mapping using Set Component
20. User Functions
21. Variables
22. Sequences
23. Procedures
24. Packages
25. Scenarios
26. Version Control
27. Modifying Knowledge Modules
28. Change Data Capture (CDC)
29. Migration (Exporting and Importing)
30. Load Plan

RR ITEC #209, Nilagiri Block, Adithya Enclave, Ameerpet @8801408841, 8790998182

RR ITEC #209,Nilagiri Block,Adithya Enclave,Ameerpet @8801408841,8790998182


Confidential

1. Data Warehouse Concepts


Data
1. Any meaningful information is called data.
2. Data is of two types:
   1. Transactional data
   2. Analytical data
Transactional Data
1. Is run-time or day-to-day data
2. Is current and detailed
3. Is useful to run the business
4. Is stored in OLTP (Online Transaction Processing) systems
5. The source of transactional data is applications
6. Examples: ATM transactions, share market transactions, etc.
Transaction Example Diagram:

RR ITEC #209,Nilagiri Block,Adithya Enclave,Ameerpet @8801408841,8790998182


Confidential

Analytical Data
1. Is useful to ANALYZE the business
2. Is historical and summarized
3. Is stored in an OLAP (Online Analytical Processing) system or DW (Data Warehouse)
4. The source of analytical data is OLTP

DW Architecture

RR ITEC #209,Nilagiri Block,Adithya Enclave,Ameerpet @8801408841,8790998182


Confidential

DW Tools
1. DW tools are divided into two types. Some of those tools are:

   ETL: Informatica, Data Stage, Ab Initio, SSIS, ODI (ELT), OWB, BODI
   Reporting: OBIEE, BI Publisher, Cognos, SAP-BO, DOMO, QlikView, MSTR

OBIA
1. OBIA stands for Oracle Business Intelligence Applications.
2. OBIA is a predefined package of ETL and Reporting work.
3. Some of OBIA's important components are:
   1. SDE (Source Dependent Extraction): OLTP to staging area
   2. SIL (Source Independent Loading): staging area to DW
   3. DAC (Data Warehouse Administration Console): scheduling tool for the ETLs (SDE & SIL)
   4. OBAW (Oracle Business Analytics Warehouse): data model (around 1000 tables)
   5. Prebuilt semantic layer (RPD)
   6. Prebuilt reports and dashboards (Web Catalog)
OLTP Vs OLAP

OLTP:
1. Is useful to store transactional data
2. Is useful to run the business
3. The nature of data is current and detailed
4. Supports CRUD (Create, partially Read, Update, and Delete)
5. Is an application-oriented DB
6. Is volatile
7. Data storage time is fixed
8. OLTP DBs are isolated as applications
9. Number of users is larger (customers + employees)
10. Uses a normalized schema

OLAP:
1. Is useful to store analytical data
2. Is useful to analyze the business
3. The nature of data is historical and summarized
4. Supports only read
5. Is a subject-oriented DB
6. Is nonvolatile
7. Data storage time is variant
8. Is integrated per subject area
9. Number of users is smaller (MM + HM)
10. Uses a denormalized schema


Transactional Vs Analytical Systems (Continued)

1. Transactional schemas are optimized for partial reads/writes, requiring multiple joins.
2. Analytical schemas are optimized for querying large datasets, requiring few joins.


Data Warehouse or Database Main Objects


Columns:


Schemas
1. A group of related tables is called a schema. Schema types:
   1. Star
   2. Snowflake
   3. Constellation or mixed

1. Star Schema
   1. Organizes data into a central fact table with surrounding dimension tables
   2. Each dimension row has many associated fact rows
   3. Dimension tables do not directly relate to each other
   4. All dimension tables are denormalized
   5. Optimized for reading data
   6. User friendly, easy to understand


   7. In the OBIEE BMM layer, only star schemas are used

Star Schema diagram taken from OBIEE Tool

Star schema fact


1. Contains business measures or metrics
2. Data is often numerical
3. Is the central table in the star


Star Schema Dimension


1. Contains attributes or characteristics about the business
2. Data is often descriptive (alphanumeric)
3. Qualifies the fact data
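The fact-plus-dimensions layout above can be sketched with a toy schema. All table and column names here are invented for illustration, with SQLite standing in for the warehouse database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One central fact table surrounded by denormalized dimension tables.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE dim_date (date_id INTEGER PRIMARY KEY, year INTEGER)")
cur.execute("""CREATE TABLE fact_sales (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    amount     REAL)""")

cur.execute("INSERT INTO dim_product VALUES (1, 'Laptop'), (2, 'Phone')")
cur.execute("INSERT INTO dim_date VALUES (10, 2023), (11, 2024)")
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 10, 900.0), (1, 11, 1100.0), (2, 11, 500.0)])

# Dimensions join only to the fact, never to each other; each dimension
# row qualifies many fact rows (numeric measures).
rows = cur.execute("""
    SELECT p.name, d.year, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d    ON d.date_id = f.date_id
    GROUP BY p.name, d.year
    ORDER BY p.name, d.year""").fetchall()
print(rows)  # [('Laptop', 2023, 900.0), ('Laptop', 2024, 1100.0), ('Phone', 2024, 500.0)]
```

Note how the descriptive attributes come from the dimensions and the numeric measure comes from the fact, exactly the split described above.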

Star Schema with Sample Data


Star schema: user friendly and easy to understand

Snow Flake Schema

1. Normalized tables are used
2. Is also called an extended star schema
3. Two dimension tables can be directly joined
4. Like the star schema, it has only one fact table
Snow Flake Schema diagram from OBIEE tool

Snow Flake Schema detail diagram


Mixed Schema
1. It contains more than one fact table with some common dimensions (conformed dimensions)
2. It is a combination of star schemas, snowflake schemas, or both


Conformed Dimensions
1. A dimension table shared by two or more fact tables is called a conformed dimension.
2. The OBIA data model is created using conformed dimensions.
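A conformed dimension lets two facts be compared on the same attributes. A minimal sketch (table and column names invented; SQLite stands in for the warehouse):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# One customer dimension shared by a sales fact and a returns fact.
cur.execute("CREATE TABLE dim_customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE fact_sales   (customer_id INTEGER, amount REAL)")
cur.execute("CREATE TABLE fact_returns (customer_id INTEGER, amount REAL)")
cur.execute("INSERT INTO dim_customer VALUES (1, 'Asha'), (2, 'Ravi')")
cur.execute("INSERT INTO fact_sales VALUES (1, 100.0), (1, 50.0), (2, 80.0)")
cur.execute("INSERT INTO fact_returns VALUES (1, 20.0)")

# Because both facts share the same dimension, they can be "drilled
# across" on the same customer attribute.
rows = cur.execute("""
    SELECT c.name,
           (SELECT COALESCE(SUM(s.amount), 0) FROM fact_sales s
             WHERE s.customer_id = c.customer_id) AS sold,
           (SELECT COALESCE(SUM(r.amount), 0) FROM fact_returns r
             WHERE r.customer_id = c.customer_id) AS returned
    FROM dim_customer c ORDER BY c.name""").fetchall()
print(rows)  # [('Asha', 150.0, 20.0), ('Ravi', 80.0, 0)]
```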
2. Introduction to Oracle Data Integrator
1. A widely used data integration software product, Oracle Data Integrator provides a new
declarative design approach to defining data transformation and integration processes,
resulting in faster and simpler development and maintenance.
2. Based on a unique E-LT architecture (Extract - Load - Transform), Oracle Data Integrator not only guarantees the highest level of performance possible for the execution of data transformation and validation processes but is also the most cost-effective solution available today.
3. Oracle Data Integrator provides a unified infrastructure to streamline data and
application integration projects.

The Business Problem


1. In today's increasingly fast-paced business environment, organizations need to use
more specialized software applications; they also need to ensure the coexistence of
these applications on heterogeneous hardware platforms and systems and guarantee
the ability to share data between applications and systems.
2. Projects that implement these integration requirements need to be delivered on-spec,
on-time and on-budget.


A Unique Solution
1. Oracle Data Integrator employs a powerful declarative design approach to data integration, which separates the declarative rules from the implementation details.
2. Oracle Data Integrator is also based on a unique E-LT (Extract - Load Transform)
architecture which eliminates the need for a standalone ETL server and proprietary
engine, and instead leverages the inherent power of your RDBMS engines. This
combination provides the greatest productivity for both development and maintenance,
and the highest performance for the execution of data transformation and validation
processes.
3. Here are the key reasons why companies choose Oracle Data Integrator for their data
integration needs:

1. Faster and simpler development and maintenance: The declarative rules driven
approach to data integration greatly reduces the learning curve of the product and
increases developer productivity while facilitating ongoing maintenance. This
approach separates the definition of the processes from their actual implementation,
and separates the declarative rules (the "what") from the data flows (the "how").

2. Data quality firewall: Oracle Data Integrator ensures that faulty data is
automatically detected and recycled before insertion in the target application. This is
performed without the need for programming, following the data integrity rules and
constraints defined both on the target application and in Oracle Data Integrator.

3. Better execution performance: traditional data integration software (ETL) is based on proprietary engines that perform data transformation row by row, thus limiting performance. By implementing an E-LT architecture based on your existing RDBMS engines and SQL, you are capable of executing data transformations on the target server at a set-based level, giving you much higher performance.

4. Simpler and more efficient architecture: The E-LT architecture removes the need for an ETL server sitting between the sources and the target server. It utilizes the source and target servers to perform complex transformations, most of which happen in batch mode when the server is not busy processing end-user queries.

5. Platform Independence: Oracle Data Integrator supports many platforms, hardware


and OSs with the same software.

6. Data Connectivity: Oracle Data Integrator supports many RDBMSs, including leading data warehousing platforms such as Oracle, Exadata, Teradata, IBM DB2, Netezza, and Sybase IQ, and numerous other technologies such as flat files, ERPs, LDAP, and XML.

7. Cost-savings: The elimination of the ETL Server and ETL engine reduces both the
initial hardware and software acquisition and maintenance costs. The reduced
learning curve and increased developer productivity significantly reduce the overall
labor costs of the project, as well as the cost of ongoing enhancements.
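Two of the claims above, set-based execution and the data quality firewall, can be sketched together: one SQL statement transforms all valid rows inside the database, while rows violating a declared rule are diverted to an error table. The tables, columns, and the `sal >= 0` rule are invented for illustration; ODI generates the real flow from model constraints and knowledge modules.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE src (empno INTEGER, sal REAL)")
cur.executemany("INSERT INTO src VALUES (?, ?)",
                [(1, 3000), (2, -50), (3, 900)])
cur.execute("CREATE TABLE tgt (empno INTEGER, total REAL)")
cur.execute("CREATE TABLE err (empno INTEGER, sal REAL)")

# Data quality firewall: rows violating the rule go to the error table...
cur.execute("INSERT INTO err SELECT empno, sal FROM src WHERE sal < 0")
# ...and one set-based statement loads all remaining rows; no row-by-row
# loop in a separate engine.
cur.execute("INSERT INTO tgt SELECT empno, sal * 1.1 FROM src WHERE sal >= 0")

good = cur.execute("SELECT empno FROM tgt ORDER BY empno").fetchall()
bad = cur.execute("SELECT empno FROM err").fetchall()
print(good, bad)  # [(1,), (3,)] [(2,)]
```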


3. ODI Installation

(The installation walkthrough in the original document consists of screenshots, which are not reproduced in this text version.)

4. ODI Component Architecture

Repositories:
1. Oracle Data Integrator Repository is composed of one Master Repository and several
Work Repositories.
2. ODI objects developed or configured through the user interfaces (ODI Studio) are stored
in one of these repository types.
3. The master repository stores the following information:
   1. Security information, including users, profiles, and rights for the ODI platform
   2. Topology information, including technologies, server definitions, schemas, contexts, languages, and so forth
   3. Versioned and archived objects
4. The work repository stores the following information:
   1. The work repository is the one that contains the actual developed objects. Several work repositories may coexist in the same ODI installation (for example, a Dev work repository and a Prod work repository).


   2. Models, including schema definitions, datastore structures and metadata, field and column definitions, data quality constraints, cross-references, data lineage, and so forth
   3. Projects, including business rules, packages, procedures, folders, Knowledge Modules, variables, and so forth
   4. Scenario execution, including scenarios, scheduling information, and logs
5. When the work repository contains only execution information (typically for production purposes), it is called an Execution Repository.

ODI Studio and User Interface:


1. Administrators, Developers and Operators use the Oracle Data Integrator
Studio to access the repositories.
2. ODI Studio provides four Navigators for managing the different aspects and
steps of an ODI integration project:

1. Designer Navigator: is used to design data integrity checks and to build


transformations


2. Operator Navigator: is the production management and monitoring tool. It is


designed for IT production operators. Through Operator Navigator, you can
manage your mapping executions in the sessions, as well as the scenarios in
production.

3. Topology Navigator: is used to manage the data describing the information system's physical and logical architecture. Through Topology Navigator you can manage the topology of your information system: the technologies and their datatypes, the data servers linked to these technologies and the schemas they contain, the contexts, the languages, and the agents, as well as the repositories. The site, machine, and data server descriptions enable Oracle Data Integrator to execute the same mappings in different physical environments.

4. Security Navigator: is the tool for managing the security information in Oracle
Data Integrator. Through Security Navigator you can create users and profiles
and assign user rights for methods (edit, delete, etc) on generic objects (data
server, datatypes, etc), and fine tune these rights on the object instances (Server
1, Server 2, etc).


5. Creating Master and Work Repository


5.1 Create the Master Repository

Step 1: Creating the repository schema in the database

1. Open SQL*Plus, type / as sysdba, and press Enter.
2. Create a user by executing the commands below:
   a. CREATE USER ODIMR IDENTIFIED BY RRitec123;
   b. GRANT DBA TO ODIMR;
   c. CONN ODIMR@ORCL
   d. Password: RRitec123
   e. SELECT COUNT(*) FROM tab;
3. Note that no tables are available yet in this schema.


Step 2: Creating the Master Repository

1. Start ODI Studio: navigate to the location below and double-click odi.exe
   C:\Oracle\Middleware\Oracle_Home\odi\studio\odi.exe
2. Select the New button in the Studio toolbar
3. Select Master Repository Creation Wizard and click OK


4. Provide the information below
5. Select the Master Repository Creation Wizard and click OK
6. Click Test Connection, click OK, then click Next


7. Provide the username and password as SUPERVISOR and click Next
8. Select internal password storage and click Finish


9. Wait until the process completes


10. Go back to SQL*Plus, execute SELECT COUNT(*) FROM tab; and notice that 67 tables were created

5.2 Create the Work Repository

Step 1: Creating the Work Repository schema in the database

1. Open SQL*Plus, type / as sysdba, and press Enter.
2. Create a user by executing the commands below:
   a. CREATE USER ODIWR IDENTIFIED BY RRitec123;
   b. GRANT DBA TO ODIWR;
   c. CONN ODIWR@ORCL
   d. Password: RRitec123
   e. SELECT * FROM tab;

3. Note that no tables are available yet in this schema.

Step 2: Creating the Work Repository

1. Start ODI Studio
2. Click the ODI menu, click Connect, then click New
3. Provide the information below


4. Click Test, click OK, then click OK again
5. Click Topology and expand the Repositories section
6. Right-click Work Repositories and click New Work Repository


7. Provide the information below and click Next
8. In the confirmation window, click Yes


9. Provide the information below:
   1. Name: RRITEC_DEV_WORKREP
   2. Password: RRitec123
   3. Work Repository Type: Development
10. In the confirmation window, click No


11. In the ODIWR schema, observe that 153 tables were created

Exercise: please complete \02 ODI\04 ORACLE REFERENCE\Creating Master and Work Repository by RCU.docx (please do not run the Drop RCU section)

6. Creating Master and Work Repositories Using RCU
1. Download the RCU software and extract it
2. Navigate to the location E:\05-ODI12C\ODI12C_rcuHome\BIN
3. Double-click the rcu.bat file and click Run
4. In the welcome screen, click Next
5. Select Create and click Next


6. Select all the options below, click Next, click Ignore, then click OK
7. Select Oracle Data Integrator
8. Click Next, then click OK


9. Provide the password and confirm password as RRitec123, then click Next
10. Provide the master and work repository passwords
11. Click Next, Next, OK, OK, Create, Finish


7. Configuring the RRITEC Database

7.1 Configuring the Source Database

Step 1: Creating source tables in the SCOTT schema

1. Open SQL*Plus
   a. CONN scott@ORCL
   b. Password: tiger
   c. SELECT COUNT(*) FROM tab;
2. Go to the RRITEC lab copy labdata folder, take the full path of the driver file, and execute it as shown below
3. COMMIT;

7.2 Configuring the Target Database

Step 1: Creating the user TDBU and loading tables into the TDBU schema

1. Open SQL*Plus, type / as sysdba, and press Enter
2. Create a user by executing the commands below:
   a. CREATE USER TDBU IDENTIFIED BY RRitec123;
   b. GRANT DBA TO TDBU;
   c. CONN TDBU@ORCL
   d. Password: RRitec123
   e. SELECT COUNT(*) FROM tab;
3. Go to the RRITEC lab copy labdata folder, take the full path of the driver file, and execute it as shown below
8. Creating and Managing Topology

The Oracle Data Integrator topology is the physical and logical representation of the Oracle Data Integrator architecture and components.

8.1 Creating Physical Architecture

1. The physical architecture defines the different elements of the information system, as well as their characteristics taken into account by Oracle Data Integrator.
2. Each type of database (Oracle, DB2, etc.), file format (XML, flat file), or application software is represented in Oracle Data Integrator by a technology.
3. The physical components that store and expose structured data are defined as data servers.
4. A data server is always linked to a single technology. A data server stores information according to a specific technical logic, which is declared in the physical schemas attached to that data server.
Process:
1. Open ODI Studio
2. Go to the ODI menu, click Connect, select the login name RRITEC, and click Edit. Under the Work Repository section, select the work repository RRITEC_DEV_WORKREP. Click Test, click OK, click OK, then click OK.


3. Under Physical Architecture, right-click Oracle and click New Data Server


4. In the Definition tab, provide the information below
5. In the JDBC tab, provide the information below
6. Click Test Connection, click Test, then click OK
7. Close the window

Creating a Physical Schema:

1. Right-click RRITEC_ORCL and click New Physical Schema
2. Provide the information below

Note: using a dedicated schema as the work schema is a best practice

3. Click Save, click OK, and close the window
4. Similarly, create one more physical schema for the TDBU schema

8.2 Creating Logical Architecture

1. The logical architecture allows a user to identify, as a single logical schema, a group of similar physical schemas (that is, containing datastores that are structurally identical) located in different physical locations.
2. Logical schemas, like their physical counterparts, are attached to a technology.
3. All the components developed in Oracle Data Integrator are designed on top of the logical architecture. For example, a data model is always attached to a logical schema.
Process:
1. Under Logical Architecture, right-click Oracle and click New Logical Schema
2. Provide the information below, then save and close the window
3. Similarly, create one more logical schema with the name TARGET_TDBU
8.3 Creating Contexts

1. Contexts bring together components of the physical architecture (the real architecture of the information system) with components of the Oracle Data Integrator logical architecture (the architecture on which the user works).
2. For example, contexts may correspond to different execution environments (Development, Test, and Production) or different execution locations (Boston site, New York site, and so forth).
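Conceptually, a context is a lookup from (context, logical schema) to a physical schema. A minimal sketch, with context and schema names invented for illustration:

```python
# The same logical name resolves to a different physical schema
# depending on the context (execution environment).
topology = {
    ("DEVELOPMENT", "SOURCE_SCOTT"): "scott@dev_orcl",
    ("PRODUCTION",  "SOURCE_SCOTT"): "scott@prod_orcl",
}

def resolve(context, logical_schema):
    """Return the physical schema mapped to a logical schema in a context."""
    return topology[(context, logical_schema)]

print(resolve("DEVELOPMENT", "SOURCE_SCOTT"))  # scott@dev_orcl
print(resolve("PRODUCTION", "SOURCE_SCOTT"))   # scott@prod_orcl
```

This is why mappings designed once against logical schemas can run unchanged in Development, Test, or Production.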

Process (optional):
1. By default, one context named Global is created with the installation of ODI.
2. We already mapped logical and physical schemas while creating the logical schemas. For confirmation, right-click the Global context, open it, and observe the mapping.


Agents:
1. Oracle Data Integrator run-time agents orchestrate the execution of jobs. An agent executes jobs on demand and starts the execution of scenarios according to a schedule defined in Oracle Data Integrator.
Languages:
1. Languages define the languages and language elements available when editing expressions at design time.
9. Creating Model
1. Models are the objects that store metadata in ODI.
2. They contain a description of a relational data model: a group of datastores (tables) stored in a given schema on a given technology.
3. A model typically contains metadata reverse-engineered from the real data model (database, flat file, XML file, COBOL copybook, LDAP structure, etc.)
4. Database models can also be designed in ODI. The appropriate DDL can then be generated by ODI for all necessary environments (development, QA, production).
5. Reverse-engineering is an automated process to retrieve metadata to create or update a model in ODI.
6. Reverse-engineering uses an RKM (Reverse-engineering Knowledge Module).
7. Reverse-engineering is of two types:

   1. Standard reverse-engineering
      i. Uses JDBC connectivity features to retrieve metadata, then writes it to the ODI repository
      ii. Requires a suitable driver
   2. Customized reverse-engineering
      i. Reads metadata from the application/database system repository, then writes that metadata to the ODI repository
      ii. Uses a technology-specific strategy, implemented in a Reverse-engineering Knowledge Module (RKM)

8. Some other methods of reverse-engineering are:
   1. Delimited format reverse-engineering
      i. File parsing built into ODI
   2. Fixed format reverse-engineering
      i. Graphical wizard, or through a COBOL copybook for mainframe files
   3. XML file reverse-engineering (standard)
      i. Uses the JDBC driver for XML

9. Reverse-engineering is incremental: new metadata is added and old metadata is removed.
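Standard reverse-engineering essentially queries the database's own metadata catalog through the driver. A minimal sketch, with SQLite's catalog standing in for the JDBC metadata calls used against Oracle (table and columns are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (empno INTEGER PRIMARY KEY, ename TEXT, sal REAL)")

# Retrieve the datastores (tables) and their attributes (columns)
# from the catalog, the same information an RKM writes into the model.
tables = [t for (t,) in cur.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
columns = [(c[1], c[2]) for c in cur.execute("PRAGMA table_info(emp)")]
print(tables)   # ['emp']
print(columns)  # [('empno', 'INTEGER'), ('ename', 'TEXT'), ('sal', 'REAL')]
```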
Process:
1. Go to Designer Navigator, click New Model Folder, and name it RRITEC_MODEL_FOLDER
2. Right-click RRITEC_MODEL_FOLDER, click New Model, and name it RRITEC_MODEL
3. Provide the information below, click Save, then click Reverse Engineer


4. Observe all the tables
5. Similarly, create one more data model with the name RRITEC_TARGET_MODEL


10. Creating Project

1. A project is a collection of ODI objects created by users for a particular functional domain. Those objects include folders, variables, etc.
2. A folder is a hierarchical grouping beneath a project and can contain other folders and objects.
3. Every package, mapping, reusable mapping, and procedure must belong to a folder.
4. Objects cannot be shared between projects, except global variables, sequences, and user functions.
5. Objects within a project can be used in all its folders.
6. A knowledge module is a code template containing the sequence of commands necessary to carry out a data integration task.

7. There are different knowledge modules for loading, integration, checking, reverse-engineering, and journalizing.
8. All knowledge module code is executed at run time.
9. There are six types of knowledge modules.
10. Some of the KMs are:


   1. LKM File to Oracle (SQLLDR)
      i. Uses Jython to run SQL*Loader via the OS
      ii. Much faster than the basic LKM File to SQL
   2. LKM Oracle to Oracle (DBLINK)
      i. Loads from one Oracle data server to another Oracle data server
      ii. Uses an Oracle DBLink
   3. CKM Oracle
      i. Enforces logical constraints during data load
      ii. Automatically captures error records
Markers
1. A marker is a tag that you can attach to any ODI object to help organize your project.
2. Markers can be used to indicate progress, review status, or the life cycle of an object.
3. Graphical markers attach an icon to the object, whereas non-graphical markers attach numbers, strings, or dates.
4. Markers can be crucial for large teams, allowing communication among developers from within the tool:
   1. Review priorities.
   2. Review completion progress.
   3. Add memos to provide details on what has been done or has to be done.
   4. This can be particularly helpful for geographically dispersed teams.
5. Project markers:
   1. Are created in the Markers folder under a project
   2. Can be attached only to objects in the same project
6. Global markers:
   1. Are created in the Global Markers folder in the Others view
   2. Can be attached to models and global objects
Process:
1. Under Designer Navigator, click New Project and name it RRITEC_PROJECT
2. Click Save, right-click First Folder, click Open, rename it to RRITEC, and save


11. Components
1. In the logical view of the mapping editor, you design a mapping by combining datastores
with other components. You can use the mapping diagram to arrange and connect
components such as datasets, filters, sorts, and so on. You can form connections
between data stores and components by dragging lines between the connector ports
displayed on these objects.
2. Mapping components can be divided into two categories which describe how they are
used in a mapping: projector components and selector components.
i. Projector Components
ii. Selector Components
11.1 Projector Components

1. Projectors are components that influence the attributes present in the data that flows
through a mapping. Projector components define their own attributes: attributes from
preceding components are mapped through expressions to the projector's attributes. A
projector hides attributes originating from preceding components; all succeeding
components can only use the attributes from the projector.
2. Built-in projector components:
1. Dataset Component
2. Datastore Component
3. Set Component
4. Reusable Mapping Component
5. Aggregate Component
6. Distinct Component
11.2 Selector Components


1. Selector components reuse attributes from preceding components. Join and Lookup
selectors combine attributes from the preceding components. For example, a Filter
component following a datastore component reuses all attributes from the datastore
component. As a consequence, selector components don't display their own attributes in the
diagram and as part of the properties; they are displayed as a round shape. (The
Expression component is an exception to this rule.)
2. When mapping attributes from a selector component to another component in the mapping,
you can select and then drag an attribute from the source, across a chain of connected
selector components, to a target datastore or next projector component. ODI will
automatically create the necessary queries to bring that attribute across the intermediary
selector components.
3. Built-in selector components:
1. Expression Component
2. Filter Component
3. Join Component
4. Lookup Component
5. Sort Component
6. Split Component
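When ODI generates code, selector components typically become SQL clauses: a Filter becomes a WHERE clause, a Join becomes a JOIN, a Sort becomes an ORDER BY. A minimal sketch, with SQLite standing in for the target engine and toy tables modelled loosely on EMP/DEPT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, deptno INTEGER)")
cur.execute("CREATE TABLE dept (deptno INTEGER, dname TEXT)")
cur.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                [(1, 'KING', 10), (2, 'SMITH', 20), (3, 'FORD', 20)])
cur.executemany("INSERT INTO dept VALUES (?, ?)",
                [(10, 'ACCOUNTING'), (20, 'RESEARCH')])

# The selectors add no attributes of their own; ename and dname still
# come from the preceding datastores.
rows = cur.execute("""
    SELECT e.ename, d.dname
    FROM emp e
    JOIN dept d ON e.deptno = d.deptno   -- Join component
    WHERE e.deptno = 20                  -- Filter component
    ORDER BY e.ename                     -- Sort component
""").fetchall()
print(rows)  # [('FORD', 'RESEARCH'), ('SMITH', 'RESEARCH')]
```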

12. Create a Mapping using Expression

In this exercise, the student will create a mapping that represents the data flow between the EMPLOYEE source and the ODS_EMPLOYEE target.
A mapping represents the data flow between sources and targets. The instructions defined in the mapping tell the ODI server how to read, load, and transform the data.

Step 1: Create a mapping
1. Under the RRITEC folder, right-click Mappings, click New Mapping, name it m_ODS_EMPLOYEE, and click OK


2. In Logical tab of the mapping drag and drop employee table from
RRITEC_SOURCE_MODEL
3. Drag and drop ods_employee table from RRITEC_TARGET_MODEL
Step2: Create a Expression Transformation
Expression transformation is useful to do all calculations except aggregate calculations (e.g., SUM, AVG)

1. From Components pane drag and drop Expression Transformation

2. Drag and drop all columns from employee source to expression Transformation
3. Select Expression transformation In expression properties pane expand Attributes
4. Select last attribute click on new attribute name it as FULL_NAME data type as
varchar
5. Click on expression

6. Develop below expression


7. Click on ok connect all columns from expression to target ods_employee

8. Click on Validate save
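The FULL_NAME expression from step 6 appears only as a screenshot. A typical choice is a concatenation of name columns; the sketch below checks the equivalent SQL through Python's sqlite3 (the FIRST_NAME/LAST_NAME column names and sample data are assumptions, not taken from the EMPLOYEE source):

```python
import sqlite3

# In-memory stand-in for the EMPLOYEE source (column names are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE (FIRST_NAME TEXT, LAST_NAME TEXT)")
conn.execute("INSERT INTO EMPLOYEE VALUES ('KING', 'SMITH')")

# The Expression component adds a derived attribute; ODI would generate
# a concatenation like this in the staging SQL it builds for the mapping.
row = conn.execute(
    "SELECT FIRST_NAME || ' ' || LAST_NAME AS FULL_NAME FROM EMPLOYEE"
).fetchone()
print(row[0])  # → KING SMITH
```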


Step3: Run the mapping


1. From toolbar click on Run → click on OK → again OK

2. Go to operator navigator observe the session status

1. Create and populate below table from EMP table


CREATE TABLE EMP_TOTAL_SAL (EMPNO NUMBER, ENAME VARCHAR2(40), SAL NUMBER, COMM NUMBER, TOTALSAL NUMBER);


2. Create a table similar to EMP table but replace scott name with tiger and populate data

13. Create a Mapping using Flat File and Filter


1. Flat Files are two types
1. Fixed
2. Delimited
2. A filter is a SQL clause applied to the columns of one source data store. Only records
matching this filter are processed by the Mapping.
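A filter compiles to a WHERE clause on the source columns. A minimal sketch via Python's sqlite3, using the same emp.txt sample data as this exercise:

```python
import sqlite3

# Stand-in for the flat-file source loaded into a staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO INTEGER, SAL INTEGER, DEPTNO INTEGER)")
conn.executemany("INSERT INTO EMP VALUES (?, ?, ?)",
                 [(101, 1000, 10), (102, 2000, 20), (103, 3000, 30)])

# The Filter component becomes a WHERE clause; only matching rows
# reach the target, as the definition above states.
rows = conn.execute(
    "SELECT EMPNO FROM EMP WHERE DEPTNO IN (10, 20) ORDER BY EMPNO"
).fetchall()
print([r[0] for r in rows])  # → [101, 102]
```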
Step1: Create Flat File
1. Go to path C:\Oracle\Middleware\Oracle_Home\odi\demo\RRITEC (if this path is not available, create it)
2. Create a notepad file with below data and save it as emp.txt Close file
empno,sal,deptno
101,1000,10
102,2000,20
103,3000,30


Step2: Create Physical File Data Server and Physical Schema


1. Go to topology Under physical Architecture Right click on File Click on New
Data Server
2. In Definition and JDBC tab provide below information Click on save

3. Right Click on RRITEC_FLATFILES Schema Click on New Physical Schema


Provide below information Click on save
Directory (Schema) : C:\Oracle\Middleware\Oracle_Home\odi\demo\RRITEC
Directory (Work Schema) : C:\Oracle\Middleware\Oracle_Home\odi\demo\RRITEC

Step3: Create Logical File Schema


1. Go to topology Under Logical Architecture Right click on File Click on New
Logical Schema Provide name and context Click on Save


Step4: Create Model based on File Logical Schema


1. Go to Designer Right Click on RRITEC_MODEL_FOLDER Click on New Model
2. Provide Name ,Technology and Logical Schema Click on Save

3. Right Click on Model RRITEC_FLATFILES Click on New Data Store


4. In Definition tab provide Name, Alias, Resource Name → Click on save → In Files Tab
provide File Format, Heading (No of Rows), Field Separator


5. In Attributes tab click on Reverse Engineering Save

Step5: Create Target Table and adding to model


1. Create target table in TDBU schema
Create table temp as select empno,sal,deptno from scott.emp where 1=2;

2. Open Model RRITEC_TARGET_MODEL Click on Reverse Engineer Save


Step6: Create Mapping


1. In Designer under project RRITEC_PROJECT Right Click on Mapping New
Mapping name it as m_FLATFILE_TO_TABLE Click on ok
2. Drag and drop emp from RRITEC_FLATFILES
3. Drag and drop Temp from RRITEC_TARGET_MODEL
4. Drag and drop Filter Transformation from Components pane
5. Drag and drop Deptno from emp to filter
6. Connect empno,sal,deptno from emp to temp

7. Select Filter object → In filter Properties pane develop the filter condition to get deptno 10 and 20


8. Click on save → Run → observe output

Exercise: please complete \02 ODI\04 ORACLE REFERENCE\ Flat File to a Table.docx

14. Create a Mapping using Split


1. A Split is a Selector component that divides a flow into two or more flows based on if-then-else conditions.
2. It is a new feature in ODI 12C
3. It is equivalent to ROUTER transformation in Informatica
4. It can be used in place of multiple conditions. The same row may be passed to multiple
flows
5. We can capture rejected records in to a table by selecting Remainder option
Exercise 1: Splitting Sales Non-Sales and Rookies data
The exercises in this lab are designed to walk the student through the process of using a Split
transformation.
Objectives

After completing the lab, the student will be able to:
- Understand the functionality of the Split transformation.
- Apply the Split in a mapping where multiple filter conditions are required.
Summary
In this lab, the student will develop a mapping where a row can be sent to one of three different
target tables. The mapping will use a Split transformation to group employee records based
on various criteria, then route each record to the appropriate target table.
SOURCE: ODS_EMPLOYEE
TARGETS: ODS_EMPLOYEE_SALES, ODS_EMPLOYEE_NON_SALES, ODS_EMPLOYEE_ROOKIE

The conditions for routing the records are as follows:
1. ODS_EMPLOYEE_SALES should contain records for employees where the TYPE_CODE column contains the value SALES.
2. ODS_EMPLOYEE_NON_SALES should contain records for employees where the TYPE_CODE column contains the value FIN, ADMIN or MGR.
3. ODS_EMPLOYEE_ROOKIE should contain records for employees whose start date (DATE_HIRED column) is less than 2 years from today's date.
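These routing rules behave like a chain of independent conditions, so (as noted in point 4 above) one row can reach more than one flow. A plain-Python sketch of the rules — the field names and the year-based "rookie" test are simplifying assumptions, not ODI code:

```python
# Sketch of the Split routing rules (pure Python, no ODI involved).
# DATE_HIRED is simplified to a hire year; 2014 matches the document date.
def route(record, current_year=2014):
    targets = []
    if record["TYPE_CODE"] == "SALES":
        targets.append("ODS_EMPLOYEE_SALES")
    if record["TYPE_CODE"] in ("FIN", "ADMIN", "MGR"):
        targets.append("ODS_EMPLOYEE_NON_SALES")
    if current_year - record["HIRE_YEAR"] < 2:  # "rookie": hired < 2 years ago
        targets.append("ODS_EMPLOYEE_ROOKIE")
    return targets

# A recently hired sales employee lands in two targets at once.
print(route({"TYPE_CODE": "SALES", "HIRE_YEAR": 2013}))
# → ['ODS_EMPLOYEE_SALES', 'ODS_EMPLOYEE_ROOKIE']
```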
Process:
1. Create a new mapping with the name of m_Split
2. Drag and drop ODS_EMPLOYEE from RRITEC_ TARGET_MODEL
3. Drag and drop ODS_EMPLOYEE_SALES, ODS_EMPLOYEE_NON_SALES,
ODS_EMPLOYEE_ROOKIE from RRITEC_TARGET_MODEL
4. Drag and drop Split Component

5. Drag and Drop Type_code Column from ODS_EMPLOYEE table to Split


6. Select Split transformation → under output connector ports change Names, Connected To
and Expression as shown below

7. Drag and Drop all the columns one by one from ODS_EMPLOYEE table to
ODS_EMPLOYEE_SALES
8. Drag and Drop all the columns one by one from ODS_EMPLOYEE table to
ODS_EMPLOYEE_NON_SALES
9. Drag and Drop all the columns one by one from ODS_EMPLOYEE table to
ODS_EMPLOYEE_ROOKIE

10. Click on Validate → Save → Run

Exercise: From the emp table, load deptno 10, 20 and 30 data into different tables as shown
below

15. Create a Mapping using Joiner


1. A join is a selector component that creates a join between multiple flows. The attributes
of all flows are combined as the attributes of the join component.
2. The following join types can be created.
1. Inner Join (No check boxes selected)
2. Left Outer Join (Left data store checkbox selected)
3. Right Outer Join (Right data store checkbox selected)
4. Full Outer Join (Both data store checkboxes selected)
5. Cross Join (Cross checkbox selected)
6. Natural Join (Natural checkbox selected)
3. A NATURAL JOIN is a JOIN operation that creates an implicit join clause for you based
on the common columns in the two tables being joined. Common columns are columns
that have the same name in both tables.

Example : SELECT * FROM EMP NATURAL JOIN DEPT


4. NATURAL JOIN was introduced in ODI 11G.
5. To create a natural join we must select the Generate ANSI Syntax option
6. A natural join does not require a join condition
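The join types above map directly onto SQL join syntax. A sketch through Python's sqlite3 (which supports INNER, LEFT OUTER and NATURAL joins; the tiny EMP/DEPT data is an assumption for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO INTEGER, DEPTNO INTEGER)")
conn.execute("CREATE TABLE DEPT (DEPTNO INTEGER, DNAME TEXT)")
conn.executemany("INSERT INTO EMP VALUES (?, ?)", [(1, 10), (2, 20), (3, 99)])
conn.executemany("INSERT INTO DEPT VALUES (?, ?)", [(10, 'ACC'), (20, 'RES')])

# Inner join (no checkboxes): only matching DEPTNOs survive.
inner = conn.execute(
    "SELECT E.EMPNO, D.DNAME FROM EMP E JOIN DEPT D ON E.DEPTNO = D.DEPTNO "
    "ORDER BY E.EMPNO").fetchall()
print(inner)  # → [(1, 'ACC'), (2, 'RES')]

# Left outer join (left checkbox): EMP rows without a DEPT match are kept.
left = conn.execute(
    "SELECT E.EMPNO, D.DNAME FROM EMP E LEFT JOIN DEPT D ON E.DEPTNO = D.DEPTNO "
    "ORDER BY E.EMPNO").fetchall()
print(left)  # → [(1, 'ACC'), (2, 'RES'), (3, None)]

# Natural join: the ON clause is implied by the shared DEPTNO column name.
nat = conn.execute(
    "SELECT EMPNO, DNAME FROM EMP NATURAL JOIN DEPT ORDER BY EMPNO").fetchall()
print(nat)  # → [(1, 'ACC'), (2, 'RES')]
```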
Exercise 1: Join EMP and DEPT tables
Step1: Create Target Table
1. Go to SQL Developer login as TDBU user
2. Create a table
CREATE TABLE JOINER_EMP
(
EMPNO  NUMBER,
JOB    VARCHAR2(20),
SAL    NUMBER,
DEPTNO NUMBER,
DNAME  VARCHAR2(20)
);


3. Reverse Engineer JOINER_EMP table into RRITEC_TARGET_MODEL


Step2: Create Mapping
1. Create a mapping with the name of m_JOINER_EMP
2. Drag and drop Emp, Dept from RRITEC_SOURCE_MODEL
3. Drag and drop JOINER_EMP from RRITEC_TARGET_MODEL
4. Drag and drop Join Component into work area
5. Drag and drop deptno from emp and dept tables to Join component
6. Connect Empno, Job and Sal from emp table to JOINER_EMP target table
7. Connect Deptno, Dname from Dept table to JOINER_EMP target table


8. Click on Join Component → select the required type of join (by default it is Inner Join)


9. Click on Validate and save


10. Run the mapping and observe output
Exercise 2: Join EMP and DEPT tables using Right outer Join
1. In above mapping select Joiner component Select Dept Check box

2. Select target table Select Truncate_Target_table option as true

3. Validate and save


4. Run the mapping and observe output.
16. Create a Mapping using Lookup
1. A Lookup component uses one source as the driving table and looks up matching rows in another data store.

2. Lookups appear as compact graphical objects in the interface sources diagram.


3. Choose how each lookup is generated:
a. SQL left outer join in the FROM clause: The lookup source will be left outer joined with the driving source in the FROM clause of the generated SQL.
b. SQL expression in the SELECT clause: A SELECT-FROM-WHERE statement for the lookup source will be embedded into the SELECT clause for the driving source.
4. Lookup Works similar to Join
5. Benefits of using lookups
a. Simplifies the design and readability of interfaces using lookups
b. Enables optimized code for execution
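Generation option (b) embeds the lookup as a scalar subquery in the SELECT clause of the driving source. A sketch through Python's sqlite3 (sample data is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO INTEGER, DEPTNO INTEGER)")
conn.execute("CREATE TABLE DEPT (DEPTNO INTEGER, DNAME TEXT)")
conn.execute("INSERT INTO EMP VALUES (1, 10)")
conn.execute("INSERT INTO DEPT VALUES (10, 'ACCOUNTING')")

# The lookup source (DEPT) appears as a scalar subquery in the SELECT
# clause of the driving source (EMP) — generation option (b) above.
rows = conn.execute("""
    SELECT E.EMPNO,
           (SELECT D.DNAME FROM DEPT D WHERE D.DEPTNO = E.DEPTNO) AS DNAME
    FROM EMP E
""").fetchall()
print(rows)  # → [(1, 'ACCOUNTING')]
```

Option (a) would instead produce the LEFT JOIN form shown in the Joiner chapter.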
Exercise 1: Lookup Dname and Loc from Dept Table
Step1: Create Target Table
1. Go to SQL Developer login as TDBU user
2. Create a table
CREATE TABLE LKP_EMP_DNAME_LOC
(
EMPNO  NUMBER,
JOB    VARCHAR2(20),
SAL    NUMBER,
DEPTNO NUMBER,
DNAME  VARCHAR2(20),
LOC    VARCHAR2(30)
);

3. Reverse Engineer LKP_EMP_DNAME_LOC table into RRITEC_TARGET_MODEL


Step2: Create Mapping
1. Create a mapping with the name of m_LOOKUP_DNAME_LOC
2. Drag and drop Emp,Dept from RRITEC_SOURCE_MODEL
3. Drag and drop LKP_EMP_DNAME_LOC from RRITEC_TARGET_MODEL
4. Drag and Drop Lookup from components pane
5. Drag and drop deptno from driving table emp on to Lookup
6. Drag and drop deptno from Lookup table Dept on to Lookup
7. Connect Columns Empno,Job,Sal,Deptno from emp to target LKP_EMP_DNAME_LOC
8. Connect Columns Dname ,Loc from Dept to target LKP_EMP_DNAME_LOC


9. Click on Validate → Click on Save → Click on Run → observe output

17. Create a Mapping using Sort


1. A sort is a selector component (see Selector Components) that sorts the incoming flow
based on a list of attributes.
2. The functionality is equivalent to the SQL ORDER BY clause.
3. Sort condition fields can be added by dragging source attributes onto the sort
component.
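Since the Sort component compiles to ORDER BY, the exercise below (sort by deptno, then empno) reduces to a single clause. A sketch through Python's sqlite3 with assumed sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (EMPNO INTEGER, DEPTNO INTEGER)")
conn.executemany("INSERT INTO EMP VALUES (?, ?)",
                 [(7902, 20), (7369, 20), (7499, 10)])

# The Sort component's attribute list becomes the ORDER BY column list,
# in the order the attributes were dropped onto the component.
rows = conn.execute(
    "SELECT DEPTNO, EMPNO FROM EMP ORDER BY DEPTNO, EMPNO").fetchall()
print(rows)  # → [(10, 7499), (20, 7369), (20, 7902)]
```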

Exercise 1: Sorting emp data based on Deptno and empno


Step1: Create Target Table using ODI
1. Expand RRITEC_SOURCE_MODEL Right Click on EMP table Click on Duplicate
2. Drag and drop Copy of EMP(EMP) onto RRITEC_TARGET_MODEL
3. Right click on Copy of EMP(EMP) Click on open provide name, Alias and
Resource Name as emp_sort

4. Click on save Close


Step2: Create Mapping
1. Create a mapping with the name of m_SORT_EMP
2. Drag and drop Emp from RRITEC_SOURCE_MODEL
3. Drag and drop EMP_SORT from RRITEC_TARGET_MODEL
4. Drag and drop Sort Component into work area
5. Drag and drop Deptno ,empno from EMP table to SORT
6. Connect all the ports one by one from EMP table to SORT_EMP


7. Click on Physical tab → Select target table EMP_SORT → Select CREATE_TARGET_TABLE value as True

8. Validate → Save → Run → Observe output


18. Create a Mapping using Aggregate


1. Use to group and aggregate attributes using aggregate functions, such as average, count, maximum, sum, and so on.
2. ODI automatically selects attributes without aggregation functions to be used as group-by attributes.
3. You can override this by using the Is Group By Column and Manual Group By Clause properties.
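Point 2 means a column without an aggregate function (DEPTNO) becomes the group-by key, while the aggregated column (SUM(SAL)) does not. A sketch through Python's sqlite3 with assumed sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (DEPTNO INTEGER, SAL INTEGER)")
conn.executemany("INSERT INTO EMP VALUES (?, ?)",
                 [(10, 1000), (10, 2000), (20, 3000)])

# DEPTNO has no aggregate expression, so it lands in GROUP BY; SAL carries
# the SUM() aggregate, matching the deptno-wise salary exercise below.
rows = conn.execute(
    "SELECT DEPTNO, SUM(SAL) FROM EMP GROUP BY DEPTNO ORDER BY DEPTNO"
).fetchall()
print(rows)  # → [(10, 3000), (20, 3000)]
```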
Exercise 1: Calculate deptno wise sal expenditure

Step1: Create Target Table


1. Go to SQL Developer login as TDBU user
2. Create a table
CREATE TABLE AGG_EMP_DEPTNO_WISE_SAL
(
DEPTNO NUMBER,
SAL    NUMBER
);


3. Reverse Engineer AGG_EMP_DEPTNO_WISE_SAL table into RRITEC_TARGET_MODEL
Step2: Create Mapping
1. Create a mapping with the name of m_AGG_EMP_DEPTNO_WISE_SAL
2. Drag and drop Emp from RRITEC_SOURCE_MODEL
3. Drag and drop AGG_EMP_DEPTNO_WISE_SAL from RRITEC_TARGET_MODEL
4. Drag and drop Sort and Aggregate Components into work area
5. Drag and drop Deptno from EMP table to SORT
6. Drag and drop Deptno,sal from EMP table to Aggregate
7. Select Aggregate and change expression as SUM(EMP.SAL)

8. Drag and drop Deptno,sal from EMP table to AGG_EMP_DEPTNO_WISE_SAL


9. Validate → Save → Run → Observe output

19. Create a Mapping using Distinct


1. A distinct component is a projector component that projects a subset of attributes in the
flow.
2. The values of each row have to be unique; the behavior follows the rules of the SQL
DISTINCT clause.
3. It is a new feature in ODI 12C
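The component behaves exactly like SELECT DISTINCT on the projected attributes. A sketch through Python's sqlite3, using the Region_country sample data from the exercise below:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE REGION_COUNTRY (COUNTRY_ID INTEGER, COUNTRY TEXT)")
conn.executemany("INSERT INTO REGION_COUNTRY VALUES (?, ?)",
                 [(1, 'USA'), (1, 'USA'), (1, 'USA'), (1, 'USA'),
                  (2, 'India'), (2, 'India')])

# The Distinct component projects a subset of attributes and de-duplicates
# whole rows, following the rules of SQL DISTINCT.
rows = conn.execute(
    "SELECT DISTINCT COUNTRY_ID, COUNTRY FROM REGION_COUNTRY "
    "ORDER BY COUNTRY_ID").fetchall()
print(rows)  # → [(1, 'USA'), (2, 'India')]
```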
Exercise 1: Capturing country code and name
Step1: Create Flat File Source Data
1. Go to path C:\Oracle\Middleware\Oracle_Home\odi\demo\RRITEC (if this path is not available, create it)


2. Create a notepad file with below data and save it as Region_country.txt Close file
REGION_ID,REGION,COUNTRY_ID,COUNTRY
20,South,1,USA
21,West,1,USA
22,East Coast,1,USA
23,Mid West,1,USA
24,South India,2,India
25,North India,2,India

3. Right Click on Model RRITEC_FLATFILES Click on New Data Store


4. In Definition tab provide Name, Alias, Resource Name → Click on save → In Files Tab
provide File Format, Heading (No of Rows), Field Separator


5. In Attributes tab click on Reverse Engineering Save

Step2: Create Target Table


1. Go to SQL Developer login as TDBU user
2. Create a table
CREATE TABLE DISTINCT_COUNTRY
(
COUNTRY_ID   NUMBER,
COUNTRY_NAME VARCHAR2(50)
);


3. Reverse Engineer DISTINCT_COUNTRY table into RRITEC_TARGET_MODEL


Step3: Create Mapping
1. Create a mapping with the name of m_DISTINCT_COUNTRY
2. Drag and drop Region_Country from RRITEC_FLATFILE_MODEL
3. Drag and drop DISTINCT_COUNTRY from RRITEC_TARGET_MODEL
4. Drag and drop Distinct Component into work area
5. Drag and drop Country_id, Country from Region_Country table to DISTINCT
6. Drag and drop Country_id, Country from DISTINCT component to DISTINCT_COUNTRY

7. Validate → Save → Run → Observe output

20. Create a Mapping using Set Component


1. A set component is a projector component that combines multiple input flows into one output flow by using set operations such as UNION, INTERSECT, EXCEPT, MINUS, and others.
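With the UNION operator (used in the exercise below), rows that appear in both input flows are emitted only once; UNION ALL would keep duplicates. A sketch through Python's sqlite3 with assumed sample rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP1020 (DEPTNO INTEGER, ENAME TEXT)")
conn.execute("CREATE TABLE EMP2030 (DEPTNO INTEGER, ENAME TEXT)")
conn.executemany("INSERT INTO EMP1020 VALUES (?, ?)", [(10, 'KING'), (20, 'SMITH')])
conn.executemany("INSERT INTO EMP2030 VALUES (?, ?)", [(20, 'SMITH'), (30, 'BLAKE')])

# UNION removes the duplicate (20, 'SMITH') shared by both flows.
rows = conn.execute("""
    SELECT DEPTNO, ENAME FROM EMP1020
    UNION
    SELECT DEPTNO, ENAME FROM EMP2030
    ORDER BY DEPTNO
""").fetchall()
print(rows)  # → [(10, 'KING'), (20, 'SMITH'), (30, 'BLAKE')]
```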


Exercise 1: Combine using UNION set operator


Step1: Create Tables to Utilize as Source Data
1. Connect to source database Schema Scott
CREATE TABLE EMP1020 AS SELECT DEPTNO,ENAME,JOB, SAL FROM EMP
WHERE DEPTNO IN ( 10,20 );
CREATE TABLE EMP2030 AS SELECT DEPTNO,ENAME,JOB, SAL FROM EMP
WHERE DEPTNO IN ( 20,30 );
2. Reverse Engineer emp1020 and emp2030 tables into RRITEC_SOURCE_MODEL
Step2: Create Target Table
1. Go to SQL Developer login as TDBU user
2. Create a table
CREATE TABLE EMP_UNION102030 AS
SELECT DEPTNO,ENAME,JOB, SAL FROM SCOTT.EMP WHERE 1 = 2 ;

3. Reverse Engineer EMP_UNION102030 table into RRITEC_TARGET_MODEL


Step3: Create Mapping
1. Create a mapping with the name of m_EMP_UNION102030
2. Drag and drop emp1020 and emp2030 from RRITEC_SOURCE_MODEL
3. Drag and drop EMP_UNION102030 from RRITEC_TARGET_MODEL
4. Drag and drop SET Components into work area

5. Select SET and change properties as shown

6. Drag and drop DEPTNO,ENAME,JOB, SAL from EMP1020 table to SET


7. Drag and drop DEPTNO,ENAME,JOB, SAL from EMP2030 table to corresponding
columns of SET
8. Drag and drop DEPTNO, ENAME, JOB, SAL from the SET component to EMP_UNION102030

9. Validate → Save → Run → Observe output


21. User Functions

1. A user function is a cross-technology macro defined in a lightweight syntax, used to create an alias for a recurrent piece of code.
2. Functions are two Types
a. Global functions can be used anywhere in any project.
b. Project functions can be used within their project.
3. A simple example formula:
a. If <param1> is null then <param2> else <param1> end if
b. Can be implemented differently in different technologies:
1. Oracle: nvl(<param1>, <param2>)
2. Other technologies: case when <param1> is null then <param2> else <param1> end
c. And could be aliased to: RemoveNull(<param1>, <param2>)
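Conceptually, a user function is one alias with a per-technology template whose placeholders are substituted as text. A plain-Python sketch of that mechanism (the dictionary and function names are illustrative assumptions, not ODI internals):

```python
# One alias, several per-technology implementations; ODI substitutes the
# $(param) placeholders textually when it generates code for a technology.
IMPLEMENTATIONS = {
    "ORACLE": "nvl($(param1), $(param2))",
    "OTHER":  "case when $(param1) is null then $(param2) else $(param1) end",
}

def expand(technology, p1, p2):
    template = IMPLEMENTATIONS[technology]
    return template.replace("$(param1)", p1).replace("$(param2)", p2)

print(expand("ORACLE", "COMM", "0"))  # → nvl(COMM, 0)
print(expand("OTHER", "COMM", "0"))
# → case when COMM is null then 0 else COMM end
```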

Exercise 1: Converting dname Descriptions into codes


Step1: Create User Function
1. Under RRITEC_PROJECT right click on user functions Click on New User
Function


2. Provide Name ,Group and Syntax

3. Click on Implementations → Click on Add Implementation → Provide below syntax and
select technology → Click on ok → Save
DECODE($(DNAME),'ACCOUNTING','ACC','RESEARCH','R and
D','OPERATIONS','OPT','SALES')


Step2: Utilizing User Function in a mapping


1. Expand RRITEC_SOURCE_MODEL → Duplicate dept table → Drag and drop copy of
dept onto RRITEC_TARGET_MODEL → Right click on copy of dept → open → Change
name, alias and resource name as DEPT_CODES

2. Create a mapping with the name of m_user_function, then drag and drop dept as source and dept_codes as target


3. Connect deptno ,loc of source and target
4. Select dname column of target click on expression and develop as shown below


5. Click on physical tab select target table under integration knowledge module
select create table property as true
6. Click on validate save run and observe output

Exercise 2: Converting a given phone number into smart phone format


Step1: Creating Flat File and reverse engineer into Source Flat File Model
1. Develop a flat file as shown and save as PHONE.txt

2. Reverse engineer into source flat file model



Step2: Creating Target table and reverse engineer into Target Model
1. Develop a target table as shown below
create table RR_PHONE_FORMAT(ENAME VARCHAR2(20),PHONE VARCHAR2(20))

2. Reverse engineer into target model


Step3: Creating project level user function
1. Under RRITEC_PROJECT right click user functions new user function
2. Provide name ,group and syntax as shown below

3. Click on Implementations tab → Click on Add Implementation → provide below formula and select technology
'(' || SUBSTR($(PHONE),1,3) || ') - ' || SUBSTR($(PHONE),4,3) || '-' || SUBSTR($(PHONE),7,6)

4. Click on save and close
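The formula in step 3 is plain Oracle SUBSTR plus concatenation. SQLite's substr shares the (string, start, length) signature, so the formula can be checked directly through Python's sqlite3, using a sample 10-digit number:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Apply the user-function body to a literal phone number: (xxx) - xxx-xxxx.
row = conn.execute("""
    SELECT '(' || substr('8801408841', 1, 3) || ') - '
               || substr('8801408841', 4, 3) || '-'
               || substr('8801408841', 7, 6)
""").fetchone()
print(row[0])  # → (880) - 140-8841
```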


Step4: using user function in interface
1. Create a mapping with the name of m_PHONE_FORMAT
2. Drag and drop phone source table from RRITEC_FILE_MODEL
3. Drag and drop RR_PHONE_FORMAT target table from RRITEC_TARGET_MODEL
4. Connect all corresponding columns
5. Connect Ename of source to ENAME of target
6. Select target phone column and call user function

7. Select proper LKM and IKM


8. Save run


22. Variables

1. A variable is an ODI object which stores a typed value, such as a number, string or date.
2. Variables are used to customize transformations as well as to implement control
structures, such as if-then statements and loops, into packages.
3. Variables are two types
a. Global
b. Project Level
4. To refer to a variable, prefix its name according to its scope:
a. Global variables: GLOBAL.<variable_name>
b. Project variables: <project_code>.<variable_name>
5. Variables are used either by string substitution or by parameter binding.
a. Substitution: #<project_code>.<variable_name>
b. Binding: :<project_code>.<variable_name>
6. Variable steps are of four types:
a. Declare
b. Set
c. Evaluate
d. Refresh
7. Variables can be used in packages in several different ways, as follows:
a. Declaration: When a variable is used in a package (or in certain elements of the
topology which are used in the package), it is strongly recommended that you
insert a Declare Variable step in the package. This step explicitly declares the
variable in the package.
b. Refreshing: A Refresh Variable step allows you to re-execute the command or
query that computes the variable value.
c. Assigning: A Set Variable step of type Assign sets the current value of a
variable.
d. Incrementing: A Set Variable step of type Increment increases or decreases a
numeric value by the specified amount.


e. Conditional evaluation: An Evaluate Variable step tests the current value of a


variable and branches depending on the result of the comparison.
8. Variables also can be used in expressions of mappings, procedures and so forth.
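The substitution vs. binding distinction from point 5 is the standard one: substitution pastes the value into the SQL text before execution, while binding keeps a placeholder and passes the value separately. A sketch through Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMP (DEPTNO INTEGER)")
conn.executemany("INSERT INTO EMP VALUES (?)", [(10,), (20,)])

value = 10  # stands in for a project variable like #RRITEC_PROJECT.DEPTNO10

# Substitution (#var): the value becomes part of the SQL text itself.
substituted = "SELECT COUNT(*) FROM EMP WHERE DEPTNO = " + str(value)
s_count = conn.execute(substituted).fetchone()[0]
print(s_count)  # → 1

# Binding (:var): the SQL keeps a placeholder; the value travels separately.
b_count = conn.execute(
    "SELECT COUNT(*) FROM EMP WHERE DEPTNO = ?", (value,)).fetchone()[0]
print(b_count)  # → 1
```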

Exercise 1: Filter DEPTNO10 using default value/static assignment


Step 1: Create a static variable
1. Under RRITEC_PROJECT Right click on Variables new variable name it as
DEPTNO10 Data type as Numeric and default value as 10

2. Click on Save Click on Close


Step 2: Using static variable as filter
1. Open m_FLATFILE_TO_TABLE mapping select filter condition change condition as
EMP.deptno =#RRITEC_PROJECT.DEPTNO10


2. Truncate target table


3. Click on validate Click on save click on Run

Exercise 2: Filter MIN DEPTNO using SQL Query result


Step 1: Create a refreshable variable
1. Under RRITEC_PROJECT Right click on Variables new variable name it as
MINDEPTNO Data type as Numeric
2. Click on refreshing tab → under SQL query type: select min(deptno) from scott.emp →
select schema as source → Click on test query

3. Save close
Step 2: Using variable as filter


1. Open m_FLATFILE_TO_TABLE mapping select filter condition change condition


as EMP.deptno =#RRITEC_PROJECT.MINDEPTNO

2. Truncate target table


3. Click on validate Click on save click on Run
4. Notice that no data is loaded into the target because the variable has no value. From this we understand that a variable's refresh query does not execute automatically when the mapping runs on its own.
Exercise 3: Filter MIN DEPTNO using refresh variable type of package
Step 1: Creating package
1. Under RRITEC_PROJECT Right click on package new package name it as
PKG_VARIABLE_MAPPING
2. Drag and drop MINDEPTNO variable and m_FLATFILE_TO_TABLE mapping
3. From the toolbar click on next step on success drag and drop from variable to
mapping

4. Select variable select type as refresh variable


5. Click on save click on run and observe that target table loaded with 10 deptno data


23. Sequences
1. A Sequence is a variable that increments itself automatically each time it is used.
2. It is mainly useful to generate surrogate Keys
3. It is equivalent to Sequence generator transformation of informatica
4. A sequence can be created as a global sequence or in a project. Global sequences are common to all projects, whereas project sequences are only available in the project where they are defined.
5. Oracle Data Integrator supports three types of sequences:
1. Standard sequences: whose current values are stored in the Repository.
2. Specific sequences: whose current values are stored in an RDBMS table cell.
Oracle Data Integrator reads the value, locks the row (for concurrent updates)
and updates the row after the last increment.
3. Native sequence: that maps a RDBMS-managed sequence.
6. We can also use a database sequence directly, without using the ODI sequence types
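A "specific sequence" (type 2 above) keeps its current value in an ordinary table cell that is read, locked and updated on each increment. A minimal single-user sketch of that idea in Python's sqlite3 (table and function names are assumptions, and the locking that ODI performs for concurrent updates is omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# One row per sequence; the CURR cell holds the current value.
conn.execute("CREATE TABLE SEQ_TABLE (SEQ_NAME TEXT PRIMARY KEY, CURR INTEGER)")
conn.execute("INSERT INTO SEQ_TABLE VALUES ('EMPID_SEQUENCE', 0)")

def next_val(name):
    # Increment the stored value, then read it back — one logical NEXTVAL.
    conn.execute("UPDATE SEQ_TABLE SET CURR = CURR + 1 WHERE SEQ_NAME = ?", (name,))
    return conn.execute(
        "SELECT CURR FROM SEQ_TABLE WHERE SEQ_NAME = ?", (name,)).fetchone()[0]

vals = [next_val("EMPID_SEQUENCE") for _ in range(3)]
print(vals)  # → [1, 2, 3]
```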

Exercise 1: Create Sequence by using directly database sequence


Step1: Create a sequence in database
1. Open SQL Developer → connect to TDBU schema → execute below code
CREATE SEQUENCE EMPID_SEQUENCE
MINVALUE 1
MAXVALUE 9999
START WITH 1
INCREMENT BY 1;

Step2: Create Target Table


1. Open SQL Developer → connect to TDBU schema → execute below code
CREATE TABLE SEQ_EMP AS
SELECT 1 AS ROW_WID , EMP.* FROM SCOTT.EMP WHERE 1=2

2. Reverse Engineer and import table into RRITEC_TARGET_MODEL


Step3: Create Mapping
1. Create a mapping with the name of m_DATABASE_SEQ
2. Drag and drop emp from RRITEC_SOURCE_MODEL
3. Drag and drop SEQ_EMP from RRITEC_TARGET_MODEL
4. Connect all corresponding columns from EMP table to SEQ_EMP
5. Select ROW_WID and provide below expression


6. Validate → Save → Run → Observe output

Exercise 2: Create Sequence by using native sequence


Step1: Create Native Sequence
1. Under RRITEC_PROJECT Right click on sequence Click on New Sequence


2. Provide Name: NAT_SEQ_EMP_ROW_WID


3. Select Native sequence → select Schema TARGET_TDBU → Select native sequence name
as EMPID_SEQUENCE

4. Click on save
Step2: Using Native Sequence
1. In above mapping m_DATABASE_SEQ select row_wid column
2. Change expression as shown below


3. Click on save → Click on Run → observe output

Exercise 3: Create Sequence by using standard sequence


Step1: Create Standard Sequence

1. Under RRITEC_PROJECT Right click on sequence Click on New Sequence

2. Provide Name: STD_SEQ_EMP_ROW_WID provide Increment by 1


3. Select standard sequence

4. Click on save
Step2: Using Standard Sequence
1. In above mapping m_DATABASE_SEQ select row_wid column
2. Change expression as shown below


3. Go to Physical tab → select SEQ_EMP table → select TRUNCATE_TARGET_TABLE as true


4. Click on save Click on Run observe output
Exercise 4: Create Sequence by using Specific sequence
Step1: Create Specific Sequence
1. Under RRITEC_PROJECT Right click on sequence Click on New Sequence

2. Provide Name: SPECIFIC_SEQ_EMP_ROW_WID provide Increment by 1


3. Select Specific sequence

4. Click on save

Step2: Using Specific Sequence


1. In above mapping m_DATABASE_SEQ select row_wid column


2. Change expression as shown below

3. Go to Physical tab → select SEQ_EMP table → select TRUNCATE_TARGET_TABLE as true


4. Click on save Click on Run observe output
24. Procedures
1. A procedure is a sequence of commands/tasks executed by database engines, the operating system, or ODI tools.
2. A procedure can have options that control its behavior.
3. Procedures are reusable components that can be inserted into packages.

Procedure Examples:

1. Email Administrator procedure:


a. Uses the OdiSendMail ODI tool to send an administrative email to a user. The
email address is an option.
2. Clean Environment procedure:
a. Deletes the contents of the /temp directory using the OdiFileDelete tool
b. Runs DELETE statements on these tables in order: CUSTOMER, CITY,
REGION, and COUNTRY
3. Create and populate RDBMS table:
a. Run SQL statement to create an RDBMS table.

b. Run SQL statements to populate table with records.


4. Initialize Drive procedure:
a. Connect to a network drive using either a UNIX or Windows command
(depending on an option).
b. Create a /work directory on this drive.
5. Email Changes procedure:
a. Wait for 10 rows to be inserted into the INCOMING table.
b. Transfer all data from the INCOMING table to the OUTGOING table.
c. Dump the contents of the OUTGOING table to a text file.
d. Email this text file to a user.

Commands or ODI objects that can be used in ODI procedures:


1. SQL Statements
2. OS Commands
3. ODI Tools
4. JYTHON Programs
5. Variables
6. Sequences
7. User Functions
8. etc
Using ODI Tools
1. Example of ODI tools are
1. FILES Related

i. OdiFileAppend, OdiFileCopy, OdiFileDelete, OdiFileMove, OdiFileWait, OdiMkDir, OdiOutFile, OdiSqlUnload, OdiUnZip, OdiZip
2. Internet Related
i. OdiSendMail, OdiFtpGet, OdiFtpPut, OdiReadMail, OdiScpGet, OdiScpPut, OdiSftpGet, OdiSftpPut
3. .. etc (For more ODI TOOLS see package screen)

Let us see about OdiFileCopy


OdiFileCopy Syntax:
OdiFileCopy -DIR=<dir> -TODIR=<dest_dir> [-OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>]
OdiFileCopy -FILE=<file> -TOFILE=<dest_file>|-TODIR=<dest_dir> [-OVERWRITE=<yes|no>] [-RECURSE=<yes|no>] [-CASESENS=<yes|no>]
Examples
1. Copy the file "host" from the directory /etc to the directory /home:
a. OdiFileCopy -FILE=/etc/hosts -TOFILE=/home/hosts
2. Copy all *.csv files from the directory /etc to the directory /home and overwrite:
a. OdiFileCopy -FILE=/etc/*.csv -TODIR=/home -OVERWRITE=yes


3. Copy all *.csv files from the directory /etc to the directory /home while changing their
extension to .txt:
a. OdiFileCopy -FILE=/etc/*.csv -TOFILE=/home/*.txt -OVERWRITE=yes
4. Copy the directory C:\odi and its sub-directories into the directory C:\Program Files\odi:
a. OdiFileCopy -DIR=C:\odi "-TODIR=C:\Program Files\odi" -RECURSE=yes
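The behavior of example 2 above (copy all *.csv files with overwrite) can be sketched in plain Python with shutil and glob; temporary directories stand in for /etc and /home, and this is only an analogy to the ODI tool, not its implementation:

```python
import glob
import shutil
import tempfile
from pathlib import Path

# Temporary stand-ins for the source and destination directories.
src = Path(tempfile.mkdtemp())
dst = Path(tempfile.mkdtemp())
(src / "a.csv").write_text("1,2\n")
(src / "b.csv").write_text("3,4\n")

# Rough equivalent of: OdiFileCopy -FILE=<src>/*.csv -TODIR=<dst> -OVERWRITE=yes
for f in glob.glob(str(src / "*.csv")):
    shutil.copy(f, dst)  # shutil.copy overwrites an existing file by default

names = sorted(p.name for p in dst.iterdir())
print(names)  # → ['a.csv', 'b.csv']

# Clean up the temporary directories.
shutil.rmtree(src)
shutil.rmtree(dst)
```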

Exercise 1: Delete target table data


1. Under RRITEC_PROJECT Right Click on procedure Click on new procedure
2. Name it as Delete target table data Click on Tasks
3. Click on add Provide below information
1. Name: Delete EMP10
2. Log Counter: Delete
3. Technology: Oracle
4. Context: Global
5. Schema: Target_TDBU
6. Command: delete from <%=snpRef.getObjectName("L","EMP10","D")%>

4. Similarly, create two more tasks to delete from EMP20 and EMP30
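The commands for the two additional tasks follow the same pattern; only the table name in the snpRef call changes (a sketch, assuming the same Target_TDBU schema):

```sql
delete from <%=snpRef.getObjectName("L","EMP20","D")%>
delete from <%=snpRef.getObjectName("L","EMP30","D")%>
```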


5. Click on Save → Run → observe that the tables' data has been deleted


Exercise 2: Execute/Call a PL/SQL Procedure
Step1: Creating Table
1. Open SQL Developer → create the EMP_PROC table in the TDBU schema:
CREATE TABLE EMP_PROC (
  EMPNO NUMBER(10,0),
  ENAME VARCHAR2(30),
  SAL NUMBER(10,0)
);

Step 2: Create PL/SQL procedure


CREATE OR REPLACE PROCEDURE INSERT_RECORD_PROC
AS
BEGIN
  INSERT INTO EMP_PROC VALUES (101,'RAM',5000);
END;
/

Step3: Creating ODI Procedure


1. Under RRITEC_PROJECT → right-click on Procedures → click on New Procedure
2. Name it INSERT_RECORD_USING_PROCEDURE → click on Tasks
3. Click on Add → provide the below information:
a. Name: INSERT RECORD
b. Log Counter: Insert
c. Technology: Oracle
d. Context: Global
e. Schema: Target_TDBU
f. Command: BEGIN TDBU.INSERT_RECORD_PROC(); END;


4. Click on Save → Run → observe that the record is inserted in table EMP_PROC

Exercise: Create a target table with the name EMP_TOTALSAL_TAX with columns empno, ename, sal, comm, totalsal, and tax, and populate the totalsal and tax columns using a stored procedure:
Totalsal = sal + nvl(comm,0)
Tax = totalsal * 0.1
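One possible solution sketch for this exercise (the table layout, the procedure name LOAD_EMP_TOTALSAL_TAX, and the SCOTT source schema are illustrative assumptions, not from the notes):

```sql
CREATE TABLE EMP_TOTALSAL_TAX (
  EMPNO    NUMBER(10),
  ENAME    VARCHAR2(30),
  SAL      NUMBER(10),
  COMM     NUMBER(10),
  TOTALSAL NUMBER(10),
  TAX      NUMBER(12,2)
);

CREATE OR REPLACE PROCEDURE LOAD_EMP_TOTALSAL_TAX AS
BEGIN
  -- totalsal = sal + nvl(comm,0); tax = totalsal * 0.1
  INSERT INTO EMP_TOTALSAL_TAX (EMPNO, ENAME, SAL, COMM, TOTALSAL, TAX)
  SELECT EMPNO, ENAME, SAL, COMM,
         SAL + NVL(COMM, 0),
         (SAL + NVL(COMM, 0)) * 0.1
  FROM SCOTT.EMP;
END;
/
```

The procedure can then be called from an ODI procedure task exactly like INSERT_RECORD_PROC above.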
Exercise 3: Copy files from one folder to another folder using OdiFileCopy
Step1: Understanding Procedure Options and disabling the PL/SQL procedure
1. Open the above procedure INSERT_RECORD_USING_PROCEDURE
2. Click on Options → click on Add Option → provide the name as shown below


3. Click on Save → click on Tasks → uncheck Always Execute

Step2: Create a task to copy files


1. Create one more task with the name CopyFile
2. Provide the below expression:
OdiFileCopy -FILE=/Oracle/Middleware/Oracle_Home/odi/demo/RRITEC/*.txt -TODIR=/Oracle/Middleware/Oracle_Home/odi/demo -OVERWRITE=yes


3. Click on Save → Run → observe whether the files are copied


24. Packages
1. A package is an organized sequence of steps that makes up a workflow. Each step performs a small task, and the steps are combined to make up the package.
2. A simple package example:
a. This package executes three mappings, and then archives some files.
b. If one of the four steps fails, an email is sent to the administrator.


Exercise 1: Running two mappings and archiving .txt files


1. Under RRITEC_PROJECT → right-click on Packages → click on New Package
2. Name it pkg_MAPPINGS_ARCHIVE_FILES
3. Drag and drop the two mappings (m_ODS_EMPLOYEE, m_FLATFILE_TO_TABLE) and OdiFileMove
4. Connect all steps as shown below
5. Select OdiFileMove and change Filename and Target Directory as shown below

6. Save and run the package


25. Scenarios
1. A scenario is designed to put a source component (mapping, package, procedure, variable) into production.
2. When a component's development is finished and tested, you can generate the scenario corresponding to its actual state. This operation takes place in the Designer Navigator.
3. The scenario code (the language generated) is frozen, and all subsequent modifications of the components which contributed to creating it will not change it in any way.
4. It is possible to generate scenarios for packages, procedures, mappings, or variables.
5. Scenarios generated for procedures, mappings, or variables are single-step scenarios that execute the procedure or mapping, or refresh the variable.
6. Once generated, the scenario is stored inside the work repository. The scenario can be exported and then imported into another repository (remote or not) and used in different contexts. A scenario can only be created from a development work repository, but can be imported into both development and execution work repositories.

Exercise 1: Creating Scenario, exporting Scenario and importing scenario


1. Right-click on the package pkg_MAPPINGS_ARCHIVE_FILES → click on Generate Scenario
2. Select the scenario PKG_MAPPINGS_ARCHIVE_FILES version 001 → click on Export → save onto the desktop
3. Assume that by accident we deleted the scenario (or we need to import it into another environment)
4. Then right-click on Scenarios → click on Import → point to the previously exported file → click on OK


26. Version Control


1. Oracle Data Integrator provides a comprehensive system for managing and
safeguarding changes.
2. It also allows these objects to be backed up as stable checkpoints, and later restored
from these checkpoints.
3. These checkpoints are created for individual objects in the form of versions, and for
consistent groups of objects in the form of solutions.
Note: Version management is supported for master repositories installed on database engines
such as Oracle, Hypersonic SQL, and Microsoft SQL Server.
4. A version is a backup copy of an object. It is checked in at a given time and may be
restored later.
5. Versions are saved in the master repository. They are displayed in the Version tab of the
object window.
6. The following objects can be checked in as versions
1. Projects, Folders
2. Packages, Scenarios
3. Mappings (including Reusable Mappings), Procedures, Knowledge Modules
4. Sequences, User Functions, Variables
5. Models, Model Folders
6. Solutions
7. Load Plans
Exercise 1: Creating version on package/procedure...Etc, modifying, creating one
more version, restoring to first version, comparing versions
1. Right-click on the package pkg_MAPPINGS_ARCHIVE_FILES → click on Version → click on Create Version → click on OK

2. Open the package pkg_MAPPINGS_ARCHIVE_FILES → change the filename as shown below
3. Save → close
4. Right-click on the package pkg_MAPPINGS_ARCHIVE_FILES → click on Version → click on Create Version → click on OK
5. Now if you want to restore the initial version (1.0.0.0) → right-click on the package pkg_MAPPINGS_ARCHIVE_FILES → click on Restore → select version 1.0.0.0 → click on OK
6. Open the package → observe the initial code
27. Modifying Knowledge Modules
1. Knowledge Modules are templates of code that define integration patterns and their implementation.
2. They are usually written to follow data integration best practices, but can be modified for project-specific requirements.
3. There are six types:


4. RKM (Reverse-engineering Knowledge Modules) are used to perform a customized reverse-engineering of data models for a specific technology. These KMs are used in data models.
5. LKM (Loading Knowledge Modules) are used to extract data from source systems (files, middleware, databases, etc.). These KMs are used in mappings.
6. JKM (Journalizing Knowledge Modules) are used to create a journal of data modifications (insert, update and delete) of the source databases to keep track of the changes. These KMs are used in data models and for Changed Data Capture.
7. IKM (Integration Knowledge Modules) are used to integrate (load) data into the target tables. These KMs are used in mappings.
8. CKM (Check Knowledge Modules) are used to check that constraints on the sources and targets are not violated. These KMs are used in data model static checks and in flow checks.
9. SKM (Service Knowledge Modules) are used to generate the code required for creating data services. These KMs are used in data models.
Mainly used KMs
1. When processing happens between two data servers, a data transfer KM is required.
a. Before integration (Source → Staging Area): requires an LKM, which is always multi-technology
b. At integration (Staging Area → Target): requires a multi-technology IKM
2. When processing happens within a data server, it is entirely performed by the server.
a. A single-technology IKM is required.
3. LKMs and IKMs can be used in four possible ways


4. Normally we never create new KMs, but sometimes we may need to modify the existing KMs.
5. While modifying KMs, duplicate existing steps and modify them. This prevents typos in the syntax of the odiRef methods.
Exercise 1: Incremental Loading
1. Nonexistent rows are inserted; already existing rows are updated.
2. If we delete a record in the source, it won't be deleted in the target.
3. For incremental update, the target must have a unique/primary key column.
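Conceptually, the incremental update pattern behaves like a MERGE keyed on the primary key. A simplified sketch (the actual IKM stages data in a flow table and flags rows with IND_UPDATE rather than issuing a literal MERGE; the EMP_INCR_MKT target is created in the next step):

```sql
-- Simplified picture of incremental update, not the literal KM code
MERGE INTO TDBU.EMP_INCR_MKT t
USING (SELECT EMPNO, ENAME, SAL FROM SCOTT.EMP) s
ON (t.EMPNO = s.EMPNO)
WHEN MATCHED THEN
  UPDATE SET t.ENAME = s.ENAME, t.SAL = s.SAL   -- existing rows are updated
WHEN NOT MATCHED THEN
  INSERT (EMPNO, ENAME, SAL)                    -- new rows are inserted
  VALUES (s.EMPNO, s.ENAME, s.SAL);
```

Note that nothing in this pattern deletes target rows, which is why a source delete is not propagated.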
Step1: Create target table
1. Open SQL Developer → create the EMP_INCR_MKT table in the TDBU schema:
CREATE TABLE EMP_INCR_MKT
(
  EMPNO NUMBER PRIMARY KEY,
  ENAME VARCHAR2(30),
  SAL NUMBER
);

2. Reverse-engineer and import the table EMP_INCR_MKT into RRITEC_TARGET_MODEL


Step2: Importing the required KM
1. Under RRITEC_PROJECT → under Knowledge Modules → right-click on Integration (IKM)
2. Click on Import Knowledge Modules → provide the xml-reference path and select IKM Oracle Incremental Update


3. Click on OK
Step3: Create a mapping for incremental loading
1. Create a mapping with the name m_EMP_INCR_LOADING
2. Drag and drop EMP from RRITEC_SOURCE_MODEL
3. Drag and drop EMP_INCR_MKT from RRITEC_TARGET_MODEL
4. Connect the corresponding columns
5. In the Logical tab → select the target table → set the integration type to Incremental Update


6. Click on the Physical tab → select the target table EMP_INCR_MKT → select the Integration Knowledge Module IKM Oracle Incremental Update → set FLOW_CONTROL = FALSE
7. Click on Validate → click on Save → click on Run


Exercise 2: Modifying a Knowledge Module and adding an audit table
1. Under RRITEC_PROJECT → under Knowledge Modules → expand Integration (IKM)
2. Right-click on the IKM Oracle Incremental Update Knowledge Module → select Duplicate Selection.
3. Open the copy of IKM Oracle Incremental Update and rename it to IKM Oracle Incremental Update Audit
4. The objective here is to add two steps that will:


a. Create an audit table (and only generate a warning if the table already exists). Name the table after the target table; simply add _AUDIT at the end of the table name.
b. In the audit table, insert three columns: the primary key of the record being processed by the mapping, a timestamp, and an indicator that tells us what operation was done on the record (I = Insert, U = Update).
5. Click on Tasks → click on Add → name it Create Audit table → select Ignore Errors
6. Select the target technology as Oracle → provide the target command as shown below → OK

Create table <%=odiRef.getTable("L", "TARG_NAME", "A")%>_AUDIT
(
<%=odiRef.getColList("", "[COL_NAME]\t[DEST_CRE_DT]", ",\n\t", "", "PK")%>,
AUDIT_DATE DATE,
AUDIT_INDICATOR VARCHAR2(1)
)
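For the EMP_INCR_MKT target, this template would expand at runtime to something like the following (assuming TDBU resolves as the local schema and EMPNO is the only primary-key column):

```sql
Create table TDBU.EMP_INCR_MKT_AUDIT
(
  EMPNO NUMBER,
  AUDIT_DATE DATE,
  AUDIT_INDICATOR VARCHAR2(1)
)
```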


7. Similarly, create one more task to insert records → click on Add → name it Insert into Audit table → select Ignore Errors
8. Select the target technology as Oracle → provide the target command as shown below → OK

Insert into <%=odiRef.getTable("L", "TARG_NAME", "A")%>_AUDIT
(
<%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "PK")%>,
AUDIT_DATE,
AUDIT_INDICATOR
)
select <%=odiRef.getColList("", "[COL_NAME]", ",\n\t", "", "PK")%>,
sysdate,
IND_UPDATE
from <%=odiRef.getTable("L","INT_NAME","W")%>
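Again for EMP_INCR_MKT, the generated statement would look roughly like this (I$_EMP_INCR_MKT being ODI's integration/flow table in the staging schema; the exact name is generated by ODI at run time):

```sql
Insert into TDBU.EMP_INCR_MKT_AUDIT
( EMPNO, AUDIT_DATE, AUDIT_INDICATOR )
select EMPNO, sysdate, IND_UPDATE
from TDBU.I$_EMP_INCR_MKT
```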

Note: To reduce typing, you can copy the code from a similar step and modify it as needed.
Note: These substitution methods use the following parameters:

getTable:
a. L: Local naming convention. For example, in Oracle that would be schema.table (versus R for remote: schema.table@server).
b. A: Automatic. It enables ODI to determine which physical schema to use (the Data schema [D] or the Staging schema [W]).

getColList:
a. Notice the PK parameter. If it is used, only the columns that are part of the primary key are included.

9. Save it → verify that your new knowledge module IKM Oracle Incremental Update Audit appears in the Knowledge Modules tree.
10. Select the Create Audit table task and move it to just above the Commit transaction task
11. Select the Insert into Audit table task and move it to just above the Commit transaction task
12. Modify the m_EMP_INCR_LOADING mapping to be executed with your newly created knowledge module. Change the IKM entry to use IKM Oracle Incremental Update Audit


Exercise 3: Add an Option to your KM

Step1: Create an option
1. To make your KM more user-friendly, you can add an option that lets the end user choose whether to generate audits:
2. In the above audit KM → click on Options → add an option → name the option AUDIT_CHANGES → set the type to Boolean → set the default value to True
3. Save the option


Step2: Link the option to your tasks
1. In the above audit KM, open the two tasks one by one (Create Audit table and Insert into Audit table) → unselect Always Execute → select AUDIT_CHANGES


2. Save and close the IKM. The execution of these steps can now be set by the end user.
3. In the Physical tab of your mapping, click on the target table
4. Verify that the AUDIT_CHANGES option is set to True.
5. Run the mapping and check the behavior in the Operator navigator.
6. Change the value to False, run the mapping again, and compare the generated code in the Operator navigator.
28. Change Data Capture (CDC)
1. The purpose of Changed Data Capture is to allow applications to process changed data only.
2. Loads will only process changes since the last load.
3. The volume of data to be processed is dramatically reduced.
4. CDC is extremely useful for near-real-time implementations, synchronization, and Master Data Management.
5. In general, CDC techniques are of four types:
1. Trigger-based: ODI will create and maintain triggers to keep track of the changes.


2. Log-based: for some technologies, ODI can retrieve changes from the database logs (Oracle, AS/400).
3. Timestamp-based: if the data is timestamped, processes written with ODI can filter the data by comparing the timestamp value with the last load time. This approach is limited, as it cannot process deletes. The data model must have been designed properly.
4. Sequence-number-based: if the records are numbered in sequence, ODI can filter the data based on the last value loaded. This approach is limited, as it cannot process updates and deletes. The data model must have been designed properly.
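A timestamp-based extraction, for instance, boils down to a filter like the following (LAST_UPDATE_DATE is a hypothetical audit column on the source table; #LAST_LOAD_DATE would be an ODI variable holding the previous load time):

```sql
-- Pick up only rows changed since the previous load
SELECT *
FROM   SCOTT.EMP
WHERE  LAST_UPDATE_DATE > TO_DATE('#LAST_LOAD_DATE', 'YYYY-MM-DD HH24:MI:SS');
```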
6. CDC in ODI is implemented through a family of KMs: the Journalizing KMs.
7. These KMs are chosen and set in the model.
8. Once the journals are in place, the developer can choose in the mapping whether to use the full data set or only the changed data.
9. Changed Data Capture (CDC) is also referred to as journalizing.

Journalizing Components

1. Journals: contain references to the changed records.
2. Capture processes: capture the changes in the source datastores, either by creating triggers on the data tables or by using database-specific programs to retrieve log data from data server log files.
3. Subscribers (applications, integration processes, and so on): use the changes tracked on a datastore or on a consistent set.
CDC Infrastructure in ODI
1. CDC in ODI depends on a journal table.
2. This table is created by the KM and loaded by specific steps of the KM.
3. This table has a very simple structure:
1. The primary key of the table being checked for changes
2. A timestamp to keep the change date
3. A flag to allow for a logical lock of the records
4. A series of views is created to join this table with the actual data.
5. When other KMs need to select data, they will know to use the views instead of the tables.
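For the EMP table used later in this chapter, the journal table created by JKM Oracle Simple looks roughly like this (column names follow ODI's usual J$ conventions; the exact definition is generated by the KM):

```sql
CREATE TABLE J$EMP (
  JRN_SUBSCRIBER VARCHAR2(30),  -- subscriber the change is published to
  JRN_CONSUMED   VARCHAR2(1),   -- logical lock flag used while consuming
  JRN_FLAG       VARCHAR2(1),   -- I = insert/update, D = delete
  JRN_DATE       DATE,          -- when the change was captured
  EMPNO          NUMBER(4)      -- primary key of the journalized table
);
```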
Using CDC
1. Set a JKM in your model.
2. For all the following steps, right-click on a table to process just that table, or right-click on the model to process all tables of the model.
3. Add the table to the CDC infrastructure: right-click on a table and select Changed Data Capture → Add to CDC.
4. Oracle Data Integrator supports two journalizing modes:
1. Simple Journalizing tracks changes in individual datastores in a model.
2. Consistent Set Journalizing tracks changes to a group of the model's datastores, taking into account the referential integrity between these datastores. The group of datastores journalized in this mode is called a Consistent Set.
5. Simple vs. Consistent Set Journalizing
Simple Journalizing enables you to journalize one or more datastores. Each journalized
datastore is treated separately when capturing the changes.
This approach has a limitation, illustrated in the following example: You want to process
changes in the ORDER and ORDER_LINE datastores (with a referential integrity
constraint based on the fact that an ORDER_LINE record should have an associated
ORDER record). If you have captured insertions into ORDER_LINE, you have no
guarantee that the associated new records in ORDERS have also been captured.
Processing ORDER_LINE records with no associated ORDER records may cause
referential constraint violations in the integration process.
Consistent Set Journalizing provides the guarantee that when you have an
ORDER_LINE change captured, the associated ORDER change has been also
captured, and vice versa. Note that consistent set journalizing guarantees the
consistency of the captured changes. The set of available changes for which
consistency is guaranteed is called the Consistency Window. Changes in this window
should be processed in the correct sequence (ORDER followed by ORDER_LINE) by
designing and sequencing integration interfaces into packages.
Although consistent set journalizing is more powerful, it is also more difficult to set up. It
should be used when referential integrity constraints need to be ensured when capturing
the data changes. For performance reasons, consistent set journalizing is also
recommended when a large number of subscribers are required.
It is not possible to journalize a model (or datastores within a model) using both
consistent set and simple journalizing.
6. For consistent CDC, arrange the datastores in the appropriate order (parent/child relationships): in the model definition, select the Journalized Tables tab and click the Reorganize button.
7. Add the subscriber (the default subscriber is SUNOPSIS): right-click on a table and select Changed Data Capture → Add Subscribers.
8. Start the journals: right-click on a table and select Changed Data Capture → Start Journal.

Exercise 1: Create a simple Journalizing


Step 1: Import Journalizing KM
1. Expand the project → right-click on JKM → click on Import Knowledge Modules


2. Provide the KM xml-reference path:
C:\Oracle\Middleware\Oracle_Home\odi\sdk\xml-reference
3. Select JKM Oracle Simple
4. Click on OK
Step 2: Assigning the KM to the Model
1. Right-click on RRITEC_MODEL_SCOTT → click on Open → select Journal → select the journalizing mode Simple
2. Select the KM JKM Oracle Simple


3. Click on Save → close the model


Step 3: Adding a table to CDC
1. Open RRITEC_MODEL_SCOTT → right-click on the EMP table → click on Add to CDC
2. Make sure the EMP table appears under the Journalized Tables of the model


Step 4: Start Journal
1. Right-click on the EMP table → Changed Data Capture → click on Start Journal
2. Select the SUNOPSIS subscriber → click on OK → select the proper context → click on OK → OK
3. Go to the Operator tab → observe the code of each step one by one


4. By observing this, you will notice that the following objects are created as part of CDC:
1. One journal table: J$<TABLE>
2. One subscriber table: SNP_SUBSCRIBERS
3. Two views: JV$<TABLE>, JV$D<TABLE>
4. One trigger: T$<TABLE>
Step 5: To understand CDC, create the target table in the TDBU schema
1. Create the table using the below syntax:
CREATE TABLE CDC_EMP
(
  EMPNO NUMBER(10) PRIMARY KEY,
  ENAME VARCHAR2(30),
  JOB VARCHAR2(30),
  SAL NUMBER(10),
  DEPTNO NUMBER(2)
);
2. Import it into the target model
Step 6: Create a mapping
1. Right-click on the Mappings folder → create a mapping → name it m_Simple_CDC
2. Drag and drop the EMP source table and the CDC_EMP target table into the work area
3. Connect all corresponding columns
4. In the Logical tab → select the target table CDC_EMP → set the integration type to Incremental Update


5. Click on the Physical tab → select the target table → select the IKM IKM Oracle Incremental Update
6. Set FLOW_CONTROL to False
7. Click on Save → Run
Note: The first run loads all data into the target table.
Step 7: Understanding CDC
1. Go to the EMP table and update any one record; notice that this change is registered in the J$EMP table.
2. In the next run we need to load only the records available in the J$EMP table; for that, go to the mapping → select the source table → select Journalized data only
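You can confirm the captured change directly with a query like this (assuming the journalized EMP table lives in the SCOTT schema):

```sql
-- After updating one EMP row, its key shows up in the journal table
SELECT JRN_SUBSCRIBER, JRN_FLAG, JRN_DATE, EMPNO
FROM   SCOTT.J$EMP;
```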


3. Make sure the proper subscriber name is used in the journalizing condition
4. Save → Run


29. Migration (Exporting and Importing)

1. Copying or deploying objects from one environment to another is called deployment or migration:
1. Deploying from development to testing
2. Deploying from testing to production
2. Oracle Data Integrator 12c introduces the use of globally unique object identifiers (GUIDs).
3. Export
1. Object-wise
i. On any object (mapping, package, etc.) we can right-click and export it in XML format.
ii. In this process we miss dependent objects.
2. Smart export
i. We can click on the Connect Navigator and export the required objects using a simple drag-and-drop.
ii. We can export in two formats: an XML file and a ZIP file.
iii. Dependencies are also exported.
iv. This is the recommended method.


4. Import:
1. Duplication
i. This mode creates a new object (with a new GUID and internal ID) in the
target Repository, and inserts all the elements of the export file.
ii. Note that this mode is designed to insert only 'new' elements.
2. Synonym
i. Synonym Mode INSERT
1. Tries to insert the same object (with the same GUID) into the
target repository. The original object GUID is preserved.
2. If an object of the same type with the same internal ID already
exists then nothing is inserted.
ii. Synonym Mode UPDATE
1. Tries to modify the same object (with the same GUID) in the repository.
2. This import type updates the objects already existing in the target repository with the content of the export file.
3. If the object does not exist, the object is not imported.
iii. Synonym Mode INSERT_UPDATE
1. If no ODI object exists in the target Repository with an identical
GUID, this import type will create a new object with the content of
the export file. Already existing objects (with an identical GUID)
will be updated; the new ones, inserted.
2. Existing child objects will be updated, non-existing child objects
will be inserted, and child objects existing in the repository but not
in the export file will be deleted.
3. This import type is not recommended when the export was done without the child components, as it would delete all sub-components of the existing object.
4. If the export file contains dependencies, then this method is recommended.

3. Smart
i. This method is always given first priority


Exercise 1: Migrating from the development to the test work repository

Step1: Exporting from the development environment
1. Connect to the work repository → click on the Connect Navigator → select Export → select Smart Export
2. Drag and drop the project RRITEC_PROJECT onto "Objects to be exported" → click on Export


3. Select the desktop, create a folder, and save it.

Step2: Creating the testing work repository

Create the work repository as in section 5.2, "Create the Work Repository" (refer to page 40).
Step 3: Creating the user TDBU_TEST and loading tables into the TDBU_TEST schema
1. Open SQL*Plus → type / as sysdba → press Enter
2. Create a user by executing the below commands:
a. Create user TDBU_TEST identified by RRitec123;
b. Grant DBA to TDBU_TEST;
c. Conn TDBU_TEST@ORCL
d. Password: RRitec123
e. Select count(*) from tab;
3. Go to the RRITEC lab copy labdata folder, take the full path of the script file, and execute it as shown below


Step4: Creating a physical schema with the testing database

Step5: Creating a testing context and mapping the source and target schemas

Step6: Importing into the testing environment



1. Connect to the testing work repository → click on the Connect Navigator
2. Select Import → select Smart Import → point to the previously exported file → click on Next → Finish → click on Close

Step7: Running and loading data into the test database
1. Right-click on the m_ODS_EMPLOYEE mapping → click on Run → select the testing context → OK
2. Observe the data in the TDBU_TEST schema's ODS_EMPLOYEE table

Exercise 2: Migrating from the test work repository to the execution work repository

Step1: Exporting scenarios from the testing environment
1. Right-click on m_ODS_EMPLOYEE → click on Generate Scenario → click on OK


2. Right-click on the scenario m_ODS_EMPLOYEE version 001 → click on Export
3. Select the export path → provide the export file name → click on Export → Close

Step2: Creating the execution work repository

Create the work repository as in section 5.2, "Create the Work Repository" (refer to page 40); however, take care to select the Execution repository type in the creation wizard.

Step 3: Creating the user TDBU_PROD and loading tables into the TDBU_PROD schema
1. Open SQL*Plus → type / as sysdba → press Enter
2. Create a user by executing the below commands:
a. Create user TDBU_PROD identified by RRitec123;
b. Grant DBA to TDBU_PROD;
c. Conn TDBU_PROD@ORCL
d. Password: RRitec123
e. Select count(*) from tab;
3. Go to the RRITEC lab copy labdata folder, take the full path of the target.sql file, and execute it
Step4: Creating a physical schema with the production database

Step5: Creating a production context and mapping the source and target schemas

Step6: Importing into the production environment


1. Connect to the RRITEC_EWR work repository → go to the Operator tab → under Load Plans and Scenarios, click on Import Scenario

Step7: Running and loading data into the production database
1. Right-click on the m_ODS_EMPLOYEE scenario → click on Run → select the production context → click on OK
2. Observe the data in the TDBU_PROD schema's ODS_EMPLOYEE table
30. Security

31. Agents
32. CKM
33. Target Load Plan

