
OBIEE Repository

The repository stores the metadata required for reporting. Until OBIEE 10g the repository could be stored only in binary format; from 11g onwards it can be saved in either XML or binary format. A binary repository is stored in a single file with the extension ".rpd".

OBIEE repository metadata storage types:

Binary: stores the repository content in an .rpd file.
XML based (new in 11g): stores the repository content in XML format.

The repository contains three layers:

Physical, BMM and Presentation

Physical Layer
In the physical layer we model physical tables as-is, with the same structure as the original
database objects.
Note: the physical layer structure (metadata) should always match the structure of the existing physical objects in the database.
OBIEE supports relational databases, non-relational sources like Excel, XML and web services, and multidimensional structures like cubes to design and use in the physical layer.

Relational databases: Oracle, Teradata, SQL Server, DB2, etc.

Non-relational sources: XML files, Excel files, web services, etc.

We can combine heterogeneous databases in the physical layer; that means we can combine an Oracle table with
a Teradata table or with an XML file.

BMM Layer
Also popularly known as the Logical layer. In this layer we convert physical models (structures) into logical structures as per business need.
Examples: we can convert a snowflake physical model into a logical star,
normalized OLTP models into star models,
denormalized models into normalized ones, etc.

OBIEE recommends designing only logical star models in the BMM layer.

We can create any number of logical star models in the BMM layer.

Presentation Layer

In the presentation layer we add the objects on which users want to create analytics/reports.
We can create any number of subject areas in the presentation layer.
Every logical star model in the BMM layer can become a subject area in the presentation layer.

In order to work on OBIEE, check the links below and verify whether all the servers are running.

WebLogic Application Server console:
http://applebi:7001/console

Enterprise Manager Fusion Middleware console:
http://applebi:7001/em
Log in using the user name weblogic and password weblogic.
After logging in, go to the Business Intelligence folder in the left navigation pane,
click on coreapplication and check whether all the services are running.

Presentation services:
http://applebi:7001/analytics
Check whether you are able to create a new report.

Basic minimum configuration information which every developer should know

Repository
All repositories are saved by default under the directory below:
D:\OBIEE116\instances\instance1\bifoundation\OracleBIServerComponent\coreapplication_obis1\repository

BI Server configuration file
The BI server runs using the configuration set up in the NQSConfig.INI file, which is always located under the directory below:
D:\OBIEE116\instances\instance1\config\OracleBIServerComponent\coreapplication_obis1
The BI server loads the repositories configured in the NQSConfig file to serve analytics.

Changing the rpd content


Rpd changes can be made using the methods below:
online
offline
MUDE

Offline mode: we can develop or make changes to any repository by opening it
in offline mode; go to the File menu and select Open => Offline.
Online mode: open the Administration tool and go to File menu => Open => Online.
Provide the repository password,
then provide the weblogic user/password and click OK.
It automatically opens the repository which is loaded in the BI server.
To make any changes on the repository after it is loaded:
go to the File menu and select Check Out All,
make the changes in the rpd,
and perform Check In All under the File menu.
Here check-out enables the user to make changes on the existing rpd,
and check-in commits the changes to the rpd.
When you perform check-in it also runs a global consistency check.
Now save the repository, which confirms our changes to the repository.

To reflect the new changes in presentation services (analyses and dashboards):

log in to the presentation server,
next go to the Administration page (top-right link in the presentation server),
and click on Reload Server Metadata.

MUDE: we need to configure MUDE before we can use it. This functionality allows multiple users
to develop a single repository in parallel.
Steps to upload a new repository into the BI Server

This can be done using two methods:

1) using the Fusion Middleware GUI (Enterprise Manager), or manually editing the NQSConfig file.

a) Deploy from the Fusion Middleware Enterprise Manager:

Log on to Fusion Middleware using the link http://applebi:7001/em
Navigate to the Business Intelligence folder in the left navigation pane.
Click on coreapplication.
In the right-side frame, go to the Deployment tab.
Click on the Lock and Edit Configuration tab.
Next go to the Repository tab.
Click on the Browse button under the upload repository section and select the rpd.
Provide the password of the repository.
Next click on Apply changes.
Next click on Release and apply changes.
Then restart the BI server.

Note: in online mode we make our changes directly on the repository that is already loaded on the BI server.

b) Manual uploading
This method existed in OBIEE 10g and can also be used in 11g; however, it is not recommended.
To upload rpd changes or a new rpd to the BI server:

First place the repository file under the repository folder, for example:
D:\OBIEE116\instances\instance1\bifoundation\OracleBIServerComponent\coreapplication_obis1\repository

Next open the NQSConfig file (NQSConfig.INI), which is located under:
D:\OBIEE116\instances\instance1\config\OracleBIServerComponent\coreapplication_obis1

Open the NQSConfig file
and go to the repository section of the file:
[REPOSITORY]
and change the repository name, for example:
Star=test.rpd (here test.rpd is the repository file which we want to upload)

Save the file and restart the WebLogic and BI servers.

Note: to perform this manual upload we need to stop the BI server while making these changes.
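For reference, a minimal sketch of what the edited section of NQSConfig.INI could look like (the "Star = <file>, DEFAULT;" form follows the stock file shipped with 11g; test.rpd is our example file):

[REPOSITORY]
Star = test.rpd, DEFAULT;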
Physical Layer
Below are the physical layer objects.

Database
The database folder (object) in the physical layer defines the database type and the database features which OBIEE can use while generating
physical SQL.

We need to create one database object for each type of database, for example one for Oracle, one for XML, etc.
We can add any number of schemas (users) to a single database for importing tables.

Virtual private database

When the physical database is designed to work as a virtual database, we need to select the VPD option (virtual private
database).
Generally such databases are designed to create a separate virtual database for every user who logs into the database;
this will have different database parameters for different users.

Allow populate queries by default

This option enables OBIEE to generate CREATE TABLE queries to insert data temporarily.

Direct database request

This option enables users to create reports directly on the database with their custom SQL, bypassing rpd subject areas.

Connection Pool
The connection pool provides the connection-related information to connect to the database from OBIEE.
We can create any number of connection pools under one database.
Out of many connection pools, one connection pool can be used as a persistent connection pool;
this setting is done under the database object.
When we select a persistent connection pool, OBIEE uses this connection information to perform temporary table creations,
writeback, the aggregate persistence wizard, etc.

Example: when a query has too many values in a WHERE ... IN clause (a large set of values in the 'In List'), OBIEE automatically inserts
them into a temp table and joins it with the main table.
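A rough sketch of that rewrite in plain SQL, with hypothetical table and column names (the real temporary table name is generated by the BI server):

-- original predicate with a very large 'In List'
SELECT prod_key, SUM(revenue)
FROM samp_revenue_f
WHERE cust_key IN (1001, 1002, 1003 /* ... thousands more ... */)
GROUP BY prod_key;

-- rewritten: the values are bulk-inserted into a temp table, then joined
CREATE TABLE tmp_inlist (cust_key NUMBER);
INSERT INTO tmp_inlist VALUES (1001); -- repeated for each value
SELECT f.prod_key, SUM(f.revenue)
FROM samp_revenue_f f
JOIN tmp_inlist t ON t.cust_key = f.cust_key
GROUP BY f.prod_key;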

Call interface
We have two types of call interfaces:
1) OCI (Oracle Call Interface): a native connectivity interface to connect to the database.

2) ODBC: a non-native connectivity driver; it uses an ODBC DSN.

Additionally, a default XML type is available to connect to XML files.

Always prefer native connectivity drivers to connect to a database, as they are faster and support all database features.
We go for ODBC only when a native connection is not available.

Data source name
Under the data source name we provide the name of the database we need to connect to.
When ODBC is selected as the call interface, we provide the ODBC DSN name under this option instead.

Username/password
These are the username and password of the database user.

Maximum connections
This restricts the maximum number of connections OBIEE can have to the database at a time from this connection pool.
The total number of connections the OBIEE server can open to the database is the sum of the connections of all connection pools.

Shared logon
When we select the shared logon option, every presentation server user (an OBIEE user who connects from the browser) who generates a
report will use the single user/password provided under the user/password textboxes to execute queries.

If we want each user to connect to the database with a different database user, we need to deselect this
option and provide the user information under the Identity Manager.

Connection pooling (timeout)
The connection pooling timeout defines how long a database session (connection) can stay active even after
query execution has completed on the database.

We should always keep a minimum time, such as 3-5 minutes.

Isolation level
There are four isolation levels:

1) Committed
When we select this option, OBIEE reads only committed data from the database.
2) Dirty
OBIEE also reads uncommitted data from the database to generate reports.
3) Repeatable
When we select repeatable, the OBIEE server creates a row-level lock on the table for the records it is reading,
so no one can change the data while OBIEE reads those records.
4) Serializable
This option creates a table-level lock on the table from which OBIEE is reading data.

Generally in data warehousing we use committed data only, as we read the data only after it is loaded by ETL.
Options 3 and 4 are applicable when we create an rpd on a real-time data warehouse or on OLTP systems.

Require fully qualified table names

When we have to model tables from more than one schema/user, we need to select the Require fully qualified table names option.
When we select this option, OBIEE prefixes the schema name in front of each table name; this prefix acts as the owner name of the
table in the database.

Example:
import tables from BISAMPLE and SCOTT,
create a connection pool pointing to the user BISAMPLE,
then enable this option.
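A sketch of the effect on the generated physical SQL (hypothetical query against the SCOTT.EMP table):

-- option disabled: unqualified name, resolved against the connection pool user
SELECT ename FROM emp;

-- option enabled: the owning schema is prefixed
SELECT ename FROM SCOTT.emp;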

Physical Joins
There are two types of joins in the physical layer:
1) Foreign key join
2) Complex join
To create these joins we had two separate join objects in the 10g version; in 11g there is a single join icon
from which both types of joins can be created.

1) Foreign key join

Any join created between two tables based on a primary key and a foreign key is called a foreign key join.
We cannot add any custom conditions / join columns in this join.
Example: emp.deptno = dept.deptno
When we drag physical tables to the BMM layer, all foreign key joins are inherited into the BMM layer.

2) Complex join
Any join created based on a user-defined/custom condition is called a complex join.
Example: emp.sal between salgrade.losal and salgrade.hisal
Complex joins are not inherited into the BMM layer when we drag physical tables to the BMM layer.
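For reference, the complex join above corresponds to database SQL like the following (EMP and SALGRADE are the classic SCOTT demo tables):

SELECT e.ename, e.sal, s.grade
FROM emp e
JOIN salgrade s
  ON e.sal BETWEEN s.losal AND s.hisal;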
TABLE PROPERTIES

In the physical layer we can create the following types of tables.

1) Physical
A regular physical table which exists in a physical database.
Any object which physically exists in the database can be created as a
physical type of table.
2) Select
In a select type of table we can write any valid SQL;
it is essentially the in-line view concept of the database.
In case we want to use the output of any SQL as a table, we set the table type to Select and write the
SQL in the SQL text box.

3) Procedure
The procedure type helps to expose the output of any procedure in the physical layer;
the procedure output columns become the columns of the physical object.

Dynamic table name

We can also provide a table name in the physical layer dynamically at run time.
We can use repository variables to generate the table name dynamically and substitute it accordingly.
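A minimal sketch, assuming a hypothetical monthly-partitioned sales table: a repository variable (say CUR_SALES_TABLE) is refreshed by an initialization block whose SQL builds the current table name, and that variable is then assigned as the table's dynamic name:

SELECT 'SALES_' || TO_CHAR(SYSDATE, 'YYYYMM') FROM dual;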

Cache
The table cache property enables query output to be cached and maintained as per the cache persistence time.
See the Cache Manager for more on this.

Deploy view
This option is enabled only for tables created with table type Select.
By selecting Deploy View, OBIEE creates a physical view in the database based on the SQL specified under the
select table type.
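A sketch of what the deployed view could look like for the lookup scenario below (the actual view name is chosen when deploying):

CREATE VIEW samp_lookups_v AS
SELECT lookup_dsc, language_key, lookup_key, lookup_type
FROM samp_lookups_d
WHERE lookup_type = 'Customer Status' AND language_key = 'en';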

Scenario: create a table of type Select

Join the BISAMPLE lookup table to get the customer status type.

Steps:
Import the SAMP_LOOKUPS_D table from the BISAMPLE schema into the physical layer.

Next double-click the SAMP_LOOKUPS_D table and set the table type to Select.
Provide the following SQL in the text box (default initializer string):
SELECT lookup_dsc, language_key, lookup_KEY, lookup_type FROM SAMP_LOOKUPS_D
WHERE lookup_type='Customer Status' AND LANGUAGE_KEY='en'

Ensure that the columns of the SQL match the column names of the physical object.
Define a primary key on the lookup_key column.
Now join SAMP_LOOKUPS_D to the customer dimension table (SAMP_CUSTOMERS_D).
(Note: drag first from the customer table to the lookup table.)
Provide the following condition:
"apple".""."BISAMPLE"."SAMP_LOOKUPS_D"."LOOKUP_KEY"
= "apple".""."BISAMPLE"."SAMP_CUSTOMERS_D"."STATUS_KEY"

Now drag the SAMP_LOOKUPS_D physical table to the BMM layer and join it with the customer table in the BMM layer.

Rename the lookup_dsc column to "Customer Status".

Drag the Customer Status column to the customer folder of the presentation layer.
Save and reload metadata.

Alias and Duplicates

Alias: an alias is a physical object which references an original physical object.

Scenario: create an alias

1) Model the HR subject area to display the manager name.
Steps:
Open the rpd in online mode.
Go to the physical layer, right-click the EMP physical table and select Create Alias.
Provide the alias name EMP_MGR.
You can now notice a new object created in the physical layer.
Double-click EMP_MGR and define a primary key on the empno column.
Next go to the physical diagram and join EMP and EMP_MGR (link from EMP to EMP_MGR)
with the following join condition:
EMP.mgr=EMP_MGR.empno

Next drag the EMP_MGR table from the physical layer into the HR Analysis folder of the BMM layer.
Go to the BMM layer joins and join EMP_MGR to EMP if the join does not already exist.
Rename the ENAME column of the EMP_MGR logical table to "Manager Name".

Now drag the Manager Name logical column from the BMM layer to the emp folder of the presentation layer.

Save and check for consistency.

Go to presentation services and reload server metadata from the Administration page.
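In plain SQL, the EMP_MGR alias models the classic self-join on EMP; a report using the new column would resolve to something like:

SELECT e.ename, m.ename AS manager_name
FROM emp e
LEFT JOIN emp m ON e.mgr = m.empno;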

Scenario 2)
Open the rpd in online mode.
Go to the physical layer and create an alias on SAMP_TIME_DAY_D.
Provide the alias name Dim_Paid_Date.
Define a primary key on Dim_Paid_Date on the CALENDAR_DATE column.
Now go to the physical diagram and join from the revenue fact table to Dim_Paid_Date
with the following join condition:

"apple".""."BISAMPLE"."Dim_Paid_Date"."CALENDAR_DATE" =
"apple".""."BISAMPLE"."SAMP_REVENUE_F"."PAID_DAY_DT"

Next drag the Dim_Paid_Date physical table from the physical layer to the BMM layer.
Go to the BMM logical joins diagram and ensure that a logical join exists from revenue to Dim_Paid_Date
(if not, create a join from the revenue fact table to Dim_Paid_Date).

Rename the CALENDAR_DATE column of Dim_Paid_Date in the BMM layer to "Bill Paid Date".
Drag the whole Dim_Paid_Date logical table from the BMM layer to the presentation layer,
then save and validate.

Duplicate
A duplicate is a copy of an existing physical object.
To create a duplicate, right-click a physical table and select Duplicate.
The duplicate is a new physical object with the same structure as the original object, but it cannot have the same name
as the original, so we need to provide a new name.
BMM Layer
In the BMM layer we convert the physical model into a logical model.
Logical tables:
A logical table is a table in the BMM layer which can point to one or more physical tables.
All names in a logical table, including table/column names, can be renamed as per business convenience.

Logical table source (LTS)

It is popularly known as the LTS.
Each logical table can point to one or more LTSs.
Here again, one LTS can map to one or more physical tables.

Column mapping
Column mapping defines the mapping of each logical column to a physical column.

Content
Content defines the actual granularity of the table.

A logical table can be a logical fact or a logical dimension.

Logical joins
The relationship between logical tables in the BMM layer is called a logical join or BMM join.
In a logical (BMM) join we specify only the cardinality and the join type (inner, outer, etc.).

Cardinality
Cardinality is the relationship information from one table to another table:
one to one
one to many
many to one
many to many

When we model multiple logical tables and join them using logical joins, OBIEE automatically recognises the
logical fact and the logical dimensions based on the following rule:

a single logical table which has many "one to many" relationships coming from the other logical tables will be recognised as the logical
fact.

Example:

If the logical tables D1, D2, D3 and D4 each have a one-to-many relationship into the table F5,
then F5 is identified as the logical fact table.
Calculations

a) Aggregation rules
Normal aggregations:
we can apply any summary function like SUM, AVG, FIRST, etc. on a measure column of the logical fact;
these columns are called measures of the logical fact table.
Level-based measures:
a measure calculation (an aggregation such as SUM, AVG, etc.) that is always performed at a particular level is called a level-based measure.

Physical calculation
A calculation based on physical table columns is called a physical calculation.

Logical calculation
A calculation based on the logical columns of existing logical tables is called a logical calculation.

Dimensional hierarchies
Defining hierarchies on one or more dimensional columns with proper levels is called a dimensional hierarchy.

Business Model
A folder under which you categorise the business information related to one subject area.
Generally we create one folder per subject area, like HR, sales, inventory,
shipment, etc.

Logical table (LT)

A logical table is a table in the BMM layer which points to physical tables via
logical table sources (LTS).
One LT can map to one or more physical tables.
One LT can map to one or more LTSs.

Logical columns
Columns which users need in analytics.
A logical column can be a direct map to a physical column or any calculated column.
Each logical column is mapped to physical columns via the column mapping
tab of the LTS.

Logical table source (LTS)

The LTS is a critical object in BMM layer model design:
it is the interface which maps every logical table to physical tables.
Again, each LTS can map to one or more physical tables.

BMM joins
There are two types of BMM joins:
1) complex join 2) foreign key join.
BMM joins (logical joins) are specified in the logical diagram (BMM model diagram).
Logical joins help to identify fact and dimension tables based on the cardinality
set in the join conditions.
In these joins we do not bother about join conditions; the cardinality and the join type
(inner, outer) are what matter.
OBIEE uses these joins to understand which tables can participate in joins,
what the join path is, etc.
OBIEE always expects both a logical join and a physical join between two tables:

the join type is taken from the logical join, and the join condition from the physical join.

Logical joins play a key role when we are designing multiple star schemas.

Complex logical join

It specifies only the join type and the cardinality from one LT to another LT.
It is always the recommended join type in the BMM layer.
Foreign key logical join
It contains the join condition and the join type.

In OBIEE 11g these two join types have been combined into a single join dialog where either can be specified
as needed.

Calculations
In the BMM layer we can perform two types of calculations:
1) physical
2) logical
Alternatively, we can create calculations using the calculation wizard.

Aggregation Rules

Logical calculation
A logical calculation uses the existing logical columns of the BMM layer;
we can use any logical table in the BMM layer and its columns in this calculation.
When we perform a logical calculation, OBIEE first applies the aggregation rule of each used
logical column and then performs the calculation specified in the logical expression.
On measure columns this sometimes produces wrong results.
Example: Total amount = sum(amount_sold) * sum(quantity_sold)
This leads to a wrong total.

Physical calculation
A physical calculation uses the physical tables of the physical layer;
we can use only the physical tables which are mapped to the logical table via the logical table source.
These calculations are performed directly on the database physical columns.
Example: sum(amount_sold * quantity_sold)
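The difference is easy to see in plain SQL, assuming a hypothetical two-row sales table (illustrative numbers; see the worked table later in these notes):

-- physical calculation: multiply row by row, then aggregate (correct)
SELECT SUM(amount_sold * quantity_sold) FROM sales;   -- 60*12 + 250000*1 = 250720

-- logical calculation: aggregate first, then multiply (wrong total)
SELECT SUM(amount_sold) * SUM(quantity_sold) FROM sales;   -- 250060 * 13 = 3250780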
Aggregate calculations on dimension tables

Examples: count of male customers, count of female customers.
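A sketch of such measures as conditional counts (assuming a GENDER column with values 'M'/'F' on the customer dimension):

SELECT COUNT(CASE WHEN gender = 'M' THEN cust_key END) male_customers
      ,COUNT(CASE WHEN gender = 'F' THEN cust_key END) female_customers
FROM samp_customers_d;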
Header-Detail Scenario in the BMM Layer

When we have two fact tables at different granular levels, and we plan to design the
BMM layer to create reports combining measures from both tables, we need to
model them as independent facts (or LTSs) in the BMM layer.

Usually we cannot join two fact tables if they are at different granular levels.
When we join them, it leads to wrong results due to the one-to-many relationship from
one fact table to the other
(the first fact table's data gets duplicated while producing results from the second fact table).

This problem scenario is called the header-detail or master-detail fact table scenario.

It is also known as a chasm trap in the reporting world.

To solve this problem, design the fact tables as independent LTSs (or facts) in the BMM layer,
and also ensure that these two facts are not joined in the physical layer.
Note: we can join two fact tables if they are at the same granular level,
or, put another way, if they have a 1-1 relationship.

Granularity is nothing but the level of detail at which information is stored in a table.
Every table's granularity is at its primary key level.

Example:

To create this scenario, create a SALES_SHIP fact table in the SH schema

(follow the steps provided in the notes).

Open the rpd which has the sales and other dimension tables designed.

Now import the SALES_SHIP fact table into the physical layer.

Join the CUSTOMERS, PRODUCTS, TIMES, CHANNELS and PROMOTIONS dimension tables with SALES_SHIP.

Drag the SALES_SHIP fact table from the physical layer onto the SALES logical table.
Now observe that an additional LTS named SALES_SHIP has been created under the SALES logical table,
and also that SHIP_AMT and the other SALES_SHIP columns have been added as logical columns.

Now when you double-click the SALES LTS you will see column mappings only for the SALES physical table columns;
when you double-click the SALES_SHIP LTS you will see column mappings only for the SALES_SHIP columns.

Create a SUM aggregation rule for the SHIP_AMT column.

Drag the SHIP_AMT measure to the sales presentation layer folder.

Restart services and create a report with product desc, amount sold and ship amount.
Notice that two separate queries are generated to produce the output;
the two SQL result sets are combined at the OBIEE level.

Working with LTS (physical layer example: CUSTOMERS and COUNTRIES tables).
Creating an rpd using an Excel sheet

Create an Excel sheet with the following data:


EMPNO ADDR
7369 HYD
7499 CHENNAI
7521 CHENNAI
7566 CHENNAI
7654 CHENNAI
7698 DELHI
7782 DELHI
7788 DELHI
7839 DELHI
7844 DELHI
7876 DELHI
7900 MUMBAI
7902 HYD
7934 HYD
Rename the tab (sheet) name to emp_addr.
Save the Excel file as obiee.xls (ensure that the file name is "obiee.xls").

Go to the Control Panel,

double-click ODBC (Data Sources),

go to the System DSN tab,
click on the Add button,
then select Microsoft Excel Driver (*.xls)
and click Finish.

Provide the following information in the ODBC window:

Data source name: obiee_excel
Click on Select Workbook and select the file which was created and saved in the step above (obiee.xls).

Next click OK and close ODBC.


Now import the Excel sheet into the rpd:
open the rpd in online mode,
go to File and select Import Metadata,
select the data source type ODBC,
then choose the obiee_excel ODBC name,
and click Next.
In the next window enable the System Tables option in addition to the existing options,
and click Next.
It now shows all the sheet names of the Excel file as table names.
Select emp_addr$, add it to the physical layer,
and finish.

Now go to the physical layer of the rpd,

double-click the emp_addr physical table and define a primary key on the empno column.
Go to the physical diagram (while opening the physical diagram, select all the tables (emp, dept, etc.) along with emp_addr so
they are shown together).

Now create a join from emp (the Oracle table) to emp_addr (the Excel table),

ensuring the join condition is Excel emp_addr.empno = Oracle emp.empno.

Save the changes.

Now drag the emp_addr physical table to the HR business model.

Next drag the emp_addr logical table to the HR subject area of the presentation layer.

Save the rpd and refresh metadata in presentation services.


Derivations and Calculations in the BMM layer

Applying aggregation rules (functions) on a logical fact measure.

Worked example (two rows for customer C1):

CUST  PROD  QTY  AMOUNT  QTY*AMOUNT
C1    P1    12   60      720
C1    P2    1    250000  250000
Total       13   250060  250720

sum(QTY) * sum(AMOUNT) = 13 * 250060 = 3250780 (inflated total from multiplying the sums)
sum(QTY * AMOUNT) = 720 + 250000 = 250720 (correct row-by-row total)

Understanding column mapping

Every logical column of a BMM layer logical table can map to a physical column of a physical table using the
column mapping tab of the LTS.
When you drag a table from the physical layer to the BMM layer, it automatically maps every logical column to a physical
column.

Aggregation rule
We can apply an aggregate function like SUM/AVG/FIRST etc. on a measure column of a logical fact.

All the columns displayed under the BMM layer are called logical columns.

Creating new columns in the BMM layer

In addition to the existing columns dragged from the physical layer to the BMM layer, we can create new logical columns
in any logical table.
When you create a new logical column, it is not mapped to any physical column;
the column mapping tab shows a blank mapping.

To provide content to the new logical column we perform one of the following:
Default value
1) Provide a default value like 'Apple' or '1999', etc.
Physical calculations
2) Provide a derivation logic directly using physical columns.
Calculations of this type are called physical calculations.
When performing a physical calculation we can use the columns of the physical tables which are mapped to the
logical table in the table mapping.

We can apply aggregation functions on top of the derivations.

3) Logical calculations
After creating a new logical column, we can also write derivations based on the existing logical columns of the BMM layer;
these are called logical calculations.

A logical calculation can use any existing logical column of any logical table in the derivation
(whereas that is not possible in a physical calculation).

We cannot apply aggregation functions on top of logical derivations (logical calculations);

instead, the BMM layer inherits the aggregation rules of the used logical columns into the parent logical column.

Example formulas:

sum(SAMP_REVENUE_F.COST_FIXED * SAMP_REVENUE_F.UNITS)

sum(case when SAMP_REVENUE_F.COST_FIXED is null then 0 else SAMP_REVENUE_F.COST_FIXED end) * sum(case when SAMP_REVENUE_F.UNITS is null then 0 else SAMP_REVENUE_F.UNITS end)

sum(SAMP_REVENUE_F.COST_FIXED) * SUM(SAMP_REVENUE_F.UNITS)

Measures on dimension tables

We cannot create measures on logical dimension tables in the BMM layer.
In order to create measures on a logical dimension table, we need to drag the physical table once again into the BMM layer
and link from the dimension table to the newly dragged table, so that the newly dragged table is treated as a fact table.

Example: create a 'number of customers' measure on the customer table.

1) Open the rpd in online mode (it contains the revenue fact and the customer, product and other dimensions).
2) Drag the SAMP_CUSTOMERS_D physical table from the physical layer onto the BMM layer.
3) Go to the BMM layer logical model diagram.
Rename the newly dragged dimension table to Fact_Customers,
then create a logical join from the SAMP_CUSTOMERS_D dimension table to the Fact_Customers logical table.
Ensure that the cardinality from SAMP_CUSTOMERS_D to Fact_Customers
is one to many (1 ==> N).
4) Now double-click the cust_key column of the Fact_Customers table and set the aggregation rule to COUNT.
Rename that column to "No of Customers".
5) Drag the Fact_Customers table to the presentation layer and delete all columns except "No of Customers".
Save and verify the data.

Sample formulas

Date formulas:
year(Current_Date)
TimeStampAdd(SQL_TSI_YEAR, 1, Current_Date)
adds (or, with a negative count, subtracts) periods as per the SQL_TSI interval mentioned.
TimeStampDiff(SQL_TSI_DAY, Current_Date, "apple".""."BISAMPLE"."SAMP_CUSTOMERS_D"."BIRTH_DT")
finds the difference between two dates as per the SQL_TSI interval.
The following are the valid SQL_TSI keywords:
SQL_TSI_DAY
SQL_TSI_WEEK
SQL_TSI_MONTH
SQL_TSI_QUARTER
SQL_TSI_YEAR

Conditional counting example:
SELECT COUNT(DISTINCT ORDER_NUMBER) NO_ORDERS
,COUNT(CASE WHEN ORDER_STATUS='6-Cancelled' THEN ORDER_NUMBER ELSE NULL END) NO_CANCELLED_ORD
,COUNT(CASE WHEN ORDER_STATUS='1-Booked' THEN ORDER_NUMBER ELSE NULL END) NO_BOOKED_ORD
,COUNT(CASE WHEN ORDER_STATUS IN ('1-Booked','6-Cancelled') THEN NULL ELSE ORDER_NUMBER END) NO_OTHERS_ORD
FROM SAMP_REVENUE_F
Hierarchy
A hierarchy is a logical arrangement of the dimensional attributes from the top level to the lowest level.
The place of a level in a hierarchy is called its position.
We can create the following types of hierarchies:

1) level-based hierarchies

2) value-based hierarchies
3) balanced hierarchies
4) unbalanced hierarchies
5) skipped-level hierarchies
6) ragged hierarchies

Creating level-based hierarchies

First identify the levels and their positions for each logical dimension table in the BMM layer.

Example:
for the product dimension:

Brand==>LOB==>Type==>Desc
for the calendar date:
Year==>Half Year==>Quarter==>Month==>Week==>Date

Steps to create hierarchies

Open the rpd in online mode.

Right-click on the BMM layer dimension and select Create Level-Based Hierarchy.
Provide the name "H Product".
Next right-click "H Product", select New Level, provide the level name "Grand Total" and select the
Grand Total option.

Right-click "Grand Total" and create a child level named "Brand".
Right-click "Brand" and create a child level named "LOB".
Right-click "LOB" and create a child level named "Type".
Right-click "Type" and create a child level named "Prod Details".

Now drag the BRAND and BRAND_KEY columns from the product logical dimension table onto
the "Brand" level under "H Product".
Similarly drag LOB, LOB_KEY onto the LOB level,
TYPE, TYPE_KEY onto the Type level,
and PROD_KEY, PROD_DSC onto the "Prod Details" level.

Set up a key column for each level:

double-click the "Brand" level, create a new key named "brand_key" and add the BRAND column as a key
column; also select the Use For Display option.

Similarly create keys for the other levels:

LOB ==> LOB
TYPE ==> TYPE
"Prod Details" ==> PROD_DSC

Save the rpd and test with an analysis.

Understanding hierarchies

When we create a hierarchy, we benefit in the following ways:

1) drilling analysis (drill down, drill up, drill across can happen automatically)
2) level-based measures can be created
3) we can define a time series dimensional hierarchy
4) we can apply time series functions
5) we can define content for each logical table of the BMM layer so OBIEE understands the granularity of the tables

Level key
The level key identifies the granularity of the level among multiple attributes.
Use for display
A column which is selected as Use For Display will be displayed by default as the result of a drill
technique.

Drill techniques

There are four types of drill techniques:

1) Drill down
Navigating from one level to the columns of the next lower level within one hierarchical dimension is called drill down.
2) Drill up
Navigating from one level to the columns of the next upper level within one hierarchical dimension is
called drill up.

3) Drill across
Navigating from a level of one dimensional hierarchy to a level of another dimensional hierarchy is
called drill across.

4) Drill through
Navigating from one hierarchy to other non-related content is called drill through.

We can design drill down, up and across in the rpd, but drill through has to be designed in the report (analysis).

When we analyse data with a drill technique, the rpd automatically identifies the position of every column
selected in the report and takes us to the appropriate levels.

When taking us to the next level, it always displays the column in the report which is selected as Use For
Display.

The drill navigation path always goes via a column which is selected as a key in the level.
Grand Total: the Grand Total level is used to identify the top-level position in a dimensional hierarchy.

Preferred drill path and drill up/across design

To define a preferred drill path, double-click the level on which we need to create the preferred drill path,
go to the Preferred Drill Path tab and select the level to which we need to navigate from this level.

Drill up technique:
double-click the "Prod Details" level and add the Type level under the preferred drill path;
double-click the "Type" level and add the LOB level under the preferred drill path;
double-click "LOB" and add the Brand level under the preferred drill path.

Drill across:
create a new dimensional hierarchy on the SAMP_TIME_DAY_D table:
YEAR => H YEAR => QUARTER => MONTH => WEEK => DAY,
adding the relevant columns to each level.

Now double-click the "Prod Details" level of the product hierarchy

and add the Year level from the calendar hierarchy.

This enables navigation to the Year level from the Prod Details level, which is a drill across, as we are
navigating from a level of one dimensional hierarchy to another.

Level-based measures

Level-based measures are always calculated at a particular level of a hierarchy.

Normal measures are calculated based on the dimensional columns selected in the report,
so their output changes with the selected dimensional columns.
Level-based measure values stay the same irrespective of which columns are selected in the report.

Creating a level-based measure

To create a level-based measure, create a duplicate of the Revenue column (measure).
Rename it to BrandwiseRevenue.
Double-click BrandwiseRevenue, go to the Levels tab and select the Brand level of the Product hierarchy.

Drag the column to the presentation layer, save, and test with a report.

Measures at the grand total level

In order to create a level-based measure at the overall (grand total) level, first ensure that
the Grand Total level has been created in the hierarchy.

Now duplicate the BrandwiseRevenue column and rename it to OverallProductRevenue.

Double-click the column, go to the Levels tab and select the Grand Total level of the Product hierarchy.

Drag OverallProductRevenue to the presentation layer, then save and test.

Multiple hierarchies in a single dimensional hierarchy

Example: a financial year running from 01-APR-2011 to 31-MAR-2012. A single date carries both CALENDAR and FINANCE attributes:

          CALENDAR                                FINANCE
DATE      DAY     WEEK  MONTH  QUARTER  HF  YEAR  WEEK  MONTH  QUARTER  HF  YEAR
1-Jan-12  MONDAY  W1    1      1        1   2012  W37   10     4        2   2011
Ragged hierarchy
(Example sketch: projects such as project1, project2 and project 3 sitting at different depths under their parents.)

Skipped-level hierarchies
If a hierarchy has skipped levels compared to the actual desired levels, it is called a skipped-level hierarchy.

Example: above.

In a skipped-level hierarchy the starting level and ending level may be the same, but levels in between are skipped.

Ragged/unbalanced hierarchies
If the levels of a hierarchy do not exactly match the number of levels of the regular hierarchy, it is called a ragged or unbalanced hierarchy.

In these hierarchies the number of child levels under a parent is not the same as in the regular hierarchy.

Ragged, unbalanced and skipped-level hierarchies are not supported by OBIEE 10g;
for those we used to create a hierarchy table and design them manually.

All these hierarchies can be created in OBIEE 11g.

STEPS TO CREATE
In the BISAMPLE schema we have the table SAMP_PRODUCTS_DR, which contains both ragged and skipped levels.

1) Import the SAMP_PRODUCTS_DR table into the physical layer.

Create a primary key on the PROD_KEY column of this table.

2) Join the SAMP_PRODUCTS_DR table with the SAMP_REVENUE_F fact table.

3) Drag the SAMP_PRODUCTS_DR table from the physical layer to the BMM layer.

4) Create a logical join from SAMP_PRODUCTS_DR to the SAMP_REVENUE logical fact.

Create a new dimensional hierarchy (select the level-based hierarchy option) and name it "H dim Prod ragged".
Create the levels in the following way:

Grand Total

Brand
add the BRAND column of SAMP_PRODUCTS_DR,
create a key on the BRAND column and select Use For Display.
LOB
add the LOB column of SAMP_PRODUCTS_DR,
create a key on the LOB column and select Use For Display.
Type
add the TYPE column of SAMP_PRODUCTS_DR,
create a key on the TYPE column and select Use For Display.
Prod Desc
add the PROD_DSC and PROD_KEY columns,
create a key on PROD_DSC and select Use For Display.

Now double-click "H dim Prod ragged" and ensure that the Ragged and Skipped Levels options are selected.

Time Series hierarchy and functions

We can use three time series functions (Ago, ToDate, PeriodRolling) in OBIEE 11g logical calculations.
(Ago and ToDate are older functions; PeriodRolling was introduced in 11g.)

To use these functions we need to define a time series dimensional hierarchy.

Time series dimensional hierarchy

We can create a time series dimensional hierarchy on a hierarchy which contains date-related information.
Usually we have a date dimension table in any data warehouse which stores calendar and
financial dates.
This dimension table can be used as the time series dimension table.

We can have only one time series dimension per logical layer (business model).

When we create a dimension hierarchy as a time series, we must select a chronological key;
this is mandatory.

Chronological key
The chronological key helps to identify the granularity of the levels and to identify the exact level and the
periods before and after it.

Generally we select the lowest-granularity column of the hierarchy as the chronological key.

Time series functions

Ago
The Ago function calculates the previous period's measure value; this is similar to the
LAG function in Oracle.
Say, for example, we want to show the previous year's revenue for a given year.

Syntax:
Ago("BISAMPLE"."SAMP_REVENUE_F"."Actual REVENUE" , "BISAMPLE"."dim Hierachi Cal Date"."Year" , 1)
Here the first parameter is the measure for which we need the previous period's data,
the second parameter is the time series hierarchy level telling at which level we need the previous data,
and the third parameter is how many periods back we want.

ToDate
ToDate calculates running measure values;
for example, in a report we want to display the sum of the values of the rows above for a given row.

ToDate("BISAMPLE"."SAMP_REVENUE_F"."Actual REVENUE" , "BISAMPLE"."dim Hierachi Cal Date"."Year" )

Here the first parameter is the measure for which we need the running values,
and the second parameter is the level telling at which level we need the running values.
PeriodRolling
PeriodRolling provides the sum of the previous and current values as per the given parameters.
PeriodRolling("BISAMPLE"."SAMP_REVENUE_F"."Actual REVENUE" , -1, 0 )

The first parameter is the measure,

the second parameter is how many previous periods to include relative to the current record,
and the third parameter is how many future periods to include relative to the current record.
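A small illustrative example of how the three functions behave, with made-up monthly revenue numbers (Ago and ToDate defined at the Year level, PeriodRolling over the month grain; assume Dec-2010 revenue was 80 and the same months of 2010 were 90, 95 and 100):

MONTH     REVENUE  AGO(revenue, Year, 1)  TODATE(revenue, Year)  PERIODROLLING(revenue, -1, 0)
Jan-2011  100      90 (Jan-2010)          100                    180 (Dec-2010 + Jan-2011)
Feb-2011  120      95 (Feb-2010)          220                    220
Mar-2011  150      100 (Mar-2010)         370                    270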
Joining multiple physical tables into a single LTS of a logical table

Background
We can join more than one physical table to a single LTS of a logical table. Normally, when we drag
a physical table to the BMM layer, one logical table is created along with one LTS,
and each LTS points to the physical table which was dragged.

In some scenarios we may have to link multiple physical tables to a single logical table.
This can be done in two ways:

1) add more than one physical table to the LTS of the logical table

2) add more than one LTS to the logical table

1) Add more than one physical table to the LTS of the logical table

We use this approach in the following cases:

a) to convert snowflaked tables of the physical layer into a logical star schema in the BMM layer;
b) to calculate/write physical derivations on columns from more than one physical table.

Steps to create
Create a measure to show the revenue of customers whose marital status is SINGLE.

Open the rpd in online mode.

1) Drag the SAMP_CUSTOMERS_D physical table from the physical layer onto the LTS of SAMP_REVENUE_F.
(Note that we must drag it onto the LTS itself; dragging it onto other parts will create another LTS.)
You can now notice that a new physical table has been added under the table map section of the LTS;
to see this, double-click the LTS and verify the two tables (SAMP_REVENUE_F, SAMP_CUSTOMERS_D)

and the default inner join (change the join type to outer if needed).

2) Create a new logical column and name it SINGLE_REVENUE.

Now go to the column mapping tab and open the expression editor of SINGLE_REVENUE.

You can now see both physical tables in the physical tables section (left side).

Write the following expression:

case when MARITAL_ST='SINGLE' THEN REVENUE ELSE 0 END

3) Save the expression and apply a SUM aggregation rule to the SINGLE_REVENUE logical column
(Aggregation tab => Sum).
4) Drag the SINGLE_REVENUE measure column from the BMM layer to the presentation layer.

Save the rpd and test.

Alternate

Import the SAMP_ADDRESS_D table into the physical layer.

Define a primary key on the ADDRESS_KEY column.
Create a physical join from SAMP_CUSTOMERS_D to SAMP_ADDRESS_D with the
join condition SAMP_CUSTOMERS_D.ADDRESS_KEY = SAMP_ADDRESS_D.ADDRESS_KEY.
Ensure the cardinality is one-to-many from address to customers.

Now double-click the LTS of the customer logical dimension table,

click on the plus (+) to add a new physical table under the general tab of the LTS,
select the SAMP_ADDRESS_D table in the physical table list and click Select,
then in the next window change the join type to inner/outer etc. as required.

Next drag the AREA, CITY and REGION columns of the physical layer address dimension to the logical customer dimension.

Next drag these columns (AREA, CITY, REGION) to the presentation layer.

Save the rpd.

2) Add more than one LTS to a logical table

This method can be used a) to join two fact tables which are at different granular levels,
and b) to design aggregate tables.

Designing two fact tables which are at different granular levels

When we have two fact tables at different granular levels, we should not join them directly in the physical
layer.
This leads to a data issue popularly known as the header-detail problem in OBIEE;
it is also called a chasm trap in other tools.
Issue: when we have two fact tables in a one-to-many relationship and we join them directly,
the first table's data gets replicated and the final output of the first table's measures produces doubled
(inflated) amounts.

To solve this we model each fact table in both the physical layer and the BMM layer as its own star schema,
joining each fact table with its respective dimension tables.
Mostly these dimension tables are conformed dimensions.

Steps to create

Open the rpd in online mode.

Here we consider SAMP_QUOTAS_F as an additional fact table which has a different granularity compared
to the existing fact table SAMP_REVENUE_F.

Import the SAMP_QUOTAS_F table into the physical layer and join it with the product dimension table.

Next drag the SAMP_QUOTAS_F physical table onto the SAMP_REVENUE_F logical fact table.
All its columns (like VALUES) are now added to the REVENUE logical fact table,
and you can also notice an additional LTS created below the SAMP_REVENUE_F LTS.

Now double-click the VALUES column and change the aggregation rule to SUM.

Drag the VALUES measure to the revenue folder of the presentation layer.

Save the rpd and create a sample report with the following columns selected:

BRAND  REVENUE  VALUES

After report execution, check the nqquery log file under the directory below:

D:\OBIEE116\instances\instance1\diagnostics\logs\OracleBIServerComponent\coreapplication_obis1

You will see two queries generated, one for each fact table, i.e. revenue and quotas.

Concept
When we have two logical stars, each with a different fact table,
and you select measures from both stars along with common dimension attributes,
OBIEE generates two separate queries, one per fact table,
and the two output sets are combined at the BI server level and shown in a single report.

Note:
when you select measures from different logical stars without a common dimension attribute,
it leads to a metadata error or abnormal results.
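A sketch of the two physical queries the BI server would generate for the BRAND / REVENUE / VALUES report above (simplified; the key and column names here are illustrative and the real queries carry generated aliases):

-- query 1: revenue star
SELECT p.brand, SUM(f.revenue)
FROM samp_revenue_f f JOIN samp_products_d p ON f.prod_key = p.prod_key
GROUP BY p.brand;

-- query 2: quotas star
SELECT p.brand, SUM(q.value)
FROM samp_quotas_f q JOIN samp_products_d p ON q.prod_key = p.prod_key
GROUP BY p.brand;

The BI server then stitches the two result sets together on BRAND in memory to render one report.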
Designing Aggregate Tables

Aggregate tables are summarised tables created for faster retrieval of data;
they contain pre-calculated, summarised data of the original fact tables.

When we have data in both the fact and the aggregate tables, queries answered from the aggregate table always produce reports
faster.

When we build aggregate tables, we build the data at a much higher granular level so that the aggregate tables
do not contain too many records.

Example:
the REVENUE fact contains data at all dimensional levels, whereas the revenue aggregate table in the
sample schema contains data only at the product and employee level:

Table             Measures       PROD  EMP  DATE  ORDER  CUSTOMER
SAMP_REVENUE_F    REVENUE/UNITS  X     X    X     X      X
SAMP_REVENUE_FA1  REVENUE/UNITS  X     X

Designing the aggregate table in the rpd

Import the SAMP_REVENUE_FA1 table from the sample schema into the physical layer.

Join the SAMP_REVENUE_FA1 table with the product dimension table.

Now ensure that the revenue and units measure columns of the existing revenue logical fact table are named
REVENUE and UNITS (if not, rename them to REVENUE and UNITS).

Now drag the SAMP_REVENUE_FA1 aggregate table from the physical layer onto the SAMP_REVENUE_F logical fact table.
This creates one additional LTS under the SAMP_REVENUE_F logical fact, named SAMP_REVENUE_FA1.

Also notice that the REVENUE and UNITS logical columns now point to both the revenue fact and the aggregate fact;
this can be seen under the column mapping tab of each LTS (aggregate and fact).

Now double-click the aggregate LTS (SAMP_REVENUE_FA1):

go to the Content tab,
select Logical,
and select the Prod Desc hierarchy level in the product hierarchy drop-down list.

This indicates to OBIEE the product-hierarchy level at which the aggregate table holds its data.

For the fact table LTS, go to the Content tab and either leave all options blank
or select the lowest-level columns of all the dimensional hierarchies;

this indicates to OBIEE that the fact table is at the most detailed granularity for all dimensions.
Save the rpd and create a report with
the brand and revenue columns (report1).

Create another report with calendar year and revenue (report2).

In both reports we can see data, but OBIEE produces the data for report1 from
the aggregate table, while for report2 it generates the data from the fact table.
Setup script for external table authentication (used in the Security section below):

CREATE TABLE EXT_SEC(
  UNAME VARCHAR2(20)
 ,PWD   VARCHAR2(20)
 ,GRP   VARCHAR2(50)
 ,ROL   VARCHAR2(200)
);

INSERT INTO EXT_SEC VALUES ('user_america','apple123','AMERICAS','BIAuthor;roleCust');
INSERT INTO EXT_SEC VALUES ('userasia','apple123','APAC','BIAuthor;roleCust');
INSERT INTO EXT_SEC VALUES ('user_middleeast','apple123','EMEA','BIAuthor;roleCust');
INSERT INTO EXT_SEC VALUES ('user_biz','apple123','BizTech','BIAuthor;BIConsumer;roleProd');
INSERT INTO EXT_SEC VALUES ('user_fun','apple123','FunPod','BIAuthor;roleProd');
INSERT INTO EXT_SEC VALUES ('userapple','apple123','DEFAULT','BIAdministrator;BIAuthor;BIConsumer;BISystem');

COMMIT;
Security

Rough sketch of the login flow: a user connects to the presentation server (analytics) and is authenticated via one of:
1) manually created users in the default LDAP server (managed from the WebLogic console/em),
2) an external LDAP server,
3) single sign-on (SSO).

LDAP: Lightweight Directory Access Protocol.

Security in BI 11g
Security can be broadly classified into two parts:
authentication
authorisation

Authentication is the process of checking the validity of a user and their credentials.

We can create users and groups for OBIEE authentication purposes.

Authorisation is the process of setting access permissions, object-level and data-level security.
We use roles for this configuration purpose.
Further, we can also use the web catalog for GUI-level authorisation.

In OBIEE 10g there used to be separate security configurations for each BI component, like the BI server, publisher and scheduler.
In order to integrate the security configuration, OBIEE 11g introduces new changes:

all authentication in 11g is configured at the WebLogic application server;
all authorisation is configured in Fusion Middleware, the Identity Manager and the web catalog.

Authentication
we can implement authentication in the following ways:
default LDAP authentication from WebLogic
external LDAP authentication using external LDAP servers
external table authentication
single sign-on (SSO)

LDAP
Lightweight Directory Access Protocol: a directory server where we create users and groups and configure
security for each group/user in terms of access, restrictions, enable/disable, etc.

Default LDAP authentication from WebLogic
By default WebLogic comes with a preconfigured LDAP server for implementing security;
create new users and groups there for new logins.
Below are default groups created by DefaultAuthenticator
AdminChannelUsers
Administrators
AppTesters
BIAdministrators
BIAuthors
BIConsumers
CrossDomainConnectors
Deployers
Monitors
Operators
After creating users, they can log in to the presentation services. We then need to assign roles in Fusion Middleware Control.

External LDAP authentication using external LDAP servers

If an LDAP server already exists, we can configure WebLogic to use it;
users already present in that LDAP server can then log in to the presentation services directly.

External table authentication

We can maintain users/groups in an external physical table and use them to log on.
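
A sketch of the session init block SQL for external table authentication, using the EXT_SEC table created earlier. :USER and :PASSWORD are the BI server's built-in placeholders for the credentials typed at login, and the selected columns map, in Target order, to the USER, GROUP and ROLES session variables:

-- Authentication init block SQL against the external security table
SELECT UNAME, GRP, ROL
FROM   EXT_SEC
WHERE  UNAME = ':USER'
AND    PWD   = ':PASSWORD';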
SSO
Single sign-on
Single sign-on is a configuration setup where we connect to a single interface and from there log on to any
other server without providing the credentials again.
In this process all the servers rely on the configuration of one common security server.

Authorisation
Authorisation is the process of providing the relevant permissions, grants, roles etc. applicable to a user or group.
Example: validating that user1's user name and entered password are correct is authentication;
after login, deciding which reports he may open and which data he may view is authorisation.
Authorisation can be classified into three parts:
setting up permissions
object level security
data level security

Setting up permissions
We can set up permissions for the users created in WebLogic.
For this purpose Fusion Middleware provides the below default roles:
BISystem        system-level permissions
BIAdministrator administrator-level permissions
BIAuthor        permissions to create reports/dashboards, etc.
BIConsumer      read-only access

To grant permissions we add users to the respective roles.

Role
A role is a permission (responsibility) which we can define and assign to any user or group.
Roles are created in Enterprise Manager and granted to the users/groups created in WebLogic;
configuring the permissions/access/restrictions of each role is done in the OBIEE repository Identity Manager.

We can create user-defined roles and add users to them to group users functionally.
All roles created in EM are automatically available in the repository, where we can
set access restrictions such as data level security and object level security.

Object level security


Granting/revoking permissions on presentation layer objects such as presentation tables/columns
is called object level security.

data level security


Restricting some records/rows out of the complete result set is called data level security.
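
For example, a data filter attached to a role in the Identity Manager might restrict a logical table using the GRP session variable from the external table example above (the subject area and column names here are illustrative):

-- Data filter expression on the role, applied to the logical customers table
"Sample Sales"."D Customers"."Region" = VALUEOF(NQ_SESSION."GRP")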
Related NQSConfig.INI security parameters:
DEFAULT_PRIVILEGES = READ;
PROJECT_INACCESSIBLE_COLUMN_AS_NULL

Variables
Variables are of three types:
Repository variables
Session variables
Presentation variables

Repository variables
Repository variables store their values at the BI server level.
They are initialised at the BI server and provide the same values to the entire user community.

Repository variables are of two types:

Static
Dynamic

Static variables hold static information; their values do not change while the rpd is running.
Good candidates for static variables: OrgName, the Initial/Full load flag of the data warehouse, etc.

Dynamic variables:
Dynamic variable values tend to change at regular intervals.
Examples: current date, last refresh date, current month/week, last month/week, etc.

To create a dynamic variable, an initialization block is mandatory.

Initialization Block:
A component which initialises the values of dynamic variables.
To initialise variable values we write SQL statements in the init block;
the output of the SQL is assigned to the repository dynamic variables.

We can set the interval at which the init block has to execute.
When the BI server starts, the block is executed automatically, and from then on
it keeps executing at the given interval.
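
For example, an init block SQL like the following could feed two dynamic variables; the columns map, in order, to the variables listed in the Target pane (the variable names vCurrentDate and vCurrentMonth are illustrative):

-- Populates vCurrentDate and vCurrentMonth at every refresh interval
SELECT TRUNC(SYSDATE), TO_CHAR(SYSDATE, 'MON-YYYY')
FROM   DUAL;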

Properties of a Rep init block

Refresh interval: specifies when the init block refreshes its values.
Data source / Connection pool: provide the SQL that fetches the dynamic variable values,
plus the data source (connection pool) on which the SQL executes.
Target: defines the dynamic variables and the order of the variables.
Execution precedence: we can create multiple init blocks in the rpd; when several are scheduled
to run at the same time, we need to specify the order in which to execute them.
Note: syntax to call a repository variable: VALUEOF("vStaticRepOrg")
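
Illustrative uses of this syntax in a logical expression or Answers filter (the variable names are assumptions):

-- Repository variable (static or dynamic)
"Time"."Calendar Year" = VALUEOF("vCurrentYear")

-- Session variables (covered below) take the NQ_SESSION qualifier
"Offices"."Organization" = VALUEOF(NQ_SESSION."GRP")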

Session variables
Session variables are of two types: system and non-system.
All session variables should be associated with a session initialisation block.

System
System variables are built-in variables; we can use them with or without creating them.
Below is the list of system variables:
USER
GROUP
LOGLEVEL
ROLES
PERMISSIONS
USERGUID
DISABLE_CACHE_SEED
DISABLE_CACHE_HIT
ROLEGUID
SELECTPHYSICAL
USERLOCALE

Non System
Non-system variables are user-defined variables.

Session initialisation block: required at logon (authentication).

Properties of a Sess init block

Allow deferred execution
This option ensures that only the session initialisation blocks actually used in the rpd are executed;
in 10g all session initialisation blocks were executed whether used or not.

Row-wise initialisation
Allows session variables to be created dynamically, and each such variable can be assigned multiple values:
CREATE TABLE ROW_WISE (VAR_NAME VARCHAR2(10), VAR_VALUE VARCHAR2(50));

INSERT INTO ROW_WISE VALUES('test1','BizTech');
INSERT INTO ROW_WISE VALUES('test1','FunPod');
INSERT INTO ROW_WISE VALUES('test1','HomeView');
INSERT INTO ROW_WISE VALUES('test2','BizTech');
INSERT INTO ROW_WISE VALUES('test2','FunPod');
COMMIT;

SELECT VAR_NAME, VAR_VALUE FROM ROW_WISE;
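
With row-wise initialisation the first selected column is read as the variable name and the second as its value, so the SQL above creates test1 with three values and test2 with two. A sketch of using such a multi-value variable in a data filter (the subject area and column names are illustrative):

"Sample Sales"."D Products"."Company" IN (VALUEOF(NQ_SESSION."test1"))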



Cache
We have three types of caches in OBIEE:
1) BI server cache
2) Presentation server cache
3) browser cache

BI Server cache:
When we enable caching for the BI server, all report outputs are stored in the BI server cache;
these stored results are called cache entries.

BI server cache has following components

Cache storage
Cache hit detection
cache manager

Setting up cache
To configure caching for the BI server we first set the cache ENABLE property to YES
in NQSConfig.INI.
This can be done either from the Enterprise Manager console or by editing NQSConfig.INI directly.
In addition we also need to configure the cache storage directory,
max entries, max size, etc.
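
A sketch of the relevant NQSConfig.INI [CACHE] entries (the path and limits shown are illustrative values, not recommendations):

[CACHE]
ENABLE = YES;
DATA_STORAGE_PATHS = "C:\OracleBI\cache" 500 MB;
MAX_CACHE_ENTRIES = 1000;
MAX_CACHE_ENTRY_SIZE = 1 MB;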

Cache storage
The location/path where all BI server cache entries are stored.
If the BI server is clustered, we need to provide a clustered (shared) location.

Cache hit detection
Cache hit detection is the algorithm that identifies when a report can be served from cache and when
its output has to be fetched from the database directly.

When a cache entry is created, OBIEE stores the metadata of that report in the cache manager;
this metadata includes the logical SQL of the report, size, last used time, user, etc.
Cache entries are created for each user.
When a report executes, the cache hit detection process takes the logical SQL of the report and checks whether
any cache entry exists for that logical SQL.
If data for the same logical SQL, or a superset of it, is available in the cache, the BI server
automatically directs the report request to read from the cache entries.

In all other cases the report request is routed to the database directly.

If you enable cache roll-up aggregation, new cache entries are also created for reports
built on top of existing cache entries.

Cache manager
The cache manager is a tool in the rpd which helps manage and monitor the cache through a GUI.

Cache purging
Cache purging is the process of deleting cache entries.
It can be done in the following ways:

Manual methods
using the cache manager
executing nqcmd commands manually, e.g. call SAPurgeAllCache()

Automatic methods
specify the cache persistence (expiry) time for every table in the physical layer
using event polling, to purge the cache as soon as warehouse tables change
schedule the nqcmd command to clean the cache at given intervals

Using the cache manager

Open the rpd in online mode and go to the cache manager.
(Note that the cache manager is enabled only when the cache ENABLE property is set to YES
in NQSConfig.INI.)
Select the cache entries you want to purge,
right-click and purge the cache.

Executing nqcmd commands manually

Method 1) Go to Oracle BI Presentation Services Administration,
open Issue SQL Directly,
type the following command and click the Execute button:
call SAPurgeAllCache()

Method 2) Execute the below command at the command prompt,
or keep it in a shell script and execute that:

OracleBI_Home\server\Bin\nqcmd.exe -d AnalyticsWeb -u Administrator -p Password -s "OracleBI_Home\Cache\PurgeAllCache.sql" -o "OracleBI_Home\Cache\PurgeAllCache.log"

Configure event polling

Create an event polling table using the table structure from C:\OracleBI\server\Schema
(in 11g the script is under the rcu\integration\biserver\scripts directory; see the DDL below).
It creates a table called S_NQ_EPT.

Generally this table is created in the DW schema (here the sh schema).

Import the S_NQ_EPT table into the physical layer.

Go to rpd / Tools => Utilities => Oracle BI Event Tables.
Add the S_NQ_EPT table to the event tables list
and set the polling frequency to 1 hour.

Close and save the rpd.

On the ETL side, as soon as any BI table is updated, insert a row into the event polling table.
Example:

INSERT INTO S_NQ_EPT (UPDATE_TYPE, UPDATE_TS, DATABASE_NAME, CATALOG_NAME, SCHEMA_NAME, TABLE_NAME, OTHER_RESERVED)
VALUES (1, SYSDATE, 'apple', 'SalesAgg', 'sh', 'customers', NULL);

The BI server watches this table at the interval specified by the polling frequency
and clears the cache entries for all tables that changed during the period.

Presentation server cache

Every time a report is viewed by a user, the report data is kept in the presentation server
for a short period; this cache is called the presentation server cache.
By default this cache is maintained for 2 minutes and cleaned automatically afterwards.

We can configure the presentation server not to cache by changing the cache properties
in the instanceconfig file.

The S_NQ_EPT DDL script in 11g is found under D:\OBIEE116\Oracle_BI1\rcu\integration\biserver\scripts\oracle:

CREATE TABLE S_NQ_EPT (
  UPDATE_TYPE    DECIMAL(10,0) DEFAULT 1 NOT NULL,
  UPDATE_TS      DATE          DEFAULT SYSDATE NOT NULL,
  DATABASE_NAME  VARCHAR2(120) NULL,
  CATALOG_NAME   VARCHAR2(120) NULL,
  SCHEMA_NAME    VARCHAR2(120) NULL,
  TABLE_NAME     VARCHAR2(120) NOT NULL,
  OTHER_RESERVED VARCHAR2(120) DEFAULT NULL NULL
);

Usage Tracking

MUDE
Multi User Development Environment

Architecture

Repositories are registered in the [REPOSITORY] section of NQSConfig.INI; the entry marked DEFAULT is the one the BI server loads:

Star = b14_BI0011.rpd, DEFAULT;
abc = test.rpd;
Parent child

EMPNO  ENAME            POSITION(DESGN)  HIRED_DT   MGR_ID  TYPE
1      Fred Webster     6                2-Apr-08   5       TYPE 1
2      Aurelio Miranda  3                9-Sep-07   5       TYPE 2
3      XX               20
4      YY               69
5      Jonny Harston    7                22-Jul-04  6       TYPE 1
6      Jack Benetti     8                26-Nov-05  10      TYPE 1
7      QQ               66
8      NN               34
9      PP               67
10     Roger Wray       6                5-Jan-09   12      TYPE 3

10G SOLUTION

In 10g a parent-child hierarchy had to be flattened into a level-based table such as EMP_HIERARCHIES,
with columns EMPNO, ENAME, POSITION plus LEVELn_CODE / LEVELn_DESC / LEVELn_POSITION for each ancestor level.
Example row for employee 1:

EMPNO=1  ENAME=Fred Webster  POSITION=6
LEVEL1: CODE=5   DESC=Jonny Harston  POSITION=7
LEVEL2: CODE=6   DESC=Jack Benetti   POSITION=8
LEVEL3: CODE=10  DESC=Roger Wray     POSITION=6

In 11g we instead import the source tables and populate a parent-child relationship table.

Import and verify the source tables:
SELECT * FROM SAMP_EMPL_D_VH;
SELECT * FROM SAMP_EMPL_POSTN_D;
SELECT * FROM samp_empl_parent_child_map;

Sample contents of the relationship table (MEMBER_KEY, ANCESTOR_KEY, DISTANCE):
member 1 has ancestors 5, 6, 10 and 12 at distances 1, 2, 3 and 4; member 5 has ancestors 6, 10 and 12; and so on.

The PL/SQL script (generated by the parent-child hierarchy wizard) that populates the relationship table:

declare
  v_max_depth integer;
  v_stmt varchar2(32000);
  i integer;
begin
  select max(level) into v_max_depth
    from SAMP_EMPL_D_VH
    connect by prior EMPLOYEE_KEY = MGR_ID
    start with MGR_ID is null;

  v_stmt := 'insert into BISAMPLE.PARENT_CHILD_TABLE (MEMBER_KEY, ANCESTOR_KEY, DISTANCE, IS_LEAF)' || chr(10)
    || 'select EMPLOYEE_KEY as member_key, null, null, 0 from SAMP_EMPL_D_VH where MGR_ID is null' || chr(10)
    || 'union all' || chr(10)
    || 'select' || chr(10)
    || ' member_key,' || chr(10)
    || ' replace(replace(ancestor_key, ''\p'', ''|''), ''\\'', ''\'') as ancestor_key,' || chr(10)
    || ' case when depth is null then 0' || chr(10)
    || ' else max(depth) over (partition by member_key) - depth + 1' || chr(10)
    || ' end as distance,' || chr(10)
    || ' is_leaf' || chr(10)
    || 'from' || chr(10)
    || '(' || chr(10)
    || ' select' || chr(10)
    || ' member_key,' || chr(10)
    || ' depth,' || chr(10)
    || ' case' || chr(10)
    || ' when depth is null then '''' || member_key' || chr(10)
    || ' when instr(hier_path, ''|'', 1, depth + 1) = 0 then null' || chr(10)
    || ' else substr(hier_path, instr(hier_path, ''|'', 1, depth) + 1, instr(hier_path, ''|'', 1, depth + 1) - instr(hier_path, ''|'', 1, depth) - 1)' || chr(10)
    || ' end ancestor_key,' || chr(10)
    || ' is_leaf' || chr(10)
    || ' from' || chr(10)
    || ' (' || chr(10)
    || ' select EMPLOYEE_KEY as member_key, MGR_ID as ancestor_key, sys_connect_by_path(replace(replace(EMPLOYEE_KEY, ''\'', ''\\''), ''|'', ''\p''), ''|'') as hier_path,' || chr(10)
    || ' case when EMPLOYEE_KEY in (select MGR_ID from SAMP_EMPL_D_VH) then 0 else 1 end as IS_LEAF' || chr(10)
    || ' from SAMP_EMPL_D_VH' || chr(10)
    || ' connect by prior EMPLOYEE_KEY = MGR_ID' || chr(10)
    || ' start with MGR_ID is null' || chr(10)
    || ' ),' || chr(10)
    || ' (' || chr(10)
    || ' select null as depth from dual' || chr(10);
  for i in 1..v_max_depth - 1 loop
    v_stmt := v_stmt || ' union all select ' || i || ' from dual' || chr(10);
  end loop;
  v_stmt := v_stmt || ' )' || chr(10)
    || ')' || chr(10)
    || 'where ancestor_key is not null' || chr(10);

  execute immediate v_stmt;
end;
/
Multiple Hierarchies
composite keys

Implicit fact column
A subject area property in the presentation layer that names the fact column the BI server should use to
choose a join path when a query contains only dimension columns from different dimensions.

Lookup table
The LOOKUP function fetches a value from a lookup table for a given key. A DENSE lookup behaves like an
inner join; a SPARSE lookup takes a default value and behaves like an outer join:

Lookup(DENSE "HR Analysis"."DEPT_LKP"."DNAME", "HR Analysis"."EMP"."DEPTNO")             -- inner join
Lookup(SPARSE "HR Analysis"."DEPT_LKP"."DNAME", 'default', "HR Analysis"."EMP"."DEPTNO") -- outer join

Aggregate persistence

The Aggregate Persistence Wizard (Admin Tool: Tools => Utilities) generates a script like the one below;
when executed, it creates the aggregate table, loads it, and maps it into the rpd:

create aggregates
ag_SAMP_REVENUE_F
for "Revenue Analysis"."SAMP_REVENUE_F"("Total Fixed Cost")
at levels ("Revenue Analysis"."H Dim date"."Year")
using connection pool "apple"."Connection Pool"
in "apple".."BISAMPLE";

Evaluate
EVALUATE passes a database function straight through to the underlying database; %1, %2, ... are
placeholders for the arguments that follow the expression:

EVALUATE('upper(%1)', 'kishore kumar')

EVALUATE('LAG(%1,1,0) OVER (ORDER BY %2)', "Sales"."AMOUNT_SOLD", "Times"."CALENDAR_YEAR")

P"."DEPTNO" ) inner join
sis"."EMP"."DEPTNO" ) outer join

"Times"."CALENDAR_YEAR")
