The repository stores the metadata required for reporting purposes. A repository can be stored in XML or binary format.
Until OBIEE 10g the repository was stored only in binary format. From 11g onwards we can save the repository in XML or binary.
Every binary repository is stored in a binary file with the extension ".rpd".
OBIEE repository metadata storage types:
Binary
XML based
Physical Layer
In the physical layer we model physical tables as-is, with the same structure as the original
database objects.
Note: the physical layer structure (metadata) should always match the structure of the existing physical objects in the database.
OBIEE supports relational and non-relational sources like Excel, XML, web services, and multidimensional structures like cubes in the physical layer.
BMM layer
Also popularly known as the Logical layer. In this layer we convert physical models (structures) into logical structures as per business need.
example: we can convert a snowflake physical model into a logical star,
normalised OLTP models into star models,
denormalised models into normalised models, etc.
Presentation Layer
In the presentation layer we add the objects on which users want to create analytics/reports.
we can create any number of subject areas in the presentation layer.
every logical star model in the BMM layer can become a subject area in the presentation layer.
In order to work on OBIEE, check the below links and verify whether all the servers are running or not.
Weblogic Application server
http://applebi:7001/console
http://applebi:7001/analytics
check whether you are able to create a new report.
Offline Mode: in offline mode we can develop or make changes to any repository by opening it
in offline mode. go to the File menu and select Open => Offline.
Online Mode: open the Administration tool and go to File menu => Open => Online.
provide the repository password
and provide the weblogic user/pwd and click OK.
it will automatically open the repository which is loaded in the BI server.
to make any changes on the repository after loading:
go to the File menu and select Check Out All.
make the changes in the rpd
and perform Check In All under the File menu.
here Check Out enables the user to make changes on the existing rpd.
Check In commits the changes onto the rpd.
when you perform Check In it will also run a global consistency check.
now save the repository, which confirms our changes onto the repository.
MUDE (Multi User Development Environment): we need to configure MUDE before we can use it. This functionality allows multiple users
to develop a single repository in parallel.
Steps to upload a new repository into the BI server
In online mode we can perform changes on the existing repository loaded on the BI server.
a) manual uploading
this method exists in OBIEE 10g and we can also use it in 11g. However, it is not recommended.
to upload rpd changes or a new rpd to the BI server, perform any one of the options below.
in this method first place the repository under the repository folder.
example
Note: to perform this manual uploading we need to stop the BI server while performing these changes.
provide the weblogic user/pwd.
expand the Business Intelligence folder at the left navigation pane.
click on coreapplication and check whether all the services are running or not.
check whether you are able to create a new report.
The BI server runs using the configuration which is set up in the NQSConfig.INI file. This file is always located under the below directory:
D:\OBIEE116\instances\instance1\config\OracleBIServerComponent\coreapplication_obis1
The BI server loads the repositories which are configured in the NQSConfig file to facilitate analytics.
Enterprise Manager
open the link http://applebi:7001/em
and expand the Business Intelligence folder at the left navigation pane.
upload the new rpd under the repository section
and restart the BI server.
we need to create one database object per type of database. Example: one for Oracle, one for XML, etc.
we can add any number of schemas (users) to a single database for importing tables.
Connection Pool
the connection pool provides the connection-related information to connect to the database from OBIEE.
we can create any number of connection pools under one database.
out of the many connection pools, one connection pool can be used as a persistent connection pool.
this setting needs to be done under the database object.
when we select a persistent connection pool, OBIEE will use this connection information to perform temporary table operations,
writeback, aggregate persistence wizards, etc.
example: when a query has too many values in a WHERE-condition IN clause, OBIEE will automatically insert them into
a temp table and join it with the main table.
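The IN-clause rewrite above can be sketched with a tiny example. The table and column names here are made up for illustration (this is a sketch of the idea, not what OBIEE literally executes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE sales (cust_id INTEGER, amount INTEGER)")
cur.executemany("INSERT INTO sales VALUES (?, ?)",
                [(1, 100), (2, 200), (3, 300), (4, 400)])

# A filter with many values in an IN clause ...
wanted = [1, 3]

# ... can be rewritten by inserting the values into a temp table
# and joining it with the main table, as the notes describe.
cur.execute("CREATE TEMP TABLE wanted_ids (cust_id INTEGER)")
cur.executemany("INSERT INTO wanted_ids VALUES (?)", [(w,) for w in wanted])

cur.execute("""
    SELECT SUM(s.amount)
    FROM sales s
    JOIN wanted_ids w ON w.cust_id = s.cust_id
""")
total_join = cur.fetchone()[0]

# Equivalent plain IN-clause query for comparison:
cur.execute("SELECT SUM(amount) FROM sales WHERE cust_id IN (1, 3)")
total_in = cur.fetchone()[0]

print(total_join, total_in)  # 400 400
```

both forms return the same result; the temp-table join avoids generating a huge literal IN list.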
call interface
we have two types of call interfaces:
1) OCI (Oracle Call Interface), which is a native connectivity interface to connect to the database.
2) ODBC.
always use native connectivity drivers to connect to the database, as they are faster and support all database features.
we can go for ODBC when native connections are not available.
Datasource name
under data source name we need to provide the name of the database which we need to connect to.
when you select ODBC under call interface, we need to provide the ODBC DSN name under this option.
username/pwd
these are the username and password of the database.
maximum connections
this restricts how many maximum connections we can have to the database at a time from an OBIEE connection pool.
the total number of connections the OBIEE server can have to the database is the sum of the connections of all connection pools.
shared logon
when we select the shared logon option, every presentation server user (an OBIEE user who connects from the browser), when generating a
report, will make use of the single user/pwd provided under the user/pwd textbox to execute queries.
in case we want every user to use a different database user to connect to the database, then we need to deselect this
option and provide the user information under the identity manager.
connection pooling
connection pooling timeout is the time for which a database session (connection) can stay active even after query
execution has completed on the database.
isolation level
there are four levels of isolation:
1) committed
when we select this option OBIEE will read only committed data from the database.
2) dirty
OBIEE also reads uncommitted data from the database to generate reports.
3) repeatable
when we select repeatable, the OBIEE server creates a row-level lock on the table for the records being read from the database.
this way no one can change the data while OBIEE reads those records.
4) serializable
this option creates a table-level lock by OBIEE on the table from which OBIEE is reading data.
generally in data warehousing we use committed data only, as we read the data only once it is loaded by ETL.
options 3 and 4 are applicable when we create the rpd on a real-time data warehouse or on OLTP systems.
example:
import tables from bisample and scott.
create a connection pool pointing to the user bisample.
then enable this option.
Physical Joins
there are two types of joins in the physical layer:
1) Foreign key join
2) Complex join
To create the above joins we had two separate join objects in the 10g version; however, we have a single object icon in 11g.
we can create both types of joins using a single icon click.
the database folder (object) in the physical layer defines the database type and the database features which OBIEE can use while generating queries.
if the physical database is designed to work as a virtual database then we need to select the VPD option (Virtual Private Database).
VPD applies when databases are designed to create a separate virtual database for every user who logs into the database;
each user can have different database params.
allow populate queries by default: this option enables OBIEE to generate create-table queries to insert data temporarily.
allow direct database requests by default: this option enables users to create reports directly on the database with their custom SQL, bypassing rpd subject areas.
when we model tables from more than one schema/user then we need to select the 'Require fully qualified table names' option.
with this option OBIEE will prefix schema names in front of table names. This prefix acts as the owner name of the table in the
physical layer.
a join which is created between two tables based on a primary key and a foreign key is called a foreign key join.
we cannot add any custom conditions / join columns in this join.
example: emp.deptno=dept.deptno
when we drag physical tables to the BMM layer, all the foreign key joins are inherited into the BMM layer.
types of tables
1) physical
it is a regular physical table which exists in a physical database.
any physical object which physically exists in the database can be created as a
physical type of table.
2) select
in the select type of table we can write any valid SQL.
generally it is the in-line view concept of databases.
in case we want to use any SQL output as a table, we can set the table type to select and write the
SQL in the SQL text box.
3) procedure
the procedure type helps to expose any procedure output in the physical layer.
in this case the procedure output columns will become the columns of the physical object.
Cache
the table cache property enables query output to be cached and maintained as per the cache persistence time.
we can see more under the cache manager.
deploy view
we can see this option enabled for tables which have been created with table type select.
by selecting deploy view, OBIEE will create a physical view on the database using the SQL which is specified under the
select table type.
steps:
import the SAMP_LOOKUPS_D table from the BISAMPLE database into the physical layer.
next double-click the SAMP_LOOKUPS_D table and select table type as select.
provide the following SQL under the text box (default initializer string):
SELECT lookup_dsc, language_key, lookup_key, lookup_type FROM SAMP_LOOKUPS_D
WHERE lookup_type='Customer Status' AND language_key='en'
ensure that the columns of the SQL match the column names of the physical object.
define a primary key on the lookup_key column.
now join SAMP_LOOKUPS_D to the customer dimension table (SAMP_CUSTOMERS_D).
(note: drag first from the customer table to the lookup table)
provide the following condition:
"apple".""."BISAMPLE"."SAMP_LOOKUPS_D"."LOOKUP_KEY"
= "apple".""."BISAMPLE"."SAMP_CUSTOMERS_D"."STATUS_KEY"
now drag the SAMP_LOOKUPS_D physical table to the BMM layer and join it with the customer table in the BMM layer.
next drag the EMP_MGR table from the physical layer to the HR Analysis folder of the BMM layer.
go to the BMM layer joins and join EMP_MGR to EMP if the join does not already exist.
rename the ENAME column of the EMP_MGR logical table to "manager name".
now drag the manager name logical column from the BMM layer to the emp folder of the presentation layer.
Scenario 2)
open the rpd in online mode.
go to the physical layer and create an alias on SAMP_TIME_DAY_D.
provide the alias name Dim_Paid_Date.
define a primary key on dim_paid_date on the CALENDAR_DATE column.
now go to the physical diagram and join from the REVENUE fact table to DIM_PAID_DATE.
provide the following join condition:
"apple".""."BISAMPLE"."dim_PAID_date"."CALENDAR_DATE" =
"apple".""."BISAMPLE"."SAMP_REVENUE_F"."PAID_DAY_DT"
next drag the dim_paid_date physical table from the physical layer to the BMM layer.
go to the BMM logical joins diagram and ensure that a logical join exists from revenue to dim_paid_date
(if not, create a join from the revenue fact table to dim_paid_date).
rename the CALENDAR_DATE column of dim_paid_date in the BMM layer to "Bill Paid Date".
drag the DIM_PAID_DATE logical table from the BMM layer to the presentation layer,
then save and validate.
duplicate
a duplicate is a copy of an existing physical object.
to create a duplicate, right-click a physical table and select duplicate.
duplicate will create a new physical object with the same structure as the original object. But it cannot have the same name
as the original, so we need to provide a new name.
rename the LOOKUP_DSC logical column to "customer status"
and drag the customer status logical column to the customer folder of the presentation layer.
save the rpd and create a report to display manager name.
column mapping
column mapping defines the mapping of each logical column to a physical column.
content
content defines the actual granularity of the table.
logical joins
the relationship of logical tables in the BMM layer is called a logical join or BMM join.
in a logical (BMM) join we can specify only the cardinality and the join type (like inner, outer).
cardinality
cardinality is the relationship information from one table to another table:
one to one
one to many
many to one
many to many
when we model multiple logical tables and join them using logical joins, OBIEE will automatically recognise the
logical fact and logical dimensions based on the following information:
a single logical table which has many 'one to many' relationships from other logical tables will be recognised as the logical
fact.
example:
here the logical tables D1, D2, D3, D4 all have a one-to-many relationship onto the F5 table,
so F5 will be identified as the logical fact table.
Calculations
a) aggregation rules
normal aggregations
we can apply any summary function like SUM, AVG, FIRST, etc. on a measure column of the logical fact.
these columns are called measures in the logical fact table.
level based measure
a measure calculation (aggregation: sum, avg, etc.) that is always based on a particular level is called a level based measure.
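A level based measure can be sketched as follows: the measure stays pinned to one level (Year here) no matter what grain the report is at. The yearly revenue data is made up for illustration:

```python
# Detail rows at (year, month) grain; values are illustrative.
rows = [
    {"year": 2011, "month": 1, "revenue": 10},
    {"year": 2011, "month": 2, "revenue": 20},
    {"year": 2012, "month": 1, "revenue": 40},
]

# Level based measure: revenue aggregated at the Year level only,
# regardless of the other columns in the report.
year_totals = {}
for r in rows:
    year_totals[r["year"]] = year_totals.get(r["year"], 0) + r["revenue"]

# The report shows the normal measure next to the level based one:
# (year, month, revenue, revenue_at_year_level)
report = [(r["year"], r["month"], r["revenue"], year_totals[r["year"]])
          for r in rows]
for line in report:
    print(line)
# the last column repeats the year-level total on every detail row
```

this mirrors what OBIEE does when a measure is tied to a level of a dimension hierarchy: the detail rows still show their own values, while the level based column repeats the Year subtotal.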
Physical calculation
a calculation based on physical table columns is called a physical calculation.
Logical calculation
calculations based on the logical columns of existing logical tables are called logical calculations.
Dimensional hierarchies
defining hierarchies on single or multiple dimensional columns with proper levels is called a dimensional hierarchy.
Business Model
a business model is a folder under which you categorise the business
information related to a subject area.
generally we create one folder per subject area like HR, sales, inventory,
shipment, etc.
Logical columns
the columns which users need for analytics.
a logical column can be a direct map to physical columns or any calculated column.
each logical column can be mapped to physical columns via the column mapping
tab of the LTS.
BMM joins
there are two types of BMM joins:
1) complex joins 2) foreign key joins.
BMM joins or logical joins need to be specified in the logical diagram (BMM model diagram).
logical joins help to identify the fact and dimension tables based on the cardinality
set in the join conditions.
in these joins we do not bother about join conditions, but the cardinality and join type
(inner, outer) are important.
OBIEE uses these joins to understand which tables can participate in joins,
what the join path is, etc.
OBIEE always expects both a logical join and a physical join between two tables.
from the logical join it takes the join type, and the condition is taken from the physical join.
logical joins play a key role when we are designing multiple star schemas.
in OBIEE 11g these two join types have been combined into a single join where both can be specified
on a need basis.
Calculations
in the BMM layer we can perform two types of calculations:
1) physical
2) logical
as another method, we can create calculations using the wizard.
Aggregation Rules
Logical Calculation
a logical calculation uses the existing logical columns of the BMM layer.
we can use all the logical tables in the BMM layer and their columns in this calculation.
when we perform a logical calculation, OBIEE first applies the aggregation rule on the used
logical columns and then performs the calculation specified in the logical calculation.
sometimes this produces wrong results on measure columns.
example: Total amount = sum(amount_sold) * sum(quantity_sold)
this will lead to a wrong total.
Physical calculation
physical calculations use the physical tables of the physical layer.
we can use only the physical tables which are mapped to the logical table via a logical table source.
these calculations are performed directly on the database physical columns.
example: sum(amount_sold * quantity_sold)
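The difference between the two examples above can be verified with the worked numbers used later in these notes (two rows: qty 12 at 60, qty 1 at 250000):

```python
# Two fact rows as (quantity_sold, amount_sold).
rows = [(12, 60), (1, 250000)]

# Logical-style calculation: aggregate first, then multiply -> wrong total.
logical_total = sum(q for q, a in rows) * sum(a for q, a in rows)

# Physical calculation: multiply per row, then aggregate -> correct total.
physical_total = sum(q * a for q, a in rows)

print(logical_total)   # 13 * 250060 = 3250780 (wrong)
print(physical_total)  # 720 + 250000 = 250720 (correct)
```

this is why row-level derivations on measures should be written as physical calculations, so the multiplication happens before the aggregation rule is applied.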
Aggregate calculations on Dimension tables.
Header-Detail scenario in the BMM layer
when we have two fact tables at different granular levels, and we plan to design
the BMM layer to create reports combining measures from both tables, then we need to
model them as independent facts or LTSs in the BMM layer.
usually we cannot join two fact tables if they are at different granular levels.
when you join them it will lead to wrong results due to the one-to-many relationship from
one fact table to the other fact table
(the first fact table's data will get duplicated while producing results from the second fact table).
this problem scenario is called the header-detail or master-detail fact tables scenario.
to solve this problem, design the fact tables as independent LTSs (or facts) in the BMM layer.
also ensure that these two facts are not joined in the physical layer.
note: we can join two fact tables if they are at the same granular level,
or in other words if they have a 1-1 relationship.
granularity is nothing but the level of information you are storing in a table.
every table's granularity is at its primary key level.
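The fan-out described above can be reproduced with two tiny fact tables; the table and column names here are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Header fact: one row per order. Detail fact: one row per order line.
cur.execute("CREATE TABLE order_hdr (order_id INTEGER, freight INTEGER)")
cur.execute("CREATE TABLE order_dtl (order_id INTEGER, line_amt INTEGER)")
cur.execute("INSERT INTO order_hdr VALUES (1, 50)")
cur.executemany("INSERT INTO order_dtl VALUES (?, ?)", [(1, 100), (1, 200)])

# Correct total, taken from the header fact alone
# (the "independent LTS per fact" approach):
freight_ok = cur.execute("SELECT SUM(freight) FROM order_hdr").fetchone()[0]

# Direct join of the two facts: the header row fans out once per detail
# row, so the header measure is double-counted.
freight_bad = cur.execute("""
    SELECT SUM(h.freight)
    FROM order_hdr h JOIN order_dtl d ON d.order_id = h.order_id
""").fetchone()[0]

print(freight_ok, freight_bad)  # 50 100
```

the joined query doubles the freight because the single header row matches two detail rows; this is exactly the chasm-trap inflation the notes warn about.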
Example:
open the rpd which has the sales and other dimension tables designed.
drag the sales_ship fact table from the physical layer onto the sales logical table.
now observe that an additional LTS has been created under the sales logical table by the name sales_ship,
and also ship_amt and the other sales_ship columns are added as logical columns.
when you double-click the sales LTS you will see column mappings only for the sales table physical columns;
when you double-click the sales_ship LTS you will see column mappings only for the sales_ship columns.
restart the services and create a report with product desc, amount sold, ship amount.
notice that two separate queries are generated to produce the output.
here the two SQL results are combined at the OBIEE level.
go to System DSN.
click on the Add button,
then select Microsoft Excel Driver (.xls)
and click Finish.
Datasource name: obiee_excel
click on Select Workbook and select the file which was created and saved in the above step (obiee.xls).
sample data (QTY and AMOUNT per customer/product):
C1  P1  12  60
C1  P2  1   250000
sums: QTY = 13, AMOUNT = 250060
sum(QTY) * sum(AMOUNT) = 13 * 250060 = 3250780
Aggregation rule
we can apply an aggregate function like sum/avg/first etc. on a measure column of the logical fact.
all the columns which are displayed under the BMM layer are called logical columns.
in addition to the existing columns dragged from the physical layer to the BMM layer, we can create new logical columns
in any logical table.
when you create a new logical column, this column will not be mapped to any physical column;
it will display a blank mapping in the column mapping tab.
to provide content to a new logical column we perform any one of the following activities:
Default Value
1) provide a default value like 'Apple' or '1999' etc.
Physical calculations
2) provide a derivation logic directly using physical columns.
this type of calculation is called a physical calculation.
when we are performing physical calculations we can use the columns of the physical tables which are mapped to the
logical table in the table mapping.
logical calculations can use any existing logical column of any logical table in derivations
(whereas that is not possible in physical calculations).
physical calculation example: sum(SAMP_REVENUE_F.COST_FIXED * SAMP_REVENUE_F.UNITS)
logical-style (aggregate first) example: sum(SAMP_REVENUE_F.COST_FIXED) * SUM(SAMP_REVENUE_F.UNITS)
sample formulas
date
year(Current_Date)
TimeStampAdd(SQL_TSI_YEAR, 1, Current_Date)
adds or subtracts from a date as per the period mentioned in the SQL_TSI keyword.
TimeStampDiff(SQL_TSI_DAY, Current_Date, "apple".""."BISAMPLE"."SAMP_CUSTOMERS_D"."BIRTH_DT")
finds the difference between two dates as per the SQL_TSI keyword.
the following are the valid SQL_TSI keywords:
SQL_TSI_DAY
SQL_TSI_WEEK
SQL_TSI_MONTH
SQL_TSI_QUARTER
SQL_TSI_YEAR
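The behaviour of TimeStampAdd/TimeStampDiff can be sketched in Python for the day-based keywords; the function names below are hypothetical stand-ins, not OBIEE APIs, and month/quarter/year arithmetic is omitted because those intervals are not a fixed number of days:

```python
from datetime import date, timedelta

# Days per interval, for the fixed-length SQL_TSI keywords only.
TSI_DAYS = {"SQL_TSI_DAY": 1, "SQL_TSI_WEEK": 7}

def timestamp_add(interval, n, d):
    # Like TimeStampAdd(interval, n, d): shift d by n intervals
    # (use a negative n to subtract).
    return d + timedelta(days=TSI_DAYS[interval] * n)

def timestamp_diff(interval, d1, d2):
    # Like TimeStampDiff(interval, d1, d2): whole intervals from d1 to d2.
    return (d2 - d1).days // TSI_DAYS[interval]

d = date(2012, 1, 1)
print(timestamp_add("SQL_TSI_WEEK", 2, d))                 # 2012-01-15
print(timestamp_diff("SQL_TSI_DAY", d, date(2012, 2, 1)))  # 31
```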
SELECT COUNT(DISTINCT ORDER_NUMBER) no_orders
,COUNT(CASE WHEN ORDER_STATUS='6-Cancelled' THEN ORDER_NUMBER ELSE NULL END) NO_CANCELLED_ORD
,COUNT(CASE WHEN ORDER_STATUS='1-Booked' THEN ORDER_NUMBER ELSE NULL END) NO_booked_ORD
,COUNT(CASE WHEN ORDER_STATUS in ('1-Booked','6-Cancelled') THEN null else ORDER_NUMBER END) NO_others_ORD
FROM SAMP_REVENUE_F
12 * 60 = 720
1 * 250000 = 250000
sum(QTY * AMOUNT) = 720 + 250000 = 250720
Hierarchy
a hierarchy is a logical arrangement of all the dimensional attributes from the top level to the lower level.
the place of a level in a hierarchy is called its position.
we can create the following types of hierarchies.
first identify the levels and their positions for each logical dimension table in the BMM layer.
example:
for the product dimension:
Brand==>LOB==>Type==>Desc
for calendar date:
Year==>Half Year==>Quarter==>Month==>Week==>Date
right-click "Grand total", select create child level and provide the name "Brand".
right-click "Brand" and create a child level named "LOB".
right-click "LOB" and create a child level named "Type".
right-click "Type" and create a child level named "Prod details".
now drag the brand, brand_key columns from the product logical dimension table onto
the "Brand" level under "H Product".
similarly drag LOB, LOB_KEY to the LOB level,
TYPE, TYPE_KEY onto the TYPE level,
and the prod_key, prod_dsc columns onto the "Prod details" level.
understanding hierarchies
level key
the level key identifies the granularity of the level among multiple attributes.
use for display
a column which is selected as 'use for display' will be displayed by default as the result of the drill
technique.
Drill techniques
1) Drill down
navigating from a higher level to a lower level within a dimensional hierarchy is called drill down.
2) Drill up
navigating from a lower level to a higher level within a dimensional hierarchy is called drill up.
3) Drill across
navigating from one dimensional hierarchy's level to another dimensional hierarchy's level is
called drill across.
4) Drill through
navigating from one hierarchy to other non-related content is called drill through.
we can design drill down, up and across in the rpd, but drill through can only be designed in a report (analysis).
when we are analysing data with a drill technique, the rpd automatically identifies the position of every column
selected in the report and takes us to the appropriate level.
when it takes us to the next level it always displays in the report the column which is selected as 'use for
display'.
the drill navigation path always goes via the column which is selected as key in the level.
Grand total: the grand total level can be used to identify the top-level position in a dimensional hierarchy.
drill up technique
double-click the "Prod details" level and add the TYPE level under the preferred drill path.
double-click the "Type" level and add the LOB level under the preferred drill path.
double-click "LOB" and add the Brand level under the preferred drill path.
Drill across
create a new dimensional hierarchy on the SAMP_TIME_DAY_D table:
YEAR=>H YEAR=>QUARTER=>MONTH=>WEEK=>DAY.
add the relevant columns to each level.
this will enable navigating to the year level from the prod details level, which is a drill across as we are
navigating from a level of one dimensional hierarchy to another.
drag the column to the presentation layer, save, and test the report.
in order to create a level based measure at the overall (grand total) level, first ensure that
the Grand total level is created under the hierarchies.
example row showing calendar vs finance hierarchies for the same date:
                  CALENDAR                                  FINANCE
DATE      DAY     WEEK  MONTH  QUARTER  HF YEAR  YEAR       WEEK  MONTH  QUARTER
1-Jan-12  MONDAY  W1    1      1        1        2012       W37   10     4
(diagram: project1, project2, project3 with different numbers of child levels, illustrating an unbalanced hierarchy)
Ragged hierarchy
skipped hierarchies
if a hierarchy has skipped levels compared to the actual desired levels then it is called a skipped level hierarchy.
example: above.
in a skipped level hierarchy the starting and ending levels may be the same, but in-between levels will be skipped.
ragged/unbalanced hierarchies
if a hierarchy's levels do not exactly match the number of levels of the regular hierarchy then it is called a ragged hierarchy.
in these hierarchies the number of child levels under a parent is not the same compared to the regular hierarchy.
STEPS TO CREATE
in the bisample schema we have the table SAMP_PRODUCTS_DR which contains both ragged and skipped hierarchy data.
create a new dimensional hierarchy (select the level based hierarchy option) and name it "H dim Prod ragged".
create levels in the following way:
Grand total
Brand
add the brand column of SAMP_PRODUCTS_DR
create a key on the brand column and select use for display
LOB
add the LOB column of SAMP_PRODUCTS_DR
create a key on the LOB column and select use for display
Type
add the Type column of SAMP_PRODUCTS_DR
create a key on the type column and select use for display
Prod desc
add the prod_dsc, prod_key columns
create a key on prod_dsc and select use for display.
now double-click H dim Prod ragged and ensure that the ragged and skipped levels options are selected.
we can create three time series functions (Ago, ToDate, PeriodRolling) in OBIEE 11g logical calculations.
(Ago and ToDate are old functions; PeriodRolling was introduced in 11g.)
when we create a dimension hierarchy as a time series, we must select a chronological key
as mandatory.
chronological key
the chronological key helps identify the granularities of the levels and identify the exact levels and
the before and after levels.
generally we select the lowest granular column of the hierarchy as the chronological key.
syntax
Ago("BISAMPLE"."SAMP_REVENUE_F"."Actual REVENUE", "BISAMPLE"."dim Hierachi Cal Date"."Year", 1)
here the first parameter is the measure for which we need to get the previous period's data.
the second parameter is the time series hierarchy level which tells for which level we need to get the previous data.
the third parameter is how many periods back we want the data.
ToDate
ToDate calculates running measure values.
example: in a report we want to display, for a given record, the sum of the values of all records above it.
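The effect of ToDate and Ago can be sketched over a small yearly revenue series; the data and helper names here are made up for illustration, not OBIEE APIs:

```python
# Yearly revenue series as (period, value), in chronological order.
revenue_by_year = [(2010, 100), (2011, 150), (2012, 180)]

def todate(series):
    # Like ToDate: running total from the first period to the current one.
    out, running = [], 0
    for period, value in series:
        running += value
        out.append((period, running))
    return out

def ago(series, periods_back):
    # Like Ago: the measure value from n periods earlier
    # (None when there is no earlier period available).
    values = [v for _, v in series]
    return [(p, values[i - periods_back] if i >= periods_back else None)
            for i, (p, _) in enumerate(series)]

print(todate(revenue_by_year))  # [(2010, 100), (2011, 250), (2012, 430)]
print(ago(revenue_by_year, 1))  # [(2010, None), (2011, 100), (2012, 150)]
```

this is also why the chronological key matters: both functions only make sense when the periods have a well-defined order.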
background
we can join more than one physical table to a single LTS of a logical table. whenever we drag
a physical table to the BMM layer, one logical table is created along with one LTS.
here each LTS points to the physical table which you dragged.
in some scenarios we may have to link multiple physical tables to a single logical table.
this can be done in two ways:
a) convert snowflake tables of the physical layer into a logical star schema in the BMM layer.
b) when we need to calculate/write physical derivations on columns from more than one physical table.
steps to create
create a measure to show the revenue of customers whose marital status is SINGLE.
1) drag the SAMP_CUSTOMERS_D physical table from the physical layer onto the LTS of SAMP_REVENUE_F.
(note that we need to drag this only onto the LTS. dragging onto other parts will create another LTS.)
now you can notice that a new physical table is added under the table map section of the LTS.
to see this, double-click the LTS and verify the two tables (SAMP_REVENUE_F, SAMP_CUSTOMERS_D)
and a default inner join (change the join type to outer if needed).
2) now go to the column mapping tab and click the expression editor of SINGLE_REVENUE.
now you can see the two physical tables added in the physical tables section (left side).
3) save the expression and apply the sum aggregation rule to the SINGLE_REVENUE logical column
(aggregation tab => sum).
4) drag the SINGLE_REVENUE measure column from the BMM layer to the presentation layer.
alternate:
next drag the area, city, region columns of the physical layer address dimension to the logical customer dimension.
save the rpd.
this method can be used a) to join two fact tables which are at different granular levels,
b) for designing aggregate tables.
when we have two fact tables at different granular levels then we should not join them directly in the physical
layer.
this will lead to a data issue, and this issue is popularly known as the header-detail problem in OBIEE.
it is also called a chasm trap in other tools.
issue: when we have two fact tables in a one-to-many relationship and we join them directly,
then the first table's data will get replicated and the final output of the first table's measures will produce doubled
amounts.
to solve this we need to model each fact table in both the physical layer and the BMM layer as a star schema.
join each fact table with its respective dimension tables.
mostly these dimension tables are conformed dimensions.
steps to create
here we will consider SAMP_QUOTAS_F as an additional fact table which has a different granularity compared
to the existing fact table SAMP_REVENUE_F.
import the SAMP_QUOTAS_F table into the physical layer and join it with the product dimension table.
next drag the SAMP_QUOTAS_F physical table onto the SAMP_REVENUE_F logical fact table.
now all its columns (like VALUE) will be added to the REVENUE logical fact table,
and you can also notice an additional LTS created below the SAMP_REVENUE_F LTS.
now double-click the VALUE column and change the aggregation rule to SUM.
save the rpd and create a sample report with the required columns selected.
check the query log under
D:\OBIEE116\instances\instance1\diagnostics\logs\OracleBIServerComponent\coreapplication_obis1
you will see two queries generated, one for each fact table, i.e. revenue and quotas.
concept
when we have two logical stars, each star with a different fact table,
and you select measures from both stars along with common dimension attributes,
then OBIEE generates two separate queries, one for each fact table,
and both outputs are combined at the BI server level to show them in a single report.
note:
when you select two measures from different logical stars without a common dimension attribute,
it will lead to a metadata error or abnormal results.
Designing Aggregate tables
aggregate tables are summarised tables which are created for faster retrieval of data.
aggregate fact tables contain pre-calculated summarised data of the original fact tables.
when we have data in both fact and aggregate tables, queries from the aggregate table always produce reports
faster.
when we are building aggregate tables we build the data at a much higher granular level so as not to
have too many records in the aggregate tables.
example:
the REVENUE fact contains data at all dimensional levels, whereas the revenue aggregate table in the
sample schema contains data only at the product and emp level.
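The idea of an aggregate table can be sketched as follows: pre-summarise the detail fact at a higher grain, then answer product-level questions from the smaller table. The table and column names here are illustrative, not the SampleApp schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Detail fact at (product, day) grain.
cur.execute("CREATE TABLE revenue_f (product TEXT, day TEXT, revenue INTEGER)")
cur.executemany("INSERT INTO revenue_f VALUES (?, ?, ?)", [
    ("p1", "d1", 10), ("p1", "d2", 20), ("p2", "d1", 30),
])

# Build the aggregate at the higher (product) grain, as the notes describe.
cur.execute("""
    CREATE TABLE revenue_fa1 AS
    SELECT product, SUM(revenue) AS revenue
    FROM revenue_f GROUP BY product
""")

# A product-level report can be answered from the smaller aggregate table
# and returns the same numbers as summarising the detail fact.
agg = cur.execute(
    "SELECT product, revenue FROM revenue_fa1 ORDER BY product").fetchall()
det = cur.execute("""
    SELECT product, SUM(revenue) FROM revenue_f
    GROUP BY product ORDER BY product
""").fetchall()
print(agg == det, agg)
```

when the LTS content levels are set as described below, the BI server performs this source selection automatically (aggregate navigation), picking whichever table satisfies the report's grain with the fewest rows.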
now ensure that the revenue and units measure columns of the existing revenue logical fact table are named
REVENUE and UNITS (if not, rename them to REVENUE and UNITS).
now drag the SAMP_REVENUE_FA1 aggregate table from the physical layer onto the SAMP_REVENUE_F logical fact table.
this will create one additional LTS under the SAMP_REVENUE_F logical fact by the name SAMP_REVENUE_FA1.
also notice that the REVENUE, UNITS logical columns will be pointing to both the revenue and aggregate facts.
this can be noticed under the column mapping tab of each LTS (aggregate and fact).
for the aggregate LTS, go to the content tab and set the levels at which the aggregate holds data;
this information indicates to OBIEE that the aggregate table is at the product hierarchy detail granular level.
for the fact table LTS, go to the content tab and either leave all options blank
or select all dimensional hierarchies with their lowest level granular columns.
this indicates to OBIEE that the fact table is at the most detailed granular level for all dimensions.
save the rpd and create a report with
the brand and revenue columns (report1) and a second report with a column below the aggregate's grain (report2).
in both reports we can see data, but OBIEE produces the data for report1 from
the aggregate table and for report2 it generates the data from the fact table.
Security
Security in BI 11g
Security can be broadly classified into two parts:
authentication
authorisation
In OBIEE 10g there used to be a separate security configuration for each BI component, such as BI Server, BI Publisher and the Scheduler.
In order to integrate the security configuration, OBIEE 11g introduced new changes:
all authentication in 11g can be configured at the WebLogic application server;
all authorisation can be configured in Fusion Middleware, the repository Identity Manager and the web catalog.
Authentication
We can implement authentication in the following ways:
default LDAP authentication from WebLogic
external LDAP authentication using external LDAP servers
external table authentication
single sign-on (SSO)
LDAP
Lightweight Directory Access Protocol.
An LDAP server is a directory server where we can create users and groups and configure security for each group/user in terms
of access, restrictions, enable/disable, etc.
Default LDAP authentication from WebLogic
By default WebLogic comes with a preconfigured LDAP (the DefaultAuthenticator) to implement security.
Create new users and groups there for new logins.
Below are default groups created by DefaultAuthenticator
AdminChannelUsers
Administrators
AppTesters
BIAdministrators
BIAuthors
BIConsumers
CrossDomainConnectors
Deployers
Monitors
Operators
After creating users, all the users can log in to the presentation server. We need to set their roles later in Fusion Middleware Control.
Authorisation
Authorisation is the process of granting the relevant permissions, privileges, roles, etc. which are applicable to a user or group.
Example: validating that user1's user name and entered password are correct is called
authentication;
after logging in to the server, determining which reports he has permission to view and analyse is
authorisation.
Authorisation can be classified into three parts:
setting up permissions
object level security
data level security
Setting up permissions
We can set up permissions for the users which have been created in WebLogic.
To set up permissions, Fusion Middleware has the default roles listed below:
BISystem: system level permissions
BIAdministrator: administrator level permissions
BIAuthor: permissions to create reports/dashboards, etc.
BIConsumer: read-only access
Role
A role is a set of permissions (a responsibility) which we can define and grant to any user or group.
These roles have to be created in Enterprise Manager and granted to the users/groups which have been created in
WebLogic.
Configuring permissions/access/restrictions for each role can be done in the OBIEE repository Identity Manager.
We can also create user-defined roles and add users to them to group users functionally.
All the roles which have been created in Enterprise Manager are automatically visible in the repository, and there we can
set access restrictions such as data level security and object level security.
Repository variables
Repository variables store variable values at the BI server level.
Repository variables are initialised at the BI server level and provide the same value to the
entire user community.
Static variables hold static information, which means a static variable's value will not
change while the rpd is running.
Good examples of static variables are OrgName and the Initial/Full load flag of a data warehouse.
Dynamic variables:
Dynamic variable values tend to change at regular intervals.
Examples: current date, last refresh date, current month, current week, last month, last week, etc.
Initialization block:
An initialization block is a component which initialises the values of dynamic variables.
To initialise the variables we write a SQL statement in the init block;
the output columns of the SQL are mapped to the repository dynamic variables.
We can set the interval at which the init block has to execute.
When the BI server starts, the init block is executed automatically, and from then on it keeps
executing at the given interval.
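For example, an init block feeding two dynamic variables such as CURRENT_DATE and LAST_REFRESH_DATE might use a query like the one below; the ETL_CONTROL table and its column are assumed for illustration. Each selected column is mapped, in order, to one dynamic variable:

```sql
-- Hypothetical init block query: the first column feeds CURRENT_DATE,
-- the second feeds LAST_REFRESH_DATE (ETL_CONTROL is an assumed table).
SELECT SYSDATE, MAX(LOAD_END_TIME)
FROM ETL_CONTROL
```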
Session variables
Session variables are of two types:
system and non-system.
All session variables should be associated with a session initialisation block.
System
System variables are built-in variables; we can use them without creating them. Below is the list of system variables:
USER
GROUP
LOGLEVEL
ROLES
PERMISSIONS
USERGUID
DISABLE_CACHE_SEED
DISABLE_CACHE_HIT
ROLEGUID
SELECTPHYSICAL
USERLOCALE
Non-system
Non-system variables are user-defined variables.
Session initialisation block
A session initialisation block executes when a user logs on; an init block used for authentication is marked as required for logon.
When several init blocks are scheduled to run at the same time, we need to specify the order (precedence) in which they execute.
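A session init block used for external table authentication typically binds the logon credentials through the :USER and :PASSWORD placeholders. The USERS table and its columns here are assumed for illustration:

```sql
-- Hypothetical authentication query for a session init block.
-- :USER and :PASSWORD are replaced by the credentials typed at logon;
-- the selected columns populate the USER and GROUP session variables.
SELECT USERNAME, GROUP_NAME
FROM USERS
WHERE USERNAME = ':USER'
  AND PASSWORD = ':PASSWORD'
```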
BI Server cache:
When we enable caching for the BI server, all report outputs are stored in the BI server cache.
These stored outputs are called cache entries.
Cache storage
Cache hit detection
Cache manager
Setting up cache
To configure caching for the BI server, first set the cache ENABLE property to YES
in the NQSConfig.INI file.
This can be done either from the Enterprise Manager console or in the NQSConfig.INI file directly.
In addition to this we also need to configure the cache storage directory,
the maximum number of entries, the maximum entry size, etc.
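The relevant settings live in the [CACHE] section of NQSConfig.INI; the sketch below uses an assumed path and illustrative limits, not defaults taken from these notes:

```ini
# Hypothetical [CACHE] section of NQSConfig.INI (path and limits assumed)
[CACHE]
ENABLE                   = YES;
DATA_STORAGE_PATHS       = "C:\OBIEE\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;
MAX_CACHE_ENTRY_SIZE     = 20 MB;
MAX_CACHE_ENTRIES        = 1000;
```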
Cache storage
The cache storage is the location/path where all BI server cache entries are stored.
If the BI server is clustered, we need to provide a clustered (shared) location.
Cache hit detection
Cache hit detection is the process (algorithm) of identifying when a report can be served from the cache and when
its output has to be fetched from the database directly.
When a cache entry is created, OBIEE stores the metadata of that report in the cache manager;
this metadata includes the logical SQL of the report, its size, when it was last used, the user, etc.
Cache entries are created for each user.
When a report is executed, the cache hit detection process first takes the logical SQL of the report and checks whether
any cache entries are available for that logical SQL.
If data for the same logical SQL, or a superset of that logical SQL's data, is available in the cache, the
BI server automatically directs the report request to read from the cache entries.
If you enable cache roll-up aggregation, new cache entries will be created for reports which
are answered from existing cache entries.
Cache manager
The cache manager is a tool in the Administration Tool which helps to manage and monitor the cache through a GUI.
Cache purging
Cache purging is the process of deleting cache entries.
This can be done in the following ways:
Manual methods
using the cache manager
executing nqcmd purge commands manually (for example, call SAPurgeAllCache())
Automatic methods
specify the cache persistence (expiry) time for every table in the physical layer
using event polling
schedule an nqcmd purge command to run at given intervals
Create an event polling table using the table structure from C:\OracleBI\server\Schema.
From the ETL side, as soon as any BI table is updated, we need to insert a row into the event polling table.
example:
The BI server watches this table at the regular interval specified as the polling frequency
and clears the cache entries for all the tables which have changed during that period.
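The ETL-side update of the event polling table can be sketched as below. The table name S_NQ_EPT follows the standard OBIEE event polling schema script, and the database/catalog/schema/table values are illustrative:

```sql
-- Record that the sh.customers physical table changed, so its cache
-- entries can be purged on the next polling cycle.
INSERT INTO S_NQ_EPT
  (UPDATE_TYPE, UPDATE_TS, DATABASE_NAME, CATALOG_NAME,
   SCHEMA_NAME, TABLE_NAME, OTHER_RESERVED)
VALUES
  (1, SYSDATE, 'apple', 'SalesAgg', 'sh', 'customers', NULL);
COMMIT;
```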
Presentation server cache
Every time a report is viewed by a user, the report data is kept in the presentation server
for a short period.
This cache is called the presentation server cache.
By default this cache is maintained for 2 minutes and cleaned automatically after 2 minutes.
We can configure the presentation server not to cache by changing the cache
properties in the instanceconfig.xml file.
Usage Tracking
MUDE
Multi User development Environment
Architecture
Parent child
Source employee data:

EMPNO  ENAME            POSITION(DESGN)  HIRED_DT   MGR_ID  TYPE
1      Fred Webster     6                2-Apr-08   5       TYPE 1
2      Aurelio Miranda  3                9-Sep-07   5       TYPE 2
3      XX               20
4      YY               69
5      Jonny Harston    7                22-Jul-04  6       TYPE 1
6      Jack Benetti     8                26-Nov-05  10      TYPE 1
7      QQ               66
8      NN               34
9      PP               67
10     Roger Wray       6                5-Jan-09   12      TYPE 3
10G SOLUTION
In 10g the parent-child data had to be flattened into a level-based table, EMP_HIERARCHIES:

EMPNO  ENAME         POSITION  LEVEL1_CODE  LEVEL1_DESC    LEVEL1_POSITION  LEVEL2_CODE  LEVEL2_DESC   LEVEL2_POSITION  LEVEL3_CODE  LEVEL3_DESC  LEVEL3_POSITION
1      Fred Webster  6         5            Jonny Harston  7                6            Jack Benetti  8                10           Roger Wray   6

11g solution
Import the source tables into the physical layer:
SELECT * FROM SAMP_EMPL_D_VH
select * from SAMP_EMPL_POSTN_D
SELECT * FROM samp_empl_parent_child_map

The parent-child relationship table (member key, ancestor key, distance, is leaf) is populated by the script
the Administration Tool generates, reassembled below:

declare
  v_max_depth integer;
  v_stmt varchar2(32000);
  i integer;
begin
  select max(level) into v_max_depth
  from SAMP_EMPL_D_VH
  connect by prior EMPLOYEE_KEY = MGR_ID
  start with MGR_ID is null;

  v_stmt := 'insert into BISAMPLE.PARENT_CHILD_TABLE (MEMBER_KEY, ANCESTOR_KEY, DISTANCE, IS_LEAF)' || chr(10)
    || 'select EMPLOYEE_KEY as member_key, null, null, 0 from SAMP_EMPL_D_VH where MGR_ID is null' || chr(10)
    || 'union all' || chr(10)
    || 'select' || chr(10)
    || '  member_key,' || chr(10)
    || '  replace(replace(ancestor_key, ''\p'', ''|''), ''\\'', ''\'') as ancestor_key,' || chr(10)
    || '  case when depth is null then 0' || chr(10)
    || '       else max(depth) over (partition by member_key) - depth + 1' || chr(10)
    || '  end as distance,' || chr(10)
    || '  is_leaf' || chr(10)
    || 'from' || chr(10)
    || '(' || chr(10)
    || '  select' || chr(10)
    || '    member_key,' || chr(10)
    || '    depth,' || chr(10)
    || '    case' || chr(10)
    || '      when depth is null then '''' || member_key' || chr(10)
    || '      when instr(hier_path, ''|'', 1, depth + 1) = 0 then null' || chr(10)
    || '      else substr(hier_path, instr(hier_path, ''|'', 1, depth) + 1, instr(hier_path, ''|'', 1, depth + 1) - instr(hier_path, ''|'', 1, depth) - 1)' || chr(10)
    || '    end ancestor_key,' || chr(10)
    || '    is_leaf' || chr(10)
    || '  from' || chr(10)
    || '  (' || chr(10)
    || '    select EMPLOYEE_KEY as member_key, MGR_ID as ancestor_key, sys_connect_by_path(replace(replace(EMPLOYEE_KEY, ''\'', ''\\''), ''|'', ''\p''), ''|'') as hier_path,' || chr(10)
    || '      case when EMPLOYEE_KEY in (select MGR_ID from SAMP_EMPL_D_VH) then 0 else 1 end as IS_LEAF' || chr(10)
    || '    from SAMP_EMPL_D_VH' || chr(10)
    || '    connect by prior EMPLOYEE_KEY = MGR_ID' || chr(10)
    || '    start with MGR_ID is null' || chr(10)
    || '  ),' || chr(10)
    || '  (' || chr(10)
    || '    select null as depth from dual' || chr(10);
  for i in 1..v_max_depth - 1 loop
    v_stmt := v_stmt || '    union all select ' || i || ' from dual' || chr(10);
  end loop;
  v_stmt := v_stmt || '  )' || chr(10)
    || ')' || chr(10)
    || 'where ancestor_key is not null' || chr(10);
  execute immediate v_stmt;
end;
/
Lookup table
A lookup function retrieves a value from a lookup table based on key columns.
Dense lookup: every key is expected to exist in the lookup table.
Lookup(DENSE "HR Analysis"."DEPT_LKP"."DNAME", "HR Analysis"."EMP"."DEPTNO")
Sparse lookup: keys may be missing from the lookup table, so a default value is supplied.
Lookup(SPARSE "HR Analysis"."DEPT_LKP"."DNAME", 'default', "HR Analysis"."EMP"."DEPTNO")
Aggregate persistence
The Aggregate Persistence wizard generates a script like the following, which can be run through nqcmd
to build the aggregate table and map it into the rpd automatically:

create aggregates
ag_SAMP_REVENUE_F
for "Revenue Analysis"."SAMP_REVENUE_F"("Total Fixed Cost")
at levels ("Revenue Analysis"."H Dim date"."Year")
using connection pool "apple"."Connection Pool"
in "apple".."BISAMPLE";