
SAP BW Interview Questions

What is ODS?
ODS stands for Operational Data Store. The ODS is a BW architectural component that sits between the PSA (Persistent Staging Area) and InfoCubes and that allows BEx (Business Explorer) reporting.
It is not based on the star schema and is used primarily for detail reporting, rather than for dimensional analysis. ODS objects do not aggregate data as InfoCubes do. Data are loaded into an ODS object by inserting new records, updating existing records, or deleting old records, as specified by the RECORDMODE value.
1. How much time does it take to extract 1 million records from an InfoCube?
2. How much time does it take to load 1 million records into an InfoCube?
3. What are the four ASAP methodologies?
4. How do you measure the size of an InfoCube?
5. Difference between an InfoCube and an ODS?
6. Difference between display attributes and navigational attributes?
1. Ans: It depends; if you have complex coding in the update rules it will take longer, otherwise it will take less than 30 minutes.
3. Ans:
The standard ASAP roadmap phases are:
Project Preparation
Business Blueprint
Realization
Final Preparation
Go-Live & Support
4. Ans:
In number of records.
5. Ans:
An InfoCube is structured as a star schema (extended star schema) in which a fact table is surrounded by different dimension tables that connect to SID tables. Data-wise, you will have aggregated data in the cubes.
An ODS is a flat structure (flat table) with no star schema concept, and it holds granular data (detailed level).
6. Ans:
A display attribute is one which is used only for display purposes in the report, whereas a navigational attribute is used for drilling down in the report. We don't need to maintain the navigational attribute in the cube as a characteristic (that is the advantage) in order to drill down.
*****
Q1. SOME DATA IS UPLOADED TWICE INTO THE INFOCUBE. HOW TO CORRECT IT?
Ans: But how is it possible? If you loaded it manually twice, then you can delete it by request.
[Use the delta upload method]
Q2. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. An ODS is nothing but a table.
Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.
Q4. BRIEF THE DATAFLOW IN BW.
Data flows from transactional system to analytical system(BW).

DataSources on the transactional system need to be replicated on the BW side and attached to an InfoSource and update rules respectively.
Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY NOT IN TRANSFER RULES?
Currency conversion is usually specific to a data target, and update rules are defined per data target, whereas transfer rules apply to every data target fed by the InfoSource.
Q6. WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
Full and delta.
Q7. AS WE USE SBWNN, SBIW1, SBIW2 FOR DELTA UPDATE IN LIS, WHAT IS THE PROCEDURE IN LO COCKPIT?
There is no LIS in LO Cockpit. We will have DataSources that can be maintained (append fields). Refer to the white paper on LO Cockpit extraction.
Q8. SIGNIFICANCE OF ODS.
It holds granular data.
Q9. WHERE THE PSA DATA IS STORED?
In PSA table.
Q10.WHAT IS DATA SIZE?
The volume of data one data target holds(in no.of records)
Q11. DIFFERENT TYPES OF INFOCUBES.
Basic, Transactional and Virtual InfoCubes (remote, SAP remote and multi)
Q12. INFOSET QUERY.
Can be made of ODS objects and characteristic InfoObjects.
Q13. IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
In R/3 or in BW??.2 in R/3 and 2 in BW
Q14. ROUTINES?
Exist In the info object,transfer routines,update routines and start routine
Q15. BRIEF SOME STRUCTURES USED IN BEX.
Rows and Columns,you can create structures.
Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
Variable with default entry
Replacement path
SAP exit
Customer exit
Authorization
Q17. HOW MANY LEVELS YOU CAN GO IN REPORTING?
You can drill down to any level you want using Nav attributes and jump targets
Q18. WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.
Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
Refer to the documentation.
Q20. IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED.
Nope

Q21. WHAT IS THE SIGNIFICANCE OF KPI'S?


KPIs indicate the performance of a company.These are key figures
Q22. AFTER THE DATA EXTRACTION WHAT IS THE IMAGE POSITION.
After image(correct me if I am wrong)
Q23. REPORTING AND RESTRICTIONS.
Refer to the documentation.
Q24. TOOLS USED FOR PERFORMANCE TUNING.
ST*, number ranges, deleting indexes before load, etc.
Q25. PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA LOADS DAILY?
There should be some tool to run the job daily (SM37 jobs).
Q26. AUTHORIZATIONS.
Profile generator[PFCG]
Q27. WEB REPORTING.
Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT BE AN INFOPROVIDER?
Of course
Q29. PROCEDURES OF REPORTING ON MULTICUBES.
Refer to the help. What are you expecting? A MultiCube works on a union condition.
Q30. EXPLAIN TRANSPORTATION OF OBJECTS?
Dev ---> Q and Dev ---> P

Daily Tasks in Support Role and Infopackage Failures


1. Why are there frequent load failures during extractions, and how do you analyze them?
If these failures are related to data, there might be data inconsistency in the source system, even though you handle it properly in the transfer rules. You can monitor these issues in T-code RSMO and in the PSA (failed records), and update from there.
If you are talking about the whole extraction process, there might be issues of work process scheduling and IDoc transfer to the target system from the source system. These issues can be re-initiated by canceling that specific data load (usually by changing the request color from Yellow to Red in RSMO) and restarting the extraction.
2. Can anyone explain briefly about 0RECORDMODE in ODS?
0RECORDMODE is an SAP-delivered InfoObject that is added to the ODS object on activation. Using it, the ODS is updated during delta loads. Typical values include '' (after image, which overwrites the record), 'X' (before image), 'D' (delete) and 'R' (reverse); these control whether records are overwritten, deleted or cancelled during the delta load.
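The record-mode mechanism above can be sketched conceptually. This is not ABAP and not SAP's implementation; it is a minimal Python model, assuming a simplified ODS keyed by document number and a reduced set of modes ('' = after image, 'D' = delete, 'R' = reverse). Field names are invented for illustration.

```python
# Conceptual sketch of how an ODS applies delta records based on a
# record-mode flag. The field names and the simplified mode set are
# assumptions for illustration, not SAP's actual implementation.

def apply_delta(ods, delta_records):
    """Apply delta records to an ODS modeled as {doc_number: amount}."""
    for rec in delta_records:
        key, mode, amount = rec["doc"], rec["recordmode"], rec["amount"]
        if mode == "":          # after image: overwrite (or insert) the record
            ods[key] = amount
        elif mode == "D":       # delete: remove the record entirely
            ods.pop(key, None)
        elif mode == "R":       # reverse: cancel by subtracting the posted value
            ods[key] = ods.get(key, 0) - amount
    return ods

ods = {"1000": 50}
delta = [
    {"doc": "1000", "recordmode": "", "amount": 80},   # changed document
    {"doc": "2000", "recordmode": "", "amount": 30},   # new document
]
print(apply_delta(ods, delta))   # {'1000': 80, '2000': 30}
```

The key point the sketch shows is that a delta load is not blindly additive: the record mode tells the ODS whether to overwrite, delete or cancel.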
3. What is reconciliation in BW? What is the procedure to do reconciliation?
Reconciliation is the process of comparing the data after it is transferred to the BW system with the source system. If the data comes from a single table, you can check it with SE16; if the DataSource is a standard DataSource fed from many tables, ask the R/3 consultant to run a report on the same selections, export the data to an Excel sheet and reconcile it with the data in BW. If you are familiar with the R/3 reports yourself, you need not depend on the R/3 consultant (it is better to know which reports to run to check the data).
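The reconciliation idea above amounts to aggregating both data sets by the same characteristics and comparing the key-figure totals. A minimal sketch, assuming invented row layouts; in practice the source rows would come from an R/3 report or an SE16 download and the target rows from the BW data target:

```python
# Hypothetical reconciliation sketch: sum a key figure per characteristic
# value on each side and report the keys whose totals differ. All field
# names and data are invented for illustration.

from collections import defaultdict

def totals_by_key(rows, key_field, kf_field):
    out = defaultdict(float)
    for row in rows:
        out[row[key_field]] += row[kf_field]
    return dict(out)

def reconcile(r3_rows, bw_rows, key_field="customer", kf_field="amount"):
    """Return {key: (r3_total, bw_total)} for keys whose totals differ."""
    r3 = totals_by_key(r3_rows, key_field, kf_field)
    bw = totals_by_key(bw_rows, key_field, kf_field)
    return {k: (r3.get(k, 0), bw.get(k, 0))
            for k in set(r3) | set(bw)
            if r3.get(k, 0) != bw.get(k, 0)}

r3 = [{"customer": "C1", "amount": 100}, {"customer": "C2", "amount": 50}]
bw = [{"customer": "C1", "amount": 100}]                  # C2 missing in BW
print(reconcile(r3, bw))   # {'C2': (50, 0)}
```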
4. What are the daily tasks we do in production support? How many times do we extract the data, and at what times?
It depends. Data load timings are in the range of 30 minutes to 8 hours. The time depends on the number of records and the kind of transfer rules you have provided. If the transfer rules are roundabout and the update rules have calculations for customized key figures, long times are expected.
Usually you need to work in RSMO, see which records are failing, and update from the PSA.
5. What are some of the frequent failures and errors?
There is no fixed reason for a load to fail; from an interview perspective I would answer it this way:
a) Loads can fail due to invalid characters
b) Because of a deadlock in the system
c) Because of a previous load failure, if the load is dependent on other loads
d) Because of erroneous records
e) Because of RFC connections
These are some of the reasons for load failures.

Questions Answers on SAP BW


What is the purpose of setup tables?
Setup tables are a kind of interface between the extractor and the application tables. The LO extractor takes data from the setup tables during initialization and full upload, so hitting the application tables for selection is avoided. As these tables are required only for full and init loads, you can delete the data after loading in order to avoid duplicates. Setup tables are filled with data from the application tables. The setup tables sit on top of the actual application tables (i.e. the OLTP tables storing transaction records). During the setup run, these setup tables are filled. Normally it is good practice to delete the existing setup tables before executing the setup runs, so as to avoid duplicate records for the same selections.
We have a cube; what is the need to use an ODS? Why is an ODS necessary when we already have a cube?
1) Remember that a cube has aggregated data and an ODS has granular data.
2) In the update rules of an InfoCube you do not have an option to overwrite, whereas for an ODS the default is overwrite.
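The overwrite-versus-additive distinction in point 2 can be sketched as follows. This is a conceptual Python model, not ABAP; the field names and the reduction of each target to a simple key-to-amount mapping are assumptions for illustration.

```python
# Sketch contrasting the two update behaviours: an InfoCube adds
# key-figure values for the same key, while an ODS overwrites the
# record for the same key. Field names are illustrative only.

def load_into_cube(cube, records):
    for rec in records:                      # additive: same key accumulates
        cube[rec["doc"]] = cube.get(rec["doc"], 0) + rec["amount"]
    return cube

def load_into_ods(ods, records):
    for rec in records:                      # overwrite: last record wins
        ods[rec["doc"]] = rec["amount"]
    return ods

records = [{"doc": "1000", "amount": 100},
           {"doc": "1000", "amount": 120}]   # corrected version of the same doc
print(load_into_cube({}, records))  # {'1000': 220}  (values summed)
print(load_into_ods({}, records))   # {'1000': 120}  (record overwritten)
```

This is why a document that changes over time is typically staged in an ODS first: loading both versions straight into a cube would double-count it.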
What is the importance of transaction RSKC? How is it useful in resolving issues with special characters?
How to handle double data loading in SAP BW?

What do you mean by SAP exit, User exit, Customer exit?


What are some of the production support issues - a troubleshooting guide?
When we go for Business content extraction and when go for LO/COPA extraction?
What are some of the few infocube name in SD and MM that we use for extraction and load them to BW?
How to create indexes on ODS and fact tables?
What are data load monitor (RSMO or RSMON)?
1A. RSKC.
Using this T-code, you can allow the BW system to accept special characters in the data coming from source systems. This list of characters can be obtained after analyzing the source system's data, or can be confirmed with the client during the design-specs stage.
2A. Exits.
These exits are customized for handling data transfer in various scenarios.
(Ex. Replacement path in reports -> a way to pass a variable to a BW report)
Some can be developed by a BW/ABAP developer and inserted wherever required.
Some of these programs are already available as part of SAP Business Content. These are called SAP exits. Depending on the requirement, we need to extend some exits and customize them.
3A.
Production issues are different for each BW project and most common issues can be obtained from some
of the previous mails. (data load issues).
4A.
LIS Extraction is kind of old school type and not preferred with big BW systems. Here you can expect
issues related to performance and data duplication in set up tables.
LO extraction came up with most of the advantages and using this, you can extend exiting extract
structures and use customized data sources.
If you can fetch all required data elements using SAP provided extract structures, you don't need to write
custom extractions... You can get clear idea on this after analyzing source system's data fields and
required fields in target system's data target's structure.
5A.
MM - 0PUR_C01 (Purchasing data), 0PUR_C03 (Vendor Evaluation)
SD - 0SD_C01 (Customer), 0SD_C03 (Sales Overview), etc.
6A.
You can do this by choosing "Manage Data Target" option and click on few buttons available in
"performance" tab.
7A.

RSMO is used to monitor data flow to target system from source system. You can see data by request,
source system, time request id etc.... just play with this..
What is a KPI?
KPI stands for Key Performance Indicator.
These are values companies use to manage their business, e.g. net profit.
In detail:
Stands for Key Performance Indicators. A KPI is used to measure how well an organization or individual
is accomplishing its goals and objectives. Organizations and businesses typically outline a number of KPIs
to evaluate progress made in areas where performance is harder to measure.
For example, job performance, consumer satisfaction and public reputation can be determined using a set
of defined KPIs. Additionally, KPI can be used to specify objective organizational and individual goals
such as sales, earnings, profits, market share and similar objectives.
KPIs selected must reflect the organization's goals, they must be key to its success, and they must be
measurable. Key performance indicators usually are long-term considerations for an organization

1. How to convert a BEx query global structure to a local structure (steps involved)?


Use a local structure when you want to add structure elements that are unique to the specific query. Changing the global structure changes the structure for all the queries that use it. That is the reason you go for a local structure.
Coming to the navigation part: in the BEx Analyzer, from the SAP Business Explorer toolbar, choose the Open Query icon (the icon that looks like a folder). On the SAP BEx Open dialog box:
Choose Queries. Select the desired InfoCube. Choose New. On the Define the Query screen: in the left frame, expand the Structure node. Drag and drop the desired structure into either the Rows or Columns frame. Select the global structure. Right-click and choose Remove Reference. A local structure is created.
Remember that you cannot revert the changes made to the global structure in this regard. You will have to delete the local structure and then drag and drop the global structure into the query definition.
When you try to save a global structure, a dialog box prompts you to confirm changes to all queries; that is how you identify a global structure.
2. I have an RKF and a CKF in a query; if the report gives an error, which one should be checked first, RKF or CKF, and why? (This was asked in one of the interviews.)

An RKF consists of a key figure restricted by certain characteristic combinations; a CKF holds calculations which use various key figures. They are not interdependent on each other; you can have both at the same time.
To my knowledge there is no documented limit on the number of RKFs and CKFs; the only concern would be performance. Restricted and calculated key figures themselves would not be an issue. However, the number of key figures that you can have in a cube is limited to around 248.
Restricted key figures restrict the key figure values based on a characteristic. (Remember, it won't restrict the query, only the KF values.)
Ex: You can restrict the values based on a particular month.
Now I create an RKF like this (ZRKF):
Restrict a funds KF with a period variable entered by the user.
This is defined globally and can be used in any of the queries on that InfoProvider. In the columns, let's assume there are 3 company codes. In a new selection, I drag in:
ZRKF
Company Code 1
Similarly I do this for the other company codes.
This means I have created an RKF once and I am using it in different ways in different columns (restricting with other characteristics too).
In the properties I give the relevant currency to be converted, which will display after converting the value from the native currency to the target currency.
Similarly for the other two columns with the remaining company codes.
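The RKF pattern described above (one restricted key figure reused per column with an extra restriction) can be sketched as a filtered aggregation. This is a conceptual Python model with invented data and field names, not the BEx implementation:

```python
# Conceptual sketch of a restricted key figure: the same base key
# figure summed only over rows matching all characteristic
# restrictions. Data and names are invented for illustration.

def rkf(rows, kf, **restrictions):
    """Sum a key figure over rows matching every characteristic restriction."""
    return sum(r[kf] for r in rows
               if all(r.get(ch) == val for ch, val in restrictions.items()))

rows = [
    {"comp_code": "1000", "period": "001", "funds": 500},
    {"comp_code": "2000", "period": "001", "funds": 300},
    {"comp_code": "1000", "period": "002", "funds": 200},
]
# One RKF definition (funds restricted to a period), reused per column
# with an additional company-code restriction:
print(rkf(rows, "funds", period="001", comp_code="1000"))  # 500
print(rkf(rows, "funds", period="001", comp_code="2000"))  # 300
```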
3. What is the use of Define Cell in BEx, and where is it useful?

When you define selection criteria and formulas for structural components and
there are two structural components of a query, generic cell definitions are
created at the intersection of the structural components that determine the
values to be presented in the cell.
Cell-specific definitions allow you to define explicit formulas, along with
implicit cell definition, and selection conditions for cells and in this way,
to override implicitly created cell values. This function allows you to design
much more detailed queries.
In addition, you can define cells that have no direct relationship to the
structural components. These cells are not displayed and serve as containers
for help selections or help formulas.
You need two structures to enable cell editor in bex. In every query you have
one structure for key figures, then you have to do another structure with
selections or formulas inside.

Then, having two structures, the cross between them results in a fixed reporting area of n rows * m columns. The cell at the intersection of any row with any column can be defined as a formula in the cell editor.
This is useful when you want a particular cell to have a different behaviour than the general one described in your query definition.
For example, imagine you have the following, where % is a formula kfB/kfA * 100:

      kfA  kfB    %
chA     6    4  66%
chB    10    2  20%
chC     8    4  50%

Then you want the % for row chC to be the sum of the % for chA and the % for chB. In the cell editor you can write a formula specifically for that cell as the sum of the two cells above it, chC/% = chA/% + chB/%, giving:

      kfA  kfB    %
chA     6    4  66%
chB    10    2  20%
chC     8    4  86%
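The example above can be sketched as: compute the generic column formula for every row, then override one specific cell with its own formula. A conceptual Python model, not BEx itself; truncation to whole percent is assumed to match the figures in the example:

```python
# Sketch of the cell-editor idea: a generic formula fills the column,
# then one cell is overridden with a cell-specific formula.

rows = {"chA": (6, 4), "chB": (10, 2), "chC": (8, 4)}   # (kfA, kfB)

# Generic column formula: % = kfB / kfA * 100 (truncated to whole percent)
pct = {ch: int(kfB / kfA * 100) for ch, (kfA, kfB) in rows.items()}

# Cell-specific override, as in the cell editor: chC/% = chA/% + chB/%
pct["chC"] = pct["chA"] + pct["chB"]

print(pct)   # {'chA': 66, 'chB': 20, 'chC': 86}
```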

1) What is a process chain? How many types are there? How many do we use in a real-time scenario? Can we define interdependent processes with tasks like data loading, cube compression, index maintenance, master data & ODS activation with the best possible performance & data integrity?
2) What is data integrity and how can we achieve it?
3) What is index maintenance and what is the purpose of using it in real time?
4) When and why do we use InfoCube compression in real time?
5) What is meant by data modelling and what will the consultant do in data modelling?
6) How can we enhance Business Content, and for what purpose do we enhance Business Content (given that we can activate Business Content)?
7) What is fine-tuning, how many types are there, and for what purpose do we do tuning in real time? Can tuning only be done with InfoCube partitions and by creating aggregates, or in other ways too?
8) What is meant by a MultiProvider and for what purpose do we use a MultiProvider?
9) What are scheduled and monitored data loads, and for what purpose?
Ans # 1: Process chains exist in the Administrator Workbench. Using these we can automate ETL processes. They allow BW developers to schedule all activities and monitor them (T-code: RSPC).
PROCESS CHAIN - Before defining PROCESS CHAIN, let us define PROCESS in any given process chain: it is a procedure, either within SAP or external to it, with a start and an end. This process runs in the background.
A PROCESS CHAIN is a set of such processes that are linked together in a chain. In other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.

This is normally done in order to automate a job or task that has to execute more than one process in order to complete the job or task.
1. Check the Source System for that particular PC.
2. Select the request ID (it will be in Header Tab) of PC
3. Go to SM37 of Source System.
4. Double Click on the Job.
5. You will navigate to a screen
6. In that Click "Job Details" button
7. A small Pop-up Window comes
8. In the Pop-up screen, take a note of a) Executing Server b) WP Number/PID
9. Open a new SM37 (/OSM37) command
10. In it, click the "Application Servers" button
11. You can see different Application Servers.
11a. Goto Executing server, and Double Click (Point 8 (a))
12. Goto PID (Point 8 (b))
13. On the left most you can see a check box
14. "Check" the check Box
15. On the Menu Bar.. You can see "Process"
16. In the "process" you have the Option "Cancel with Core"
17. Click on that option. * -- Ramkumar K
Ans # 2: Data integrity is about eliminating duplicate entries in the database and achieving normalization.
Ans # 4: InfoCube compression moves the data of an InfoCube into the compressed fact table, eliminating the request IDs and aggregating duplicate records. Compressed InfoCubes require less storage space and are faster for retrieval of information. The catch is: once you compress, you can no longer delete the compressed data by request. You are safe as long as you don't have any error in modeling.
This compression can be done through a process chain and also manually.
Tips by: Anand
Ans#3: Indexing is a process where the data is stored by indexing it. E.g. a phone book: when we write down somebody's number, Prasad's number goes under "P" and Rajesh's number goes under "R". The phone book process is indexing; similarly, storing data by creating indexes is called indexing.

Ans#5: Data modelling is a process where you collect the facts, the attributes associated with the facts, navigational attributes etc., and after you collect all these you decide which ones you will be using. This collection is done by interviewing the end users, the power users, the stakeholders etc.; it is generally done by the team lead, the project manager or sometimes a senior consultant (4-5 years of experience). So if you are new you don't have to worry about it, but do remember that it is an important aspect of any data warehousing solution, so make sure that you have read about data modelling before attending any interview or even starting to work.
Ans#6: We can enhance Business Content by adding fields to it. Since BC is delivered by SAP, it may not contain all the InfoObjects, InfoCubes etc. that you want to use according to your company's data model. E.g. you have a customer InfoCube (in BC) but your company uses an attribute for, say, apartment number; then instead of constructing the whole InfoCube you can add the above field to the existing BC InfoCube and get going.
Ans#7: Tuning is the most important process in BW. Tuning is done to increase efficiency: lowering the time for loading data into a cube, lowering the time for accessing a query, lowering the time for doing a drill-down, etc. Fine-tuning = lowering time (for everything possible). Tuning can be done by many things, not only by partitions and aggregates; there are various other options, e.g. compression.
Ans#8: A MultiProvider can combine various InfoProviders for reporting purposes: you can combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or InfoCubes, ODS objects and master data, etc. You can refer to help.sap.com for more info.
Ans#9: A scheduled data load means you have scheduled the loading of data for some particular date and time; you can do this in the scheduler tab of the InfoPackage. Monitored means you are monitoring that particular data load, or some other loads, by using the monitor transaction (RSMO).
BW Query Performance
Question:
1. What kind of tools are available to monitor the overall Query Performance?
o BW Statistics
o BW Workload Analysis in ST03N (Use Export Mode!)
o Content of Table RSDDSTAT
2. Do I have to do something to enable such tools?
o Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)
3. What kind of tools are available to analyse a specific query in detail?
o Transaction RSRT
o Transaction RSRTRACE
4. Do I have an overall query performance problem?
o Use ST03N -> BW System load values to recognize the problem. Use the
number given in table 'Reporting - InfoCubes:Share of total time (s)'

to check if one of the columns %OLAP, %DB, %Frontend shows a high


number in all InfoCubes.
o You need to run ST03N in expert mode to get these values
5. What can I do if the database proportion is high for all queries?
Check:
o If the database statistic strategy is set up properly for your DB platform
(above all for the BW specific tables)
o If database parameter set up accords with SAP Notes and SAP Services
(EarlyWatch)
o If Buffers, I/O, CPU, memory on the database server are exhausted?
o If Cube compression is used regularly
o If Database partitioning is used (not available on all DB platforms)
6. What can I do if the OLAP proportion is high for all queries?
Check:
o If the CPUs on the application server are exhausted
o If the SAP R/3 memory set up is done properly (use TX ST02 to find
bottlenecks)
o If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT,
Customizing default)
7. What can I do if the client proportion is high for all queries?
o Check whether most of your clients are connected via a WAN Connection and
the amount
of data which is transferred is rather high.
8. Where can I get specific runtime information for one query?
o Again you can use ST03N -> BW System Load
o Depending on the time frame you select, you get historical data or
current data.
o To get to a specific query you need to drill down using the InfoCube
name
o Use Aggregation Query to get more runtime information about a
single query. Use tab All data to get to the details.
(DB, OLAP, and Frontend time, plus Select/ Transferred records,
plus number of cells and formats)
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
(Use Details to get the runtime segments)
o High Database Runtime
o High OLAP Runtime
o High Frontend Runtime
10. What can I do if a query has a high database runtime?
o Check if an aggregate is suitable (use All data to get values
"selected records to transferred records", a high number here would
be an indicator for query performance improvement using an aggregate)

o Check if database statistics are up to date for the


Cube/Aggregate, use TX RSRV output (use database check for statistics
and indexes)
o Check if the read mode of the query is unfavourable - Recommended (H)
11. What can I do if a query has a high OLAP runtime?
o Check if a high number of Cells transferred to the OLAP (use
"All data" to get value "No. of Cells")
o Use RSRT technical Information to check if any extra OLAP-processing
is necessary (Stock Query, Exception Aggregation, Calc. before
Aggregation, Virtual Char. Key Figures, Attributes in Calculated
Key Figs, Time-dependent Currency Translation)
together with a high number of records transferred.
o Check if a user exit Usage is involved in the OLAP runtime?
o Check if large hierarchies are used and the entry hierarchy level is
as deep as possible. This limits the levels of the
hierarchy that must be processed. Use SE16 on the inclusion
tables and use the List of Value feature on the column successor
and predecessor to see which entry level of the hierarchy is used.
- Check if a proper index on the inclusion table exist
12. What can I do if a query has a high frontend runtime?
o Check if a very high number of cells and formattings are transferred
to the Frontend ( use "All data" to get value "No. of Cells") which
cause high network and frontend (processing) runtime.
o Check if frontend PC are within the recommendation (RAM, CPU Mhz)
o Check if the bandwidth for WAN connection is sufficient

SAP BW REALTIME INTERVIEW QUESTIONS


1) Why do we delete the setup tables (LBWG) & fill them (OLI*BW)?
A) Initially we don't delete the setup tables, but when we make a change in the extract structure we do. When we change the extract structure, there are newly added fields in it which were not there before. So to get the required data (i.e. the data which is required is extracted, and redundancy is avoided), we delete and then fill the setup tables.
This is done to refresh the statistical data. The extraction setup reads the dataset that you want to process (such as customer orders, with tables like VBAK and VBAP) & fills the relevant communication structure with the data. The data is stored in cluster tables, from where it is read when the initialization is run. It is important that during the initialization phase no one generates or modifies application data, at least until the tables are set up.
2) SIGNIFICANCE of ODS?
It holds granular data (detailed level).

3) WHERE THE PSA DATA IS STORED?


In PSA table.
3) WHAT IS DATA SIZE?
The volume of data one data target holds (in no. of records)
4) Different types of INFOCUBES.
Basic, Transactional and Virtual (remote, SAP remote and multi).
A Virtual Cube is used, for example, if you consider railway reservations, where all the information has to be updated online. For designing the Virtual Cube you have to write a function module that links to the table; a Virtual Cube is like a structure, and whenever the table is updated the Virtual Cube will fetch the data from the table and display the report online. FYI, you will find more information at https://www.sdn.sap.com/sdn/index.sdn - search for "Designing Virtual Cube" and you will get good material on designing the function module.
5) INFOSET QUERY.
Can be made of ODS's and Characteristic InfoObjects with masterdata.
6) IF THERE ARE 2 DATASOURCES HOW MANY TRANSFER STRUCTURES ARE THERE.
In R/3 or in BW? 2 in R/3 and 2 in BW
7) ROUTINES?
Exist in the InfoObject, transfer routines, update routines and start routine
8) BRIEF SOME STRUCTURES USED IN BEX.
Rows and Columns, you can create structures.
9) WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
The different variables are texts, formulas, hierarchies, hierarchy nodes & characteristic values.
Variable Types are
Manual entry /default value
Replacement path
SAP exit
Customer exit
Authorization
10) HOW MANY LEVELS YOU CAN GO IN REPORTING?
You can drill down to any level by using Navigational attributes and jump targets.
11) WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.

12) DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.


Refer to the documentation.
13) IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS USED?
No.
14) WHAT IS THE SIGNIFICANCE OF KPI'S?
KPI's indicate the performance of a company. These are key figures
15) AFTER THE DATA EXTRACTION WHAT IS THE IMAGE POSITION.
After image (correct me if I am wrong)
16) REPORTING AND RESTRICTIONS.
Refer to the documentation.
17) TOOLS USED FOR PERFORMANCE TUNING.
ST22, number ranges, deleting indexes before load, etc.
18) PROCESS CHAINS: IF YOU HAVE USED THEM, HOW WILL YOU SCHEDULE DATA LOADS DAILY?
There should be some tool to run the job daily (SM37 jobs)
19) AUTHORIZATIONS.
Profile generator
20) WEB REPORTING.
What are you expecting??
21) CAN CHARECTERSTIC INFOOBJECT CAN BE INFOPROVIDER.
Of course
22) PROCEDURES OF REPORTING ON MULTICUBES
Refer help. What are you expecting? MultiCube works on Union condition
23) EXPLAIN TRANSPORTATION OF OBJECTS?
Dev --> Q and Dev --> P
24) What types of partitioning are there for BW?
There are two partitioning performance aspects for BW (Cube & PSA):
A) Query Data Retrieval Performance Improvement:
Partitioning by (say) date range improves data retrieval by making best use of the database's [date range] execution plans and indexes (of, say, the Oracle database engine).
B) Transactional Load Partitioning Improvement:
Partitioning based on expected load volumes and data element sizes improves data loading into the PSA and Cubes by InfoPackages (e.g. without timeouts).
25) How can I compare data in R/3 with data in a BW Cube after the daily delta loads? Are there any
standard procedures for checking them or matching the number of records?
A) You can go to R/3 TCode RSA3 and run the extractor. It will give you the number of records extracted.
Then go to BW Monitor to check the number of records in the PSA and check to see if it is the same &
also in the monitor header tab.
A) RSA3 is a simple extractor checker program that allows you to rule out extracts problems in R/3. It is
simple to use, but only really tells you if the extractor works. Since records that get updated into
Cubes/ODS structures are controlled by Update Rules, you will not be able to determine what is in the
Cube compared to what is in the R/3 environment. You will need to compare records on a 1:1 basis
against records in R/3 transactions for the functional area in question. I would recommend enlisting the
help of the end user community to assist since they presumably know the data.
To use RSA3, go to it and enter the extractor ex: 2LIS_02_HDR. Click execute and you will see the record
count, you can also go to display that data. You are not modifying anything so what you do in RSA3 has
no effect on data quality afterwards. However, it will not tell you how many records should be expected in
BW for a given load. You have that information in the monitor RSMO during and after data loads. From
RSMO for a given load you can determine how many records were passed through the transfer rules from
R/3, how many targets were updated, and how many records passed through the Update Rules. It also
gives you error messages from the PSA.

26) Types of Transfer Rules?


A) Field to Field mapping, Constant, Variable & routine.
27) Types of Update Rules?
A) The update type can be addition, overwrite (for ODS objects) or no update; routines can also be used, optionally with a return table (checkbox).
28) Transfer Routine?
A) Routines, which we write in, transfer rules.
29) Update Routine?
A) Routines, which we write in Update rules
30) What is the difference between writing a routine in transfer rules and writing a routine in update
rules?
A) If you use the same InfoSource to update data in more than one data target, it is better to write the routine in the
transfer rules, because one InfoSource can be assigned to several data targets, whereas whatever logic you
write in the update rules is specific to one particular data target.
31) Routine with Return Table.

A) Update rules generally have only one return value. However, you can create a routine on the key figure
calculation tab strip by choosing the checkbox Return table. The corresponding key figure routine then no
longer has a return value, but a return table. You can then generate as many key figure values as you
like from one data record.
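A minimal sketch of such a routine body (BW 3.x; the field names /BIC/ZPLANT and /BIC/ZVALUE are illustrative assumptions, since the real structure of RESULT_TABLE is generated per InfoCube):

```abap
* Hedged sketch: key figure routine with "Return table" active.
* One incoming record (COMM_STRUCTURE) is split into two fact records.
  DATA: l_line LIKE LINE OF result_table.

  REFRESH result_table.

* distribute the incoming value evenly over two (assumed) plants
  l_line-/bic/zvalue = comm_structure-/bic/zvalue / 2.

  l_line-/bic/zplant = 'P001'.
  APPEND l_line TO result_table.

  l_line-/bic/zplant = 'P002'.
  APPEND l_line TO result_table.

  returncode = 0.
```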

32) Start routines?


A) Start routines can be written in both update rules and transfer rules. Suppose you want to restrict (delete)
some records based on conditions before they are loaded into the data targets; you can specify this in the
update rules' start routine.
Ex: DELETE DATA_PACKAGE WHERE ... means records are deleted from the incoming package based on the condition.
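A hedged sketch of such a start routine (BW 3.x update rules; the status field /BIC/ZSTATUS and value 'D' are illustrative assumptions):

```abap
* Hedged sketch of an update-rule start routine.
* DATA_PACKAGE is the standard internal table holding the incoming
* records of the current data package.
  DELETE data_package WHERE /bic/zstatus = 'D'.   " drop unwanted records

* ABORT <> 0 would cancel the whole load; 0 continues normally.
  abort = 0.
```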

33) X & Y Tables?


X-table = a table that links the characteristic's SIDs (e.g. material SIDs) with the SIDs of its time-independent navigation attributes.
Y-table = a table that links the characteristic's SIDs with the SIDs of its time-dependent navigation attributes.
There are four types of SID tables:
X: time-independent navigation attribute SID table
Y: time-dependent navigation attribute SID table
H: hierarchy SID table
I: hierarchy structure SID table
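For the material example, the X table of 0MATERIAL follows the standard generated naming convention /BI0/XMATERIAL (custom InfoObjects use /BIC/X&lt;name&gt;). A hedged look-up sketch, assuming the standard OBJVERS column (verify the table and fields in SE11):

```abap
* Hedged sketch: read active-version rows from the X table of 0MATERIAL.
  DATA: lt_x TYPE STANDARD TABLE OF /bi0/xmaterial.

  SELECT * FROM /bi0/xmaterial
    INTO TABLE lt_x
    UP TO 10 ROWS
    WHERE objvers = 'A'.          " active master data version only
```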

34) Filters & Restricted Key figures (real time example)


For an SD cube you can have restricted key figures such as billed quantity, billing value and number of
billing documents (each restricted, for example, to a given sales organization or period). A filter, by contrast,
restricts the data for the entire query, whereas an RKF restricts only its own key figure column.
35) Line-Item Dimension (give me an real time example)
Line-item dimension: invoice number or document number is a real-time example. Such high-cardinality characteristics are placed in their own line-item dimension, for which no dimension table is created; the SID is stored directly in the fact table.

36) What does the number in the 'Total' column in Transaction RSA7 mean?
A) The 'Total' column displays the number of LUWs that were written in the delta queue and that have not
yet been confirmed. The number includes the LUWs of the last delta request (for repetition of a delta
request) and the LUWs for the next delta request. A LUW only disappears from the RSA7 display when it
has been transferred to the BW System and a new delta request has been received from the BW
System.

37) How to know in which table (SAP BW) contains Technical Name / Description and creation data of a
particular Reports. Reports that are created using BEx Analyzer.
A) While opening a particular query you can press the Properties button to see all of these details. You will
also find the technical names and descriptions of queries in the following tables: the directory of all reports
(table RSRREPDIR) and the directory of reporting component elements (table RSZELTDIR); for workbooks
and their connections to queries, check the where-used list for reports in workbooks (table RSRWORKBOOK)
and the titles of Excel workbooks in the InfoCatalog (table RSRWBINDEXT).
38) What is a LUW in the delta queue?
A) A LUW from the point of view of the delta queue can be an individual document, a group of documents
from a collective run or a whole data packet of an application extractor.
39) Why does the number in the 'Total' column in the overview screen of Transaction RSA7 differ from the
number of data records that is displayed when you call the detail view?
A) The number on the overview screen corresponds to the total of LUWs (see also first question) that
were written to the qRFC queue and that have not yet been confirmed. The detail screen displays the
records contained in the LUWs. Both the records belonging to the previous delta request and the records
that do not meet the selection conditions of the preceding delta init requests are filtered out. Thus, only
the records that are ready for the next delta request are displayed on the detail screen. In the detail
screen of Transaction RSA7, a possibly existing customer exit is not taken into account.
40) Why does Transaction RSA7 still display LUWs on the overview screen after successful delta
loading?
A) Only when a new delta has been requested does the source system learn that the previous delta was
successfully loaded to the BW System. Then, the LUWs of the previous delta may be confirmed (and also
deleted). In the meantime, the LUWs must be kept for a possible delta request repetition. In particular, the
number on the overview screen does not change when the first delta was loaded to the BW System.
41) Why are selections not taken into account when the delta queue is filled?
A) Filtering according to selections takes place when the system reads from the delta queue. This is
necessary for reasons of performance.
42) Why is there a DataSource with '0' records in RSA7 if delta exists and has also been loaded
successfully?
A) It is most likely that this is a DataSource that does not send delta data to the BW System via the delta
queue but directly via the extractor (delta for master data using ALE change pointers). Such a DataSource
should not be displayed in RSA7. This error is corrected with BW 2.0B Support Package 11.
43) Do the entries in table ROIDOCPRMS have an impact on the performance of the loading procedure
from the delta queue?

A) The impact is limited. If performance problems are related to the loading process from the delta queue,
then refer to the application-specific notes (for example in the CO-PA area, in the logistics cockpit area
and so on).
Caution: As of Plug In 2000.2 patch 3 the entries in table ROIDOCPRMS are as effective for the delta
queue as for a full update. Please note, however, that LUWs are not split during data loading for
consistency reasons. This means that when very large LUWs are written to the DeltaQueue, the actual
package size may differ considerably from the MAXSIZE and MAXLINES parameters.
44) Why does it take so long to display the data in the delta queue (for example approximately 2 hours)?
A) With Plug In 2001.1 the display was changed: the user has the option of defining the amount of data to
be displayed, to restrict it, to selectively choose the number of a data record, to make a distinction
between the 'actual' delta data and the data intended for repetition and so on.
45) What is the purpose of function 'Delete data and meta data in a queue' in RSA7? What exactly is
deleted?
A) You should act with extreme caution when you use the deletion function in the delta queue. It is
comparable to deleting an InitDelta in the BW System and should preferably be executed there. You not
only delete all data of this DataSource for the affected BW System, but also lose the entire information
concerning the delta initialization. Then you can only request new deltas after another delta initialization.
When you delete the data, the LUWs kept in the qRFC queue for the corresponding target system are
confirmed. Physical deletion only takes place in the qRFC outbound queue if there are no more
references to the LUWs.
The deletion function is for example intended for a case where the BW System, from which the delta
initialization was originally executed, no longer exists or can no longer be accessed.
