

-> SAP BW and SAP ECC are two different products or applications which are independent of each other.
-> Please remember that SAP BW is not one of the modules in ECC like SD, MM, FI, CO, HR, QM, PP...
-> Basically ECC is an OLTP system where we process the transactional data.
-> We have many modules in ECC, and it depends on the client which modules they require as part of their business requirements.
-> All the modules in ECC are tightly integrated; for example, in SD when we do the Goods Issue to the customer, the stock is automatically reduced in IM.
-> In general, depending on the requirements, the customer buys the license from SAP and implements the ECC system. As part of this, the SD consultant configures the SD module, the MM consultant configures the MM module, and likewise each module consultant configures the ECC system as per their requirements. Once the configurations are done, the end users enter transactions or postings in SAP ECC, and the data is saved and stored in the ECC database. As BW consultants, we need to extract the data entered by the end users, transform the data as required, and load it into the SAP BW system. Then we can do the reporting on the data loaded into BW.
-> In the general scenario (70%), a small-scale industry might have one ECC system, and it is enough to extract the data from that single ECC system alone. But in some situations clients will have different ECC systems based on their business, for example plant-wise or region-wise. Then we need to extract the data from the different ECC systems into the BW system.
-> Data is extracted from the source system (SAP R/3) and updated using a DataSource.
-> When we extract the data from a Flat File or via DB Connect, the extraction procedure is the same. But that is not the case with ECC extraction, because each module may have different procedures or different types of extractors, based on the data, to extract the data.
1. SAP R/3 and SAP BW should be in a network (LAN, WAN...).
2. A source system connection between SAP R/3 and SAP BW (SAP connection, automatic or manual).
-> The main difference between Flat File extraction and SAP R/3 extraction:
                - In case of a Flat File, when we assign the DataSource, the DataSource gets created automatically by defining the Transfer Structure.
                - But in case of SAP R/3, we have to log in to the SAP R/3 application, create a DataSource, bring (replicate) the DataSource to SAP BW, and assign the DataSource to the InfoSource.
Note: SAP R/3 is an OLTP application and has different functional modules like SD, MM, FI-CO, HR, PP, QM....
          But all these functional modules are broadly brought under three functional areas: FI, HR and Logistics.
          The way you extract data differs between functional areas, depending on the type of data you are trying to extract.

Different types of extractions from SAP R/3 to the SAP BW system:
Extractor: An extractor is nothing but the DataSource which is used for extracting the data from the source system. Extraction is nothing but the process of extracting data.
There are 2 types of extractors:
        1. Application Specific       - Data is extracted based on a specific module or application.
        2. Cross-Application Specific - Data is extracted from different modules based on the requirement.
-> As the first preference we have to check for Application Specific extractors. If we cannot find an extractor among the Application Specific ones, then we go for Cross-Application.
(1). Application Specific:- Extractors developed by SAP or by customers themselves extract data from specific tables that are connected to a corresponding application area. In Application Specific we have 2 types.
        I. BW CONTENT EXTRACTORS: These are extractors that are delivered by SAP. But we cannot use them directly, because they will be in Delivered status and we need to convert them into Active status if we want to use them. Applications that are included here are SRM, LO Cockpit... The data is extracted from application-specific DB tables (FI-AA, FI-AP, FI-AR, FI-GL, CCA, PCA, PC, ...).
        II. CUSTOMER GENERATED EXTRACTORS: Extractors that are generated by the customers based on some other object. Here SAP does not deliver the DataSource, but there is an environment where we can generate the DataSource. Applications that are included here are LIS (outdated), FI-SL, CO-PA (based on the Operating Concern). The data is extracted from application-specific DB tables.
(2). Cross-Application Specific:- The data in generic extractors can come from various sources, such as logical databases, cluster tables, and transparent tables. This tool is not application-specific and can be used in situations where the other types of extractor are unavailable. Cross-Application Specific has 5 types.
        V. DOMAIN (We use this only for Text DataSources)
-> For any type of extraction we have to cover the below aspects.
        1. Procedure for Generating (Customer Generated) / Creating (Generic) / Installing (Business Content) the DataSource.
        2. Delta Mechanism
                - For any DataSource we create or generate, we have to identify whether the DataSource can be delta capable: can we implement logic for this DataSource to return the delta records? There might be some DataSources for which there is no way to support the delta, nor can we implement logic in the extractor so that it extracts the delta records; in such cases we cannot go with the delta mechanism. If there is an extractor that supports the delta mechanism, then what is the logic of the extractor to identify the delta records (newly added or changed)?
                - If we do a Full load, in the InfoPackage (IP) it will be Full, because the IP is based on the DataSource. If the DataSource supports delta, then in the IP we can see Init Delta and Delta. That is, if the IP is delta capable, the DataSource must be delta enabled, and if the DataSource is delta enabled there must be some logic behind it to identify the delta (new and changed) records. This is called the Delta Mechanism.
        3. Delta Process (ABR, AIE)
                - Once the delta records are identified, how the delta records are processed is the Delta Process.
        4. Initializing the data.
        5. How to enhance the DataSource (we go for enhancement only when required).

-> A DataSource is an object that defines the following objects.
        1. Extract Structure (ES)        - Source System (SAP ECC)
        2. Transfer Structure (TS)       - Source System (SAP ECC)
        3. Transfer Structure in BW (TS) - SAP BW System
-> Also, any DataSource should have an ES, a TS (SAP ECC), a TS (BW), a source of data, a delta mechanism and a delta process.
-> The Extract Structure is the set of fields that have to be extracted from the source table.
-> The Transfer Structure is, out of the fields that are extracted, the set of fields that are transferred to BW.
        - For example, a table may have hundreds of fields, but we may not require all of them, so we select the fields which are required for extraction. This defines the Extract Structure. Also, some of the extracted fields may not need to be transferred to BW; the remaining fields define the Transfer Structure. This can be done with the Hide check box, to exclude or hide the fields that are not required in the Transfer Structure.
-> The TS may be a subset of the ES. But the TS in ECC and the TS in BW should be exactly the same; if not, we will get errors.
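The subset relationship above can be sketched in Python (the field list and the hidden field ERNAM are illustrative assumptions; the real structures live in the SAP data dictionary):

```python
# Sketch: Extract Structure vs. Transfer Structure (illustrative field names).
# The Extract Structure is the set of fields read from the source table;
# fields flagged with the Hide check box are dropped from the Transfer
# Structure that is sent on to BW.

source_table_fields = ["MANDT", "VBELN", "ERDAT", "AUART", "NETWR", "WAERK", "ERNAM"]

# Fields selected for extraction (MANDT is never extracted into BW).
extract_structure = ["VBELN", "ERDAT", "AUART", "NETWR", "WAERK", "ERNAM"]

# Fields marked with the Hide check box are excluded from transfer.
hidden = {"ERNAM"}

transfer_structure = [f for f in extract_structure if f not in hidden]

print(transfer_structure)  # TS is a subset of ES
```

The TS replicated to BW must contain exactly these fields, which is why a mismatch between the ECC and BW transfer structures causes errors.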
Types of Data Sources:
1. DataSource for Transaction Data
2. DataSource for Master Data Texts
3. DataSource for Master Data Attributes
4. DataSource for Master Data Hierarchies
-> It makes sense to go with Generic extraction first, because there we create everything ourselves and can analyse how it is done; then we can understand the standard DataSources delivered by SAP.
-> Most of the standard DataSources are based on the function module concept. The reason we extract the data this way is that we have the data in SAP R/3 tables and we need to extract it into our SAP BW DataSource.

Login to SAP R/3 -> SE11 -> VBAK (Table)
-> Any table which contains the MANDT field as mandatory is a client-dependent table, i.e. different clients will have different data.
-> SAP BW does not contain any client-dependent tables, so we never extract the field MANDT into the SAP BW system.
Req: I need to extract the data from the VBAK table in SAP R/3.
VBAK (Sales Document: Header Data)
VBELN - Sales Doc Number
ERDAT - Date on Which Record Was Created
AUART - Sales Document Type
NETWR - Net Value of the Sales Order in Document Currency
WAERK - SD Document Currency

Step1: Field to InfoObject mapping.
        - In this step we have to create the InfoObjects in BW corresponding to the R/3 fields.
Step2: Make the InfoObjects ready.
        yc_vbeln (Sales Doc Number)                 - Char(10)
        0calday  (Date on Which Record Was Created) - DATS(8)
        yc_auart (Sales Document Type)              - Char
        yk_netwr (Net Value of the Sales Order)     - Amount(15), with a currency unit
Step3: Create a DSO
Properly assign the Key fields and Data Fields.
The InfoObjects corresponding to the Primary Keyfields should be
placed as Keyfields in the DSO.
Step4: Create an InfoCube
Step5: Create an Application Component.
Step6: Create InfoSource. Create Communication Structure and connect this to DSO
with Update Rules.
Step7: Create the Generic DataSource in SAP R3 based on Transparent Table.
Creating a Generic DataSource in SAP R3:
Step7.1: Go to SAP R/3 and use Tcode RSO2 (Maintain Generic DataSources), or SBIW -> Generic DataSource -> Maintain Generic DataSources.
Step7.2: Here we use the Transaction Data type of DataSource.
        - Here we can expect a question: in RSO2 we have different types of DataSources, i.e. Transaction, Master Data Text and Master Data Attributes. Why do we not have Hierarchies here? The answer is that since SAP R/3 is an OLTP system it does not support hierarchies, and hence there is no option to create a DataSource of type Hierarchy.
Step7.3: When we create the DataSource we have different tabs available.
        - Here we need to maintain all three types of descriptions, i.e. Short, Medium and Long text, whereas when defining hierarchies it is mandatory to give only the short text.
        - Application Component: Whatever Generic DataSource we create, we should assign (designate) it to a particular Application Component.
Step7.4: Since we are creating the Generic DataSource based on a table, we select View/Table and notice that all the other methods (InfoSet, FM) are greyed out.
        - Also, here we notice that 'EXTRACT STRUCTURE' is greyed out, which means that we do not create the extract structure ourselves; the system creates it automatically.
        - The EXTRACT STRUCTURE is nothing but the group of fields that we are extracting from the source system.
        - The EXTRACT STRUCTURE is created automatically whenever we save the DataSource, initially with all fields: since we have mentioned VBAK, the system automatically assumes that we are extracting the fields of that table.
        - DELTA UPDATE: The Delta Update check box indicates whether this DataSource is delta-update enabled or not.
        - Here, for every field there are Selection and Hide Field options.
                - SELECTION  - If we want to extract the data based on some selections, we select this check box for whichever fields we want available in the data selection.
                - HIDE FIELD - Hide acts on the Transfer Structure. All the fields for which we select the Hide Field check box will not be available in the Transfer Structure in the source system, nor in the Transfer Structure in BW. We can customize the Transfer Structure using this field.
        - Now we have seen the Extract Structure in the source system.
Step8: Now Replicate the DataSource in SAP BW system.
        -> RSA1 -> Modeling -> Source Systems -> select the R/3 connection -> Right Click -> DataSource Overview.
                - We cannot find the DataSource here, because we have just created the DataSource in the R/3 system and have not yet replicated it into BW.
        -> Search for the SAP Application Component and expand it. Select the Application Component to which we designated our DataSource.
        -> Now right-click on it and replicate the DataSource. This replicates all the DataSources in that Application Component.
        -> While it is replicating the DataSources it reads the METADATA (data about data), i.e. the detailed information of the DataSource.
        -> It also sets up a parameterised timestamp for the DataSource.
        Note: We cannot assign a DataSource to multiple InfoSources. But one InfoSource can have multiple DataSources.
Step9: Assign the DataSource to the InfoSource (select the InfoSource and assign the DataSource).
-> We can see the Transfer Structure in the R3 System only when we activate the
Transfer Rules in SAP BW.
Step10: Create the InfoPackage.
-> How can we make sure that the number of records that have come from R/3 and the records received in SAP BW are the same?
        - Compare the number of records in the table with the records received in the BW DataSource.
-> RSA3 - Extractor Checker - is used for checking the number of records the DataSource has extracted.
-> The steps for creating the Generic DataSource from a Table and from a View are the same. But we need to know in which scenario we use which.
-> If the data we want to extract from R/3 is available in a single table, then we create the DataSource based on the table; if we want data to be extracted from different tables, then we create a view on those tables and create the DataSource based on the view.
-> If we need to join tables with the help of a view, at least one field should be common; it may be a key or non-key field.
-> We can also create a view on a single table; this is advantageous over creating the DataSource on the table directly.
-> There is not much difference between a View and an InfoSet query, but technical people use the view and functional people prefer the InfoSet.
-> The Tcode used to create an InfoSet is SQ02.

-> The main difference between Generic extraction from a Table/View/InfoSet and from a Function Module is that in case of Table/View/InfoSet the Extract Structure is created automatically, whereas with a Function Module we need to create the Extract Structure explicitly.
Step1: Define the Extract Structure
- SE11 -> DataType -> Structure -> Create
Step2: Define the Function Group and Function Module (SE37)
        - By default the Function Group has 2 include programs:
                1. Top Include  2. 2nd Include
        - The Top Include program holds all the global declarations, globally accessible to all the function modules in the function group.
        - The 2nd Include program maintains information about all the function modules under the group.
        - We write the Function Module in the SAP R/3 system only.
        - Declare the TYPE-POOLS 'SRSC' and 'SBIWA' in the Top Include program of the Function Group.
- 'ROOSOURCE': this table holds detailed information about all the DataSources defined in SAP R/3, i.e. BC DataSources, Generic DataSources (FI, LO, CO-PA...)...
- For example, the DataSource '2LIS_11_VAHDR' will have many fields like Objversion (D or A), Type (Trans, Text, Attr), Applnm (SD, FI, MM), Extractor (MCEX_BW_LO_API - FM)....
        - Now create a Function Module as a copy of the existing Function Module (MCEX_BW_LO_API) of a Business Content DataSource.
        - Modify the Function Module as per the requirement.
        - We will not change any Import and Export parameters.
        - Here, in the Tables tab, E_T_DATA is the output table: when we use this Function Module and extract the data, whatever is available in the E_T_DATA table is extracted into SAP BW. So the logic of the code should be, for example, to extract the data from the VBAK table and update it into the table E_T_DATA, which is typed like the Extract Structure.
        - So the structure of the output table (E_T_DATA) is the same as the Extract Structure that we created with the required fields.
        - Now we delete all the copied code and write the code which is required.
        - Here we fetch the data from the Table/View and update it into the internal table (E_T_DATA). Whatever data we update into the internal table will be available in BW.
Step3: Now create the Generic DataSource with Tcode RSO2, using the Function Module.
Step4: Check with the Extractor Checker (Tcode RSA3) for the number of records.
-> To debug the code in the Function Module, we go to the source code of the function module, insert a breakpoint, and execute the DataSource in debug mode. The purpose of RSA3 is also to debug the Generic DataSource.
-> How do you extract the data based on some selections?
        - The selection table passed to the FM holds the set of selections supplied when we execute it. Each selection row is based on the structure RSSELECT (Interface: Selection criteria), with the fields FIELDNM, SIGN, OPTION, LOW and HIGH.
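A sketch in Python of how such an extractor behaves (the real implementation is an ABAP function module; the sample VBAK rows and the helper names `matches` and `extract` are invented for illustration, and only the FIELDNM/SIGN/OPTION/LOW/HIGH shape follows RSSELECT):

```python
# Sketch of the extractor-FM logic: apply RSSELECT-style selections to the
# source rows and fill the output table (E_T_DATA) shaped like the extract
# structure. Only SIGN='I' with OPTION 'EQ' and 'BT' is handled here.

def matches(row, sel):
    """Evaluate one selection row against a source row."""
    value = row[sel["FIELDNM"]]
    if sel["OPTION"] == "EQ":
        return value == sel["LOW"]
    if sel["OPTION"] == "BT":
        return sel["LOW"] <= value <= sel["HIGH"]
    return False

def extract(source_rows, selections, fields):
    """Fill an E_T_DATA-like list with rows that pass all selections."""
    e_t_data = []
    for row in source_rows:
        if all(matches(row, s) for s in selections):
            # project the row onto the extract-structure fields
            e_t_data.append({f: row[f] for f in fields})
    return e_t_data

vbak = [
    {"VBELN": "0001", "AUART": "OR", "NETWR": 5000},
    {"VBELN": "0002", "AUART": "RE", "NETWR": 700},
]
sels = [{"FIELDNM": "AUART", "SIGN": "I", "OPTION": "EQ", "LOW": "OR", "HIGH": ""}]
print(extract(vbak, sels, ["VBELN", "NETWR"]))
```

Only the rows left in E_T_DATA reach BW, which is why all extraction logic must end by filling that table.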
-> We have 3 types of delta mechanism settings for a Generic extractor.
(1). CALDAY:
-> If we set up the delta based on CALDAY, how does this work?
        - We have the table VBAK with the fields VBELN, created date, changed date, NETWR. We can set up a delta based on CALDAY on the last changed date.
-> We can set up the delta mechanism for a Generic DataSource with CALDAY only if the source table/view/query has the latest changed date.
-> When we set up the delta for a Generic extractor based on CALDAY, we cannot run multiple deltas on the same day; we can run it only once per day, at the end of the day.**
        - For example, we run the init on Dec 30 and all the records come in. When records are changed on Dec 31 and we run the delta on the 31st, it brings the changed or added records; the delta works by checking a condition like "document changed date greater than the 30th and less than or equal to the 31st" (>30 and <=31).
        - But consider a document created on the 30th after the init run. We have 2 options for running the delta: on the same day or on the next day. If we run the delta on the same day evening, the condition is (>30 and <=30) and it fails. If we run the delta on the next day, the condition is (>30 and <=31), and since the document's date is the 30th it fails again, so the record is missed either way.
        - This limitation of running the Generic delta only once per day comes from the date-level granularity; the timestamp option removes it.
(2). TIMESTAMP:
-> We can set up the delta mechanism for a Generic DataSource with TIMESTAMP only if the source has the latest TIMESTAMP (YYYYMMDDHHMMSS).
-> We can run multiple deltas based on TIMESTAMP; with the help of TIMESTAMP we can overcome the problem of CALDAY.
-> Suppose we run the delta based on TIMESTAMP every 30 minutes, the last delta run is saved at 10:30 AM, and a user starts creating a document at 10:55 AM but has not saved it before 11:00 AM. The delta scheduled at 11:00 AM uses the condition (>10:30 and <=11:00); since the document the user is creating has not yet been saved, the record is not picked up. When the user saves it, the timestamp is still 10:55, so we miss it in the next delta run as well, since that run's condition is (>11:00 and <=11:30).
-> How can we overcome the above situation without missing records?
        - We can overcome such a situation by using the SAFETY INTERVAL, giving a lower limit (e.g. 300 sec).
        - With this approach there is a possibility of getting duplicate records; to overcome that, we use a DSO before loading into the Cube.
        - By using the lower limit we minimise the possibility of missing delta records based on the timestamp.
        - In case of a delta with TIMESTAMP we have the flexibility of scheduling the delta loads at any frequency (30 min, 60 min...), so there is a chance of missing some delta records. To overcome this problem we specify the lower limit of the DataSource, to minimise the missing of delta records.
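A Python sketch of the timestamp window with a safety-interval lower limit (the timestamps and the 300-second limit are illustrative; in the real system the lower limit is maintained on the Generic DataSource):

```python
from datetime import datetime, timedelta

def timestamp_delta(rows, last_run_ts, run_ts, safety_lower=timedelta(0)):
    """TIMESTAMP delta sketch with a safety-interval lower limit.
    The window's lower bound is pulled back by safety_lower, so records whose
    timestamp landed just before the last run are still picked up -- at the
    cost of possible duplicates, which a DSO in front of the Cube absorbs."""
    lower = last_run_ts - safety_lower
    return [r for r in rows if lower < r["TS"] <= run_ts]

# Document saved shortly AFTER the 11:00 run, but stamped 10:56.
rows = [{"DOC": "A", "TS": datetime(2024, 1, 1, 10, 56)}]

last_run = datetime(2024, 1, 1, 11, 0)
this_run = datetime(2024, 1, 1, 11, 30)

print(timestamp_delta(rows, last_run, this_run))                          # missed
print(timestamp_delta(rows, last_run, this_run, timedelta(seconds=300)))  # caught
```

The same record may now be selected by two consecutive delta runs, which is exactly why the overwrite behaviour of a DSO is placed before the Cube.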
(3). NUMERIC POINTER:
-> We can set up the Generic delta if the source contains a counter field (a numeric primary key field).
-> It sets a pointer up to which the records have been picked.
-> If we set up the delta based on NUMERIC POINTER, it can only recognise newly added records, not modified records.
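A Python sketch of the pointer logic (the field name DOCNR and the rows are invented):

```python
def numeric_pointer_delta(rows, pointer):
    """NUMERIC POINTER delta sketch: pick rows whose counter is beyond the
    pointer, then advance the pointer. A modified old record keeps its old
    number, so it is never re-extracted -- only newly added records are seen."""
    new_rows = [r for r in rows if r["DOCNR"] > pointer]
    new_pointer = max([pointer] + [r["DOCNR"] for r in new_rows])
    return new_rows, new_pointer

rows = [
    {"DOCNR": 1},
    {"DOCNR": 2, "modified": True},  # changed since the last run, but DOCNR unchanged
    {"DOCNR": 3},
    {"DOCNR": 4},                    # newly added
]

picked, ptr = numeric_pointer_delta(rows, 3)
print([r["DOCNR"] for r in picked], ptr)  # only the new record; pointer advances
```

The modified record with DOCNR 2 sits below the pointer and is silently skipped, which is the limitation stated above.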
-> In Generic Delta we have 2 options
1. New Status for changed records
2. Additive Delta
New Status for Changed Records: It sets the complete new status (after-image) for changed records as well.
-> When we select this radio button and extract the data from the source to BW, it is mandatory for us to load into a DSO first and then to the Cube.
Additive Delta: If we create the delta as Additive Delta, then we can load into the Cube directly, or from a DSO to the Cube as well.
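The difference between the two options can be sketched in Python (the order numbers and values are invented; in the real system the DSO change log and the Cube's additive update implement these behaviours):

```python
# Sketch: how a delta image for a changed record is applied in the target.
# "New status for changed records" delivers the full new value, which must
# OVERWRITE the old one -- hence the mandatory DSO before the Cube.
# "Additive delta" delivers only the difference, which can simply be ADDED,
# so loading a Cube directly is safe.

def apply_new_status(target, key, new_value):
    target[key] = new_value                        # overwrite (DSO-style)
    return target

def apply_additive(target, key, difference):
    target[key] = target.get(key, 0) + difference  # add (Cube-style)
    return target

# Order 0001 changes from a net value of 100 to 120:
print(apply_new_status({"0001": 100}, "0001", 120))  # full new status
print(apply_additive({"0001": 100}, "0001", 20))     # only the +20 difference
```

Both paths end at the same value, but only the additive image is safe to sum straight into a Cube.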
-> If it is CO-PA datasource we can see the timestamp in 'KEB2'
-> In Generic DataSource we can check the status of Day, time and pointer in 'RS
DataSource for Data Reconciliation:-
The DataSource is used for data reconciliation with another DataSource and t
herefore should not be used productively.
For reconciliation, the data reconciliation DataSource should be used in a s
cenario that supports direct access. The data that is extracted is compared with
the data in the DataSource that is being checked.
DataSource: Extractor Supports Direct Access:-
This field shows if the DataSource supports direct access to data.
A RemoteProvider in BI can only get data from a DataSource that supports dir
ect data access.
It is only useful to access data directly if the quantity of data to be extr
acted can be restricted to a suitable selection.
1 - supported (without preaggregation)
2 - supported (with preaggregation)
D - not supported
-> BUSINESS CONTENT OBJECTS are the Standard Objects that are delivered by SAP a
nd we can use them.
-> It is a complete set of BW Objects developed by SAP to support the OLAP tasks
. It contains the predefined roles, workbooks, queries, infocubes, DSO,
Keyfigures, Characteristics, Update Rules, InfoSources, Transfor
mations, DTP's, InfoPackages.......
Note: All BUSINESS CONTENT objects are prefixed with '0'. They are in the 'DELIVERED' version. We cannot use objects in the Delivered version, so in order to use them we need to copy them into the 'ACTIVE' version; this process is known as 'INSTALLING THE BUSINESS CONTENT OBJECTS'. Changes are saved in the 'M' version.
1. After the content release upgrade.
2. After installing a content support package.
-> Here we have 3 screens:
        Screen1: Depending on the option we select in Screen1, the view or the contents of Screen2 change.
        Screen2: All the BUSINESS CONTENT OBJECTS are in Screen2. When we want to install any BUSINESS CONTENT object, we drag the object from Screen2 to Screen3 and install.
        Screen3: All the objects that we need to install.
InfoProviders by InfoArea
InfoObjects by InfoArea
InfoSource by Application Component
Object Types
Objects in BW Patch
Transport Request

-> Collection Mode specifies how the objects are collected, automatically or manually.
        (1). Automatic - The dependent objects are installed automatically.
        (2). Manual    - The dependent objects need to be installed manually.
        1. Simulate Installation      - Just checks or simulates whether the BI Content objects can be installed properly without any errors. This does not install anything.
        2. Install                    - Installs the BI Content objects in the foreground.
        3. Install in Background      - Installs the BI Content objects in the background. Normally we do the installation in the background only, and it can be checked in SM37.
        4. Installation and Transport - All the objects are installed and then written to the TR automatically.
-> Grouping specifies which other objects should be collected.
        (1). Only Necessary Objects
                - We have a Business Content cube 0A and we need to install it. If we collect the cube from Screen2 to Screen3 with Grouping set to Only Necessary Objects, then along with the cube it collects all the other objects that the cube depends on, i.e. the InfoObjects and InfoArea are also collected.
        (2). In Data Flow Before
                - If we collect the cube with Grouping set to In Data Flow Before, then along with the cube it collects all the objects that the cube depends on, plus all the objects required to load data into this cube, i.e. Cube, InfoArea, InfoObjects, InfoSources, DataSources, Communication Structure, Transfer Rules....
        (3). In Data Flow After
                - It collects the necessary objects and, along with them, the objects which retrieve data from this cube: Cube, InfoArea, InfoObjects + Reports, Cubes, DSOs... (not the Update Rules and InfoSources which are used to load data into the cube).
        (4). In Data Flow Before and After
                - It collects the necessary objects and the In Data Flow Before and After objects as well.
-> If this check box is selected, it installs the object if the object is not yet installed, or overwrites it if the object is already installed.
        - Suppose we are installing customer number and we select only the Install check box: it installs the object, i.e. it copies the object from the Delivered to the Active version. If the object is already in the Active version, it simply overwrites the Active version of the object.

        For example: if object A has length 10 and we install it, it is installed in the Active version. After installing it, we change the length of the object from 10 to 12; if we then select only the Install check box, the object is overwritten, i.e. the length of the object is overwritten back to 10 from 12.
-> If we select this check box, it skips the object and does not install it.
-> If we select both check boxes, it merges the Active and Delivered version properties. It does not overwrite the object; it creates a merge of the Active and Delivered versions of the object, showing both the active and the delivered properties.

-> When the object is available in the Active version, we have to decide whether to keep the Active version or to install the latest SAP-delivered version of the object when SAP ships upgrades.
-> When we check Match (X), the customer version of the object is merged with the new SAP version and a new customer version is created. If we don't check it, the customer version of the object is overwritten by the SAP-delivered version.

-> We install the objects in the background. But before we install them in the background, we would like to check whether the installation will be successful or not. Simulation does not install the object; it only checks whether the installation will be successful.
-> Contains the display of collected objects; this shows why each object was collected.

-> A big organisation will have different departments, and in order to process the different transactions happening in each department there is a separate module like SD, MM, PP, FI, HR...
Sales and Distribution:
-> Every company will have some customers (C1, C2, C3, C4...) and also vendors (V1, V2, V3, V4...).
-> The company purchases raw materials from the vendors, converts the raw materials into finished goods (production), and sells the finished goods to the customers. This is the outline of most companies (mainly manufacturing companies).
-> Suppose some ABC company has a plant (production area) where it produces the goods. It will have different storage locations; the places where the stock is stored are called storage locations, e.g. a storage location for scrap, a storage location for out-of-specification stock, a storage location for raw materials, a storage location for unrestricted stock, and a storage location for slow-moving stock.
-> Some very big companies buy the raw materials from fixed vendors and store all the raw materials in the storage location for raw material (SL1). This transaction indicates purchasing.
-> The plant manager is responsible for the complete planning in production. Suppose he gets the requirement to produce some 100 kgs of chemical1 and 200 kgs of chemical2. He has to take the raw materials from the storage location (SL1) to the plant. He calculates how much he needs to take from raw material1, raw material2 and so on, mixing them to produce chemical1 and chemical2. We call this the batch input; the product we get is the batch output. Once the product comes out, they do the QUALITY CHECK. Whatever product meets the quality requirements is put into UNRESTRICTED STOCK, where the product is ready for consumption (ready to use). If the product doesn't meet the quality check, they move it to the storage location OUT OF SPECIFICATION. If some products are not sold and become outdated, the stock is moved from UNRESTRICTED STOCK to SLOW MOVING STOCK.
-> Suppose customer C1 wants to buy Methane (100 kgs) and Ethane (300 kgs). He sends an ENQUIRY to the companies, i.e. before he finalises a company he sends the enquiry to all the methane and ethane producing companies.
-> The ENQUIRY comes to the sales executives, who understand it and send the QUOTATION (catalog). A QUOTATION is nothing but a document specifying the different price catalogs with quantities and all the other features.
-> The customer gets the QUOTATIONS from all the companies and, based on the quotations, chooses the ABC company to buy the methane and ethane from.
-> Now the customer creates the PURCHASE ORDER (PO). A purchase order is nothing but the quantity, the date the goods are required, and the different products he wants. The purchase order is sent to the sales executive. Before he accepts the PO from the customer, he performs a check on the availability of the products: he goes to the storage location for unrestricted stock and checks for the availability of the goods. If he has sufficient goods, he will deliver the goods by the date mentioned by the customer. If he doesn't find the goods available, he starts negotiating with the customer, to deliver the available goods by the date mentioned and the remaining goods once they become available.
-> Now the Sales executive creates the SALES ORDER.
10(M) 100 KGs 5000 INR
20(E) 200 KGs 7500 INR
-> In a SALES ORDER we have 3 types of information:
        1. Sales Order Header Level  - Any information that is common to all the items in the SALES ORDER is header-level data.
        2. Sales Order Item Level    - Any information that is specific to each item is item-level data.
        3. Sales Order Schedule Line - The information about how each item is scheduled to be delivered.
-> This complete information is known as SALES ORDER.
-> DELIVERY NOTE doc: This is just a receipt or document so that the customer knows what goods are going to be delivered on what date.
1. Delivery Header data
2. Delivery Item data
-> Once the goods are delivered per the Delivery Note, the POST GOODS ISSUE (PGI) document is created. Now the stock is reduced and we have to raise the BILLING DOCUMENT.
-> The Customer will pay the Money and Finance Manager will collect the money fr
om the customer.
-> At the same time, after receiving the money, the finance manager funds the money to the plant manager for production (raw materials...).
-> HUMAN RESOURCES deals with the hiring of employees in different departments, their salaries, time management, etc.
-> The production-related transactions are taken care of by the PRODUCTION PLANNING module.
-> Also, the quality checks are performed by the QUALITY MANAGEMENT module.
-> MATERIAL MANAGEMENT involves purchasing raw materials (procurement) and inventory (storing the stock).
-> FI-CO takes care of all the transactions related to finance.
-> The difference between Warehouse Management and Inventory: INVENTORY deals with the storage of stock at a particular storage location, i.e. we do not get detailed information about the stock within the storage location, whereas WAREHOUSE MANAGEMENT deals with detailed information about the stock, for example if we store the stock at bin (box) level in a storage location we can get that information, and also the up and down flow of the stock in the warehouse.
-> Next we are going to learn how to extract the data from all these application areas.
-> LOGISTICS means everything from the time we buy the raw materials and convert them to finished goods, through sales order, delivery, PGI, till the billing process: all these transactions come under Logistics. If we want to extract the Logistics data from SAP R/3 to SAP BW then we have to use LO EXTRACTION.
-> Example with SALES ORDER: there are some reports that we need to give to the users based on the Sales Order.
        - Based on the date, we need to know how many sales orders were created on that particular date.
-> A Business Process is whatever transactions the user is performing in the SAP R/3 system. SAP R/3 is an OLTP system.
-> All the applications we have in R/3 are divided into FI, HR and LOGISTICS.
-> We can use LO EXTRACTION only for Logistics information, i.e. from the time we buy the raw material till billing: all the transactions like Purchasing, Inventory, Material Management, Production, Quality Management, Sales Order, Delivery Doc, Billing...
-> SD, MM, PP, QM, WM - all these come under Logistics.
-> Today we will take SD as the example to work with; the steps are almost the same for the others except in some scenarios.
-> When we say "extract SD data from R/3 to BW", what are the transactions we may have?
-> The End User creates the SALES ORDERS in the R/3 system, not the SD Consultant.
- This contains Header Level, Item Level and Schedule Line Items
-> VA03 -> Display Sales Order -> Give the Order number (6107) and press enter.
SOLD-TO-PARTY - Customer who place the Order
SHIP-TO-PARTY - To whom we have to shift
-> The customer creates the purchase order and sends it to the company (Sales Executive). For this Purchase Order the Sales Executive creates the Sales Order. The Sales Order is created with respect to the Purchase Order raised by the customer.
-> When we create a Sales Order in VA01, the data is updated in different tables as shown below.
        1. VBAK - Sales Document: Header Data
        2. VBAP - Sales Document: Item Data
        3. VBEP - Sales Document: Schedule Line Data
        4. VBUK - Sales Document: Header Status and Administrative Data (to find out the header status of any sales document like Delivery, Sales Order, PO...)
        5. VBUP - Sales Document: Item Status (to find out the item status of any sales document like Delivery, Sales Order, PO...)
-> As BW consultants we need to extract the data from these tables only, since the End User enters the data through transactions.
-> In interviews there may be a few questions if we mention in our resume that we worked on extracting data from SD, MM, FI... They may ask us to explain the tables in SD, the tcodes in SD, the process flow in SD and how we extracted the data.
-> For delivery-related transactions, the tables which hold the DELIVERY data are shown below.
        1. LIKP - SD Document: Delivery Header Data
        2. LIPS - SD Document: Delivery: Item Data
-> Now we will go for BILLING. The tables involved in these transactions are listed below.
        1. VBRK - Billing Document: Header Data
        2. VBRP - Billing Document: Item Data
        3. VBRKUK - Billing Document: Header and Status Data
-> Some more useful tables in SD are listed below.
        - VBFA - Sales Document Flow table (this can help us to find out the flow of a sales document)
        Generic Extractors   - we create the DataSource ourselves
        Customer-Generated   - we generate the DataSource based on an object (e.g. Operating Concern)
        Business Content     - we replicate the DataSource (DataSource already given)
-> LO EXTRACTION comes under the Business Content extractors, i.e. the DataSource is already given and we need to replicate the DataSource in the BW system.
-> The steps for how we make our DataSource ready are different here.
-> LO stands for LOGISTICS.
-> LO EXTRACTION is used to extract the Logistics-related data from SAP R/3 to SAP BW. The application areas that come under LO are SD, MM, PP, QM, WM...
-> LO extraction was introduced to overcome the problems of LIS extraction. The main problems with LIS extraction are:
        1. Huge input/output volume. Suppose we would like to extract Sales Order data: in LIS extraction we have a single DataSource to extract the Sales Order Header, Item and Schedule Line data together. Because of this we extract more records, i.e. the input/output volume is very high, so we waste DB table space and also the time to load the huge amount of data.
        2. Degraded database performance because of the Information Structures.
        3. Degraded OLTP system performance because of V1, V2 updates.
-> LO Extractors (or DataSources) come as part of Business Content, hence they are already created by SAP and are in the Delivered version. If we need to work with them we need to install them to activate them.
-> LO extractor examples:
        Naming convention:
-> 2LIS_11_VAHDR : every LO or LIS DataSource is prefixed by '2LIS'. '11' indicates the application area (Sales Orders) the DataSource is designated for. 'VA' indicates it is Sales Order related (the tcode family) and 'HDR' indicates Header data. From this we can conclude that we are going to extract Sales Order Header Level data.
        11 - Sales Order
        12 - Delivery
        13 - Billing
        08 - Shipment
-> 2LIS_11_VAITM : this DataSource indicates we are extracting Sales Order Item Level data.
-> With this, the 1st problem of LIS extraction is overcome: LIS supports a single DataSource to extract all the levels of data, whereas LO extraction supports a separate DataSource for each level (Header, Item, Schedule) of data.
-> In generic extraction, whenever we create a DataSource the extract structure is created with it; in LO extraction the DataSource is already given by SAP, hence the extract structure is also given by SAP. In LO extraction the DataSource is defined ready-made by SAP along with its Extract Structure.
-> Naming convention of the Extract Structure: for the 2LIS_11_VAHDR DataSource the extract structure is 'MC11VA0HDR'.
        - The first 2 characters are by default 'MC', the next 2 characters indicate the application component '11', the next 2 characters indicate the event of Sales Order 'VA', '0' indicates the root component, and 'HDR' indicates the level of data. 2LIS_11_VAITM - MC11VA0ITM ...
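The naming conventions above can be sketched as a small parser. This is an illustrative Python sketch, not any SAP API; it only mirrors the rules stated in the text:

```python
# Application-component codes mentioned in the notes above.
APPLICATIONS = {"11": "Sales Order", "12": "Delivery", "13": "Billing", "08": "Shipment"}

def parse_datasource(name):
    """Split a name like '2LIS_11_VAHDR' into its parts."""
    prefix, appl, rest = name.split("_")
    assert prefix == "2LIS", "every LO/LIS DataSource starts with 2LIS"
    event, level = rest[:2], rest[2:]       # e.g. 'VA' + 'HDR'
    return {"application": APPLICATIONS.get(appl, appl),
            "event": event, "level": level}

def extract_structure(name):
    """Derive the extract-structure name: 2LIS_11_VAHDR -> MC11VA0HDR."""
    _, appl, rest = name.split("_")
    return "MC" + appl + rest[:2] + "0" + rest[2:]

print(parse_datasource("2LIS_11_VAHDR"))
print(extract_structure("2LIS_11_VAITM"))
```

Appending the word 'SETUP' to the extract-structure name gives the setup-table name described later (MC11VA0HDRSETUP).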

-> In LO extraction we use the V3 update (asynchronous with background job).
        Scenario: extract the data from 2LIS_11_VAHDR into an InfoCube.
-> 2LIS_11_VAHDR will be in the Delivered version only, since it is given by SAP (like all LO DataSources). We cannot use it in the Delivered version, so we need to install it. Installing does not convert the Delivered version itself; it makes a copy of the DataSource from the Delivered version to the Active version.
        1. Install the DataSource - RSA5
        2. In-activate the DataSource in order to maintain the Extract Structure - LBWE
        3. Maintain the Extract Structure - LBWE
        4. Generate the DataSource - LBWE
        5. Specify the Update Mode - LBWE
        6. Activate the DataSource - LBWE
        7. Replicate the DataSource - RSDS or RSA1
Step1: Install the DataSource from the Business Content (RSA5), i.e. install the Delivered version of the DataSource (2LIS_11_VAHDR) to the Active version.
        -> SBIW -> Business Content DataSources -> Transfer Business Content DataSources (or) RSA5. RSA5 contains the list of all Business Content DataSources delivered by SAP. All these DataSources will be in the Delivered version.
Step2: Now search for the DataSources that we want to extract and transfer them.
        An LO DataSource comes with a ready-made extract structure with certain fields; if we are not satisfied with the fields of the Extract Structure then we need to modify it. This process is called Maintaining the Extract Structure.
Step3: We maintain the Extract Structure in this step, with tcode LBWE.
        In LBWE we find a lot of application components, and under each application component the specific DataSources.
        Here we have 3 links: Maintenance, DataSource and Active. We can say a DataSource is active if it is in green and the link is enabled. If we need to maintain the Extract Structure then the DataSource should be in-active, so if the DataSource is active, in-activate it first.
        -> Along with the standard tables, SAP also provides a COMMUNICATION STRUCTURE (CS). For every table related to Logistics SAP has given a predefined Communication Structure: VBAK - MCVBAK, VBAP - MCVBAP, VBEP - MCVBEP...
        -> The purpose of the Communication Structure: it is used as an interface to transfer the data to SAP BW, whereas the tables physically hold the data.
        -> We click on the Maintenance link to maintain the Extract Structure in LBWE.
        -> The Maintenance screen has 2 blocks, left and right, holding the Extract Structure and Communication Structure fields respectively.
        -> All the fields in the Extract Structure which are in blue are the fields of the ready-made Extract Structure. When we move a field from the Communication Structure, it is moved to the left block and shown in black.
        -> There are N number of tables (and Communication Structures) for sales, but the system is flexible enough to show only the Communication Structures relevant to the DataSource we are working on (MCVBAK, MCVBUK)...
        -> We cannot take out the ready-made fields from the Extract Structure.
        -> But when we search for a required field and cannot find it in the Communication Structure, what can we do?
                - We go with DataSource Enhancement if we cannot find the required field in the Communication Structure either.
Step4: Generate the DataSource in LBWE. For all the fields we moved from the Communication Structure to the Extract Structure, HIDE and FIELD ONLY KNOWN IN CUSTOMER EXIT are checked by default. We have to uncheck them. If we don't uncheck HIDE then those specific fields will not be available in the TS(R3) and TS(BW).
        The INVERSION checkbox is disabled by default, since INVERSION is checked only for key figure fields. It is only used in LO extraction.
Step5: Before activating the DataSource we have to mention the update mode (Direct Delta, Queued Delta, Unserialized V3 Update). Now activate the DataSource.
        All the DataSources belonging to one application component must share a single update mode; we cannot give each DataSource a different update mode.
Step6: Now the DataSource is ready and we need to replicate the DataSource in order to reflect the changes we made in R/3 to BW. Check for the added fields.
-> In SAP R/3 we have 3 types of tables.
        1. Transparent Table - the Application Layer has 1 table and the Database has 1 table only.
        2. Pooled Table - the Application Layer has multiple smaller tables whereas the Database has only a single table.
        3. Cluster Table - the Application Layer has multiple smaller tables whereas the Database has only a single table, where the multiple tables are joined using primary and foreign key relationships.
-> In LO extraction, when we go with DIRECT DELTA we use the SETUP TABLES, which work on the concept of CLUSTER TABLES.
-> SETUP TABLES are built on the concept of CLUSTER TABLES. How does this help us?
-> In the case of SETUP TABLES we have only 1 database table storing, for example, all the Header, Item and Schedule level data, whereas in the Application Layer we view all of this information as separate tables: Header table, Item table, Schedule table...
-> The naming convention of SETUP TABLES is 'MC11VA0HDRSETUP', i.e. the Extract Structure name plus the word 'SETUP'.
-> As part of DATA MIGRATION we should concentrate on bringing all the historical data from R/3 to BW such that we do not miss the transactions created at the time of the migration. After bringing the historical data we should enable Delta for the DataSource so that it brings the updated data.
-> We bring all the historical data from R/3 to BW by running an INIT to bring the complete data, and a Delta from the next day onwards to bring the updated records.
-> If we need to run the Delta we have to run the INIT first. If we run the INIT, where does the data come from? The INIT reads from the SETUP tables, but the actual data is present in the database tables.
-> So before we run the INIT we need to fill the SETUP tables with the data first, since the SETUP tables are empty initially.
-> STATISTICAL SETUP: by running the STATISTICAL SETUP we collect the data from the database tables and place it into the SETUP tables.
-> Before running the INIT load in BW we have to fill the SETUP tables by running the STATISTICAL SETUP.
-> Suppose we are running the STATISTICAL SETUP and getting the data from the database tables; at the same time some transactions may happen on the R/3 side which are not picked up by the run. So we would have a data discrepancy.
-> To overcome this we have to do it during outage time only, which usually happens on weekends.
        - Lock the users (to avoid missing documents during the migration procedure).
        - Once the users are locked we can run the STATISTICAL SETUP. Before we perform the STATISTICAL SETUP we delete the contents of the SETUP TABLES (LBWG).
        - When we delete the SETUP tables, the data of all the levels (Header, Item, Schedule) is deleted.
        - To check whether the SETUP table data is deleted, go to SE11 -> MC11VA0HDRSETUP. The number of entries should be 0.
-> As a good practice, along with deleting the SETUP tables it is better to clear the DELTA QUEUE (RSA7) contents also.
-> Now we can schedule the STATISTICAL SETUP to collect all the documents or data from the database tables into the SETUP tables.
-> SBIW -> Settings for Application-Specific DataSources (PI) -> LOGISTICS -> ...
        - Depending on what application data we are extracting, the option we should use here will change.
-> In the STATISTICAL SETUP screen we have different options like Selections, Termination date and time, and No. of Tolerated Faulty Documents.
        - We can run the STATISTICAL SETUP based on selections like Sales Org, Company Code, Sales Doc Number.
        - Termination date and time: when the job should be terminated; usually for 2 years of data we run it for at least 12 hours.
        - No. of Tolerated Faulty Documents: the limit of error docs in the run; we give it in multiples of 10 or 100.
        - Use the Program menu at the top and choose Execute in Background.
-> Once the STATISTICAL SETUP has run successfully, we can check for the data in the setup tables.
-> The SETUP table data cannot be viewed directly; it holds cluster keys rather than plain data.
-> After completion of this process go to SAP BW and schedule the InfoPackage with the INIT update. When we run the InfoPackage, for Header data for example, it brings the data from the Header SETUP tables only.
        - Once the INIT is successful in BW, the DELTA QUEUE maintenance appears in R/3.
        - Initially the number of records in the DELTA QUEUE is 0, since there is no new data at that time. When new postings are done in R/3 the Delta Queue is filled.
-> Then we release the locks and the end users start transactions. The changed or newly created data is collected into the DELTA QUEUE.
        - Now we can run the Delta loads depending on the requirement, so the delta records are picked from the Delta Queue.
-> If we add new records (postings) in R/3, each forms an LUW with the new record. Changing an existing doc forms an LUW with a Before Image and an After Image.
-> How do we transfer the data from the DELTA QUEUE?
        - After changing or adding records we can check the data in the Delta Queue in RSA7. For example, if we changed one sales doc we find 1 record in the Delta Queue, which means 1 LUW (Logical Unit of Work). If we go and check the data entries, we find 2 entries: the Before ('X') and After (' ') images.
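The before/after-image idea above can be sketched as follows. This is a hedged illustration only: the field names `qty` and `recordmode` are stand-ins, not real SAP field names.

```python
def change_luw(old_doc, new_doc):
    """Return the two delta records one document change produces:
    a before image (indicator 'X', key figures reversed) and an
    after image (indicator ' ')."""
    before = dict(old_doc, qty=-old_doc["qty"], recordmode="X")  # before image
    after = dict(new_doc, recordmode=" ")                        # after image
    return [before, after]  # one LUW, two entries

# Sales doc 6107 changed from quantity 10 to 15:
luw = change_luw({"doc": 6107, "qty": 10}, {"doc": 6107, "qty": 15})
```

In BW an additive target then nets -10 + 15 = +5, while an overwrite target just keeps the after image.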
-> Now when we run the Delta load in BW we get the data from the DELTA QUEUE.
-> The DELTA QUEUE maintains 2 sets of data.
-> Whenever we change or add records after the INIT, the records are updated in the DELTA QUEUE, where the data is stored in 2 sets.
        - For example, if we change a Sales Doc it is updated into the DELTA QUEUE into both the Delta set and the Delta Repetition set, as an LUW with before and after images.
        - When we run the Delta the next time, it picks the data from the Delta set and clears it; the data remains maintained in the Delta Repetition set.
        - Now a new posting is made in R/3, so the data comes into the DELTA QUEUE into both the Delta and Delta Repetition sets. When we run the next Delta: if the last delta was successful, it picks the data from the Delta set; if the last delta was not successful, the system picks the previous and also the current delta records from the Delta Repetition set.
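A minimal sketch of the two sets, assuming the behaviour described above (illustrative logic, not SAP code):

```python
class DeltaQueue:
    """Delta set served to the next delta load; repetition set kept
    in case that load fails and a repeat delta is requested."""
    def __init__(self):
        self.delta, self.repetition = [], []

    def post(self, luw):                     # new/changed document arrives
        self.delta.append(luw)

    def run_delta(self, last_load_ok=True):
        if last_load_ok:
            picked = self.delta              # normal delta: delta set only
        else:
            picked = self.repetition + self.delta   # repeat delta: previous + current
        self.repetition, self.delta = picked, []    # picked data moves to repetition
        return picked

q = DeltaQueue()
q.post("LUW1")
first = q.run_delta()                        # picks LUW1, keeps it in repetition
q.post("LUW2")
repeat = q.run_delta(last_load_ok=False)     # previous + current: LUW1 and LUW2
```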
-> If we delete the DataSource in the Delta Queue (RSA7) it deletes the complete queue. When we want to delete only the contents of the queue, use SMQ1.
-> Whenever a Delta load fails, how do we run the Repeat Delta?
        -> Even though the request is red, set the request QM status to red again in the Monitor -> Status.
        -> Delete the bad request in the Manage screen of the Cube or DSO.
        -> Go to Reconstruction -> Monitor -> set the QM status to red.
        -> Go to the InfoPackage -> Schedule -> Start.
        - Now we get a dialog box saying the last delta update was not correct and asking whether to request a repeat delta.
-> Whenever we load the historical data and release the locks, the users start posting sales docs, and each newly added or changed doc forms an LUW which is updated to the DELTA QUEUE. In Direct Delta this happens with the V1 update (synchronous update): whenever the user performs a transaction and clicks save, an LUW is formed and this LUW must be updated to the Delta Queue. The cursor is kept waiting until the LUW is written to the Delta Queue, without allowing further transactions.
Serialized V3 Update:- (not used in real time)
-> In the case of Direct Delta, the delta documents are posted directly to the Delta Queue and from there we get the data.
-> But in the case of Serialized V3 Update, the delta docs are first posted into the UPDATE QUEUE using the V2 update, and then collected from the UPDATE QUEUE into the DELTA QUEUE by the V3 job. Once the records are available in the Delta Queue we can get them with a Delta load.
-> When we run the V3 job to pick the records from the Update Queue to the Delta Queue, the V3 job updates the records in a serialized way, i.e. FIFO, and also sorts them by time.
        Lock the users.
        Delete the Setup tables, the Update Queue and the Delta Queue.
                - Setup tables deletion - LBWG
                - Delete Delta Queue - RSA7
                - Delete Update Queue - SM13
        Now we schedule the Statistical Setup - OLI7BW.
        Now we go to BW and run the INIT load.
        Now postings are done on the R/3 side, so we can find the delta data; we schedule the V3 job.
Unserialized V3 Update:-
-> It is the same as the Serialized V3 Update, but the difference is that when we run the V3 job it picks the records without doing any sorting or serialization, and the records are posted from the Update Queue to the Delta Queue as they are.
-> Unserialized V3 Update is faster than Serialized V3 Update because the serialized variant takes more time to sort and post the delta records.
Queued Delta:-
-> In the case of Queued Delta we use the Extractor Queue (LBWQ) instead of the Update Queue, with the help of the V3 model call.
        Lock the users.
        Delete the Setup tables, the Extractor Queue and the Delta Queue.
                - Setup tables deletion - LBWG
                - Delete Delta Queue - RSA7
                - Delete Extractor Queue - LBWQ
        Now we schedule the Statistical Setup - OLI7BW.
        Now we go to BW and run the INIT load.
        Now postings are done on the R/3 side, so we can find the delta data; we schedule the V3 model call.
Serialized V3 Update:
-> Initially, when the SAP 3.1 version was released, SAP gave LIS extraction. Due to the drawbacks in LIS, SAP introduced LO extraction, initially with only 1 update mode, the Serialized V3 Update.
        Posting --(V2 Update)--> UPDATE QUEUE --(V3 Job)--> DELTA QUEUE
        - When the user does a posting, all the delta docs come to the Update Queue. When we schedule the V3 job it updates the records from the Update Queue to the Delta Queue. The V3 job first sorts the records in the Update Queue and, based on that sorting, picks the records into the Delta Queue.
        - Because of this we have some problems:
                1. Degraded performance of the V3 job because of sorting the LUWs based on the time of creation.
                2. Frequent failures of the V3 job.
                Update Queue            Delta Queue
                LUW1         ------>    LUW1
                LUW2         ------>    LUW2
                LUW3(Error)  ------>    (fails)
                LUW4         ------>    (waiting)
                LUW5         ------>    (waiting)
        - When we run the V3 job, LUW1 and LUW2 are processed, LUW3 is not processed since it is in error, and LUW4 and LUW5 are also not processed because of the serialization it has to maintain. So the V3 job fails.
                3. Multiple segments in the case of LUWs with multiple languages.
                Update Queue            Delta Queue
                LUW1(EN)     ------>    LUW1(EN) - Seg1
                LUW2(EN)     ------>    LUW2(EN) - Seg1
                LUW3(GE)     ------>    LUW3(GE) - Seg2
                LUW4(GE)     ------>    LUW4(GE) - Seg2
                LUW5(EN)     ------>    LUW5(EN) - Seg3
                LUW6(EN)     ------>    LUW6(EN) - Seg3
        - LUW1 and LUW2 go in one segment because they are in the EN language, LUW3 and LUW4 go in another segment, and so on. So there will be many segments and the sorting still has to be followed; because of this the performance is degraded.
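The segmentation above can be sketched with `itertools.groupby`: consecutive LUWs that share a language form one segment, so alternating languages multiply the segments. Illustrative only; segment handling inside SAP is more involved.

```python
from itertools import groupby

def segments(luws):
    """Group consecutive (luw, language) pairs into numbered segments."""
    return [(i + 1, [luw for luw, _ in group])
            for i, (_, group) in enumerate(
                groupby(luws, key=lambda pair: pair[1]))]

luws = [("LUW1", "EN"), ("LUW2", "EN"), ("LUW3", "GE"),
        ("LUW4", "GE"), ("LUW5", "EN"), ("LUW6", "EN")]
# Three segments: [LUW1, LUW2], [LUW3, LUW4], [LUW5, LUW6]
```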

-> To solve all the above limitations, SAP came out with different techniques in plug-in 2004.1:
        1. Direct Delta
        2. Queued Delta
        3. Unserialized V3 Update
Unserialized V3 Update:-
-> We use Unserialized V3 Update mainly when we are implementing the INVENTORY DataSources, because of the material movements.
-> When we extract the data using Unserialized V3 Update to SAP BW, it is mandatory to load the data directly into an InfoCube instead of going DSO and then DSO -> Cube.
-> Here we are not sorting, so most problems of the V3 job are solved, but it still has some limitations:
        1. It still degrades the performance of the OLTP system because of the V2 update: the user waits until the LUW is written to the Update Queue, so we will not be able to perform further transactions in the meantime.
        2. Unserialized V3 Update does not maintain any log of the LUWs which failed during the update.
                Update Queue            Delta Queue
                LUW1         ------>    LUW1
                LUW2         ------>    LUW2
                LUW3(Error)  ------>    (lost)
                LUW4         ------>    LUW4
                LUW5         ------>    LUW5
        - Since we are not following serialization here, LUW1, LUW2, LUW4 and LUW5 are processed to the Delta Queue from the Update Queue. After moving all LUWs it cleans up the Update Queue and we lose the error LUWs. We cannot know which LUW failed or which delta docs failed; we then have to compare R/3 and BW to find them, and do the BACK FILL again.
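The lost-LUW problem can be sketched as follows (assumed behaviour per the notes above, not SAP internals):

```python
def unserialized_v3_run(update_queue, delta_queue):
    """Move whatever can be processed, then clear the queue: failed
    LUWs are skipped and leave no trace, since no log is kept."""
    for luw in update_queue:
        if not luw.get("error"):          # failed LUWs are simply skipped
            delta_queue.append(luw)
    update_queue.clear()                  # ...and the failures are gone

uq = [{"id": 1}, {"id": 2}, {"id": 3, "error": True}, {"id": 4}, {"id": 5}]
dq = []
unserialized_v3_run(uq, dq)
# dq now holds LUWs 1, 2, 4 and 5; LUW3 is lost, and only an R/3 vs BW
# comparison (and a back fill) can recover it.
```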
Direct Delta:-
-> Because of the V1 update (when the user does a posting, the application is kept waiting until the LUW is updated into the Delta Queue) the performance of the OLTP system is degraded.
-> Here we don't have any Update Queue or V3 job; the LUWs come directly to the Delta Queue.
-> We can use Direct Delta when we are extracting Logistics data where the frequency of posting documents is very low.
        - Direct Delta is suitable only when the frequency of transactions is very low.
Queued Delta:-
-> Because of the V3 model call (which does not depend on the transaction), when we post transactions and save, the V3 model call job runs internally; the user need not wait until the LUWs are formed and updated into the Delta Queue. This does not degrade performance and it maintains logs.
-> Because of the V3 model call it improves OLTP system performance.
-> The Update Queue cannot maintain a LOG, whereas the Extractor Queue can maintain a LOG (which LUWs failed). To see the LOG: LBWF.
-> The RMBWV3* programs are used to schedule the V3 job.
Serialized V3 Update:
                  V2 Update                 V3 Job
        Posting ----------> UPDATE QUEUE ----------> DELTA QUEUE
-> When the end user does a posting, all the LUWs are collected into the Update Queue with the V2 update.
-> When we schedule the V3 job, it sorts based on date and time (FIFO) and then moves the sorted LUWs to the Delta Queue.
-> Here we can guarantee serialization into BW (this matters when the BW update behaviour is Overwrite rather than Additive). But we have some limitations:
        * Degraded performance of the V3 job.
        * Multiple languages mean multiple segments (the segments are language dependent); for each run there is one segment per language block, each with multiple LUWs.
        * The time taken for every instance must be the same.
        * Error handling is difficult.
-> Because of these limitations SAP has given 3 update modes, which we can use based on the requirement.
Direct Delta:
                  V1 Update
        Posting ----------> DELTA QUEUE
* In the case of Direct Delta it uses the V1 update and writes directly into the Delta Queue; when we run the Delta load the LUWs are picked from the Delta Queue into BW. There is no V3 job here.
* V1 update: a time-critical update method. When we do a posting and save, it is treated as time-critical and the cursor is kept in a waiting state until the records are posted into the application tables and the Delta Queue. It does not allow the user to do further postings until the records are posted into the Delta Queue. Because of this, OLTP system performance is very slow, but because of V1 the serialization is achieved.
-> We use Direct Delta in scenarios where the number of postings is very low. We can load directly into a DSO or Cube.
Unserialized V3 Update:
                  V3 Module Call              V3 Job
        Posting --------------> UPDATE QUEUE ----------> DELTA QUEUE
* In the case of UNSERIALIZED V3 UPDATE it uses the V3 module call (collective run), which updates into the Update Queue (SM13). When we schedule the V3 job, the LUWs are collected from the Update Queue and updated into the Delta Queue in segments.
* When we run the V3 job in UNSERIALIZED V3 UPDATE there is no sorting done, so we cannot guarantee the sequence of postings. The update into the Update Queue is V3, which is not time-critical at all: the V1 update runs and updates the database tables, and when there is time the system runs the V3 module call and updates the LUWs into the Update Queue. The time of posting is different from the time it is updated into the Update Queue. When we run the V3 job it picks the records from the Update Queue as they are, without sorting. The sequence of postings is not guaranteed, so we cannot load directly into a DSO.
-> This can be used in scenarios where the sequence is not required.
-> With Unserialized V3 Update we should not update data into a DSO directly because no serialization is followed.
-> We use this in INVENTORY (stock movements), where the sequence is not required. We can load directly into the Cube.
Queued Delta:
                  V1 Update                      V3 Job
        Posting --------------> EXTRACTOR QUEUE ----------> DELTA QUEUE
* It uses the V1 update, so the sequence of postings is guaranteed (time-critical update). It updates the docs into the Extractor Queue, and when we run the V3 job it picks the records from the Extractor Queue to the Delta Queue; no sorting is done here because the postings are already in sorted order in the Extractor Queue.
        -> The number of segments will be less.
        -> Serialization of postings is guaranteed.
        -> The performance of the V3 job is very good since there is no sorting.
* We use QUEUED DELTA when serialization of postings is required and the posting volume is very high.
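The three update modes above can be summarized in a small lookup. This is my own tabulation of the claims in these notes, worth double-checking against SAP documentation:

```python
# Summary of the LO update modes as described in the notes above.
UPDATE_MODES = {
    "Direct Delta": {
        "update": "V1 (synchronous, time-critical)",
        "staging_queue": None,               # LUWs go straight to the Delta Queue (RSA7)
        "serialized": True,
        "use_when": "very low posting volume",
    },
    "Unserialized V3": {
        "update": "V3 module call (collective run)",
        "staging_queue": "Update Queue (SM13)",
        "serialized": False,                 # load into a cube, not a DSO
        "use_when": "sequence not required, e.g. inventory movements",
    },
    "Queued Delta": {
        "update": "V1 (synchronous, time-critical)",
        "staging_queue": "Extractor Queue (LBWQ)",
        "serialized": True,
        "use_when": "high posting volume with serialization required",
    },
}
```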
-> Queued Delta: initialization is done and deltas are running fine. Suppose there is a new requirement to add an extra field to the extractor. How will we achieve this?
        - Set up downtime, clean up the queues, and delete the Setup table data. Lock the users and make sure no postings are done. Suppose the Extractor Queue has 100 records: run the V3 job so the records are moved into the Delta Queue. Run the Delta load; the delta records are loaded into BW from the Delta Queue, but the records will still be available in Delta Repetition. To clean those up as well, run the delta again: it fetches no records but cleans up the Delta Repetition in the Delta Queue too. Now there are no records in either the Delta Queue or the Extractor Queue.
        - Now we add the new fields to the extractor in Dev, move the TR to Quality, and replicate the DataSource in order to reflect the newly added fields.
        - If we want the historical data for the new fields, run the Statistical Setup; otherwise release the locks and run the Delta to bring the updated data for the newly added fields as well.
-> Did you work with LO Extraction? Yes.
-> What are the different update modes available in LO Extraction? Direct Delta, Queued Delta and Unserialized V3 Update.
-> In the case of Direct Delta, the LUWs are posted directly to the Delta Queue, and when we run the Delta load the data comes from the Delta Queue to BW. In the case of Unserialized V3 Update, the LUWs are updated into the Update Queue (SM13) through the V3 module call; when we run the V3 job the LUWs from the Update Queue are updated into the Delta Queue, and when we run the Delta load the data is updated into BW. In the case of Queued Delta, when the postings are done they are updated into the Extractor Queue (LBWQ) through the V1 update; when we schedule the V3 job the data from the Extractor Queue is updated into the Delta Queue, and when we run the Delta load the data is updated into BW.
-> What are the V1, V2 and V3 updates?
        - V1 is synchronous updating, which is a time-critical update. V2 is asynchronous updating, which is non-time-critical. V3 is asynchronous with background execution, i.e. V3 is a collective run which runs in the background, where multiple LUWs from the Update or Extractor Queue are updated as a single LUW in the Delta Queue.
        - When the end user does a posting, the system has to update data into different application tables as well as different queues. It prioritizes the updating of tables and queues in the form of V1, V2 and V3 updates. Everything under V1 is updated at the time of posting itself, i.e. the cursor is kept waiting until the data is updated into those tables and queues.
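The "collective run" idea above can be sketched as follows (an illustrative data model, not SAP internals): many LUWs waiting in the Update or Extractor Queue are bundled into a single LUW for the Delta Queue.

```python
def v3_collective_run(queue):
    """Merge all pending LUWs (each a list of records) into one LUW
    for the Delta Queue, then clear the staging queue."""
    merged = {"records": [rec for luw in queue for rec in luw]}
    queue.clear()
    return merged

extractor_queue = [["rec1"], ["rec2", "rec3"], ["rec4"]]
luw = v3_collective_run(extractor_queue)
# One LUW carrying records rec1..rec4; the staging queue is now empty.
```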
-> CO-PA comes as part of the Customer-Generated DataSources.
-> In Generic extraction we create the DataSource ourselves. In Business Content the DataSource is given ready-made. In Customer-Generated we generate the DataSource based on some object (the Operating Concern).
-> Finance-related transactions are taken care of by FI-CO (FINANCE - CONTROLLING).
-> We cannot extract the data related to Finance using LO extraction.
-> FINANCE is mainly used to record all the finance and cash transactions. The transactions and reports coming out of FINANCE are used by external users (shareholders, tax authorities...). The reports coming out of CONTROLLING are used by internal users like finance managers.
-> FI will have sub-modules like:
        FI-AR - Accounts Receivable (this deals with Customers): amount received from customers, amount still to be received, Invoice Pending (Aging) reports.
        FI-AP - Accounts Payable (this deals with Vendors).
        FI-GL - General Ledger (this deals with the General Ledger, related to all finance transactions).
        FI-AA - Asset Accounting (this deals with Assets).
        FI-SL - Special Ledger (in different countries they may maintain different GLs; one among them is the Special Ledger).
-> CO will have different sub-modules like CCA, PCA, PC and CO-PA.
-> In FI there are readily available Business Content DataSources for FI-AR, FI-AP, FI-GL and FI-AA. But we don't have a Business Content DataSource for FI-SL, so we have to generate the DataSource for FI-SL.
-> In CO there are readily available Business Content DataSources for CCA, PCA and PC, but we don't have a Business Content DataSource for CO-PA, so we have to generate the DataSource for CO-PA.
-> So FI-SL and CO-PA come as part of the Customer-Generated DataSources.
-> When we want to extract data for CO-PA, is there any Business Content DataSource available? Why do we need to explicitly generate the DataSource?
        - The DataSource for CO-PA is based on the 'OPERATING CONCERN'. The Operating Concern is the topmost legal entity of CO-PA, and its structure is not fixed: it differs with the specific client requirements. The characteristic fields and value fields differ according to the client requirements, so we cannot fix the Extract Structure, and therefore we cannot fix the DataSource. Hence we generate the DataSource for CO-PA based on the Operating Concern.
        - In BW the InfoCube is the main object. In the same way, when the FI-CO consultant works with the CO-PA sub-module he deals with a main object, the 'OPERATING CONCERN', the topmost legal entity or item of CO-PA.
        - Just as the InfoCube structure (char and key fig) differs based on client requirements, the Operating Concern the consultant creates in CO-PA also has characteristic fields and value fields which differ based on the requirement.
-> This is the Top most Legal Entity in CO-PA. An Operating Concern will have 2 types of fields:
        1. Characteristic fields (Char in BW)
        2. Value Fields (Keyfig in BW)
-> An Operating Concern can be created with only a 4-char technical name.
-> Just like in BW, where the char and keyfig have to be created before the InfoCube, in order to create an Operating Concern we need to first create the Characteristic fields and Value fields in R/3.
-> For ex we have got an operating concern like this :
        SalesDocNumber  CustNum  MatNum  Year  |  COGS  Netsales  Totalvalue  BillingQuantity
        <----------- Char Fields ------------>   <-------------- Value Fields -------------->
-> Just like in BW, where creating an InfoCube internally creates Fact and Dimension tables at the database level, when the FI-CO consultant creates an Operating Concern in R/3, internally 4 tables are created at the database level. The table names are: CE1xxxx, CE2xxxx, CE3xxxx, CE4xxxx (xxxx = the operating concern name).
-> CO-PA can be used for PLANNING also in R/3, i.e. CO-PA will have both actual and planning transactions. So the Operating Concern is going to show both actual and planning transactions.
-> Why will CO-PA have both actual and planning transaction data? In R/3 the end users can do planning on CO-PA, so the operating concern contains both.
-> The user will have planning layouts, and when the user enters the planning data it will come and update into the CO-PA operating concern.
-> We cannot create the actual transactions in CO-PA directly by Tcodes, like in SD, MM.
        - CO-PA is an Integration Module (Application). We will not have any actual transactions directly posted in the CO-PA operating concern.
        - When we perform some transactions in other modules like SD, MM.. they are reflected as actual transactions in the CO-PA Operating Concern.
        - All the modules in SAP are integrated with CO-PA.
-> The Operating Concern will have both actual and planned transactions data, i.e. the FICO consultant will create the Char fields and Value fields and use them to create the Operating Concern. Whenever the Operating Concern is created, automatically 4 tables are created at the database level: CE1xxxx, CE2xxxx, CE3xxxx, CE4xxxx. This Operating Concern is going to hold both actual and planned transactions. It holds the planned transactions because we can do planning in CO-PA. No one posts the actual transactions data directly, because CO-PA is an integrated module: whenever transactions are performed in other modules the actual data is automatically populated into CO-PA. This is how data gets generated in CO-PA.
-> How can we say whether the data is an Actual or a Planned Transaction in the Operating Concern...?
        - Based on the field VALUE TYPE in the Operating Concern we can differentiate between the Actual and Planned data.
-> VERSION field: Actuals will have only one version, since they will not change, but for Planned data the Version field will change.
-> Whenever we are building BEx Reports on CO-PA we always have to restrict the keyfigures with VALUE TYPE and VERSION to indicate whether Planned or Actual.
        - V-Type Reporting: When we build reports on CO-PA we always restrict the keyfigures with Value Type and Version, and this type of reporting is called V-Type Reporting (CO-PA Reporting).
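As a rough illustration of what this restriction does to the data, here is a minimal Python sketch. The row layout is invented; the values '10' (actual) and '20' (plan) follow the notes above.

```python
def vtype_restrict(rows, value_type, version=None):
    """Restrict key-figure rows the way a BEx restriction on VALUE TYPE
    (and, for plan data, VERSION) would."""
    return [r for r in rows
            if r["value_type"] == value_type
            and (version is None or r["version"] == version)]

rows = [
    {"value_type": "10", "version": "000", "netsales": 100},  # actual
    {"value_type": "20", "version": "001", "netsales": 120},  # plan, version 1
    {"value_type": "20", "version": "002", "netsales": 130},  # plan, version 2
]
actuals = vtype_restrict(rows, "10")
plan_v1 = vtype_restrict(rows, "20", "001")
```

Actuals need only the Value Type; plan data additionally needs the Version, because several plan versions can coexist.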
-> Actual transactions data will be more detailed and Planned data will be more aggregated.
-> Now our task is to extract this data from CO-PA Operating Concern and load it
into the Cube in BW.
        CE1xxxx - The CE1 table holds only the Actual Line Item data.
        CE2xxxx - The CE2 table holds the Plan Line Item data.
        CE4xxxx - The CE4 table is the Segment table - all the characteristics are here, with the Segment Number as the key (primary).
        CE3xxxx - The CE3 table is the Segment Level table - the Segment Number is the foreign key here and the other fields are value fields.
-> CE4xxxx and CE3xxxx can be considered as the Dimension and Fact tables of BW.
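The segment-level split can be pictured with a small Python sketch. The table layouts and field names below are invented for illustration, not the real CE3xxxx/CE4xxxx structures; they only show the dimension/fact-like join over the segment number.

```python
# "CE4"-style segment table: one row per characteristic combination,
# keyed by a segment number (plays the role of a dimension table).
segments = {
    1: {"customer": "C100", "material": "M200", "region": "NORTH"},
    2: {"customer": "C101", "material": "M201", "region": "SOUTH"},
}

# "CE3"-style segment level: value fields per segment and period
# (plays the role of a fact table; 'segment' is the foreign key).
segment_values = [
    {"segment": 1, "period": "2023001", "netsales": 1000.0, "cogs": 600.0},
    {"segment": 2, "period": "2023001", "netsales": 500.0, "cogs": 300.0},
]

def report(values, segments):
    """Join each value row to its characteristic combination."""
    return [{**segments[row["segment"]], **row} for row in values]

rows = report(segment_values, segments)
```

The join over the segment number is exactly how a fact table is resolved against its dimensions in BW.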
-> Whichever extraction is considered, our ultimate goal is to make the DataSource ready with an Extract Structure in order to extract the data. Once the DataSource is ready, we need to replicate the DataSource in BW and activate the Transfer Rules, which in turn activates the Transfer Structure in the R/3 system, and then load the data from the DataSource to the DataTargets (DSO, CUBE) and do the reporting. Only the way we design the DataSource differs.
Step1: goto KEB0 -> 1_CO_PA%CL%ERK
        - Every time we create a CO-PA DataSource, by default the datasource name is prefixed with '1_CO_PA'; %CL%ERK can be changed (CL - Client, ERK - Description).
        - The maximum length of the CO-PA datasource name is 19 characters.
        - KEB0 can be used to create, display or delete CO-PA datasources.
        - Give the Operating Concern and select whether we are creating the CO-PA datasource as COSTING-BASED or ACCOUNT-BASED.
        - If we are extracting the CO-PA data for a Product based company then we prefer COSTING-BASED. If we are extracting the CO-PA data for a Service based company then we go for ACCOUNT-BASED.
        - In case of Costing based, it includes all types of costs like advertisement costs, distribution costs, factory costs, operating costs.. into our data.
        - If we select Account based, it does not include all the cost types.
        - The second difference: when we create the CO-PA datasource (Account based -> Create), we find many fields. These fields come from the Operating Concern which we selected at the time of creation. Here we have Short, Medium and Long descriptions and also the field name for Partitioning. By default all the fields are selected, and we may not be required to extract all the fields that are available. So we go to the command bar and type '=init', whereupon all the fields are unselected except 3 fields: CO AREA, COMPANY CODE, COST ELEMENT (Mandatory Fields).
- When we are generating the CO-PA datasource with Account Based
CO AREA, COMPANY CODE, COST ELEMENT are the Mandatory fields.
- When we are generating the CO-PA datasource with Costing Based
only COMPANY CODE is Mandatory.
- Field name for partitioning: Here it specifies at what Segment
level we want the data to be analysed. Depending on the requirement and Seggreg
we want to do then we will go for Field Name Partitionin
g. In real time we will go with either COMPANY CODE or PRODUCT(MATERIAL).
- Select the Fields which are required.
Step2: Check for any Inconsistencies.
Step3: goto Infosource menu -> InfoCatalog
        - Now you will have the Extract Structure with the fields to Select and Hide. Save the DataSource, and the CO-PA data source is generated successfully.
Step4: Now replicate the CO-PA DataSource in SAP BW. When we replicate the DataSource, BW reads the metadata of the DataSource from the Source System (R/3).
-> In real time we will load the CO-PA data into a DSO first, and then from the DSO we move the data to the InfoCube.
-> When we run the INIT it brings the data from the Operating Concern into the BW Queue.
-> How does the Delta work in CO-PA...?
        - In CO-PA the Delta works based on the TimeStamp.
        - For CO-PA there is no Statistical SetUp or anything; when we run the INIT load it picks up the records from the tables through the Extract Structure and Transfer Structure interfaces, and they come into the BW Queue.
        - The TimeStamp has both Date and Time: 20071230093456 (YYYYMMDDHHMMSS).
        - We can see TimeStamp related info for CO-PA DataSources in 'KEB2'.
-> Once the INIT is successful in CO-PA during the Data Migration, we have to wait a minimum of 30 minutes before running the next Delta. This is called the 'SAFETY DELTA'.
        - This is because the R/3 system takes some time to set up the TimeStamps on postings; the 30-minute safety interval ensures such in-flight records are not missed.
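The timestamp delta with its 30-minute safety interval can be sketched as follows. This is a simplified model, not the actual Service API logic: records are just (timestamp, payload) pairs, and the stored timestamp advances only to the safe upper bound.

```python
from datetime import datetime, timedelta

SAFETY_INTERVAL = timedelta(minutes=30)   # the fixed CO-PA safety delta

def delta_extract(records, last_ts, now):
    """Pick records posted after the last delta run, but stop 30 minutes
    short of 'now' so postings still being timestamped are not missed.
    records: list of (timestamp, payload). Returns (payloads, new last_ts)."""
    upper = now - SAFETY_INTERVAL
    picked = [p for ts, p in records if last_ts < ts <= upper]
    return picked, upper

now = datetime(2007, 12, 30, 10, 0, 0)
recs = [(datetime(2007, 12, 30, 9, 0), "posting A"),
        (datetime(2007, 12, 30, 9, 45), "posting B")]  # inside safety window
picked, new_ts = delta_extract(recs, datetime(2007, 12, 30, 8, 0), now)
# "posting B" is left for the next delta run
```

Because the new stored timestamp is the upper bound (not `now`), the record inside the safety window is picked up cleanly on the next run instead of being skipped.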
-> Since the Delta runs based on the TimeStamp, if the Delta fails how do we correct it....?
        - Suppose we are running the Delta for the 2nd time at 10AM and it fails to bring the delta records, but the last delta timestamp is still set to 10AM. If we now run the delta again, it will bring the delta records from the last delta timestamp onwards and we will miss the records of the failed delta. To overcome this problem, before we run the next delta we go to R/3, Tcode 'KEB5', and change the timestamp back to the last successful delta before the error. We call this 'REALIGNMENT OF TIMESTAMP' (changing of the timestamp).
-> We can change the timestamp of the CO-PA DataSource in 3 different ways:
        1. From a Different DataSource - we can mention a different datasource whose timestamp we want to take over.
        2. From Date and Time - we can mention a date and time.
        3. Direct Entry - directly give the timestamp.
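What the realignment achieves can be sketched like this. It is a toy model of the bookkeeping only (the real correction is done manually in KEB5): the stored timestamp moves forward only when the delta run succeeds.

```python
def run_delta(state, extract, now):
    """Advance the stored delta timestamp only on success; on failure
    put it back (what the manual KEB5 realignment achieves), so the
    failed window is re-read next time and no records are skipped."""
    previous = state["last_ts"]
    try:
        data = extract(previous, now)
        state["last_ts"] = now        # delta succeeded: move forward
        return data
    except RuntimeError:
        state["last_ts"] = previous   # realignment of timestamp
        return []

state = {"last_ts": 8}

def failing_extract(lo, hi):
    raise RuntimeError("delta failed")

def good_extract(lo, hi):
    return ["rec"]
```

If the failure instead left the timestamp at `now` (as described above for CO-PA), the window between the two runs would be lost, which is exactly why the manual realignment exists.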
Up to and including Plug-In Release PI2003.1, a DataSource is only defined in th
e current client of the R/3 System. This means that a DataSource can only be ext
racted from this client. The DataSource has a timestamp for the delta method, an
d this timestamp is only valid for the current client. This timestamp is managed
by Profitability Analysis.
With Plug-In Release PI2004.1 (Release 4.0 and higher), timestamp management wa
s converted to a new method, called generic delta. This method works in connecti
on with an SAP BW system with Release 2.0 and higher. With this method, timestam
p management is no longer performed by Profitability Analysis, but instead by t
he Service API (interface on the R/3 side between Profitability Analysis and SAP
BW). Exclusively in
Release 3.1I, Profitability Analysis continues to support timestamp management.
Compared to timestamp management in Profitability Analysis, the generic delta a
llows for several enhancements:
o You can apply the delta method simultaneously using the same DataSource
from more than one R/3 System client because a separate timestamp is saved fo
r each logical system.
o You can apply the delta method for the same R/3 System client simultaneo
usly using the same DataSource from several SAP BW systems.
o You can perform several initializations of the delta method with differe
nt selections using the same DataSource from a given SAP BW system for the sam
e R/3 System client.
o The DataSource commands the Delta Init Simulation mode. With timestamp
management in Profitability Analysis, this mode had to be implemented using
the Simulate Delta Method Initialization function
(see SAP Note 408366).
For more information on the generic delta, see Delta Transfer, whereby the st
eps of the Specify Generic Delta for a DataSource section are performed automat
ically for Profitability Analysis when a DataSource is
created. For this, the field determining the delta is taken as the timestamp
for Profitability Analysis (TIMESTMP), and the timestamp is stored for summariz
ation levels and line item tables. However, in contrast to generic DataSources,
the TIMESTMP field is not generated in the extraction structure because this i
s not necessary for DataSources in Profitability Analysis. As with timestamp management in Profitability Analysis, an upper limit of 30 minutes is set as the safety interval.
You find the timestamp of a DataSource for the delta method in the current lo
gical system either in the Header Information for the DataSource using the IMG
activity Display Detailed Information on DataSource or using the IMG activity C
heck Delta Queue in the Extractor IMG. The timestamp is shown here when you choose the selection button in the Status column for the combination of DataSource and SAP BW system.
DataSources created after implementing PI2004.1 automatically apply the new m
ethod. DataSources that were created in Plug-In releases prior to PI2004.1 stil
l continue to use timestamp management in Profitability
Analysis but can be converted to the generic delta. For this, an additional s
election option Convert to Generic Delta appears in the selection screen of the
IMG activity Create Transaction Data DataSource when a DataSource with timestamp management in Profitability Analysis is entered. Conversion from the generic delta to timestamp management in Profitability Analysis is not supported. Conversion is only possible for DataSources that are defined in the current client of the R/3 system and for which the delta method has already been successfully initialized or for which a delta update has successfully been performed. This is the case once the DataSource has the replication status Update successful. Furthermore, no realignments should have been performed since the last delta update.
For the conversion, the timestamp for the current R/3 System client is transfer
red from Profitability Analysis into the timestamp of the generic delta. In this
way, the transition is seamless, enabling you to continue to perform delta upda
tes after the conversion. If delta updates are to be performed from different R/
3 System clients for this DataSource, you first need to initialize the delta met
hod for these clients.
The conversion must be performed separately in each R/3 System because the time
stamp information is always dependent on the current R/3 System and is reset dur
ing the transport. If, however, a converted DataSource is inadvertently transpor
ted into a system in which it has not yet been converted, delta extraction will
no longer work in the target system because the timestamp information is deleted
during the import into the target system and is not converted to the timestamp
information of the generic delta. If in this instance no new delta initializatio
n is going to be performed in the target system for the DataSource, you can exec
ute program ZZUPD_ROOSGENDLM_FROM_TKEBWTS from SAP Note 776151 for the DataSourc
e. This program reconstructs the current time stamp information from the informa
tion for the data packages transported thus far and enters this time stamp infor
mation into the time stamp information for the generic delta. Once this program
has been applied, delta extraction should work again. Normally, however, you sho
uld ensure during the transport that the DataSource uses the same logic in the s
ource system and the target system.
After the conversion, the DataSource must be replicated again from the SAP BW
system. A successful conversion is recorded in the notes on the DataSource, whic
h you can view in the IMG activity Display Detailed Information on DataSource. S
ince the generic delta does not offer any other log functions apart from the tim
estamp information (status: Plug-In Release PI2004.1), Profitability Analysis st
ill logs the delta initialization requests or delta requests. However, the infor
mation logged, in particular the timestamps, only has a statistical character be
cause the actual timestamp management occurs in the generic delta. Since the del
ta method can be performed simultaneously for the same R/3 System client using
the generic delta from several SAP BW systems, the information logged is store
d for each logical system (in R/3) and SAP BW system. When a delta initializati
on is simulated, only the timestamp of the generic delta is set; Profitability
Analysis is not called. Consequently, no information can be logged in this case
. Messages concerning a DataSource are only
saved for each logical system (in R/3). You can use the IMG activity Display
Detailed Information on DataSource to view the information logged.
Another enhancement from Plug-In Release PI2004.1 is that, in the Extractor Checker of the Service API, you are no longer limited to full updates for DataSources that have recently been created or converted to the generic delta. The following update modes are possible:
o F - Full update: Transfer of all requested data
o D - Delta: Transfer of the delta since the last request
o R - Repeat transfer of a data package
o C - Initialization of the delta transfer
o S - Simulation of the initialization of the delta transfer
In the case of all update modes other than F, you have to specify an SAP BW s
ystem as the target system so that the corresponding timestamp and/or selection
information for reading the data is found. The Read
only parameter is set automatically and indicates that no timestamp informati
on is changed and that Profitability Analysis does not log the request.
Update mode I (transfer of an opening balance for non-cumulative values) is n
ot supported by Profitability Analysis.
-> Just like CO-PA, FI-SL is also an Integration application. We can do planning in FI-SL the same as in CO-PA.
-> In FI-SL we generate the DataSource based on 'TABLE GROUP'.
-> Table Group: Table Group is nothing but group of 5 tables.
1. Summary Table(T)
2. Actual line item table(A)
3. Plan line item table(P)
4. Object table 1 (object/partner)(O)
5. Object table 2 (transaction attributes) (optional)(C)
-> In real time FICO consultant will create the table Group.
-> SPRO -> Define table Group -> YGRC create
-> Once the Table Group is created they will create the Special Ledger.
-> A Ledger is defined as a subset of the summary table.
Step1: SBIW -> Settings for Application Specific DataSources -> Financial Accounting Special Purpose Ledger -> Generate Transfer Structure for Totals Table
Step2: Give the Summary Table name. (When we execute this, internally it creates the Extract Structure for the Summary Table.)
Step3: Generate the DataSource. Every FI-SL datasource is prefixed with '3FI_SL' and the Ledger name is appended at the end, i.e. 3FI_SL_BW
        - Here activate the Delta Check Box. Now the Extract Structure is generated for the DataSource.
Step4: Once the DataSource is generated (generating the Extract Structure), replicate the DataSource in BW.
-> The Delta Mechanism of this Extractor is 'Pseudo Delta'.
        - Normally when we extract data from FI-SL we get actual and planned data. The concept here is that the extractor supports TimeStamp delta for actuals, but for planned data it does not support TimeStamp delta, i.e. Actuals are Delta and Plan should always be Full.
        - So here we create 2 InfoPackages, one for Delta (actuals) and the other for Plan data. How do we identify plan and actuals? We have a field called 'Value Type' for reporting: all records where Value Type is '10' are Actuals and Value Type '20' is Plan data.
        - We create one InfoPackage with data selection Value Type '10' (Actual data) and another InfoPackage with Value Type '20' (Planned data).
        - Plan always runs as Full Update, and the Actuals run with INIT and Delta later.
        - But when we are doing the Full loads we will get duplicates of the Planned data. To overcome this we need to delete the existing requests where the selections overlap, and then run the Full Load. When we delete the data in the Cube with the same overlapping selections before a repeated Full load, we call this 'PSEUDO DELTA'. We do these settings in the InfoPackage.
        - Depending on the planning data frequency we load the plan data either Monthly, Quarterly, Halfyearly..
        - Even though there is a TimeStamp based Delta mechanism for the actuals, we may go for Full Load only, in order not to miss any records while loading.
        - Daily we run the InfoPackage with Value Type 10, full load, current month selection and the delete-overlapping-request setting. But on the 1st day of the next month we have to run the InfoPackage once more with the previous month's selections, so that the complete previous month's data comes, plus one more InfoPackage run for the current month. This is how the actuals data will be extracted in FI-SL.
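The "delete overlapping request, then full load" pattern can be sketched as follows. This is a toy model: a request here is just a selection (value type, month) plus its rows, standing in for an InfoPackage request in the cube.

```python
def pseudo_delta_load(target, new_request):
    """'Pseudo delta': before a repeated full load, delete the requests
    whose selections overlap the new one, then load the new request.
    target: list of requests, each {'selection': ..., 'rows': [...]}."""
    sel = new_request["selection"]
    target[:] = [r for r in target if r["selection"] != sel]  # delete overlap
    target.append(new_request)                                # full load

# Plan data (value type '20') for one month, loaded again the next day:
cube = [{"selection": ("20", "2023-01"), "rows": [10, 20]}]
pseudo_delta_load(cube, {"selection": ("20", "2023-01"), "rows": [12, 22]})
```

Without the delete step the January plan rows would be doubled after the second full load; with it, the cube always holds exactly one request per selection.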
-> Delta Mechanism of FI-SL is 'Pseudo Delta'.
-> Pseudo Delta means Fake Delta.
-> When we are extracting the planned data the DataSource will support only Full
Update. But when we are extracting the Actual Data DataSource will support
Delta Update.
        - FI-SL does not support delta for Planned data, i.e. we extract planned data with Full and Actuals with INIT and then Delta. Why does FI-SL planned data not support Delta? The planned data keeps on changing, because the previous planned data might change based on the existing Actuals (rolling forecast). Hence it does not support Delta.
-> In LO extraction the Delta is maintained by programs which keep track of the changed records into the Delta Queue.
-> In CO-PA delta works based on the TimeStamp.
Steps to be followed for FI-GL Extraction:
1. Identify the appropriate datasource and get the data flow diagram fro
m help.sap.com
2. Install the relevant BW objects in BI content.
3. Activate the DataSource and Replicate the datasource to BW.
4. Model the data flow.
5. Load the data into data targets
-> The Data Flow is: DataSource(0FI_GL_10) -> DSO(0FIGL_O10) -> InfoCube(0FIGL_C10).
-> Here we are going to consider mainly on InfoCube(0FIGL_C10), DSO(0FIGL_O10) a
nd DataSource(0FI_GL_10).
-> RSA1 -> InfoProviders -> Search or check whether the InfoProvider is already activated or not. If it is not available under InfoProviders the objects are not installed, so we go to the BI Content tab and install the required objects.
-> Search for 0FIGL_C10; it will be present in the General Ledger Accounting InfoArea -> GL Accounting New -> 0FIGL_C10.
-> Before we drag and drop the InfoCube object, we first cross-check the settings in the Installation pane: Grouping as 'DATA FLOW BEFORE', Collection mode as 'AUTOMATIC' and Display as 'LIST'. Now drag the object into the 3rd pane and allow it to collect the necessary objects.
-> Now we can find the necessary objects that are required for Data Flow Before installation, like Application Components, InfoArea, InfoCube, InfoObjects, Communication Structure, 3.x DataSource, InfoPackage, Transfer Rules, 3.x InfoSource, Transfer Structure, Source System, DSO, Routines, InfoSources, Transformations, Update Rules.
-> Here we will install only the InfoArea, InfoObjects, InfoCube and DSO and exclude all the others from installing, because most of them are not required since they are 3.x flow, and the remaining ones like the IP, DTP, Transformations.. we will build manually, because we may get errors if we install all of them.
-> To exclude an object from installation, select the object and choose 'Do Not Install the Objects Below'.
-> Now we select the Objects that are required for Installation and then 1st sim
ulate the Installation for any errors. If we don't find any errors then
we can go ahead and install the objects. Now the Info Objects are ready
for use.
-> Now for Extraction we perform the following steps.
-> RSA5 -> Select the DataSource(0FI_GL_10) -> Activate the DataSource
-> To check whether the DataSource is activated, goto RSA6 and check for the datasource we activated. We can also look at the details of the DataSource in the repository RSA2 (DataSource Repository).
        - Here we can find the details of the DataSource like Name, Type of Extraction and also the Delta Process (AIED), which means that it supports After Image Delta extraction, where we have to handle the Before and Reverse images explicitly by using a Standard DSO in between. If the Delta Extraction is ABR, i.e. After, Before and Reverse images, then we can directly load the data into the Cube.
-> Now go to RSA1 -> DataSources -> Replicate Metadata in order to make the DataSource available in the BW system.
        - Here we notice a small square symbol on the data source, which means that the datasource is 3.x flow compatible, so we have to migrate it to 7.x so that we can use it in the 7.x flow with Transformations and DTPs instead of Transfer Rules and Update Rules.
        - Right click on the data source and click Migrate; we are asked for migration With or Without Export. If we select Without Export we will not be able to migrate the datasource back to the 3.x version at a later stage, and if we select With Export then we have the option to migrate back to 3.x.
        - In our case we go with Without Export to make it simple. Once it is migrated the square is gone.
-> Now we have to map the fields between the DataSource and the DataTargets. But before we perform the mapping or create the transformations, it is good to check for sample data in the datasource, so that we can see what fields and what type of data is coming, to analyze further.
-> So we create the InfoPackage. This is the initial load, so we have to be careful since we get a lot of data when we extract; we give some selections if provided.
        - Full Update
        - Run the InfoPackage and check the sample data.
-> Now the data is available in the DataSource; next we load the data into the data targets.
-> Create a Transformation for the DSO, selecting the object type as DSO and the source as the ECC DataSource. Now we map the fields appropriately.
-> Once the mapping is done, an important point is the RULE GROUP: we have done the mapping in the STANDARD GROUP, but we also have to map the Technical Group for 0RECORDMODE. Once we are done with mapping all the Rule Groups, activate it.
-> Now create the DTP and load the data from PSA to DSO.
-> Now we load the data from DSO into InfoCube by creating the Transformation an
d DTP.
We had existing 0FI_GL_1 & 0FI_GL_4 flows and we implemented the new GL totals flow, i.e. 0FI_GL_10...
FAGLFLEXT --> 0FI_GL_10 --> 0FIGL_O10 --> 0FIGL_C10.... the new GL Totals implementation was quite smooth, since this flow is completely different from the old GL totals (GLT0 --> 0FI_GL_1 --> 0FIGL_C01).
We recreated the existing queries (on 0FIGL_C01) on 0FIGL_C10 (&V10) and used jump targets (RRI) to the old line item DSO (0FIGL_O02) wherever required...
In your case, you can go ahead with the new GL line items (FAGLFLEXA & BSEG & BKPF) --> 0FI_GL_14 --> 0FIGL_O14 in parallel with the existing old one (BSEG & BKPF) --> 0FI_GL_4 --> 0FIGL_O02.
-> How do we extract any other application specific data like FI-AP, FI-AR, FI-GL, FI-AA, PCA, CCA..? We have business content datasources readily available. If the datasource is readily available then we just need to install the Business Content and replicate the DataSource.
How to extract the data from CostCentreAccounting(CCA):
-> We can extract any business content data using the below procedure.
0CO_OM_CCA_9 - To Extract CostCentreAccounting
Step1: Install the Business Content Extractor (DataSource) in RSA5. The Business Content datasource will come with a readymade Extract Structure.
        - 0CO_OM_CCA_9 - Cost Centers: Actual Costs Using Delta Extraction
Step2: We can customize the Extract Structure in RSA6, if we are not satisfied with the Extract Structure given, by selecting the fields from the Communication Structure.
        - We may find scenarios where some fields are not selected and the selection checkbox is disabled, so we are not able to select them, but we want those fields to be selected.
        - Normally, to have a field enabled for selection we go to RSA6 change mode and select the required field for selection. But if that field is disabled for selection, how can we handle such a scenario....?
                - Goto SE11 -> ROOSFIELD (this table has info about each and every field of every DataSource). Here we need to change the field's value in the standard database table: in the Selection field give the parameter 'P'. Now the selection checkbox is enabled and, if we want, we can select the field.
        - In LO, if we are not satisfied with the fields in the Extract Structure, the 1st option is to try to add the fields from the Communication Structure in LO. Only if we don't find the field in the Communication Structure either do we go for DataSource Enhancement as the 2nd option.
        - In Non-LO extraction, if we don't find the required fields in the Extract Structure, we directly go for datasource enhancement.
Step3: Then we replicate the DataSource in BW.
-> When we are extracting the data from R/3 to BW, how can we know, based on the DataSource, whether we have to load to the Cube directly or go DSO -> Cube...?
        - Check the Delta Process field of the DataSource (ex: ABR). Take this value and look it up in the table RODELTAM (ex: ABR = After, Before and Reverse images), and check whether the Before Image indicator is set ('X').
        - Any datasource with 'ABR' means we can directly load the data into the Cube, because the Before Image cancels the old record and we need not worry about duplicate records.
        - Any datasource with 'AIE' gives us only the After Image, which indicates that we have to load to a DSO first and then from DSO -> Cube.
        ROOSOURCE - This table gives all the details of the DataSources in the R/3 system.
        RODELTAM - BW Delta Process (this table gives the before and after image options and also whether we can load the data directly to the Cube or must go via a DSO).
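The decision described above can be summarised in a small sketch. This is a simplified rule of thumb only; the real metadata lives in ROOSOURCE/RODELTAM, and the codes used here are the two discussed in the notes.

```python
def first_target(delta_process):
    """Pick the first data target from the DataSource's delta process
    (simplified: only the codes discussed above are modelled)."""
    if delta_process == "ABR":   # After, Before and Reverse images
        return "InfoCube"        # images cancel out, cube load is safe
    if delta_process == "AIE":   # After images only
        return "DSO"             # overwrite in a Standard DSO first
    return "DSO"                 # conservative default: stage in a DSO

targets = {dp: first_target(dp) for dp in ("ABR", "AIE")}
```

Staging in a DSO is the safe default because its overwrite behaviour absorbs repeated after images, which an additive cube cannot do.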
SAP R3 Extraction:
-> We need to have the Source System connection between R3 and BW systems in ord
er to extract the data from Source System.
-> Difference between Flat File and R/3 Extraction: in the case of a flat file we generate the datasource in BW, whereas in R/3 extraction we generate/create/activate the datasources in the R/3 system and then replicate the datasource into the BW system.
-> There are 2 different types of extractors when we extract the Master Data. They are:
        1. Generic Extractors
        2. Business Content Extractors.
-> Only if there is no readymade Business Content Extractor do we go for a Generic Extractor for the Master Data.
Business Content Extraction for Master Data:
0material - omsl
gl account
chart of accounts
sales group
sales office
sales region
doc type
doc category
cost center
cost element
controlling area
customer sales
mat plant
Extraction through Function Module:
-> The basic difference between Extracting the data from Table/View/Infoset is
in this methods the Extract Stucture is created by system where if we create
the datasource from Function Moduel we have to create the Extrac
t Structure and Function Module also.
-> Create the Extract Structure (SE11)
-> Create the Function Module (SE37)
-> Create the DataSource (RSO2)
-> Creating the Function Module - SE37 or SE80
        - Create a Function Group in SE37, similar to RSAX, with
                - Top Include
                - Second Include
-> Create the Function Module by using the template Function Module RSAX_BIW_GET_DATA_SIMPLE.
-> Customize the code in the Function Module as per the Requirements.
-> Here we have to take care with the coding, since there is a possibility of records being duplicated because of the number of extractor calls (the 'Display Extr. Calls' setting in RSA3).
        - For example, if we are extracting the data from the VBAK table and the table contains only 6450 records, the function module has to extract only 6450 records. But suppose in RSA3 we set 'Display Extr. Calls' to 10 and 'Data Records / Call' to 100000; then each of the 10 calls can bring up to 100000 records. We have only 6450 records, which will be extracted in the first call itself, so the data of the next calls will be duplicated and the performance will also be degraded. We cannot control the packet size in ECC, but in the InfoPackage we can set the number of records per data packet, and based on those settings the function module has to extract the data without any duplications.
-> To overcome this problem we have to use 'CURSOR' concept and control the reco
rds fetched per data package.
-> Also we have one more problem is based on selection we cannot control the rec
ords extractions.
-> The packet size problem is with Function Moduel only where as with other meth
ods of Generic Extraction the generated API programs will take care.
-> How can we control the data packet size in Generic Extraction by Function Module?
By using the 'CURSOR' concept we can control the size of the packet.
-> We go for Function Module extraction in scenarios where some complex logic needs
        to be implemented that we cannot achieve with a Table/View or InfoSet.
-> Packet size is handled with the MAXSIZE parameter and OPEN CURSOR, FETCH NEXT
CURSOR and CLOSE CURSOR. All the selections are passed in I_T_SELECT; on the
initialization call we collect them into a global buffer parameter (variable), and
when the packet counter is zero we build range variables from those global
selections and use those range variables in the WHERE clause.
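The cursor-based packetization described above can be sketched as follows, based on the standard template function module RSAX_BIW_GET_DATA_SIMPLE (function group RSAX). This is a sketch only: the extract structure ZOXBW_VBAK and the globals G_T_SELECT/G_S_SELECT (assumed to be declared in the TOP include) are illustrative assumptions, not the exact code of any standard extractor.

```abap
FUNCTION z_biw_get_data_vbak.
*"  IMPORTING
*"     VALUE(I_REQUNR)   TYPE SRSC_S_IF_SIMPLE-REQUNR
*"     VALUE(I_DSOURCE)  TYPE SRSC_S_IF_SIMPLE-DSOURCE OPTIONAL
*"     VALUE(I_MAXSIZE)  TYPE SRSC_S_IF_SIMPLE-MAXSIZE OPTIONAL
*"     VALUE(I_INITFLAG) TYPE SRSC_S_IF_SIMPLE-INITFLAG OPTIONAL
*"  TABLES
*"     I_T_SELECT TYPE SRSC_S_IF_SIMPLE-T_SELECT OPTIONAL
*"     I_T_FIELDS TYPE SRSC_S_IF_SIMPLE-T_FIELDS OPTIONAL
*"     E_T_DATA STRUCTURE ZOXBW_VBAK OPTIONAL
*"  EXCEPTIONS
*"     NO_MORE_DATA
*"     ERROR_PASSED_TO_MESS_HANDLER

* Statics survive between the repeated calls for one request
  STATICS: s_counter TYPE sy-tabix,
           s_cursor  TYPE cursor.
  RANGES:  l_r_vbeln FOR vbak-vbeln.

  IF i_initflag = sbiwa_c_flag_on.
*   Initialization call: buffer the selections in the global variable
    APPEND LINES OF i_t_select TO g_t_select.
  ELSE.
    IF s_counter = 0.
*     First data call (packet counter zero): build ranges from the
*     buffered selections and open the cursor once
      LOOP AT g_t_select INTO g_s_select WHERE fieldnm = 'VBELN'.
        MOVE-CORRESPONDING g_s_select TO l_r_vbeln.
        APPEND l_r_vbeln.
      ENDLOOP.
      OPEN CURSOR WITH HOLD s_cursor FOR
        SELECT vbeln erdat auart netwr FROM vbak
               WHERE vbeln IN l_r_vbeln.
    ENDIF.
*   Each call fetches exactly one packet of I_MAXSIZE records,
*   so no records are duplicated across calls
    FETCH NEXT CURSOR s_cursor
          APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
          PACKAGE SIZE i_maxsize.
    IF sy-subrc <> 0.
      CLOSE CURSOR s_cursor.
      RAISE no_more_data.
    ENDIF.
    s_counter = s_counter + 1.
  ENDIF.
ENDFUNCTION.
```

Because the cursor is opened once and each FETCH returns at most I_MAXSIZE rows, the 6,450-record scenario above would finish in the first packet and the second call would simply raise NO_MORE_DATA instead of duplicating data.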
-> Most of the standard extractors are built on the Function Module concept only.
-> As you said, we search for a standard DataSource (d/s) and if one is found it is
always advisable to use it, as far as your requirement is fulfilled.
        But sometimes you have to go ahead with a generic d/s, as not all table/view
data is readily available through a standard d/s.
        I would like to give you a live example from our current project, where
we had to fetch data from EKKO, EKPO, EKKN and EKBE.
        There are standard d/s: for EKKO & EKPO we used 2LIS_02_ITM, and for EKKN we
used 2LIS_02_ACC. But for EKBE, i.e. purchasing history, there is no standard d/s, so
we created a generic d/s on the EKBE table using T-code RSO2 in the source system.
        Views are used to join different tables: say you need data from 2 different
tables and a unique key relationship exists between them, then you create the d/s on a view.
        To extract open deliveries: the DataSource 2LIS_12_VCITM is available as
part of the Business Content extractors and delivers all the delivery item level
information. So, to extract only open deliveries, we built a generic DataSource with
a function module. We used 3 conditions to identify an open delivery:
        A delivery is considered open if it has no PGI (post goods issue) document.
        A delivery is considered open if it has PGI documents, but the PGI document
does not have a billing document.
        A delivery is considered open if it has PGI documents with a billing
document, but the billing document is not yet posted.
        We have a database table LIPS which contains all delivery item level
information. So we collect all the records from this LIPS table and cross-check
whether a given delivery is an open delivery or not. If it is an open delivery, we
insert the record into E_T_DATA (internal table).

LIPS - Delivery item level information.
VBFA - Sales document flow.
VBRP - Billing document item data.
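The three open-delivery conditions could be coded along these lines inside the function module. This is a sketch under assumptions: the document-flow category values (VBFA-VBTYP_N 'R' for a goods movement, 'M' for an invoice), the use of VBRK-RFBSK for the billing posting status, and all variable names are illustrative; a real implementation might use the status tables VBUK/VBUP instead.

```abap
* Assumed declarations: lt_lips/ls_lips filled from LIPS,
* e_t_data is the extractor's output table.
DATA: lt_flow  TYPE STANDARD TABLE OF vbfa,
      ls_flow  TYPE vbfa,
      lv_rfbsk TYPE vbrk-rfbsk,
      lv_open  TYPE abap_bool.

LOOP AT lt_lips INTO ls_lips.
  lv_open = abap_false.

* Condition 1: no PGI document in the document flow
  SELECT * FROM vbfa INTO TABLE lt_flow
         WHERE vbelv = ls_lips-vbeln AND vbtyp_n = 'R'.
  IF sy-subrc <> 0.
    lv_open = abap_true.
  ELSE.
*   Condition 2: PGI exists, but no billing document follows
    SELECT * FROM vbfa INTO TABLE lt_flow
           WHERE vbelv = ls_lips-vbeln AND vbtyp_n = 'M'.
    IF sy-subrc <> 0.
      lv_open = abap_true.
    ELSE.
*     Condition 3: billing document exists but is not yet posted
      READ TABLE lt_flow INTO ls_flow INDEX 1.
      SELECT SINGLE rfbsk FROM vbrk INTO lv_rfbsk
             WHERE vbeln = ls_flow-vbeln.
      IF lv_rfbsk <> 'C'.   " 'C' = posting document created (assumption)
        lv_open = abap_true.
      ENDIF.
    ENDIF.
  ENDIF.

  IF lv_open = abap_true.
*   Open delivery: pass the LIPS record to the output table
    MOVE-CORRESPONDING ls_lips TO e_t_data.
    APPEND e_t_data.
  ENDIF.
ENDLOOP.
```

For volume data, the per-row SELECTs would normally be replaced by FOR ALL ENTRIES reads on VBFA/VBRK before the loop.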
-> We have a reporting requirement where the business wants to analyze based on
Industry Sector. For this we have to append 2LIS_11_VAITM with a field MBRSH
(Industry Sector) from MARA table.
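After appending the field to the extract structure (commonly via an append structure with a ZZ field), it is filled in the transaction-data user exit: enhancement RSAP0001, function EXIT_SAPLRSAP_001, include ZXRSAU01. A sketch, assuming the appended field is named ZZMBRSH on the extract structure MC11VA0ITM of 2LIS_11_VAITM (the per-row SELECT SINGLE is for clarity; a buffered or FOR ALL ENTRIES read performs better):

```abap
* Include ZXRSAU01 (enhancement RSAP0001, EXIT_SAPLRSAP_001).
* Assumption: ZZMBRSH was appended to extract structure MC11VA0ITM.
DATA: ls_va0itm TYPE mc11va0itm.

CASE i_datasource.
  WHEN '2LIS_11_VAITM'.
    LOOP AT c_t_data INTO ls_va0itm.
*     Look up the Industry Sector of the material from MARA
      SELECT SINGLE mbrsh FROM mara INTO ls_va0itm-zzmbrsh
             WHERE matnr = ls_va0itm-matnr.
      MODIFY c_t_data FROM ls_va0itm.
    ENDLOOP.
ENDCASE.
```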
Inventory Data Loading (MM) to BW PRD system:
We were performing Inventory data loading to BW PRD (BI 7.0) system from ECC 6.0
and I would like to share the pointers I prepared during the activity.
There are 3 DataSources for Inventory;
2LIS_03_BX (Stock Initialization for Inventory Management)
2LIS_03_BF (Goods Movements from Inventory Management)
2LIS_03_UM (Revaluations) - captures revaluation data.
We have used Un-serialized V3 as the update mode for inventory in LBWE.
Before starting with extraction, log in to ECC PRD and check the number of records in
MKPF - Header: Material Document and BKPF - Accounting Document Header. MKPF is the
source table for 2LIS_03_BF and BKPF is for 2LIS_03_UM. This will give you an idea of
the time window required for initialization. We have to have a predefined downtime.
(We had a downtime of 48 hours.) Take care not to lock the user that performs the
data loading. (Yet this still happens!) The whole data loading procedure took around
60 hrs.
Check all the relevant objects are transported correctly to PRD.
Step 1: Lock the users, all users who may do a posting to the stock will have
to be locked.
Step 2: In ECC-PRD, in table TBE11 maintain entry for NDI and BW as active.
Step 3: Delete the setup tables in LBWG for inventory (Application 03).
Step 4: Initialize stock using T-code MCNB. Give the name of the run, a future
termination date, and 2LIS_03_BX as the transfer structure. F9 - execute in
background. In SM37, check the status of the job RMCBINIT_BW. (When the status became
Finished, we had around 3.36 lakh (336,000) records and it took around 13 minutes.)
Step 5: Fetch these records to BI through an InfoPackage for DataSource 2LIS_03_BX.
This InfoPackage will have 'Generate Initial Status' as the update mode. Execute the
DTP with extraction mode 'Initial Non-Cumulative for Non-Cumulative Values' and fetch
the records into the InfoCube 0IC_C03. Once successful, compress the request without
setting "No Marker Update" (checkbox empty).
Step 6: This step was started in parallel to Step 5, after fetching the records to
PSA. Initialize material movements with T-code OLI1BW for 2LIS_03_BF. Give the name
of the run and a future termination date; select 'Based on posting date' and give
31.12.9999 as the posting date, as it is possible to make postings with a future
date. Execute in background. In SM37, check the job status for the job RMCBNEUA.
(We had around 70 million records and it took around 12 hours for the job to finish.)
Step 7: This step was run in parallel to initializing material movements. Initialize
revaluations in OLIZBW. Make similar selections as above and give the company code.
Run in background; check in SM37 the status of the job RMCBNERP. This setup table run
has to be executed for every company code you have. (We had 5 company codes, so we
executed the run 5 times, selecting one company code after the other: select 1
company code and execute; when it finishes, select the next company code and execute,
and so on. We had around 6 lakh (600,000) records and it took around 6.5 hrs.)
Step 8: Fetch these records to BI through InfoPackages. (We had a Full load and then
an INIT without data transfer. The Full InfoPackage was scheduled in background.)
Step 9: The users were unlocked after initialization. In theory, after filling the
setup tables we can unlock the users, as we have used the Un-serialized V3 update.
Execute the DTP and fetch the records to the InfoCube. Compress the request with
"No Marker Update" set (checkbox checked).
Step 10: Schedule the delta jobs. In LBWE, in Job Control, give the start date as
immediate, select the periodic job as daily, give the print parameters (select the
printer), and select 'Schedule job'. Check in SMQ1. Schedule the delta InfoPackages.
Issues we faced:
1. The records added for stock initialization in the Inventory cube were almost
double the records transferred. This was due to 'stock in transit'. Such movements
really consist of two movements: (1) the goods issue from the 'stock in transit' and
(2) the goods receipt in the storage location. To reflect this correctly, an
additional line for BW is created. Refer to SAP Note 415542.
2. While loading revaluation records to the Inventory cube we got the error 'Fiscal
year variant not processed'. This was rectified by maintaining the fiscal year
variant in SPRO -> SAP NetWeaver -> Business Intelligence -> Settings for BI Content
-> Retailing -> Set Fiscal Year Variant. Then we reloaded the data to the cube.
To Improve the Performance of Delta Queue:
Indexing the below tables will considerably improve the delta queue performance and
the delta load performance into BW.