
The Three Layers of SAP BW

SAP BW has three layers:

• Business Explorer: As the top layer in the SAP BW architecture, the Business
Explorer (BEx) serves as the reporting environment (presentation and analysis)
for end users. It consists of the BEx Analyzer, BEx Browser, BEx Web, and BEx
Map for analysis and reporting activities.

• Business Information Warehouse Server: The SAP BW server, as the middle
layer, has two primary roles:

• Data warehouse management and administration: These tasks are handled by the
production data extractor (a set of programs for the extraction of data from R/3
OLTP applications such as logistics and controlling), the staging engine, and the
Administrator Workbench.
• Data storage and representation: These tasks are handled by the InfoCubes in
conjunction with the data manager, Metadata repository, and Operational Data
Store (ODS).

• Source Systems: The source systems, as the bottom layer, serve as the data
sources for raw business data. SAP BW supports various data sources:

• R/3 Systems as of Release 3.1H (with Business Content) and R/3 Systems prior
to Release 3.1H (SAP BW regards them as external systems)
• Non-SAP systems or external systems
• mySAP.com components (such as mySAP SCM, mySAP SEM, mySAP CRM,
or R/3 components) or another SAP BW system.

SAP Data Warehouse


We have large amounts of historical sales data stored on our legacy system (i.e.
multiple files with 1 million+ records). Today the users rely on custom-written programs
and the Focus query tool to generate sales-type reports.

We want to retire that existing legacy system and need to find a home for
the data and the functionality to access and report on that data. What options does
SAP offer for data warehousing? How does it affect the response of the SAP
database server?

We are thinking of moving the data onto a scalable NT server with a large amount
of disk (10 GB+) and using PC tools to access the data. In this environment, our
production SAP machine would perform weekly data transfers to this historical
sales reporting system.

Has anybody implemented a similar solution, or does anyone have ideas on a good
approach to this issue?

You may want to look at SAP's Business Information Warehouse. This is their answer to
data warehousing. I saw a presentation on this last October at the SAP Technical
Education Conference and it looked pretty slick.

BIW runs on its own server to relieve the main database from query and report
processing. It accepts data from many different types of systems and has a detailed
administration piece to determine data source and age. Although the Information System
may be around for some time, it sounded like SAP is moving towards the Business
Information Warehouse as a reporting solution.

Ever heard about apples and oranges? SAP R/3 is an OLTP system, whereas BIW
is an OLAP system. LIS reports cannot provide the functionality provided
by BIW.

-----Reply Message-----
Subject: Business Information Warehouse

Hello,

The following information should give you more clarity on the subject:
SAP R/3 LIS (Logistics Information System) consists of infostructures (which
are representations of reporting requirements). Whenever a relevant event (goods
receipt, invoice receipt, etc.) takes place in an SAP R/3 module, a corresponding
entry is made in the infostructures. Thus the infostructures form the database part
of the data warehouse. For reporting on the data (using OLAP features such as
drill-down, ABC analysis, graphics, etc.), you can use SAP R/3 standard analysis
(or flexible analysis), Business Warehouse (which is Excel-based), or Business
Objects (a third-party product that can interface with SAP R/3 infostructures via
BAPI calls).

In short, the infostructures (which are part of SAP R/3 LIS) form the data
basis for reporting with BW.
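
To make the mechanism above concrete, here is a minimal Python sketch of a posting event producing an infostructure entry; the structure, fields, and event types are invented for illustration and are not actual LIS layouts.

```python
# Conceptual sketch only: a posting event adds a record to an infostructure
# if it is relevant to that structure. Names and fields are made up.

infostructure = []  # stands in for a custom LIS infostructure table

def post_event(event):
    """Append a summarized entry when the event type is relevant."""
    relevant_types = {"goods_receipt", "invoice_receipt"}
    if event["type"] in relevant_types:
        infostructure.append({
            "period": event["posting_date"][:6],   # e.g. monthly bucket
            "material": event["material"],
            "quantity": event["quantity"],
        })

post_event({"type": "goods_receipt", "posting_date": "20000115",
            "material": "MAT-100", "quantity": 25})
print(infostructure)  # [{'period': '200001', 'material': 'MAT-100', 'quantity': 25}]
```
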
Use of manual security profiles with SAP BW

Our company is currently on version 3.1H and will be moving to 4.6B in late
summer 2000. Currently, all of our R/3 security profiles were created
manually. We are also in the stage of developing and going live with the
add-on Business Warehouse (BW) system. For consistency, we wish
to use manual profiles within the BW system and later convert all of our
manual security profiles (R/3 and BW) to generated ones.

Is there anyone else that can shed any light on this situation? (Success
or problems with using manual security profiles with BW?)

You are going to have fun doing this upgrade. The 4.6B system is a
completely different beast from the 3.1H system. You will probably find a
lot of areas where you have to extend your manually created profiles to
cover new authorisation objects (but then you can run into this at any level).

In 4.6b you really have to use the profile generator, but at least there is
a utility to allow you to pick up your manually created profile and have it
converted to an activity group for you. This will give you a running start
in this area, but you will still have a lot of work to do.

The fact that you did not use PG at 3.1h will not matter as it changed at
4.5 too and the old activity groups need the same type of conversion (we
are going through that bit right now).

What Is SPRO In BW Project?


1) What is SPRO?
2) How is it used in a BW project?
3) What is the difference between the IDoc and PSA transfer methods?

1. SPRO is the transaction code for the Implementation Guide, where you can make
configuration settings.
* Type SPRO in the transaction box and you will get the screen Customizing:
Execute Project.
* Click on the SAP Reference IMG button. You will come to the Display IMG screen.
* The following path will allow you to make the configuration settings:
SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business Information
Warehouse.

2. SPRO is used to configure the following settings:

* General settings, such as printer settings, fiscal year settings, ODS object settings,
authorisation settings, and settings for displaying SAP documents.
* Links to other systems: such as links between flat files and BW systems, between R/3 and
BW and other data sources, and links between the BW system and Microsoft Analysis
Services or Crystal Enterprise.
* UD Connect settings: such as configuring the BI Java Connectors, establishing the RFC
destination for SAP BW for the J2EE Engine, and installing availability monitoring for
UD Connect.
* Automated processes: such as settings for batch and background processes.
* Transport settings: such as the source system name change after transport and
creating the destination for import post-processing.
* Reporting-relevant settings: such as BEx settings and general reporting settings.
* Settings for Business Content, which is already provided by SAP.

3. PSA (Persistent Staging Area): a holding area for raw data. It contains detailed
requests in the format of the transfer structure. It is defined per DataSource
and source system, and is therefore source-system dependent.

IDocs (Intermediate Documents): data structures used as API working storage for
applications that need to move data into or out of SAP systems.
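
To illustrate the difference between the two transfer methods, here is a small, purely conceptual Python sketch; the field and segment names are invented and do not correspond to real SAP structures.

```python
# PSA: records are stored 1:1 in the layout of the transfer structure,
# keyed by request, data packet, and record number (names illustrative).
psa_record = {
    "request": "REQU_0001",
    "data_packet": 1,
    "record_no": 42,
    "transfer_structure": {"material": "MAT-100", "plant": "1000", "qty": 25},
}

# IDoc: the same business data wrapped in an intermediate document with
# control, data, and status records, used to move data in or out of SAP.
idoc = {
    "control": {"idoc_type": "ZSALES01", "sender": "R3PRD", "receiver": "BWPRD"},
    "data": [{"segment": "E1ZSALES", "fields": {"material": "MAT-100", "qty": 25}}],
    "status": [{"code": "03", "text": "Data passed to port"}],
}

print(psa_record["transfer_structure"], idoc["data"][0]["fields"])
```
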

SAP BW Tips by : Viji

Data load in SAP BW


What is the strategy to load, for example, 500,000 entries into BW (material master,
transactional data)?

How do you split these entries into small packages and transfer them to BW automatically?

Is there some strategy for that?

Is there some configuration for that?

See OSS Note 411464 (an example concerning info structures for purchasing documents)
to create smaller jobs in order to integrate a large amount of data.

For example, if you wish to split your 500,000 entries into five intervals (a sketch of the
interval split follows the list):

- Create 5 variants of RMCENEAU, one for each interval
- Create 5 jobs (SM36) that execute RMCENEAU, one for each variant
- Schedule your jobs
- You can then see the result in RSA3
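
As a sketch of how the 500,000 entries could be cut into five equal intervals (one per variant and job), assuming a contiguous document-number range; the range below is hypothetical.

```python
# Sketch: cut a document-number range into equal intervals, one per variant/job.
def make_intervals(low, high, parts):
    """Return (from, to) document-number pairs covering [low, high]."""
    size = (high - low + 1) // parts
    intervals = []
    start = low
    for i in range(parts):
        end = high if i == parts - 1 else start + size - 1
        intervals.append((start, end))
        start = end + 1
    return intervals

# e.g. documents 4500000000..4500499999 split for 5 variants of RMCENEAU
for i, (lo, hi) in enumerate(make_intervals(4500000000, 4500499999, 5), 1):
    print(f"Variant {i}: document {lo} to {hi}")
```
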

Loading Data From a Data Target


Can you please guide me through carrying out this activity with some important steps?

I have a few requests without the data mart status. How can I use only those
and create an export DataSource?

Can you please tell me how my data mechanism will work after the loading?

Follow these steps:

1. Select the source data target (in your case, X) and, in its context menu, click Create
Export DataSource. A DataSource (InfoSource) named 8<name of data target> will be
generated.

2. In the Modelling view, click Source Systems, select the logical source system of your
BW server, and in its context menu click Replicate DataSource.

3. In Data Modelling, click InfoSources and search for the InfoSource 8<name of data
target>. If it is not found in the search, refresh. If it is still not found, select InfoSources
again in the right-hand window and, in the context menu, click Insert Lost Nodes.
Search again and you will definitely find it.

4. Now go to the receiving data targets (in your case, Y1, Y2, Y3) and create update rules.
In the next screen, select the InfoCube radio button and enter the name of the source data
target (X). Click the Next Screen button (Shift+F7), select the Addition radio button, then
select the Source Key Field radio button and map the key fields from the source cube to the
target cube (see the sketch after these steps).

5. In Data Modelling, click InfoSources, select the InfoSource you replicated earlier,
and create an InfoPackage to load the data.
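
For step 4, the Addition update with key-field mapping effectively means that records coming from the source data target X are summed up per mapped key before landing in Y1/Y2/Y3. This is only a conceptual Python sketch with made-up field names, not the actual update-rule logic.

```python
# Conceptual sketch of the "Addition" update rule: records from the source
# data target are aggregated by the mapped key fields. Names are illustrative.
from collections import defaultdict

source_cube_x = [
    {"material": "MAT-100", "calmonth": "200001", "sales_qty": 10},
    {"material": "MAT-100", "calmonth": "200001", "sales_qty": 5},
    {"material": "MAT-200", "calmonth": "200001", "sales_qty": 7},
]

def additive_update(records, key_fields, key_figure):
    totals = defaultdict(float)
    for rec in records:
        key = tuple(rec[f] for f in key_fields)
        totals[key] += rec[key_figure]
    return dict(totals)

# Key fields of X mapped 1:1 onto the keys of Y1/Y2/Y3
print(additive_update(source_cube_x, ("material", "calmonth"), "sales_qty"))
# {('MAT-100', '200001'): 15.0, ('MAT-200', '200001'): 7.0}
```
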

Difference in number of data records


-----Original Message-----
Subject: Difference in number of data records

Hello,

I have uploaded data from R/3 to BW (Controlling DataSources).

The problem is that when I use the extractor checker (RSA3) in R/3 for a
specific DataSource (0CO_OM_OPA_1), it shows me that there are 1600 records.

When I load this DataSource in BW, it shows me that there are 400.000
records. I'm uploading data to "PSA only".
Any ideas why this is happening?

Thanks

-----Reply Message-----
Subject: RE: Difference in number of data records

Check the 'data recs/call' and 'number of extract calls' parameters in
RSA3. Most likely the actual extract is only making one call with a larger
data recs/call number. The extraction process collects data records
with the same key, so less data has to be transferred to BW. When you
run RSA3 you are probably getting similar records (that would normally
be collected together) in different data packets, thereby creating more records. Try
running RSA3 with a much higher recs/call (e.g. 2000) for several calls.
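
The packaging effect can be sketched as follows: identical keys are only collapsed within one data packet, so a small recs/call setting produces more (less aggregated) records. Purely illustrative, made-up data.

```python
# Sketch: records with the same key collapse only within a data packet,
# so smaller packets yield more summarized records overall.
from collections import Counter
from itertools import islice

raw = [("CC1000", "2000.01")] * 6 + [("CC2000", "2000.01")] * 4  # 10 line items

def extract(records, recs_per_call):
    it = iter(records)
    total = 0
    while True:
        packet = list(islice(it, recs_per_call))
        if not packet:
            break
        total += len(Counter(packet))   # collapse identical keys per packet
    return total

print(extract(raw, 2))     # small packets  -> 5 summarized records
print(extract(raw, 2000))  # one big packet -> 2 summarized records
```
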

Upgrade Steps For BW


Upgrade steps:
1) Convert the data classes of InfoCubes. Set up a new data class as described in SAP OSS
Note 46272.
2) Pay attention to the naming convention. Execute the RSDG_DATCLS_ASSIGN
report.
3) Run the report RSUPGRCHECK to activate objects.
4) Upgrading ABAP and Java in parallel may cause issues. If there is no custom
development on the J2EE instance, it is recommended to drop the J2EE instance and
re-install the latest J2EE instance after the upgrade.
5) Apply SAP OSS Note 917999 if you include Support Package 6 with the upgrade, or
you will get an error in the PARCONV_UPG phase. When you upgrade to a product
containing Basis 700 and have included Basis Support Packages up to package level 6
(SAPKB70006) in the upgrade, the phase PARCONV_UPG terminates: the job
RDDGENBB_n terminates with the error RAISE_EXCEPTION, the syslog contains the
short dump TSV_TNEW_PAGE_ALLOC_FAILED, and the program SDB2FORA
terminates in the statement "LOOP AT stmt_tab". An endless loop occurs when the
system creates the database table QCM8TATOPGA.
6) Apply OSS Note 917999 if you get the following error during the PARCONV_UPG
phase: PARCONV_UPG terminates with TSV_TNEW_PAGE_ALLOC_FAILED.

Critical OSS Notes:


819655 - Add. Info: Upgrade to SAP NW 2004s ABAP Oracle
820062 - Oracle Database 10g: Patchsets/Patches for 10.1.0
839574 - Oracle Database 10g: Stopping the CSS services ocssd.bin
830576 - Parameter recommendations for Oracle 10g
868681 - Oracle Database 10g: Database Release Check
836517 - Oracle Database 10g: Environment for SAP Upgrade
853507 - Usage Type BI for SAP NetWeaver 2004s
847019 - BI_CONT 7.02: Installation and Upgrade Information
818322 - Add. Info: Upgrade to SAP NW 2004s ABAP
813658 - Repairs for upgrades to products based on SAP NW 2004s AS
855382 - Upgrade to SAP SEM 6.0
852008 - Release restrictions for SAP NetWeaver 2004s
884678 - System info shows older release than the deployed one
558197 - Upgrade hangs in PARCONV_UPG
632429 - The upgrade strategy for the add-on BI_CONT
632429 - The upgrade strategy for the add-on FINBASIS
570810 - The upgrade strategy for the add-on PI_BASIS
632429 - The upgrade strategy for the add-on SEM-BW
069455 - The upgrade strategy for the add-on ST-A/PI
606041 - The upgrade strategy for the add-on ST-PI
632429 - The upgrade strategy for the add-on WP-PI

Post-upgrade steps:


1. Read the post-installation steps documented in the BW Component Upgrade Guide.
2. Apply the following SAP OSS Notes:
847019 - BI_CONT 7.02: Installation and Upgrade Information
558197 - Upgrade hangs in PARCONV_UPG, XPRAS_UPG, SHADOW_IMPORT_UPG2
836517 - Oracle Database 10g: Environment for SAP Upgrade
3. Install the J2EE instance as an add-in to ABAP for BI 7.0 and apply the Support
Package equivalent to the ABAP Support Package.
4. Run SGEN to recompile programs.
5. Install the kernel patch.
6. Missing variants (also part of the test script): look at the RSRVARIANT table in SE16.
If it is empty, you will definitely need to run the RSR_VARIANT_XPRA program (see OSS
Notes 953346, 960206, and 1003481).
7. Troubleshoot authorizations: OSS Note 820183 - New authorization concept in BI.

SAP R/3 BW Source and SID Table


R/3 Source Table.field - How To Find?
What is the quickest way to find the R/3 source table and field name for a field
appearing on the BW InfoSource?

By: Sahil

With some ABAP knowledge you can find this information:

1. Start ST05 (SQL trace) in R/3.
2. Run RSA3 in R/3 for just a few records.
3. After RSA3 finishes, stop the SQL trace in ST05.
4. Analyze the SQL statements in ST05.

You can find the tables this way, but the process doesn't help, e.g., for the LO Cockpit
DataSources.
Explain the tables and SID tables.

A basic cube consists of a fact table surrounded by dimension tables. SID tables link these
dimension tables to the master data tables.

A SID is a surrogate ID generated by the system. The SID tables are created when we create
a master data InfoObject. In the SAP BW star schema, a distinction is made between two
self-contained areas: the InfoCube, and the master data tables/SID tables.

The master data does not reside in the star schema but in separate tables which are
shared across all the star schemas in SAP BW. A numeric ID is generated which connects
the dimension tables of the InfoCube to the master data tables.

The dimension tables contain the DIM ID and the SIDs of the relevant InfoObjects. Using
the SID, the attributes and texts of a master data InfoObject are accessed.

The SID table is connected to the associated master data tables via the characteristic key.

SID tables are like pointers in C.
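
A hedged sketch of the SID idea in Python: master data values get a numeric surrogate ID, and the dimension row stores only that ID (plus the DIM ID), while the attributes live in a shared master data table. The table layouts are heavily simplified.

```python
# Simplified sketch of surrogate IDs (SIDs) linking dimensions to master data.
from itertools import count

sid_table = {}      # characteristic value -> SID
master_data = {}    # SID -> attributes (shared across all cubes)
_sid_counter = count(1)

def get_sid(value, attributes):
    """Return the surrogate ID for a characteristic value, creating it if new."""
    if value not in sid_table:
        sid = next(_sid_counter)
        sid_table[value] = sid
        master_data[sid] = attributes
    return sid_table[value]

# Loading a fact row: the dimension row keeps only the DIM ID and SIDs
customer_sid = get_sid("C-4711", {"country": "DE", "industry": "Retail"})
dimension_row = {"dim_id": 1, "customer_sid": customer_sid}
fact_row = {"dim_id": 1, "revenue": 1200.0}

# A query joins fact -> dimension -> SID table -> master data
print(master_data[dimension_row["customer_sid"]])  # {'country': 'DE', 'industry': 'Retail'}
```
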

The details of the tables in BW (a naming sketch follows the list):

Table prefix - Description:

M - View of the master data table
Q - Time-dependent master data table
H - Hierarchy table
K - Hierarchy SID table
I - SID hierarchy structure
J - Hierarchy interval table
S - SID table
Y - Time-dependent SID table
T - Text table
F - Fact table - direct data for the cube (B-tree index)
E - Fact table - compressed cube (bitmap index)
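
Assuming the common /BI0/ (SAP-delivered) and /BIC/ (customer) naming pattern, the prefixes above combine with the InfoObject name roughly as in this sketch; treat it as an assumption and check the generated names in SE11 on your system.

```python
# Sketch of the usual table-name pattern, assuming SAP-delivered InfoObjects
# (0...) use the /BI0/ namespace and customer InfoObjects use /BIC/.
def master_data_tables(infoobject):
    custom = not infoobject.startswith("0")
    namespace = "/BIC/" if custom else "/BI0/"
    base = infoobject if custom else infoobject[1:]   # 0MATERIAL -> MATERIAL
    return {prefix: f"{namespace}{prefix}{base}" for prefix in "MQHKIJSYT"}

print(master_data_tables("0MATERIAL")["S"])   # /BI0/SMATERIAL (SID table)
print(master_data_tables("ZCUSTGRP")["T"])    # /BIC/TZCUSTGRP (text table)
```
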

Explain what primary and secondary indexes are.

When you activate an object, say an ODS/DSO, the system automatically generates an index
based on the key fields; this is the primary index.

In addition, if you wish to create more indexes, they are called secondary indexes.
The primary index is distinguished from the secondary indexes of a table. The primary
index contains the key fields of the table and a pointer to the non-key fields of the table.
The primary index is created automatically when the table is created in the database.

You can also create further indexes on a table. These are called secondary indexes. This is
necessary if the table is frequently accessed in a way that does not take advantage of the
sorting of the primary index for the access. Different indexes on the same table are
distinguished with a three-place index identifier.

Let's say you have an ODS and the primary key is defined as Document Number and Cal_day.
These two fields ensure that the records are unique, but let's say you frequently want to run
queries where you select data based on the Business Area and Document Type. In this case,
we could create a secondary index on Business Area and Doc Type. Then when the query runs,
instead of having to read every record, it can use the index to select just the records that
contain the Business Area and Doc Type values you are looking for.

Just because you have a secondary index, however, does not mean it will be used or
should be used. This gets into the cardinality of the fields you are thinking about
indexing. For most databases, an index must be fairly selective to be of any value. That is,
given the values you provide in a query for Business Area and Doc Type, if the query would
retrieve a very small percentage of the rows from the table, the database probably should use
the index; but if it would result in retrieving, say, 40% of the rows, it is almost always better
to just read the entire table. A rough sketch of this rule of thumb follows.
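
The rule of thumb can be sketched roughly as follows; the 10% threshold is only a placeholder, since real optimizers rely on cost models and up-to-date statistics.

```python
# Rough selectivity estimate: assume values are evenly distributed and the
# predicates are independent. The threshold is a placeholder, not a DB rule.
def estimate_selectivity(distinct_values_per_field):
    fraction = 1.0
    for distinct in distinct_values_per_field:
        fraction *= 1.0 / distinct
    return fraction

# e.g. 8 business areas and 5 document types in the ODS table
fraction = estimate_selectivity([8, 5])
print(f"predicate matches ~{fraction:.1%} of rows")
print("secondary index looks useful" if fraction < 0.10
      else "full table scan is probably cheaper")
```
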

Having current database statistics, and possibly histograms, can be very important as well.
The database statistics hold information on how many distinct values a field has, e.g. how
many distinct values of Business Area there are, and how many document types.

Secondary indexes are usually added to an ODS (which you can do in the Administrator
Workbench) based on your most frequently used queries. Secondary indexes might also be
added to selected dimension and master data tables, but that usually requires a DBA, or
someone with similar privileges, to create them in BW.

Types of Update Methods


What are these update methods, and which one should be used for what purpose?

R/3 update methods


-----------------------------------------------
1. Serialized V3 Update
2. Direct Delta
3. Queued Delta
4. Unserialized V3 Update

By: Anoo

a) Serialized V3 Update

This is the conventional update method, in which the document data is collected in the
sequence of attachment and transferred to BW by a batch job. The sequence of the transfer
does not always match the sequence in which the data was created.

b) Direct Delta

When a document is posted, it is first saved to the application table and also written
directly to RSA7 (the delta queue), from where it is moved to BW. So you can see that for
the delta flow in R/3, the delta queue is the exit point.

c) Queued Delta

When a document is posted, it is saved to the application table and also to an extraction
queue (this is the difference from direct delta). A periodic collective run (a V3 job that
you schedule) then transfers the data from the extraction queue into the BW delta queue,
and from there it is moved to BW. The transfer sequence is the same as the sequence in
which the data was created.

d) Unserialized V3 Update

This method is largely identical to the serialized V3 update. The difference lies in the fact
that the sequence of document data in the BW delta queue does not have to agree with the
posting sequence. It is recommended only when the sequence in which data is transferred
into BW does not matter (due to the design of the data targets in BW).

You can use it for Inventory Management, because once a material document is created,
it is not edited. The sequence of records matters when a document can be edited multiple
times. But again, if you are using an ODS in your inventory design, you should switch to
the serialized V3 update.
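
To summarize the difference between direct and queued delta, here is a toy Python model: with direct delta a posting goes straight into the delta queue, while with queued delta it waits in an extraction queue until a scheduled collective run moves it over. The queues are plain lists and the mechanics are deliberately simplified.

```python
# Toy model of direct vs. queued delta. In R/3 the lists below loosely
# correspond to the delta queue (RSA7) and the extraction queue.
application_table, extraction_queue, delta_queue = [], [], []

def post_document(doc, mode):
    application_table.append(doc)
    if mode == "direct":
        delta_queue.append(doc)          # straight into the delta queue
    elif mode == "queued":
        extraction_queue.append(doc)     # waits for the collective run

def collective_run():
    """Periodic V3-style job: move queued postings into the delta queue in order."""
    while extraction_queue:
        delta_queue.append(extraction_queue.pop(0))

post_document({"doc": "4900000001"}, mode="queued")
post_document({"doc": "4900000002"}, mode="queued")
collective_run()
print(delta_queue)   # the posting order is preserved
```
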
