• Business Explorer: As the top layer in the SAP BW architecture, the Business
Explorer (BEx) serves as the reporting environment (presentation and analysis)
for end users. It consists of the BEx Analyzer, BEx Browser, BEx Web, and BEx
Map for analysis and reporting activities.
• Data warehouse management and administration: These tasks are handled by the
production data extractor (a set of programs for the extraction of data from R/3
OLTP applications such as logistics and controlling), the staging engine, and the
Administrator Workbench.
• Data storage and representation: These tasks are handled by the InfoCubes in
conjunction with the data manager, Metadata repository, and Operational Data
Store (ODS).
• Source Systems: The source systems, as the bottom layer, serve as the data
sources for raw business data. SAP BW supports various data sources:
• R/3 Systems as of Release 3.1H (with Business Content) and R/3 Systems prior
to Release 3.1H (SAP BW regards them as external systems)
• Non-SAP systems or external systems
• mySAP.com components (such as mySAP SCM, mySAP SEM, mySAP CRM,
or R/3 components) or another SAP BW system.
We want our existing legacy system to go away, and we need to find a home for
the data and the functionality to access and report on that data. What options does
SAP offer for data warehousing? How does it affect the response of the SAP
database server?
We are thinking of moving the data onto a scalable NT server with a large amount
of disk (10 GB+) and using PC tools to access the data. In this environment, our
production SAP machine would perform weekly data transfers to this historical
sales reporting system.
Has anybody implemented a similar solution, or does anyone have ideas on a good
way to attack this issue?
You may want to look at SAP's Business Information Warehouse. This is their answer to
data warehousing. I saw a presentation on this last October at the SAP Technical
Education Conference and it looked pretty slick.
BIW runs on its own server to relieve the main database from query and report
processing. It accepts data from many different types of systems and has a detailed
administration piece to determine data source and age. Although the Information System
may be around for some time, it sounded like SAP is moving toward the Business
Information Warehouse as a reporting solution.
Ever heard of apples and oranges? SAP R/3 is an OLTP system, whereas BIW
is an OLAP system. LIS reports cannot provide the functionality provided
by BIW.
-----Reply Message-----
Subject: Business Information Warehouse
Hello,
The following information is for you to get more clarity on the subject:
SAP R/3 LIS (Logistics Information System) consists of infostructures (which
are representations of reporting requirements). Whenever an event (goods
receipt, invoice receipt, etc.) takes place in an SAP R/3 module, if it is relevant
to an infostructure, a corresponding entry is made in that infostructure.
Thus the infostructures form the database part of the data warehouse. For
reporting on the data (based on OLAP features such as drill-down, ABC analysis,
graphics, etc.), you can use SAP R/3 standard analysis (or flexible analysis),
Business Warehouse (which is Excel based), or BusinessObjects (a third-party
product that can interface with SAP R/3 infostructures using BAPI calls).
In short, the infostructures (which are part of SAP R/3 LIS) form the data
basis for reporting with BW.
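The event-to-infostructure flow described above can be sketched as a toy model. All of the names and event types below are illustrative, not actual LIS structures:

```python
# Toy model of LIS updating: each infostructure declares which event types
# are relevant to it; posting a document event appends an entry to every
# infostructure that finds it relevant.

class InfoStructure:
    def __init__(self, name, relevant_events):
        self.name = name
        self.relevant_events = set(relevant_events)
        self.entries = []

    def post(self, event):
        # Only record the event if this infostructure cares about its type.
        if event["type"] in self.relevant_events:
            self.entries.append(event)

purchasing = InfoStructure("S012", {"goods_receipt", "invoice_receipt"})
sales = InfoStructure("S001", {"sales_order"})

for event in [
    {"type": "goods_receipt", "doc": "5000000001"},
    {"type": "sales_order", "doc": "0000004711"},
]:
    for info in (purchasing, sales):
        info.post(event)

print(len(purchasing.entries), len(sales.entries))  # → 1 1
```

Each infostructure thus accumulates only the entries relevant to its own reporting requirement, which is what makes them a database basis for reporting.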
Use of manual security profiles with SAP BW
Our company is currently on version 3.1H and will be moving to 4.6B late
summer 2000. Currently all of our R/3 security profiles were created
manually. We are also in the stage of developing and going live with the
add-on system Business Warehouse (BW). For consistency, we wish
to use manual profiles within the BW system and later convert all of our
manual security profiles (R/3 and BW) to generated ones.
Is there anyone who can shed light on this situation? (Successes
or problems with using manual security profiles with BW?)
You are going to have fun doing this upgrade. The 4.6b system is a
completely different beast than the 3.1h system. You will probably find a
lot of areas where you have to extend your manually created profiles to
cover new authorisation objects (but then you can have this at any level).
In 4.6b you really have to use the profile generator, but at least there is
a utility to allow you to pick up your manually created profile and have it
converted to an activity group for you. This will give you a running start
in this area, but you will still have a lot of work to do.
The fact that you did not use the PG at 3.1H will not matter, as it changed at
4.5 too, and the old activity groups need the same type of conversion (we
are going through that bit right now).
1. SPRO is the transaction code for Implementation Guide, where you can do
configuration settings.
* Type SPRO in the transaction box and you will get the Customizing:
Execute Project screen.
* Click on the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to do the configuration settings:
SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business
Information Warehouse.
* Transport settings: e.g., settings for the source system name change after transport, and
creating a destination for import post-processing.
* Reporting-relevant settings: e.g., BEx settings and general reporting settings.
* Settings for Business Content, which is already provided by SAP.
3. PSA (Persistent Staging Area): a holding area for raw data. It contains detailed
requests in the format of the transfer structure. It is defined per DataSource and
source system, and is source-system dependent.
IDocs (Intermediate Documents): data structures used as API working storage for
applications that need to move data into or out of SAP systems.
See OSS note 411464 (example concerning Info Structures from purchasing documents)
to create smaller jobs in order to integrate a large amount of data.
For example, if you wish to split your 500,000 entries in five intervals:
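The interval arithmetic behind that split can be sketched as follows; the range boundaries are illustrative, not taken from the OSS note:

```python
def split_into_intervals(first, last, parts):
    """Split the inclusive number range [first, last] into `parts`
    contiguous intervals of near-equal size, for use as job selections."""
    total = last - first + 1
    base, extra = divmod(total, parts)
    intervals = []
    start = first
    for i in range(parts):
        # Distribute any remainder one entry at a time over the first jobs.
        size = base + (1 if i < extra else 0)
        intervals.append((start, start + size - 1))
        start += size
    return intervals

# 500,000 entries split into five jobs of 100,000 each:
print(split_into_intervals(1, 500_000, 5))
# → [(1, 100000), (100001, 200000), (200001, 300000),
#    (300001, 400000), (400001, 500000)]
```

Each tuple would then become the selection range of one smaller setup job, so no single job has to process the full volume.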
I have a few requests without the data mart status. How can I use only those
requests and create an export DataSource?
Can you please tell me how the data mechanism will work after the loading?
1. Select the source data target (in your case, X); in the context menu, click Create Export
DataSources.
A DataSource (InfoSource) named 8<name of data target> will be generated.
2. In the Modelling menu, click Source Systems, select the logical source system of your
BW server, and in the context menu click Replicate DataSources.
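The naming convention from step 1 is simply the data target name with an "8" prefix; a one-line sketch (the target name here is made up for illustration):

```python
def export_datasource_name(data_target):
    # Generated export DataSources are prefixed with "8",
    # e.g. data target ZSALES_C01 -> export DataSource 8ZSALES_C01.
    return "8" + data_target

print(export_datasource_name("ZSALES_C01"))  # → 8ZSALES_C01
```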
Hello,
The problem is that when I use the extractor checker (RSA3) in R/3 for a
specific DataSource (0CO_OM_OPA_1), it shows me that there are 1,600 records.
Thanks
-----Reply Message-----
Subject: RE: Difference in number of data records
By: Sahil
You can find the tables, but this process doesn't help, e.g., for the LO Cockpit DataSources.
Explain tables and SID tables.
A basic cube consists of a fact table surrounded by dimension tables. SID tables link these
dimension tables to the master data tables.
A SID is a surrogate ID generated by the system. The SID tables are created when we create a
master data InfoObject. In the SAP BW star schema, a distinction is made between two self-
contained areas: the InfoCube, and the master data tables/SID tables.
The master data doesn't reside in the star schema but resides in separate tables which are
shared across all the star schemas in SAP BW. A numeric ID is generated which connects
the dimension tables of the InfoCube to the master data tables.
The dimension tables contain the DIM ID and the SID of a particular InfoObject. Using this SID,
the attributes and texts of a master data InfoObject are accessed.
The SID table is connected to the associated master data tables via the characteristic key:
H - Hierarchy table
K - Hierarchy SID table
I - SID Hierarchy structure
J - Hierarchy interval table
S - SID table
Y - Time Dependent SID table
T - Text Table
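The chain described above (fact table -> dimension table -> SID table -> shared master data) can be sketched as a toy lookup. All table contents and names below are invented for illustration, not real BW table layouts:

```python
# Toy extended star schema: fact rows carry DIM IDs, dimension rows carry
# SIDs, and the SID table maps each SID to the characteristic key, through
# which the shared master-data attribute and text tables are reached.

fact_table = [{"dim_id": 1, "revenue": 250.0}]
dimension_table = {1: {"sid_material": 42}}      # DIM ID -> SIDs
sid_table = {42: "MAT-0001"}                     # SID -> characteristic key
master_data = {"MAT-0001": {"color": "red"}}     # shared attribute table (P)
text_table = {"MAT-0001": "Steel bolt"}          # shared text table (T)

def material_text(fact_row):
    # Follow the chain from the fact row to the shared text table.
    sid = dimension_table[fact_row["dim_id"]]["sid_material"]
    key = sid_table[sid]
    return text_table[key]

print(material_text(fact_table[0]))  # → Steel bolt
```

Because the SID and master data tables sit outside any single star schema, every InfoCube that references the same characteristic reuses these lookups instead of duplicating the master data.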
When you activate an object, say an ODS/DSO, the system automatically generates an index
based on the key fields; this is the primary index.
In addition, if you wish to create more indexes, they are called secondary indexes.
The primary index is distinguished from the secondary indexes of a table. The primary
index contains the key fields of the table and a pointer to the non-key fields of the table.
The primary index is created automatically when the table is created in the database.
You can also create further indexes on a table. These are called secondary indexes. This is
necessary if the table is frequently accessed in a way that does not take advantage of the
sorting of the primary index for the access. Different indexes on the same table are
distinguished with a three-place index identifier.
Let's say you have an ODS and the primary key is defined as Document Nbr, Cal_day.
These two fields ensure that the records are unique, but let's say you frequently want to run
queries where you select data based on the Bus Area and Document Type. In this case, we
could create a secondary index on Bus Area, Doc Type. Then when the query runs,
instead of having to read every record, it can use the index to select just the records that
contain the Bus Area and Doc Type values you are looking for.
Just because you have a secondary index, however, does not mean it will be used or
should be used. This gets into the cardinality of the fields you are thinking about
indexing. For most DBs, an index must be fairly selective to be of any value. That is,
given the values you provide in a query for Bus Area and Doc Type, if it would retrieve a
very small percentage of the rows from the table, the DB probably should use the index;
but if it would result in retrieving, say, 40% of the rows, it is almost always better to
just read the entire table.
Having current DB statistics and possibly histograms can be very important as well.
The DB statistics hold information on how many distinct values a field has, e.g., how
many distinct values of Business Area there are, and how many Doc Types.
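The rule of thumb above can be sketched as a back-of-the-envelope selectivity estimate. The uniform-distribution assumption and the 5% threshold are illustrative simplifications, not a fixed rule of any particular optimizer:

```python
def should_use_index(distinct_values_per_field, table_rows, threshold=0.05):
    """Rough optimizer-style estimate from DB statistics: assume values are
    uniformly distributed and fields are independent, so an equality
    predicate on all indexed fields matches 1 / product(distinct counts)
    of the rows. Use the index only if that fraction is small enough."""
    selectivity = 1.0
    for distinct in distinct_values_per_field:
        selectivity /= distinct
    expected_rows = selectivity * table_rows
    return selectivity <= threshold, expected_rows

# 50 Business Areas x 20 Doc Types over 1,000,000 rows: only ~1,000 rows
# are expected to match, so the index pays off.
print(should_use_index([50, 20], 1_000_000))    # → (True, 1000.0)

# Indexing a field with only 2 distinct values: half the table comes back,
# so a full scan is cheaper than the index.
print(should_use_index([2], 1_000_000))         # → (False, 500000.0)
```

Real optimizers also consult histograms for skewed data, which is why the text stresses keeping statistics current.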
Secondary indexes are usually added to an ODS (which you can do using the Admin Workbench)
based on your most frequently used queries. Secondary indexes might also be added to
selected dimension and master data tables, but that usually requires a DBA, or
someone with similar privileges, to create in BW.
By: Anoo
a) Serialized V3 Update
This is the conventional update method, in which the document data is collected in the
sequence of attachment and transferred to BW by a batch job. The sequence of the transfer
does not always match the sequence in which the data was created.
b) Queued Delta
In this mode, extraction data is collected from document postings in an extraction queue
from which the data is transferred into the BW delta queue using a periodic collective
run. The transfer sequence is the same as the sequence in which the data was created.
c) Direct Delta
When a document is posted, it is first saved to the application table and also directly saved
to RSA7 (the delta queue); from here it is moved to BW.
So you can see that for the delta flow in R/3, the delta queue is the exit point.
d) Queued Delta
When a document is posted, it is saved to the application table and also to the
extraction queue (this is the difference from direct delta); you have to schedule a V3 job
to move the data to the delta queue periodically, and from there it is moved to BW.
e) Unserialized V3 Update
This method is largely identical to the serialized V3 update. The difference lies in the fact
that the sequence of document data in the BW delta queue does not have to agree with the
posting sequence. It is recommended only when the sequence that data is transferred into
BW does not matter (due to the design of the data targets in BW).
You can use it for Inventory Management, because once a Material Document is created,
it is not edited. The sequence of records matters when a document can be edited multiple
times. But again, if you are using an ODS in your inventory design, you should switch to
the serialized V3 update.
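The contrast between direct delta and queued delta can be sketched as simple queue operations. This is a toy model of the flows described above, not the real RSA7 implementation:

```python
from collections import deque

application_table = []
extraction_queue = deque()   # used only by queued delta
delta_queue = deque()        # stands in for RSA7, the exit point toward BW

def post_document_direct(doc):
    """Direct delta: the posting writes to the application table and the
    delta queue in the same step, with no intermediate collective run."""
    application_table.append(doc)
    delta_queue.append(doc)

def post_document_queued(doc):
    """Queued delta: the posting writes to the extraction queue instead;
    a periodic collective run moves it to the delta queue later."""
    application_table.append(doc)
    extraction_queue.append(doc)

def collective_run():
    """The scheduled V3 job: drain the extraction queue in posting order,
    so the delta queue preserves the creation sequence."""
    while extraction_queue:
        delta_queue.append(extraction_queue.popleft())

post_document_queued({"doc": "A"})
post_document_queued({"doc": "B"})
collective_run()
post_document_direct({"doc": "C"})
print([d["doc"] for d in delta_queue])  # → ['A', 'B', 'C']
```

In both flows the delta queue ends up ordered by posting sequence; the serialization concerns above arise only when that ordering guarantee is dropped, as in the unserialized V3 update.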