
What is SAP? SAP stands for Systems, Applications and Products in Data Processing.

SAP is the third-largest software company in the world and, by revenue, the largest provider of business application and Enterprise Resource Planning (ERP) software. SAP's products focus on ERP, which it helped to pioneer. The company's main product is MySAP ERP. The name of its predecessor, SAP R/3, gives a clue to its functionality: the "R" stands for real-time data processing and the "3" refers to its three-tier architecture of database, application server and client (SAPgui). R/2, which ran on a mainframe architecture, was the first SAP version. Other major product offerings include Advanced Planner and Optimizer (APO), Business Information Warehouse (BW), Customer Relationship Management (CRM), Supply Chain Management (SCM), Supplier Relationship Management (SRM), Human Resource Management Systems (HRMS), Product Lifecycle Management (PLM), Exchange Infrastructure (XI), Enterprise Portal (EP) and Knowledge Warehouse (KW). The APO name has been retired and rolled into SCM. The BW name (Business Warehouse) has been rolled into the SAP NetWeaver BI (Business Intelligence) suite, where it functions as the reporting module. The company also offers a technology platform named SAP NetWeaver. While its original products are typically used by Fortune 500 companies, SAP now also actively targets small and medium-sized enterprises (SMEs) with SAP Business One and SAP All-in-One. Reportedly, there are over 100,800 SAP installations at more than 28,000 companies, and SAP products are used by over 12 million people in more than 120 countries.

What is BI (Business Intelligence)? BI is an abbreviation of Business Intelligence: bringing the right information at the right time to the right people in the right format. It is a five-step process for running your business smarter, starting with registering the right data correctly, then collecting the data from multiple sources, and transforming, combining and storing it in a data warehouse.
This data should be reported, analysed and distributed to the right people at the right time in the right format. The figure below shows these steps.
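The five-step process described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration; all feed names, field names and the helper function are invented for the example, not part of any SAP API.

```python
# A minimal sketch of the five-step BI process: register, collect from
# multiple sources, transform/combine, store in a warehouse, and report.
# All structures and names here are invented for illustration.

def run_bi_pipeline(sources):
    """Collect raw records, transform them, store them, and report."""
    # Steps 1-2: register and collect data from multiple sources
    collected = [rec for src in sources for rec in src]
    # Step 3: transform and combine - normalise field names, keep valid rows
    warehouse = [{"product": r["prod"].upper(), "qty": int(r["qty"])}
                 for r in collected if r.get("qty")]
    # Steps 4-5: store, then report - aggregate quantity per product
    report = {}
    for row in warehouse:
        report[row["product"]] = report.get(row["product"], 0) + row["qty"]
    return report

crm_feed = [{"prod": "widget", "qty": "2"}]
erp_feed = [{"prod": "Widget", "qty": "3"}, {"prod": "gadget", "qty": None}]
print(run_bi_pipeline([crm_feed, erp_feed]))  # {'WIDGET': 5}
```

Note how the invalid "gadget" row (no quantity) is dropped during the transform step, and how the two spellings of "widget" are combined into one warehouse entry.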

Business Intelligence is about connecting people through a proper information infrastructure and a performance-driven culture, enabling them to work smarter and more closely together towards company and personal goals. A business intelligence infrastructure may contain different applications and tools, including reporting tools. This type of application is mainly used to understand the business processes and operational results of the organisation; a basic application often contains just a collection of reports used for operational purposes.

For each organisation, customers should be considered and treated as the most important business partner. Many organisations therefore understand that, to be fully customer oriented, they need a 360-degree view of the customer. This BI application, often implemented side by side with a Customer Relationship Management system, collects all the customer data that an organisation registers, combines it and displays it to users in different formats.

Another application measures the effectiveness of marketing activities, for example the relationship between advertising and revenue by product. With marketing intelligence we also measure and analyse market share, market growth, brand awareness, the effectiveness of marketing campaigns and the product portfolio. With this application we get a clear view of marketing's contribution to the other activities of the company.

What is BW? SAP Business Information Warehouse (SAP BW) is the name of the Business Intelligence, analytical, reporting and Data Warehousing (DWH) solution which is one of the major enterprise software applications produced by SAP AG.
BW consists, among other things, of components for data management (Data Warehousing Workbench), extensive data modeling capabilities, an embedded analytical engine, a suite of rich front-end analytical tools referred to as Business Explorer (BEx), and operational tools used for importing the most current transactional data into the system. It may be helpful to consider the three layers that make up the structure of SAP's BI solution:

1. Extraction, Transformation and Load (ETL) layer - responsible for extracting data from a specific source, applying transformation rules, and loading it into the SAP BW system.
2. Data warehouse area - responsible for storing the information in various types of structures, including multidimensional structures called InfoCubes.
3. Reporting - responsible for accessing the information in the data warehouse area and presenting it in a user-friendly manner to the analyst or business user.

SAP's BI is a pervasively employed data warehouse and analytical enterprise software solution, often used heavily by companies that essentially run their business on SAP operational systems such as SAP Enterprise Resource Planning (SAP ERP, traditionally known as SAP R/3). SAP's BI solution contains a large number of predefined extractors and a significant amount of predefined business content in the form of InfoCubes, master data (i.e. objects representing entities such as "Customer" or "Material"), authorization roles, query views and reports, all delivered in the software by SAP. A significant benefit of this approach is the ability to leverage SAP's substantial business knowledge from many years of developing sophisticated software systems for the world's largest companies, typically shortening project development cycles in the process.
The pre-defined business content can be modified to meet an organization's specific requirements, while at the same time taking advantage of general aspects of these delivered data models and reports that are somewhat foundational and have broad application in business, government, and education. SAP BW is now part of a suite of SAP applications called SAP NetWeaver. Other components of SAP NetWeaver include SAP Enterprise Portal (EP), Web Application Server (WAS), SAP Process Integration (XI, or eXchange Infrastructure) and Master Data Management (MDM).

Like most data warehouses, BW is a combination of databases and database management tools that are used to support management decision making. BW supplies the infrastructure typical of data warehouses, but also includes preconfigured data extractors, analysis and report tools, and business process models. Among the other features of BW are: Business Application Programming Interfaces (BAPIs) that enable connections to non-R/3 applications; preconfigured business content; an integrated OLAP processor; automated data extraction and loading routines; a metadata repository; administrative tools; multiple language support; and Business Explorer, a Web-based user interface. SAP Business Warehouse is an integral component of the company's mySAP Business Intelligence group of products.

Difference between OLAP and OLTP? OLTP and OLAP are among the most commonly misunderstood terms. OLTP means Online Transaction Processing; OLAP means Online Analytical Processing. The meanings match the names. OLTP deals with the processing of data from transactional systems. For example, an application that loads the reservation data of a hotel is an OLTP system. An OLTP system is designed mainly with the performance of the end application in mind. It comprises the application, the database and any reporting system that works directly on this database. The database in an OLTP system is designed to improve application efficiency and reduce the application's processing time. Consider a hotel reservation application: its database would be designed mainly for fast inserts of customer-related data, and for fast retrieval of room-availability information. Such a database is part of an OLTP system, and whenever a reporting tool works directly on such an application database, it forms part of the OLTP system. Generally, "OLTP" refers to the type of database a reporting tool works on.

During the 1980s, many applications were developed to cater to the needs of upcoming organizations. All of these applications required a database to process, load and extract data, and the entire database was designed with the performance of the end application in mind. As organizations grew, they felt the importance of analyzing the data they had collected. The analysis performed on this data produced findings that helped organizations make important business decisions, so the need for full reporting solutions grew. This is the period when more and more reporting tools came onto the market.
But the performance of these reporting tools was poor, since they extracted data from a system/database that had been designed mainly for application performance. There were two main reasons why the data warehouse (DW) came into being:

1. The performance of front-end applications decreased as more and more data was collected, so a need to isolate older data was felt.
2. The importance of reporting was felt equally across organizations, but existing reporting systems performed poorly because they had to work on existing application databases.

OLAP systems were mainly developed using data in a warehouse. Since older data needed to be isolated, it had to be stored in a format that would ease the reporting bottlenecks: a need was felt to isolate the data and redesign it into a format and structure such that this data repository would be the prime source for business decisions. OLAP systems were developed on this isolated data, which provided a means for faster, easier and more efficient reporting. An OLAP system need not always report from a DW; the criterion is that it reports from a system/database that does not carry ongoing transactions. For example, some organizations create an ODS (Operational Data Store), a replica of the transactional data, and use that data store for reporting. But generally OLAP is synonymous with a data warehouse. As the saying goes, "if something goes in, it has to come out": applications put the data in, and reporting systems take the data out. In short, OLTP (OnLine Transaction Processing) works on normalised tables and online data with frequent inserts, updates and deletes.
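The contrast between the two workloads can be sketched with an in-memory SQLite database: many small writes on a normalized table (the OLTP side) versus a read-only aggregation over the accumulated rows (the OLAP side). Table and column names are invented for the example.

```python
# OLTP vs. OLAP in miniature, using Python's built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE reservations (id INTEGER PRIMARY KEY, room TEXT, nights INTEGER)")

# OLTP workload: frequent small inserts/updates on current, normalized data.
conn.executemany("INSERT INTO reservations (room, nights) VALUES (?, ?)",
                 [("101", 2), ("102", 1), ("101", 3)])

# OLAP-style workload: read-only aggregation over the accumulated history.
total_nights = conn.execute(
    "SELECT room, SUM(nights) FROM reservations GROUP BY room ORDER BY room"
).fetchall()
print(total_nights)  # [('101', 5), ('102', 1)]
```

In a real landscape the analytical query would run against a separate warehouse (or ODS) rather than the live transactional table, precisely to avoid the reporting bottlenecks described above.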
But OLAP (Online Analytical Processing) contains the history of the OLTP data; it is non-volatile, acts as a decision support system, and is used for creating forecasting reports.

What are updates, and which update types are available in SAP BI? The update type is set on the Update Parameters tab page of the InfoPackage. The following update functions are available in the InfoPackage scheduler:

1. Full update
2. Delta update
3. Initialization of the Delta Process
4. Early Delta Initialization
5. Build Initial Non-Cumulative

You can select Full update, Initialization or Delta update.

You can also check this in RSA3. Update mode 'F' in RSA3 means a full update, the same as an InfoPackage full update; 'C' in RSA3 corresponds to Initialization in the InfoPackage, and 'D' in RSA3 means a delta update in the InfoPackage.

1. Full Update
When you run a full upload, whatever data is in the source is pulled into BW. If you use this, you cannot run a delta. If you want to run deltas afterwards, you first need to run an init without data transfer; from then on you can run delta uploads. A full update requests all data that corresponds to the selection criteria you determined in the scheduler. If, in a certain period of time specified in the scheduler, 100,000 data records are accumulated, then with a full update all 100,000 data records are requested. Using the Scheduler menu, you can flag a request with full-update mode as a Repair Full Request. This request can be updated into every data target, even if the data target already contains data from an initialization run or delta for this DataSource/source system combination and has overlapping selection criteria. Note that using a full repair request to reload data into a DataStore object after selectively deleting data from it can result in inconsistencies in the data target: updating such a request can lead to duplicate data records in the data target when the repair request selections do not agree with the selections used for the selective deletion.

2. Delta Update
A delta update requests only the data that has appeared since the last delta. Before you can request a delta update, you must first initialize the delta process. If a delta update fails, or the overall status of the delta request has been set to red manually, the next data request will be performed in repeat mode. A repeat loads the data of the failed delta request again, together with any data that has accumulated since the original request. A repeat can only be requested in the dialog. If data from the failed delta request has already been updated to data targets, delete the data from the data targets affected. If you cannot delete the data from the data targets, duplicate data records may be produced when the repeat request is performed.

The delta type is one of the key properties of an SAP DataSource. There are four delta types: F, A, D and E.

F - flat file provides the delta
A - use ALE change log delta
D - application determines the delta (e.g. LO, FI-AR/AP)
E - extractor determines the delta (e.g. LIS, CO-PA, CUBE)

3. Initialization of the Delta Process
Init is nothing but a full upload with delta enabled. Once you run it, it fetches all records from the source and the delta process is enabled automatically, so from then on only deltas come to BW. To request deltas, you need to have initialized the delta process. For selection criteria that do not overlap, several initialization criteria are possible for initializing the delta process when scheduling InfoPackages for data from an SAP system. This gives you the option of loading the data relevant for the delta process into the Business Information Warehouse step by step. A delta requested after several initializations contains the superset of all the successful initial selections as its selection condition; this selection condition can then no longer be changed for the delta. The selections for initialization are made in the InfoPackage scheduler on the Select Data tab page. You can display all initializations for a DataSource in the scheduler by choosing Scheduler -> Initialization Selection for Source System. Here you can see whether an initialization selection has been executed and what the status of the initialization request is. From this dialog box, you can also transfer initialization selections into the scheduler and delete selected initializations again. Choosing the Initialization Request Status pushbutton brings you to the monitor, where you can check the data request. Delta initialization can only be simulated for DataSources from SAP source systems if the DataSource supports this.
Generic DataSources for which you have set a generic delta also support the simulation of delta initialization. The option of simulating delta initialization is generally offered for loading processes from other source systems (for example, file or non-SAP systems).

4. Early Delta Initialization
With early delta initialization, you have the option of writing data into the delta queue or into the delta tables for the application during the initialization request in the source system. This means that you can execute the initialization of the delta process (the init request) without having to stop the updating of data in the source system. You can only execute an early delta initialization if the DataSource extractor called in the source system with this data request supports it. Extractors that support early delta initialization were delivered with the Plug-Ins as of Plug-In (-A) 2002.1. You cannot run an initialization simulation together with an early delta initialization.

5. Build Initial Non-Cumulative
This update method is only available when you want to load data into a data target that contains a non-cumulative key figure.
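The full-versus-delta semantics above can be sketched with a toy source system: a full update always transfers everything, while after initialization only the records accumulated in a delta queue since the last pull are transferred. The class and method names are invented for illustration and do not correspond to any SAP API.

```python
# Hedged sketch of full vs. delta update semantics. A full update pulls
# everything from the source every time; a delta update drains only what
# has accumulated in the delta queue since the last request.

class Source:
    def __init__(self):
        self.records = []
        self.delta_queue = []

    def post(self, record):
        self.records.append(record)
        self.delta_queue.append(record)  # tracked for the delta process

    def full_update(self):
        return list(self.records)        # everything, every time

    def delta_update(self):
        delta, self.delta_queue = self.delta_queue, []
        return delta                     # only what accumulated since last pull

src = Source()
src.post("doc1"); src.post("doc2")
init_load = src.delta_update()           # init with data: behaves like a full pull
src.post("doc3")
print(src.full_update())                 # ['doc1', 'doc2', 'doc3']
print(init_load, src.delta_update())     # ['doc1', 'doc2'] ['doc3']
```

Note that a second delta request immediately after the first would return an empty list: the queue is drained on each pull, which is why a failed delta must be repeated (repeat mode) rather than simply re-requested.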

What is a fact table? A fact table is the central table in a star schema of a data warehouse. A fact table stores quantitative information for analysis and is often denormalized. A fact table works with dimension tables: the fact table holds the data to be analyzed, and a dimension table stores data about the ways in which the data in the fact table can be analyzed. Thus, the fact table consists of two types of columns: the foreign key columns allow joins with dimension tables, and the measure columns contain the data being analyzed. Suppose that a company sells products to customers. Every sale is a fact that happens, and the fact table is used to record these facts. For example:

Time ID | Product ID | Customer ID | Unit Sold
      4 |         17 |           2 |         1
      8 |         21 |           3 |         2
      8 |          4 |           1 |         1

Now we can add a dimension table about customers:

Customer ID | Name        | Gender | Income | Education | Region
          1 | Brian Edge  |      M |      2 |         3 |      4
          2 | Fred Smith  |      M |      3 |         5 |      1
          3 | Sally Jones |      F |      1 |         7 |      3
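The foreign-key join between these two tables can be sketched with plain Python structures, using the same sample rows as above (a hypothetical illustration, not SAP code):

```python
# Fact table rows reference the customer dimension via a foreign key.
fact_sales = [
    {"time_id": 4, "product_id": 17, "customer_id": 2, "units_sold": 1},
    {"time_id": 8, "product_id": 21, "customer_id": 3, "units_sold": 2},
    {"time_id": 8, "product_id": 4,  "customer_id": 1, "units_sold": 1},
]
dim_customer = {1: "Brian Edge", 2: "Fred Smith", 3: "Sally Jones"}

# Resolve the foreign key: who bought what, and when?
resolved = [(dim_customer[r["customer_id"]], r["units_sold"], r["time_id"])
            for r in fact_sales]
for name, units, day in resolved:
    print(f"{name} bought {units} item(s) on day {day}")
```

The second resolved row shows customer 3, Sally Jones, buying two items on day 8, exactly as described in the text.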

In this example, the customer ID column in the fact table is the foreign key that joins with the dimension table. By following the links, you can see that row 2 of the fact table records the fact that customer 3, Sally Jones, bought two items on day 8. The company would also have a product table and a time table to determine what Sally bought and exactly when. When building fact tables, there are physical and data limits. The ultimate size of the object as well as access paths should be considered. Adding indexes can help with both. However, from a logical design perspective, there should be no restrictions. Tables should be built based on current and future requirements, ensuring that there is as much flexibility as possible built into the design to allow for future enhancements without having to rebuild the data.

Table Type Listing:

Attribute tables:

Attribute tbl for time-independent attributes: /BI*/P<characteristic_name> - stored with characteristic values

Attribute tbl for Time Dependent attributes:


/BI*/Q<characteristic_name> - fields DATETO and DATEFROM are included in the time-dependent attribute tbl; stored with characteristic values

Dimension tables:

Dimension tbls (i.e. DIM tables): /BI*/D<Cube_name><dim.no.> - stores the DIMID, the pointer between the fact tbl and the master data tbls. Data is inserted during the upload of transaction data (data is never changed, only inserted). Examples:
o /bic/D(cube name)P is the package dimension of a content cube
o /bic/D(cube name)U is the unit dimension of a content cube
o /bic/D(cube name)T is the time dimension of a content cube
o /bic/D(cube name)I is a user-defined dimension of a content cube

External Hierarchy tables:


/BI*/I*, /BI*/J*, /BI*/H*, /BI*/K* and /BI0/0P... are tables that occur in the course of an optimized preprocessing involving many tables. /bic/H(object name) holds the hierarchy data of the object. For more information see SAP Note 514907.

Fact tables:

In SAP BW, there are two fact tables for holding transaction data in Basis InfoCubes: the F and the E fact tables.
o /bic/F(cube name) is the F-fact table of a content cube
o /bic/E(cube name) is the E-fact table of a content cube
The fact tbl is the central tbl of the InfoCube. Here the key figures (e.g. sales volume) and pointers to the dimension tbls are stored (the dim tbls, in turn, point to the SID tbls). If you upload data into an InfoCube, it is always written into the F-fact table. If you compress the data, it is shifted from the F-fact table to the E-fact table. The F-fact tables for aggregates are normally empty, since aggregates are compressed automatically; after a change run, however, the F-fact table can have entries, as it can when you use the 'do not compress requests' functionality for aggregates. The E-fact tbl is optimized for reading, which is good for queries; the F-fact tbl is optimized for writing, which is good for loads. See SAP Notes 631668 and 1461926.
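The F-to-E compression described above can be sketched as dropping the request ID from the key and summing the key figures. The data structures are invented for clarity; the real tables are relational, not Python dicts.

```python
# Illustrative sketch of InfoCube compression: rows land in the F table
# keyed by (request ID, dimension key); compression removes the request ID
# and aggregates the key figures into the E table.

def compress(f_table):
    """Collapse F-table rows into E-table rows by removing the request ID."""
    e_table = {}
    for (request_id, dims), key_figure in f_table.items():
        e_table[dims] = e_table.get(dims, 0) + key_figure
    return e_table

# (request_id, dimension key) -> sales volume
f_table = {
    (1, ("2024-01", "mat_A")): 10,
    (2, ("2024-01", "mat_A")): 5,   # same dims, different load request
    (2, ("2024-01", "mat_B")): 7,
}
print(compress(f_table))
```

After compression the two "mat_A" rows collapse into one row of 15, which is why the E table is smaller and faster to query, and also why individual requests can no longer be deleted from it.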

Master Data tables


/BI0/P<char_name> or /bic/M(object name) - master data of the object. Master data tables are independent of any InfoCube. Master data and master data details (attributes, texts and hierarchies) are stored here. The master data table stores all time-independent attributes (display and navigational attributes).

Navigational attributes tables:


SID attribute table for time-independent navigational attributes: /BI*/X<characteristic_name>. SID attribute tbl for time-dependent navigational attributes: /BI*/Y<characteristic_name>. Navigational attributes can be used for navigation purposes (filtering, drill-down). The attributes are not stored as characteristic values but as SIDs (master data IDs).

P table:

The P table only gets filled if you load master data explicitly. As soon as the SID table is populated, the P tbl is populated as well.

SID table:

SID tbl: /BI*/S<characteristic> - stores the characteristic value (e.g. customer number C95) and the SID. The SID is the pointer used to link the master data tbls and the dimension tbls. The SID is generated during the upload (uniqueness is guaranteed by a number range object). Data is inserted during the upload of master data or of transactional data. The S table also gets filled whenever transaction data is loaded: if any new value for that object appears in the transactions, the SID table gets filled.
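SID assignment can be sketched as a lookup table backed by a counter: each new characteristic value gets the next surrogate integer, and every later appearance of the same value reuses it. The class below is invented for illustration; in BW the counter role is played by a number range object.

```python
# Sketch of SID generation: surrogate IDs for characteristic values.
import itertools

class SidTable:
    def __init__(self):
        self._next = itertools.count(1)   # stands in for the number range object
        self._sids = {}

    def sid_for(self, char_value):
        """Return the existing SID, or generate a new one on first sight."""
        if char_value not in self._sids:
            self._sids[char_value] = next(self._next)
        return self._sids[char_value]

s = SidTable()
print(s.sid_for("C95"))   # 1  (new customer number -> new SID)
print(s.sid_for("C96"))   # 2
print(s.sid_for("C95"))   # 1  (same value -> same SID)
```

Because joins between dimension and master data tables run on these compact integers rather than on the characteristic values themselves, the surrogate key keeps the fact and dimension tables small and the joins fast.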

Text table:

Text tbl: /BI*/T<characteristic> - stores the texts for the characteristics. Data is inserted and changed during the upload of text data. Texts for the InfoObject are stored either language-dependently or language-independently.

What is Transformation?

Use: The transformation process allows you to consolidate, cleanse and integrate data. You can semantically synchronize data from heterogeneous sources. When you load data from one BI object into another BI object, the data is passed through a transformation. A transformation converts the fields of the source into the format of the target. Features: You create a transformation between a source and a target. The BI objects DataSource, InfoSource, DataStore object, InfoCube, InfoObject and InfoSet serve as source objects; the BI objects InfoSource, InfoObject, DataStore object and InfoCube serve as target objects.
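A transformation rule of the kind described above (rename, cleanse, type-convert, default) can be sketched as a simple mapping function. All field names here are invented for the example and do not correspond to real DataSource fields.

```python
# Hedged sketch of a transformation rule: map one source record into the
# target format - cleanse strings, convert types, supply defaults.

def transform(source_row):
    """Map one source record to the (hypothetical) target structure."""
    return {
        "CUSTOMER": source_row["cust_no"].strip().upper(),   # cleanse
        "AMOUNT": round(float(source_row["amt"]), 2),        # convert type
        "CURRENCY": source_row.get("curr", "EUR"),           # default value
    }

src = {"cust_no": " c95 ", "amt": "199.999"}
print(transform(src))
```

In BW terms, each key in the returned dict plays the role of a target InfoObject, and each expression on the right is the rule that fills it from the source fields.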
