
Modeling

PDF download from SAP Help Portal: http://help.sap.com/saphelp_nw73/helpdata/en/a3/fe1140d72dc442e10000000a1550b0/frameset.htm Created on March 30, 2014

The documentation may have changed since you downloaded the PDF. You can always find the latest information on SAP Help Portal.

Note This PDF document contains the selected topic and its subtopics (max. 150) in the selected structure. Subtopics from other structures are not included. The selected structure has more than 150 subtopics. This download contains only the first 150 subtopics. You can manually download the missing subtopics.

2014 SAP AG or an SAP affiliate company. All rights reserved. No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice. Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors. National product specifications may vary. These materials are provided by SAP AG and its affiliated companies ("SAP Group") for informational purposes only, without representation or warranty of any kind, and SAP Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional warranty. SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and other countries. Please see www.sap.com/corporate-en/legal/copyright/index.epx#trademark for additional trademark information and notices.


Table of Contents
1 Modeling
2 Graphical Modeling
2.1 Creating Data Flows or Data Flow Templates
2.2 Adding Objects and Connections to the Data Flow
2.3 Using SAP Data Flow Templates
2.3.1 Naming Conventions for SAP Dataflow Templates
2.3.1.1 Naming Conventions for InfoProviders of SAP Dataflow Templates
2.3.1.2 Naming Conventions for InfoSources and DTPs for SAP Dataflow Templates
2.3.2 LSA100: Basic Flow - Layering Data & Logic
2.3.3 LSA110: Basic Flow Extensions - CM & Harmonization InfoSource
2.3.4 LSA300: Tactical Scalability - Flow Split Logical Partitioning
2.3.5 LSA310: Scalability - Flow Split and InfoSources
2.3.6 LSA315: Scalability - InfoSources & Single ERP Source
2.3.7 LSA320: Scalability - Entire Data Flow Split
2.3.8 LSA330: Scalability - Flow Split Using a Pass Thru DataStore Object
2.3.9 LSA400: Scalability & Domains - Strategic Flow Split
2.3.10 LSA410: Scalability & Domains - Strategic Flow Collect
2.3.11 LSA420: Scalability & Domains - Business Transformation Layer
2.4 Showing and Creating Documentation
2.5 Saving Data Flow Documentation as an HTML File
2.6 Additional Functions in Graphic Data Flow Modeling
3 Enterprise Data Warehouse Layer
3.1 DataSource
3.1.1 Functions for DataSources
3.1.2 DataSource Maintenance in BW
3.1.2.1 Editing DataSources from SAP Source Systems in BW
3.1.2.2 Creating DataSources for File Source Systems
3.1.2.3 Creating a DataSource for UD Connect
3.1.2.4 Creating DataSources for DB Connect
3.1.2.5 Creating DataSources for Web Services
3.1.3 Emulation, Migration and Restoring DataSources
3.1.3.1 Using Emulated 3.x DataSources
3.1.3.2 Migrating a DataSource 3.x Manually (SAP Source System, File, DB Connect)
3.1.3.3 Migrating 3.x DataSources (UD Connect, Web Service)
3.1.3.4 Restoring 3.x DataSources Manually
3.2 Persistent Staging Area
3.2.1 DB Memory Parameters
3.2.2 Deleting Requests from the PSA
3.2.3 Previous Technology of the PSA
3.2.3.1 Persistent Staging Area
3.2.3.1.1 Types of Data Update with PSA
3.2.3.1.2 Checking and Changing Data
3.2.3.1.3 Checking and Changing Data Using PSA-APIs
3.2.3.1.4 Versioning
3.2.3.1.5 DB Memory Parameters
3.2.3.1.6 Reading the PSA and Updating a Data Target
3.3 Creating InfoObjects
3.3.1 Creating InfoObject Catalogs
3.3.1.1 Additional Functions in the InfoObject Catalog
3.3.2 InfoObject Naming Conventions
3.3.3 Creating InfoObjects: Characteristic
3.3.3.1 Tab Page: General
3.3.3.2 Tab Page: Business Explorer
3.3.3.2.1 Mapping Geo-Relevant Characteristics
3.3.3.2.1.1 Static and Dynamic Geo-Characteristics
3.3.3.2.1.1.1 Shapefiles
3.3.3.2.1.1.2 Delivered Geo-Characteristics
3.3.3.2.1.2 SAPBWKEY Maintenance for Static Geo-Characteristics
3.3.3.2.1.2.1 Creating a Local Copy of the Shapefile
3.3.3.2.1.2.2 Downloading BW Master Data into a dBase File


3.3.3.2.1.2.3 Maintaining the SAPBWKEY Column
3.3.3.2.1.2.4 Uploading Edited Shapefiles into BW Systems
3.3.3.2.1.3 Geocoding
3.3.3.2.1.3.1 Downloading BW Master Data into a dBase File
3.3.3.2.1.3.2 Geocoding Using ArcView GIS
3.3.3.2.1.3.3 Converting dBase Files into CSV Files
3.3.3.3 Tab: Master Data/Texts
3.3.3.4 Tab Page: Hierarchy
3.3.3.5 Tab Page: Attributes
3.3.3.6 Tab Page: Compounding
3.3.3.7 Tab: BWA Index
3.3.3.8 Characteristic Compounding with Source System ID
3.3.3.8.1 Assigning a Source System to a Source System ID
3.3.3.9 Navigation Attribute
3.3.3.9.1 Creating Navigation Attributes
3.3.3.9.2 Performance of Navigation Attributes in Queries and Value Help
3.3.3.9.3 Transitive Attributes as Navigation Attributes
3.3.3.10 Conversion Routines in the BW System
3.3.3.10.1 ALPHA Conversion Routine
3.3.3.10.2 BUCAT Conversion Routine
3.3.3.10.3 EAN11 Conversion Routine
3.3.3.10.4 GJAHR Conversion Routine
3.3.3.10.5 ISOLA Conversion Routine
3.3.3.10.6 MATN1 Conversion Routine
3.3.3.10.7 NUMCV Conversion Routine
3.3.3.10.8 PERI5 Conversion Routine
3.3.3.10.9 PERI6 Conversion Routine
3.3.3.10.10 PERI7 Conversion Routine
3.3.3.10.11 POSID Conversion Routine
3.3.3.10.12 PROJ Conversion Routine
3.3.3.10.13 REQID Conversion Routine
3.3.3.10.14 IDATE Conversion Routine
3.3.3.10.15 RSDAT Conversion Routine
3.3.3.10.16 SDATE Conversion Routine
3.3.3.10.17 WBSEL Conversion Routine
3.3.4 Creating InfoObjects: Key Figure
3.3.4.1 Tab Page: Type/Unit
3.3.4.2 Tab Page: Aggregation
3.3.4.3 Tab Page: Further Properties
3.3.5 Editing InfoObjects
3.3.6 Additional Functions in InfoObject Maintenance
3.3.7 Modeling InfoObjects as InfoProviders
3.4 Using Master Data and Master Data-Bearing Characteristics
3.4.1 Master Data Types: Attributes, Texts, and Hierarchies
3.4.2 Creating and Changing Master Data
3.4.2.1 Maintaining Time-Dependent Master Data
3.4.2.2 Time-Dependent Master Data from Different Systems
3.4.3 Deleting Master Data at Single Record Level
3.4.4 Deleting Attributes and Texts for a Characteristic
3.4.5 Activating Master Data
3.4.5.1 Versioning Master Data
3.4.6 Reorganizing Master Data
3.4.7 Simulate Loading of Master Data
3.4.8 Master Data Lock
3.4.9 Loading Master Data from Source Systems Directly into InfoProviders
3.5 Creating InfoProviders
3.5.1 InfoProvider Types
3.5.2 Decision Tree for InfoProviders
3.5.2.1 InfoProvider for Storing Reusable Data
3.5.2.2 InfoProvider for Defined Reporting and Analysis Requests
3.5.2.3 InfoProvider for Ad Hoc Reporting and Analysis Requests
3.6 Creating DataStore Objects


3.6.1 Setting the DataStore Object Type
3.6.2 DataStore Object Types
3.6.2.1 Standard DataStore Object
3.6.2.2 Write-Optimized DataStore Object
3.6.2.3 Creating DataStore Objects for Direct Update
3.6.2.3.1 APIs of the DataStore Object for Direct Update
3.6.3 Scenario for Using Standard DataStore Objects
3.6.4 Scenario for Using Write-Optimized DataStore Objects
3.6.5 Scenario for Using DataStore Objects for Direct Update
3.6.6 SAP-HANA-Optimized Activation of DataStore Objects
3.6.6.1 SAP HANA-Optimized DataStore Object (Obsolete)
3.6.7 DataStore Object Settings
3.6.8 Additional Functions in DataStore Object Maintenance
3.6.8.1 InfoProvider Properties
3.6.8.2 DB Memory Parameters
3.6.8.3 Partitioning
3.6.8.3.1 Partitioning InfoProviders Using Characteristic 0FISCPER
3.6.8.4 Repartitioning
3.6.8.5 Multidimensional Clustering
3.6.8.5.1 Definition of Clustering
3.6.9 Performance Tips for DataStore Objects
3.6.10 Integration in the Data Flow
3.7 Using Semantic Partitioning
3.7.1 Creating a Semantically Partitioned Object
3.7.2 The Wizard
3.7.3 Creating Transformations for a Semantically Partitioned Object
3.7.4 Creating a DTP for a Semantically Partitioned Object
3.7.5 Creating Process Chains for a Semantically Partitioned Object


1 Modeling
Concept
Modeling data in the BW system principally involves data staging and modeling the layers of a data warehouse. The concept of the layered scalable architecture (LSA) assists you in designing and implementing the various layers in the BW system for data acquisition, Corporate Memory, data distribution and data analysis. Here we differentiate between two main layers: the Enterprise Data Warehouse layer and the Architected Data Mart layer. The following graphic illustrates the structure of the different layers:

More information: Enterprise Data Warehouse Layer, Architected Data Mart Layer
The tool you use for modeling is the Data Warehousing Workbench. Its graphical user interface makes it easier for you to create data flows: you can model data flows top-down using this interface. SAP provides you with predefined data flow templates. These are best practice models that help you to create optimized models.
More information: Graphical Modeling
You can also create objects bottom-up by using the object trees in the Data Warehousing Workbench. Objects created in this way can then be used in graphical data flow modeling. For more information on creating individual objects, read the relevant object documentation.

2 Graphical Modeling
Use
A graphical user interface enables you to easily create and document data flows and data flow templates in the Data Warehousing Workbench. In the following cases, you can make full use of the advantages of graphic data flow modeling:
Application scenario: Top-down modeling of new data flows
Advantages: Top-down modeling makes quick and structured modeling possible directly in the BW system. To start with, you can create the logical data flow with all of its elements without having to store the objects on the database. This allows you to create a blueprint of your data models directly in the BW system. In a subsequent step, you can add the required technical properties to these objects and therefore persist them to the BW metadata tables.

Application scenario: Structuring your implementation using data flows; organization of existing models in data flows after upgrading to the current release
Advantages: You can group modeling objects and store them as a persistent view for your enterprise model. You can also include existing models (from SAP NetWeaver 7.0 or higher). You can get a clearer overview of the data flows by structuring them according to application areas or Data Warehouse layers, for example. You can document your data flows and the objects that they contain. You can use the BW transport connection to collect and transport the objects belonging to a data flow. You can use naming conventions to use and reuse the same models.

Application scenario: Using data flows as templates to set modeling standards
Advantages: You can model templates quickly and simply by copying them to your data flow and adapting them. SAP NetWeaver BW provides best practice models in the form of predefined SAP data flow templates for the Layered Scalable Architecture (LSA). You can use templates to create your own company-wide standards. Using templates can help to reduce development costs.

The data flow or data flow template is a standalone TLOGO object type (DMOD). Data flows and data flow templates can be transported and have a repository connection and document connection as well as a connection to version management.

Explanation of the Terms Data Flow and Data Flow Template

Data Flow
A data flow in the Data Warehouse displays a set of metadata objects in BW and their interrelationships. The relationships are displayed using transformations or are contained directly within the objects (as with MultiProviders, for example). The data flow specifies which objects are required at design time (modeling objects) and which processes and transformations are required at runtime (runtime objects) in order to transfer semantically related data to BW, where the data can be consolidated and integrated. A data flow of object type DMOD can contain persistent and non-persistent objects. We define persistent objects as any objects that are already saved in the metadata tables in the database and which can also be displayed in the object trees of the Data Warehousing Workbench, for example in the InfoProvider tree. A persistent object can be contained in multiple data flows and can therefore be reused in different data flows. We define non-persistent objects as objects that have been created in the data flow maintenance but for which only attributes such as object type and name have been specified. These objects have not been saved in the database. Non-persistent objects can only be displayed and used in the data flow in which they were created. They cannot be displayed in any other object tree of the Data Warehousing Workbench.

Data Flow Template
A data flow can be saved and used as a data flow template if the data flow only contains non-persistent objects. These objects are used as placeholders that define the object type and several other properties (technical name and description). A data flow template describes a data flow scenario with all the required objects and can provide scenario documentation, including information on why the objects are modeled in this way. Data flow templates are ideal for storing and documenting best practice modeling knowledge, which you can use to define your data flows. Data flow templates support the complex modeling of differentiated Data Warehouse layers in a Layered Scalable Architecture as well as fast modeling of simple standard data flows. You can also define your own templates to set standards for your organization. SAP provides documented data flow templates that you can use in your BW system when implementing an LSA.

Integration into the Data Warehousing Workbench
Data flows (and data flow templates) are displayed in the Data Warehousing Workbench modeling screen with their own symbol in a separate object tree. The data flows are structured like InfoProviders by using InfoAreas. The folders (organized by object type) underneath a data flow show you which persistent objects (including runtime objects and transformations) are contained in the data flow. Here the remaining objects for a persistent object are displayed downwards in the data flow (as in the other object trees in the Data Warehousing Workbench). These objects are displayed even if they are not contained in the data flow that is modeled here. This makes it easier to include relevant objects in the modeled data flow. In the object tree, the symbol in the object information column indicates whether a data flow has been created as a template.
You can open the data flow maintenance screen for a data flow by choosing the context menu command Display or Change.

Note
Note that the SAP data flow templates provided by SAP are only displayed in the object tree if they exist in an active version. The active versions are then displayed in the Business Information Warehouse InfoArea. To use them in customer-specific data flows, however, the SAP data flow templates do not need to be copied over from the content version to the active version. You can display the SAP data flow templates in the data flow maintenance and integrate them into your data flow in the content version.

Data Flow Display in the Data Flow Maintenance
The data flow maintenance shows a data flow (or template) as a network with the following objects:
Modeling objects that contain other objects or are derived from them: for example, InfoSets, MultiProviders, semantically partitioned objects, aggregation levels, HybridProviders
Modeling objects that are used as the source and/or target for data transfer processes and transformations: for example, InfoCubes, DataStore objects, DataSources, open hub destinations
Objects that have source or target objects: transformations, data transfer processes
Other objects: InfoPackages that have BW DataSources as their target
Query and reporting objects do not belong to the data flow and are not displayed.
The components of the data flow are displayed as nodes (boxes) or connections (arrows). InfoPackages and modeling objects are displayed in the data flow maintenance screen as nodes. Data transfer processes, transformations, and relationships between composite or derived modeling objects and the objects they contain are displayed as connections by default. Nodes for persistent modeling objects (including transformations) are highlighted in blue, nodes for persistent runtime objects are highlighted in gray, and non-persistent objects are highlighted in white. The object symbols for connections are displayed in white, provided that an active version of the object is available.
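The network described here is essentially a set of typed nodes plus typed connections, where each node carries a persistence flag. As a rough illustration only (this is not a BW API; all type and field names below are assumptions), such a structure could be sketched in ABAP as follows:

* Illustrative sketch only - not a BW API: the information a data flow (DMOD)
* keeps about its nodes and the connections between them.
TYPES: BEGIN OF ty_node,
         tlogo       TYPE c LENGTH 4,    " object type, for example RSDS, ODSO, CUBE
         techname    TYPE c LENGTH 30,   " technical name
         description TYPE string,
         persistent  TYPE abap_bool,     " already saved in the BW metadata tables?
       END OF ty_node,
       BEGIN OF ty_connection,
         source_name TYPE c LENGTH 30,
         target_name TYPE c LENGTH 30,
         kind        TYPE string,        " e.g. 'Transformation' or 'Data transfer process'
       END OF ty_connection,
       ty_nodes       TYPE STANDARD TABLE OF ty_node       WITH EMPTY KEY,
       ty_connections TYPE STANDARD TABLE OF ty_connection WITH EMPTY KEY.

DATA(lt_nodes) = VALUE ty_nodes(
  ( tlogo = 'RSDS' techname = 'ZSALES_DS' persistent = abap_true  )    " persistent DataSource
  ( tlogo = 'ODSO' techname = 'PLSHD0U0'  persistent = abap_false ) ). " non-persistent placeholder

DATA(lt_connections) = VALUE ty_connections(
  ( source_name = 'ZSALES_DS' target_name = 'PLSHD0U0' kind = 'Transformation' ) ).

In the maintenance screen, it is this persistence flag that determines the node color (blue or gray versus white), as described above.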

2.1 Creating Data Flows or Data Flow Templates


Use
Using the data flow tree in the Data Warehousing Workbench: Modeling, you can create a data flow or data flow template. The procedure is basically the same for both cases. When you save the data flow, you decide whether it should be saved as a data flow or as a template. You can also use the data flow display in the Data Warehousing Workbench to save existing data flows as separate data flows with the TLOGO object type 'DMOD'.

Procedure
Creating Data Flows or Data Flow Templates Using the Data Flow Tree
You are in the Data Warehousing Workbench: Modeling, in the data flow tree.
1. Select the InfoArea to which you want to assign the new data flow, or create a new InfoArea.
2. In the InfoArea context menu, choose Create Data Flow. A screen appears. Under Data Flow, enter a technical name and a description of the data flow.


Choose Continue. The data flow maintenance screen appears. Alternatively, you can use transaction RSDF to access the data flow maintenance screen.

Note
The technical name of a data flow is limited to 30 characters. The last few characters are filled with TMPL if you are creating a data flow template.
3. Add the required objects to the data flow. There are different ways of adding persistent and/or non-persistent objects (including adding objects from data flows and data flow templates). For more information, see Adding Objects and Connections to the Data Flow and Using SAP Data Flow Templates.

Note
If you use persistent objects in your data flow and then save the flow as a template later on, the system uses the persistent objects to create non-persistent objects for the template. The technical names and descriptions of the persistent objects are applied to the non-persistent objects.
4. Connect the objects to each other. For more information, see Adding Objects and Connections to the Data Flow.
5. Create documentation for the data flow and the associated objects. For more information, see Showing and Creating Documentation.
6. Check the data flow for consistency.

Note
A data flow is consistent and can be activated if all persistent objects contained in the data flow exist and have the object status 'active'. If the data flow contains non-persistent objects, warnings appear during the consistency check. However, the data flow can still be saved and activated. A data flow template is consistent and can be activated, provided that it only contains non-persistent objects.
7. There are two ways of saving and activating a data flow:
To save the data flow as a normal data flow, choose Save and then Activate.
To save the data flow as a data flow template, proceed as follows:
1. Choose Data Flow → Save as Template from the menu.
2. A dialog box appears where you can change the InfoArea assignment, technical name and description of the template. The last characters of the technical name are filled with TMPL.
3. Confirm your entries.

Note
A data flow template can only contain non-persistent objects. You can also save an existing data flow as a template. In this case, the template is created in addition to the existing data flow.
Creating a Data Flow Using the Data Flow Display of an Object
You are in an object tree in the Data Warehousing Workbench.
1. Choose Display Data Flow in the object context menu.
2. A screen appears. Decide how you want to display the data flow (upwards, downwards, or upwards and downwards) and confirm your selection.
3. In the data flow display of the object, save the displayed flow as a data flow.

2.2 Adding Objects and Connections to the Data Flow


Procedure
Adding Non-Persistent Objects
1. Using the toolbar on the left side of the data flow maintenance screen, you can insert objects into the data flow. Drag and drop the required object symbol into the edit window. An empty, white node is displayed for the object type in the edit window.

Note
Note that you cannot specify the type of the DataStore object or InfoCube when selecting a DataStore object or InfoCube as a non-persistent object. At this stage, you cannot specify whether a DataStore object is a standard or a write-optimized DataStore object. You can only do this when you create the persistent object.
2. Choose Change in the node context menu to give the object a technical name and a description.
Adding Persistent Objects
Creating an Object in the Data Flow Maintenance
1. To create a persistent object in the data flow maintenance screen, first create a non-persistent object and choose Create in the context menu of the non-persistent object. The object maintenance screen of the relevant object type appears (for example, DataStore object). Here you can specify the object properties as well as save and activate the object. Depending on the object type, you can select the object type (for example, write-optimized DataStore object) in the create object dialog box or on the object maintenance screen.

Note
When you create a composite or derived object (for example, a HybridProvider), then persistent predecessor objects or subobjects and the corresponding connections to the higher-level object are automatically created or transferred (in the example with the HybridProvider, a DataStore object and an InfoCube or a VirtualProvider and an InfoCube).


2. When you go back, you return to the data flow maintenance. Composite or derived objects are automatically displayed with all valid predecessor objects or subobjects.
3. There are two ways to create runtime objects (InfoPackage and data transfer process) and transformations:
You can create objects by using the object context menu. Data transfer processes and transformations are usually created using the target object context menu (except for DataSources). InfoPackages are created using the DataSource context menu.
Data transfer processes and transformations can also be created using connections. First establish a connection between the source and the target. The first connection is created as a transformation. The other connections are created as data transfer processes (except for the DataSource). Use the connection context menu or double-click a connection to access the object maintenance. Here you create the object.
Note that you can only create data transfer processes and transformations if active source and target versions exist. Once you have created and added a runtime object or a transformation, any new sources are also displayed in the data flow maintenance.
Adding Existing Objects and Data Flows
Using an Existing Object from an Object Tree of the Data Warehousing Workbench
You can add objects from a Data Warehousing Workbench object tree to the data flow maintenance by using drag and drop. Hold down the control key for multiple selection.
Transferring an Existing Object in the Data Flow Maintenance
1. On the application toolbar in the data flow maintenance, choose the pushbutton for adding an existing object.
2. A dialog box appears. First select the object type and then the required object.
3. To add the object and return to the network, choose Continue. If you want to add further objects to the data flow, choose Add. In this case, the dialog box remains open.

Using an Existing Object with Objects from the Previous or Next Level
1. Add an object from an existing data flow to your current data flow.
2. Choose Show Objects on Previous Level or Show Objects on Next Level in the object context menu. The objects connected to the object one level higher/lower in the data flow are added to the current data flow. Connecting objects on the previous level is particularly useful for composite or derived objects, such as MultiProviders.
Use Existing Data Flow
1. Add an object from an existing data flow to your current data flow.
2. In the context menu of the object, choose Use Data Flow of Object. A dialog appears where you can specify the data flow direction that you want to display. You can choose between the options Upwards, Downwards, and Upwards and Downwards. The selected data flow for the initial object is displayed on the next screen. You can integrate this data flow into your data flow by using the corresponding pushbutton.
Use Data Flow or Data Flow Template (Object DMOD) in a Data Flow
You can also add existing data flows or data flow templates of object type DMOD to the current data flow. In this way, you can use data flow templates, for example, to ensure that the data flows are modeled according to organizational standards and best practices. In the current data flow, you can add and combine multiple data flows and templates. Using the data flow templates for specific layers of the Layered Scalable Architecture, for example, you can create an aggregated data flow for your scenario.
1. Choose the pushbutton for applying a data flow or data flow template. A screen appears. The left screen area displays the data flows and data flow templates listed by InfoArea. The Data Flows area shows all the data flows (user-defined and SAP data flow templates). The Data Flow Templates area shows all the user-defined and SAP data flow templates. The SAP Data Flow Templates area only shows the data flow templates supplied by SAP.

Note
SAP data flow templates that are only included in the content version are also displayed here. It is not necessary to have an active version of the SAP data flow templates in order to use them in data flow maintenance and to integrate them into other data flows. More information: Using SAP Data Flow Templates.
2. Select a data flow or a data flow template. The selected data flow or data flow template is displayed in the right screen area.
3. Choose Continue to add the data flow or template to the current data flow. The data flow maintenance screen appears. Here you see the data flow or template that you have applied. The technical names and descriptions of non-persistent objects are retained when the objects are copied. The objects now belong to the current data flow.
Creating Connections Between Objects
To establish a connection between two objects, hold down the left mouse button and drag the mouse pointer from one object to the next. The system automatically creates the required connection type. For example, a standard arrow from the subobjects to the composite object is created for composite objects. A transformation is created for the first connection between a source and a target, and a data transfer process is created for each subsequent connection.
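The connection rule just described can be pictured as a simple decision: the first connection drawn between a given source and target becomes a transformation, each further one a data transfer process. The sketch below is illustrative only (the variable names are assumptions, not BW code):

* Illustrative only: what the system creates for a newly drawn connection
* between a source and a target, following the rule described above.
DATA(lv_connections_so_far) = 0.   " connections already drawn for this source/target pair

DATA(lv_created_object) = COND string(
  WHEN lv_connections_so_far = 0 THEN `Transformation`
  ELSE `Data transfer process` ).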

Note
When you create an InfoPackage using the DataSource context menu, the connection between the DataSource and the InfoPackage is displayed automatically when you return to the data flow maintenance.
Integration with the Data Flow Wizard
Any objects created in data flow maintenance using the Data Flow Wizard are automatically displayed in the data flow once you have finished creating the data flow. For more information on the data flow wizard, see Generating Simple Wizard-Based Data Flows.

2.3 Using SAP Data Flow Templates


Prerequisites

The SAP data flow templates do not have to be transferred from the technical content. They are visible in graphic data flow modeling and can be used as templates immediately. However, they are only displayed in the data flow object tree of the Data Warehousing Workbench if they have been saved in an active version.

Context
Data flow templates help you to implement a layered scalable architecture (LSA) in your BW system. The LSA is a reference architecture that has a structure based on the principles of the Enterprise Data Warehouse. It ensures consistent, scalable and flexible implementation of the Business Warehouse. The larger the BW system is or could potentially become, the more important it is to standardize using LSA. The LSA also provides important guidelines for smaller BW systems. More information on LSA: Building and Running a Data Warehouse

Procedure
1. Proceed as described in Creating Data Flows or Data Flow Templates, steps 1 and 2.
2. Choose the pushbutton for applying a data flow template. You see a list of the SAP data flow templates.
3. Click an SAP data flow template to view it and to read the corresponding documentation.
4. Select a template that meets your requirements and apply it.
5. To create persistent objects in your data flow, proceed as described under "Creating an Object in Data Flow Maintenance". For more information, see Adding Objects and Connections to the Data Flow.

2.3.1 Naming Conventions for SAP Dataflow Templates


Concept
Data flow templates build on one another: Templates with higher numbers are based on ones with lower numbers and enhance them. Template LSA200 is an enhancement of template LSA100, for example. LSA100 covers the simplest data flow. Subsequent data flow templates build on this and are more complex.

2.3.1.1 Naming Conventions for InfoProviders of SAP Dataflow Templates


Concept
The naming of InfoProviders is regulated by the naming conventions for DataStore objects and the naming conventions for semantically partitioned objects. The InfoProviders of the data flow templates are named as follows:
Layer (1st position): The first letter of the layer where the InfoProvider is located
Area (2nd-5th position): A four-character abbreviation that describes the data content (the DataSource or business scenario, for example)
Sequence number (6th position): The sequence number of the InfoProvider, if there is more than one InfoProvider in this layer for this area
Domain (7th position): A single-character abbreviation that stands for the domain
Partition (8th position): A single-character abbreviation that stands for a further partitioning of the InfoProvider
Example
Name of the InfoProvider:
Character: P L S H D 0 U 0
Position:  1 2 3 4 5 6 7 8

1st position: P = Propagation layer
2nd-5th position: LSHD = Area: DataSource for sales order
6th position: 0 = First InfoProvider in this area in this layer
7th position: U = Domain US
8th position: 0 = No further semantic partitioning
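As an illustration of the convention, the following minimal ABAP sketch composes the example name from its parts. It is illustrative only; the variable names are assumptions, not part of any BW API:

* Illustrative only: building an InfoProvider name according to the
* position-based convention described above.
DATA(lv_layer)     = 'P'.      " 1st position: Propagation layer
DATA(lv_area)      = 'LSHD'.   " 2nd-5th position: area (sales order DataSource)
DATA(lv_sequence)  = '0'.      " 6th position: first InfoProvider of this area in this layer
DATA(lv_domain)    = 'U'.      " 7th position: domain US
DATA(lv_partition) = '0'.      " 8th position: no further semantic partitioning

DATA(lv_infoprovider) = |{ lv_layer }{ lv_area }{ lv_sequence }{ lv_domain }{ lv_partition }|.
" Result: 'PLSHD0U0' - the eight-character name shown in the example above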

2.3.1.2 Naming Conventions for InfoSources and DTPs for SAP Dataflow Templates
Concept
Naming Conventions for the InfoSources
When a semantically partitioned object is activated, two InfoSources are generated: an inbound InfoSource before the semantically partitioned object and an outbound InfoSource after it. The name of the InfoSource is formed from the name of the semantically partitioned object.


Example: The name of the semantically partitioned object is PLSHD0:
PLSHD0_I: Name of the InfoSource before the semantically partitioned object (I = Inbound)
PLSHD0_O: Name of the InfoSource after the semantically partitioned object (O = Outbound)
Naming Conventions for the DTPs
Data transfer processes (DTPs) can be generated for semantically partitioned objects. The name of the DTP is formed from the technical name of the source and the technical name of the target.
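A minimal sketch of how these generated names relate to the technical names involved (illustrative ABAP only; the source and target names and the underscore separator in the DTP name are assumptions, since the exact format is not specified here):

* Illustrative only: deriving the generated object names from the conventions above.
DATA(lv_spo) = 'PLSHD0'.                    " semantically partitioned object
DATA(lv_infosource_in)  = |{ lv_spo }_I|.   " inbound InfoSource:  'PLSHD0_I'
DATA(lv_infosource_out) = |{ lv_spo }_O|.   " outbound InfoSource: 'PLSHD0_O'

DATA(lv_source) = 'PLSHD0U0'.               " source of the DTP (assumed example)
DATA(lv_target) = 'ALSHD0U0'.               " target of the DTP (assumed example)
DATA(lv_dtp)    = |{ lv_source }_{ lv_target }|.  " name formed from source and target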

2.3.2 LSA100: Basic Flow - Layering Data & Logic


Concept
SAP data flow template LSA100 provides a simple and basic structure for the levels and data flow logic. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.

2.3.3 LSA110: Basic Flow Extensions - CM & Harmonization InfoSource


Concept
SAP data flow template LSA110 introduces an extra layer: the Corporate Memory (CM). This layer is intended to store the load history, since keeping it in the PSA alone is not satisfactory. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.


2.3.4 LSA300: Tactical Scalability - Flow Split Logical Partitioning


Concept
SAP data flow template LSA300 shows a solution for cases where an individual InfoProvider cannot offer the required level of service. In this case, the LSA recommends semantically partitioning the InfoProvider. Instead of storing the data in just one InfoProvider, it is split over multiple InfoProviders with the same structure, thus splitting the data flow. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.

2.3.5 LSA310: Scalability - Flow Split and InfoSources


Concept
To obtain greater transparency and flexibility, the LSA recommends using InfoSources in front of all InfoProviders that store data. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.


2.3.6 LSA315: Scalability - InfoSources & Single ERP Source


Concept
SAP data flow template LSA315 is a simplification of SAP data flow template LSA310. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.

2.3.7 LSA320: Scalability - Entire Data Flow Split


Concept
SAP data flow template LSA320 offers a solution for cases where performance problems occur while loading in a simple data flow. In this template, the data flow is split immediately after the PSA. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.


2.3.8 LSA330: Scalability - Flow Split Using a Pass Thru DataStore Object
Concept
SAP data flow template LSA330 is very similar to template LSA320. The difference is the pass-thru DataStore object. This makes it possible to handle errors at a very early stage and simplifies data splitting. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.

2.3.9 LSA400: Scalability & Domains - Strategic Flow Split


Concept
In global BW systems, administration across various time zones represents something of a challenge. Here, the LSA implements data domains as strategic semantic partitioning. This makes it possible to keep the data from the various time zones apart. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.


2.3.10 LSA410: Scalability & Domains - Strategic Flow Collect


Concept
If more than one source system is connected to a BW system for the same business processes, the LSA recommends using data domains. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.

2.3.11 LSA420: Scalability & Domains - Business Transformation Layer


Concept
SAP data flow template LSA420 is based on SAP data flow template LSA400. SAP data flow template LSA420 illustrates the role of the Business Transformation layer. For further information, see the documentation in the BW system for this SAP data flow template. The documentation is displayed automatically when you view the SAP data flow template.


2.4 Showing and Creating Documentation


Use
A data flow and its objects can contain documentation. You can show or edit the documentation on the data flow maintenance screen.

Procedure
Showing Documentation
To display the documentation, choose the documentation pushbutton.

A documentation screen area appears below the data flow. Once you have selected a data flow object, the documentation for this object is displayed. If you have not selected a data flow object, the documentation for the data flow is displayed. The documentation title consists of the TLOGO object type, the object description and the technical object name. For the data flow, only the object type and technical name are shown in the title. To switch to the documentation for a different object, select the required data flow object. To switch to the data flow documentation, select the background of the data flow maintenance screen.
Creating and Editing Documentation
1. To create documentation for the data flow or a selected object, choose the create function in the documentation area of the data flow maintenance screen (change mode).
2. Specify the format of the documentation that you want to create. You can create a plain text file, or you can use HTML (if you have special formatting requirements). You can change this setting for every document (in other words, for every data flow object).
3. Create the documentation in the editor. If the documentation is available as a local file, you can add it by uploading the file.

Note
If you want HTML documentation, we recommend that you create the documentation in an external HTML editor. You can then transfer it to the documentation maintenance screen for graphic modeling, either by dragging and dropping the HTML file into the editor or by uploading the file.

4. Save the documentation with the data flow.
Additional Functions
Search: To search for specific character strings in the documentation, use the search function.
Export: To save the documentation on a local computer, use the export function.

2.5 Saving Data Flow Documentation as an HTML File


Use
You can display the data flow documentation for an entire data flow in HTML format as a preview, and save it as an HTML file on a computer, or print it. In this way, you can keep a record of the current modeling status and obtain a good basis for documentation or for an audit of your BW project.


The data flow documentation contains the following information:
Header data: the date on which the HTML page was created, and the system that contains the data flow
An overview table for the objects of the data flow. The object names are linked, and clicking these links displays information about the objects.
For each active object in a data flow:
If available: the documentation created in graphical modeling
The metadata description from the Metadata Repository: a table containing a description of the object with the technical name, descriptions (short and long), object version, the user who last changed the object, the last change, and the logical system name
If available: links to BW documents about the object
A table containing an overview of the metadata for the object, for example contained objects such as InfoObjects, or object-specific details. For some object types, you can show and hide additional details. For example, for transformations, you can show or hide the field details and rule details (formula definitions and ABAP code of the routines, including the start routine, end routine, and expert routines), and for InfoSets, you can show or hide the InfoProviders and their fields and the join conditions.
For transformations: a table showing an overview of the field mapping and, for each conversion, formula, or routine, a list of the fields used in the overview of field mapping
For data transfer processes: a list of the fields used in the filter with information on the filter conditions, and a list of the key fields for the error stack
Tables containing overviews of objects in the environment, such as required objects, objects from which data was received, or objects to which data was passed.
From the preview, you can display an HTML file containing the documentation for the queries of all InfoProviders and planning functions contained in the data flow. You can also save this documentation as an HTML file.

Procedure
Calling the Data Flow Documentation as a Preview
You can display the data flow documentation from the graphical modeling and from the data flow display:
If you are in the graphical modeling, choose the data flow documentation pushbutton from the Data Warehousing Workbench toolbar or from the graphical modeling toolbar.

Note
You can only call the function if there is an active version of the data flow object and the current version of the object is the active version. For example, if you activate a data flow after editing it, the modified version (M version) is retained in graphical modeling and you cannot call the function. In this case, in the header data area, in the Version field, choose the Active entry, or exit the data flow maintenance and reopen the data flow.
If you are in the data flow display, choose the data flow documentation pushbutton from the data flow display toolbar.

The system displays the HTML documentation for the data flow as a preview in a window.
Navigating in the Preview
The following navigation options are available:
From the overview table of the objects in the data flow or, if the data flow exists as a data flow object, from the Required Objects table for the data flow object, you can display the documentation for the individual objects contained in the data flow by clicking the object names.
You can return to the start of the file by clicking the Up link in the top right of the documentation area for an object.
You can return to the previous navigation status in the preview by choosing Back (right mouse button) in the context menu of the embedded browser. A standard Web browser is embedded using SAP HTML Viewer in the preview screen. This means you can use standard browser functions such as Back, Find and Print, which you can access in the context menu (right-click) or by pressing the relevant keyboard shortcuts.
If you open links to BW documents that are displayed in the preview window, you can return to the previous page of the preview by choosing Back in the embedded browser.
Setting the Detail Level of the Information in the Preview
The following functions are available in the toolbar of the preview screen:
One pushbutton shows the field details and rule details for all transformations contained in the data flow; another pushbutton hides these details again.

You can also show or hide detailed information, such as field and/or rule details for transformations or the InfoSet details, at the level of individual objects. To do this, choose the corresponding buttons, which are displayed in the documentation area for the object.
Display Query Documentation
The query documentation is not contained directly in the HTML preview of the data flow. To switch to the preview of the query documentation, choose the corresponding pushbutton from the toolbar of the preview screen. In the query documentation, you can use the InfoProvider overview to display the documentation of the queries of an InfoProvider by clicking the object name of the InfoProvider. You can return from the preview of the query documentation to the preview of the data flow by choosing the corresponding pushbutton in the toolbar of the preview screen.
Saving the Documentation from the Preview as a File or Files
You can save the documentation as a file or files on a computer by choosing the save function in the preview toolbar. This saves the documentation as it is currently displayed in the preview, that is, with information shown or hidden. The following options are available when saving:
With data flow documentation (HTML file)
With query documentation (HTML file)
With data flow graphic (JPG file)
Once you have confirmed your selection, you specify a storage location and enter a file name. The system uses the technical name of the data flow as the default file name. The file names for the different files use the following naming convention:
Data flow documentation: <specified file name>.HTML


Query documentation: <specified file name>_QUERY.HTML
Data flow graphic: <specified file name>.jpg
If you open the HTML files, they are displayed in the browser. As in the preview, a number of navigation options are available in the HTML files: From the overview of the objects, you can display the documentation for individual objects by clicking the object names, and can return to the start of the file from there by choosing the Up link. You can also return to the previous navigation status by choosing Back (right-click) in the context menu of the browser. If BW documents exist for the objects, you can also display these from the HTML files. When you open the first BW document, you log on to the BW system on which the documents are stored. You can also return to the previous page here by choosing Back from the context menu of the browser (right mouse button). Note that the SAP front end (SAP GUI) must be installed on the computer before you can log on to the BW system. Depending on your browser settings, at the level of the individual objects in the HTML files, you can also show and hide details such as transformations or InfoSets.
Printing the Documentation from the Preview
You can print the data flow documentation or query documentation (or, for example, save them as PDF files by using a PDF printer driver) by choosing the print function in the toolbar of the data flow preview or the query preview.
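To make the naming convention concrete, the following sketch derives the three file names from a data flow's technical name. It is illustrative only; the data flow name used here is an assumption:

* Illustrative only: file names generated when saving the preview,
* derived from the data flow's technical name as described above.
DATA(lv_dataflow)   = 'ZSALES_FLOW'.                 " assumed technical name
DATA(lv_doc_file)   = |{ lv_dataflow }.HTML|.        " data flow documentation
DATA(lv_query_file) = |{ lv_dataflow }_QUERY.HTML|.  " query documentation
DATA(lv_graphic)    = |{ lv_dataflow }.jpg|.         " data flow graphic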

2.6 Additional Functions in Graphic Data Flow Modeling


Use
Functions on the Toolbar in the Data Flow Maintenance Screen
Undoes the last changes made, one by one, provided that the changes do not affect any persistent objects in the data flow. Only undoes the last changes made to non-persistent objects. Does not undo the function Show Objects on Previous/Next Level for a persistent object and cannot undo a persistent object that was changed or created on the data flow maintenance screen.
Redoes the editing steps one by one, if you have returned to a previous editing step by choosing undo.
Displays the documentation screen area for the selected object. If no data flow object is selected, the documentation screen area for the data flow is displayed. Here you create documentation for the data flow and the associated objects. More information: Showing and Creating Documentation
Rearranges the objects in the data flow display to make the layout clearer. The setting is applied in the settings dialog box and saved with the data flow.
Displays the data flow either vertically or horizontally. The setting is applied in the settings dialog box and saved with the data flow.
Zooms into the graphic.
Zooms out of the graphic.
Zooms in/out of the graphic or resizes the graphic to fit the screen area.
Shows or hides the navigation window. This window makes it easier to navigate in a complex data flow that cannot be displayed in full on the screen.
Searches for an object in the data flow. Enter a character string and select the search criteria. If the system finds an object with a technical name and/or description containing the character string, the data flow object is highlighted in yellow. Displays the next object, if multiple matches were found.
Prints the graphic.
Saves the graphic as a JPEG file on a local computer.
Saves documentation for the complete data flow (including metadata descriptions from the metadata repository) as an HTML file. For more information, see Saving Data Flow Documentation as an HTML File.
Shows or hides the technical names of the objects. The setting is applied in the Complete Data Flow dialog box and saved with the data flow.

You can add data transfer processes, transformations or InfoPackages to the data flow if their reference objects are already contained in the data flow. The reference objects for data transfer processes are transformations. The source objects and target objects are the reference objects for transformations. InfoPackages have DataSources as reference objects. Note that objects are only added to the data flow if you have not hidden their object types in the settings dialog box.

In this dialog box, you can configure the following settings:
You can hide different types of objects: You can hide data transfer processes or transformations from the data flow display. You can also hide InfoPackages and source systems in the display.
Use the relevant checkboxes to configure further settings for displaying the data flow: You can display all objects as nodes, show technical object names, specify a horizontal display for the data flow, and automatically customize the layout for every user action. These checkboxes correspond to the pushbuttons described above. If you select a pushbutton, the relevant checkbox is changed in the dialog box. The settings are saved with the data flow. The next time the data flow is opened, it is displayed with the saved settings.
You can display data transfer processes and transformations in the data flow maintenance screen as connections. You can also use this function to display the processes and transformations as nodes. The advantage of this is that the object descriptions are visible when the objects are displayed as nodes (unlike when displayed as connections). The setting is applied in the dialog box and saved with the data flow.

With this function, you can highlight all the objects that are located before or after a selected object in the data flow. If you then select a different data flow element, only the connections remain highlighted.
This function removes the highlighting from the data flow connections.
Adds existing objects to the data flow. More information: Adding Objects and Connections to the Data Flow
Removes selected objects from the data flow. The objects are not deleted from the system. Hold down the Ctrl key to select more than one object.

Functions for Objects
Standard functions are available in the context menu for persistent objects (nodes and connections). These functions are also available in the object trees in the Data Warehousing Workbench. Double-click an object to access the object maintenance. The following data-flow-specific functions are also available:
Use Data Flow of Object: More information: Adding Objects and Connections to the Data Flow
Show Objects on Previous Level: Shows the objects that are connected one level directly below in the data flow, before the persistent object. This is particularly useful for composite or derived objects such as MultiProviders.
Show Objects on Next Level: Shows the objects that are connected one level directly above in the data flow, after the persistent object.
Remove from Data Flow: Removes objects from the data flow. The objects are not deleted from the system.
Double-click non-persistent objects (nodes and connections) to open a dialog box where you can enter or change the name and description of the object. The following functions are available in the context menu:
Remove from Data Flow: Removes objects from the data flow. The objects are not deleted from the system.
Create: Creates a persistent object from the non-persistent object.
Change: Assigns a technical name and a description to the non-persistent object.
Use Existing Object (not for data transfer processes and transformations): Replaces a non-persistent object with an existing persistent object in the data flow.
Display (only in display mode)
Background Functions in Data Flow Maintenance
The following functions are available in the context menu for the data flow maintenance background:
Add Object: Adds existing objects to the data flow.
Apply Data Flow Template: Adds existing data flows or data flow templates to the data flow.
Copying a Data Flow
To copy a data flow or data flow template, choose Copy from the data flow context menu in the data flow tree. Use the Wizard for Data Flow Copies to copy the data flow and simultaneously create a copy of the objects contained in the data flow. You can also create a basic copy that contains the same objects as the original data flow.

Note
During a simple copy (without a wizard), non-persistent objects are copied and the technical names and descriptions are retained. The objects can only ever exist in one data flow. During a wizard-based copy, the non-persistent objects are also copied. However, these objects are not displayed in the wizard. They only become visible in the copied data flow. For more information on wizard-based copies, see Copying a Data Flow.
Displaying Properties
Choose Data Flow → Properties to display the technical name, description, InfoArea, and information on the person responsible and the last change made.

Activating Objects in a Data Flow
If you make changes to an object, this can also affect other objects in the data flow. DataSource changes result in an inactive transformation, for example, as the transformation generally has to be changed if changes are made to the DataSource structure. To take account of dependencies like this, you can activate the objects directly in the data flow maintenance transaction. To do this, choose Data Flow → Activate All Objects in Data Flow. You can either activate all persistent objects in the data flow, or stipulate that all objects with the status Inactive are activated and/or all objects whose M-version does not match the A-version.
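The selection logic just described (activate everything, or only objects that are inactive or whose M-version differs from the A-version) can be pictured roughly as follows. This is an illustrative sketch only, not the BW implementation, and all names are assumptions:

* Illustrative only: selecting the data flow objects that need activation,
* following the two criteria described above.
TYPES: BEGIN OF ty_flow_object,
         techname   TYPE c LENGTH 30,
         is_active  TYPE abap_bool,   " object status 'active'?
         m_equals_a TYPE abap_bool,   " M-version identical to A-version?
       END OF ty_flow_object,
       ty_flow_objects TYPE STANDARD TABLE OF ty_flow_object WITH EMPTY KEY.

DATA(lt_objects) = VALUE ty_flow_objects(
  ( techname = 'PLSHD0U0' is_active = abap_false m_equals_a = abap_false )
  ( techname = 'ALSHD0U0' is_active = abap_true  m_equals_a = abap_true  ) ).

LOOP AT lt_objects INTO DATA(ls_object)
     WHERE is_active = abap_false OR m_equals_a = abap_false.
  " ls_object-techname would be activated here
ENDLOOP.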

Note


Before activating, make sure that all objects in the data flow are consistent. 3.x objects are ignored during activation and have to be activated manually.
Displaying the Content Version for SAP Data Flow Templates
You can display the delivery version of SAP data flow templates on the data flow maintenance screen. Choose D Version in the Version field, in the data flow header data.
Exporting and Importing Versions
You can export a version of a data flow or a data flow template as an XML file, or import an XML file. In data flow maintenance, choose Goto → Version Management and choose Export or Import in the context menu of the relevant version. More information: Version Management
Deleting Objects
Objects that have been deleted, but not in data flow maintenance, are highlighted in red in data flow maintenance and must be manually removed from the data flow. Objects deleted in data flow maintenance are automatically removed from the data flow after deletion.
Deleting a Data Flow
To delete a data flow or data flow template, choose Delete from the data flow context menu in the data flow tree. The non-persistent objects are also deleted. The persistent data flow objects are not deleted.
Transporting a Data Flow
You can transport a data flow or a data flow template by using the standard transport system or the BW transport connection:
To transport the data flow using the BW transport connection, choose Environment → Transport Using Workbench. In this case, you can collect the objects contained in the data flow and transport them. The source system dependencies in the target system are taken into account.
To transport the data flow using the standard transport system, choose Environment → Manual Transport. Specify a package and a transport request in the dialog screens that follow. In this case, only the data flow itself (object type DMOD) is transported and not the objects contained in the data flow.
More information: Transporting BW Objects and Creating, Delivering and Installing BI Content
Displaying BW Documents for a Data Flow
Besides the documentation that you can create directly for the data flow and for data flow objects, you can also create and display BW documents for a data flow by choosing the corresponding pushbutton. More information: Documents

3 Enterprise Data Warehouse Layer


Concept
For a detailed description of the Enterprise Data Warehouse layer, see Enterprise Data Warehouse Layer. In SAP NetWeaver Business Warehouse, the layers of the Enterprise Data Warehouse are mapped using the following objects:
Data Acquisition Layer: Persistent Staging Area (PSA)
Quality and Harmonization Layer: Transformation. Complex requirements can mean that the data in this layer needs to be persistently saved. Depending on the requirements, the data is saved in a standard DataStore object or in write-optimized DataStore objects.
Data Propagation Layer: Standard DataStore object, using semantic partitioning
Corporate Memory: Write-optimized DataStore objects

3.1 DataSource
Definition
A DataSource is a set of fields that provide the data for a business unit for data transfer into BW. From a technical perspective, the DataSource is a set of logically related fields that are provided to transfer data into BW either in a flat structure (the extraction structure) or in multiple flat structures (for hierarchies). There are the following types of DataSource:
DataSource for transaction data
DataSource for master data, subdivided into:
DataSources for attributes
DataSources for texts
DataSources for hierarchies

Use
DataSources supply the metadata description of source data. They are used to extract data from a source system and to transfer the data to the BW system. They are also used for direct access to the source data from the BW system.


The following image illustrates the role of the DataSource in the BW data flow:

The data can be loaded into the BW system from any source in the DataSource structure using an InfoPackage. You determine the target into which the data from the DataSource is to be updated during the transformation. You also assign the DataSource fields to the InfoObjects of the target objects in BW.
Scope of DataSource Versus 3.x DataSource
3.x DataSource
In the past, DataSources were known in the BW system under the object type R3TR ISFS; in the case of SAP source systems, they are DataSource replicates. The transfer of data from this type of DataSource (referred to as 3.x DataSources below) is only possible if the 3.x DataSource is assigned to a 3.x InfoSource and the fields of the 3.x DataSource are assigned to the InfoObjects of the 3.x InfoSource in transfer structure maintenance. A PSA table is generated when the 3.x transfer rules are activated, thus activating the 3.x transfer structure. Data can be loaded into this PSA table. If your data flow is modeled using objects that are based on the old concept (3.x InfoSource, 3.x transfer rules, 3.x update rules) and the process design is built on these objects, you can continue to work with 3.x DataSources when transferring data into BW from a source system.
DataSource
As of SAP NetWeaver 7.0, a new object concept is available for DataSources. It is used in conjunction with the changed object concepts in data flow and process design (transformation, InfoPackage for loading to the PSA, data transfer process for data distribution within BW). The object type for a DataSource in the new concept - called DataSource in the following - is R3TR RSDS. DataSources for transferring data from SAP source systems are defined in the source system; the relevant information of the DataSources is copied to the BW system by replication. This is referred to as DataSource replication in the BW system. DataSources for transferring data from other sources are defined directly in the BW system. A unified maintenance UI in the BW system, the DataSource maintenance, enables you to display and edit the DataSources of all the permitted types of source system. In DataSource maintenance you specify which DataSource fields contain the decision-relevant information for a business process and should therefore be transferred. When you activate the DataSource, the system generates a PSA table in the entry layer of BW. You can then load data into the PSA. You use an InfoPackage to specify the selection parameters for loading data into the PSA. In the transformation, you determine how the fields of the DataSource are assigned to the BW InfoObjects. Data transfer processes facilitate the further distribution of data from the PSA to other targets. The rules that you set in the transformation are applied here.
Overview of Object Types
A DataSource cannot exist simultaneously in both object types in the same system. The following table provides an overview of the (transport-relevant) metadata object types. The table also includes the object types for DataSources in SAP source systems:
3.x DataSource
BW, object type of the A or M version: R3TR ISFS
BW, object type of the shadow version: R3TR SHFS for non-replicating source systems; R3TR SHMP for replicating source systems, that is, SAP source systems (shadow object delivered in its own table with source system key)
SAP source system, object type of the A version: R3TR OSOA
SAP source system, object type of the D version: R3TR OSOD

DataSource
BW, object type of the A or M version: R3TR RSDS
BW, object type of the shadow version (source system independent): R3TR SHDS (shadow object delivered in its own table with release and version)
SAP source system, object type of the A version: R3TR OSOA
SAP source system, object type of the D version: R3TR OSOD

Recommendation
We recommend that you adjust the data flow for the DataSource as well as the process design to the new concepts if you want to take advantage of these concepts. You can do this by using the system-based, automatic migration for data flows. For more information, see Data Flow in the Data Warehouse and Migrating a Data Flow.

3.1.1 Functions for DataSources



Use
You can perform the following DataSource functions in the Data Warehousing Workbench object tree. The functions available differ depending on the object type (DataSource - RSDS, DataSource 3.x - ISFS) and the source system:
In the context menu for an application component, you can perform the following functions:
For both object types: Replicate metadata for all DataSources assigned to this application component.
For object type RSDS: Create DataSource.
In the context menu for a DataSource, you can perform the following functions:
For both object types: Display, delete, manage, create transformation, create data transfer process, create InfoPackage.
For object type RSDS: Change, copy (though not with an SAP source system as the target).
For object type ISFS: Create transfer rules, migrate.
Only for DataSources from SAP source systems (both object types): Display DataSource in source system, replicate metadata.
In the DataSource repository (transaction RSDS), you can perform the following functions. The functions available here also depend on the object type:
For both object types: Display, delete, replicate.
For object type RSDS: Change, create, copy (though not with an SAP source system as the target), restore DataSource 3.x (if the DataSource is the result of a migration and the migration was performed using the With Export option).
For object type ISFS: Migrate.

Features
The following table provides an overview of the functions available in the Data Warehousing Workbench and DataSource repository for DataSources and 3.x DataSources:
Function: Create
Description: If you want to create a new DataSource for transferring data using UD Connect, DB Connect or from flat files, you first specify the name of the DataSource, the source system - if applicable - and the data type of the DataSource. DataSource maintenance appears. You can enter the required data on the tab pages here.
Additional information: DataSource Maintenance in BW

Function: Display
Description: You are now in display mode for the DataSource. You can display a DataSource 3.x or a DataSource (emulation). You cannot switch to change mode from the emulated display.
Additional information: DataSource Maintenance in BW; Emulation, Migration, and Restoring DataSources

Function: Change
Description: DataSource maintenance appears in change mode. For data transfer from SAP source systems, you use this interface to select the fields of the DataSource to be transferred and to make specifications for the format and conversion of field contents from the DataSource.
Additional information: DataSource Maintenance in BW

Function: Copy
Description: You can use a DataSource as a template to create a new DataSource. This function is not available if you want to use an SAP source system as the target. For SAP source systems, you can create DataSources in the source system in generic DataSource maintenance (transaction RSO2).

Function: Delete
Description: When you delete a DataSource, the dependent objects (such as a transformation or InfoPackage) are also deleted.

Function: Manage
Description: The overview screen for requests in the PSA appears. Here you can select the requests that contain the data you want to call in PSA maintenance.
Additional information: Persistent Staging Area

Function (for SAP source systems): Display DataSource in source system
Description: The DataSource display in the SAP source system appears.

Function (for SAP source systems): Replicate metadata
Description: The BW-relevant metadata for DataSources in SAP source systems is transferred from the source system to BW using replication.
Additional information: Replication of DataSources

Function: Create transformation
Description: In the transformation, you determine how to assign the DataSource fields to InfoObjects in BW.
Additional information: Creating Transformations

Function: Create data transfer process
Description: In the data transfer process, you determine how to distribute the data from the PSA to additional targets in BW.
Additional information: Creating Data Transfer Processes

Function: Create InfoPackage
Description: In the InfoPackage, you determine the selections for transferring data to BW.

Function (for DataSources 3.x): Create transfer rules
Description: If the DataSource 3.x is assigned to an InfoSource, you determine how the DataSource fields are assigned to the InfoObjects of the InfoSource and how the data is to be transferred to the InfoObjects.

Function (for DataSources 3.x): Migrate
Description: You can migrate a DataSource 3.x to a DataSource, thereby converting the metadata on the database. The DataSource 3.x can be restored to its status before the migration if the associated objects of the DataSource 3.x (DataSource ISFS, mapping ISMP, transfer structure ISTS) are exported during migration. Before migrating, you are advised to create the data flow with a transformation based on the DataSource 3.x. You also have the option of using an emulated DataSource 3.x. As well as manually migrating individual DataSources, you can also migrate a complete data flow. We recommend using this system-supported automatic migration when migrating data flows. More information: Migrating a Data Flow
Additional information: Emulation, Migration, and Restoring DataSources

3.1.2 DataSource Maintenance in BW


Use
In DataSource maintenance in BW you can display DataSources and 3.x DataSources. You can use this interface to create or change DataSources for file source systems, UD Connect, DB Connect, and Web services in BW. In DataSource maintenance, you can edit DataSources from SAP source systems. In particular, you can specify which fields you want to transfer into BW. In addition, you can determine properties for extracting data from the DataSource and properties for the DataSource fields. You can also change these properties. You call DataSource maintenance from the context menu of a DataSource (Display or Change) or, if you are in the Data Warehousing Workbench, from the context menu of an application component in an object tree (Create DataSource). Alternatively, you can call DataSource maintenance from the DataSource repository. In the Data Warehousing Workbench toolbar, choose DataSource to access the DataSource repository.

3.1.2.1 Editing DataSources from SAP Source Systems in BW


Use
A DataSource is defined in the SAP source system along with its properties and field list. In DataSource maintenance in BW, you define which fields of the DataSource are to be transferred to BW. You can also change the properties for extracting data from the DataSource and properties for the DataSource fields.

Prerequisites
You have replicated the DataSource to BW.

Procedure
You are in an object tree in the Data Warehousing Workbench.
1. Select the required DataSource and choose Change.
2. Go to the General tab page. Select PSA in CHAR Format if you want to generate the PSA for the DataSource with character-type fields of type CHAR rather than with a typed structure. Use this option if conversion during loading causes problems, for example because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type. After activating the DataSource, you can then load data into the PSA and correct it there.
3. Go to the Extraction tab page.
   1. Under Adapter, you define how the data will be accessed. The options depend on whether the DataSource supports direct access and real-time data acquisition.
   2. If you select Number Format Direct Entry, you can specify the thousand separator and the decimal point character that are to be used for the DataSource fields (see the sketch after this procedure). If User Master Record is specified instead, the system applies the settings of the user under which the conversion exit is executed; this is generally the BW background user.
4. Go to the Fields tab page.
   1. Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BW.
   2. If required, change the setting for the Format of the field.
   3. If you choose an external format, make sure that the output length of the field (external length) is correct. Change the entries if necessary.
   4. If necessary, specify a conversion routine that converts data from an external format into an internal format.
   5. Under Currency/Unit, you can change the entries for the referenced currency and unit fields.
5. Check, save and activate your DataSource.
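To illustrate what the number format settings mean for incoming field values, here is a minimal, hypothetical Python sketch (not part of BW, and not the BW conversion logic itself) that converts an external number representation with a configurable thousand separator and decimal point character into an internal decimal value:

from decimal import Decimal

def to_internal_number(value: str, thousand_sep: str = ".", decimal_sep: str = ",") -> Decimal:
    # Remove the thousand separator and normalize the decimal character,
    # e.g. "1.234,56" with German settings becomes Decimal("1234.56").
    normalized = value.replace(thousand_sep, "").replace(decimal_sep, ".")
    return Decimal(normalized)

print(to_internal_number("1.234,56"))                                      # 1234.56
print(to_internal_number("1,234.56", thousand_sep=",", decimal_sep="."))   # 1234.56

The sketch only shows why the two characters must match the file contents: with the wrong settings, the same string would be parsed into a different value.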

Result
When you activate the DataSource, BW generates a PSA table and a transfer program. You can now create an InfoPackage. You define the selections for the data request in the InfoPackage. The data can be loaded into the inbound layer of the BW system, the PSA. Alternatively, you can access the data directly if the DataSource supports direct access, and you have defined a VirtualProvider in the data flow.

3.1.2.2 Creating DataSources for File Source Systems


Use
Before you can transfer data from a file source system, the metadata (the file and field information) must be available in BW in the form of a DataSource.


Prerequisites
Note the following with regard to CSV files:
Fields that are not filled in a CSV file are filled with a blank space if they are character fields and with a zero (0) if they are numerical fields.
If separators are used inconsistently in a CSV file, the incorrect separator (which is not defined in the DataSource) is read as a character; the two affected fields are merged into one field, which may be truncated, and the subsequent fields are no longer in the correct order (see the sketch after these notes).
A line break cannot be used as part of a value, even if the value is enclosed by an escape character.
Note the following with regard to CSV files and ASCII files:
The conversion routines that are used determine whether you have to specify leading zeros. See Conversion Routines in BW Systems.
For dates, you usually use the format YYYYMMDD, without internal separators. Depending on the conversion routine that is used, you can also use other formats.
Notes on Loading
When you load external data, you have the option of loading the data from any workstation into BW. For performance reasons, however, you should store the data on an application server and load it into BW from there. This also means that you can load the data in the background.
If you want to load a large amount of transaction data into BW from a flat file and you can specify the file type of the flat file, you should create the flat file as an ASCII file. From a performance point of view, loading data from an ASCII file is the most cost-effective method. Loading from a CSV file takes longer because the separator and escape characters have to be sent and interpreted. In some circumstances, however, generating an ASCII file may involve more effort.
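The following hypothetical Python sketch (not part of BW) illustrates the CSV behavior described above: empty fields default to a blank or zero, and a record that uses the wrong separator ends up with a merged field and shifted columns. The field names and sample values are purely illustrative:

import csv
from io import StringIO

# Two data records; the second one uses ';' instead of the defined ',' separator,
# so "4711;EN" is read as a single (merged) field and the remaining fields shift.
sample = StringIO("CUSTOMER,LANGU,AMOUNT\n1000,DE,250\n4711;EN,,\n")

for row in csv.DictReader(sample, delimiter=","):
    # Empty character fields default to a blank, empty numeric fields to zero,
    # mirroring how unfilled CSV fields are interpreted during loading.
    customer = row["CUSTOMER"] or " "
    langu = row["LANGU"] or " "
    amount = int(row["AMOUNT"]) if row["AMOUNT"] else 0
    print(repr(customer), repr(langu), amount)

Running the sketch prints '1000' 'DE' 250 for the correct record and '4711;EN' ' ' 0 for the record with the inconsistent separator.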

Procedure
You are in the Data Warehousing Workbench in the DataSource tree.
1. Select the application component in which you want to create the DataSource and choose Create DataSource.
2. On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy. The DataSource maintenance screen appears.
3. Go to the General tab page.
   1. Enter descriptions for the DataSource (short, medium, long).
   2. As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
   3. Specify whether you want to generate the PSA for the DataSource in the character format. If the PSA is not typed, it is not generated in a typed structure but with character-like fields of type CHAR only. Use this option if conversion during loading causes problems, for example because there is no appropriate conversion routine, or if the source cannot guarantee that data is loaded with the correct data type. In this case, after you have activated the DataSource, you can load data into the PSA and correct it there.
4. Go to the Extraction tab page.
   1. Define the delta process for the DataSource. You can use the generic delta: using a delta-relevant field whose values rise monotonically over time, the system determines at runtime which data to transfer (see the sketch after this procedure). More information: Using Generic BW Deltas.
   2. Specify whether you want the DataSource to support direct access to data.
   3. Real-time data acquisition is not supported for data transfer from files.
   4. Select the adapter for the data transfer. You can load text files or binary files from your local workstation or from the application server.
      Text-type files only contain characters that can be displayed and read as text. CSV and ASCII files are examples of text files. For CSV files you have to specify a character that separates the individual field values; in BW you specify this separator character and, as required, an escape character that marks the separator as a component of a value. After you have specified these characters, you have to use them in the file. ASCII files contain data in a specified length; the defined field length in the file must be the same as the length of the assigned field in BW.
      Binary files contain data in the form of bytes. A file of this type can contain any type of byte value, including bytes that cannot be displayed or read as text. In this case, the field values in the file have to be in the internal format of the assigned field in BW.
      Choose Properties if you want to display the general adapter properties.
   5. Select the path to the file that you want to load, or enter the name of the file directly, for example C:/Daten/US/Kosten97.csv. You can also create a routine that determines the name of your file. If you do not create a routine to determine the name of the file, the system reads the file name directly from the File Name field.
   6. Depending on the adapter and the file to be loaded, further settings need to be made.
      For binary files: Specify the character set settings for the data that you want to transfer.
      For text-type files: Specify how many rows in your file are header rows and can therefore be ignored when the data is transferred. Specify the character set settings for the data that you want to transfer.
      For ASCII files: If you are loading data from an ASCII file, the data is requested with a fixed data record length.
      For CSV files: If you are loading data from an Excel CSV file, specify the data separator and the escape character. In the Data Separator field, specify the separator that your file uses to divide the fields. If the data separator character is part of a value, the file indicates this by enclosing the value in particular start and end characters. Enter these start and end characters in the Escape Characters field.
      Example: You chose the ; character as the data separator, but your file contains the value 12;45 for a field. If you set " as the escape character, the value in the file must be "12;45" so that 12;45 is loaded into BW. The complete value that you want to transfer has to be enclosed by the escape characters. If the escape characters do not enclose the value but are used within it, the system interprets them as a normal part of the value. If you have specified " as the escape character, the value 12"45 is transferred as 12"45 and 12"45" is transferred as 12"45".
      In a text editor (for example, Notepad), check the escape character and the separator currently being used in the file. These depend on the country version of the file you used. Note that if you do not specify an escape character, the space character is interpreted as the escape character. We recommend using a different character as the escape character.
      If you select the Hex flag, you can specify the data separator and the escape character in hexadecimal format. When you enter a character for the data separator and the escape character, these are displayed as hexadecimal code after the entries have been checked. A two-character entry for a data separator or an escape character is always interpreted as a hexadecimal entry.
   7. Make the settings for the number format (thousand separator and character used to represent a decimal point), if necessary.
   8. Make the settings for currency conversion, if necessary.
   9. Make any further settings that are dependent on your selection, if necessary.
5. Go to the Proposal tab page. Here you create a proposal for the field list of the DataSource based on the sample data from your file.
   1. Specify the number of data records that you want to load and choose Upload Sample Data. The data is displayed in the upper area of the tab page in the format of your file. The system displays the proposal for the field list in the lower area of the tab page.
   2. In the table of proposed fields, use Copy to Field List to select the fields you want to copy to the field list of the DataSource. In the default setting, all fields are selected.
6. Go to the Fields tab page. Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If you did not transfer the field list from a proposal, you can define the fields of the DataSource here. If the system detects changes between the proposal and the field list when switching from the Proposal tab to the Fields tab, a dialog box is displayed where you can specify whether you want to copy the changes from the proposal to the field list.
   1. To define a field, choose Insert Row and enter a field name.
   2. Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BW.
   3. Instead of generating a proposal for the field list, you can enter InfoObjects to define the fields of the DataSource. Under Template InfoObject, specify InfoObjects for the fields in BW. This allows you to transfer the technical properties of the InfoObjects into the DataSource field. Entering InfoObjects here does not equate to assigning them to DataSource fields; this assignment is made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
   4. Change the data type of the field, if necessary.
   5. Specify the key fields of the DataSource. These fields are generated as a secondary index in the PSA. This is important to ensure good performance for data transfer process selections, in particular with semantic grouping.
   6. Specify whether lowercase is supported.
   7. Specify whether the source provides the data in the internal or external format.
   8. If you choose the external format, make sure that the output length of the field (external length) is correct. Change the entries if necessary.
   9. If necessary, specify a conversion routine that converts data from an external format into an internal format.
   10. Select the fields for which you want to be able to set selection criteria when scheduling a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
   11. Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
   12. If necessary, define in the field type whether the data to be selected is language-dependent or time-dependent.
7. Check, save and activate the DataSource.
8. Go to the Preview tab page. If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview. This function allows you to check whether the data formats and data are correct.
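To illustrate the generic delta mentioned in step 4 (a delta-relevant field whose values rise monotonically over time), here is a minimal, hypothetical Python sketch (not part of BW) of how a delta pointer can be used to select only new records at each extraction run. The field and record names are assumptions for the example:

# Source records with a monotonically rising delta-relevant field (e.g. a change timestamp).
source = [
    {"doc": "4711", "changed_at": "20140101120000"},
    {"doc": "4712", "changed_at": "20140102090000"},
    {"doc": "4713", "changed_at": "20140103150000"},
]

def extract_delta(records, last_pointer):
    """Return records newer than the stored pointer and the new pointer value."""
    new_records = [r for r in records if r["changed_at"] > last_pointer]
    new_pointer = max((r["changed_at"] for r in new_records), default=last_pointer)
    return new_records, new_pointer

pointer = "20140101235959"          # value saved after the previous load
delta, pointer = extract_delta(source, pointer)
print(delta)    # only documents 4712 and 4713 are transferred
print(pointer)  # 20140103150000 becomes the new delta pointer

The design point is simply that the delta-relevant value must never decrease; otherwise records could be skipped between runs.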

Result
The DataSource is created and is visible in the Data Warehousing Workbench in the DataSource overview for the file source system in the application component. When you activate the DataSource, the system generates a PSA table and a transfer program. You can now create an InfoPackage. You define the selections for the data request in the InfoPackage. The data can be loaded into the inbound layer of the BW system, the PSA. Alternatively, you can access the data directly if the DataSource supports direct access, and you have defined a VirtualProvider in the data flow.

3.1.2.3 Creating a DataSource for UD Connect


Use
To transfer data from UD Connect sources into BW, metadata (the information about the source object and source object elements) has to be available in BW in the form of a DataSource.

Prerequisites
You have connected a UD Connect source system. Note the following background information: Using InfoObjects with UD Connect Data Types and Their Conversion Using the SAP Namespace for Generated Objects

Procedure
You are in the Data Warehousing Workbench in the DataSource tree.
1. Select the application component in which you want to create the DataSource and choose Create DataSource.
2. On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy. The DataSource maintenance screen appears.
3. Go to the General tab page.
   1. Enter descriptions for the DataSource (short, medium, long).
   2. As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
4. Go to the Extraction tab page.
   1. Define the delta process for the DataSource.
   2. Specify whether you want the DataSource to support direct access to data.
   3. UD Connect does not support real-time data acquisition.
   4. The system displays Universal Data Connect (Binary Transfer) as the adapter for the DataSource. Choose Properties if you want to display the general adapter properties.
   5. Select the UD Connect source object. The connection to the UD Connect source is established. All source objects that are available in the selected UD Connect source can be selected using the input help.
5. Go to the Proposal tab page. The system displays the elements of the source object (for JDBC, these are the fields) and creates a mapping proposal for the DataSource fields. The mapping proposal is based on the similarity of the names of the source object element and the DataSource field and on the compatibility of the respective data types (see the sketch after this procedure). Note that source object elements can have a maximum of 90 characters. Both uppercase and lowercase are supported.
   1. Check the mapping and change the proposed mapping as required. Assign the non-assigned source object elements to free DataSource fields. You cannot map elements to fields if the types are incompatible. The system produces an error message in this case.
   2. Choose Copy to Field List to select the fields that you want to transfer to the field list of the DataSource. All fields are selected by default.
6. Go to the Fields tab page. Here you edit the fields that you transferred to the field list of the DataSource from the Proposal tab page. If the system detects changes between the proposal and the field list when switching from the Proposal tab to the Fields tab, a dialog box is displayed where you can specify whether you want to copy the changes from the proposal to the field list.
   1. Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BW.
   2. If required, change the values for the key fields of the source. These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
   3. If required, change the data type for a field.
   4. Specify whether the source provides the data in the internal or external format.
   5. If you choose an external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
   6. If required, specify a conversion routine that converts data from an external format into an internal format.
   7. Select the fields for which you want to be able to set selection criteria when you schedule a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
   8. Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
   9. Under Field Type, specify whether the data to be selected is language-dependent or time-dependent, as required.
   If you did not transfer the field list from a proposal, you can define the fields of the DataSource directly. Choose Insert Row and enter a field name. You can specify InfoObjects in order to define the DataSource fields. Under Template InfoObject, specify InfoObjects for the fields of the DataSource. This allows you to transfer the technical properties of the InfoObjects into the DataSource field. Entering InfoObjects here does not equate to assigning them to DataSource fields; assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
7. Check, save and activate the DataSource.
8. Go to the Preview tab page. If you select Read Preview Data, the number of data records you specified in your field selection is displayed in a preview. This function allows you to check whether the data formats and data are correct.
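The following hypothetical Python sketch (not part of BW, and not the actual proposal algorithm) illustrates the idea behind such a mapping proposal: candidate pairs are ranked by name similarity and only accepted if the data types are treated as compatible. All names and the compatibility set are assumptions for the example:

from difflib import SequenceMatcher

source_elements = {"CUSTOMER_ID": "string", "ORDER_AMOUNT": "decimal"}
datasource_fields = {"CUSTOMER": "CHAR", "AMOUNT": "CURR"}

# Assumed, simplified type compatibility for the purpose of the illustration.
compatible = {("string", "CHAR"), ("decimal", "CURR")}

def propose_mapping(elements, fields):
    proposal = {}
    for element, element_type in elements.items():
        best_field, best_score = None, 0.0
        for field, field_type in fields.items():
            if (element_type, field_type) not in compatible:
                continue  # incompatible types are never proposed
            score = SequenceMatcher(None, element, field).ratio()
            if score > best_score:
                best_field, best_score = field, score
        if best_field:
            proposal[element] = best_field
    return proposal

print(propose_mapping(source_elements, datasource_fields))
# {'CUSTOMER_ID': 'CUSTOMER', 'ORDER_AMOUNT': 'AMOUNT'}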

Result
The DataSource is created and added to the DataSource overview for the UD Connect source system in the application component in the Data Warehousing Workbench. When you activate the DataSource, the system generates a PSA table and a transfer program. You can now create an InfoPackage. You define the selections for the data request in the InfoPackage. The data can be loaded into the entry layer of the BW system, the PSA. Alternatively you can access the data directly if the DataSource allows direct access and you have a VirtualProvider in the definition of the data flow.

3.1.2.4 Creating DataSources for DB Connect


Use
Before you are able to transfer data from a database source system, the metadata, meaning the information on the tables, views, and fields, must be available in BW in the form of a DataSource.

Prerequisites
You have connected a DB Connect source system. For more information, see Requirements for Database Tables or Views.

Procedure
You are in the Data Warehousing Workbench in the DataSource tree.
1. Select the application component in which you want to create the DataSource and choose Create DataSource.
2. On the next screen, enter a technical name for the DataSource, select the type of the DataSource and choose Copy. The DataSource maintenance screen appears.
3. Go to the General tab page. Enter descriptions for the DataSource (short, medium, long). As required, specify whether the DataSource builds an initial non-cumulative and can return duplicate data records within a request.
4. Go to the Extraction tab page.
   Define the delta process for the DataSource.
   Specify whether you want the DataSource to support direct access to data.
   The system displays Database Table as the adapter for the DataSource. Choose Properties if you want to display the general adapter properties.
   Select the source from which you want to transfer data. Application data is assigned to a database user in the database management system (DBMS). You can specify a database user here. In this way you can select a table or view that is in the schema of this database user. To perform an extraction, the database user used for the connection to BW (also called the BW user) needs read permission in the schema of the database user. If you do not specify a database user, the tables and views of the BW user are offered for selection.
   Call the input help for the Table/View field. On the next screen, select whether tables and/or views should be displayed for selection and enter the necessary data for the selection under Table/View. Choose Execute. The database connection is established and the database tables are read. The Choose DB Object Names screen appears. The tables and views belonging to the specified database user that correspond to your selections are displayed on this screen. The technical name, type and database schema of each table or view are displayed.

Note
Only use tables and views in the extraction whose technical names consist solely of uppercase letters, numbers, and underscores (_). Problems may arise if you use other characters (see the sketch after this procedure). Extraction and preview are only possible if the database user used in the connection (BW user) has read permission for the selected table or view. Some of the tables and views belonging to a database user might not lie in the schema of that user. If the database user responsible for the selected table or view does not match the schema, you cannot extract any data or call up a preview. In this case, make sure that the extraction is possible by using a suitable view. For more information, see Database Users and Database Schemas.

5. Go to the Proposal tab page. The fields of the table or view are displayed here. The overview of the database fields tells you which fields are key fields, the length of the field in the database compared with the length of the field in the ABAP Data Dictionary, and the field type in the database compared with the field type in the ABAP Dictionary. It also gives you additional information to help you check the consistency of your data. A proposal for the DataSource field list is also created. Based on the field properties in the database, a field name and properties are proposed for the DataSource. Conversions such as from lowercase to uppercase or from " " (space) to "_" (underscore) are carried out. You can also change the names and other properties of the DataSource fields. Type changes are necessary, for example, if a suitable data type is not proposed. Changes to the name could be necessary if the first 16 characters of field names in the database are identical: the field name in the DataSource is truncated after 16 characters, so a field name could otherwise occur more than once in the proposal for the DataSource.

Note
When you use data types, be aware of database-specific features. For more information, see Requirements for Database Tables and Views.

6. Choose Copy to Field List to select the fields that you want to transfer to the field list of the DataSource. All fields are selected by default.
7. Go to the Fields tab page. Here you edit the fields that you transferred from the Proposal tab page to the field list of the DataSource. If the system detects changes between the proposal and the field list when switching from the Proposal tab to the Fields tab, a dialog box is displayed where you can specify whether you want to copy the changes from the proposal to the field list.
   Under Transfer, specify the decision-relevant DataSource fields that you want to be available for extraction and transferred to BW.
   If required, change the values for the key fields of the source. These fields are generated as a secondary index in the PSA. This is important in ensuring good performance for data transfer process selections, in particular with semantic grouping.
   Specify whether the source provides the data in the internal or external format.
   If you choose the external format, ensure that the output length of the field (external length) is correct. Change the entries, as required.
   If required, specify a conversion routine that converts data from an external format into an internal format.
   Select the fields for which you want to be able to set selection criteria when you schedule a data request using an InfoPackage. Data for this type of field is transferred in accordance with the selection criteria specified in the InfoPackage.
   Choose the selection options (such as EQ, BT) that you want to be available for selection in the InfoPackage.
   If required, define in the field type whether the data to be selected is language-dependent or time-dependent.
8. Check the DataSource. The field names are checked for uppercase and lowercase letters, special characters, and field length. The system also checks whether an assignment to an ABAP data type is available for the fields.
9. Save and activate the DataSource.
10. Go to the Preview tab page. If you select Read Preview Data, the specified number of data records is displayed in a preview. This function allows you to check whether the data formats and data are correct. If you can see in the preview that the data is incorrect, try to localize the error. See also: Localizing Errors
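As a simple illustration of the naming rule in the note above, the following hypothetical Python sketch (not part of BW) checks whether a technical table or view name consists solely of uppercase letters, numbers, and underscores. The sample names are made up:

import re

# Pattern for names built only from uppercase letters, digits and underscores.
VALID_DB_OBJECT_NAME = re.compile(r"^[A-Z0-9_]+$")

def is_extractable_name(name: str) -> bool:
    return bool(VALID_DB_OBJECT_NAME.match(name))

print(is_extractable_name("SALES_VIEW_2014"))  # True
print(is_extractable_name("SalesView"))        # False - lowercase letters
print(is_extractable_name("SALES-VIEW"))       # False - hyphen not allowed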

Result
The DataSource is created and is visible in the Data Warehousing Workbench in the DataSource overview for the database source system under the application component. When you activate the DataSource, the system generates a PSA table and a transfer program. You can now create an InfoPackage. You define the selections for the data request in the InfoPackage. The data can be loaded into the entry layer of the BW system, the PSA. Alternatively you can access the data directly if the DataSource supports direct access and you have a VirtualProvider in the definition of the data flow.

3.1.2.5 Creating DataSources for Web Services


Use

In order to transfer data into BW using a Web service, the metadata first has to be available in BW in the form of a DataSource.

Procedure
You are in the Data Warehousing Workbench in the DataSource tree.
1. Select the application component in which you want to create the DataSource and choose Create DataSource.
2. On the next screen, enter a technical name for the DataSource, select the type of DataSource and choose Copy. The DataSource maintenance screen appears.
3. Go to the General tab page.
   1. Enter descriptions for the DataSource (short, medium, long).
   2. If necessary, specify whether the DataSource may potentially deliver duplicate data records within a request.
4. Go to the Extraction tab page. Define the delta process for the DataSource. DataSources for Web services support real-time data acquisition. Direct access to data is not supported.
5. Go to the Fields tab page. Here you determine the structure of the DataSource, either by defining the fields and field properties directly or by selecting an InfoObject as a Template InfoObject and transferring its technical properties to the field in the DataSource. You can modify the properties that you have transferred from the InfoObject further to suit your requirements by changing the entries in the field list. Entering InfoObjects here does not equate to assigning them to DataSource fields. Assignments are made in the transformation. When you define the transformation, the system proposes the InfoObjects you entered here as InfoObjects that you might want to assign to a field.
6. Save and activate the DataSource.
7. Go to the Extraction tab page. The system has generated a function module and a Web service with the DataSource. They are displayed on the Extraction tab page. The Web service is released for the SOAP runtime.
8. Copy the technical name of the Web service and choose Web Service Administration. The administration screen for the SOAP runtime appears. You can use the search function to find the Web service. The Web service is displayed in the tree of the SOAP Application for RFC-Compliant FMs. Select the Web service and choose Web Service → WSDL to display the WSDL (Web Service Description Language) description.

Result
The DataSource is created and is visible in the Data Warehousing Workbench in the application component in the DataSource overview for the Web service source system. When you activate the DataSource, the system generates a PSA table and a transfer program. Before you can use a Web service to transfer data into BW for the DataSource, create a corresponding InfoPackage (push package). If an InfoPackage is already available for the DataSource, you can test the Web service push in Web service administration. See also: Web Services

3.1.3 Emulation, Migration and Restoring DataSources


Use
Emulation
3.x DataSources (object type R3TR ISFS) exist in the BW database in the metadata tables that were available in releases prior to SAP NetWeaver 7.0. The emulation permits you to display and use the DataSource 3.x using the interfaces of the new DataSource concept. The DataSource (R3TR RSDS) is instantiated from the metadata tables of the DataSource 3.x. You can display a 3.x DataSource as an emulated DataSource in DataSource maintenance in BW. You can also model the data flow with transformations for an emulated DataSource if active transfer rules, a transfer structure and a PSA already exist for the 3.x DataSource. Once you have defined the objects of the data flow, you can set up the processes for data transfer (loading process using an InfoPackage and data transfer process), along with other data processing processes in BW. We recommend that you use process chains. Emulation, and the definition of the data flow objects and processes based on it in accordance with the new concept, are a preparatory step in manually migrating the DataSource.

Note
If you use an emulated DataSource 3.x, note that the InfoPackage does not use all of the settings defined in the 3.x data flow, because in the new data flow it only loads the data into the PSA. To prevent problems arising from misunderstandings about using the InfoPackage, we recommend that you only use the emulation in development and test systems. More information: Using Emulated 3.x DataSources
Migration
You can migrate a 3.x DataSource that transfers data into BW from an SAP source system, from a file, or using DB Connect into a DataSource. Besides manually migrating an individual DataSource, you also have the option of performing a system-based, automatic migration of an entire data flow. We recommend that you use automatic migration for migrating data flows and their components. 3.x XML DataSources and 3.x DataSources that use UD Connect to transfer data cannot be migrated directly. However, you can use the 3.x versions as a copy template for a Web service or UD Connect DataSource.
Manual Migration (SAP Source Systems, Files, DB Connect)
If the 3.x DataSource already exists in a data flow based on the old concept, you use the emulation first to model the data flow with transformations and data transfer processes and then test it. During migration, you can delete the data flow you were using before, along with its metadata objects. The figure below illustrates the process of manual DataSource migration:


When you migrate a 3.x DataSource (R3TR ISFS) in an original system, the system generates a DataSource (R3TR RSDS) with a transport connection. The 3.x DataSource is deleted, along with the 3.x metadata objects mapping (R3TR ISMP) and transfer structure (R3TR ISTS), which are dependent on it. If a PSA or InfoPackage (R3TR ISIP) already exists for the 3.x DataSource, they are transferred to the migrated DataSource, along with the requests that have already been loaded. After migration, only the information about how data is loaded into the PSA is used in the InfoPackage. Existing delta processes continue to run; the delta process does not need to be reinitialized. You can export the 3.x objects (3.x DataSource, mapping and transfer structure) during the migration so that these objects can be restored later. The collected and serialized objects are stored in a local table (RSDSEXPORT). You can then transport the migration into the target system. When you import the transport into the target system, the after import migrates the 3.x DataSource (R3TR ISFS) - as long as it is available in the target system - to a local DataSource (R3TR RSDS), without exporting the objects that are to be deleted. The 3.x DataSource, mapping (R3TR ISMP) and transfer structure (R3TR ISTS) objects are deleted and the related InfoPackages are migrated. The data in the DataSource (R3TR RSDS) is transferred to the PSA. For more information, see Migrating a DataSource 3.x Manually (SAP Source System, File, DB Connect).
Manual Migration by Copying (UD Connect, Web Service)
You cannot migrate in the way described above in the following cases:
If you are transferring data into BW using a Web service and have previously used XML DataSources that were created on the basis of a file DataSource.
If you are transferring data into BW using UD Connect and have previously used a UD Connect DataSource that was generated using an InfoSource.
In these cases, you have the following options:
XML DataSource 3.x -> Web Service DataSource: You can make a copy of a generated 3.x XML DataSource in a source system of type Web Service. When you activate the DataSource, the system generates a function module and a Web service. On the interface, these are different from the 3.x objects. The 3.x objects (3.x DataSource, mapping, transfer rules, and the generated function module and Web service) are therefore obsolete and can be deleted manually.
UD Connect DataSource 3.x -> UD Connect DataSource: For a 3.x UD Connect DataSource, you can make a copy in a source system of type UD Connect. The 3.x objects (3.x DataSource, mapping, transfer rules and the generated function module) are obsolete after they have been copied and can be deleted manually.
More information: Migrating 3.x DataSources (UD Connect, Web Service)
Automatic Migration
You can migrate an entire data flow and its components by using automatic migration. For SAP source systems, files and DB Connect, the DataSource migration step in the automatic migration process corresponds to the step for exporting the DataSource 3.x, mapping (R3TR ISMP) and transfer structure (R3TR ISTS) objects in the manual migration process. UD Connect DataSources 3.x or XML DataSources 3.x cannot be migrated using automatic data flow migration. If you are processing data flows that contain a UD Connect DataSource 3.x or an XML DataSource 3.x, you can migrate the remaining data flow components using automatic migration and then create the new UD Connect DataSource or Web Service DataSource by means of manual copying. For more information on system-based, automatic data flow migration, see Migrating a Data Flow.
Restoring 3.x DataSources
You can restore a DataSource 3.x from the DataSource (R3TR RSDS) for SAP source systems, files, and DB Connect. For files and DB Connect, the 3.x metadata objects must have been exported and archived during the migration of the DataSource 3.x in the original system. When restoring, the system reproduces the 3.x DataSource (R3TR ISFS), mapping (R3TR ISMP), and transfer structure (R3TR ISTS) objects with their pre-migration status.
Restoring Manually

Note
Only use this function if unexpected problems occur with the new data flow after migration and these problems can only be solved by restoring the data flow used previously. The figure below illustrates the restore process:


When you restore, the 3.x DataSource (R3TR ISFS), mapping (R3TR ISMP) and transfer structure (R3TR ISTS) objects that were exported are generated with a transport connection in the original system. The DataSource (R3TR RSDS) is deleted. The system tries to retain the PSA. This is only possible if a PSA existed for the 3.x DataSource before migration; this may not be the case if an active transfer structure did not exist for the 3.x DataSource or if the data for the DataSource was loaded using an IDoc. InfoPackages (R3TR ISIP) for the DataSource are retained in the system. Available targets are displayed in the InfoPackage (this also applies to InfoPackages that were created after migration). However, in InfoPackage maintenance, you have to reselect the targets into which you want to update data. The transformation (R3TR TRFN) and data transfer process (R3TR DTPA) objects that are dependent on the DataSource (R3TR RSDS) are retained and can be deleted manually, as required. You can no longer use data transfer processes for direct access or real-time data acquisition.
You can now transport the restored 3.x DataSource and the dependent transfer structure and mapping objects into the target system. When you transport the restored 3.x DataSource into the target system, the DataSource (R3TR RSDS) is deleted during the after import. The PSA and InfoPackages are retained. If a transfer structure (R3TR ISTS) is transported with the restore process, the system tries to assign the PSA to this transfer structure. This is not possible if no transfer structure existed when you restored the 3.x DataSource or if IDoc is specified as the transfer method for the 3.x DataSource. In that case, the PSA is retained in the target system but is not assigned to a DataSource/3.x DataSource or to a transfer structure.

Note
You can also use the restore function to correct replication errors. If a DataSource was inadvertently replicated with the object type R3TR RSDS, you can change the object type of the DataSource to R3TR ISFS by restoring it. More information: Restoring 3.x DataSources Manually
Restoring Automatically
As with automatic migration, it is also possible to automatically restore a data flow and its components. However, we recommend that you only use the restore process for problems with a new data flow that can only be solved by restoring. For more information, see Migrating a Data Flow.

3.1.3.1 Using Emulated 3.x DataSources


Use
You can display an emulated 3.x DataSource in DataSource maintenance in BI. Changes are not possible in this display. In addition, you can use emulation to create the (new) data flow for a 3.x DataSource with transformations, without having to migrate the existing data flow that is based on the 3.x DataSource.

Note
We recommend that you use emulation before migrating the DataSource in order to model and test the functionality of the data flow with transformations, without changing or deleting the objects of the existing data flow. Note that use of the emulated DataSource in a data flow with transformations has an effect on the evaluation of the settings in the InfoPackage. We therefore recommend that you only use the emulation in a development or test system.
Constraints
An emulated 3.x DataSource does not support real-time data acquisition, using the data transfer process to access data directly, or loading data directly (without using the PSA).

Prerequisites
If you want to use transformations in the modeling of the data flow for the 3.x DataSource, the transfer rules and therefore the transfer structure must be activated for the 3.x DataSource. The PSA table to which the data is written is created when the transfer structure is activated.

Procedure
To display the emulated 3.x DataSource in DataSource maintenance, highlight the 3.x DataSource in the DataSource tree and choose Display from the context menu. To create a data flow using transformations, highlight the 3.x DataSource in the DataSource tree and choose Create Transformation from the context menu. You also use the transformation to set the target of the data transferred from the PSA. To permit a data transfer to the PSA and further updating of the data from the PSA to the InfoProvider, select the DataSource 3.x in the DataSource tree and choose Create InfoPackage or Create Data Transfer Process in the context menu. We recommend that you use the processes for data transfer to prepare for the migration of a data flow and not in the production system.


Result
If you defined and tested the data flow with transformations using the emulation, you can migrate the DataSource 3.x after a successful test.

3.1.3.2 Migrating a DataSource 3.x Manually (SAP Source System, File, DB Connect)
Use
To take advantage of the new concepts in a data flow using 3.x objects, you must migrate the data flow and the 3.x objects it contains. You can migrate a DataSource 3.x manually or use automatic data flow migration. For more information about the automatic migration of data flows, see Migrating a Data Flow.

Procedure
To migrate a DataSource 3.x manually, perform the following steps:
1. In the original system (development system), in the Data Warehousing Workbench, choose Migrate in the context menu of the 3.x DataSource.
2. If you want to restore the 3.x DataSource at a later time, choose With Export on the next screen.
3. Specify a transport request.
4. Transport the migrated DataSource to the target system (quality system, productive system).
5. Activate the DataSource in the target system.

3.1.3.3 Migrating 3.x DataSources (UD Connect, Web Service)


Prerequisites
The UD Connect source system and the Web service source system are available. The UD Connect source system uses the same RFC destination, and therefore the same BI Java Connector, as the 3.x DataSource.

Context
To take advantage of the new concepts in a data flow using 3.x objects, you must migrate the data flow and the 3.x objects it uses. 3.x XML DataSources and 3.x UD Connect DataSources cannot be migrated in the standard way because the 3.x objects are created in the Myself system and in the new data flow the DataSources need to be created in separate source systems for Web Service and UD Connect. However, you can "migrate" a 3.x DataSource of this type. This involves copying the 3.x DataSource into a source system.

Procedure
1. In the original system (development system), in the Data Warehousing Workbench, choose Copy in the context menu of the 3.x DataSource.
2. On the next screen, enter the name of the DataSource under DataSource.
3. Under Source System, specify the Web service or UD Connect source system to which you want to migrate the DataSource.
4. Delete the dependent 3.x objects (3.x DataSource, mapping, transfer rules, and any generated function modules and the Web service).
5. Transport the DataSource and the deletion of the 3.x objects into the target system.
6. Activate the DataSource.

Results
When you activate the Web service DataSource, the system generates a Web service and an RFC-compliant function module for the data transfer. When you activate the UD Connect DataSource, the system generates a function module for extraction and data transfer.

3.1.3.4 Restoring 3.x DataSources Manually


Use
In the original system, you can restore 3.x DataSources from DataSources that were migrated using the standard method (SAP source system, file, DB Connect). With a transport operation, you restore the 3.x DataSource in the target system as well. You can restore the 3.x DataSource manually or by automatic data flow migration. For more information about automatically migrating and restoring data flows, see Migrating a Data Flow.

Note
Only use this function if unexpected problems occur with the new data flow after migration and these problems can only be solved by restoring the data flow used previously.


You can also use this function to revoke a replication to the incorrect object type (R3TR RSDS).

Prerequisites
For file source system and DB Connect: You exported and archived the relevant 3.x objects when migrating the 3.x DataSource.

Procedure
1. In the maintenance screen for the DataSource (transaction RSDS) in the original system (development system), choose DataSource → Restore 3.x DataSource.
2. Enter a transport request.
3. If required, delete the dependent transformation (R3TR TRFN) and data transfer process (R3TR DTPA) objects.
4. Transport the restored 3.x DataSource (R3TR ISFS), along with its dependent objects, into the target system.

3.2 Persistent Staging Area


Use
The Persistent Staging Area (PSA) is the inbound storage area in BW for data from the source systems. The requested data is saved unchanged from the source system. Request data is stored in transparent, relational database tables of the BW system in the format of the DataSource. The data format remains unchanged, meaning that no summarization or transformations take place, as is the case with InfoCubes.

Note
When you load flat files, the data does not remain completely unchanged, as it may be modified by conversion routines (for example, the date format 31.12.1999 might be converted to 19991231 in order to ensure the uniformity of the data). The possible decoupling of the load process from further processing in BW contributes to improved load performance. If errors occur when data is processed further, the operative system is not affected. The PSA delivers the backup status for the ODS layer (until the entire staging process is confirmed). The duration of the data storage in the PSA is medium-term, since the data can still be used for reorganization. However, for updates to DataStore objects, data is stored only for the short term.

Features
A transparent PSA table is created for every DataSource that is activated. The PSA tables each have the same structure as their respective DataSource. They are also flagged with key fields for the request ID, the data package number, and the data record number.
InfoPackages load the data from the source into the PSA. The data from the PSA is processed with data transfer processes.
With the context menu entry Manage for a DataSource in the Data Warehousing Workbench, you can go to the PSA maintenance for the data records of a request or delete request data from the PSA table of this DataSource. You can also go to the PSA maintenance from the monitor for requests of the load process.
Using partitioning, you can separate the dataset of a PSA table into several smaller, physically independent, and redundancy-free units. This separation can improve performance when updating data from the PSA. In the Implementation Guide, under SAP NetWeaver → Business Intelligence → Links to Other Source Systems → Maintain Control Parameters for the Data Transfer, you define the number of data records at which a new partition is created. Only data records from a complete request are stored in a partition. The specified value is therefore a threshold value.

Constraints
The number of fields is limited to a maximum of 255 when using TRFCs to transfer data. The length of the data record is limited to 1962 bytes when you use TRFCs.

3.2.1 DB Memory Parameters


Use
You can maintain database storage parameters for PSA tables, master data tables, InfoCube fact and dimension tables, as well as DataStore object tables and error stack tables of the data transfer process (DTP). Use this setting to determine how the system handles the table when it creates it in the database: Use Data Type to set in which physical database area (tablespace) the system is to create the table. Each data type (master data, transaction data, organization and Customizing data, and customer data) has its own physical database area, in which all tables assigned to this data type are stored. If selected correctly, your table is automatically assigned to the correct area when it is created in the database.

Note
We recommend that you use separate tablespaces for very large tables. You can find information about creating a new data type in SAP Note 0046272 (Introduce new data type in technical settings).

Using Size Category, you can set the amount of space that the table is expected to need in the database. Five categories are available in the input help. You can also see here how many data records correspond to each individual category. When creating the table, the system reserves an initial storage space in the database. If the table later requires more storage space, it obtains it as set out in the size category. Correctly setting the size category prevents there being too many small extents (save areas) for a table. It also prevents the wastage of storage space when creating extents that are too large. You can use the maintenance for storage parameters to better manage databases that support this concept.

You can find additional information about the data type and size category parameters in the ABAP Dictionary table documentation, under Technical Settings.

PSA Table
For PSA tables, you access the database storage parameter maintenance by choosing Goto → Technical Attributes in DataSource maintenance. In the 3.x data flow, you access this setting by choosing Extras → Maintain DB Storage Parameters in the menu of the transfer rule maintenance. You can also assign storage parameters for a PSA table that already exists in the system. However, this has no effect on the existing table. If the system generates a new PSA version (a new PSA table) due to changes to the DataSource, this is created in the data area for the current storage parameters.

InfoObject Tables
For InfoObject tables, you can find the maintenance of database storage parameters under Extras → Maintain DB Storage Parameters in the InfoObject maintenance menu.

InfoCube/Aggregate Fact and Dimension Tables
For fact and dimension tables, you can find the maintenance of database storage parameters under Extras → DB Performance → Maintain DB Storage Parameters in the InfoCube maintenance menu.

DataStore Object Tables (Activation Queue and Table for Active Data)
For tables of the DataStore object, you can find the maintenance of database storage parameters under Extras → DB Performance → Maintain DB Storage Parameters in the DataStore object maintenance menu.

DTP Error Stack Tables
You can find the maintenance transaction for the database storage parameters for error stack tables by choosing Extras → Settings for Error Stack in the DTP maintenance.

3.2.2 Deleting Requests from the PSA


Use
If you do not regularly delete data from the PSA, the PSA tables can grow to an unlimited size. Large tables increase the cost of data retention, lengthen the downtime for maintenance tasks, and degrade the performance of the loading process. This function allows you to delete requests from the PSA and thus reduce the volume of data in the PSA. Typical use cases are deleting incorrect requests, or deleting delta requests that have been updated successfully in an InfoProvider and for which no further deltas should be loaded. You can create selection patterns in the process variant Deleting Requests from the PSA and thus delete requests flexibly.

Procedure
Including the deletion of requests from the PSA in process chains
You are in the plan view of the process chain where you want to insert the process variant.
1. To insert a process variant for deleting requests from the PSA into the process chain, select process type Deletion of Requests from the PSA from process category Further BW Processes by double-clicking.
2. In the next dialog box, enter a name for the process variant and choose Create.
3. On the next screen, enter a description for the process variant and choose Continue. The maintenance screen for the process variant appears. Here, you define the selection patterns for which requests should be deleted from the PSA.
4. Enter a DataSource and a source system. You can use the placeholders asterisk (*) and plus (+) to flexibly select requests with a certain character string for multiple DataSources or source systems.

Tip
The character string ABC* results in the selection of all DataSources that start with ABC and end with any other characters. The character string ABC+ results in the selection of all DataSources that start with ABC followed by any other single character.
5. If you set the indicator Exclude Selection Pattern, this pattern is ignored in the selection. Settings regarding the age and status of a selection pattern (request selections) are ignored for excluded selection patterns.

Tip
For example, you define a selection pattern for the DataSources ABC*. To exclude certain DataSources from this selection pattern, create a second selection pattern for the DataSources ABCD* and set the indicator Exclude Selection Pattern. This selects all DataSources that start with ABC, with the exception of those that start with ABCD.
6. Enter a date or a number of days in the field Older than in order to define the time when the requests should be deleted.
7. If you only want to select requests with a certain status, set the corresponding indicator. You can select the following status indicators:
Delete Successfully Updated Requests Only
Delete Incorrect Requests that were not Updated

Note
With Copy Request Selections you can copy the settings for the age and status of a selection pattern (request selections) to any number of selection patterns. Select the selection pattern to which you want to copy the settings, place the cursor on the selection pattern from which you want to copy, and choose Copy Request Selections.
1. Save your entries and return to the previous screen.
2. On the next screen, confirm the insertion of the process variant into the process chain.

The plan view of the process chain appears. The process variant for deleting requests from the PSA is included in your process chain.

Deleting requests for a DataSource from the PSA in the Data Warehousing Workbench
You are in an object tree in the Data Warehousing Workbench.
1. Select the DataSource for which you want to delete requests from the PSA and choose Manage.
2. On the next screen, select one or more requests from the list and choose Delete Request from DB.
3. When asked whether you want to delete the request(s), confirm. The system deletes the requests from the PSA table.

You can also delete requests in DataSource maintenance. Choose Goto → Manage PSA. Follow the instructions above, starting from step 2.

Result
If you delete requests from the PSA, they remain physically in a partitioned PSA table for the time being. The requests are first deleted logically (from table RSTSODSREQUEST) and are given a deletion flag in PSA partitioning administration (table RSTSODSPART). You can no longer access these requests. The requests are not deleted physically from the PSA table until all requests in a partition have been logically deleted and have thus been given the deletion flag in PSA partitioning administration.

Note
The change log is stored as a PSA table. For information about deleting requests from the change log, see Deleting from the Change Log.

3.2.3 Previous Technology of the PSA


The PSA is the entry layer for data in BW. During the load process, the data is updated to PSA tables that are generated for active DataSources. The PSA is managed using the DataSource. The previous technology of the PSA was based on the transfer structure: in this case, the PSA table is generated for an active transfer structure, and the PSA is managed as a standalone application in an object tree of the Administrator Workbench. You can still use this technology if your data model is based on the previously available objects and rules (DataSource 3.x, transfer rule 3.x, update rule 3.x). However, we recommend that you use the DataSource and transformation concepts available as of SAP NetWeaver 7.0, which includes using the new technology of the PSA.

3.2.3.1 Persistent Staging Area


Purpose
The Persistent Staging Area (PSA) is the inbound storage area in BW for data from the source systems. The requested data is saved, unchanged from the source system. Request data is stored in the transfer structure format in transparent, relational database tables in BI. The data format remains unchanged, meaning that no summarization or transformations take place, as is the case with InfoCubes.

Note
When loading flat files, the data does not remain completely unchanged, since it is adjusted by conversion routines where necessary (for example, the date format 31.12.1999 is converted to 19991231 in order to ensure that the data is uniform). You define the PSA transfer method in transfer rule maintenance. If you use the PSA when extracting data, performance is improved if you use TRFCs to load the data. The temporary storage facility in the PSA also allows you to check and change the data before it is updated to the data targets. Unlike a data request with IDocs, a data request in the PSA also provides you with various options for further updating data to the data targets. Isolating the load process from further processing in BW also helps to improve loading performance. The operative system is not affected if errors occur during further processing of data. The PSA delivers the backup status for the ODS (until the total staging process is confirmed). The duration of the data storage in the PSA is medium-term, since the data can still be used for reorganization. For updates to ODS objects, however, data is only stored for the short term. In the PSA tree in the Administrator Workbench, a PSA is displayed for every InfoSource. You can access the PSA tree in the Administrator Workbench using either Modeling or Monitoring. The requested data records appear in the PSA tree, divided according to request, under the source system they belong to for an InfoSource.

Features
The data records in BW are transferred to the transfer structure when you load data with the PSA transfer method. A TRFC is performed for each data package. Data is written to the PSA table from the transfer structure and stored there. A transparent PSA table is created for each transfer structure that is activated. The PSA tables each have the same structure as their transfer structures. They are also flagged with key fields for the request ID, the data package number, and the data record number.
Since the requested data is stored unchanged in the PSA, it might contain errors if there were errors in the source system. If the requested data records have been written to the PSA table, you can check the data for the request and change incorrect data records. Depending on the type of update, data is transferred from the PSA table into the communication structure using the transfer rules. From the communication structure, the data is updated to the corresponding data target.
Using partitioning, you can separate the dataset of a PSA table into several smaller, physically independent, and redundancy-free units. This separation can mean improved performance when you update data from the PSA. In the BW Customizing Implementation Guide, under Business Information Warehouse → Connections to Other Systems → Maintain Control Parameters for Data Transfer, you set the number of data records from which a partition is created. Only data records from a complete request are stored in a partition. The specified value is a threshold value.

Note
As of SAP BW 3.0, you can use the PSA to load hierarchies from the DataSources released for this purpose. The corresponding DataSources will be delivered with Plug-In (-A) 2001.2, at the earliest. You can also use a PSA to load hierarchies from files.

Constraints
The maximum number of fields is 255 when using TRFCs to transfer data. The maximum length of the data record is 1962 bytes when using TRFCs. Data transfer with IDocs cannot be used in connection with the PSA.

3.2.3.1.1 Types of Data Update with PSA


Prerequisites
You have defined the PSA transfer method in the transfer rules maintenance.

Features
Processing options for the PSA transfer method In contrast to a data request with IDocs, a data request in the PSA also gives you various options for a further update of the data in the data targets. Upon selection, you need to weigh data security against performance for the loading process. If you create an InfoPackage in the BW Scheduler, determine the type of data update on the Processing tab page. The following processing options are available in the PSA transfer method:
Processing Option: PSA and Data Targets/InfoObjects in Parallel (By Package)

Description: A process is started to write the data from this data package into the PSA for each data package. If the data is successfully updated in the PSA, a second parallel process is started. In this process, the transfer rules are used for the package data records, data is adopted by the communication structure, and it is finally written to the data targets. Posting of the data occurs in parallel by package. This method is used to update data into the PSA and the data targets with a high level of performance. The BW system receives the data from the source system, writes it to the PSA, and starts the update immediately and in parallel into the corresponding data target.

More Information: The maximum number of processes, which is set in the source system in Maintaining Control Parameters for Data Transfer, does not restrict the number of processes in BW. Therefore, many dialog processes in the BW system could be necessary for the loading process. Make sure that enough dialog processes are available in the BW system. If the data package contains incorrect data records, you have several options allowing you to continue working with the records in the request. You can specify how the system should react to incorrect data records. More information: Handling Data Records with Errors. You also have the option of correcting data in the PSA and updating it from here (refer to Checking and Changing Data). Note the following when using transfer and update routines: If you choose this processing option and request processing takes place in parallel during loading, the global data is deleted because a new process is used for every data package in further processing.

Processing Option: PSA and then to Data Target/InfoObject (by Package)

Description: A process that writes the package to the PSA table is started for each data package. When the data has been successfully updated to the PSA, the same process writes the data to the data targets. The data is posted in serial by package. Compared with the first processing option, you have better control over the whole data flow with a serial update of data in packages, because the BW system carries it out using only one process for each data package. Only a certain number of processes are necessary for each data request in the BW system. This number is defined in the settings made in the maintenance of the control parameters in Customizing for extractors.

More Information: If the data package contains incorrect data records, you have several options allowing you to continue working with the records in the request. More information: Handling Data Records with Errors. You also have the option of correcting data in the PSA and updating it from here (refer to Checking and Changing Data). Note the following when using transfer and update routines: If you choose this processing option and request processing takes place in parallel during loading, the global data is deleted because a new process is used for every data package in further processing.

Processing Option: Only PSA

Description: Using this method, data is written to the PSA and is not updated any further. You have the advantage of having data stored safely in BW and having the PSA, which is ideal as a persistent incoming data store for mass data as well. The setting for the maximum number of processes in the source system can also have a positive impact on the number of processes in BW. A process that writes the package to the PSA table is started for each data package. If you then trigger further processing and the data is updated to the data targets, a process is started for the request that writes the data packages to the data targets one after the other. Posting of the data occurs in serial by request. To further update the data automatically in the corresponding data target, wait until all the data packages have arrived and have been successfully updated in the PSA, and select Update in DataTarget from the Processing tab page when you schedule the InfoPackage in the Scheduler.

More Information: When using the InfoPackage in a process chain, this setting is hidden in the scheduler. This is because the setting is represented by its own process type in process chain maintenance and is maintained there. Handling Duplicate Data Records (only possible with the processing type Only PSA): The system indicates when master data or text DataSources transfer potential duplicate data records for a key into the BW system. The Ignore Duplicate Data Records indicator is also set by default in this case. In BW, the last data record of a request is updated for a particular key by default when data records are transferred more than once. Any other data records in the request with the same key are ignored. If the Ignore Duplicate Data Records indicator is not set, duplicate data records will cause an error. The error message is displayed in the monitor. Note the following when using transfer and update routines: If you choose this processing option and request processing takes place serially during loading, the global data is kept as long as the process with which the data is processed is in existence.
Further updating from the PSA
Several options are available for updating the data from the PSA into the data targets.
To immediately update the request data in the background, select the request in the PSA tree and choose Context Menu (Right Mouse Button) → Start Update Immediately.
To schedule a request update using the Scheduler, select the request in the PSA tree and choose Context Menu (Right Mouse Button) → Schedule Update. The Scheduler (PSA Subsequent Update) appears. Here you can define the scheduling options for background processing. For data with flexible update, you can also specify and select the data targets in which the data needs to be updated.
To further update the data automatically in the corresponding data target, wait until all the data packages have arrived and have been successfully updated in the PSA, and select Update in DataTarget from the Processing tab page when you schedule the InfoPackage in the Scheduler.

Note
When using the InfoPackage in a process chain, this setting is hidden in the scheduler. This is because the setting is represented by its own process type in process chain maintenance and is maintained there.

Simulating/canceling an update from the PSA
To simulate the data update for a request using the monitor, select the request in the PSA tree and choose Context Menu (Right Mouse Button) → Simulate/Cancel Update. The monitor detail screen appears. On the Detail tab page, select one or more data packages and choose Simulate Update. On the following screen, define the simulation selections and choose Execute Simulation. Enter the data records for which you want to simulate the update and choose Continue. You see the data in the communication structure format. In the case of data with flexible updating, you can change to the view for data target data records. In the data target screen you can display the records belonging to the communication structure for selected records in a second window. If you have activated debugging, the ABAP Debugger appears and you can execute the error analysis there. More information: Update Simulation in the Extraction Monitor

Processing several PSA requests at once
To process several PSA requests at once, select the PSA in the PSA tree and choose Context Menu (Right Mouse Button) → Process Several Requests. You have the option of starting the update for the selected requests immediately or using the scheduler to schedule them. The individual requests are scheduled one after the other in the scheduler. You can delete the selected requests collectively using this function. You can also call detailed information, the monitor, or the content display for the corresponding data target.

Note
During processing, a background process is started for every request. Make sure that there are enough background processes available. See also: Tab Page: Processing

3.2.3.1.2 Checking and Changing Data


Use
The PSA offers you the option of checking and changing data before you update it further from the PSA table into the communication structure and the current data target. You can check and change data records in order to:

Remove update errors.

Example
If lowercase letters or characters that are not permitted have been used in fields, you can remove this error in the PSA.

Validate data.

Example
For example, if, when matching data, it was discovered that a customer should have been given free delivery for particular products, but the delivery had in fact been billed, then you can change the data record accordingly in the PSA.

Prerequisites
You have determined the PSA transfer method in transfer rule maintenance for an InfoSource, and have loaded data into the PSA.

Procedure

You have two options for checking and changing the data:
1. You can edit the data directly.
   1. In the PSA tree in the Administrator Workbench, select the request for which you want to check the data and choose Context Menu (Secondary Mouse Button) → Edit Data. A dialog box is displayed in which you can select which data package and which data records you want to edit for this package.
   2. When you have made your selections, choose Continue. The request data maintenance screen opens.
   3. Select the records you want to edit, choose Change, and enter the correct data. Save the edited data records.
2. Since the data is stored in a transparent database table in the dictionary, you can change the data using ABAP programs with the PSA-APIs. Use a PSA-API program for complex data checks or for changes to the data that occur regularly.

Caution
If you change the number of records for a request in the PSA, that is, if you add or delete records, a correct record count in the BW monitor is no longer guaranteed when the request is posted or processed. We therefore recommend that you do not change the number of records for a request in the PSA.

Result
The corrected data is now available for continued updates.

3.2.3.1.3 Checking and Changing Data Using PSA-APIs


Use
To perform complex checks on data records, or to carry out specific changes to data records regularly, you can use delivered function modules (PSA-APIs) to program against a PSA table. If you want to execute data validation with program support, choose Tools → ABAP Workbench → Development → ABAP Editor and create a program. When using transfer routines or update routines, it may be necessary to read data from the PSA table manually after the routine has finished.

Tip
Employee bonuses are loaded into an InfoCube and sales figures for employees are loaded into a PSA table. If an employee's bonus is to be calculated in a routine in the transformation - in accordance with his/her sales - the sales must be read from the PSA table.

Procedure
1. Call up the function module RSSM_API_REQUEST_GET to get a list of requests, with their request IDs, for a particular InfoSource of a particular type. You have the option of restricting the request output using a time restriction and/or the transfer method. You must know the request ID, because the request ID is the key that makes it possible to manage data records in the PSA.
2. With the request information received so far, you can use the following function modules:
RSAR_ODS_API_GET to read data records from the PSA table
RSAR_ODS_API_PUT to write changed data records to the PSA table

RSAR_ODS_API_GET
You can call up the function module RSAR_ODS_API_GET with the list of request IDs returned by the function module RSSM_API_REQUEST_GET. The function module RSAR_ODS_API_GET no longer recognizes InfoSources on the interface; it recognizes the request IDs instead. With the parameter I_T_SELECTIONS, you can restrict the data records read from the PSA table with reference to the fields of the transfer structure. In your program, the selections are filled and transferred to the parameter I_T_SELECTIONS. The function module outputs the data records in the parameter E_T_DATA. Data output is unstructured, since the function module RSAR_ODS_API_GET works generically and therefore does not recognize the specific structure of the PSA. You can find information on the fields of the PSA table in the parameter E_T_RSFIELDTXT.

RSAR_ODS_API_PUT
After merging, or checking and subsequently changing, the data, you can write the altered data records to the PSA table with the function module RSAR_ODS_API_PUT. To be able to write request data into the table with the help of this function module, you have to enter the corresponding request ID. The parameter E_T_DATA contains the changed data records.
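The following ABAP fragment sketches how such a correction program might call these function modules. It is a minimal sketch only: the program name, variable names, data declarations, and the name of the request parameter are hypothetical, and only the parameter names I_T_SELECTIONS, E_T_DATA, and E_T_RSFIELDTXT are taken from this documentation. Check the actual function module interfaces in transaction SE37 before using it.

REPORT z_psa_correction_sketch.

* Request ID of the PSA request to be corrected. In a real program you would
* determine it beforehand with RSSM_API_REQUEST_GET (interface not shown here).
DATA: l_request   TYPE c LENGTH 30,              " assumption: request ID as a character value
      lt_seltab   TYPE STANDARD TABLE OF string, " assumption: selection table passed to I_T_SELECTIONS
      lt_data     TYPE STANDARD TABLE OF string, " assumption: generic container for the unstructured records
      lt_fieldtxt TYPE STANDARD TABLE OF string. " assumption: field information of the PSA table

* Read the data records of the request from the PSA table.
CALL FUNCTION 'RSAR_ODS_API_GET'
  EXPORTING
    i_request      = l_request                   " assumption: exact name of the request parameter may differ
    i_t_selections = lt_seltab
  IMPORTING
    e_t_rsfieldtxt = lt_fieldtxt
  TABLES
    e_t_data       = lt_data.

* ... check and correct the records in LT_DATA here ...

* Write the changed data records back to the PSA table for the same request.
CALL FUNCTION 'RSAR_ODS_API_PUT'
  EXPORTING
    i_request = l_request                        " assumption: exact name of the request parameter may differ
  TABLES
    e_t_data  = lt_data.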

Result
The corrected data is now available for continued updates.

3.2.3.1.4 Versioning
Use
If you make an incompatible change to the transfer structure (for example, length changes or the deletion of fields), a version is assigned to the PSA table.

Features
PUBLIC 2014 SAP AG or an SAP affiliate company. All rights reserved. Page 36 of 102

When the system detects an incompatible change to the transfer structure, a new version of the PSA, meaning a new PSA table, is created. Data is written to the new table when the next request is updated. The original table remains unchanged and is given a version. You can continue to use all of the PSA functions for each request that was written to the old table. Data is read from a PSA table in the appropriate format. If the request was written to the PSA table before the transfer structure was changed, the system uses the format that the transfer structure had before the change. If the request was written to the PSA table after the transfer structure was changed, the system uses the format that the transfer structure has after the change.

Note
If you program against the function module RSAR_ODS_API_GET, you can use the parameter I_CURRENT_DATAFORMAT to specify that data from an old version is read into the structure of the current version.

3.2.3.1.5 DB Memory Parameters


Use
You can maintain database storage parameters for PSA tables, master data tables, InfoCube fact and dimension tables, as well as DataStore object tables and error stack tables of the data transfer process (DTP). Use this setting to determine how the system handles the table when it creates it in the database: Use Data Type to set in which physical database area (tablespace) the system is to create the table. Each data type (master data, transaction data, organization and Customizing data, and customer data) has its own physical database area, in which all tables assigned to this data type are stored. If selected correctly, your table is automatically assigned to the correct area when it is created in the database.

Note
We recommend that you use separate tablespaces for very large tables. You can find information about creating a new data type in SAP Note 0046272 (Introduce new data type in technical settings).

Using Size Category, you can set the amount of space that the table is expected to need in the database. Five categories are available in the input help. You can also see here how many data records correspond to each individual category. When creating the table, the system reserves an initial storage space in the database. If the table later requires more storage space, it obtains it as set out in the size category. Correctly setting the size category prevents there being too many small extents (save areas) for a table. It also prevents the wastage of storage space when creating extents that are too large. You can use the maintenance for storage parameters to better manage databases that support this concept.

You can find additional information about the data type and size category parameters in the ABAP Dictionary table documentation, under Technical Settings.

PSA Table
For PSA tables, you access the database storage parameter maintenance by choosing Goto → Technical Attributes in DataSource maintenance. In the 3.x data flow, you access this setting by choosing Extras → Maintain DB Storage Parameters in the menu of the transfer rule maintenance. You can also assign storage parameters for a PSA table that already exists in the system. However, this has no effect on the existing table. If the system generates a new PSA version (a new PSA table) due to changes to the DataSource, this is created in the data area for the current storage parameters.

InfoObject Tables
For InfoObject tables, you can find the maintenance of database storage parameters under Extras → Maintain DB Storage Parameters in the InfoObject maintenance menu.

InfoCube/Aggregate Fact and Dimension Tables
For fact and dimension tables, you can find the maintenance of database storage parameters under Extras → DB Performance → Maintain DB Storage Parameters in the InfoCube maintenance menu.

DataStore Object Tables (Activation Queue and Table for Active Data)
For tables of the DataStore object, you can find the maintenance of database storage parameters under Extras → DB Performance → Maintain DB Storage Parameters in the DataStore object maintenance menu.

DTP Error Stack Tables
You can find the maintenance transaction for the database storage parameters for error stack tables by choosing Extras → Settings for Error Stack in the DTP maintenance.

3.2.3.1.6 Reading the PSA and Updating a Data Target


Use
You can use this process to further update data from the PSA. This takes place after all data packages have arrived in the PSA and have been successfully updated there.

Note
Note that it is not possible to create more than one process of type Read PSA and Update Data Target for one request or InfoPackage at any one time. You cannot simultaneously update into more than one data target. Updating into more than one data target can currently only occur sequentially. This process replaces the indicator Subsequently Update into Data Targets on the Processing tab page in the Scheduler. When using an InfoPackage in a process chain, this indicator is grayed out in the Scheduler and the Read PSA and Update Data Target process is controlled by process chain maintenance.

Note
Any settings previously made in the InfoPackage are then ignored.

Procedure
1. In the SAP BW menu, choose Administration → Process Chains. Alternatively, in the Administrator Workbench, choose Process Chain Maintenance from the symbol bar. The Process Chain Maintenance Planning View screen appears.
2. In the left-hand screen area of the required display component, navigate to the process chain in which you want to insert the process. Double-click to select it. Alternatively, you can create a new process chain. The system displays the process chain plan view in the right-hand side of the screen. You can find additional information under Creating a Process Chain.
3. In the left-hand screen area, choose Process Types. The system now displays the available process categories.
4. Insert the Read PSA and Update Data Target application process into the process chain using Drag&Drop. The dialog box for inserting a process variant appears.
5. In the Process Variants field, enter the name of the application process you want to insert into the process chain. A value help is available, which lists all process variants that have already been created. Choose Create if you want to create a new process variant. A dialog box appears in which you can enter a description for your application process. Enter the description for your application process and choose Next. The process chain maintenance screen appears. In the upper screen area, the system displays the following information for the variant:
Technical name
Description (you can make an entry in this field)
Last changed by
Last changed on
6. There are two ways of specifying which requests are to be further updated into which data targets:
In the table, in the Object Type column, you can choose Execute InfoPackage, and then one or more InfoPackages to be included in the process chain. Select neither PSA Table nor Data Target. As a result, during the chain run, those requests are updated that were loaded into the PSA with the specified InfoPackages within the chain. The data targets and PSA tables are stored in the InfoPackages.
Alternatively, select PSA Table and Data Target. In the table, you can also choose Request as the Object Type, and then one or more requests. As a result, only the selected requests are updated from the specified PSA table into the specified data target.

Note
Only use this setting when calling up the process for the first time. Afterwards, the request is already in the data target and must then be deleted before it can be updated again. Furthermore, this setting cannot be transported, as the request numbers are local to the system and the specified request definitely does not exist in the target system.
1. Save your entries and go back. The Process Chain Maintenance Planning View screen appears.

Result
You have inserted the Read PSA and Update Data Target application process into the process chain.

Note
You can find further information about the additional steps taken when creating a process chain here: Creating a Process Chain.

3.3 Creating InfoObjects


Use
Business evaluation objects are known in BW as InfoObjects. They are divided into characteristics, key figures, units, time characteristics, and technical characteristics. InfoObjects are the smallest information units in BW. They structure the information needed to create InfoProviders. InfoObjects with attributes or texts can themselves also be InfoProviders (if used in a query).
Characteristics are sorting keys, such as company code, product, customer group, fiscal year, period, or region. They specify classification options for the dataset and are therefore reference objects for the key figures. In the InfoCube, for example, characteristics are stored in dimensions. These dimensions are linked by dimension IDs to the key figures in the fact table. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube. In general, an InfoProvider contains only a subset of the characteristic values from the master data table. The master data includes the permitted values for a characteristic. These are known as the characteristic values.
The key figures provide the values that are reported on in a query. Key figures can be quantity, amount, or number of items. They form the data part of an InfoProvider. Units are also required so that the values for the key figures have meaning. Key figures of type amount are always assigned a currency key, and key figures of type quantity also receive a unit of measurement.
Time characteristics are characteristics such as date, fiscal year, and so on.
Technical characteristics are used for administrative purposes only within BW. An example of a technical characteristic is the request number in the InfoCube. This is generated as an ID when you load a request and helps you locate the request at a later date.
Special features of characteristics: If characteristics have attributes, texts, or hierarchies at their disposal, they are referred to as master data-bearing characteristics. More information: Using Master Data and Master Data-Bearing Characteristics

Master data is data that remains unchanged over a long period of time. Master data contains information that is always needed in the same way. References to this master data can be made in all InfoProviders.
A hierarchy is always created for a characteristic. This characteristic is the basic characteristic for the hierarchy (basic characteristics are characteristics that do not reference other characteristics). Like attributes, hierarchies provide a structure for the values of a characteristic. The company location is an example of an attribute for Customer. You use this, for example, to form customer groups for a specific region. You can also define a hierarchy to make the structure of the Customer characteristic clearer.
Special features of key figures: A key figure is assigned additional properties that influence the way that data is loaded and how the query is displayed. This includes the assignment of a currency or unit of measure, setting aggregation and exception aggregation, and specifying the number of decimal places in the query.

Procedure
1. Creating InfoObject Catalogs
Create an InfoObject catalog to group the InfoObjects according to application-specific aspects. More information: Creating InfoObject Catalogs
2. Creating Characteristics
Create the characteristics. The characteristics determine the granularity (the degree of detail) at which the key figures are stored in the InfoCube. More information: Creating InfoObjects: Characteristics
3. Creating Key Figures
Create the key figures. The key figures provide the values that are reported on in a query. More information: Creating InfoObjects: Key Figures

3.3.1 Creating InfoObject Catalogs


Prerequisites
The InfoObjects that you want to transfer to the InfoObject catalog must be active. If you want to define an InfoObject catalog in the same way as an InfoSource, then the InfoSource has to be available and active.

Context
An InfoObject catalog is a collection of InfoObjects grouped according to application-specific criteria. There are two types of InfoObject catalogs: Characteristic and Key Figure. An InfoObject catalog is assigned to an InfoArea. It is a purely organizational aid and is not intended for analysis purposes. For example, you can group together into an InfoObject catalog all InfoObjects that play a role in analysis in the area of "Sales and Distribution". This makes it much easier for you to handle what might turn out to be a very large number of InfoObjects in any given context. An InfoObject can be included in several InfoObject catalogs.

Procedure
1. You are in the Modeling functional area of the Data Warehousing Workbench.
2. Create an InfoArea to which you want to assign the new InfoObject catalog. To do this, choose Create InfoArea from the context menu in the InfoObject tree.
3. In the context menu of the InfoArea, choose Create InfoObject Catalog. If you want to make a copy of an existing InfoObject catalog, specify a reference InfoObject catalog.
4. Choose either Characteristic or Key Figure as the InfoObject type, and choose Create.
5. Copying InfoObjects: On the left side of the screen, there are various templates to choose from. These allow you to get a better overview in relation to a particular task. For performance reasons, the default setting is an empty template. Using the pushbuttons, select an InfoSource (only the InfoObjects for the communication structure of the InfoSource are displayed), an InfoCube, a DataStore object, an InfoObject catalog, or all InfoObjects. On the right side of the screen you compile your InfoObject catalog. Transfer the desired InfoObjects into the InfoObject catalog using Drag&Drop. You can also simultaneously select multiple InfoObjects.
6. Activate the InfoObject catalog.

3.3.1.1 Additional Functions in the InfoObject Catalog


Use
You can display, create, or change documents for your InfoObject catalog. See: Documents

Info Functions

Various information functions are available with regard to the status of the InfoObject catalog:
Log display for activation and deletion runs for the InfoObject catalog
Current system settings, the object catalog entry

You can display all the properties of your InfoObject catalog in a clear tree structure.

You can compare the following InfoObject catalog versions:
Active and modified versions of an InfoObject catalog
Active and Content versions of an InfoObject catalog
Modified and Content versions of an InfoObject catalog
This allows you to compare the properties.

You can transport the InfoObject catalog. The system automatically collects all BW objects that are required to ensure a consistent status in the target system.

You can determine which other objects in the BW system use a specific InfoObject catalog. You can determine the effect of making a particular change and whether this is permitted at a given time.

InfoObject Maintenance
In the main menu, choose Extras to access the transaction for displaying, creating, and changing InfoObjects.

3.3.2 InfoObject Naming Conventions


Use
As is the case for other objects in BW, the customer namespace A-Z is also reserved for InfoObjects. When you create an InfoObject, the name you give it has to begin with a letter. BW Content InfoObjects start with 0. For more information about namespaces, see Namespaces for BW Objects.

Integration
If you change an InfoObject in the SAP namespace, your modified InfoObject is not overwritten immediately when you install a new release, and your changes remain in place for the time being. BW Content InfoObjects are initially delivered in the D version. If you use the BW Content InfoObject, it is activated. If you change the activated InfoObject, a new M version is generated. When this M version is activated, it overwrites the previous active version.

Caution
Keep in mind when determining naming conventions for InfoObjects that the length of an InfoObject value is restricted to 60 characters. This includes the concatenated value if the characteristic is compounded to other InfoObjects. See also Tab Page: Compounding.

3.3.3 Creating InfoObjects: Characteristic


Procedure
1. In the context menu of your InfoObject catalog for characteristics, choose Create InfoObject.
2. Enter a name and a description.
3. Enter a reference characteristic or a template InfoObject. If you choose a template InfoObject, its properties are copied and used for the new characteristic. You can edit the properties if required. For more information about reference characteristics, see Reference InfoObjects under Tab Page: Compounding.
4. Confirm your entries.
5. Maintain the General tab page. You have to enter a description, data type, and data length. The following settings and tab pages are optional.
6. Maintain the Business Explorer tab page.
7. Maintain the Master Data/Texts tab page.
8. Maintain the Hierarchy tab page.
9. Maintain the Attributes tab page. This tab page is only active if you have set the With Master Data flag on the Master Data/Texts tab page.
10. Maintain the Compounding tab page.
11. If the InfoObject has been indexed on the BWA, you can edit the BWA Index tab page.
12. Save and activate the characteristic you have created.

Note
Before you can use characteristics, they have to be activated. If you choose Save, the system saves all characteristics that have been changed and the table entries. They cannot be used for analysis and reporting yet, though. If there is an older active version, this is kept at first. The system only creates the relevant objects in the data dictionary (data elements, domains, text tables, master data tables, and programs) once you have activated the characteristics. In InfoObject maintenance, you can switch at any time between the D, M, and A versions of an InfoObject.

3.3.3.1 Tab Page: General


Definition
On this tab page you specify the basic properties of the characteristic.

Structure
Dictionary
Specify the Data Type and the Data Length. The system provides input help which offers you selection options. The following data types are supported for characteristics:
Char: Numbers and letters, character length 1 - 60
Numc: Numbers only, character length 1 - 60
Dats: Date, character length 8
Tims: Time, character length 6

Other
Lowercase Letters Allowed/Not Allowed
If this indicator is set, the system differentiates between lowercase letters and uppercase letters when you use a screen template to input values. If this indicator is not set, the system converts all the letters into uppercase letters when you use a screen template to input values. No conversion occurs during the load process or in the transformation. This means that values with lowercase letters cannot be updated to an InfoObject that does not allow lowercase letters.

Note
If you choose to allow the use of lowercase letters, you must be aware of the system response when you enter variables: If you want to use the characteristic in variables, the system is only able to find the values for the characteristic if the lowercase letters and the uppercase letters are typed in accurately on the input screen for variables. If, on the other hand, you do not allow the use of lowercase letters, any characters that you type in the variable screen are converted automatically into uppercase letters.

Conversion Routine
The standard conversion for the characteristic is displayed. If this standard conversion is unsuitable, you can override it by specifying a conversion routine in the underlying domain. See Conversion Routines in BW Systems.

Attribute Only
If you select Attribute Only, the characteristic created can be used only as a display attribute for another characteristic, not as a navigation attribute. Furthermore, you cannot transfer the characteristic into InfoCubes. However, you can use it in DataStore objects or InfoSets.

Characteristic Is Document Property
You can specify that a characteristic is used as a Document Property. This enables you to assign a comment (this can be any document) to a combination of characteristic values. See also Documents and the example for Characteristic Is Document Property.

Note
Since it does not make sense to use this comment function for all characteristics, you need to identify explicitly the characteristics that you want to appear in the comments. If you set this indicator, the system generates a property (attribute) for this characteristic in the meta model of the document management system. For technical reasons, this property (attribute) has to be written to a (dummy) transport request (the appropriate dialog box appears), but it is not actually transported.

Constant
By assigning a Constant to a characteristic, you give it a fixed value. The characteristic then exists on the database (for example, for verifications) but it does not appear in reporting. Assigning a constant is most useful with compound characteristics.

Example
The Storage Location characteristic is compounded with the Plant characteristic. If you only run one plant within the application, you can assign a constant to the plant. The validation for the storage-location master table runs correctly using the constant value for the plant. In the query, however, the storage location only appears as a characteristic.
Exception: If you want to assign the constant SPACE (type CHAR) or 00..0 (type NUMC) to the characteristic, enter # in the first position.

Transfer Routine
When you create a transfer routine, it is valid globally for the characteristic and is included in all the transformation rules that contain the InfoObject. However, the transfer routine is only run in a transformation with a DataSource as a source. The transfer routine is used to correct data before it is updated in the characteristic. During data transfer, the logic stored in the individual transformation rule is executed first. Then the transfer routine for the value of the corresponding field is executed for each InfoObject that has a transfer routine. In this way, the transfer routine can store InfoObject-dependent coding that only needs to be maintained once but is valid for all transformation rules.
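To illustrate what such a global transfer routine can look like, here is a minimal sketch for a hypothetical characteristic ZMATNR. The FORM frame and its parameter interface are generated by the system in InfoObject maintenance and may contain additional parameters; the parameter names RESULT and RETURNCODE and the cleanup logic shown here are assumptions for the example only.

* Minimal sketch of a characteristic transfer routine (hypothetical InfoObject ZMATNR).
* The actual FORM name and parameter list are generated by the system.
FORM convert_zmatnr
  CHANGING result     TYPE c   " value of the characteristic field being loaded
           returncode TYPE i.  " 0 = value is accepted

  " Example cleanup: remove blanks and convert the value to uppercase, because
  " the characteristic does not allow lowercase letters.
  CONDENSE result NO-GAPS.
  TRANSLATE result TO UPPER CASE.

  returncode = 0.
ENDFORM.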

3.3.3.2 Tab Page: Business Explorer


Use
On this tab page you determine the properties that are required in the Business Explorer for reporting on or analyzing characteristics.

Structure
General Settings
In some cases, you can make the following settings for the InfoObjects contained in the InfoProvider on an InfoProvider by InfoProvider basis. The settings are then only valid in the relevant InfoProvider. See also Additional Functions in InfoCube Maintenance and Additional Functions in DataStore Object Maintenance.

Display
For characteristics with texts: Under Display, you select whether you want to display text in the Business Explorer and, if yes, which text. You can choose from the following display options: No Display, Key, Text, Key and Text, or Text and Key. This setting can be overwritten in queries.

Text Type
For characteristics with texts: In this field you set whether you want to display the short, medium, or long text in the Business Explorer.

Description BEx
In this field, you determine the description that appears for this characteristic in the Business Explorer. You choose between the long and short descriptions of the characteristic. This setting can be overwritten in queries. For more information, see Priority Rule with Formatting Settings.

Selection
The selection defines if and how the characteristic values have to be restricted in queries. If you choose the Unique for Every Cell option, the characteristic must be restricted to one value in each column and in every structure of all the queries. You cannot use this characteristic in aggregates. Typical examples of this kind of characteristic are Plan/Actual ID or Value Type.

Filter Selection in Query Definition
This field defines how the selection of filter values or the restriction of characteristics is determined when you define a query. When you restrict characteristics, the values from the master data table are usually displayed. For characteristics that do not have master data tables, the values from the SID table are displayed instead. In many cases it is more useful to only display those values that are also contained in an InfoProvider. Therefore you can also choose the setting Only Values in InfoProvider.

Filter Selection in Query Execution
This field defines how the selection of filter values is determined when a query is executed. When queries are executed, the selection of filter values is usually determined by the data that is selected by the query. This means that only the values for which data has been selected in the current navigation status are displayed. However, in many cases it can be useful to include additional values. Therefore you can also choose the settings Only Values in InfoProvider and Values in Master Data Table. If you make this selection, however, you may get the message "No data found" when you select your filter values. These settings for input help can also be overwritten in the query. For more information, see Priority Rule with Formatting Settings.

Filter Display in Query Execution
This field defines how the display of filter values is determined when a query is executed. If, for example, the characteristic has very few characteristic values, it may be useful to display the values as a dropdown list box.

Base Unit of Measure
You specify a unit InfoObject that is a unit of measure. The unit InfoObject must be an attribute of the characteristic. This unit InfoObject is used when quantities are converted for the master data-bearing characteristic in the Business Explorer. For more information, see Quantity Conversion.

Unit of Measure for Characteristic
You can define units of measure for the characteristic. The system hereby creates a DataStore object for units of measure. You can specify the name of the quantity DataStore object, the description, and the InfoArea into which you want to add the object. The system proposes the name UOM<Name of InfoObject to which the quantity DataStore object is being added>. For more information, see Prerequisites for InfoObject-Specific Quantity Conversion.

Currency Attribute
You select a unit InfoObject that is a currency and that you have created as an attribute for the characteristic. In this way, you can define variable target currencies in the currency translation types. The system determines the target currency using the master data when you perform currency translation in the Business Explorer and dynamically when loading data. See also the example for Defining Target Currencies Using InfoObjects.

Authorization Relevance
You choose whether a particular characteristic is included in the authorization check when you are working with the query. Set the Authorization-Relevant indicator for a characteristic if you want to create authorizations that restrict the selection conditions for this characteristic to single characteristic values. You can only mark the characteristic as Not Authorization-Relevant if it is no longer being used as a field for the authorization object. See also: Analysis Authorizations

BEx Map
Geographical Type
For each geo-relevant characteristic you have to specify a geographical type. There are four options to choose from.


1. Static Geo-Characteristic: For this type you can use Shapefiles (country borders, for example) to display the characteristic on a map in the Business Explorer.
2. Dynamic Geo-Characteristic: For this type, geo-attributes are generated that make it possible, for example, to display customers as a point on a map.
3. Dynamic Geo-Characteristic with Attribute Values: For this type, the geo-attributes of a geo-characteristic of type 2, which is an attribute, are used.
4. Static Geo-Characteristic with Geo-Attributes: As static geo-characteristics, with the addition of generated geo-attributes.
See also Static and Dynamic Geo-Characteristics.
If you choose the Not a Geo-Characteristic option, this characteristic cannot be used as a geo-characteristic for displaying information on the BEx Map. Geographical attributes of the InfoObject (such as 0LONGITUDE, 0ALTITUDE) are deleted.
Geographical Attribute
If you have selected the Dynamic Geo-Characteristic with Attribute Values geographical type for the characteristic, on this tab page you specify the characteristic attribute whose geo-attributes you want to use.
Uploading Shapefiles
For static geo-characteristics: use this function to upload the geo-information files that are assigned to the characteristic. These files are stored in the BDS as files that logically belong to the characteristic. See also Shapefiles.
Downloading Geo-Data
For dynamic geo-characteristics: you use this function to load the master data for a characteristic to your PC, where you can use your GIS tool to geocode the data. Finally, you use a flat file to load the data again as a normal data load into the relevant BW master data table.

3.3.3.2.1 Mapping Geo-Relevant Characteristics


Definition
In order to display BW data geographically, a connection must be established between this data and the respective geographical characteristics. This process is described as Mapping Geo-Relevant Characteristics.

Structure
The geographical information about the boundaries of areas that are displayed using static geo-characteristics is stored in Shapefiles. In the shapefile, a BW-specific attribute called the SAPBWKEY is responsible for connecting an area on the map with the corresponding characteristic value in BW. This attribute matches the characteristic value in the corresponding BW master data table. This process is called SAPBWKEY maintenance for static geo-characteristics. See SAPBWKEY Maintenance for Static Geo-Characteristics.

Note
You can use ArcView GIS or other software with functions for editing dBase files (MS Excel, for example) to carry out the SAPBWKEY maintenance.

For data in point form that is displayed using dynamic geo-characteristics, geographical data is added to the BW master data. The process of assigning geographical data to entries in the master data table is called geocoding. See Geocoding.

Note
The software ArcView GIS from ESRI (Environmental Systems Research Institute) geocodes the InfoObjects.

Integration
You can execute the geocoding with the help of the ArcView GIS from ESRI software. As well as geocoding, ArcView also offers a large number of functions for special geographical problems that are not covered by SAP NetWeaver Business Intelligence. With ArcView, you can create your own maps, for example, a map of your sales regions. You can find more detailed information about this in the ArcView documentation.
When you buy SAP NetWeaver BW, you receive a voucher that you can use to order ArcView GIS from ESRI. The scope of supply also contains a CD developed specially by SAP and ESRI. The CD contains a range of maps covering the whole world in various levels of detail. All maps on this data CD are already optimized for use with SAP NetWeaver BW. The .dbf files for the maps already contain the SAPBWKEY column, predefined with default values. For example, the SAPBWKEY column of the world map (cntry200) already contains the country keys typically used in SAP systems. You can immediately use this map to geographically evaluate your data; no SAPBWKEY maintenance is necessary.

Note
You can get additional detailed maps in ESRI Shapefile format from ESRI.

3.3.3.2.1.1 Static and Dynamic Geo-Characteristics


Definition
Static and dynamic geo-characteristics describe data with a geographical reference (for example, characteristics such as customer, sales region, or country). Maps are used to display and evaluate this geo-relevant data.

Structure
There are four different types of geo-characteristic:
1. Static geo-characteristics
A static geo-characteristic is a characteristic that describes a surface (polygon) whose geographical coordinates rarely change. Country and region are examples of static geo-characteristics. Data for areas or polygons is stored in Shapefiles that define the geometry and the attributes of the geo-characteristics.
2. Dynamic geo-characteristics
A dynamic geo-characteristic is a characteristic that describes a location (information in point form) whose geographical coordinates can change more frequently. Customer and plant are examples of dynamic geo-characteristics because they are rooted to one geographical "point" that can be described by an address, and the address data of these characteristics can often change. A range of standard attributes are added to this geo-characteristic in SAP NetWeaver BW. These standard attributes store the geographical coordinates of the corresponding object for each row in the master data table. The geo-attributes concerned are:
Technical Name   Description                                          Data Type   Length
LONGITUDE        Longitude of the location                            DEC         15
LATITUDE         Latitude of the location                             DEC         15
ALTITUDE         Altitude of the location (height above sea level)    DEC         17
PRECISID         Identifies how precise the data is                   NUMC        4
SRCID            ID for the data source                               CHAR        4

Note
At present, only the LONGITUDE and LATITUDE attributes are used. ALTITUDE, PRECISID and SRCID are reserved for future use. If you reset the geographical type of a characteristic to Not a Geo-Characteristic, these attributes are deleted in the InfoObject maintenance.

3. Dynamic geo-characteristics with values from attributes
To save you having to geocode each dynamic geo-characteristic individually, a dynamic geo-characteristic can get its geo-attributes (longitude, latitude, altitude) from another characteristic that has already been geocoded (postal code, for example). Customers and plants are examples of this type of dynamic geo-characteristic with values from attributes (type 3). The system treats this geo-characteristic as a regular dynamic geo-characteristic that describes a location (geographical information as a point on a map). The geo-attributes described above are not added to the master data table on the database level. Instead, the geo-coordinates are stored in the master data table of a regular attribute of the characteristic.

Note
You want to define a dynamic geo-characteristic for Plant with the postal code as an attribute. The geo-coordinates are generated from the postal code master data table at runtime.

Note
This method prevents redundant entries from appearing in the master data table.

4. Static geo-characteristics with geo-attributes
A static geo-characteristic that includes geo-attributes (longitude, latitude, altitude) which geo-characteristics of type 3 are able to refer to. The postal code, for example, can be used as a static geo-characteristic with geo-attributes.

Note
0POSTCD_GIS (postal code) is used as an attribute in the dynamic geo-characteristic 0BPARTNER (business partner) that gets its geo-coordinates from this attribute. In this way, the location information for the business partner is stored on the level of detail of the postal code areas.


See also: Delivered Geo-Characteristics

3.3.3.2.1.1.1 Shapefiles
Definition
ArcView GIS software files from ESRI that contain digital map material of areas or polygons (shapes). Shapefiles define the geometry and attributes of static geo-characteristics. Note that shapefiles have to be available in the format of the World Geodetic System 1984 (WGS 84).

Use
Shapefiles serve as a basis for displaying BW data on maps.

Structure
Format
The format of ArcView shapefiles uses the following files with special file extensions:
.dbf: dBase file that saves the attributes or values of the characteristic
.shp: saves the actual geometry of the characteristic
.shx: saves an index for the geometry
These three files are saved for each static geo-characteristic in the Business Document Service (BDS) and loaded from the BDS to the local computer when you use the BEx Map.
Shapefile Data from the ESRI BW Mapping Data CD
The map data from the ESRI BW mapping data CD was selected as the basic reference data level to provide you with a detailed map display and also with thematic mapping material on the levels of world maps, continents, and individual countries. The reference data levels include country boundaries, state boundaries, towns, streets, railways, lakes, and rivers. The mapping data is geographically subdivided into data for 21 separate maps. There is mapping data for:
a world map
seven maps on continent level, for example, Asia, Europe, Africa, North America, South America
13 maps on country level
How up-to-date the data is varies from country to country. Most of the country boundaries are as they were between 1960 and 1988; some countries have been updated to their position in 1995.
The names of the shapefiles on the ESRI BW mapping data CD follow a three-part naming convention. The first part consists of an abbreviation of the thematic content of the shapefile; for example, cntry stands for a shapefile with country boundaries. The second part of the name indicates the level of detail. There are, for example, three shapefiles with country boundary information at different levels of detail. The least detailed shapefile begins with cntry1, whereas the most detailed shapefile begins with cntry3. The third part of the name indicates the version number of the shapefile, based on the last two digits of the year beginning with the year 2000. Therefore, the full name of the shapefile with the most detailed country boundary information is cntry300.
All shapefiles on the ESRI BW mapping data CD already contain the SAPBWKEY column. For countries, the two-character SAP country key is entered in the SAPBWKEY column.

Note
The file Readme.txt on the ESRI BW mapping data CD contains further, detailed information on the delivered shapefiles, the file name conventions used, the mapping data descriptions and specifications, data sources, and how up-to-date the data is.
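If you want to inspect the attribute table of a shapefile without opening it in ArcView or Excel, a short script can list the SAPBWKEY values. The following is a minimal sketch, assuming the open-source Python library dbfread is available; the file name cntry200.dbf is taken from the example above, and this approach is not part of the SAP delivery.

# Unofficial sketch: inspect the SAPBWKEY column of a shapefile attribute table.
# The library choice (dbfread) and the file name are assumptions for illustration.
from dbfread import DBF

table = DBF("cntry200.dbf", load=True)

print("Columns:", table.field_names)        # should include 'SAPBWKEY'
for record in table.records[:5]:            # show the first few rows
    print(record.get("SAPBWKEY"), record)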

Integration
At runtime, the shapefiles are downloaded from the BW system to the IGS (Internet Graphics Service). The files are copied into the ../data/shapefiles directory. If a specific shapefile is already in this directory, it is not copied again. If the shapefile has been changed in the Business Document Service (BDS) in the meantime, the latest version is automatically copied into the local directory.
Depending on the level of detail, shapefiles can be quite large. The shapefile cntry200.shp with the country boundaries for the entire world is around 2.2 megabytes. For smaller organizational units, such as federal states, the geometric information is saved in multiple shapefiles. You can assign a characteristic to several shapefiles (for example, federal states in Germany, France, and so on).

3.3.3.2.1.1.2 Delivered Geo-Characteristics


Definition
SAP NetWeaver BW delivers a range of geo-characteristics with the Business Content.

Structure
The following are the most important delivered geo-characteristics:

Static geo-characteristics


Technical Name   Description
0COUNTRY         Country key
0DATE_ZONE       Time zone
0REGION          Region (federal state, province)

Dynamic geo-characteristics
Technical Name   Description
0AP_OF_FUND      Location number
0TV_P_LOCID      IATA location

Dynamic geo-characteristics with values from attributes


Technical Name   Attributes      Description
0BPARTNER        0POSTCD_GIS     Business Partner
0CONSUMER        0POSTCD_GIS     Consumer
0CUSTOMER        0POSTCD_GIS     Customer Number
0PLANT           0POSTCD_GIS     Factory
0VENDOR          0POSTCD_GIS     Vendor

Static geo-characteristics with geo-attributes


Technical Name   Description
0CITYP_CODE      City district code for city and street file
0CITY_CODE       City code for city and street file
0POSTALCODE      Postal/Zip code
0POSTCD_GIS      Postal code (geo-relevant)

3.3.3.2.1.2 SAPBWKEY Maintenance for Static Geo-Characteristics


Use
During runtime, BW data is combined with a corresponding Shapefile. This enables the BW data to be displayed in geographical form (country, region, and so on) using color shading, bar charts, or pie charts. The SAPBWKEY makes sure that the BW data is assigned to the appropriate Shapefile.

In the standard Shapefiles delivered with the ESRI BW map CD, the SAPBWKEY column is already filled with the two-character SAP country keys (DE, EN, and so on). You can use these Shapefiles without having to maintain the SAPBWKEY beforehand.

Prerequisites
You have marked the characteristic as geo-relevant in the InfoObject maintenance.
Before you are able to follow the example that explains how you maintain the SAPBWKEY for static geo-characteristics, SAP DemoContent must be active in your BW system.
You can use ArcView GIS from ESRI to maintain the SAPBWKEY, or you can use other software (MS Excel or FoxPro, for example) that has functions for displaying and editing dBase files.

Process
For static geo-characteristics (such as Country or Region) that represent the geographical drilldown data for a country or a region, you have to maintain the SAPBWKEY for the individual country or region in the attributes table of the Shapefile. The attributes table is a database table stored in dBase format. Once you have maintained the SAPBWKEY, you load the Shapefiles (.shp, .dbf, .shx) into BW. The Shapefiles are stored in the Business Document Service (BDS), a component of the BW server.
The following section uses the example of the 0D_COUNTRY characteristic to describe how you maintain the SAPBWKEY for static geo-characteristics. You use the CNTRY200 Shapefile from the ESRI BW map data CD. This Shapefile contains the borders of all the countries in the world.
The maintenance of the SAPBWKEY for static geo-characteristics consists of the following steps:
1. You create a local copy of the Shapefile from the BW data CD (.shp, .shx, .dbf).
2. You download BW master data into a dBase file.
3. You open the dBase attributes table for the Shapefile (.dbf) in Excel and maintain the SAPBWKEY column.
4. You load the edited Shapefile into the BW system.
In this example scenario using the 0D_COUNTRY characteristic, the SAPBWKEY column is already maintained in the attributes table and corresponds to the SAP country keys in the master data table. If you maintain a Shapefile where the SAPBWKEY has not been maintained, or where the SAPBWKEY is filled with values that do not correspond to BW master data, proceed as described in the steps above.

Result
You are now able to use the characteristic as a static geo-characteristic in the Business Explorer. Every user that works with a query containing this static geo-characteristic is able to attach a map to the query and analyze the data on the map directly.

3.3.3.2.1.2.1 Creating a Local Copy of the Shapefile


Use
You need a local copy of the Shapefile before you are able to maintain the SAPBWKEY column in the attributes table of the shapefile.

Procedure
1. Use your file manager (Windows Explorer, for example) to locate the three files cntry200.shp, cntry200.shx, and cntry200.dbf on the ESRI BW map data CD and copy the files to a local directory, for example C:\SAPWorkDir.
2. Deactivate the Read-only option before you edit the files. (Select the files, choose the Properties option from the context menu (right-click), and under Attributes deactivate the Read-only option.)
If you do not have access to the ESRI BW map data CD, proceed as follows:

Note
The files are already maintained in the BW Business Document Service (BDS). The following example explains how, for the characteristic 0D_COUNTRY in InfoCube 0D_SD_C0, you download these files from the BDS to your local directory.
1. Log on to the BW system and go to the InfoObject maintenance screen (transaction RSD1). This takes you to the Edit InfoObjects: Start dialog box.
2. In the InfoObject field, enter 0D_COUNTRY and choose Display. The Display Characteristic 0D_COUNTRY: Details screen appears.
3. Choose the Business Explorer tab page. In the BEx Map area, 0D_COUNTRY is displayed as a static geo-characteristic.
4. Choose Display Shape Files. This takes you to the Business Document Navigator, which already associates three shape files with this characteristic.
5. Open up the shape files completely in the BW Meta Objects tree.
6. Select the .dbf file BW_GIS_DBF and choose Export Document. This loads the files to your local SAP work directory. (The system proposes the C:\SAPWorkDir directory.)
7. Repeat the last step for the .shp (BW_GIS_SHP) and .shx (BW_GIS_SHX) files.

3.3.3.2.1.2.2 Downloading BW Master Data into a dBase File


Prerequisites
You have created a local working copy of the Shapefile.

Context
To maintain the SAPBWKEY column in the Shapefile attribute table, you have to specify the corresponding BW country key for every row in the attribute table. As this information is contained in the BW master data table, you have to download it into a local dBase file to compare it with the entries in the attribute table and maintain the SAPBWKEY.


Procedure
1. Log on to the BW system and go to the InfoObject maintenance screen (transaction RSD1). This takes you to the Edit InfoObjects: Start screen.
2. In the InfoObject field, enter 0D_COUNTRY and choose Display. The Display Characteristic 0D_COUNTRY: Detail dialog box appears.
3. Choose the Business Explorer tab page. In the BEx Map area, 0D_COUNTRY is displayed as a static geo-characteristic.
4. Choose Geo Data Download (Everything).
5. Accept the file name proposed by the system by choosing Transfer.

Note
The proposed file name is made up of the technical name of the characteristic and the .dbf extension, therefore, in this case the file is called 0D_COUNTRY.DBF.

Note
If the Geo Data Download (Everything) pushbutton is deactivated (gray), there is no master data for the InfoObject. If this is the case, download the texts for the InfoObject manually to get to the SAPBWKEY. See also: Creating InfoObjects: Characteristics, Tab Page: Master Data/Texts

Results
The status bar contains information on how much data has been transferred.

Note
If you have not specified a directory for the file name, the file is saved in the local SAP work directory.

3.3.3.2.1.2.3 Maintaining the SAPBWKEY Column


Prerequisites
You have completed the following steps:
Creating a Local Copy of the Shapefile
Downloading BW Master Data into a dBase File

Integration
The SAPBWKEY is maintained in the dBase file with the suffix .dbf. This file contains the attributes table.

Procedure
1. Launch Microsoft Excel and choose File → Open...
2. From the dropdown box in the Files of type field, choose dBase Files (*.dbf).
3. In the C:\SAPWorkDir directory, open the cntry200.dbf file. The attributes table from the Shapefile is displayed in an Excel worksheet.
4. Repeat this procedure for the 0D_COUNTRY.DBF file that you created in the step Downloading BW Master Data into a dBase File. This file shows you which SAPBWKEY values are used for which countries.
5. In the 0D_COUNTRY.DBF file, use the short description (0TXTSH column) to compare the two tables.

Note
ESRI delivers an ESRI BW map data CD. This CD contains the SAPBWKEY (corresponding to the SAP country key) for the characteristic 0D_COUNTRY. This is why the SAPBWKEY column in the cntry200.dbf file is already filled with the correct values.
Copy the SAPBWKEY manually to the attributes table in the Shapefile:
- if you are using a different country key
- if you are working with characteristics for which the SAPBWKEY column has not been defined, or is filled with invalid values

Note
If you are working with compound characteristics, copy the complete SAPBWKEY; for example, for region 01 compounded with country DE, copy the complete value DE/01.
Do not under any circumstances change the sequence of the entries in the attributes table (for example, by sorting or deleting rows). If you were to change the sequence of the entries, the attributes table would no longer agree with the index and the geometric files.

6. When you have finished maintaining the SAPBWKEY column, save the attributes table in the Shapefile, in this example cntry200.dbf.
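For characteristics with many values, it can help to check programmatically which keys are missing or do not match before you edit the table in Excel. The following is a minimal, unofficial sketch using the Python library dbfread; the file names come from the example above, and the column names (SAPBWKEY in the shapefile table, 0D_COUNTRY in the downloaded master data file) are assumptions that may differ in your download.

# Unofficial helper sketch: compare the SAPBWKEY column of the shapefile
# attribute table with the keys in the downloaded BW master data file.
# Column and file names are assumptions and may need to be adapted.
from dbfread import DBF

shape_keys = {rec.get("SAPBWKEY") for rec in DBF("cntry200.dbf")}
bw_keys = {rec.get("0D_COUNTRY") for rec in DBF("0D_COUNTRY.DBF")}

print("BW keys without a matching SAPBWKEY:", sorted(k for k in bw_keys - shape_keys if k))
print("SAPBWKEY values not found in BW master data:", sorted(k for k in shape_keys - bw_keys if k))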

3.3.3.2.1.2.4 Uploading Edited Shapefiles into BW Systems



Prerequisites
You have completed the following steps:
Creating a Local Copy of the Shapefile
Downloading BW Master Data into a dBase File
Maintaining the SAPBWKEY Column

Procedure
The last step is to attach the shapefile set (.shp, .shx, .dbf) to the InfoObject by uploading it into the Business Document Service (BDS) on the BW server.
1. Log on to the BW system and go to the InfoObject maintenance screen (transaction RSD1). This takes you to the Edit InfoObjects: Start screen.
2. In the InfoObject field, specify 0D_COUNTRY and choose Maintain. This takes you to the Change Characteristic 0D_COUNTRY: Detail screen.
3. On the Business Explorer tab page, choose Upload Shape Files. The Business Document Service: File Selection dialog box appears.
4. Select the cntry200.shp file and choose Open. The Business Document Service suggests entries for the file name, description, and so on, and allows you to enter key words that will make it easier for you to find the file in the BDS at a later date.
5. Choose Continue.
6. The system automatically asks you to upload the cntry200.dbf and cntry200.shx files for the shapefile.

Result
You have uploaded the edited shape file into the BW system. You can now use the characteristic in Business Explorer. Every user that works with a query that contains the 0D_COUNTRY InfoObject can now attach a map to the query and analyze the data on the map.

3.3.3.2.1.3 Geocoding
Use
To display dynamic geo-characteristics as points on a map, you have to determine the geographic co-ordinates for each master data object.

Note
The master data table for dynamic geo-characteristics is, therefore, extended with a number of standard geo-attributes such as LONGITUDE and LATITUDE (see Static and Dynamic Geo-Characteristics).

Prerequisites
You have marked the characteristic as geo-relevant in the InfoObject maintenance. See Tab Page: Business Explorer.

Note
Before you are able to follow the example that explains geocoding, SAP DemoContent must be active in your BW system.

Process
Geocoding is implemented with ArcView GIS software from ESRI. ArcView GIS determines the geographic coordinates of BW data by identifying a column with geo-relevant characteristics in a reference shape file. To carry out this process, you have to load the BW master data table into a dBase file. The geographical coordinates are determined for every master data object. After you have done this, convert the dBase file with the determined geo-attributes into a CSV file (comma-separated value file), which you can use for a master data upload into the BW master data table.
The following steps explain the process of geocoding dynamic geo-characteristics using the 0D_SOLD_TO characteristic (Sold-to Party) from the 0D_SD_C03 Sales Overview Demo-Content InfoCube.
1. You download BW master data into a dBase file.
2. You execute the geocoding with ArcView GIS.
3. You convert dBase files into a CSV file.
4. You schedule a master data upload for the CSV file.

Note
The system administrator is responsible for the master data upload.

Result
You are now able to use the characteristic as a dynamic geo-characteristic in the Business Explorer. Each user that works with a query that contains this geo-characteristic can now analyze the data on a map.

3.3.3.2.1.3.1 Downloading BW Master Data into a dBase File


Context
The first step, both in SAPBWKEY maintenance and in geocoding dynamic geo-characteristics, is to download the BW master data table into a dBase file.

Procedure
1. Log on to the BW system and go to the InfoObject maintenance screen (transaction RSD1). This takes you to the Edit InfoObjects: Start dialog box.
2. In the InfoObject field, enter the name of the dynamic geo-characteristic that you want to geocode (in this example: 0D_SOLD_TO).
3. Choose Display. The Display Characteristic 0D_SOLD_TO: Detail dialog box appears.
4. Choose the Business Explorer tab page. In the BEx Map area, 0D_SOLD_TO is displayed as a dynamic geo-characteristic.
5. Choose Geo Data Download (All).

Note
If you only want to maintain those entries that have been changed since the last attribute master data upload, choose Geo Data Download (Delta). The geo-data has to be downloaded in the delta version before you execute the realignment run for the InfoObject. Otherwise the delta information is lost.

6. The system asks you to select the geo-attributes that you want to include in the dBase file. The system only displays those attributes that were defined as geo-relevant. In this case, select both attributes: 0D_COUNTRY and 0D_REGION.
7. Choose Transfer Selections.
8. Accept the file name suggested by the system and choose Transfer.

Note
The proposed file name is made up of the technical name of the characteristic and the .dbf extension. You can change the file name and create a directory. If you do not specify a path, the file is automatically saved in the SAP work directory.

Results
The status bar contains information on how much data has been transferred.

3.3.3.2.1.3.2 Geocoding Using ArcView GIS


Use
Using geocoding, you enhance dynamic geo-characteristics from BW master data with the geographical attributes longitude and latitude.

Prerequisites
You have installed the ArcView GIS software from ESRI on your system and requested the desired geographic data from ESRI, if it is not already included in the delivered data CD.
You have completed the following step: Downloading BW Master Data into a dBase File

Procedure
Note
The following procedure is an example procedure that you can reconstruct using the demo contents. You can find additional details about ArcView geocoding and functionality in the ArcView documentation.

Note
In ArcView GIS you can execute many commands easily from the context menu. To open the context menu, select an element and right-click on it.

1. Open ArcCatalog using Programs → ArcGIS → ArcCatalog.
2. Under Address Locators, double-click on the entry New Address Locator.
3. In the Create New Address Locator window, select the entry Single Field (File) and click OK.
4. In the New: Single Field (File) Address Locator window, enter the name of the service and the description, for example, Geocoding Service SoldTo. Under Reference Data, enter the path for the reference shapefile, for example, g_stat00.shp, and from the Fields dropdown menu, select the most appropriate entry, in this case SAPBWKEY. Under Output Fields, activate the checkbox X and Y Coordinates. In the navigation menu, the new service is displayed under Address Locators.
5. Open ArcMap using Programs → ArcGIS → ArcMap and start with A New, Empty Map in the entry dialog. Choose OK.
6. In the standard toolbar, click on the Add Data symbol and add the corresponding dBase file, for example SoldTo.dbf, as a new table.
7. Right-click in the main tree on the entry for the table that you created and select Geocode Addresses. The Choose an address locator to use... window opens. All available services are displayed in this window.
8. Click Add and, in the Add Address Locator window under Search in:, choose the entry Address Locators. Select the service that you created in step 4 (in this example, Geocoding Service SoldTo) and click Add.
9. In the Choose an address locator to use... window, select the service again and click OK. The Geocode Addresses window opens.
10. Under Address Entry Fields, choose the suitable entry, for example 1_0D_Regio. This is the field that tallies with the reference data. Under Output Shapefile or Feature Class, enter the path under which the result of the geocoding is to be saved. Choose OK. The data is geocoded.
11. After you have checked the statistics in the Review/Rematch Addresses window, click Done.

Result
The dynamic geo-characteristics for your master data have now been enhanced with additional geo-information in the form of the columns X (longitude) and Y (latitude). In ArcMap this information is displayed as points in the right-hand side of the work area.
To check whether the result appears as you had planned, you can place the points on the relevant map. Proceed as follows:
1. Click on the symbol on the toolbar.
2. Select the reference Shapefile that you used in step four, for example, g_stat00.shp.
3. Choose Add. The map is displayed in the work area on a level under the points.
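The documented approach is the ArcView address locator described above. Purely as an illustration of the underlying idea (matching each master data record to a reference shape by key and deriving point coordinates), the following is a rough, unofficial Python sketch using geopandas and dbfread; the file names, the field name 1_0D_REGIO, and the use of shape centroids as coordinates are assumptions and only approximate what ArcView does.

# Unofficial sketch of the same idea without ArcView: derive X/Y coordinates
# for each master data record from the centroid of the matching reference shape.
# File, field, and column names are assumptions for illustration only.
import geopandas as gpd
import pandas as pd
from dbfread import DBF

shapes = gpd.read_file("g_stat00.shp")              # reference shapefile with SAPBWKEY
shapes["X"] = shapes.geometry.centroid.x
shapes["Y"] = shapes.geometry.centroid.y

master = pd.DataFrame(list(DBF("SoldTo.dbf")))      # downloaded BW master data
result = master.merge(shapes[["SAPBWKEY", "X", "Y"]],
                      left_on="1_0D_REGIO",         # assumed name of the region field
                      right_on="SAPBWKEY", how="left")
result.to_csv("Geocoding_Result.csv", index=False)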

3.3.3.2.1.3.3 Converting dBase Files into CSV Files


Prerequisites
You have completed the following steps:
Downloading BW Master Data into a dBase File
Geocoding Using ArcView GIS

Integration
As a result of the geocoding, you receive the dBase file Geocoding_Result.dbf. This file contains the BW master data enhanced by the columns X and Y. Since the attribute table is saved in dBase file format, you now have to convert the table into a CSV (comma-separated value) format that can be processed by the BW Staging Engine. You can convert the table in Microsoft Excel.

Procedure
1. Launch Microsoft Excel and choose File → Open...
2. From the selection list in the Files of type field, choose dBase Files (*.dbf).
3. Open the file Geocoding_Result.dbf. The attribute table with the geo-attributes is displayed in Excel.
4. Choose File → Save As...
5. From the Save as type selection list, choose CSV (Comma delimited).
6. Save the table.

Results
You have converted the dBase file into a CSV file with the geo-attributes for the dynamic geo-characteristic 0D_SOLD_TO. Your system administrator can now schedule a master data upload.

Note
When you upload the CSV file, you have to map the values in column X to the attribute 0LONGITUDE, and the values in column Y to the attribute 0LATITUDE.
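If Excel is not available, the conversion can also be scripted. The following is a minimal sketch, assuming the Python library dbfread is available; it renames the X and Y columns to 0LONGITUDE and 0LATITUDE as described in the note above. The file names are taken from the example, and the approach is not part of the SAP delivery.

# Unofficial sketch: convert the geocoded dBase file into a CSV file and rename
# the coordinate columns as described above. Library choice and column names
# are assumptions for illustration.
import csv
from dbfread import DBF

table = DBF("Geocoding_Result.dbf")
rename = {"X": "0LONGITUDE", "Y": "0LATITUDE"}
fieldnames = [rename.get(name, name) for name in table.field_names]

with open("Geocoding_Result.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(fieldnames)
    for record in table:
        writer.writerow(list(record.values()))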

3.3.3.3 Tab: Master Data/Texts


Definition Use
On this tab page, you define whether or not the characteristic has attributes and/or texts.

Structure
With Master Data
If you set this flag, the characteristic can have attributes. The system then generates a P table for this characteristic. This table contains the key of the characteristic and any attributes it has. It is used as a check table for the SID table. When loading transaction data, the system checks whether there is a characteristic value in the P table if referential integrity is used.
Choose Maintain Master Data to call the maintenance dialog for editing attributes from the main menu.
The master data table can have a time-dependent and a time-independent part. More information: Master Data Types: Attributes, Texts, and Hierarchies. In the Attribute Maintenance transaction, you specify whether or not an attribute is time-dependent.


With Texts
Here you specify whether the characteristic has texts. If you want to use texts with a characteristic, you have to select at least one text. The Short Text (20 characters) option is set by default, but you can also choose medium-length texts (40 characters) or long texts (60 characters).
Language-Dependent Texts
You can specify whether or not the texts in the text table are language-dependent. If you specify language-dependent, the language is a key field in the text table. Otherwise, there is no language field in the text table.

Note
For certain BW Content characteristics, customer (0CUSTOMER) for example, there is no point in setting them as non-language-dependent.

Time-Dependent Texts
If you want texts to be time-dependent (the date is included in the key of the text table), you make the relevant settings here.
See also: Using Master Data and Master Data-Bearing Characteristics

Master Data Maintenance with Authorization Check
If you set this flag, you can use authorizations to protect the attributes and texts for this characteristic from being maintained at single-record level. If you activate this option, you can enter the characteristic values for which the user has authorization for each key field in the master data table. You do this in the profile generator in role maintenance using authorization object S_TABU_LIN. More information: Authorizations for Master Data. If you do not set this flag, you can only allow access to or lock the entire master data maintenance (for all characteristic values).

DataStore Object for Checking Characteristic Values
If you create a DataStore object for checking the characteristic values of a characteristic, the valid values for the characteristic are determined in the transformation or in the update and transfer rules from the DataStore object and not from the master data. The DataStore object must contain the characteristic itself and all the fields in the compound as key fields.

Characteristic Is ...
InfoSource: If you want to use a characteristic as an InfoSource with direct update, you have to assign an application component to the characteristic. The system displays the characteristic in the InfoSource tree in the Data Warehousing Workbench. You can assign DataSources and source systems to the characteristic here. You can then also load attributes, texts, and hierarchies for the characteristic.
In the following cases you cannot use an InfoObject as an InfoSource with direct update:
The characteristic is characteristic 0SOURSYSTEM (source system ID).
The characteristic has no master data, texts or hierarchies. There is no point in loading data for the characteristic.
It is not a characteristic but a unit or key figure.
You can find more information at http://help.sap.com/nw70 under Application Help → SAP Library → SAP NetWeaver → SAP NetWeaver by Key Capability → Information Integration → Business Intelligence → Data Warehousing → Transformation → Old Transformation Concept → InfoSource 3.x.
If you want to generate an export DataSource for a characteristic, the characteristic has to be an InfoSource with direct update. It also has to be assigned to an application component.
InfoProvider: This flag specifies whether the characteristic is an InfoProvider. If you want to use a characteristic as an InfoProvider, you have to assign an InfoArea to it. The system displays the characteristic in the InfoProvider tree in the Data Warehousing Workbench. You can use the characteristic as an InfoProvider in reporting and analysis. You can only use a characteristic as an InfoProvider if the characteristic contains texts or attributes. You can define queries for the characteristic (that is, for the characteristic's master data) if you are using the characteristic as an InfoProvider. In this case, you can activate dual-level navigation attributes (navigation attributes of navigation attributes) for this characteristic in its role as an InfoProvider on the Attributes tab page. More information: InfoObjects as InfoProviders.
Export DataSource: If you set this flag, you can extract the characteristic's attributes, texts, and hierarchies to other BW systems. More information: Data Mart Interface.

Master Data Access
You have the following options for accessing master data at query runtime:
1. Standard: The values from the characteristic's master data table are displayed. This is the default setting.
2. Own Implementation: You can define an ABAP class to implement the access to master data yourself. To do this, you need to implement interface IF_RSMD_RS_ACCESS. You also have to be proficient in ABAP OO. An example of this is time characteristic 0FISCYEAR, which is delivered with the Business Content.
3. Direct: If the characteristic is selected as an InfoProvider, you can access the data in a source system using direct access. If you choose this option, you have to use a data transfer process to connect the characteristic to the required DataSource. You also have to assign the characteristic to a source system.
4. SAP HANA Attribute View: If you are using an SAP HANA database, you can create virtual master data. More information: Using Virtual Master Data
We recommend using the default setting. If you have special requirements with regard to reading master data, you can use a customer-defined implementation. Navigation attributes are only supported for the access types Own Implementation, Direct, and SAP HANA Attribute View if the InfoObject is used in a VirtualProvider. We advise against using direct access to master data in performance-critical scenarios.

Permitted for Real-Time Data Acquisition
If you select this checkbox, you can use real-time data acquisition to fill the characteristic with data.

3.3.3.4 Tab Page: Hierarchy



Definition Use
If you want to create a hierarchy, or upload an existing hierarchy from a source system, you have to set the With Hierarchies indicator. The system generates a hierarchy table with hierarchical relationships for the characteristic.
You are able to determine the following properties for the hierarchy:
Whether or not you want to create Hierarchy Versions for a hierarchy.
Whether you want the entire hierarchy or just the hierarchy structure to be time-dependent.
Whether you want to allow the use of Hierarchy Intervals.
Whether you want to activate the Sign Reversal function for nodes.
The characteristics that are permitted in the hierarchy nodes: If you want to use the PSA to load your hierarchy, you must also select InfoObjects for the hierarchy basic characteristic that you want to upload. All the characteristics you select here are included in the communication structure for hierarchy nodes, together with the characteristics compounded to them. For hierarchies that are loaded using IDocs, it is a good idea to also select the permitted InfoObjects. This makes maintenance of the hierarchy more transparent, because only valid characteristics are available for selection. If you do not select an InfoObject here, only text nodes are permitted as nodes that can be posted to in hierarchies.
See also:
Hierarchies
Using Master Data and Master Data-Bearing Characteristics

3.3.3.5 Tab Page: Attributes


Definition Use
On this tab page, you determine whether or not the characteristic has display or navigation attributes, and if so, which properties they have.

Note
This tab page is only available if you have set the With Master Data indicator on the Master Data/Texts tab page.

In the query, display attributes provide additional information about the characteristic. Navigation attributes, on the other hand, are treated like normal characteristics in the query, and can also be evaluated on their own.

Structure
Attributes are InfoObjects that already exist and that are assigned logically to the new characteristic.
There are the following ways to maintain attributes for a characteristic:
Choose attributes from the Attributes of the Assigned DataSources list.
Use the F4 help for the fields that are ready for input in the Attributes of the Characteristic list to display all the InfoObjects, and choose the attributes that you need.
In the Attributes list, specify the name of an InfoObject that you want to use as an attribute directly in the fields that are ready for input. If the InfoObject that you want to use does not yet exist, you have the option of creating a new InfoObject at this point. Any new InfoObjects that you create are inactive. They are activated when the existing InfoObject is activated.

Properties
Choose Detail/Navigation Attributes to display the detailed view. In the detail view, you set the following:
Time Dependency
You can decide for each attribute individually whether it is to be time-dependent. If only one attribute is time-dependent, a master data table is created. However, there can still be attributes for this characteristic that are not time-dependent. All time-dependent attributes are in one table; that is, they all have the same time-dependency. All time-constant attributes are in one table.

Example
Characteristic: Business Process

Table /BI0/PABCPROCESS - for time-constant attributes

Business Process   Cost Center Responsible
1010               Jones

Table /BI0/QABCPROCESS - for time-dependent attributes

Business Process   Valid From    Valid To      Company Code
1010               01.01.2000    01.06.2000    A
1010               02.06.2000    01.10.2000    B

View /BI0/MABCPROCESS connects these two tables:

Business Process   Valid From    Valid To      Company Code   Cost Center Responsible
1010               01.01.2000    01.06.2000    A              Jones
1010               02.06.2000    01.10.2000    B              Jones

Note
In master data updates, you can either load time-dependent and time-constant data individually, or together.

Sequence of Attributes in Input Help
You can determine the sequence in which the attributes for a characteristic are displayed in the input help. There are the following values for this setting:
00: The attribute is not displayed in the input help.
01: The attribute appears in the first position (far left) in the input help.
02: The attribute appears in the second position in the input help.
03: ...
Altogether, only 40 fields are permitted in the input help. In addition to the attributes, the characteristic itself, its texts, and the compound characteristics are generated in the input help. The total number of fields cannot be greater than 40.

Navigation Attribute
The attributes are defined as display attributes by default. You can activate an attribute as a navigation attribute in the relevant column. You might want to give this navigation attribute a description and a short text. These texts for navigation attributes can also be supplied by the basic InfoObject. If the text of the characteristic changes, the texts for the navigation attribute are adjusted automatically. This process requires very little maintenance and translation resources.

Caution
When you are defining and executing queries, it is not possible to use the texts to distinguish between navigation attributes and characteristics. As soon as a characteristic appears several times (as a characteristic and as a navigation attribute) in an InfoProvider, you must give the navigation attribute a different name. For example, you could call the characteristic Cost Center and call the navigation attribute Person Responsible for the Cost Center. See also the topic Elimination of Internal Business Volume: the characteristic pair Sent Cost Center and Received Cost Center has the same reference characteristic and has to be differentiated by the text.

Authorization Relevance
You can mark navigation attributes as authorization-relevant independently of the assigned basic characteristics.

Navigation Attributes for InfoProviders
For characteristics that are identified as InfoProviders, you can maintain two-level navigation attributes (that is, navigation attributes of navigation attributes) using Navigation Attribute InfoProviders. This is used for master data reporting on the characteristic. See also Modeling InfoObjects as InfoProviders. It has no effect on characteristics used in other InfoProviders. In other words, if you use this characteristic in an InfoCube, the two-level navigation attributes are not available for reporting on this InfoCube.

3.3.3.6 Tab Page: Compounding


Use
In this tab page, you determine whether you want to compound the characteristic to other InfoObjects. You sometimes need to compound InfoObjects in order to map the data model. Some InfoObjects cannot be defined uniquely without compounding.

Example
For example, if storage location A for plant B is not the same as storage location A for plant C, you can only evaluate the characteristic Storage Location in connection with Plant. In this case, compound the characteristic Storage Location to Plant, so that the characteristic is unique.

One particular option with compounding is the possibility of compounding characteristics to the source system ID. You can do this by setting the Master Data Locally for Source Sys. indicator. You may need to do this if there are identical characteristic values for the same characteristic in different source systems, but these values indicate different objects.

Recommendation
The extensive use of compounded InfoObjects can influence performance, particularly if you include a lot of InfoObjects in compounding. Do not try to display hierarchical links through compounding. Use hierarchies instead.

Note
A maximum of 13 characteristics can be compounded for an InfoObject. Note that characteristic values can also have a maximum of 60 characters. This includes the concatenated value, meaning the total length of the characteristics in the compounding plus the length of the characteristic itself.

Reference InfoObjects
If an InfoObject has a reference InfoObject, it has the latter's technical properties: for characteristics, these are the data type and length as well as the master data (attributes, texts and hierarchies); the characteristic itself also has the operational semantics. For key figures, these are the key figure type, data type, and the definition of the currency / unit of measure. However, the referencing key figure can have a different aggregation. These properties can only be maintained with the reference InfoObject.
Several InfoObjects can use the same reference InfoObject. InfoObjects of this type automatically have the same technical properties and master data.
The operational semantics, that is, the properties such as description, display, text selection, relevance to authorization, person responsible, constant, and


attribute exclusively, are also maintained with characteristics that are based on one reference characteristic.

Example
The characteristic Sold-to Party is based on the reference characteristic Customer and, therefore, has the same values, attributes, and texts.
More than one characteristic can have the same reference characteristic: the characteristics Sending Cost Center and Receiving Cost Center both have the reference characteristic Cost Center. See the documentation on eliminating internal business volume.

Characteristic Constants
By assigning a constant to a characteristic, you give it a fixed value. The characteristic then exists on the database (for validation purposes, for example), but it is not visible in the query. For example, the Storage Location characteristic is compounded with the Plant characteristic. If you only run one plant within the application, you can assign a constant to the plant. The validation against the storage-location master table then runs correctly using the value for the plant.

Note
Exception: If the constant SPACE (type CHAR) or 00..0 (type NUMC) is assigned to the characteristic, specify character # in the first position.

3.3.3.7 Tab: BWA Index


Concept
This tab page is only displayed if the InfoObject has already been indexed on the SAP NetWeaver BW Accelerator (BWA). On this tab page, you can view the BWA indexes for this characteristic. You can view individual indexes. The system displays a status for each index. By choosing Maintain BW Accelerator Index, you can edit existing indexes and create new ones. More information: Indexing BW Data in SAP NetWeaver BW Accelerator.

3.3.3.8 Characteristic Compounding with Source System ID


Use
If there are identical characteristic values describing different objects for the same characteristic in various source systems, you have to convert the values in SAP BW in such a way as to make them unique.

Tip
For example, the same customer number may describe different customers in different source systems.

You can carry out conversion in the transfer rules for this source system or in the transfer routine for the characteristic. If the work involved in conversion is too great, you can compound the characteristic to the InfoObject Source System ID (0SOURSYSTEM). This means it is automatically filled with master data.
The source system ID is a 2-character identifier for a source system or a group of source systems in BW. The source system ID is updated with the ID of the source system that provides the data. Assigning the same ID to more than one source system creates a group of source systems. The master data is unique within each group of source systems.

Tip
You already have 10 source systems within which the master data is unique. Five new source systems are now added, resulting in overlapping. You can now assign the 10 existing source systems to ID 'OL' (with text 'Old Systems') and the 5 new systems to ID 'NE' (text: 'New Systems'). Note: You now need to reload the data.

If you use the characteristic Source System ID, you have to assign an ID to each source system. If you do not assign an ID to each source system, an error occurs when you load master data for the characteristics that use the Source System ID as an attribute or in the compounding. This is because, in data transfers, the source system to source system ID assignment is used to determine which value is updated for the characteristic Source System ID.

Master Data that is Local in the Source System (or Group of Source Systems)
If you have master data that is only unique locally for the source system in SAP BW, you can compound the relevant characteristics to the Source System ID characteristic. In this way, you can separate identical characteristic values that refer to different objects in different systems. Data transfers from one BW system into another BW system are an exception where this 1:1 assignment does not apply. See also the Exception Scenario: Data Mart section in Assigning a Source System to a Source System ID.

RRI (Report-Report Interface) and Drag & Relate
Characteristics that are to be traced back to your original system using the RRI (Report-Report Interface) or Drag & Relate should have the characteristic 0SOURSYSTEM as their attribute. Otherwise problems might occur during the system copy. When you integrate your BW system into SAP Enterprise Portal, the Source System characteristic is used to define the logical system of the business objects corresponding to the characteristic values. The functions specified using Drag & Relate (for example, the detail display of an order or a cost center) are then called in this system. Every Business Content characteristic that corresponds to a business object has the characteristic Source System as its attribute.
If you assign more than one source system to a source system ID, you can define one system of this group as the default system. This system is then used in the Report-Report Interface and in Drag & Relate for the return jump. This default system is only used if the origin of the data was not yet uniquely defined by


characteristic 0LOGSYS.

Deleting and Removing a Source System ID
You can only delete the assignment to a source system ID if it is no longer used in the master or transaction data. Use the Release IDs that are not in use function here.

3.3.3.8.1 Assigning a Source System to a Source System ID


Use
Assigning a source system to a source system ID is necessary if, for example, you want to compound a characteristic to the InfoObject Source System ID. More information: Characteristic Compounding with Source System ID. When data is transferred, the source system to source system ID assignment is used to determine which value is updated for the source system ID characteristic. The source system ID indicates the source system from which data is delivered.

Procedure
1. In the main menu of the Data Warehousing Workbench, choose Tools → Assign Source System to Source System ID.
2. Choose Suggest Source System IDs.
3. Save your entries.

Note
The source system ID can be changed only when it is no longer used in the master data or transaction data. To do this, use the function Release IDs that are not in use on the maintenance screen for source system ID assignment.

Exception Scenario: Data Mart
Data transfers from one BW system (source BW) into another BW system (target BW) are cases where this 1:1 assignment does not apply. The system ID of the source BW cannot be used here because the various objects (that were differentiated in the source BW by compounding to the source system IDs) would otherwise overlap. When you transfer data from the source BW to the target BW, the source system IDs are copied from the source BW. If these IDs are not yet recognized in the target BW, then you have to create them. You can therefore create source system IDs for logical systems that are not used as BW source systems.

Procedure
1. In the main menu of the Data Warehousing Workbench, choose Tools → Assign Source System to Source System ID.


2. Choose Create.
3. Enter the logical system name and a description, and confirm your entries (in this example, the name would be OLTP1 or OLTP2).
4. In the Source System ID column, enter the ID name that you also entered in BW1 for the corresponding source system (in this example, it would be ID 01 or ID 02).
5. Save your entries.

3.3.3.9 Navigation Attribute


Use
Characteristic attributes can be converted into navigation attributes. They can be selected in the query in exactly the same way as the characteristics for an InfoCube. In this case, a new edge/dimension is added to the InfoCube. During the data selection for the query, the data manager connects the InfoProvider and the master data table ('join') in order to fill the query.

Example
Costs of the cost center drilled down by person responsible: You use the attribute 'Cost Center Manager' for the characteristic 'Cost Center'. If you want to navigate in the query using the cost center manager, you have to create the attribute 'Cost Center Manager' as a navigation attribute, and flag it as a navigation characteristic in the InfoProvider. When executing the query there is no difference between navigation attributes and the characteristics for an InfoCube. All navigation functions in the OLAP processor are also possible for navigation attributes.

Note
Extensive use of navigation attributes leads to a large number of tables in the connection ('join') during selection and can impede the performance of the following actions:
Deletion and creation of navigation attributes (construction of attribute SID tables)
Change of time-dependency of navigation attributes (construction of attribute SID tables)
Loading master data (adjustment of attribute SID tables)
Calling up the input help for a navigation attribute
Execution of queries
Therefore, only make those attributes into navigation attributes that you really need for reporting. See Performance of Navigation Attributes in Queries and Input Help.
See also: Creating Navigation Attributes

3.3.3.9.1 Creating Navigation Attributes


Prerequisites
You are in InfoObject maintenance and have selected the tab page Attributes.

Procedure
1. Specify the technical name of the characteristic that you want to use as a navigation attribute, or create a new attribute by choosing Create. You can also directly transfer proposed attributes of the InfoSource.

Note
In order to use the characteristic as a navigation attribute, make sure the InfoObject is first assigned as an attribute, and that the option Attribute Only is not activated for the characteristic on the General tab page.
2. By clicking the Navigation Attribute On/Off symbol in the relevant column, you define an attribute as a navigation attribute.
3. If you set the Authorization Relevant indicator, the navigation attribute is subject to the authorization check when a query is executed.
4. Choose the Characteristic Texts indicator, or specify a name in the Navigation Attribute Description field.

Note
If you turn a characteristic attribute into a navigation attribute, you can assign a text to the navigation attribute to distinguish it from a normal characteristic in reporting.

Results
You have created a characteristic as a navigation attribute for your superior characteristic.
Further Steps to Take
You must activate the created navigation attributes in the InfoProvider maintenance. The default is initially set to Inactive so as not to implicitly include more attributes than are necessary in the InfoCube.


Note
Navigation attributes can affect performance. See also Performance of Navigation Attributes in Queries and Input Help. You can create or activate navigation attributes in the InfoCube at any time. Once an attribute has been activated, you can only deactivate it if it is not used in aggregates. In addition, you must include your navigation attributes in queries so that they are used in reporting.

3.3.3.9.2 Performance of Navigation Attributes in Queries and Value Help


Use
From a system performance point of view, you should model an object on a characteristic rather than on a navigation attribute. The reasons for this are as follows:
- In the enhanced star schema of an InfoCube, navigation attributes lie one join further out than characteristics. This means that a query with a navigation attribute has to run an additional join (compared with a query with the same object as a characteristic) in order to arrive at the values. This is also true for DataStore objects.
- For the same reason, in some situations, restrictions for particular values in the navigation attribute (values that have been defined in the query) are not taken into account by the database optimizer when it creates run schedules. This can result in inefficient run schedules, particularly if the restrictions are very selective. In most cases, you can solve this problem by indexing the navigation attribute in the corresponding master data tables (see below).
- If a navigation attribute is used in an aggregate, the aggregate has to be adjusted using a change run as soon as new values are loaded for the navigation attribute (when master data for the characteristic belonging to the navigation attribute is loaded). This change run is usually one of the processes that are critical to the system performance of a production BW system. By avoiding navigation attributes, or by not using navigation attributes in aggregates, you can therefore improve the performance of this process. On the other hand, not using navigation attributes in aggregates can lead to poor query response times. The data modeler needs to find the right balance.
Additional Indexing
It is sometimes advisable to manually create additional indexes for master data tables, to improve system performance for queries with navigation attributes. A typical scenario would be if there were performance problems during the selection of characteristic values, for example:
- In BEx queries containing navigation attributes, where the corresponding master data table is large (more than 20,000 entries) and a restriction is placed on the navigation attributes
- In the input help for this type of navigation attribute

Example
You want to improve the performance of navigation attribute A of characteristic C. You have restricted navigation attribute A to certain values. If A is time-independent, you need to refer to the X table of C (/BI0/XC or /BIC/XC). If A is time-dependent, you need to refer to the Y table of C (/BI0/YC or /BIC/YC). This table contains a column S__A (A = navigation attribute). Using the ABAP Dictionary, for example, you need to create an additional database index for this column: SAP Easy Access → Tools → ABAP Workbench → Development → Dictionary.

Note
You must verify whether the index that you have created has actually improved performance. If there is no perceivable improvement, you must delete the index, as maintaining defunct indexes can lead to poor system performance when data is loaded (in this case master data) and has an impact on the change run.

3.3.3.9.3 Transitive Attributes as Navigation Attributes


Use
If a characteristic was included in an InfoCube as a navigation attribute, it can be used for navigating in queries. This characteristic can itself have further navigation attributes, called transitive attributes. These attributes are not automatically available for navigation in the query. However, this procedure describes how you can display the transitive attributes in the query via modeling.

Example
An InfoCube contains InfoObject 0COSTCENTER (cost center). This InfoObject has navigation attribute 0COMP_CODE (company code). This characteristic in turn has navigation attribute 0COMPANY (company for the company code). In this case 0COMPANY is a transitive attribute that you can switch on as navigation attribute.


Procedure
In the following procedure, we assume a simple scenario with InfoCube IC containing characteristic A, with navigation attribute B and transitive navigation attribute T2, which does not exist in InfoCube IC as a characteristic. You want to display navigation attribute T2 in the query.

Creating Characteristics
Create a new characteristic dA (denormalized A) which has the transitive attributes requested in the query as navigation attributes (for example T2) and which has the same technical settings for the key field as characteristic A. After creating and saving characteristic dA, go to transaction SE16, select the entry for this characteristic from table RSDCHA (CHANM = <characteristic name> and OBJVERS = 'M') and set field CHANAV to 2 and field CHASEL to 4. This renders characteristic dA invisible in queries. This is not technically necessary, but improves readability in the query definition since the characteristic does not appear here. Start transaction RSD1 (InfoObject maintenance) again and activate the characteristic.
Including Characteristics in the InfoCube
Include characteristic dA in InfoCube IC. Switch on its navigation attribute T2. The transitive navigation attribute T2 is now available in the query.
Modifying Transformation Rules
Now modify the transformation rules for InfoCube IC so that the newly included characteristic dA is calculated in exactly the same way as the existing characteristic A. The values of A and dA in the InfoCube must be identical.


Creating InfoSources
Create a new InfoSource. Assign the DataSource of characteristic A to the InfoSource.
Loading Data
Technical explanation of the load process: The DataSource of characteristic A must fill the master data table of characteristic A as well as that of characteristic dA. In this example the DataSource delivers key field A and attribute B. A and B must be updated to the master data table of characteristic A. A is also updated to the master data table of dA (namely in field dA), and B is only used to determine transitive attribute T2, which is read from the updated master data table of characteristic B and written to the master data table of characteristic dA. Since the values of attribute T2 are copied to the master data table of characteristic dA, this results in the following dependency, which must be taken into consideration during modeling: If a record of characteristic A changes, it is transferred from the source system when it is uploaded into the BW system. If a record of characteristic B changes, it is likewise transferred from the source system when it is uploaded into the BW system. However, since attribute T2 of characteristic B is read and copied when characteristic A is uploaded, a data record of characteristic A might not be transferred to the BW system during a delta upload of characteristic A because it has not changed. The transitive dependent attribute T2 might have changed for this record only, but the attribute would then not be updated for dA. The structure of a scenario for loading data depends on whether or not the extractor of DataSource A is delta enabled.
Loading process: Scenario for non-delta-enabled extractor
If the extractor for DataSource A is not delta enabled, the data is updated to the two different InfoProviders (master data tables of characteristics A and dA) using an InfoSource and two different transformation rules.

Scenario for delta-enabled extractor
If it is a delta-enabled extractor, a DataStore object from which you can always execute a full update in the master data table of characteristic dA is used. With this solution, the data is also updated to two different InfoProviders (master data table of characteristic A and a new DataStore object which has the same structure as characteristic A) in a delta update using a new InfoSource and two different transformation rules.


Transformation rules from the DataStore object are also used to write the master data table of characteristic dA with a full update.

For both solutions, the transformation rules into the InfoProvider master data table of characteristic dA must cause attribute T2 to be read. For complicated scenarios in which you read from several levels, function modules are used that perform this task. It is better for the coding for reading the transitive attributes (in the transformation rules) if you include the attributes to be read in the InfoSource right from the beginning. This means that you only have transformation rules that perform one-to-one mapping. The additional attributes that are included in the InfoSource are not filled in the transfer rules. They are only computed in the transformation rules in a start routine, which must be created. The advantage of this is that the coding for reading the attributes (which can be quite complex) is stored in one place in the transformation rules. In both cases the order at load time must be adhered to and must be implemented either organizationally or using a process chain. It is essential that the master data to be read (in our case the master data of characteristic B) already exists in the master data tables in the system when the data for the DataSource of characteristic A is loaded. Changes to the master data of characteristic B therefore only become visible in A / dA with the next load.

3.3.3.10 Conversion Routines in the BW System


Use
Conversion routines are used in the BW system so that the characteristic values (key) of an InfoObject can be displayed or used in a different format to the one in which they are stored in the database. They can also be stored in the database in a different format to their original form, and values that appear to be different can be consolidated into one. The conversion routines that are most frequently used in the BW system are described below.

Integration
In the BW system, conversion routines essentially serve to simplify the input of characteristic values at query runtime. For example, for cost center 1000, you enter 1000 instead of the long value with leading zeros, 0000001000, that is stored in the database. Conversion routines are therefore linked to characteristics (InfoObjects) and can be used by them. Conversion routines also play a role in data loading. A DataSource has two conversion routines: one that is entered in the SAP source system and transferred to the BW system at replication, and one that is defined in the BW system or was already defined for BW Content DataSources. In the DataSource maintenance you can define whether the data is delivered in external or internal format, or whether the format should be checked. The conversion routine from the source system is hidden there. The conversion routine from the source system is used in the value help of the InfoPackage. Depending on the setting made in this field, the conversion routine in the BW system is checked upon loading (OUTPUT & INPUT), executed (INPUT), or ignored (in this case, when the DataSource is checked, a warning is issued if a conversion routine is nevertheless entered). It is also used for the display (OUTPUT) and maintenance (INPUT) of data in the PSA. In many cases it is desirable to store the conversion routines of these fields in the corresponding InfoObject on the BW system side too. When the fields of the DataSource are assigned to the InfoObjects, a conversion routine is assigned by default in the transformation rules. You can choose whether or not to execute this conversion routine. Conversion routines PERI5, PERI6 and PERI7 are not executed automatically since these conversions are only performed when data is extracted to the BW system. When loading data, note that data extracted from SAP source systems is already in the internal format and is not converted. When loading flat files and when loading using a BAPI or DB Connect, the conversion routine displayed signifies that an INPUT conversion is executed before writing to the PSA. For example, a date field is delivered from a flat file in the external format '10.04.2003'. In the transformation rules, this field can be converted to the internal format '20030410' according to a conversion routine.


A special logic is used in the following cases:
- For numeric fields, a number format transformation is performed if needed (if no conversion routine is specified).
- For currencies, a currency conversion is also performed (if no conversion routine is specified).
- If required, a standard transformation is performed for the date and time (according to the user settings). Conversion routine RSDAT offers a more flexible user-independent date conversion.
Conversion routines ALPHA, NUMCV, and GJAHR check whether data exists in the correct internal format before it is updated. For more information, see the extensive documentation in the BW system in the transaction for converting to conforming internal values (transaction RSMDCNVEXIT). If the data is not in the correct internal form, an error message is issued. BW Content objects are delivered with conversion routines if these are also used by the DataSource in the source system. The external presentation is then the same in both systems. The names of the conversion routines of the DataSource fields that are used are transferred to the BW system when the DataSources are replicated from the SAP source systems.

Features
A conversion occurs according to the data type of the field when changing the content of a field from the display format into the SAP-internal format and vice versa, as well as for output using the ABAP WRITE instruction. The same is true for output using a BW system query. If this standard conversion is unsuitable you can override it by specifying a conversion routine in the underlying domains. You do this in the BW system by specifying a conversion routine in InfoObject maintenance in the General Tab Page. See Defining Conversion Routines for more technical details.

3.3.3.10.1 ALPHA Conversion Routine


Use
The ALPHA conversion is used by default in the BW system for characteristics of type character. The ALPHA conversion routine is registered automatically when a characteristic is created. If you do not want to use this routine, you have to remove it manually. The ALPHA conversion routine is used, for example, with account numbers or document numbers.

Features
When converting from an external into an internal format this checks whether the entry in the INPUT field is wholly numerical, whether it consists of digits only, possibly with blank spaces before and/or after. If yes, the sequence of digits is copied to the OUTPUT field, right-aligned, and the space on the left is filled with zeros ('0'). Otherwise the sequence of digits is copied to the output field from left to right and the space to the right remains blank. For conversions from an internal to an external format (function module CONVERSION_EXIT_ALPHA_OUTPUT) the process is reversed. Blank characters on the left-hand side are omitted from the output.

Example
Input and output fields are each 8 characters long. A conversion from the external to the internal format takes place:
1. '1234    ' → '00001234'
2. 'ABCD    ' → 'ABCD    '
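In ABAP, the ALPHA routine is exposed through the standard function modules CONVERSION_EXIT_ALPHA_INPUT and CONVERSION_EXIT_ALPHA_OUTPUT (the latter is mentioned above). The following minimal sketch shows how both directions can be called from a custom program; the report name and field lengths are chosen purely for illustration.

REPORT zdemo_alpha_conversion.    " placeholder name for a demo program

DATA: lv_external TYPE c LENGTH 8 VALUE '1234',
      lv_internal TYPE c LENGTH 8.

* External -> internal: a purely numeric entry is right-aligned and
* padded with leading zeros, for example '1234' becomes '00001234'.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
  EXPORTING
    input  = lv_external
  IMPORTING
    output = lv_internal.

* Internal -> external: the leading zeros are removed again for display.
CALL FUNCTION 'CONVERSION_EXIT_ALPHA_OUTPUT'
  EXPORTING
    input  = lv_internal
  IMPORTING
    output = lv_external.

WRITE: / lv_internal, / lv_external.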

3.3.3.10.2 BUCAT Conversion Routine


Use
The BUCAT conversion routine converts the internal presentation of the budget type (0BUD_CAT) into the external presentation (0BUD_CAT_EX), using the active entries in the master data table for the budget type InfoObject (0BUD_CAT).

Example
Conversion from an external format into an internal format: '1' → 'IM000003'

3.3.3.10.3 EAN11 Conversion Routine


Use
The EAN11 conversion routine is used for European Article Numbers (EAN) and the American Universal Product Code (UPC).

Features
It converts the external presentation, according to settings in transaction W4ES (in the ERP system), into the internal SAP presentation. In the SAP system, left-hand zeros are not saved because, according to EAN standards, they are not required. For example, the EAN '123' is the same as the EAN '00123'. The left-hand zeros are therefore dispensed with. UPC-E code short forms are converted into the long form. For output, the EAN11 conversion routine formats the internal presentation of each EAN type according to the settings in transaction W4ES. This ensures that the internal presentation is displayed with its left-hand zeros, or that UPC codes are converted to the short form.



3.3.3.10.4 GJAHR Conversion Routine


Use
Conversion routine GJAHR is used when entering the business year in order to allow an abbreviated, two-digit entry. A business year has four digits in the internal format.

Features
When converting from an external into an internal format this checks whether the entry in the INPUT field is wholly numerical, whether it consists of digits only, possibly with blank spaces before and/or after.
1. If a two-digit sequence of numbers is entered, these digits are put in the third and fourth places of the OUTPUT field. The left-hand places are filled with 19 or 20 according to the following rule: two-digit sequence < 50: fill from the left with 20; two-digit sequence >= 50: fill from the left with 19.
2. A sequence that does not have two digits is transferred to the output field from left to right. Blank characters are omitted.

Example
Conversion from an external format into an internal format:
1. '12' → '2012'
2. '51' → '1951'
3. '1997' → '1997'
4. '991#' → '991#'
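The century rule for two-digit entries can be written down compactly. The following ABAP fragment is only an illustration of the documented rule, not the source code of the actual conversion routine; the variable names are arbitrary.

DATA: lv_input TYPE string VALUE '51',
      lv_gjahr TYPE n LENGTH 4.

CONDENSE lv_input.                    " remove surrounding blanks

IF strlen( lv_input ) = 2 AND lv_input CO '0123456789'.
  IF lv_input < '50'.
    lv_gjahr = '20' && lv_input.      " '12' -> 2012
  ELSE.
    lv_gjahr = '19' && lv_input.      " '51' -> 1951
  ENDIF.
ELSE.
  lv_gjahr = lv_input.                " '1997' is taken over as 1997
ENDIF.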

3.3.3.10.5 ISOLA Conversion Routine


Use
Conversion routine ISOLA converts the two-digit ISO language abbreviation INPUT into its SAP-internal OUTPUT presentation.

Features
These are assigned using the LAISO and SPRAS fields in table T002. An INPUT that cannot be converted (because it is not defined as T002-LAISO) produces an error message and triggers the UNKNOWN_LANGUAGE exception. Because they are compatible, single-digit entries are supported in that they are transferred to OUTPUT unchanged. They are not checked against table T002.

Note
The difference between upper and lower case letters is irrelevant with two-digit entries; with single-digit entries, however, upper and lower case letters stand for different languages.

3.3.3.10.6 MATN1 Conversion Routine


Use
This conversion routine changes internal material numbers, stored in the system, into the external material numbers displayed in the interface and vice versa, according to settings in transaction OMSL. With regard to the specific details of the conversion, read the help for the appropriate input field of the transaction.

3.3.3.10.7 NUMCV Conversion Routine


Features
When converting from an external into an internal format this checks whether the entry in the INPUT field is wholly numerical, whether it consists of digits only, possibly with blank spaces before and/or after. If yes, the sequence of digits is copied to the OUTPUT field, right-aligned, and the space on the left is filled with zeros ('0'). Otherwise the blank characters are removed from the sequence of digits, the result is transferred, left-aligned, into the output field, and this is then filled from the right with blank characters. Converting from the internal format into the external format (conversion routine CONVERSION_EXIT_NUMCV_OUTPUT) does not produce any changes. The output field is set the same as the input field.

Example
Input and output fields are each 8 characters long. A conversion from the external to the internal format takes place:
1. '1234    ' → '00001234'
2. 'ABCD    ' → 'ABCD    '
3. ' 1234   ' → '00001234'
4. ' AB CD  ' → 'ABCD    '

3.3.3.10.8 PERI5 Conversion Routine


Use
The PERI5 conversion serves to convert a five-figure calendar quarter in an external format (Q.YYYY, for example) into the internal format (YYYYQ). Y stands for the year (here four digits) and Q for the quarter (single digit: 1, 2, 3, or 4). The separator ('.' or '/') has to correspond to the date format in the user settings.

Features
Permitted entries for the date format DD.MM.YYYY are QYY (two digits for year without separator), Q.YY (two digits for year with separator), QYYYY (four digits for year without separator), and Q.YYYY (four digits for year with separator). Permitted entries for the date format YYYY/MM/DD would be YYQ, YY/Q, YYYYQ, YYYY/Q.

Example
Examples where the date format in the user settings is DD.MM.YYYY. A conversion from the external to the internal format takes place:
1. '2.02' → '20022'
2. '31999' → '19993'
3. '4.2001' → '20014'

3.3.3.10.9 PERI6 Conversion Routine


Use
Conversion routine PERI6 is used with six-digit entries for (fiscal year) periods.

Features
The internal format for six-digit periods is YYYYPP (200206, for example, for period 06 of fiscal year 2002). When the external format is converted to the internal format, the routine checks whether the entries in the INPUT parameter comply with the external date format in the user settings (separators, order). The separator ('.' or '/') has to correspond to the date format in the user settings. Different abbreviated entries are possible and these are converted correctly into the internal format.

Example
For the external date format DD.MM.YYYY in the user settings, the following conversion takes place from external to internal formats:
1. '12.1999' → '199912'
2. '1.1999' → '199901'
3. '12.99' → '199912'
4. '1.99' → '199901'
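The following ABAP fragment sketches what such a conversion amounts to for a well-formed entry. It is an illustration only, not the actual routine, and it assumes the user date format DD.MM.YYYY (so '.' is the separator) and a four-digit year.

DATA: lv_extern TYPE string VALUE '1.1999',
      lv_p      TYPE string,
      lv_y      TYPE string,
      lv_period TYPE n LENGTH 2,
      lv_year   TYPE n LENGTH 4,
      lv_intern TYPE c LENGTH 6.

* Abbreviated external entry 'P.YYYY' -> internal format YYYYPP.
SPLIT lv_extern AT '.' INTO lv_p lv_y.
lv_period = lv_p.                             " '1'    -> '01'
lv_year   = lv_y.                             " '1999' -> '1999'
CONCATENATE lv_year lv_period INTO lv_intern. " -> '199901'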

3.3.3.10.10 PERI7 Conversion Routine


Use
Conversion routine PERI7 is used with seven-digit entries for (fiscal year) periods.

Features
The internal format for seven-digit periods is YYYYPPP (2002006, for example, for period 006 of fiscal year 2002). When the external format is converted to the internal format, the routine checks whether the entries in the INPUT parameter comply with the external date format in the user settings (separators, order). The separator ('.' or '/') has to correspond to the date format in the user settings. Different abbreviated entries are possible and these are converted correctly into the internal format.

Example
For the external date format DD.MM.YYYY in the user settings, the following conversion takes place from external to internal formats:
1. '012.1999' → '1999012'
2. '12.1999' → '1999012'
3. '1.1999' → '1999001'
4. '012.99' → '1999012'
5. '12.99' → '1999012'
6. '1.99' → '1999001'

3.3.3.10.11 POSID Conversion Routine


Use
The POSID conversion routine converts the external presentation of the program position (0PROG_PO_EX) into the internal presentation (0PROG_POS), using the active master data table entries for the program position InfoObject (0PROG_POS).

Example
Conversion from an external format into an internal format: P-2411 → P24110000

3.3.3.10.12 PROJ Conversion Routine


Use
The project system in the ERP system offers extensive options for editing the external presentation of projects and WBS elements (project coding, editing mask). These features are included in the ERP conversion routine. This comprehensive logic cannot be mapped in the BW system. For this reason, the characteristic 0PROJECT_EX exists in the attributes of InfoObject 0PROJECT and the external description is stored there. When the external description is entered on the screen, conversion routine 'CONVERSION_EXIT_PROJ_INPUT' reads the corresponding internal description 0PROJECT and uses this for internal processing. If no master data has been loaded into the BW system (master data generated by uploading transaction data), the internal description has to be entered in order to execute a query.

Example
Internal format: 0PROJECT: 'A0001' External format: 0PROJECT_EX: 'A / 0001'

3.3.3.10.13 REQID Conversion Routine


Use
The REQID conversion routine converts the external presentation of the appropriation request (0APPR_REQU) into the internal presentation (0APPR_RE_ED), using the active entries in the master data table for the appropriation request InfoObject (0APPR_RE_ED).

Example
Conversion from an external format into an internal format: P-2411-2 → P24110002

3.3.3.10.14 IDATE Conversion Routine


Use
This conversion routine assigns the appropriate internal date presentation (YYYYMMDD) to an external date (01JAN1994, for example).

Note
Call up the test report RSSCA1T0 to be able to better visualize the functionality of this routine. This test report contains the complete date conversion with external as well as internal presentations.

Example
Conversion from an external format into an internal format: '02JAN1994' → '19940102'

3.3.3.10.15 RSDAT Conversion Routine


Use

Converts a date in an external format into the internal format.

Features
First, the system tries to convert the date in accordance with the user settings (System → User Profile → Own Data → Fixed Values → Date Format). If the system cannot perform the conversion in this way, it automatically tries to identify the format. Valid formats:
DD.MM.YYYY
MM/DD/YYYY
MM-DD-YYYY
YYYY.MM.DD
YYYY/MM/DD
YYYY-MM-DD
For automatic recognition, the year has to be in four-digit format. If the date is specified as 14.4.72, this is not unique and can cause errors.

Note
If the system can sensibly determine a date from the format in the user settings, this conversion is performed. In this example, if the format in the user settings is DD.MM.YYYY, the date is converted to 19720414, since the system conversion recognizes the date.

Example
Conversion from an external format into an internal format: 4/14/1972 → 19720414
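The normalization that RSDAT performs for a recognized pattern can be illustrated with a few lines of ABAP. This is a sketch of the idea only (here for the MM/DD/YYYY pattern from the list above), not the actual routine, and it assumes a syntactically valid entry.

DATA: lv_extern TYPE string VALUE '4/14/1972',
      lv_m      TYPE string,
      lv_d      TYPE string,
      lv_y      TYPE string,
      lv_month  TYPE n LENGTH 2,
      lv_day    TYPE n LENGTH 2,
      lv_intern TYPE c LENGTH 8.

* MM/DD/YYYY -> YYYYMMDD ('4/14/1972' -> '19720414')
SPLIT lv_extern AT '/' INTO lv_m lv_d lv_y.
lv_month = lv_m.                                  " '4'  -> '04'
lv_day   = lv_d.                                  " '14' -> '14'
CONCATENATE lv_y lv_month lv_day INTO lv_intern.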

3.3.3.10.16 SDATE Conversion Routine


Use
This conversion routine assigns the appropriate internal date presentation (YYYYMMDD) to an external date (01.JAN.1994, for example).

Note
Call up the test report RSSCA1T0 to be able to better visualize the functionality of this routine. This test report contains the complete date conversion with external as well as internal presentations.

Example
Date formatting definition in the user master record: DD.MM.YYYY
Conversion from an external format into an internal format: '02.JAN.1994' → '19940102'

3.3.3.10.17 WBSEL Conversion Routine


Use
The project system in the ERP system offers extensive options for editing the external presentation of the project and WBS elements (project coding, editing mask). These features are included in the ERP conversion routine. This comprehensive logic cannot be mapped in the BW system. For this reason, characteristic 0WBS_ELM_EX exists in the attributes of InfoObject 0WBS_ELEMT and the external description is stored there. As the external description is entered on the screen, conversion routine 'CONVERSION_EXIT_WBSEL_INPUT' reads the corresponding internal description 0WBS_ELEMT and uses this for internal processing. If no master data has been loaded into the BW system (master data generated by uploading transaction data), the internal description has to be input in order to execute a query.

Example
Internal format: 0WBS_ELEMT: 'A0001-1' External format: 0WBS_ELM_EX: 'A / 0001-1'

3.3.4 Creating InfoObjects: Key Figure


Procedure
1. In the context menu of your InfoObject catalog for key figures, choose Create InfoObject.


2. Enter a name and a description.
3. Define a reference key figure or a template InfoObject, as required.
Template InfoObject: If you choose a template InfoObject, its properties are copied to your new key figure so that you can edit them.
Reference key figure: With a reference key figure, the original value is filled from the referenced key figure. As a result, however, calculations may be performed differently with this key figure (either with different aggregations or elimination of internal business volume in the query). When you create update rules, the system does not propose a key figure with a reference; it is therefore not possible to create update rules for it.
4. Confirm your entries.
5. Edit tab page Type/Unit.
6. Edit tab page Aggregation.
7. Edit tab page Additional Properties.
8. If you created your key figure with a reference, you see an additional Elimination tab page.
9. Save and activate the key figure you have created.

Note
Key figures have to be activated before they can be used. If you choose Save, the system creates all the changed key figures in the InfoObject catalog and saves the table entries. They cannot be used for analysis and reporting yet though. The older active version is retained at first. The system only creates the corresponding data dictionary objects (data elements, domains, programs) once you have activated the key figure.

3.3.4.1 Tab Page: Type/Unit


Features
Key Figure Type
Specify the Type. Amounts and quantities need unit fields.
Data Type
Specify the Data Type. For the amount, quantity, and number, you can choose between the decimal number and the floating point number, which guarantees more accuracy. For the key figures date and time, you can choose the decimal display to apply to the fields. The following combinations of key figure type and data type are possible:
AMO Amount: CURR (currency field, created as DEC) or FLTP (floating point number with 8 byte precision)
QUA Quantity: QUAN (quantity field, created as DEC) or FLTP (floating point number with 8 byte precision)
NUM Number: DEC (calculation field or amount field with comma and sign) or FLTP (floating point number with 8 byte precision)
INT Integer: INT4 (4 byte integer, whole number with +/- sign)
DAT Date: DATS (date field YYYYMMDD, created as CHAR(8)) or DEC (calculation field or amount field with comma and sign)
TIM Time: TIMS (time field hhmmss, created as CHAR(6)) or DEC (calculation field or amount field with comma and sign)

Currency/Quantity Unit
You can assign a Fixed Currency to the key figure. If this field is filled, the key figure bears this currency throughout BW. You can also assign a variable currency to the key figure. In the Unit/Currency field, you determine which InfoObject bears the key figure unit. For quantity or amount key figures, this field must be filled, or you must enter a fixed currency or amount unit.

3.3.4.2 Tab Page: Aggregation


Features
Aggregation: There are four aggregation options:
- Minimum (MIN): The minimum value of all the values in this column is displayed in the results row.
- Maximum (MAX): The maximum value of all the values in this column is displayed in the results row.
- Summation (SUM): The sum of all the values in this column is displayed in the results row.
- No aggregation (X, if more than one value) (NO2): A value is only shown in the result cell if all values entered into the result cell have the same value. In the case of standard aggregation NO2, the exception aggregation must also be NO2.


Note
Key figures with standard aggregation NO2 can only be used in DataStore objects for direct update in planning mode, and in MultiProviders that contain these DataStore objects.
Exception Aggregation
This field determines how the key figure is aggregated in the Business Explorer in relation to the exception characteristic. This reference characteristic must be unique in the query. In general, this refers to time.

Example
The key figure Number of Employees would, for example, be totaled using the characteristic Cost Center, and not a time characteristic. Here you would determine a time characteristic as an exception characteristic with, for example, the aggregation Last Value.
See also: Examples in the Data Warehousing Workbench
Reference Characteristic for Exception Aggregation
In this field, select the characteristic in relation to which the key figure is to be aggregated with the exception aggregation. Often this is a time characteristic. However, you can use any characteristic you wish.
Flow/Non-Cumulative Value
You can select the key figure as a Cumulative Value. Values for this key figure have to be posted in each time unit for which values for this key figure are to be reported.
Non-Cumulative with Non-Cumulative Change
The key figure is a non-cumulative. You have to enter a key figure that represents the non-cumulative change of the non-cumulative value. There do not have to be values for this key figure in every time unit. For the non-cumulative key figure, values are only stored for selected times (markers). The values for the remaining times are calculated from the value in a marker and the intermediary non-cumulative changes.
Non-Cumulative with Inflow and Outflow
The key figure is a non-cumulative. You have to specify two key figures that represent the inflow and outflow of a non-cumulative value.

Note
For non-cumulatives with non-cumulative change, or inflow and outflow, the two key figures themselves are not allowed to be non-cumulative values, but must represent cumulative values. They must be the same type (for example, amount, quantity) as the non-cumulative value.
See also:
Aggregation Rules for Standard Aggregation and Exception Aggregation
Modeling Non-Cumulatives with Non-Cumulative Key Figures
Aggregation Behavior of Non-Cumulative Key Figures

3.3.4.3 Tab Page: Additional Properties


Features
Business Explorer
Some of the following settings can also be made specifically for a data target, for the InfoObjects contained in that data target. The settings are then only valid in the respective data target. See also Additional Functions in InfoCube Maintenance and Additional Functions in ODS Object Maintenance.
Decimal places
You can determine how many decimal places the field has by default in the Business Explorer. This layout can be overwritten in queries.
Layout
This field describes the scaling with which the field is displayed by default in the Business Explorer. This layout can be overwritten in queries. For more information, see Priority Rule with Formatting Settings.
Miscellaneous
Key figure with maximum accuracy: If you choose this indicator, the OLAP processor calculates internally with packed numbers that have 31 decimal places. This results in greater accuracy and reduced rounding differences. Normally, the OLAP processor calculates with floating point numbers.
Attribute Only: If you choose Attribute Only, the created key figure can only be used as an attribute for another characteristic; it cannot be used as a dedicated key figure in the InfoCube.

3.3.5 Editing InfoObjects


Prerequisites
You have already created an InfoObject.


See also: Creating InfoObjects: Characteristics Creating InfoObjects: Key Figures

Procedure
You are in the Data Warehousing Workbench in the modeling view of the InfoObject tree. Select the InfoObject you want to maintain and choose Change from the context menu. Alternatively, select the InfoObject you want to maintain and choose the Maintain InfoObjects icon from the menu bar. You reach the InfoObject maintenance.
Change Options
It is usually possible to change the description and the text of an InfoObject. However, only limited changes can be made to certain properties if the InfoObject is used in InfoProviders. With key figures, for example, you cannot change the key figure type, the data type, or the aggregation as long as the key figure is still being used in an InfoProvider. Use the Check function to get information about incompatible changes. With characteristics, you can change compounding and data type, but only if no master data exists yet. You cannot delete characteristics that are still in use in an InfoProvider, an InfoSource, in compounding, or as an attribute. It is a good idea, therefore, to execute a where-used list whenever you want to delete a characteristic. If the characteristic is being used, you first have to delete the InfoProvider or delete the InfoObject from the InfoProvider. If errors occur or usages exist, an error log appears automatically.

3.3.6 Additional Functions in InfoObject Maintenance


Features
In addition to functions for creating, changing, and deleting InfoObjects, additional functions are available in InfoObject maintenance.

You can display all the InfoObject settings made on the InfoObject maintenance tab pages in a clear tree structure.

You can compare the following InfoObject versions:
- Active and modified versions of an InfoObject
- Active and Content versions of an InfoObject
- Modified and Content versions of an InfoObject
This allows you to compare all the settings made on the InfoObject maintenance tab pages.

You can select and transport InfoObjects. The system automatically collects all BW objects that are required to ensure a consistent status in the target system.

You can determine which other objects in the BW system use a specific InfoObject. You can determine the effect of changing an InfoObject in a particular way and whether this is permitted at a given time.
Analyzing InfoObjects
Choose Edit → Analyze InfoObject to access the analysis and repair environment. You use the analysis and repair environment to check the consistency of your InfoObjects.

See Analysis and Repair Environment.
Object Browser Using AWB
In the main menu, choose Environment → Object Browser via DWB to call this function and display links between various BW objects. These include the following:
- Structural dependencies, such as the InfoObjects that make up an InfoCube
- Connections between BW objects, such as the data flow from a source system through an InfoCube to a query
You can display and export the listed dependencies in HTML format.
Hyperlinks
In InfoObject maintenance, technical objects (such as data elements or attributes) are often underlined. If this is the case, you can use the context menu of the InfoObject to access a selection of functions, such as branching to the detail view (dictionary), table contents, table type, and so on. Double-click to get to the detail display.
Activation in the Background
In some cases, activating an InfoObject can be quite time consuming. This is the case if you are converting a large amount of data, for example. The activation process terminates after a specified length of time. In this case, you can use a background job to activate InfoObjects. In InfoObject maintenance, choose Characteristic → Activate in Background.
Maintaining Database Memory Parameters for Characteristics
Use this setting to determine how the system handles the table when it creates it in the database. To access this function, choose Extras in the main menu. More information: DB Memory Parameters.

3.3.7 Modeling InfoObjects as InfoProviders



Prerequisites
In InfoObject maintenance, on the Master Data/Texts tab page, you have set the With Master Data indicator.

Context
You can indicate an InfoObject of type characteristic as an InfoProvider if it has attributes. The data is then loaded into the master data tables using the transformation rules. You can also define BEx queries for the master data of the characteristic.

Procedure
1. You are in the InfoObject maintenance of the characteristic you want to use as the InfoProvider. Choose tab page Master Data/Text and assign an InfoArea to the characteristic. The characteristic is subsequently displayed in the InfoProvider tree in the Data Warehousing Workbench. More information: Tab Page: Master Data/Texts 2. Choose the Attributes tab page. With Navigation Attributes InfoProvider, you can select two-level navigation attributes (that is, the navigation attributes of the navigation attributes of the characteristic). These are then available like normal characteristics in the query definition. More information: Tab Page: Attributes 3. Save and activate the characteristic. 4. Create a transformation for the characteristic. 5. Create a data transfer process for the characteristic.

Results
You can load the attributes and texts for the characteristic. The characteristic is available in the BEx Query Designer as an InfoProvider.

3.4 Using Master Data and Master Data-Bearing Characteristics


Use
Master data is data that remains unchanged over a long period of time. Master data contains information that is always needed in the same way. Characteristics can bear master data in the BW system. With master data, you are dealing with attributes, texts or hierarchies. Characteristics that have attributes, texts or hierarchies at their disposal are therefore referred to as master data-bearing characteristics. More information: Master Data Types: Attributes, Texts and Hierarchies

Example
The master data of a cost center contains the name, the person responsible, the relevant hierarchy area, and so on. The master data of a supplier contains the name, address, and bank details.

Procedure
Create a Characteristic with Master Data
You can assign attributes, texts, hierarchies, or a combination of this master data to a characteristic. More information: Creating InfoObjects: Characteristics
Edit Master Data
If a characteristic bears master data, you can edit it in the BW system in the master data maintenance. More information: Creating and Changing Master Data
Delete Master Data
You can delete master data at single record level; you can also delete all master data that exists for a characteristic directly from the master data table. More information: Deleting Master Data at Single Record Level and Deleting Attributes and Texts for a Characteristic
Activate Master Data
You must first activate master data so that it is available for reporting and analysis purposes. More information: Activating Master Data
Reorganize Master Data
You can reorganize the dataset for texts and attributes belonging to a characteristic. This reduces the volume of data and improves performance. More information: Reorganizing Master Data
Simulate Loading of Master Data
You can first simulate the loading of a master data package before you load the data into the BW system. More information: Simulating the Loading of Master Data
Load Master Data Directly into an InfoProvider
When loading master data, you can specify that data is not extracted from the PSA of the DataSource but is requested directly from the data source at DTP runtime.


More information: Loading Master Data to InfoProviders Straight from Source Systems
Use Master Data-Bearing Characteristics as InfoProviders
You can flag a characteristic as an InfoProvider if it has attributes and/or texts. The characteristic is then available as an InfoProvider for analysis and reporting purposes. More information: Modeling InfoObjects as InfoProviders

3.4.1 Master Data Types: Attributes, Texts, and Hierarchies


Use
There are three different types of master data in BW:
1. Attributes
Attributes are InfoObjects that are logically subordinate to a characteristic. You cannot select attributes in the query.

Example
You assign the attributes Person responsible for the cost center and Telephone number of the person responsible for the cost center (characteristics as attributes), as well as Size of the cost center in square meters (key figure as attribute) to a Cost Center.
2. Texts
You can create text descriptions for master data or load text descriptions for master data into BW. Texts are stored in a text table.

Example
In the text table, the Name of the person responsible for the cost center is assigned to the master data Person responsible for the cost center.
3. Hierarchies
A hierarchy serves as a context and structure for a characteristic according to individual sort criteria. For more detailed information, see Hierarchies.

Features
Time-dependent attributes: If the characteristic has at least one time-dependent attribute, a time interval is specified for this attribute. As master data must exist for the period between 01.01.1000 and 12.31.9999 in the database, the gaps are filled automatically (see Maintaining Time-Dependent Master Data).
Time-dependent texts: If you create time-dependent texts, the system always displays the text for the key date in the query.
Time-dependent texts and attributes: If texts and attributes are time-dependent, the time intervals do not have to agree.
Language-dependent texts: In Creating InfoObjects: Characteristics, you specify whether texts are language-dependent (for example, with product names: German - Auto, English - car) or are not language-dependent (for example, customer names). The system only displays texts in the selected language. If texts are language-dependent, you have to load all texts with a language indicator.
Only texts exist: You can also create texts only for a characteristic, without maintaining attributes. When you load texts, the system automatically generates the entries in the SID table.

3.4.2 Creating and Changing Master Data


Use
In the master data maintenance, you can manually change attributes and texts or create new ones. Data is always maintained per characteristic. There are two different master data maintenance sessions:
- Creating or changing master data: You can add new master data records to a characteristic, change individual master data records, or select several master data records and assign global changes to them.
- Deleting master data at single record level: You can delete individual records or select and delete several records.
You cannot run the two sessions at the same time. This means that if you choose the Change function in the master data maintenance screen, the deletion function is deactivated and is only reactivated once you have saved your changes. If you select a master data record in the master data maintenance and choose Delete, the create and change function is deactivated and is only reactivated once you have finished the deletion process by choosing Save.

Prerequisites
If master data is maintained for a master data-bearing characteristic, you can re-create this master data and additional master data records.


Procedure
1. You are in the Modeling functional area of the Data Warehousing Workbench. In the InfoObject tree, choose Maintain Master Data from the context menu for your InfoObject. A selection screen appears for restricting the master data. 2. Use the input help to select the required data. You get to the list header for the selection. The list header is also displayed if no hits have been found for your selection, so that you can enter new master records for particular criteria. 3. Choose Create to add new master records. New records are tagged onto the end of the list.

Caution
If a newly created record already exists in the database but does not appear in the processing list (because you have not selected it in the selection screen) there is no check. Instead, the old records are overwritten. 4. Double-clicking a data record takes you to the individual maintenance. Make the relevant changes in the following change dialog box.

Note
If you change master data in the BW system, you must adjust the respective source system accordingly. Otherwise the changes will be overwritten in the BW system the next time data is uploaded. Master data that you have created in the BW system is retained even after you have uploaded data from the source system.
5. Select multiple records and choose Change to carry out mass changes. A change dialog box appears in which the attributes and texts are offered. Enter the relevant entries that are then transferred to all the selected records.
6. Save your entries. Note the exception for time-dependent master data.

3.4.2.1 Maintaining Time-Dependent Master Data


Use
The maintenance of the master data is more complex with time-dependent master data, as the validity period of a text is not necessarily in concordance with that of an attribute master record.

Example
The InfoObject User master record has the time-dependent attribute Personnel number, and the time-dependent text User name. If the user name changes (after marriage, for example), the personnel number still remains the same.

Prerequisites
In the InfoObject Maintenance, make sure that the relevant InfoObject is flagged as 'time-dependent'.

Procedure
To maintain texts with time-dependent master data, proceed as follows:
1. Select the master data that you want to change, and select one of the three text pushbuttons.
- If you choose Display text, a list appears containing all the texts for this characteristic value. By double-clicking, you can select a text. A dialog box appears with the selected text for the characteristic value.
- If you choose Change text, a list appears containing all the texts for this characteristic value. By double-clicking, you can select a text. A dialog box appears with the selected text for the characteristic value, which you can then edit.
- If you choose Create text, a dialog box appears in which you can enter a new text for the characteristic value.
The texts always refer to the selected characteristic value.
2. Choose Save.

Note
When you select time-dependent master data with attributes, the list displays the texts that are valid until the end of the validity period of the characteristic value. When you change and enter new texts, the lists are updated. Master data must exist between the period of 01.01.1000 and 12.31.9999 in the database. When you create data, gaps are automatically filled. When you change or initially create master data, in some cases you have to adjust the validity periods of the adjoining records accordingly.

Note
If a newly created record already exists in the database but does not appear in the processing list (because you have not selected it in the selection screen) there is no check. Instead, the old records are overwritten.

Note
If you change master data in BW, you must adjust the respective source system accordingly. Otherwise the changes will be overwritten in BW the next time you upload data. Master data that you have created in BW remains even after you have uploaded data from the source system.

3.4.2.2 Time-Dependent Master Data from Different Systems


Use
You have the option of uploading time-dependent characteristic attributes from different systems, even if the time intervals of the attributes are different.

Features
If you load time-dependent characteristic attributes from different source systems, these are written in the master data table, even if the time intervals are different.

Example
From source system 1, load attribute A with the values 10, 20, 30 and 40. From source system 2, load attribute B with the values 15, 25, 35 and 45. The time intervals of the last two values are different.

The system inserts another row into the master data table:
Date from | Date to | Person responsible | Cost center
01.01.1999 | 28.02.1999 | Mrs Steward | Vehicles
01.03.1999 | 31.05.1999 | Mr Major | Accessories
01.06.1999 | 31.08.1999 | Mr Calf | Light bulbs
01.09.1999 | 10.09.1999 | Mrs Smith | Light bulbs
11.09.1999 | 30.09.1999 | Mrs Smith | Pumps

3.4.3 Deleting Master Data at Single Record Level


Use
If you want to selectively delete master data, you have two options:
- In master data maintenance, you have a deletion mode at single record level as well as being able to create and change master data.
- You can use the report RSDMDD_DELETE_BATCH.

Note
You can only delete master data records if no transaction data exists for the master data that you want to delete, the master data is not used as attributes for an InfoObject, and there are no hierarchies for this master data.

Procedure
Selective Deletion in Master Data Maintenance: 1. You are in the Modeling functional area of the Data Warehousing Workbench. In the InfoObject tree, choose Maintain master data from the context menu for your InfoObject. A selection screen appears for restricting the master data. 2. Use the input help to select the required data.


3. You get to the list overview for the selection and have two options:
- In the list, select the master data records to be deleted, choose Delete, and save your entries.
- First select additional master data using Data Selection, highlight the master data records that are to be deleted, and choose Delete. Repeat the selection as often as necessary and choose Save to finish.
The records marked for deletion are first written to the deletion buffer. If you choose Save, the system generates a where-used list for the records marked for deletion. Master data that is no longer being used in other objects is deleted.
Selective Deletion Using the Report
1. Enter report RSDMDD_DELETE_BATCH in the ABAP Editor (transaction SE38) and create a variant for it.
2. Execute the variant.
3. Enter the InfoObject with the master data that you want to delete.
4. In Filter Data, you can specify which data you want to delete.
5. You can specify a deletion mode for the data (parameter P_SMODE). More information: Deleting Attributes and Texts for a Characteristic
6. You can simulate the report before running it (parameter P_SIMUL).
7. You can schedule the report in the background. If you want to clean up your master data at regular intervals, you can also integrate the report into a process chain using process type ABAP program (see the sketch below).
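A variant-based run of this report can also be triggered from a small wrapper program, which is one way to hook it into the ABAP program process type mentioned in step 7. The report name and the parameters P_SMODE and P_SIMUL are taken from the text above; the wrapper name and the variant name are placeholders that you would replace with your own.

REPORT zrun_md_cleanup.    " placeholder name for the wrapper program

* Runs the deletion report with a previously created variant.
* 'ZDEL_COSTCENTER' is a placeholder for your own variant, which holds
* the InfoObject, the filter, the deletion mode (P_SMODE), and the
* simulation flag (P_SIMUL).
SUBMIT rsdmdd_delete_batch
       USING SELECTION-SET 'ZDEL_COSTCENTER'
       AND RETURN.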

3.4.4 Deleting Attributes and Texts for a Characteristic


Prerequisites
In order to delete master data there must be no transaction data in the BW system for the master data in question, it must not be used as an attribute for InfoObjects and there must not be any hierarchies for this master data.

Context
You can delete attributes and texts directly from the master data table. In contrast to deleting at single record level, you can use this function to delete all existing attributes and texts of a characteristic in one action.

Procedure
1. You are in the Modeling functional area of the Data Warehousing Workbench. In the InfoObject tree, choose Delete Master Data from the context menu of your InfoObject.
2. When you delete master data, you can choose whether entries in the SID table of the characteristic are to be retained or deleted. If you delete the SID table entry for a particular characteristic value, the SID value assigned to the characteristic value is lost. If you reload attributes for this characteristic value later, a new SID value has to be created for the characteristic value. This has a negative effect on the runtime required for loading. In some cases, deleting entries from the SID table can also lead to serious data inconsistencies. This occurs if the list of used SID values generated from the where-used list is not comprehensive.
Delete, Retaining SIDs: For the reasons given above, you should choose this option as standard. Even if, for example, you want to make sure that individual attributes of the characteristic that are no longer needed are deleted before you load master data attributes or texts, the option of deleting master data but retaining the entries from the SID table is absolutely adequate.
Delete with SIDs: Note that deleting entries from the SID table is only necessary, or useful, in exceptional cases. Deleting entries from the SID table does make sense if, for example, the composition of the characteristic key is fundamentally changed and you want to swap a large record of characteristic values with a new record with new key values.
3. You can choose whether the texts are deleted.
4. You can select the search mode. The search mode has major effects on the runtime of the where-used list. The following search modes are available:
- O (only one usage per value): Once the where-used list finds a value, the system no longer searches for this value in the other InfoProviders. This search mode is the default setting. This mode is useful if you simply want to delete all values that are no longer used, do not need to carry out any more complex searches, and want to keep the runtime of the where-used list to a minimum.
- P (one usage per value per InfoProvider): The system searches for each value in every InfoProvider. Each time a hit is found in an InfoProvider, the search stops in that InfoProvider. This search mode is useful if you are interested in the usages: you can find out which InfoProviders you have to delete from before you can delete the master data.
- E (one usage is enough): After a value is found once, the search stops. This setting is useful if you only want to completely delete the attributes and texts.
- A (all usages of all values): The system searches for each value in every InfoProvider; each usage is counted. This is a complete where-used list. This search mode has the longest runtime and should only be used if you are interested in all the usages.
5. You can choose to execute the deletion operation in the background.
6. You can simulate master data deletion. Here, the where-used list is executed without deleting master data. Note, however, that the list of usages may not be complete as no locks are set during the simulation.
7. Choose Start. The program checks the entries in the master data table one after the other to see if they are used in other objects. If the master data is used, you can only delete records that are not used, or you can display the where-used list.
8. Once the deletion is complete, a message appears and you can display the log.
The log displays the where-used list and the list of all deleted records.

3.4.5 Activating Master Data


Use
When you update data from an SAP system, attributes are imported in an inactive state. The new attributes must be activated before you can access them in reporting and analysis. More information: Versioning Master Data
Texts are active immediately and are available for reporting and analysis purposes. You do not need to activate them manually.

Prerequisites
Attributes and texts have already been loaded into the BW system.

Procedure
General Procedure
Upon activation, there are two scenarios to choose from:
If the master data is already used in aggregates of InfoCubes, you cannot activate the master data records individually. In this case, proceed as follows:
1. In the menu, choose Tools Hierarchy/Attribute Change... .
2. Execute the change run. More information: System Response to Changes to Master Data and Hierarchies
The system now automatically restructures and activates the master data and its aggregates.

Note
Please note that this process can take several hours if the volume of data is relatively high. You should therefore activate all characteristics affected by changes to their master data together, at regular intervals.
If the master data is not used in aggregates, proceed as follows:
1. In the InfoObject tree, choose Activate Master Data from the context menu of your characteristic. The master data is activated and is immediately available for reporting and analysis.
You are using an SAP HANA database
If you are using an SAP HANA database, you can activate master data when the data is loaded, as aggregates cannot be in use during this process. Proceed as follows:
1. In the DTP for your InfoObject, go to the Update tab and select the Activate Master Data flag.

3.4.5.1 Versioning Master Data


Use
Attributes and hierarchies are available in two versions: an active (A) version and a modified (M) version. Texts are active immediately after they have been loaded; existing texts are overwritten when new texts are loaded.
Attribute versions are managed in the P table and in the Q table. Time-independent attributes are stored in the P table and time-dependent attributes in the Q table. From left to right, the P table contains the key fields of the characteristic (for 0COSTCENTER, for example, CO_AREA and COSTCENTER), the technical key field OBJVERS (versioning), the indicator field CHANGED (versioning), and zero or more attribute fields, which can be display attributes or navigation attributes. The structure of the Q table is identical to that of the P table, with the addition of the 0DATEFROM and 0DATETO fields to map the time dependency.
The OBJVERS and CHANGED fields must always be taken into account in versioning: If you load master data that does not yet exist, an active version of this data is added to the table. If the value of an attribute changes when you reload the data, the active entry is flagged For Deletion (CHANGED = D) and the M/I (modified/insert) version of the new record is added.
Example: You are loading master data for the 0COSTCENTER characteristic. After activation, the P table looks like this:
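For illustration purposes only, such a P table could look like the following (the controlling area, cost center, and manager values are invented examples; 0RES_PERSON stands for an attribute such as the cost center manager):

CO_AREA  COSTCENTER  OBJVERS  CHANGED  0RES_PERSON
1000     4711        A                 MILLER
1000     4712        A                 SMITH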

Later, you load new records. These new records are given the OBJVERS entry M and the CHANGED entry I. The existing records for which new data has been loaded are flagged with the CHANGED entry D for "to be deleted":
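Continuing the purely illustrative example from above (values invented), the P table could then contain the following entries:

CO_AREA  COSTCENTER  OBJVERS  CHANGED  0RES_PERSON
1000     4711        A        D        MILLER
1000     4711        M        I        JONES
1000     4712        A                 SMITH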


Before the new records can be displayed in reporting, you have to start the change run (see System Response Upon Changes to Data: Aggregate). During the change run, the old record is deleted and the new record is set to active. In BW reporting, it is always the active version that is read. InfoSets are an exception to this rule, as the Most Recent Reporting function can be switched on in the InfoSet Builder. In such an InfoSet, the most recent records are displayed in reporting, even if they are not yet active. For more information see Most Recent Reporting for InfoObjects.

3.4.6 Reorganizing Master Data


Use
You can reorganize the dataset for texts and attributes belonging to a basic characteristic. The reorganization process finds and removes redundant data records in the attribute tables and text tables. This reduces the volume of data and improves performance.

Features
For a given basic characteristic, the system first compares the data in the active and modified versions of the time-dependent and time-independent attributes with each other. If there are no differences between the active and the modified version, the redundant data is compressed. In a second step, the system checks the time-dependent texts and attributes to see whether time intervals exist with identical attribute values or text entries. If this is the case, the affected time intervals are combined into larger intervals.

Note
Example 1: The attribute Cost Center Manager (0RES_PERSON) is changed (as the only attribute) for a cost center and is then reset to its original value by a second load process. The name of the cost center manager has therefore not actually changed. In this case, the reorganization deletes the data record of the changed version (M version).
Example 2: For a cost center, the same person is entered as cost center manager for the period 01.06.2001-31.12.2001 and for the period 01.01.2002-31.03.2002. The reorganization combines these two intervals into one, provided that the other time-dependent attributes of the cost center are consistent across both intervals.
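As a simple illustration of the second example (the manager name is invented), the time intervals before and after reorganization might look like this:

Before:  01.06.2001-31.12.2001  Manager MILLER   and   01.01.2002-31.03.2002  Manager MILLER
After:   01.06.2001-31.03.2002  Manager MILLER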

You can carry out the master data reorganization process as a process type in process chain maintenance.

Activities
During master data reorganization for attributes and texts, the system sets locks that prevent access to the basic characteristic currently being processed. These locks correspond to the locks that prevent the loading of master data attributes and texts. This means that it is not possible to load, delete, or change master data for this characteristic during the reorganization. When assigning locks, the system distinguishes between locks for attributes and locks for texts. You can therefore load texts for this characteristic during a reorganization that only affects attributes, and vice versa.

3.4.7 Simulate Loading of Master Data


Use
This function allows you to simulate the loading of a master data package in a data flow with 3.x objects before the data is loaded into BW. This means that you can detect errors in the data load early on and remove problems in advance.

Integration
You call the function by selecting the data request that you want to examine in the Monitor for Extraction Processes and Data Transfer Processes and selecting Simulate Update on the Detail tab page in the context menu of a data package. See: Update Simulation in the Extraction Monitor

Features
In the case of data without errors, the loading simulation provides you with a detailed description of the processes that are run during loading. The left-hand frame structures the various master data types that can be loaded in a tree: either time-dependent and/or time-constant texts, or time-dependent and/or time-constant master data attributes. On the level below the master data types, you see the different database operations that are carried out during loading (for example, modify, insert, delete). By clicking a master data type or a database operation, or by using drag and drop on these objects in the right-hand frame, you obtain a detailed view of the respective uploaded data. In the case of incorrect data, only the master data types, and not the database operations, are displayed in the left-hand frame. The corresponding error log appears in the lower frame.

3.4.8 Master Data Lock


Use
During the master data load procedure, the master data tables concerned are locked so that, for example, data cannot be loaded at the same time from different source systems, which would bring about inconsistencies. In certain cases, for example, if a program termination occurs during the load process, the locks are not automatically removed after the load process. You then have to manually delete the master data locks.

3.4.9 Loading Master Data from Source Systems Directly into InfoProviders
Use
In the data transfer process (DTP) maintenance screen, you can specify that data is not extracted from the PSA of the DataSource but is requested directly from the data source at DTP runtime. The flag Do not extract from PSA; use direct access to data source is displayed for the Full extraction mode, if the DTP source is a DataSource. We recommend that you only use this flag for small data sets, especially small master data sets. Extraction is based on synchronous direct access to the DataSource, where the data is not displayed in a query (as is usually the case with direct access). Instead the data is directly updated to a data target without being saved in the PSA.

Note
This flag is also available for DataSources from operational data provisioning (ODP) source systems. In this case, extraction is performed using the ODP data replication interface, and you can also load mass data directly into an InfoProvider. More information: Loading Data from ODP Source Systems Directly into InfoProviders
Dependencies
If you set this flag, you do not need an InfoPackage to extract data from the source. Note that when extracting from a file source system, the file must be available on the server. Extraction using direct access has the following implications, especially for SAP source systems (SAPI extraction):
The data is extracted synchronously. This places special requirements on main memory, especially in the source system.
The SAPI extractors might behave differently than with asynchronous loading, because they are called via direct access.
No SAPI customer enhancements are processed. Fields that were added to the DataSource using append technology remain empty. The exits of enhancement RSAP0001 (EXIT_SAPLRSAP_001, EXIT_SAPLRSAP_002, and EXIT_SAPLRSAP_004) are not executed.
If errors occur during processing, you need to extract the data again in BW, because the data is not buffered in a PSA. This also means that it is not possible to create a delta.
The filter in the DTP only contains fields that the DataSource allows as selection fields. If a PSA is available, you can use all fields for filtering in the DTP.

3.5 Creating InfoProviders


Use
The Data Warehouse layer and the architected data mart layer are made up of InfoProviders. InfoProviders are BW objects that data is loaded into or that provide views of data. You analyze this data in BEx queries. There are InfoProvider types in which the data is stored physically and InfoProvider types that are only views on the data. In BEx Query Designer, however, they appear as uniform objects. The following graphic shows how InfoProviders are integrated in the data flow:


Prerequisites
Make sure that all the InfoObjects you want to add to the InfoProvider are available in an active version. Create and activate any InfoObjects that do not exist yet. Instead of creating a new InfoProvider, you can install an InfoProvider from the BI Content delivered by SAP. More information: Installing BI Content

Procedure
There are two ways of creating a new InfoProvider in the Data Warehousing Workbench: in graphic modeling or in the InfoProvider tree.
Creating an InfoProvider in Graphic Modeling
You have created a new data flow in the data flow tree or are editing an existing data flow. More information: Creating Data Flows or Data Flow Templates
1. To create an InfoProvider in the data flow maintenance screen, first create a non-persistent object and choose Create in the context menu of the non-persistent object. The InfoProvider processing screen appears.
Creating an InfoProvider in the InfoProvider Tree
1. Select the InfoArea to which you want to assign the new InfoProvider, or create a new InfoArea. In the Data Warehousing Workbench, choose InfoProvider Create InfoArea.
2. In the context menu of the InfoArea, choose the type of InfoProvider that you want to create.
3. Enter a name and a description for the InfoProvider. You can also enter details for the objects on which you are basing the InfoProvider.
4. Choose Create. The InfoProvider processing screen appears.
Further Procedure
1. Copying InfoObjects: On the left side of the screen, various templates are available. These allow you to get a better overview in relation to a particular task. For performance reasons, the default setting is an empty template. You use the pushbuttons to select different objects as templates. The InfoObjects that are to be added to the InfoProvider are divided into the categories characteristic, time characteristic, key figure, and unit. You have to transfer at least one InfoObject from each category. On the right-hand side of the screen, you define the InfoProvider. Use drag and drop to assign the InfoObjects to the dimensions and the Key Figures folder. You can select several InfoObjects at once. You can also transfer entire dimensions using drag and drop. The system assigns navigation attributes automatically. These navigation attributes can be switched on to analyze data in BEx.
Alternatively, you can insert the InfoObjects without selecting a template. This makes sense if you already know exactly which InfoObjects you want to add to the InfoProvider. In the context menu of the folders for dimensions or key figures, choose Insert InfoObjects. In the dialog box, you can enter and add up to 10 InfoObjects, either directly or using the input help. You can then reassign them using drag and drop.
2. For more information about the object-specific settings, see: Creating InfoCubes, Creating DataStore Objects, Creating a Semantically Partitioned Object, Creating HybridProviders, Creating MultiProviders. The method for creating VirtualProviders is the same as for InfoCubes. The method for creating InfoSets is not the same as for other InfoProviders. More information: Creating InfoSets
3. In the context menu of the Key Figures folder and the Data Fields folder, you can insert new hierarchy nodes. You can thus sort the key figures in a hierarchy and get a better overview of large numbers of key figures when defining queries. More information: Defining New Queries
4. If you want to track changes to your InfoProvider over a given period, you can create a manual version. To do this, choose Goto Version Management in the main menu.


More information: Version Management
5. Save or activate the InfoProvider. Only activated InfoProviders can be supplied with data and used for reporting and analysis.

3.5.1 InfoProvider Types


Use
InfoProviders can take very different forms. Every InfoProvider, however, provides data for a query. Most InfoProviders are modeled in the BW system. Some basic objects can be used both on their own and in other InfoProviders:
InfoObjects as InfoProviders
InfoCubes that are: standard, with data persistence in the BW system or in BWA; standard, SAP HANA-optimized; real-time capable; semantically partitioned
DataStore objects that are: standard; write-optimized; DataStore objects for direct update; semantically partitioned
These InfoProviders are loaded with data using staging. There are also InfoProviders that are modeled in the BW system and are made up of other InfoProviders:
InfoSet
MultiProvider
Aggregation level
HybridProvider, with the following types: based on a DataStore object; based on direct access
CompositeProvider
You can also make a BEx query that was defined on InfoProviders available as an InfoProvider again: Query as InfoProvider
Some InfoProviders are modeled in the BW system, but their data is usually not in the BW system:
VirtualProviders that are: DTP-based; with BAPI; with function module; based on an SAP HANA model
Some InfoProviders, TransientProviders, are not modeled in the BW system but are derived from another object:
TransientProvider derived from a classic InfoSet
TransientProvider derived from an analytic index
Creating TransientProviders on SAP HANA Models

3.5.2 Decision Tree for InfoProviders


Concept
This decision aid is intended for users who are just getting to know SAP NetWeaver Business Warehouse (but who are familiar with data warehouses) and for users who are not yet familiar with the current release and the latest InfoProviders.
Decision aid: The number of InfoProvider types is very high. You therefore need a decision aid that enables you to find the InfoProviders that are suited to your needs. The aim here is to build a small, initial (test) scenario, not to build an entire data warehouse. You need to be able to quickly find an InfoProvider so that you have a suitable "container" for your data.
Learning aid: You get an overview of the different InfoProviders and can check whether you have selected the correct InfoProvider for your requirements. The arrangement of InfoProviders is based on the layered scalable architecture (LSA).
Calling the decision tree: To call the right decision tree, you first need to answer the following basic question: What is the main purpose of the new InfoProvider?
InfoProvider for storing reusable data
InfoProvider for defined reporting and analysis requests
InfoProvider for ad hoc reporting and analysis requests

Constraints
The decision tree cannot display all possibilities and properties, as this would make it too complex and less user-friendly (for example, if only transaction data is used, planning is only covered briefly). The decision tree cannot replace a project in which the entire data warehouse is defined.
To improve performance when loading InfoCubes, we recommend using either a BW Accelerator (BWA) or an SAP HANA database. You can then store the InfoCube data in the BW Accelerator (BWA index). However, this InfoCube has several restrictions, for example regarding selective and request-based deletion, compression, and data archiving. If these restrictions are not relevant for you, follow the above recommendation. If they are relevant, use a standard InfoCube.


More information: Restrictions for the InfoCube with Data Persistence in the BWA
If you are using an SAP HANA database, create SAP HANA-optimized InfoCubes.

3.5.2.1 InfoProvider for Storing Reusable Data


Concept

3.5.2.2 InfoProvider for Defined Reporting and Analysis Requests


Concept


3.5.2.3 InfoProvider for Ad Hoc Reporting and Analysis Requests


Concept

3.6 Creating DataStore Objects



Context
A DataStore object serves as a storage location for consolidated and cleansed transaction data or master data at document (atomic) level. This data can be evaluated using a BEx query. A DataStore object contains key fields (for example, document number and item) and data fields, which can contain key figures as well as character fields (for example, order status or customer). The data in a DataStore object can be updated with a delta update into InfoCubes (standard) and/or other DataStore objects or master data tables (attributes or texts) in the same system or across different systems. Unlike multidimensional data storage using InfoCubes, the data in DataStore objects is stored in transparent, flat database tables; the system does not create fact tables or dimension tables. The cumulative update of key figures is supported for DataStore objects, just as it is for InfoCubes, but with DataStore objects it is also possible to overwrite data fields. This is particularly important for document-related structures. If documents are changed in the source system, these changes include both numeric fields, such as the order quantity, and non-numeric fields, such as the ship-to party, status, and delivery date. To reproduce these changes in the DataStore objects in the BW system, you have to overwrite the relevant fields in the DataStore objects and set them to the current value. By overwriting fields and using the resulting change log, you can also make a source delta-enabled. This means that the delta that is updated further to InfoCubes can be calculated from two successive after-images, for example.

Procedure
1. Follow the general procedure for creating an InfoProvider. All DataStore object-specific settings are described below. More information: Creating InfoProviders
2. You can specify that the DataStore object is to be partitioned semantically. This is only possible for standard DataStore objects. More information: Using Semantic Partitioning
3. Copying InfoObjects:

Note
There must be at least one key field. Further restrictions:
You can create a maximum of 16 key fields. If you need more key fields, you can combine fields using a routine for a key field (concatenation); a sketch of such a routine is shown after this procedure. If you are using an SAP HANA database, you can include more than 16 characteristics in the key of a DataStore object for direct update in planning mode.
You can create a maximum of 749 fields.
You can use a maximum of 1962 bytes (minus 44 bytes for the change log).
You cannot include key figures as key fields.
4. Using Settings, you can make various settings and define the properties of the DataStore object. More information: DataStore Object Settings
5. Under Indexes, you can create secondary indexes from the context menu. This improves the load and query performance of the DataStore object. Primary indexes are created automatically by the system. If the values in the index fields uniquely identify each record in the table, select Unique Index in the dialog box. Note that errors can occur during activation if the values are not unique. The description of the indexes is preset by the system. To create a folder for the indexes, choose Continue in the dialog box. You can now drag the required key fields to the index folder. You can create a maximum of 16 secondary indexes. These are also transported automatically. More information: Indexes
6. You can check whether the DataStore object is consistent.
7. Save and activate the DataStore object. When you activate the DataStore object, the system generates an export DataSource. You use this to update the DataStore object data to further InfoProviders.
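The following is a minimal sketch of what the body of such a concatenation routine for a key field might look like. The source field names (DOC_NUMBER, S_ORD_ITEM) and the separator are assumptions chosen for illustration only; the surrounding method frame, including the SOURCE_FIELDS and RESULT parameters, is generated by the system when you create a routine for the key field in the transformation.

* Illustrative routine body only: combines two source fields into one
* key value. The field names are examples and must be replaced by the
* fields of your own DataSource.
  CONCATENATE source_fields-doc_number source_fields-s_ord_item
    INTO result SEPARATED BY '/'.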

Results
You can now create a transformation and a data transfer process for the DataStore object to load data. If you have loaded data into a DataStore object, you can use this DataStore object as the source for another InfoProvider. See: Further Processing of Data in DataStore Objects. You can display and delete the loaded data in DataStore object administration. More information: DataStore Object Administration.

3.6.1 Setting the DataStore Object Type


Use
The following decision tree is intended to help you choose the right DataStore object type for your purposes. The decision nodes are formed by the following functions and properties:
Data provision with load process: Data is loaded using the data transfer process (DTP).
Delta calculation: Delta values are calculated from the loaded and activated data records in the DataStore object. These delta values can be written to InfoCubes, for example, by delta recording.
Single record reporting: Queries are run on DataStore objects that return just a few data records as the result.
Unique data: Only unique data records are loaded and activated for DataStore keys. Existing records cannot be updated.


The graphic shows that a DataStore object for direct update must be used if the data is not provided via the load process. In this case, the data is provided via APIs. See also: APIs of the DataStore Object for Direct Update. If the data is provided via the load process, you need a standard DataStore object or a write-optimized DataStore object, depending on how you want to use it. We make the following recommendations:
Use a standard DataStore object and set the Unique Data Records flag if you want to use the following functions: delta calculation, single record reporting, unique data.
Use a standard DataStore object if you want to use the following functions: delta calculation, single record reporting.
Use a standard DataStore object and set the Create SIDs on Activation and Unique Data Records flags if you want to use the following functions: delta calculation, unique data.
Use a standard DataStore object and set the Create SIDs on Activation flag if you want to use the following function: delta calculation.
Use a write-optimized DataStore object if you want to use the following function: unique data.
Use a write-optimized DataStore object and set the Duplicate Data Records flag if you want to use the following function: single record reporting.
Note the following with regard to choosing the DataStore object type: Performance Tips for DataStore Objects. You can find more information about the DataStore object types under: Standard DataStore Object, Write-Optimized DataStore Objects

3.6.2 DataStore Object Types


Use
Type: Standard
Structure: Consists of three tables: activation queue, table of active data, change log
Data Supply: From data transfer process
SID Generation: Yes
Details: Standard DataStore Object
Example: Scenario for Using Standard DataStore Objects

Type: Write-optimized
Structure: Consists of the table of active data only
Data Supply: From data transfer process
SID Generation: No
Details: Write-Optimized DataStore Objects
Example: Scenario for Using Write-Optimized DataStore Objects

Type: For direct update
Structure: Consists of the table of active data only
Data Supply: From APIs
SID Generation: No
Details: Creating DataStore Objects for Direct Update
Example: Scenario for Using DataStore Objects for Direct Update

3.6.2.1 Standard DataStore Object


Definition
DataStore object consisting of three transparent, flat tables (activation queue, active data, and change log) that permits detailed data storage. When the data is activated in the DataStore object, the delta is determined. This delta is used when the data from the DataStore object is updated in connected InfoProviders. The standard DataStore object is filled with data during the extraction and load process in the BW system.

Structure
In the database, a standard DataStore object is represented by three transparent tables:
Activation queue: Used to save DataStore object data records that are to be updated but that have not yet been activated. After activation, this data is deleted once all requests in the activation queue have been activated. See also: Example of Activating and Updating Data.
Active data: A table containing the active data (A table).
Change log: Contains the change history for the delta update from the DataStore object into other data targets, such as DataStore objects or InfoCubes.
The tables of active data are built according to the DataStore object definition: key fields and data fields are specified when the DataStore object is defined. The activation queue and the change log are almost identical in structure: the activation queue has an SID, the package ID, and the record number as its key; the change log has the request ID, the package ID, and the record number as its key.
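A simplified, purely illustrative example of how the delta is determined during activation (the document number, quantities, and record mode values are invented for illustration and follow the mechanism described in Example of Activating and Updating Data): a first request loads the record order 4711 with quantity 10. On activation, the table of active data contains 4711 / 10 and the change log receives a new image for this record. A second request then loads order 4711 with quantity 15. On activation, the active record is overwritten with 15, and the change log receives a before image 4711 / -10 and an after image 4711 / 15. A connected InfoCube that is updated cumulatively with this delta therefore ends up with the correct value of 15.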

This graphic shows how the various tables of the DataStore object work together during the data load. Data can be loaded efficiently from several source systems at the same time, because a queuing mechanism enables parallel INSERTs. The key allows records to be labeled consistently in the activation queue. The data arrives in the change log from the activation queue and is written to the table of active data upon activation. During activation, the requests are sorted according to their logical keys. This ensures that the data is updated to the table of active data in the correct request sequence. See also: Example of Activating and Updating Data.
DataStore Data and External Applications
The BAPI for reading data, BAPI_ODSO_READ_DATA_UC, enables you to make DataStore data available to external systems.

Caution
In the previous release, BAPI BAPI_ODSO_READ_DATA was used for this. It is now obsolete.

3.6.2.2 Write-Optimized DataStore Object


Definition
DataStore object that only consists of one table of active data. Data is loaded using the data transfer process.

Use
Data that is loaded into write-optimized DataStore objects is available immediately for further processing. You use write-optimized DataStore objects in the following scenarios:
You use a write-optimized DataStore object as a temporary storage area for large sets of data if you are executing complex transformations for this data before it is written to the DataStore object. The data can then be posted to further (smaller) InfoProviders. You only have to create the complex transformations once for all the data.
You use write-optimized DataStore objects as the EDW layer for saving data. Business rules are only applied when the data is posted to other InfoProviders.
The system does not generate SIDs for write-optimized DataStore objects, and you do not need to activate them. This means that you can save and further process data quickly. Reporting is possible on the basis of these DataStore objects. However, we recommend using them as a consolidation layer and updating the data to additional InfoProviders, standard DataStore objects, or InfoCubes.


Structure
Since the write-optimized DataStore object only consists of the table of active data, you do not have to activate the data, as is necessary with the standard DataStore object. This means that you can process data more quickly. The loaded data is not aggregated; the history of the data is therefore retained. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation is retained, however, so that the data can be aggregated later in standard DataStore objects.
Technical Key
The system generates a unique technical key for the write-optimized DataStore object. The standard key fields are not necessary for this type of DataStore object. If standard key fields exist anyway, they are called semantic keys to distinguish them from the technical key. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID), and the Data Record Number field (0RECORD). Only new data records are loaded to this key.
Duplicate Data Records
You can specify that duplicate data records are allowed. In this case, there is no check whether the data is unique. If you allow duplicate data records, the active table of the DataStore object may contain several records with the same key. If you do not set this flag, the uniqueness of the data is checked and the system generates a unique index on the semantic key of the InfoObject. This index has the technical name "KEY".
Since write-optimized DataStore objects do not have a change log, the system does not generate deltas in the sense of before images and after images. When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted.
Delta Consistency Check
A write-optimized DataStore object is often used like a PSA. Data that is loaded into the DataStore object and then retrieved from the Data Warehouse layer should be deleted after a reasonable period of time. If you are using the DataStore object as part of the consistency layer, however, data that has already been updated cannot be deleted. The delta consistency check in DTP delta management prevents a request that has been retrieved with a delta from being deleted. The Delta Consistency Check indicator in the settings of the write-optimized DataStore object is normally deactivated. If you are using the DataStore object as part of the consistency layer, it is advisable to activate the consistency check. When a request is deleted, the system then checks whether the data has already been updated by a delta for this DataStore object. If this is the case, the request cannot be deleted.
Asynchronous Deletion
You delete records from a write-optimized DataStore object asynchronously. If the DataStore object is used in a transformation rule for reading, however, note that the deletions are ignored as long as the records have not been reorganized.
Usage in BEx Queries
For performance reasons, SID values are not created for the characteristics that are loaded. The data is nevertheless available for BEx queries. Compared with standard DataStore objects, however, you can expect slightly worse performance, because the SID values have to be created during reporting. If you want to use write-optimized DataStore objects in BEx queries, we recommend that they have a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results might occur when the data is aggregated in the query.
DataStore Data and External Applications
The BAPI for reading data, BAPI_ODSO_READ_DATA_UC, enables you to make DataStore data available to external systems. In the previous release, BAPI BAPI_ODSO_READ_DATA was used for this. It is now obsolete.

3.6.2.3 Creating DataStore Objects for Direct Update


Context
The DataStore object for direct update differs from the standard DataStore object in terms of how the data is processed. In a standard DataStore object, data is stored in different versions (active, delta, modified), whereas a DataStore object for direct update contains data in a single version. Data is thus stored in precisely the same form in which it was written to the DataStore object for direct update by the application. The DataStore object for direct update consists only of a table for active data and usually receives its data from external systems through APIs for filling or deleting. Loading data by DTP is not supported. DataStore objects for direct update are therefore not displayed in the administration tools or in the monitor. However, you can update the data in DataStore objects for direct update to additional InfoProviders. Since no change log is generated, however, you cannot perform a delta update to the InfoProviders at the end of this process. If you switch a standard DataStore object that already has update rules to Direct Update, the update rules are set to inactive and can no longer be processed.

Note
You can only switch between the DataStore object types Standard and Direct Update, or change the Planning Mode indicator, if the DataStore object does not contain any data yet.
In the context of BW-Integrated Planning, you can use a DataStore object for direct update if the Planning Mode indicator is set. In this case, data can only be written to the DataStore object using BW-Integrated Planning or the Analysis Process Designer. The APIs that were designed for the DataStore object for direct update without planning mode cannot be used in planning mode. This is because only BW-Integrated Planning can ensure that all SID values exist for the characteristic values stored in the DataStore object. BW-Integrated Planning also ensures consistency with the planning model (characteristic relationships, data slices). More information: InfoProviders
You can also use the DataStore object for direct update as a data target for an analysis process. More information: Analysis Process Designer
The DataStore object for direct update is also required by various applications, such as SAP Strategic Enterprise Management (SEM), as well as by other external applications.


Procedure
1. Follow the general procedure for creating a DataStore object. More information: Creating DataStore Objects 2. On the editing screen for the DataStore object, choose Settings Type of DataStore Object Change . The default setting is Standard. Change this to Direct Update. 3. If you want to use the DataStore object for BW-Integrated Planning, set the Planning Mode indicator under Properties. 4. Save the DataStore object, and activate it.

Results
The DataStore object for direct update is available as an InfoProvider in BEx Query Designer and can be used for analysis purposes. If the Planning Mode indicator is set, this DataStore object can also be used as the basis for defining aggregation levels.

Next Steps
APIs of the DataStore Object for Direct Update

3.6.2.3.1 APIs of the DataStore Object for Direct Update


Concept
The DataStore object for direct update consists of a table of active data only. If the Planning Mode indicator is not set, the DataStore object receives its data from external systems using APIs for filling or deleting data. The following APIs exist:
RSDRI_ODSO_INSERT: inserts new data (with keys that are not yet in the system)
RSDRI_ODSO_INSERT_RFC: as above, can be called remotely
RSDRI_ODSO_MODIFY: inserts data with new keys; for data with keys that already exist in the system, the data is changed
RSDRI_ODSO_MODIFY_RFC: as above, can be called remotely
RSDRI_ODSO_UPDATE: changes data with keys that already exist in the system
RSDRI_ODSO_UPDATE_RFC: as above, can be called remotely
RSDRI_ODSO_DELETE_RFC: deletes data
All APIs contain the parameter i_smartmerge. This is relevant when using an SAP HANA database: a merge must be performed for database tables on an SAP HANA database. This merge is normally done automatically, but must be triggered explicitly for BW objects. For the DataStore object for direct update, the parameter i_smartmerge is set by default. A sketch of such an API call is shown below.
The load process is not supported by the BW system for this object type. The advantage of this structure is that the data is available sooner: data is available for analysis and reporting immediately after it has been written.
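The following minimal ABAP sketch indicates how such an API might be called. The DataStore object name ZSALES_DU, its active-table name /BIC/AZSALES_DU00, and all parameter names except i_smartmerge (which is mentioned above) are assumptions for illustration only; check the actual interface of the function module in your system (for example, in transaction SE37) before using it.

* Illustrative sketch only: object name, table name, and all parameter
* names except i_smartmerge are assumptions to be verified in SE37.
DATA lt_data TYPE STANDARD TABLE OF /bic/azsales_du00.

* ... fill lt_data with the records that are to be written ...

CALL FUNCTION 'RSDRI_ODSO_MODIFY'
  EXPORTING
    i_odsobject  = 'ZSALES_DU'   " assumed parameter: name of the DataStore object
    i_smartmerge = 'X'           " documented above: triggers the delta merge on SAP HANA
  TABLES
    i_t_data     = lt_data.      " assumed parameter: records to insert or change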

3.6.3 Scenario for Using Standard DataStore Objects


Use
The diagram below shows how standard DataStore objects are used in this example of updating order and delivery information, and the status tracking of orders, meaning which orders are open, which are partially-delivered, and so on.


There are three main steps in the entire data process:
1. Loading the data into the BW system and storing it in the PSA
The data requested by the BW system is stored initially in the PSA. A PSA is created for each DataSource and each source system. The PSA is the storage location for incoming data in the BW system. Requested data is saved unchanged, as delivered by the source system.
2. Processing and storing the data in DataStore objects
In the second step, the DataStore objects are used on two different levels.
On level one, the data from multiple source systems is stored in DataStore objects. Transformation rules permit you to store the consolidated and cleansed data in the technical format of the BW system. On level one, the data is stored at document level (for example, orders and deliveries) and constitutes the consolidated database for further processing in the BW system. Data analysis is therefore not usually performed on the DataStore objects at this level.
On level two, transfer rules subsequently combine the data from several DataStore objects into a single DataStore object in accordance with business-related criteria. The data is very detailed; for example, information such as the delivery quantity, the delivery delay in days, and the order status is calculated and stored per order item. Level two is used specifically for operational analysis questions, for example, which orders are still open from the last week. Unlike multidimensional analysis, where very large quantities of data are selected, data is displayed and analyzed selectively here.
3. Storing the data in the InfoCube
In the final step, the data is aggregated from the DataStore object on level two into an InfoCube. In this scenario, this means that the InfoCube does not contain the order number, but saves the data at the level of customer, product, and month, for example. Multidimensional analysis is performed on this data using a BEx query. You can still display the detailed document data from the DataStore object whenever you need to, using the report-report interface from a BEx query. In this way, you can analyze the aggregated data from the InfoCube and access the specific level of detail you want in the data.

3.6.4 Scenario for Using Write-Optimized DataStore Objects


Use
A plausible scenario for write-optimized DataStore objects is exclusive saving of new, unique data records, for example in the posting process for documents in retail. In the example below, however, write-optimized DataStore objects are used as the EDW layer for saving data. This is meant to allow data to be quickly saved and forwarded. The data stored temporarily in a write-optimized DataStore object can be used to enhance the data before forwarding or to check the data.

There are three main steps in the entire data process:
1. Loading the data into the BW system and storing it in the PSA
The data requested by the BW system is stored initially in the PSA. A PSA is created for each DataSource and each source system. The PSA is the storage location for incoming data in the BW system. Requested data is saved unchanged, as delivered by the source system.
2. Processing and storing the data in DataStore objects
In the second step, the data is posted at document level to a write-optimized DataStore object ("pass thru"). From here, the data is posted to another write-optimized DataStore object, known as the corporate memory. The data is then distributed from the "pass thru" to three standard DataStore objects, one for each region in this example. The data records are deleted after posting.
3. Storing the data in the InfoCube
In the final step, the data is aggregated from the DataStore objects into various InfoCubes, depending on the purpose of the query, for example for different distribution channels. Modeling the various partitions individually means that they can be transformed, loaded, and deleted flexibly.

3.6.5 Scenario for Using DataStore Objects for Direct Update



Use
The following graphic shows a typical operational scenario for DataStore objects for direct update:

DataStore objects for direct update ensure that the data is available quickly. The data from this kind of DataStore object is accessed transactionally: the data is written to the DataStore object (possibly by several users at the same time) and reread as soon as possible. It is not a replacement for the standard DataStore object, but an additional function that can be used in special application contexts. The DataStore object for direct update consists of a table of active data only. It retrieves its data from external systems via fill or delete APIs; see DataStore Data and External Applications. The load process is not supported by the BW system. The advantage of this structure is that the data is easy to access. Data is available for analysis and reporting immediately after it has been written.

3.6.6 SAP-HANA-Optimized Activation of DataStore Objects


If you are using an SAP HANA database, the activation of standard DataStore objects is optimized for SAP HANA. The change log data is saved in a transparent table. As the concept of inactive data is supported, memory usage is low. SAP HANA-optimized activation is also supported in 3.x data flows and in real-time data acquisition (RDA).

Prerequisites
You are using SAP HANA Support Package Stack 05 with revision 57 or higher.

Related Information
Handling Inactive Data to Optimize the Use of the Main Memory in SAP HANA

3.6.6.1 SAP HANA-Optimized DataStore Object (Obsolete)


Recommendation
We recommend that you no longer use DataStore objects flagged as SAP HANA-optimized. Standard DataStore objects are now automatically optimized for activation in SAP HANA. You can still use existing SAP HANA-optimized DataStore objects, but we recommend reconverting them. To do this, use report RSDRI_RECONVERT_DATASTORE. You can no longer create new DataStore objects with the SAP HANA-Optimized flag.
The SAP HANA-optimized DataStore object is a standard DataStore object that is optimized for use with the SAP HANA database. By using SAP HANA-optimized DataStore objects, you can achieve significant performance gains when activating requests. The change log of the SAP HANA-optimized DataStore object is displayed as a table in the BW system. However, this table does not save any data, which helps to save memory space. When the change log is accessed, the data content is calculated using a calculation view. Data is read from the history table of the temporal table of active data in the SAP HANA database.

Note
If you want to view the change log data in the ABAP Dictionary, a warning appears explaining that the table does not exist in the database. This is due to the optimization: the table in the database is replaced by a calculation view. The table of active data is a temporal table that consists of three components: history table, main table, and delta table. Data activation is started in the BW system and executed in SAP HANA. No data is transferred to the application server during activation.


Caution
In the DataStore object editing screen, you can choose the Unique Data Records property. However, this does not improve system performance when using an SAP HANA-optimized DataStore object. The uniqueness of the data is not checked, meaning that data consistency cannot be guaranteed.
SAP HANA-optimized DataStore objects write a change log entry for every activated record, even if the activation process did not change the active data. This can result in an increased data volume in scenarios where the DataStore object is used to generate a delta (for example, when loading master data). In this case, you can specify which activation procedure is used in the maintenance screen for runtime parameters (either system-wide or for individual DataStore objects). You can access the maintenance transaction for runtime parameters in Customizing under SAP Customizing Implementation Guide, SAP NetWeaver, Business Warehouse, Performance Settings, Maintain Runtime Parameters of DataStore Objects. Alternatively, you can access this transaction in the Administration section of the Data Warehousing Workbench, under Current Settings, DataStore Objects. Then choose SAP HANA Expert Settings, Compress Change Log, to stop unneeded entries from being written to the change log.
Differences to a Normal Standard DataStore Object
The SAP HANA-optimized DataStore object contains the additional field IMO__INT_KEY in the table of active data. This field is required for the SAP HANA optimization and is hidden in queries.
A before/after image is still written during activation, even if no changes are made to the active data.
It cannot be used as a source of update flows in a 3.x data flow. More information: Data Flow in Business Warehouse
The complete history of a request is not saved. Only the start status and end status (relating to an activation) are saved.
Since real-time data acquisition (RDA) usually involves small data volumes for each activation step, SAP HANA optimization does not produce any advantages here. The use of SAP HANA-optimized DataStore objects for RDA is therefore not supported.

Prerequisites
You are using an SAP HANA database.

3.6.7 DataStore Object Settings


Use
When creating and changing a DataStore object, you can make the following settings:
DataStore Object Type
Select the DataStore object type. You can choose between Standard, Direct Update, and Write-Optimized. Standard is the default value; Direct Update is only intended for special cases. You can switch the type as long as there is no data in the DataStore object yet. More information: DataStore Object Types
Type-Specific Settings
The following settings are only available for certain DataStore object types:
For write-optimized DataStore objects: Allow duplicate data records
This flag is only relevant for write-optimized DataStore objects. With these objects, the technical key of the active table always consists of the fields Request, Data Package, and Data Record. The InfoObjects that appear in the Semantic Key folder in the maintenance dialog form the semantic key of the write-optimized DataStore object. If this flag is set, no unique index with the technical name "KEY" is generated for the InfoObjects in the semantic key. This means that there can be more than one record with the same key in the active table of the DataStore object.
For DataStore objects for direct update: Planning mode
The following applies to DataStore objects for direct update with the Planning Mode flag:
Use of characteristics: All characteristics must be contained in the key of the DataStore object. This also applies to characteristics that contain currencies and quantities.
Use of key figures:


All key figures in the data part that are valid for the DataStore object for direct update without planning mode can be used. In addition, key figures with the standard aggregation "no aggregation (X, if more than one value)" (NO2) can be included in the data part. However, BW-Integrated Planning only allows you to change key figures with the standard aggregation SUM or NO2.
For Standard DataStore Objects:
Creating SID Values
With the Generation of SID Values flag, you define whether SIDs are created for new characteristic values in the DataStore object when the data is activated. There are three options:
During Reporting: When data is activated in the DataStore object, no SIDs are determined for the new characteristic values. This reduces the runtime during activation. The SIDs are generated when a query is executed.
During Activation: The SIDs are generated when the data is activated. This reduces the runtime when executing a query.
Never Generate SIDs: SIDs are never generated. This is the best option for all DataStore objects that are likely to be used for further processing in other DataStore objects or InfoCubes. If you create a query on a DataStore object with this property, the query cannot be executed. You can, however, still define InfoSets with this DataStore object and execute queries on them.
Loading Unique Data Records
If you only load unique data records (data records with non-recurring key combinations) into the DataStore object, performance is improved if you set the Unique Data Records flag in DataStore object maintenance. This setting is displayed as the default setting when activating data in administration or in the corresponding process in a process chain. You can overwrite it there if required (for initial data loading, for example). The records are then updated more quickly, because the system no longer needs to check whether a record already exists. You have to be sure that no duplicate records are loaded, because duplicates lead to termination. Check whether the DataStore object should perhaps be write-optimized instead.
Options for 3.x Data Flows: Automatic Further Processing
If you still use a 3.x InfoPackage to load data, you can activate several automatisms to further process the data in the DataStore object. If you use the data transfer process and process chains, which we recommend, you cannot use these automatisms. We recommend always using process chains. More information: Including DataStore Objects in Process Chains
Settings for automatic further processing:
Automatically Setting the Quality Status to OK: If you set this flag, the system automatically sets the quality status to OK after loading data into the DataStore object. You should activate this function. Only deselect this flag if you want to check the data after loading.
Activating the DataStore Object Data Automatically: If you set this flag, data with quality status OK is transferred from the activation queue to the table of active data, and the change log is updated. Activation is carried out by a new job that is started after loading into the DataStore object is complete. If the activation process terminates, there can be no automatic update.
Updating Data from the DataStore Object Automatically: If you set this flag, the DataStore object data is updated automatically. Once the data has been activated, it is posted to the relevant InfoProviders. An initial update is carried out automatically with the first update. If the activation process terminates, there can be no automatic update. The update is carried out by a new job that is started once activation is complete.

Note
Only activate automatic activation and update if you are sure that these processes do not overlap. For information about other settings, see Performance Tips for DataStore Objects and Runtime Parameters of DataStore Objects.

3.6.8 Additional Functions in DataStore Object Maintenance


Use
Documents
You can display, create, or change documents for DataStore objects. See also: Documents
Version Comparison
You can compare changes in DataStore object maintenance for the following DataStore object versions: active and modified version, active and Content version, modified and Content version.
Transport Connection
You can select and transport DataStore objects. The system automatically collects all BW objects that are required to ensure a consistent status in the target system.
Where-Used List
You can find out which other objects in the BW system use a specific DataStore object. You can determine how a given change would affect a DataStore object and whether or not this change is currently permitted.
Business Content
For BW Content DataStore objects, you can jump to the transaction for installing BW Content, copy the DataStore object, or compare it with the customer version. More information: Business Content (Versions)
Structure-Specific Properties of InfoObjects
In the InfoObject's context menu, you can assign specific properties to InfoObjects. These properties are only valid in the DataStore object that you are currently editing.


Most of these settings correspond to the settings that you can make globally for an InfoObject. For characteristics, these are Display, Text Type, and Filter Value Selection upon Query Execution. See the corresponding sections under Tab Page: Business Explorer. You can also specify constants for characteristics. By assigning a constant to a characteristic, you assign it a fixed value. The characteristic is then available in the database (for validation, for example) but is no longer displayed in the query (no aggregation or drilldown is possible for this characteristic). Assigning constants is particularly useful for compounded characteristics.

Example
Example 1: The Storage Location characteristic is compounded with the Plant characteristic. If only one plant is ever run in the application, you can assign a constant to the plant. Validation against the storage location master table is then performed using this value for the plant. In the query, however, only the storage location appears as a characteristic.
Example 2: For an InfoProvider, you specify that only the constant 2005 appears for the year. In a query based on a MultiProvider that contains this InfoProvider, the InfoProvider is ignored if the selection is for 2004. This improves query performance, as the system knows that it does not have to search for records.
Exception: If the constant SPACE (type CHAR) or 00..0 (type NUMC) is assigned to the characteristic, specify the character # in the first position.
For key figures, the settings Decimal Places and Display are available. See the corresponding sections under Tab Page: Additional Properties.
Info Functions
Various information functions are offered for the status of the DataStore object:
Log display for the save, activation, and deletion runs of the DataStore object
DataStore object status in the ABAP Dictionary and in the database
Object catalog entry
Performance Settings
To set the DB memory parameters, choose Extras DB Performance. If you are using DB2 UDB for UNIX, Windows, and Linux as your database platform, you can also use clustering.

3.6.8.1 InfoProvider Properties


Concept
In the transaction for modeling InfoProviders, you can make the following settings by choosing Extras → InfoProvider Properties → Change in the main menu:
Query/Cache tab page: Here, you can make various settings that you can also call via the Query Monitor. Here, however, you make the settings per InfoProvider and not per query as in the Query Monitor. More information: Query Properties
Roll up tab page: You only see this tab page when working with InfoCubes. You can make settings for the aggregates here.
Load tab page: You only see this tab page when working with DataStore objects. You can make settings here that you can also make in the Settings folder in the DataStore object maintenance transaction.
DB Performance tab page: You only see this tab page when working with InfoCubes. You can make the following settings here:
- Delete index before each data load and rebuild again afterwards: This deletes the index before each data load and rebuilds it again afterwards.
- Delete index before each delta load and rebuild again afterwards: This also deletes the index before each delta load (that is, smaller amounts of data) and rebuilds it again afterwards. With smaller amounts of data, rebuilding is often not worthwhile, as the effort involved is greater than the gain. However, for delta loads with a large quantity of data (more than a million records), it does make sense to delete the indexes and rebuild them completely after loading.
- Recalculate DB statistics after each load: By recalculating the statistics after each load, you can ensure better performance with data analysis.
- Recalculate statistics after delta upload too: This also recalculates the statistics after each delta load (that is, smaller amounts of data). With smaller amounts of data, recalculating is often not worthwhile, as the effort involved is greater than the gain.
- Percentage of InfoCube data for rebuilding the statistics: Here, you can define the percentage of data in the InfoCube to use when rebuilding the statistics. Rebuilding the statistics then takes less time than if all data is used.
- Zero elimination in the summarization module: Entries in the fact table where all key figures are equal to the initial value are deleted when the fact table is compressed. Whether or not zero elimination makes sense depends on your scenario. If you want to see characteristic combinations with key figures that are zero in reporting and analysis, you should not carry out zero elimination. Elimination of zero values makes the runtime of the compression longer.
- No update of the non-cumulative marker: This stipulates that the marker is not updated when changes to non-cumulatives are posted to an InfoCube with non-cumulative key figures. You can use this option to load non-cumulative changes from the past to an InfoCube after initialization with the current non-cumulative has already occurred. You then need to perform a compression with this option before using this data in the analysis.
- Delete index during rollup: Normally, the index of the F table for the aggregate is deleted and rebuilt during rollup. If you have deactivated this automatic setting, you can reactivate it here for this specific InfoProvider.

3.6.8.2 DB Memory Parameters


Use
You can maintain database storage parameters for PSA tables, master data tables, InfoCube fact and dimension tables, as well as DataStore object tables and error stack tables of the data transfer process (DTP). Use this setting to determine how the system handles the table when it creates it in the database: Use Data Type to set in which physical database area (tablespace) the system is to create the table. Each data type (master data, transaction data, organization and Customizing data, and customer data) has its own physical database area, in which all tables assigned to this data type are stored. If selected correctly, your table is automatically assigned to the correct area when it is created in the database.

Note
We recommend that you use separate tablespaces for very large tables. You can find information about creating a new data type in SAP Note 0046272 (Introduce new data type in technical settings).
Via Size Category, you can set the amount of space the table is expected to need in the database. Five categories are available in the input help. You can also see here how many data records correspond to each individual category. When creating the table, the system reserves an initial storage space in the database. If the table later requires more storage space, it obtains it as set out in the size category. Correctly setting the size category prevents there being too many small extents (save areas) for a table. It also prevents the wastage of storage space when creating extents that are too large. You can use the maintenance for storage parameters to better manage databases that support this concept. You can find additional information about the data type and size category parameters in the ABAP Dictionary table documentation, under Technical Settings.
PSA Table
For PSA tables, you access the database storage parameter maintenance by choosing Goto → Technical Attributes in DataSource maintenance. In dataflow 3.x, you access this setting by choosing Extras → Maintain DB-Storage Parameters in the menu of the transfer rule maintenance. You can also assign storage parameters for a PSA table that already exists in the system. However, this has no effect on the existing table. If the system generates a new PSA version (a new PSA table) due to changes to the DataSource, this is created in the data area for the current storage parameters.
InfoObject Tables
For InfoObject tables, you can find the maintenance of database storage parameters under Extras → Maintain DB Storage Parameters in the InfoObject maintenance menu.
InfoCube/Aggregate Fact and Dimension Tables
For fact and dimension tables, you can find the maintenance of database storage parameters under Extras → DB Performance → Maintain DB Storage Parameters in the InfoCube maintenance menu.
DataStore Object Tables (Activation Queue and Table for Active Data)
For tables of the DataStore object, you can find the maintenance of database storage parameters under Extras → DB Performance → Maintain DB Storage Parameters in the DataStore object maintenance menu.
DTP Error Stack Tables
You can find the maintenance transaction for the database storage parameters for error stack tables by choosing Extras → Settings for Error Stack in the DTP maintenance.

3.6.8.3 Partitioning
Use
You can use partitioning to divide the entire dataset of an InfoProvider into several smaller units that are independent and redundancy-free. This separation can improve performance during the data analysis or when deleting data from the InfoProvider.

Prerequisites
You can only implement the partitioning process by using one of the two partitioning criteria, Calendar Month (0CALMONTH) or Fiscal Year/Period (0FISCPER). The InfoProvider must contain at least one of the two InfoObjects.

Note
If you want to partition an InfoProvider using characteristic Fiscal Year/Period (0FISCPER), you first need to set characteristic Fiscal Year Variant (0FISCVARNT) to constant. More information: Partitioning InfoProviders Using Characteristic 0FISCPER

Integration
Partitioning is supported by the following databases:
- SAP HANA Database
- IBM DB2 for z/OS
- IBM DB2 for i
- Oracle
- Microsoft SQL Server
To improve the performance of DB2 for Linux, UNIX and Windows, you can use clustering. If you are using IBM DB2 for i5/OS as your DB platform, you need a database version of at least V5R3M0 and need to have the component DB2 Multi Systems installed. Note that BW systems with active partitioning in this system constellation can only be copied to other IBM iSeries systems with the SAVLIB/RSTLIB process (homogeneous system copy). If you are using this database, you can also partition PSA tables. You first need to activate this function using RSADMIN parameter DB4_PSA_PARTITIONING = 'X'. For more information, see SAP Note 815186.

Features
When you activate the InfoProvider, the table is saved to the database with a number of partitions that corresponds to the value area. You can define the value area yourself.

Example
You choose partitioning criterion 0CALMONTH and define the value area from 01.1998 to 12.2003:
6 years * 12 months + 2 = 74 partitions are created (two partitions for values that lie outside the area, that is < 01.1998 or > 12.2003).
You can also define the maximum number of partitions that can be created for this table on the database.

Example
You choose partitioning criterion 0CALMONTH and define the value area from 01.1998 to 12.2003. You set the maximum number of partitions to 30.
The value area results in 6 years * 12 calendar months + 2 marginal partitions (up to 01.1998, from 12.2003) = 74 single values. The system therefore creates a partition for every three months (meaning that a partition corresponds to exactly one quarter), so that 6 years * 4 partitions/year + 2 marginal partitions = 26 partitions are created in the database.
This only improves performance if the InfoProvider's time characteristics are consistent, however. In the case of partitioning with 0CALMONTH, this means that all values of a data record's 0CAL* characteristics have to match each other.
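The arithmetic in these two examples can be reproduced with a short calculation. The following is a minimal Python sketch of that arithmetic only (it is not an SAP API); the function name and the quarter-grouping heuristic are assumptions made for illustration.

```python
# Minimal sketch of the partition arithmetic above (not an SAP API).
# A 0CALMONTH value area always gets two extra marginal partitions for values
# below and above the area; a maximum number of partitions forces several
# months to be grouped into one partition (e.g. one quarter per partition).

def calmonth_partitions(from_year, from_month, to_year, to_month, max_partitions=None):
    """Return (single_values, partitions_created) for a 0CALMONTH value area."""
    months = (to_year - from_year) * 12 + (to_month - from_month) + 1
    single_values = months + 2                                   # + 2 marginal partitions
    if max_partitions is None or single_values <= max_partitions:
        return single_values, single_values
    months_per_partition = -(-months // (max_partitions - 2))    # ceiling division
    partitions = -(-months // months_per_partition) + 2
    return single_values, partitions

print(calmonth_partitions(1998, 1, 2003, 12))                      # (74, 74)
print(calmonth_partitions(1998, 1, 2003, 12, max_partitions=30))   # (74, 26)
```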

Example
In the following example, only record 1 is consistent, while records 2 and 3 are inconsistent:

Note that the value area can only be changed if the InfoProvider does not contain any data. If data has already been loaded to the InfoProvider, you need to repartition it. More information: Repartitioning

Recommendation
We recommend "Partition on demand". This means that you should not make partitions too small or too large. If you make the time selection too small, the partitions will be too large. If the time is too far in the future, the number of partitions will be too large. We therefore recommend creating a partition for one year for example and repartitioning the InfoProvider once this has expired.

Activities
In InfoProvider maintenance, choose Extras → DB Performance → Partitioning and define the value area. If necessary, set the maximum number of partitions.

3.6.8.3.1 Partitioning InfoProviders Using Characteristic 0FISCPER


Use
You can partition InfoProviders using two characteristics: calendar month (0CALMONTH) and fiscal year/period (0FISCPER). The special feature of the fiscal year/period characteristic (0FISCPER) being compounded with the fiscal year variant (0FISCVARNT) means that you have to use a special procedure when partitioning an InfoProvider using 0FISCPER.

Prerequisites
During partitioning using 0FISCPER, values are calculated within the partitioning interval specified in InfoProvider maintenance. For this to happen, the value for 0FISCVARNT must be known at the time of partitioning, meaning that it must be set to Constant.

Procedure
For InfoCubes:
1. You are in InfoCube maintenance. Set the value for characteristic 0FISCVARNT to constant. Carry out the following steps:
   1. In the context menu of the dimension folder, choose Object-Specific InfoObject Properties.
   2. Enter a constant for characteristic 0FISCVARNT. Choose Continue.
2. Choose Extras → DB Performance → Partitioning. The Define Partitioning Condition dialog box appears. Use Selection to select characteristic 0FISCPER. Choose Continue.
3. The Value Range (Partitioning Condition) dialog box appears. Enter the required data. More information: Partitioning.
For DataStore objects:
1. You are in DataStore object maintenance. Set the value for characteristic 0FISCVARNT to constant. Carry out the following steps:
   1. Select either the Key Fields folder or the Data Fields folder (depending on where characteristics 0FISCPER and 0FISCVARNT are located).
   2. Choose Provider-Specific Properties from the context menu of characteristic 0FISCVARNT.
   3. Enter a constant for characteristic 0FISCVARNT. Choose Continue.
2. Choose Extras → DB Performance → Partitioning. The Define Partitioning Condition dialog box appears. Use Selection to select characteristic 0FISCPER. Choose Continue.
3. The Value Range (Partitioning Condition) dialog box appears. Enter the required data. More information: Partitioning.

3.6.8.4 Repartitioning
Use
Repartitioning can be useful if you have already loaded data into your DataStore object, and:
- You did not partition the DataStore object when you created it.
- You loaded more data into your DataStore object than you had planned when you partitioned it.
- The period of time you chose for partitioning is too short.
- Some partitions contain little or no data due to data archiving over a period of time.

Integration
All database providers support this function except DB2 for Linux, UNIX, Windows and MAXDB. For DB2 for Linux, UNIX, and Windows, you can use clustering or reclustering instead. More information: Multidimensional Clustering.

Features
Merging and Adding Partitions
When you merge and add partitions, DataStore object partitions are either merged at the bottom end of the partitioning schema (merge) or added at the top (split). Ideally, this operation is only executed in the database catalog. This is the case if all the partitions that you want to merge are empty and no data has been loaded outside of the time period you initially defined. The runtime of the action is then only a few minutes. If there is still data in the partitions you want to merge, or if data has been loaded beyond the time period you initially defined, the system saves the data in a shadow table and then copies it back to the original table. The runtime depends on the amount of data to be copied.
Complete Partitioning
When complete partitioning is performed, the tables of the DataStore object are completely converted. The system creates shadow tables with the new partitioning schema and copies all of the data from the original tables into the shadow tables. As soon as the data is copied, the system creates indexes and the original table replaces the shadow table. After the system has successfully completed the partitioning request, the tables exist both in the original state (shadow table) and in the modified state with the new partitioning schema (original table). You can manually delete the shadow tables after repartitioning has been successfully completed in order to free up memory. Shadow tables have the namespace /BIC/4F<Name of DataStore object> and /BIC/4E<Name of DataStore object>.
Monitoring
You can monitor the repartitioning requests using a monitor. The monitor shows you the current status of the processing steps. When you double-click, the relevant logs appear. The following functions are available in the context menu of the request or editing step:
- Delete: You delete the repartitioning request. It no longer appears in the monitor and you cannot restart it. All tables remain in their current state. The DataStore object may be inconsistent.
- Reset Request: You reset the repartitioning request. This deletes all the locks for the DataStore object and all its shadow tables.
- Reset Step: You reset the canceled editing steps so that they are returned to their original state.
- Restart: You restart the repartitioning request in the background. You cannot restart a repartitioning request if it still has status Active (yellow) in the monitor. Check whether the request is still active (transaction SM37) and, if necessary, reset the current editing step before you restart.
Background Information About Copying Data
By default, the system copies with a maximum of six processes in parallel. The main process splits off dialog processes in the background. These dialog processes each copy small data packages and finish with a COMMIT. If a timeout causes one of these dialog processes to terminate, you can restart the affected copy operations after you have altered the timeout time. To do this, choose Restart Repartitioning Request.
Background Information About Error Handling
Even though you can restart the individual editing steps, you should not reset the repartitioning request or the individual editing steps without first performing an error analysis. During repartitioning, the relevant DataStore object is locked against modifying operations to avoid data inconsistencies. In the initial dialog, you can manually unlock objects. This option is only intended for cases where errors have occurred and should only be used after the logs and datasets have been analyzed.
Transport
Since the metadata in the target system is adjusted without the DB tables being converted when you transport DataStore objects, repartitioned DataStore objects may only be transported once the repartitioning has already taken place in the target system. Otherwise, inconsistencies that can only be corrected manually occur in the target system.

Activities
The repartitioning function can be accessed in the Data Warehousing Workbench under Administration. You can schedule repartitioning in the background by choosing Initialize. You can monitor the repartitioning requests by choosing Monitor.

3.6.8.5 Multidimensional Clustering


Use
Multidimensional clustering (MDC) allows you to save the data records in the active table of a DataStore object in sorted order. Data records with the same key field values are saved in the same extents (related database storage units). This prevents data records with the same key values from being spread over a large memory area and thereby reduces the number of extents to be read when accessing tables. Multidimensional clustering therefore greatly improves the performance of queries on the active table.

Prerequisites
Currently, the function is only supported by the database platform IBM DB2 Universal Database for UNIX and Windows.

Features
Multidimensional clustering organizes the data records of the active table of a DataStore object according to one or more fields of your choice. The selected fields are also marked as MDC dimensions. Only data records that have the same values in the MDC dimensions are saved in an extent. In the context of MDC, an extent is called a block. The system creates block indexes from within the database for the selected fields. Block indexes link to extents instead of data record numbers and are therefore much smaller than row-based secondary indexes. They save memory space and the system can search through them more quickly. This accelerates table requests that are restricted to these fields. You can select the key fields of an active table of a DataStore object as an MDC dimension. Multidimensional clustering was introduced in Release SAP NetWeaver 7.0 and can be set up separately for each DataStore object. For procedures, see Definition of Clustering.

3.6.8.5.1 Definition of Clustering


Prerequisites
You can only change clustering if the DataStore object does not contain any data. You can change the clustering of DataStore objects that are already filled using the Reclustering function. For more information, see Reclustering.

Features
In the DataStore object maintenance, choose Extras → DB Performance → Clustering. You can select MDC dimensions for the DataStore object on the Multidimensional Clustering screen. Select one or more InfoObjects as MDC dimensions and assign them consecutive sequence numbers, beginning with 1. The sequence number shows whether a field has been selected as an MDC dimension and determines the order of the MDC dimensions in the combined block index.

Note
In addition to block indexes for the different MDC dimensions within the database, the system creates the combined block index. The combined block index contains the fields of all the MDC dimensions. The order of the MDC dimensions can slightly affect the performance of table queries that are restricted to all MDC dimensions and that are used to access the combined block index.
When selecting, proceed as follows:
- Select InfoObjects that you use to restrict your queries. For example, you can use a time characteristic as an MDC dimension to restrict your queries.
- Select InfoObjects with a low cardinality, for example, the time characteristic 0CALMONTH instead of 0CALDAY.
- You cannot select more than three InfoObjects.


Assign sequence numbers using the following criteria:
- Sort the InfoObjects according to how often they occur in queries (assign the lowest sequence number to the InfoObject that occurs most often in queries).
- Sort the InfoObjects according to selectivity (assign the lowest sequence number to the InfoObject with the most different data records).
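As an illustration of the selection and ordering rules above, the following minimal Python sketch sorts candidate MDC dimensions by query frequency and then by selectivity. The candidate InfoObjects and their statistics are purely assumed example values, not data read from any SAP system.

```python
# Minimal sketch (not an SAP API): order candidate MDC dimensions by the criteria
# above - most frequently used in queries first, then highest selectivity first.
candidates = [
    {"infoobject": "0PLANT",    "query_share": 0.60, "distinct_values": 40},
    {"infoobject": "0REGION",   "query_share": 0.60, "distinct_values": 12},
    {"infoobject": "0CALMONTH", "query_share": 0.90, "distinct_values": 72},
]

ordered = sorted(candidates, key=lambda c: (-c["query_share"], -c["distinct_values"]))
for sequence_number, candidate in enumerate(ordered[:3], start=1):  # at most three
    print(sequence_number, candidate["infoobject"])
# 1 0CALMONTH
# 2 0PLANT
# 3 0REGION
```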

Caution
At least one block is created for each value combination in the MDC dimensions. This memory area is reserved regardless of the number of data records that have the same value combination in the MDC dimensions. If there are not enough data records with the same value combination to completely fill a block, the free memory remains unused; data records with a different value combination in the MDC dimensions cannot be written to the block. If only a few data records exist for each value combination of the selected MDC dimensions in the DataStore object, most blocks have unused free memory. This means that the active table uses an unnecessarily large amount of memory space. The performance of table queries also deteriorates, as many pages with little information must be read.

Example
The size of a block depends on the PAGESIZE and the EXTENTSIZE of the tablespace. The standard PAGESIZE of the DataStore tablespace with the assigned data class DODS is 16K. Up to Release SAP BW 3.5, the default EXTENTSIZE value was 16. As of Release SAP NetWeaver 7.0, the new default EXTENTSIZE value is 2. With an EXTENTSIZE of 2 and a PAGESIZE of 16K, a memory area of 2 x 16K = 32K is reserved for each block. The width of a data record depends on the width and number of key fields and data fields in the DataStore object. If, for example, a DataStore object has 10 key fields, each with 10 bytes, and 30 data fields with an average of 9 bytes each, a data record needs 10 x 10 bytes + 30 x 9 bytes = 370 bytes. In a 32K block, 32768 bytes / 370 bytes = 88 data records can be written. At least 80 data records should therefore exist for each value combination in the MDC dimensions to allow optimal use of the memory space in the active table.
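The figures in this example can be checked with a short calculation. The following minimal Python sketch just reproduces that arithmetic; the page size, extent size and record layout are the assumption values from the example above, not values read from a database.

```python
# Minimal sketch of the block-size arithmetic above (not an SAP or DB2 API).
PAGESIZE_BYTES = 16 * 1024      # 16K pages of the DODS tablespace (example value)
EXTENTSIZE = 2                  # default as of SAP NetWeaver 7.0 (example value)

block_bytes = EXTENTSIZE * PAGESIZE_BYTES           # 32K reserved per block
record_bytes = 10 * 10 + 30 * 9                     # 10 key fields + 30 data fields = 370 bytes
records_per_block = block_bytes // record_bytes     # 88 records fit into one block

print(block_bytes, record_bytes, records_per_block)  # 32768 370 88
```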

3.6.9 Performance Tips for DataStore Objects


Use
To achieve good activation performance for DataStore objects, note the following points:
Creating SID Values
Generating SID values takes a long time and can be avoided in the following cases:
- Do not set the Generate SID Values flag if you only use the DataStore object as a data store. If you do set this flag, SIDs are created for all new characteristic values.
- If you are using line items (document number or time stamp, for example) as characteristics in the DataStore object, set the flag in characteristic maintenance to show that they are attribute only.
SID values can be generated in parallel, irrespective of the activation settings. More information: Runtime Parameters of DataStore Objects
Partitioning
You can use partitioning to divide the entire dataset of an InfoProvider into several smaller units that are independent and redundancy-free. This separation can improve performance during data analysis or when deleting data from the InfoProvider. More information: Partitioning
Clustering on the Table for Active Data (A Table)
Clustering at database level makes it possible to access DataStore objects much more quickly. Select the characteristic that you want to use to access data as the clustering criterion. For more information, see Multidimensional Clustering.
Indexing
Selection criteria should be used for queries on DataStore objects. If the key fields are specified, the existing primary index is used. The most frequently accessed characteristic should be positioned on the left. If you have not specified the key fields completely in the selection criteria (you can check this in the SQL trace), you can improve the runtime of the query by creating additional indexes. You can create these secondary indexes in DataStore object maintenance. Note, however, that too many secondary indexes impair the load performance.
Activation Times for the Standard DataStore Object
The following table shows how much activation runtime can be saved. The saving always refers to a standard DataStore object for which SIDs were generated during activation.
Flag settings compared with the reference DataStore object, and the resulting saving in runtime:
- Generate SIDs on Activation deactivated: approx. 25%
- Unique Data Records activated: approx. 35%
- Generate SIDs on Activation deactivated and Unique Data Records activated: approx. 45%

The saving in runtime is influenced primarily by the SID determination. Other factors that have a favorable influence on the runtime are a low number of characteristics and a low number of disjoint characteristic attributes. The specified percentages are based on experience and can differ depending on the system configuration.
If you want to use the DataStore object as a consolidation level, we recommend that you use the write-optimized DataStore object. This makes it possible to provide data in the Data Warehouse layer 2 to 2.5 times faster than with a standard DataStore object with unique data records and without SID generation. More information: Scenario for Using Write-Optimized DataStore Objects
Further Activation Settings
If you are sure that the active table does not contain any records with the same key, choose the setting Only Process New, Unique Data Records. This overrides the settings on the DataStore object and the system does not need to read the records again. More information: Activation of Data in DataStore Objects
MPP-Optimized Activation
For MPP databases, the system uses performance-optimized activation of data in the DataStore object. This means that the activation runtime can be significantly reduced for standard DataStore objects. More information: Activation for MPP Architectures

3.6.10 Integration in the Data Flow


Use
Update
Transformation rules define the rules that are used to write data to a DataStore object. They are very similar to the transformation rules for InfoCubes. The main difference is the behavior of data fields in the update. When you update requests into a DataStore object, you have an overwrite option as well as an addition option. More information: Aggregation Type
The delta process that is defined for the DataSource also influences the update. When loading files, the user must select a suitable delta process so that the correct transformation type is used. More information: Delta Process
Unit fields and currency fields operate just like normal key figures, meaning that they must be explicitly filled using a rule.
Scheduling and Monitoring
The processes for scheduling the data transfer process for updating data into InfoCubes and DataStore objects are identical. It is also possible to schedule the activation of DataStore object data and the update from the DataStore object into the related InfoCubes or DataStore objects. The individual steps, including processing of the DataStore object, are logged in the monitor. There is a separate detailed monitor for executed request operations (such as activation or rollback).
Loadable DataSources
In full-update mode, each transaction data DataSource contained in a DataStore object can be updated. In delta-update mode, only DataSources that are flagged as delta-enabled can be updated.

3.7 Using Semantic Partitioning


Context
A semantically partitioned object is an InfoProvider that consists of several InfoCubes or DataStore objects with the same structure. Semantic partitioning is a property of the InfoProvider. You specify this property when creating the InfoProvider. Semantic partitioning divides the InfoProvider into several small, equally sized units (partitions).
A semantically partitioned object offers the following advantages compared to standard InfoCubes or standard DataStore objects:
- Better performance with mass data: The larger the data volume, the longer the runtimes required for standard DataStore objects and standard InfoCubes. Semantic partitioning means that the data sets are distributed over several data containers. This means that runtimes are kept short even if the data volume is large.
- Close data connection: Error handling is better. If a request for one region ends with an error, for example, the entire InfoProvider is normally unavailable for analysis and reporting. With a semantically partitioned object, the separation of the regions into different partitions means that only the region that caused the error is unavailable for data analysis.
- Working with different time zones: EDW scenarios usually involve several time zones. With a semantically partitioned object, the time zones can be separated by the partitions. Data loading and administrative tasks can therefore be scheduled independently of the time zone.
Analysis and Reporting
You can use the semantically partitioned object for reporting and analysis, as you can with any other InfoProvider. You can also choose to only update selected partitions to an InfoCube, for example, or include selected partitions in a MultiProvider and use them for analysis, as shown in the following graphic:


Notes on updating deltas:
- If the semantically partitioned object is made up of InfoCubes, deltas can be updated to the target InfoProvider using DTPs without any restrictions.
- If the source is a semantically partitioned object that is made up of DataStore objects, only full DTPs can be created. If you want to update using deltas, you have to select the partitions of the semantically partitioned object as the source, rather than the semantically partitioned object itself.
- If the target is a semantically partitioned object, you can create the DTPs using the wizard of the target semantically partitioned object. The source of the DTPs would then have to be the outbound InfoSource of the source semantically partitioned object rather than the semantically partitioned object itself.
Note on analysis: If you update the entire semantically partitioned object to another InfoProvider, the navigation attributes cannot be used in the analysis. This is because the InfoSource that compiles the individual partitions for the update does not support navigation attributes. If you only update some of the partitions, this restriction does not apply.
Note on query performance: Due to partition pruning, the data is processed quickly, with or without the use of a BW Accelerator. With partition pruning, only those partitions are read that contain the data requested in the query.
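The partition pruning idea mentioned in the last note can be pictured with a small sketch. The following Python snippet is only an illustration of the principle (it is not SAP code); the partition names and 0FISCPER ranges are assumed example values.

```python
# Minimal sketch of partition pruning (not SAP code): only read the partitions
# whose value range intersects the selection of the query.
partitions = {
    "SPO_2013": ("2013001", "2013012"),   # assumed 0FISCPER low/high per partition
    "SPO_2014": ("2014001", "2014012"),
    "SPO_2015": ("2015001", "2015012"),
}

def prune(selection_low, selection_high):
    """Return only the partitions that can contain data for the selection."""
    return [name for name, (low, high) in partitions.items()
            if not (selection_high < low or selection_low > high)]

print(prune("2014004", "2014006"))   # ['SPO_2014'] - the other partitions are skipped
```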

Procedure
1. Create a DataStore object or an InfoCube with the Semantically Partitioned property. More information: Creating a Semantically Partitioned Object
2. Create a transformation.
3. Create a data transfer process. More information: Creating a DTP for a Semantically Partitioned Object
4. Create a process chain. More information: Creating Process Chains for a Semantically Partitioned Object

Results
You have now created a semantically partitioned object with a data flow. In the editing screen of the semantically partitioned object, choose Display Monitor to see an overview of the status of your partitions and the DTPs. The displayed request statuses tell you whether the requests are active and whether all the data is up to date. For the last point, the system checks whether the last request to be retrieved is the same in every partition, or whether one of the partitions contains a newer request than the other partitions.
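The "all data up to date" check described above amounts to comparing the most recent request per partition. The following minimal Python sketch illustrates only that comparison; the partition names and request IDs are assumed example values, not the output of any SAP transaction.

```python
# Minimal sketch (not SAP code): all partitions are up to date if the last
# request loaded into each partition is the same.
latest_request = {
    "PART_EMEA": "REQU_00000042",
    "PART_AMER": "REQU_00000042",
    "PART_APAC": "REQU_00000041",   # this partition lags behind
}

def all_up_to_date(latest_by_partition):
    """True if every partition was last loaded with the same request."""
    return len(set(latest_by_partition.values())) == 1

print(all_up_to_date(latest_request))   # False
```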

3.7.1 Creating a Semantically Partitioned Object


Prerequisites
In the Data Warehousing Workbench, you can deactivate the display of trees, to make more space available for large applications. In the left navigation pane, choose Hide for Large Applications.

Context
The semantically partitioned object consists of known objects: an InfoCube or a DataStore object. The semantic partitioning creates several objects with the same structure - the partitions. A wizard is available to help you minimize the required effort. You define the template for the partitions (the reference structure). The partitions are identical and are derived from this structure. You can only create and change the reference structure. The partitions are write-protected to make sure that they remain identical. To keep the process of creating a semantically partitioned object as simple as possible, different objects are generated when the object is activated. The following graphic shows the objects that are generated (upper area) and the objects that you need to create yourself (lower area).

Part of the data flow is generated for data flowing out of the partitions: an InfoSource is created together with simple dummy transformations that are not executed by the DTP. Part of the data flow is also created for data flowing into the partitions: an InfoSource is also generated with simple dummy transformations. This InfoSource represents a data entry layer for all partitions and makes it easier for you to connect sources. A semantically partitioned object can only be transported as a whole object. The generated objects are not transported. Instead, they are generated in the target system.

Procedure
1. You are in the Data Warehousing Workbench in the InfoProvider tree. In the InfoArea, choose Create InfoCube or Create DataStore Object.
2. Make the required entries. You can only select Standard as the type for InfoCubes. If you are using an SAP HANA database, the Optimized for SAP HANA option is set by default.
3. Choose the property Semantically Partitioned. The system now automatically creates an "envelope" (for the semantically partitioned object) in which the different objects are merged. A screen appears where you can define the semantically partitioned object. The right area of the screen displays the InfoProvider definition. The left area of the screen contains a wizard that helps you to create the required objects. More information: The Wizard
4. Define your InfoCube or DataStore object. In doing so, you define the reference structure for the partitions.

Note
Note that an InfoCube for a semantically partitioned object cannot contain any non-cumulative key figures.

5. In the wizard, choose Edit Partitions. The reference structure is automatically saved.
6. Select the partition criteria. All characteristics are allowed for an InfoCube, but only key fields are allowed for a DataStore object. You can select a maximum of five partition criteria. Partition characteristics should be as stable as possible and subject to only a small number of changes.
7. Define the partitions. Here you can define single values, intervals and conditions. Choose Add Partition to create more partitions. The partitions are automatically given a name, which you can change if required.
8. Check and save the partitions. When the check is performed, the system makes sure that the partitions do not overlap (a small sketch of this check follows after the procedure).
9. In the wizard, choose Start Activation. The objects are generated and a log is displayed.
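The overlap check mentioned in step 8 can be sketched as a pairwise interval comparison. The following Python snippet is only an illustration of that check (not SAP code); the partition names, the partitioning characteristic and the value intervals are assumed example values.

```python
# Minimal sketch of the partition overlap check (not SAP code): two partitions
# must not share any value of the partitioning characteristic.
partitions = [
    ("PART_01", "2012", "2013"),      # assumed 0CALYEAR intervals
    ("PART_02", "2014", "2015"),
    ("PART_03", "2015", "2016"),      # overlaps PART_02 at value 2015
]

def find_overlaps(parts):
    """Return pairs of partitions whose value intervals intersect."""
    overlaps = []
    for i, (name_a, low_a, high_a) in enumerate(parts):
        for name_b, low_b, high_b in parts[i + 1:]:
            if low_a <= high_b and low_b <= high_a:   # closed intervals intersect
                overlaps.append((name_a, name_b))
    return overlaps

print(find_overlaps(partitions))   # [('PART_02', 'PART_03')]
```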

Results
You have now created the semantically partitioned object. The wizard helps you to perform the remaining steps.

Note
Provided that you have not loaded any data in your object, you can perform all possible types of changes in the editing screen of the reference structure. If you have already loaded data, you can only add InfoObjects in the reference structure. However, you cannot delete any InfoObjects. In the editing screen of the partitions, you can delete empty partitions and add new partitions. You can change the partitioning criteria of existing partitions. More information: Repartitioning Semantically Partitioned Objects

3.7.2 The Wizard


Concept
A wizard helps you to create the required components of a semantically partitioned object. The wizard has two views: In the Modeling view, you define the components of the semantically partitioned object. In the Component view, you can display the components of a semantically partitioned object. The Modeling view:


Once a step has been completed, a green point is displayed next to it. The next step is indicated by a blue arrow. The first three steps relate directly to the semantically partitioned object. You can also return to completed steps and make changes. The further steps required to use a semantically partitioned object are listed under Further Options.
The Component view: As soon as you have created a part of the semantically partitioned object, you can switch to the Component view. Here you are provided with an overview of your partitions, and you can also go to the overview of the DTPs and the process chains.

3.7.3 Creating Transformations for a Semantically Partitioned Object


Prerequisites
You have created your sources (DataSources, InfoSources and any further InfoProviders required).

Context
You create a transformation or transformations when you want to connect several sources, and the target is the InfoSource that was generated when the semantically partitioned object was activated.

Procedure
1. In the wizard, choose Create Transformation. In the dialog box that appears, the generated InfoSource is automatically entered as the transformation target.
2. Enter the source. This can be a DataSource, an InfoSource or another InfoProvider.
3. Choose Create Transformation. A suggestion for the transformation is created and displayed.
4. Edit the transformation and activate it.

3.7.4 Creating a DTP for a Semantically Partitioned Object


Context
The data transfer process is not part of the semantically partitioned object, but the wizard still helps you to create the data transfer processes. Templates make it easier to create several DTPs of the same type. A DTP template contains default parameters that can be used to create DTPs for the partitions. You can create DTP templates so that you can quickly and easily create a large number of DTPs with the same default settings. The Create dialog box for DTPs only displays the parameters that are actually needed for semantically partitioned objects. There is also a status overview that allows you to quickly identify inconsistencies. The maintenance screen is divided into three areas: DTP generation, DTP templates and the detailed view.
Area: DTP Generation
The DTP Generation area displays all objects belonging to the semantically partitioned object. Here you can add or delete DTPs from templates using drag and drop. To make processing easier, you can filter by source or by partition. Various statuses are displayed:
- DTP status: After a DTP has been generated, the status is green. It remains green as long as the DTP remains unchanged. If the DTP is changed, the status also changes.
- DTP filter: Here you can see what type of filter is being used: an automatically created and generated filter or a user-defined filter. To display the properties of a filter that has been generated, click on the corresponding symbol. To edit the properties of a user-defined filter, click on the symbol.
- DTP template: The status of the relationship between the assigned DTP and the DTP template is displayed here. If the template is changed after the assignment, or if the assigned DTP is changed, the status is displayed as not equal.
Area: DTP Templates
A standard template is provided. You cannot change this template. You can, however, make changes to the settings after assigning the template as a DTP. You can also create your own templates. You can store these templates in folders to keep a clearer overview. A template that is still being used in the form of DTPs cannot be deleted.
Area: Detailed View
The details of a previously selected DTP or the details of a template are displayed in the detailed view. In the case of a DTP, the details of the DTP and the details of the underlying template are displayed. This detailed view is also a status display. Any deviations from the template are shown in red, and everything identical to the template is shown in green.

Procedure
1. In the wizard, choose Create Data Transfer Process.
2. If you want to use your own template, create a folder in DTP Templates.
3. Create a template in this folder.
4. Enter a description for the template and confirm your entries.


5. In Extraction, you can select the extraction mode, package size and key date for master data. You can switch on the currency conversion.

Note
You can only select extraction mode Delta in the case of semantically partitioned objects that are based on InfoCubes.
In addition, you can specify the tables from which the data is extracted. For extraction from a DataStore object, you have the following options:
- Active Table (with Archive): The data is read from the active table and from the archive or from a near-line storage if one exists. You can choose this option even if there is no active data archiving process yet for the DataStore object.
- Active Table (without Archive): The data is only read from the active table. If there is data in the archive or in a near-line storage at the time of extraction, this data is not extracted.
- Archive (Only Full Extraction): The data is only read from the archive or from a near-line storage. Data is not extracted from the active table.
- Change Log: The data is read from the DataStore object's change log.
For extraction from an InfoCube, you have the following options:
- InfoCube Tables: Data is only extracted from the database (E table, F table and aggregates).
- Archive (Only Full Extraction): Only data that is archived or is in a near-line storage is read.
6. In the Update area, you can make settings for error handling. The following settings are possible:
- Switched off: If an error occurs, the error is reported as a package error in the DTP monitor. The error is not assigned to the data record. The system does not build the cross-reference table used to determine the data record number. Processing is quicker. The incorrect records are not written to the error stack, since the request is terminated and has to be updated again in its entirety.
- No update, no reporting (default): If errors occur, the system terminates the update of the entire data package. The request is not released for reporting and analysis. The incorrect record is highlighted so that the error can be assigned to the data record. The incorrect records are not written to the error stack, since the request is terminated and has to be updated again in its entirety.
- Update valid records, no reporting (request red): This option allows you to update valid data that is released for reporting and analysis only after the administrator has checked the incorrect records that were not updated and has manually released the request. The incorrect records are written to a separate error stack, where they can be edited and updated manually using an error DTP.
- Update valid records, reporting possible: The valid records are available immediately for reporting and analysis purposes. Follow-on activities (such as adjusting aggregates) are performed automatically. The incorrect records are written to a separate error stack, where they can be edited and updated manually using an error DTP.
7. Under Execute, you can enter information about the request status:
- Technical Request Status: This parameter specifies the behavior of a request generated by the current data transfer process if the system logged warnings when processing the request. The technical request status can be set to green or red.
- Overall Status of Request: This parameter determines the behavior of a request generated by the current data transfer process once the technical part of processing is finished and the overall status is to be set. The following settings are allowed for this parameter:
  - Set Overall Status Automatically: If this setting is selected, the overall status is automatically set to the same status as the technical status, once the request is completed with the technical status green or red.
  - Set Overall Status Manually: If this setting is selected, the overall status of the request initially remains unchanged once the technical part of processing has been completed with status red or green. In particular, this means that data for a green request is not released for reporting or further processing. The overall status has to be set manually by the user or by a process in a process chain.
8. Under DTP Filter, you can choose between Create Automatically and User-Defined:
- If you select Create Automatically, the system generates a filter, based on the partition properties, when the DTPs are generated. The filter ensures that only data relevant for the partition is loaded (a small sketch of this idea follows after the procedure).
- If you select User-Defined, no filter is created. In this case, you have to make sure that the data is filtered in the source or in the transformation. You should expect longer loading times here. In some cases, the system might select User-Defined automatically if the filter cannot be generated.
9. Confirm your entries.
10. Select the partitions that you want to assign the DTP to, and drag the template onto the partitions.
11. Generate the DTPs. The DTPs are now created from the template.
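For step 8, the automatically generated filter can be pictured as a set of selection rows derived from the partition definition. The following minimal Python sketch only illustrates this derivation (it is not an SAP API); the characteristic name, partition names and values, and the select-option-like tuple layout are assumptions made for illustration.

```python
# Minimal sketch (not an SAP API): derive a DTP filter from the partition
# definition so that each DTP only loads the data belonging to its partition.
partition_definitions = {
    "PART_DE": {"0COUNTRY": ["DE"]},
    "PART_US": {"0COUNTRY": ["US", "CA"]},
}

def build_dtp_filter(partition):
    """Return selection rows (characteristic, sign, option, low) for one partition."""
    selections = []
    for characteristic, values in partition_definitions[partition].items():
        for value in values:
            selections.append((characteristic, "I", "EQ", value))  # include, equals
    return selections

print(build_dtp_filter("PART_US"))
# [('0COUNTRY', 'I', 'EQ', 'US'), ('0COUNTRY', 'I', 'EQ', 'CA')]
```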

3.7.5 Creating Process Chains for a Semantically Partitioned Object


Context
The process chains are not part of the semantically partitioned object but the wizard still helps you to create the process chains.

Procedure
1. In the wizard, choose Create Process Chain. The screen is now divided into three areas: The left screen area contains an overview of your DTPs. The upper right area shows the process chains that have been generated, and in the lower right area, you can create your process chains in the detailed view.
2. In the overview, select the DTPs that you want to run together in a process chain. Choose Add in the detailed view.
3. The system suggests a path and a sequence, which you can change if necessary. If you assign several DTPs to one path, the DTPs are executed in the specified sequence. If you select a separate path for each DTP, these DTPs are executed at the same time.
4. Save and generate the process chain. If you choose Generate, you create a start process.
5. Select a scheduling option and choose Create.
6. Save your entries and choose Back. Your process chains are now displayed in the Generated Process Chains screen area. In the process chain maintenance screen (transaction RSPC), these process chains appear under the node Created by Semantically Partitioned Object.
7. To execute the process chains, open the process chain maintenance screen. To do this, choose Process Chain Maintenance in the detailed view.
8. Choose Schedule. The process chain is executed according to your settings.

