
1. Performance issues in summary
2. Query performance analysis
3. Cache monitor
4. ST03n
5. ST13
6. ST14
7. Statistics
8. ST02
9. BW Administration Cockpit
10. Optimizing performance of InfoProviders
11. ILM (Information Lifecycle Management)
12. BWA
13. Query analyzing example
14. General Hints

1. The common reasons for performance issues in summary


Causes for high DB runtimes of queries

- no aggregates / no BWA
- DB statistics are missing
- indexes not updated
- read mode of the query is not optimal
- PSAPTEMP too small
- DB parameters not optimal (memory and buffers)
- hardware (buffer, I/O, CPU, memory) is not sufficient
- OLAP cache not used

Causes for high OLAP runtimes

- high amount of transferred cells, because the read mode is not optimal
- user exits in query execution
- usage of big hierarchies

Causes for high frontend runtimes

- high amount of transferred cells and formatting in the frontend
- high latencies in the WAN/LAN involved
- insufficient client hardware

2. Query performance analysis


I think this is a really important point (including the OLAP cache) and should be explained in a little more depth.

TA RSRT: To get exact runtimes for a before/after analysis, use this transaction with or without cache/BWA etc.

- Choose the query, then Execute + Debug -> 'Do Not Use Cache' -> 'Display Statistics Data'.
- Button 'Properties': activate the cache mode (this can also be activated for the whole InfoProvider).
- You should use the grouping if you use a MultiProvider where the data of only one cube is changed independently of the others. This way you avoid invalidating the whole cache.

The following grouping procedures are available:
1) no grouping
2) grouping depending on InfoProvider types
3) grouping depending on InfoProvider types, InfoCubes separately
4) every provider separate

1) All results of the InfoProviders are stored together. If data of one of the InfoProviders changes, the whole cache must be recreated. This setting should be used when all the InfoProviders used by the MultiProvider have the same load cycle.
2) All results are stored grouped by the type of the InfoProvider. This option should be used when basic InfoCubes are combined with a real-time InfoCube.
3) Same as 2), with the additional feature that the result of every InfoCube is stored separately. It should be used when you change/fill the cubes independently of each other.
4) The results of every provider are stored separately (independent of the type). This option should be used when not only InfoCubes but also other provider types are updated separately.

2.1 RSRT Query Properties

You can turn off parallel processing for a single query. In the case of queries with very fast response times, the effort required for parallel processing can be greater than the potential time gain, so in this case it may make sense to turn it off. Just play a little bit with RSRT and the different options to get the optimal settings for your queries! There are also some special read modes for a query. In most cases the best choice is 'H' (query to be read when you navigate or expand hierarchies - more information).

2.1 RSRT Query properties with grouping

- Technical Info
- Performance Info -> usage of aggregates, cache (+ delta), compression, status of requests

2.2 RSRT Performance Info

3. Cache monitor

Jump from RSRT into the cache monitor (TA RSRCACHE).

Cache parameters: general information about the cache parameters; check whether they (runtime object and shared memory) are all well sized. For this, also have a look at the SAP Help. There are two types of OLAP cache, the cross-transactional cache and the local cache (details on help.sap.com). One thing you must know: the local cache is used in the following cases:

- when the cross-transactional cache has been deactivated (see the parameter Cache Inactive)
- when the cache was deactivated for the InfoProvider (for all future queries) or for the query
- when it is determined during runtime that caching cannot take place

Main memory -> display the objects inside as a list or hierarchy -> Technical Info (usage of the selected cache)

Also check the buffer consumption under the buffer monitor (Exp/Imp Mem) and the buffer overview (Exp./Imp. SHM). Check for which queries it makes sense to save them in the OLAP cache; recommendations from SAP:

- How often the query is requested: We recommend that you save queries that are requested very frequently in the cache. The main memory cache is very fast, but limited in size. By displacing cached data you can work around the main memory limitations, but this also affects system performance. There are practically no limitations on the memory space available in the database or in the file system for the persistent cache. Accessing compressed data directly in the persistent cache also improves performance.
- The complexity of the query: Caching improves performance for queries whose evaluation is more complex. We recommend that you keep complex data processed by the OLAP processor in the cache. (Therefore the cache mode Main Memory Without Swapping is less suitable for such queries.)
- How often data is loaded: The cache does not provide an advantage if query-relevant data is changed and therefore has to be loaded frequently, since the cache has to be regenerated every time. If cached data is kept in main memory, data from queries that are called frequently can be displaced, so that calling that data takes more time.

For detailed information on which of the following modes should be used, check the SAP Help:

- Cache is Inactive (0)
- Main Memory Cache Without Swapping (1)
- Main Memory Cache with Swapping (2)
- Persistent Cache per Application Server (3)
- Cross-Application Server Persistent Cache (4)
- BLOB/Cluster Enhanced (5)

You can configure these settings in RSRT (see screenshot 3.1).

3.1 RSRT performance info

3.2 RSRCACHE - Queries in Main Memory (BLOB/Cluster Enhanced is deactivated)

Use delta caching if possible. With this option you avoid invalidation of the cached data when the underlying data changes (data loads / process chains); only the new data is then read from the DB.

Hint: Prefill the OLAP cache via broadcasting (RSA1 -> Administration -> Broadcasting; see the documentation).

4. System load Monitor ST03n

ST03N (expert mode) -> click on BI System Load to get data like:

- query runtimes (separated into BEx, BEx Web (ABAP / Java))
- process chain runtimes
- DTP runtimes
- aggregate usage

5. ST13 Analyze & Service Toolset (depends on your ST-A/PI level)


There you can find some well known reports like RSECNOTE, but also the new BI tools:

- BPSTOOLS: BW-BPS Performance Toolset
- BIIPTOOLS: BI-IP Performance Toolset
- BW_QUERY_USAGE: BW: query usage statistics
- BW-TOOLS: BW Tools (PC analysis, request analysis, aggregate toolset, IP analysis, DTP request analysis and IO usage)
- TABLE_ANALYSIS: Table Analysis Tools
- BW_QUERY_ACCESSES: BW: aggregate/InfoCube accesses of queries

These tools all use the RSDD* tables/views and display them in a colorful and sorted way. My favourites are BW-TOOLS, BW_QUERY_ACCESSES and BIIPTOOLS.

6. ST14
ST14 -> Business Warehouse -> plan analysis -> client 010: choose the date, Basis Data (Top Objects) and Basis: Determine Top DB Objects, and schedule it. You will get a great analysis of your whole BI system, including:

- the top 30 PSA, E fact, F fact, dimension and master data tables, change logs, cubes, ODS/DSO objects and aggregates, plus some special info for the BWA
- for those who use Oracle: tables with more than 100 partitions
- the upload performance for the last weeks
- the compression rate
- the result of SAP_INFOCUBE_DESIGNS (D and E tables in relation to the F tables)
- ...

6.1 ST14 Overview

If you have trouble with the growth of your system, this is a great entry point to start your analysis and find out where the space has gone ;) So you now know which requests should be compressed and how to get rid of partitions (maybe repartitioning: RSA1 -> Administration -> Repartitioning), but keep in mind that repartitioning creates shadow tables in the namespace /BIC/4E<InfoCubename> and /BIC/4F<InfoCubename>.

These tables exist until the next repartitioning, so you can delete them after the repartitioning has completed. Locate and delete empty F-partitions via report SAP_DROP_EMPTY_FPARTITION (note 430486).

7. Statistics
TA RSDDSTAT: statistics recording (tracing) settings for InfoProviders/queries etc.

Views:
- RSDDSTAT_OLAP (OLAP + frontend statistics)
- RSDDSTAT_DM (MultiProvider, aggregate split, DB access time, RFC time)

Use TA SE11 to view their content. Use the column AGGREGATE to identify whether aggregates or the BWA are used: aggregates are 1xxxxxx and BWA indexes appear as <InfoCube>$X.

How to delete statistics:
- TA RSDDSTAT (manual deletion, setting the trace level of queries and setting up the deletion of statistics)
- automatic deletion: table RSADMIN, parameter TCT_KEEP_OLAP_DM_DATA_N_DAYS (default 14 days); the date relates to the field STARTTIME in table RSDDSTATINFO
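If you prefer to crunch these statistics yourself instead of browsing them in SE11/SE16, a minimal ABAP sketch like the one below sums the recorded event times per query for the last 14 days. It assumes the view RSDDSTAT_OLAP exposes the fields OBJNAME, CALDAY and EVTIME - verify the field names in SE11 for your release before using it.

REPORT z_top_olap_runtimes.
* Hedged sketch: list the queries with the highest total event time
* recorded in RSDDSTAT_OLAP during the last 14 days.
* Assumption: the view has the fields OBJNAME (query name),
* CALDAY (calendar day) and EVTIME (event time in seconds).

TYPES: BEGIN OF ty_stat,
         objname TYPE c LENGTH 30,
         evtime  TYPE p LENGTH 16 DECIMALS 3,
       END OF ty_stat.

DATA: lt_stat TYPE STANDARD TABLE OF ty_stat,
      ls_stat TYPE ty_stat,
      lv_from TYPE sy-datum.

lv_from = sy-datum - 14.                     "start of the analysis period

* Sum the event time per query (positional INTO, matching the field list)
SELECT objname SUM( evtime )
  FROM rsddstat_olap
  INTO TABLE lt_stat
  WHERE calday >= lv_from
  GROUP BY objname.

SORT lt_stat BY evtime DESCENDING.

* Print the top 20 queries by total event time
LOOP AT lt_stat INTO ls_stat FROM 1 TO 20.
  WRITE: / ls_stat-objname, ls_stat-evtime.
ENDLOOP.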

8. ST02
Check every instance for swaps -> double-click on the red marked lines and then click on Current Parameters; you will see which parameter you should increase. Please read the SAP Help for each parameter, since there could be dependencies (memory and buffers). There are two possible reasons for swapping:

- There is no space left in the buffer data area -> the buffer is too small.
- There are no directory entries left -> although there is enough space left in the buffer, no further objects can be loaded because the number of directory entries is limited -> increase the needed parameter for the directory entries!

Note: Before you change the settings, also keep an eye on the pools via the tool sappfpar! (On OS level as <sid>adm: sappfpar check pf=<path-to-profile>)

9. Using the BW Administration Cockpit


Setup via SPRO (BI -> Settings for BI Content -> Business Intelligence -> BI Administration Cockpit)

Prerequisites:

- min. NW 7.0, Portal Stack 5 + BI Administration package 1.0
- implement the technical content (TA RSTCC_INST_BIAC)
- report RSPOR_SETUP

Pros:

- average and max. runtimes of queries
- PC runtimes
- trends for queries and BW applications
- suggestions for obsolete PSA data

9.1 compressed and not compressed requests

9.2 process chain status

10. Optimizing performance of InfoProviders in summary


- Compress InfoCubes
- Partitioning (and repartitioning) of InfoCubes:
  - on DB level: range partitioning (only for database systems which can handle partitions, e.g. Oracle, DB2, MSSQL) and clustering
  - on application level

11. ILM (Information Lifecycle Management)


- nearline storage (vendors for nearline storage are e.g. SAND Technology, EMC, FileTek, PBS ...)
- archiving (via file server or tape drives)
- deletion of data

Currently we don't use any kind of ILM, but research is going on ;)

12. BWA Business Warehouse Accelerator (just a small summary):

- RSDDTREX_MEMORY_ESTIMATE (see screenshot): estimates the memory consumption of the BWA for a specific InfoCube. That is only the memory consumption, not the needed storage on the hard disk!
- RSDDV: display all your indexes which are indexed by the BWA
- RSRV: analyze BW objects
- RSDDBIAMON2: BWA monitor
- TREX_ADMIN_TOOL (standalone tool)
- tables RSDDSTATTREX and RSDDSTATTREXSERV for analyzing the runtimes of the BWA
- table RSDDTREXDIR (administration of the TREX aggregates); check this blog for more information

1) Report RSDDTREX_INDEX_LOAD_UNLOAD loads or deletes BWA indexes from the memory of the BWA servers. This can also be done via RSRV -> Tests in Transaction RSRV -> BI Accelerator -> BI Accelerator Performance Checks -> Load BIA index data into main memory / Delete BIA index data from main memory.
2) Optimize the rollup process with the BWA delta index via RSRV (Tests in Transaction RSRV -> All Elementary Tests -> BI Accelerator -> BI Accelerator Performance Checks -> Propose Delta-Index for Indexes). Note that the delta index grows with every load. The delta index should not be bigger than 10% of the main index; if this is the case, merge both indexes via report RSDDTREX_DELTAINDEX_MERGE.
3) Use the BWA/BIA Index Maintenance Wizard for DFI support or the option 'Always keep all BIA index data in main store'. That way the indexes won't be read from disk; they always stay in memory! You can also activate and monitor DFI support via the trexadmin standalone tool. Control your memory consumption of the BWA when using this option!

12.1 result of report RSDDTREX_MEMORY_ESTIMATE

12.2 Option to keep the index in memory via the BWA/BIA Index Maintenance Wizard

12.3 BWA suggestion for delta indexes (RSRV, see 2) in section 12)

13. Query analyzing example


Find out which queries have a long runtime via ST03N:

13.1 ST03N - very high DB usage for this query

Checklist

- How often is the data in this InfoProvider changed?
- RSRT -> Performance Info: are there any aggregates, which cache (+ delta) mode, compression?
- Which InfoProviders were hit by the query? RSRT -> Technical Information (in our case GRBCS_V11, a virtual cube, and GRBCS_R11, a reporting cube)
- Are the DB statistics for these tables/indexes up to date?
- Is it possible to index the cube via the BWA? (GRBCS_V11 can't be indexed because it is a virtual cube; GRBCS_R11 is already indexed; GRBCS_V11 includes GRBCS_M11 - a real-time InfoCube, which also can't be indexed - and GRBCS_R11.)
- Check where the biggest part of the runtime is spent (execute the query in RSRT with the options 'Display Statistics Data' and 'Do Not Use Cache').
- Check in table RSDDIME whether line item dimensions or high cardinality are used (if you are not sure when to use these features, have a look at the useful links below); a small sketch for this check follows after screenshot 13.2.

In this case I would activate the OLAP cache (which mode depends on how often the underlying data changes and whether the providers are filled at the same time -> grouping for MultiProviders, see point 2) and talk to the colleagues responsible for modeling about whether we can change something in the compression time frames. For more details you can also check table RSDDSTAT_DM. The high runtime is also caused by a bug in the DB statistics (resulting in a bad execution plan) which will be fixed in a merge fix (9657085 for PSU 1 and 10007936 for PSU 2) for Oracle 11g (bug 9495669, see note 1477787).

13.2 You can see a high usage of the data manager (part of the analytic engine) = read access to the InfoProviders; in this case the read time of the DB.
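For the RSDDIME check from the list above, a minimal hedged sketch could look like this; the field names LINITFL (line item flag) and HIGHCARD (high cardinality), as well as OBJVERS, are assumptions and should be verified in SE11 first.

REPORT z_check_dimension_flags.
* Hedged example: list all active dimensions of the two basic cubes
* from the example that use the line item or high cardinality setting.
* Assumption: RSDDIME has the fields INFOCUBE, DIMENSION, OBJVERS,
* LINITFL and HIGHCARD - check them in SE11 before relying on this.

TYPES: BEGIN OF ty_dime,
         infocube  TYPE c LENGTH 30,
         dimension TYPE c LENGTH 30,
         linitfl   TYPE c LENGTH 1,
         highcard  TYPE c LENGTH 1,
       END OF ty_dime.

DATA: lt_dime TYPE STANDARD TABLE OF ty_dime,
      ls_dime TYPE ty_dime.

SELECT infocube dimension linitfl highcard
  FROM rsddime
  INTO TABLE lt_dime
  WHERE objvers  = 'A'
    AND infocube IN ('GRBCS_R11', 'GRBCS_M11')
    AND ( linitfl = 'X' OR highcard = 'X' ).

LOOP AT lt_dime INTO ls_dime.
  WRITE: / ls_dime-infocube, ls_dime-dimension,
           ls_dime-linitfl, ls_dime-highcard.
ENDLOOP.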

14. General Hints


1. Use high cardinality only where it makes sense! It can result in bad query performance. Use table RSDDIME to get an overview of all properties of your dimensions.
2. Check in table RSRREPDIR (field CACHEMODE) whether cache and read mode 'H' are activated for all queries (also take care of the delta cache). If you have special cases for some queries, don't change your configuration. To change the read mode for all queries, call transaction RSRT, type 'RALL' as "OK code" and press 'Enter'; in the dialog box, choose the new read mode and press 'Enter'. To change the read mode for a specific query, enter the name of the query and select 'Read Mode'. (A small hedged ABAP sketch for this check follows after this list.)
3. Tablespace PSAPTEMP should have a minimum size of 2 times your biggest F fact table (e.g. we had some performance issues while executing queries which really took a lot of temp space because of aggregating and sorting, so now our temp space is 4 times bigger than our biggest F table).
4. Table RSTODSPART shows the number of records per request.
5. BEx Information Broadcaster -> fill the OLAP cache via BEx Query Designer, BEx Analyzer, BEx Web Analyzer, WAD, Portal and BEx Report Designer (scheduling on a daily, weekly or monthly basis).
6. All tables of an InfoCube can be listed with TA LISTSCHEMA.
7. Report SAP_INFOCUBE_DESIGNS prints a list of the cubes in the system and their layout.
8. Delete PSA tables in your process chains.
9. Delete change logs in your process chains.
10. Check whether your aggregates are wise or not (TA RSMON -> Aggregates).
11. Check SAP Note 1139396 and run the reports SAP_DROP_TMPTABLES and SAP_UPDATE_DBDIFF to clean up obsolete temporary entries.
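As a hedged illustration of hint 2 (it does not replace the mass change via RSRT), the following sketch lists the queries whose read mode is not 'H' or whose cache is inactive. The field names COMPID, INFOCUBE, READMODE and CACHEMODE in RSRREPDIR are assumptions - check them in SE11 before relying on the output.

REPORT z_check_query_readmode.
* Hedged sketch for hint 2: list the queries in the reporting
* directory RSRREPDIR whose read mode is not 'H' or whose cache
* mode is '0' (inactive). Field names are assumptions - verify
* COMPID, INFOCUBE, READMODE and CACHEMODE in SE11 first.

TYPES: BEGIN OF ty_rep,
         compid    TYPE c LENGTH 30,
         infocube  TYPE c LENGTH 30,
         readmode  TYPE c LENGTH 1,
         cachemode TYPE c LENGTH 1,
       END OF ty_rep.

DATA: lt_rep TYPE STANDARD TABLE OF ty_rep,
      ls_rep TYPE ty_rep.

SELECT compid infocube readmode cachemode
  FROM rsrrepdir
  INTO TABLE lt_rep
  WHERE readmode <> 'H'
     OR cachemode = '0'.

LOOP AT lt_rep INTO ls_rep.
  WRITE: / ls_rep-compid, ls_rep-infocube,
           ls_rep-readmode, ls_rep-cachemode.
ENDLOOP.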

I hope I could give you some useful hints for your analyses. I appreciate any kind of feedback, improvements and your own experiences. Be careful with compression and partitioning; only use them if you know what you are doing and what is happening with your data! Maybe I can even show an old stager some new tables/transactions.
