9. BW Administration Cockpit
10. Optimizing performance of InfoProviders
11. ILM (Information Lifecycle Management)
12. BWA
13. Query analyzing example
14. General Hints
- no aggregates / BWA
- DB statistics are missing
- indexes not updated
- read mode of the query is not optimal
- undersized PSAPTEMP
- DB parameters not optimal (memory and buffer)
- HW: buffer, I/O, CPU, memory are not sufficient
- is the OLAP cache used?
- high amount of transmitted cells, because the read mode is not optimal
- user exits in query execution
- usage of big hierarchies
- high amount of transmitted cells and formatting on the front end
- high latencies in the WAN/LAN
You can turn off parallel processing for a single query. For queries with very fast response times, the overhead of parallel processing can be greater than the potential time gain, so in this case it may make sense to turn it off. Just play a little with RSRT and the different options to find the optimal settings for your queries! There are also several read modes for a query. In most cases the best choice is 'H' (query to be read when you navigate or expand hierarchies - more information).
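The read modes selectable in RSRT can be summarized in a small lookup table. This is only an illustrative sketch; the letter codes and descriptions follow the standard BW read-mode semantics, with 'H' as recommended above.

```python
# BW query read modes as selectable in RSRT (illustrative sketch).
READ_MODES = {
    "A": "Query to read all data at once",
    "X": "Query to read data during navigation",
    "H": "Query to be read when you navigate or expand hierarchies",
}

def describe_read_mode(mode: str) -> str:
    """Return the description of a BW read mode ('H' is usually the best choice)."""
    return READ_MODES.get(mode.upper(), "unknown read mode")
```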
- Technical Info
- Performance Info -> usage of aggregates, cache (+delta), compression, status of requests
3. Cache monitor
Jump from RSRT into the cache monitor (TA: RSRCACHE). Cache parameters: general info about the cache parameters; check whether they (runtime object and shared memory) are all well sized. Also have a look at the SAP Help for this. There are two types of OLAP cache, the cross-transactional cache and the local cache (details on help.sap.com). One thing you must know: the local cache is used in the following cases:

- the cross-transactional cache has been deactivated (see the parameter Cache Inactive)
- the cache was deactivated for the InfoProvider (for all future queries) or for the query
Main memory -> objects in list or hierarchy display -> Technical Info (usage of the selected cache)
Also check the buffer consumption under buffer monitor (Exp/Imp Mem) and buffer overview (Exp./Imp. SHM). Check for which queries it makes sense to save results in the OLAP cache. Recommendations from SAP:

- How often the query is requested: We recommend that you save queries that are requested very frequently in the cache. The main memory cache is very fast but limited in size. By displacing cached data you can work around main memory limitations, but this also affects system performance. There are practically no limitations on the memory space available in the database or in the file system for the persistent cache, and accessing compressed data directly in the persistent cache also improves performance.
- The complexity of the query: Caching improves performance for queries whose evaluation is complex. We recommend that you keep complex data processed by the OLAP processor in the cache. (The cache mode Main Memory Without Swapping is therefore less suitable for such queries.)
- How often data is loaded: The cache does not provide an advantage if query-relevant data is changed frequently and therefore has to be loaded frequently, since the cache has to be regenerated every time. If cached data is kept in main memory, data from queries that are called frequently can be displaced, so that reading the data takes more time.
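SAP's three criteria can be condensed into a simple decision helper. This is purely illustrative: the function name and the thresholds are made up, only the logic (cache frequent or complex queries, but not if the underlying data is reloaded so often that the cache is constantly invalidated) comes from the recommendations above.

```python
def should_cache(requests_per_day: int, is_complex: bool, loads_per_day: int) -> bool:
    """Illustrative heuristic for SAP's three OLAP-cache criteria.

    Thresholds are invented for illustration; tune them for your system.
    """
    if loads_per_day >= requests_per_day:
        # Cache would be regenerated more often than it is hit.
        return False
    # Frequently requested or complex queries benefit most from the cache.
    return requests_per_day > 10 or is_complex
```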
For detailed information on which of the following modes should be used, check the SAP Help:
- Cache is Inactive (0)
- Main Memory Cache Without Swapping (1)
- Main Memory Cache with Swapping (2)
- Persistent Cache per Application Server (3)
- Cross-Application Server Persistent Cache (4)
- BLOB/Cluster Enhanced (5)

You can configure these settings in RSRT (see screenshot 3.1).
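The numeric values shown in RSRT map to the modes as follows; a small lookup table (a sketch, not an SAP API) makes the mapping explicit:

```python
# OLAP cache modes keyed by the numeric value shown in RSRT.
CACHE_MODES = {
    0: "Cache is Inactive",
    1: "Main Memory Cache Without Swapping",
    2: "Main Memory Cache with Swapping",
    3: "Persistent Cache per Application Server",
    4: "Cross-Application Server Persistent Cache",
    5: "BLOB/Cluster Enhanced",
}
```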
Use delta caching if possible. With this option you avoid invalidation of the cached data when the data basis changes (data loads / process chains), so only the new data is read from the DB.
ST03N (expert mode) -> click on BI system load to get data such as:
- query runtimes (separated into BEx, BEx Web (ABAP / Java))
- process chain runtimes
- DTP runtimes
- aggregate usage
These tools all use the RSDD* tables/views and display them in a colorful and sorted way. My favourites are BW-TOOLS, BW_QUERY_ACCESSES and BIIPTOOLS.
6. ST14
ST14 -> Business Warehouse -> plan analysis -> client 010: choose a date, Basis Data (Top Objects) and Basis: Determine Top DB Objects, and schedule it. You will get a great analysis of your whole BI system, including:
- top 30 PSA, E-fact, F-fact, dimension, master data tables, change logs, cubes, ODS/DSO, aggregates, and some special info for BWA
- for those who use Oracle: tables with more than 100 partitions
- the upload performance for the last weeks
- compression rate
- result of SAP_INFOCUBE_DESIGNS (D- and E-tables in relation to the F-tables)
- ...
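The SAP_INFOCUBE_DESIGNS output relates dimension-table rows to fact-table rows. A common rule of thumb (an assumption here, not something the report enforces) is that a dimension larger than roughly 10% of the fact table is a candidate for a line-item dimension:

```python
def dimension_ratio(dim_rows: int, fact_rows: int) -> float:
    """Ratio of dimension-table rows to fact-table rows,
    as SAP_INFOCUBE_DESIGNS reports it."""
    return dim_rows / fact_rows if fact_rows else 0.0

def is_line_item_candidate(dim_rows: int, fact_rows: int,
                           threshold: float = 0.10) -> bool:
    # The 10% threshold is a widespread rule of thumb, not an official SAP limit.
    return dimension_ratio(dim_rows, fact_rows) > threshold
```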
If you have trouble with the growth of your system, this is a great entry point to find out where the space has gone ;) So now you know which requests should be compressed and how to get rid of partitions (maybe repartitioning; RSA1 -> Administration -> Repartitioning). Keep in mind that repartitioning creates shadow tables in the namespace /BIC/4E<InfoCubename> and /BIC/4F<InfoCubename>. These tables exist until the next repartitioning, so you can delete them after the repartitioning is completed. Locate and delete empty F-partitions via report SAP_DROP_EMPTY_FPARTITION (note 430486).
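The shadow-table naming described above follows a fixed pattern, which a tiny helper (purely illustrative) can reproduce:

```python
def shadow_tables(infocube: str) -> tuple:
    """Return the names of the E- and F-fact shadow tables that
    repartitioning creates for an InfoCube (naming pattern as above)."""
    return ("/BIC/4E" + infocube, "/BIC/4F" + infocube)
```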
7. Statistics
TA: RSDDSTAT -> statistics recording (tracing) settings for InfoProviders/queries etc.

Views:
- RSDDSTAT_OLAP (OLAP + front-end statistics)
- RSDDSTAT_DM (MultiProvider, aggregate split, DB access time, RFC time)

Use TA SE11 to view their content. Use the column AGGREGATE to identify whether a query used aggregates or the BWA: aggregates are 1xxxxxx and BWA indexes appear as <InfoCube>$X.

How to delete statistics:
- TA RSDDSTAT (manual deletion, setting the trace level of queries and setting up deletion of statistics)
- automatic deletion: table RSADMIN, parameter TCT_KEEP_OLAP_DM_DATA_N_DAYS (default 14 days); the date relates to the field Starttime in table RSDDSTATINFO
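The AGGREGATE column convention above (aggregates as 1xxxxxx numbers, BWA indexes as <InfoCube>$X) can be checked with a small helper. This is a sketch, not an official API; the assumption that an empty value means a direct InfoProvider read is mine.

```python
def classify_access(aggregate_value: str) -> str:
    """Classify the AGGREGATE column of the statistics views:
    numeric 1xxxxxx values are aggregates, '<InfoCube>$X' values are
    BWA indexes. An empty value is assumed to be a direct read."""
    value = aggregate_value.strip()
    if not value:
        return "infoprovider"
    if value.isdigit() and value.startswith("1"):
        return "aggregate"
    if value.endswith("$X"):
        return "bwa-index"
    return "unknown"
```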
8. ST02
Check every instance for swaps -> double-click on the red-marked lines, then click on Current Parameters and you will see which parameter you should increase. Please read the SAP Help for each parameter; there may be dependencies (memory and buffer)! There are two possible reasons for swapping:

- There is no space left in the buffer data area -> the buffer is too small.
- There are no directory entries left -> although there is enough space left in the buffer, no further objects can be loaded because the number of directory entries is limited -> increase the parameter for the directory entries!
Note: Before you change the settings, also keep an eye on the pools via the tool sappfpar! (on OS level as <sid>adm: sappfpar check pf=<path-to-profile>)
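The two swap causes translate into a tiny diagnostic. The function and its inputs are hypothetical; only the two-case logic mirrors the ST02 behavior described above.

```python
def diagnose_swaps(free_space_kb: int, free_dir_entries: int) -> str:
    """Mirror the two ST02 swap causes: either the buffer data area is
    full (increase the buffer size) or the directory entries are
    exhausted (increase the directory-entry parameter)."""
    if free_space_kb == 0:
        return "buffer too small -> increase buffer size"
    if free_dir_entries == 0:
        return "directory entries exhausted -> increase directory-entry parameter"
    return "no swap cause visible from these two values"
```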
Prerequisites:
- min. NW 7.0
- Portal Stack 5 + BI Administration package 1.0
- implement the technical content (TA: RSTCC_INST_BIAC)
- report RSPOR_SETUP

Pros:
- average and max. runtimes of queries
- process chain runtimes
- trends for queries and BW applications
- suggestions for obsolete PSA data
- Compress InfoCubes
- Partitioning (and repartitioning) of InfoCubes
  - DB level: range partitioning (only for database systems which can handle partitions, e.g. Oracle, DB2, MSSQL), clustering
  - application level
- nearline storage (vendors for nearline storage are e.g. SAND Technology, EMC, FileTek, PBS, ...)
- archiving (archiving via file server or tape drives)
- deletion of data

Currently we don't use any kind of ILM, but research is going on ;)
RSDDTREX_MEMORY_ESTIMATE (see screenshot) -> estimates the memory consumption of the BWA for a specific InfoCube. Note that this is only the memory consumption, not the needed storage on the hard disk!
RSDDV -> displays all your indexes which are indexed by the BWA
RSRV -> analyze BW objects
RSDDBIAMON2 -> BWA monitor
TREX_ADMIN_TOOL (standalone tool)
Tables RSDDSTATTREX and RSDDSTATTREXSERV for analyzing the runtimes of the BWA
Table RSDDTREXDIR (administration of the TREX aggregates), check this blog for more information

1) Report RSDDTREX_INDEX_LOAD_UNLOAD loads or deletes BWA indexes from the memory of the BWA servers. This can also be done via RSRV: Tests in Transaction RSRV -> BI Accelerator -> BI Accelerator Performance Checks -> Load BIA index data into main memory / Delete BIA index data from main memory.
2) Optimize the rollup process with the BWA delta index via RSRV (Tests in Transaction RSRV -> All Elementary Tests -> BI Accelerator -> BI Accelerator Performance Checks -> Propose Delta-Index for Indexes). Note that the delta index grows with every load. The delta index should not be bigger than 10% of the main index; if it is, merge both indexes via report RSDDTREX_DELTAINDEX_MERGE.
3) Use the BWA/BIA Index Maintenance Wizard for DFI support or the option 'Always keep all BIA index data in main store'. This way the indexes won't be read from disk; they always stay in memory! You can also activate and monitor DFI support via the TREX admin standalone tool. Keep an eye on the memory consumption of the BWA when using this option!
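The 10% rule for the delta index translates directly into a check. This is a sketch; the sizes are whatever unit the BWA reports, and the function name is made up.

```python
def needs_merge(main_index_size: int, delta_index_size: int,
                limit: float = 0.10) -> bool:
    """True if the BWA delta index exceeds 10% of the main index and
    should therefore be merged (via report RSDDTREX_DELTAINDEX_MERGE)."""
    if main_index_size == 0:
        # No main index data yet: any delta content warrants a merge.
        return delta_index_size > 0
    return delta_index_size / main_index_size > limit
```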
12.2 Option 'keep index in memory' via the BWA/BIA Index Maintenance Wizard
Check list
- How often is the data in this InfoProvider changed?
- RSRT -> Performance Info -> any aggregates, cache (+delta) mode, compression?
- Which InfoProviders were hit by the query? RSRT -> Technical Information (in our case GRBCS_V11, a virtual cube, and GRBCS_R11, a reporting cube)
- Are the DB statistics for these tables/indexes up to date?
- Is it possible to index the cube via BWA? (GRBCS_V11 can't be indexed because it is a virtual cube; GRBCS_R11 is already indexed; GRBCS_V11 includes GRBCS_M11, a real-time InfoCube which also can't be indexed, and GRBCS_R11.)
- Check where most of the runtime is spent (execute the query in RSRT with the options 'Display Statistic Data' and 'Do not use Cache').
- Check table RSDDIME whether a line-item dimension or high cardinality is used (if you are not sure when to use these features, have a look at the useful links below).

In this case I would activate the OLAP cache (which mode depends on how often the basis data changes and whether the providers are filled at the same time -> grouping for MultiProviders, see point 2) and talk to my colleagues who are responsible for modeling about changing the compression time frames. For more details you can also check table RSDDSTAT_DM. The high runtime is also caused by a bug in the DB statistics (resulting in a bad execution plan), which will be fixed in a merge fix (9657085 for PSU 1 and 10007936 for PSU 2) for Oracle 11g (bug 9495669, see note 1477787).
13.2 You can see a high usage of the data manager (part of the analytic engine), i.e. read access to the InfoProviders; in this case it is the read time on the DB.
I hope I could give you some useful hints for your analyses. I appreciate any kind of feedback, improvements and your own experiences. Be careful with compression and partitioning; only use them if you know what you are doing and what happens to your data! Maybe I could even show an old stager some new tables/transactions.