
Essbase Application Performance Tuning

Essbase system and application performance tuning are critical tasks for a successful implementation of Essbase or Hyperion Planning. Good application design and proper system administration include the following items:
1) Design The Outline Hour Glass Model
2) Defragmentation
3) Database Restructuring
4) Compression Techniques
5) Cache Settings
6) Intelligent Calculation
7) Data Load Optimization
8) Uncommitted Access
Design the Outline Using the Hourglass Model:
Build the outline so that dimensions are placed in the following order:
1) Dimension tagged as the Accounts dimension type
2) Dimension tagged as the Time dimension type
3) Largest dense dimension
4) Smallest dense dimension
5) Smallest sparse dimension
6) Largest sparse dimension
7) Attribute dimensions
Using the hourglass model improves the calculation performance of the cube.
Defragmentation:
Fragmentation is caused by any of the following activities:
1) Frequent data loads
2) Frequent data retrieval
3) Calculations
We can check whether the cube is fragmented by inspecting its Average Clustering Ratio in the database properties. The optimum clustering value is 1. If the average clustering ratio is less than 1, the cube is fragmented, which degrades its performance.
There are three ways of defragmenting a database:
1) Export the application's data to a text file, clear the data, and reload it from the text file without using a rules file.
2) Force a full restructure with the MaxL command:
alter database AppName.DbName force restructure;
3) Add and then delete a dummy member in a dense dimension to force a full restructure.
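The export/clear/reload approach can also be scripted in MaxL. This is only a sketch: the login credentials, Sample.Basic, and the file paths below are placeholders you would replace with your own.

```
/* Hypothetical names throughout: replace Sample.Basic and the paths. */
login admin identified by password on localhost;

/* 1. Export all data to a text file (a level-0 export is also common). */
export database Sample.Basic all data to data_file 'C:\backup\basic_export.txt';

/* 2. Clear the data, discarding the fragmented .pag/.ind contents. */
alter database Sample.Basic reset data;

/* 3. Reload the export file; an export-format file needs no rules file. */
import database Sample.Basic data from data_file 'C:\backup\basic_export.txt'
    on error abort;

logout;
```

Because an export/reload rewrites every block contiguously, the average clustering ratio should return to 1 afterwards.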
Database Restructuring:
There are 3 types of Restructure.
1)Outline Restructure
2)Sparse Restructure
3)Dense Restructure/Full Restructure

Outline Restructure:
Renaming a member, adding an alias, or changing a member formula triggers an outline restructure.
Dense Restructure:
Moving, deleting, or adding a member in a dense dimension triggers a dense (full) restructure.
Sparse Restructure:
Moving, deleting, or adding a member in a sparse dimension triggers a sparse restructure.
Compression Techniques:
There are four types of compression:
1) Bitmap compression
2) RLE (Run-Length Encoding)
3) zlib
4) No compression
Index Cache: Minimum 1024 KB (1,048,576 bytes). Default: buffered I/O 1024 KB (1,048,576 bytes); direct I/O 10240 KB (10,485,760 bytes). Optimum: the combined size of all essn.ind files if possible; otherwise as large as possible. Do not set this cache larger than the total index size, as no further performance improvement results.
Data File Cache: Minimum (direct I/O) 10240 KB (10,485,760 bytes). Default (direct I/O) 32768 KB (33,554,432 bytes). Optimum: the combined size of all essn.pag files if possible; otherwise as large as possible. This cache is not used if Essbase is set to use buffered I/O.
Data Cache: Minimum 3072 KB (3,145,728 bytes). Default 3072 KB (3,145,728 bytes). Optimum: 0.125 * the data file cache size.
Calculator Cache: Minimum 4 bytes; maximum 200,000,000 bytes; default 200,000 bytes. The best size for the calculator cache depends on the number and density of the sparse dimensions in the outline and on the amount of memory the system has available. The command SET CACHE HIGH | LOW | OFF; is used in calc scripts to select a cache level.
We set the byte values for the calculator cache levels in essbase.cfg, and the Essbase server must be restarted for changes to essbase.cfg to take effect.
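As a sketch, the calculator cache levels might be defined in essbase.cfg and then selected in a calc script as follows (the byte values are illustrative, not recommendations):

```
; essbase.cfg (server-wide; restart the Essbase server after editing)
CALCCACHEHIGH    200000000
CALCCACHEDEFAULT 200000
CALCCACHELOW     20000
```

```
/* In a calculation script, pick one of the configured levels. */
SET CACHE HIGH;
CALC ALL;
```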
Intelligent Calculation:
When a block is created for the first time, Essbase treats it as a dirty block. When we run CALC ALL or CALC DIM, Essbase calculates all blocks and marks them clean. Subsequently, when a value in a block changes, that block is marked dirty again. When a calc script runs again, only the dirty blocks are recalculated; this is known as Intelligent Calculation.
By default, Intelligent Calculation is ON. To turn it off, use the command:
SET UPDATECALC OFF;
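A minimal calc script illustrating the switch; the dimension and member names in the FIX are assumptions from a generic outline, and the setting applies only within this script:

```
/* Force a full recalculation regardless of clean/dirty block status. */
SET UPDATECALC OFF;

FIX ("Actual")    /* hypothetical Scenario member */
    CALC DIM ("Accounts", "Period");
ENDFIX
```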
Data Load Optimization:
Data load optimization can be achieved by the following:
1) Load the data from the server rather than from a client file system.
2) Place the data field last, after the member combinations.
3) Use #MI instead of 0s; a stored 0 consumes 8 bytes of memory per cell.
4) Restrict values to a maximum of 3 decimal places (e.g., 1.234).
5) Sort the data in inverted-hourglass order (largest sparse to smallest sparse, followed by smallest dense to largest dense). Sorting the data this way before loading allows each block to be fully populated before it is stored on disk, eliminating the need to retrieve a previously created block from disk during the load.
6) Always pre-aggregate data before loading it into the database.
DLTHREADSWRITE (4 for 32-bit systems, 8 for 64-bit systems) enables parallel data loads by writing several records at a time. By default, Essbase loads data record by record, which makes large data loads consume a huge amount of time.
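Parallel loading is controlled per application/database in essbase.cfg. A sketch, with AppName/DbName and the thread counts as placeholders:

```
; essbase.cfg - enable pipelined, multi-threaded data-load stages
DLSINGLETHREADPERSTAGE AppName DbName FALSE
DLTHREADSPREPARE       AppName DbName 4
DLTHREADSWRITE         AppName DbName 8
```

DLSINGLETHREADPERSTAGE must be FALSE for the thread-count settings to take effect; restart the Essbase server after editing essbase.cfg.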
Uncommitted Access:
Under uncommitted access, Essbase locks blocks for write access until Essbase finishes updating the block. Under
committed access, Essbase holds locks until a transaction completes. With uncommitted access, blocks are released
more frequently than with committed access, so performance is generally better under uncommitted access. In addition, parallel calculation works only with uncommitted access.
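The isolation level is set per database; in MaxL it might look like this (Sample.Basic and the block count are placeholders):

```
/* Switch Sample.Basic to uncommitted access. */
alter database Sample.Basic disable committed_mode;

/* Optional: tune how often blocks are committed under uncommitted access. */
alter database Sample.Basic set implicit_commit after 3000 blocks;
```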

Fine Tuning and Optimization of Hyperion Essbase


Fine tuning and optimization of Hyperion Essbase can be accomplished by using Essbase member properties judiciously (stored member, dynamic calc, label only, two-pass calc, etc.).
Beyond the tips above, there are several aspects to be considered.

Try to avoid dynamic calc on sparse dimensions


Use "Label Only" as much as possible when data is not required to be stored
Use dynamic calc on dense dimensions (you have to for ratios and percentages anyway)
Try to avoid dynamic calculations that span a large number of data blocks
Try to avoid two-pass calc; it slows calculations significantly, and there is no "third-pass calc"
This is technical advice for Hyperion Essbase. However, never forget that the functional interpretation is the most important part: understand what the client's requirements are. You can never arrive at a powerful solution if you do not understand the functional needs and processes. The technical aspects are just a matter of experience and of understanding how Essbase works under the hood. We have achieved optimizations of up to 1000% this way.

Essbase Tuning For Planning Applications


Tuning Essbase for use with Planning improves performance and avoids errors arising from a lack of resources when running Business Rules and Calculation Scripts.
The following are basic guidelines for Essbase tuning, intended to cover the use of Essbase with Planning applications.

To improve performance, go through each of the suggested optimizations below and then test to see where the most progress is made. Always remember to back up your Essbase data before making any changes.
1. Block Size: The recommended block size is between 8 KB and 100 KB. Block size equals 8 bytes times the product of the stored member counts of all dense dimensions; for example, 12 stored Period members x 100 stored Account members = 1,200 cells x 8 bytes = 9.6 KB. The only way to alter block size is to change sparse dimensions to dense or vice versa. Changing a dense dimension to sparse will reduce block size and increase the overall number of blocks. See the Planning documentation for more information on setting dimensions to be dense or sparse.
2. Virtual Memory: The recommended virtual memory setting for Windows systems is 2 to 3 times the available RAM (1.5 times the RAM on older systems).
3. Caches:
Index Cache:
Minimum: 1 MB
Default: 10 MB
Recommended: the combined size of all ESS*.IND files if possible;
otherwise as large as possible given the available RAM.
Data File Cache:
Minimum: 8 MB
Default: 32 MB
Recommended: the combined size of all ESS*.PAG files if possible;
otherwise as large as possible given the available RAM, up to a maximum of 2 GB.
Important: The data file cache is not used if the database uses buffered rather than direct I/O
(check the "Storage" tab). Since all Planning databases are buffered, and most customers
use buffered I/O for native Essbase applications too, this cache setting is usually not relevant;
the data cache is the setting that matters in most cases.
Data Cache:
Minimum: 3 MB
Default: 3 MB
Recommended: 0.125 * the combined size of all ESS*.PAG files if possible; otherwise as large as
possible given the available RAM.
Note: A useful indication of the health of the caches is the "Hit Ratio" for each cache
on the Statistics tab in EAS. 1.0 is the best possible ratio; lower means lower performance.
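These sizes are set per database; in MaxL the statements might look like this (AppName.Plan1 and the sizes are placeholders, not recommendations):

```
/* Size the caches for a hypothetical Planning database AppName.Plan1. */
alter database AppName.Plan1 set index_cache_size 256m;
alter database AppName.Plan1 set data_cache_size 512m;

/* Only relevant when the database uses direct I/O. */
alter database AppName.Plan1 set data_file_cache_size 512m;
```

The database must be restarted for the new cache sizes to take effect.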
4. Disk Space: The recommended disk space is a minimum of double the combined total of all .IND
and .PAG files. You need double the space because a restructure requires twice the usual
storage space while it is in progress.

Few Optimization Techniques in Essbase

With the features available in Essbase you can load huge volumes of data into Essbase
cubes, run reports, and perform complex calculations. As you keep adding features to your
application, performance tends to degrade. Essbase therefore provides a range of
performance tuning techniques that keep the application well optimized.
The optimization can be done at many places such as
Outline Optimization
Data Load Optimization
Report Script Optimization
Calculation Script Optimization
Outline Optimization:
1) Arrange the dimensions in the "Hourglass Model".
The outline should start with the dense dimension with the most stored members, continue
down to the dense dimension with the fewest stored members, then start with the sparse
dimension with the fewest stored members and continue up to the sparse dimension with the
most stored members.
2) Use the member storage properties efficiently.
If a dimension merely hosts different types of data, such as scenarios, there is no point in
rolling lower-level values up to higher levels; in this situation, tag the dimension as
"Label Only" and assign the no-consolidation operator to the members under it.
If the results of some calculations do not need to be stored in the database, tag the
members concerned with the "Dynamic Calc" property.

Data Load Optimization:


1) In the data file, the fields should start with sparse dimension members, then dense
dimension members, and then the data field.
2) If the same field repeats in every record of the data file, drop that field from the
records and put the member in the header definition instead. This saves buffer memory
and speeds up the data load process.

Report Script Optimization:


1) In the report script, specify the sparse dimensions first and then the dense dimensions.
Sparse dimensions select the data blocks within which the data cells reside, so specifying
the dense dimensions first does not make sense. To speed up the process, specify the data
blocks first (sparse dimensions) and then the data cells (dense dimensions).
2) Put dimensions that do not need to be displayed in the report on the page axis.
3) Use the special commands to increase report performance:
SUPMISSINGROWS: suppress rows that contain only missing data.
SUPHEADING: suppress the headings.
SUPBRACKETS: suppress the brackets around negative values.
SUPEMPTYROWS: suppress the empty rows.
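A minimal report script applying these points; the dimension and member names are taken from the standard Sample.Basic demo outline, where Market and Product are sparse and Year is dense:

```
<PAGE (Scenario)
<COLUMN (Year)
<ROW (Market, Product)
{SUPMISSINGROWS}
{SUPEMPTYROWS}
{SUPBRACKETS}
<ICHILDREN Market
<ICHILDREN Product
Sales
!
```

The sparse row dimensions come before the dense column selection, and the suppression commands keep missing and empty rows out of the output.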

Calculation Script Optimization:


1) Use the SET commands to increase calculation performance:
SET MSG SUMMARY; : reduce the message level to a summary.
SET AGGMISSG ON; : aggregate #MISSING values, which speeds up consolidations.
SET CACHE HIGH; : increase the calculator cache size.
SET NOTICE LOW; : reduce the frequency of completion notices.
2) Perform calculations on only the required part of the database using the FIX command.
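Putting both points together, a calc script skeleton might look like this (the member names in the FIX are placeholders for your own Scenario and Year members):

```
SET MSG SUMMARY;
SET NOTICE LOW;
SET AGGMISSG ON;
SET CACHE HIGH;

/* Calculate only the slice of the database that actually changed. */
FIX ("Budget", "FY24")    /* hypothetical Scenario and Year members */
    CALC DIM ("Accounts");
    AGG ("Product", "Market");    /* aggregate the sparse dimensions */
ENDFIX
```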

Fine Tuning Hyperion Essbase Cache Settings


Fine Tuning Cache Settings

After using a database at your site with typical data, user access, and standard environment
(including server machines, network, etc.), check to see how Essbase performs. It is difficult
to predict optimal cache sizes without testing. You may need to adjust your cache settings.
Understanding Cache Settings

The sizes of the index cache and the data file cache (when direct I/O is used) are the most
critical Essbase cache settings. In general, the larger these caches, the less swapping
activity occurs; however, it does not always help performance to set cache sizes larger and
larger. Read this entire section to understand cache size considerations.
Index Cache
The advantages of a large index cache start to level off after a certain point. Whenever the
index cache size equals or exceeds the index size (including all index files on all volumes),
performance does not improve. However, to account for future growth of the index, you can
set the index cache size larger than the current index size. Because the index cache is filled
with index pages, for optimum use of storage, set the size of the index cache to be a
multiple of the size of the index page (8 KB). See Index Files for an example of estimating
index size.
Data File Cache
If possible, set the data file cache to equal the size of the stored data, which is the
combined size of all ess*.pag files. Otherwise, the data file cache should be as large as
possible. If you want to account for future growth of stored data, you can set the data file
cache size larger than the current size of stored data.
Note:
The data file cache is used only if you are using direct I/O.
Data Cache
The data cache should be about 0.125 times the data file cache. However, certain
calculations require a larger data cache size. If many concurrent users are accessing
different data blocks, this cache should be larger.
In general, if you have to choose between allocating memory to the data file cache or
allocating it to the data cache, choose the data file cache if you are using direct I/O. If you
are upgrading from a previous version of Essbase, see the Hyperion Essbase - System 9
Installation Guide.
Checking Cache Hit Ratios
Every cache has a hit ratio. The hit ratio indicates the percentage of time that a requested
piece of information is available in the cache. You can check the hit ratio of the index cache,
the data cache, and the data file cache to determine whether you need to increase the
cache size.
* To check cache hit ratios, see Checking Cache Hit Ratios in Essbase Administration Services Online Help.
* The cache hit ratio indicates the percentage of time that a requested piece of information is already in the cache. A higher hit ratio indicates that the data is in the cache more often. This improves performance because the requested data does not have to be retrieved from disk for the next process. A hit ratio of 1.0 indicates that every time data is requested, it is found in the cache. This is the maximum performance possible from a cache setting.
* The Hit Ratio on Index Cache setting indicates the Essbase kernel's success rate in locating index information in the index cache without having to retrieve another index page from disk.
* The Hit Ratio on Data File Cache setting indicates the Essbase kernel's success rate in locating data file pages in the data file cache without having to retrieve the data file from disk.
* The Hit Ratio on Data Cache setting indicates the Essbase success rate in locating data blocks in the data cache without having to retrieve the block from the data file cache.
* Check memory allocation. Add smaller amounts of memory at a time, if needed, because a smaller increment may have the same benefit as a large one. Large, incremental allocations of memory usually result in very little gain in the hit ratio.
Posted by Chinmay Joshi at 8:27 AM
Labels: Hyperion Essbase

3 comments:
amarnath said...
Good post. I appreciate you taking the time to post this information.
It's mostly theoretical (no offense).
I could add a few inputs to your post.
It's not always necessary to have a high cache all the time; an optimal setting is what
helps. So how do I set my cache optimally without running out of memory?
Tips for setting the index cache:
Your index might be 1 GB in size, but does that mean you need to set your index cache
to 1 GB?

I say "no". It all depends on your hit ratio. If you can get a hit ratio of around 99% with
150 MB of index cache, that is good enough.
Tips for setting the data cache:
Typically my database is around 350 GB. So how large should my data cache be?
Two ways:
1) 0.125 times the combined size of your page files.
2) With the above setting, test and check your hit ratio. If you are able to achieve around
25%-35%, then your data cache is good enough to perform calculations in very little
time.
Now the bigger question: is my outline order important?
Of course it is. Your optimization and tuning should start from here.
I guess you know the order of dimensions.

We have an opportunity to configure our Essbase server from the ground up. Essbase stores data in a combination
of .IND and .PAG files. In my case, a typical database has 1 GB of IND and 2 GB of PAG (the largest database is 2 GB /
6 GB). There are five Essbase databases.
The goal of performance tuning is improving "batch calculation". During a batch calc, the application cycles
through the files, reading data in, calculating values, and then writing the values back to disk. It's very I/O
intensive. For this reason, we usually try to put the IND and PAG files on different controller channels to split up
the I/O. A typical batch calc on a large database takes 30 minutes.
Our new (recycled) Essbase server has six HDDs. They are currently configured as: Two drives in a RAID1 set for
the OS (channel 0), and Four drives in a RAID5 set with one of the drives as a hot spare for both data drives
(channel 1). This doesn't allow the IND and PAG files to be split to the different RAID channels.
Would it help if we configured as follows?

Two disks in RAID1 on Channel 0, supporting two logical disks, C:\ (OS) and E:\ (IND Files)
Four disks in RAID10 on Channel 1, supporting one logical disk, F:\ (PAG Files)

Also, does partition alignment really help, and how do you do this in the context of a Dell server with a PERC 6/i
with two partitions on one physical set? All the info I found on this addresses setting the offset for the first
partition.
The server config is:

Server Model: PE 2950


OS: Windows 2003 Standard Edition SP2 64bit.
Hard Drive Model: FUJITSU MBA3147RC 147GB 15,000RPM SAS Hard Drive (Qty: 6)
Storage Controller: PERC 6/i

Here are some quick tips to check when tuning your BSO application:

1. Dimension Order

Use the hourglass model:

Largest dense to smallest dense, smallest sparse to largest sparse

Attribute dimensions are always last.

2. Dense/Sparse Settings

Typically, your accounts and period dimensions are classified as dense.

3. Member Properties

Tag all upper-level members of your accounts and period dimensions as Dynamic Calc. This
reduces the size of the database, which improves performance, and also reduces calculation
times.

4. Block Size

The recommended block size is 8-100KB.

5. Fragmentation

Be sure to check your Average Clustering Ratio (Right click database, Edit, Properties,
Statistics). An Average Cluster Ratio of 1 is ideal in Essbase. You can achieve this number
through a database restructure or by clearing your database and reloading your export file.

Also be sure the relevant configuration settings (essbase.cfg) are in place in the server environment.

6. Restructure

Right click on the database and click restructure. There are three different types of restructures:

Outline

Dense

Sparse

7. Cache

Your index cache should be equal to the combined size of the index (.IND) files. Setting it any
higher yields no further improvement in performance.

Use the set command SET CACHE to improve calculation performance. This command
specifies the size of the calculator cache.

8. Calc commands

It is recommended to use IF on dense members and FIX on sparse members.

Use SET Commands as necessary to improve performance.

When designing your calc script, keep in mind to minimize the number of passes on the
database.

9. Avoid loading zeros to databases. It is best practice to replace zeros with #MISSING, since a
#MISSING cell uses no storage, whereas a zero consumes 8 bytes like any other value.
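The dense/sparse rule of thumb in point 8 can be sketched as follows; the member names are taken from the Sample.Basic demo outline, where Market and Product are sparse and Measures and Scenario are dense:

```
/* FIX on sparse members: Essbase touches only the selected blocks. */
FIX ("East", "Cola")
    /* IF on dense members: evaluated cell by cell inside each block,
       so no extra block I/O is incurred. */
    "Budget" (
        IF (@ISMBR ("Sales"))
            "Budget" = "Actual" * 1.1;
        ENDIF;
    );
ENDFIX
```

FIXing on a dense member would not reduce the number of blocks read, which is why the dense condition belongs in the IF instead.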

General Essbase BSO cube calc performance tuning.


Solution: Gather two baseline calc times:
1. The time required to cycle through all level-0 blocks.
2. The time required to run a CALC ALL after performing a level-0 export/import. This
provides the calc time for cycling through all upper-level blocks.
Once baselines have been established, calc script performance can always
be measured against them. Of course, fragmentation plays a
role in calc performance as well; as a best practice, daily defragmentation is
recommended for frequently updated cubes.
Generally, an allocation calc script will cycle through upper-level
blocks more than once. In that case, calc time may well be two or more times
the baseline.
For calcs that require iterations of aggregations and allocations, an ASO cube
can be used to process the aggregation portion. This advanced methodology
involves setting up data feeds between the ASO and BSO
cubes for each stage of the iterations.

Essbase Calculation Performance Tuning


1. After parallel calculation is enabled, Essbase by default uses the last sparse dimension in the outline to
identify tasks that can be performed concurrently. But the distribution of data may cause one or more
tasks to be empty; that is, there are no blocks to be calculated in the part of the database identified by a
task. This situation can lead to uneven load balancing, reducing parallel calculation effectiveness.

2. To resolve this situation, you can enable Essbase to use additional sparse dimensions in the
identification of tasks for parallel calculation. For example, if you have a FIX statement on a member of
the last sparse dimension, you can include the next-to-last sparse dimension from the outline as well.
Because each unique member combination of these two dimensions is identified as a potential task, more
and smaller tasks are created, increasing the opportunities for parallel processing and improving load
balancing.
3. Add or modify CALCTASKDIMS in the essbase.cfg file on the server, or use the calculation script
command SET CALCTASKDIMS at the top of the script.

Sample Code: SET CALCTASKDIMS 2

This enables the last 2 sparse dimensions to be included in task identification, which may significantly
improve calculation performance. (416-3025810)
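A calc script header combining the two settings might look like this; the thread count, task-dimension count, and member names are illustrative:

```
/* Use up to 4 threads, deriving tasks from the last 2 sparse dimensions. */
SET CALCPARALLEL 4;
SET CALCTASKDIMS 2;

FIX ("FY24")    /* hypothetical Year member */
    AGG ("Product", "Market");
ENDFIX
```

The same settings can be made server-wide with CALCPARALLEL and CALCTASKDIMS entries in essbase.cfg; the SET commands override them for a single script.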

CALCPARALLEL in Essbase - Things worth knowing...


I had a hard time writing the introduction to this post; I thought of writing "Parallelism is one of the most
important...", "The Essbase BSO calculation engine can use multiple threads", etc.

But we all know Essbase can use multiple processors for calculating data, and that this is enabled by the
SET CALCPARALLEL command. Let's see how to use it wisely.

The Essbase admin guide suggests using parallel calculation to improve performance. It does, but this is
not true in all cases.

If your formulas have no backward dependencies and no dynamic calcs, Essbase will divide a
calculation into tasks so that it can run them on different threads. The CALCTASKDIMS setting
specifies how many of the sparse dimensions in the outline are used to identify potential tasks that can be
run in parallel. If CALCTASKDIMS is set to 3, Essbase takes the last 3 sparse dimensions into consideration
and determines the number of parallel tasks, which is (approximately) equal to the product of the stored
members to be calculated in those 3 dimensions (only FIXed members count). Essbase then divides these
tasks evenly across the number of threads given by the CALCPARALLEL setting.

** Essbase v11.1.2.2 is designed to determine the number of CALCTASKDIMS itself, so we don't need to
worry about this setting on that version.

As usual, we can find the best CALCPARALLEL setting by trial and error. Let's see when to use
CALCPARALLEL and when not to.

For example, I have a BSO application with 13 dimensions (3 dense + 12 sparse). I ran a calculation script
with CALCPARALLEL 6. This is what I found in the logs:

Maximum Number of Lock Blocks: [100] Blocks


Completion Notice Messages: [Disabled]
Calculations On Updated Blocks Only: [Disabled]
Clear Update Status After Full Calculations: [Enabled]
Calculator Cache: [Disabled].

OK/INFO - 1012678 - Calculating in parallel with [6] threads.
OK/INFO - 1012679 - Calculation task schedule [3016,71,1].
OK/INFO - 1012680 - Parallelizing using [2] task dimensions.
OK/INFO - 1012681 - Empty tasks [2797,71,1].
OK/INFO - 1012672 - Calculator Information Message:

From the above logs, Essbase automatically decided to use 2 dimensions to identify parallel tasks (it
chose 2 because I am on 11.1.2.2; earlier versions use 1 task dimension by default). Because of the
sparsity in my cube, Essbase found 2797 empty tasks out of the 3016 identified tasks. 92% of my tasks
are empty in this calculation, which is bad. So in this case parallelism adds nothing to performance:
even though it reserved 6 processors, it is not even using 10% of them.

One interesting observation I made is that the above calculation ran faster in serial mode than in parallel
mode. Along with the processors, Essbase also uses other server resources to run a calc in parallel
mode. I used only 6 processors (out of 32 in the server) for this calc, but Essbase had a hard time
managing 6 threads for a calculation where 92% of the tasks are empty.
So the bottom line is: DO NOT USE PARALLEL MODE JUST BECAUSE YOU HAVE RESOURCES
AVAILABLE. USE PARALLEL CALC BASED ON NON-EMPTY TASKS.

So, when should you use CALCPARALLEL?

I recommend parallel calculation when non-empty tasks make up a healthy share of the identified
tasks (at least 40% or so). You can play with the order of dimensions in the outline and the
CALCTASKDIMS setting to reduce the number of parallel tasks and empty tasks. Decide the number of
threads based on the resources available and on whatever else is running on the server; it is better to
start with 2.

We recently upgraded from 11.1.2.0 to 11.1.2.2. A guy from the Oracle development team told me that they
enhanced parallelism in the new version. Instead of improving performance, though, some calcs degraded
after the upgrade. We have since tuned the parallelism in those calcs, and they now run better than before.
11.1.2.2 does a better job of analyzing parallel tasks than the previous version, so tune your
CALCPARALLEL settings if calcs run longer in 11.1.2.2 than in previous versions.

Essbase - Designing an Outline to Optimize Query or Calculation Performance

About
The relative locations of dimensions in an outline can affect performance times for:

either calculations

or retrieval.
Indeed, although they contain the same dimensions, the outline examples below :

for Optimized Query Times

and for Optimized Calculation Times


are different. See Meeting the Needs of Both Calculation and Retrieval at the end of the
article.
The structure defined in the Essbase - Outline (Database Outline) determines how data
is stored in the database.

Articles Related

Essbase - The OLAP Design Cycle (to create an optimized database)

Essbase - Outline (Database Outline)

Essbase - Outline Creation and Management using Outline Editor

Essbase - ASO/BSO Storage

Rules of thumb
To optimize attribute calculation and retrieval performance, consider the following design
tips :

Position attribute dimensions at the end of the outline.

Locate sparse dimensions after dense dimensions in the outline.

Place the most-queried dimensions at the beginning of the sparse dimensions and
attribute dimensions at the end of the outline. In most situations, base dimensions are
queried most.
To optimize attribute calculation and retrieval performance, consider the following :

The calculation order for Essbase - Attribute calculations is the same as for Essbase - Dynamic
Calculations. For an outline, see Calculation Order for Dynamic Calculation.

Because Essbase calculates attribute data dynamically at retrieval time, attribute
calculations do not affect the performance of the overall (batch) database calculation.

Tagging base-dimension members as Dynamic Calc may increase retrieval time.

When a query includes the Sum member and an attribute-dimension member whose
associated base member is tagged as two-pass, retrieval time may be slow.


To maximize attribute retrieval performance, use any of the following techniques:

Ensure that Essbase - Attribute dimensions are the only sparse Essbase - Dynamic
Calculations dimensions in the outline.

Drill down to the lowest level of base dimensions before retrieving data. For example,
in Essbase - Spreadsheet Add-in, turn on the Navigate Without Data feature, drill down
to the lowest level of the base dimensions included in the report, and then retrieve data.

When the members of a base dimension are associated with several attribute
dimensions, consider grouping the members of the base dimension according to their
attributes. For example, in the Sample.Basic database, you can group all 8-ounce
products.

Optimizing
Use the following topics to understand performance optimization basics.

Optimizing Query - Retrieval Performance


To optimize query performance, use the following guidelines when you design an outline:

If the outline contains Essbase - Attribute dimensions, ensure that the attribute
dimensions are the only sparse Dynamic Calc dimensions in the outline.

In the outline, place the more-queried sparse dimensions before the less-queried sparse
dimensions.
The outline below is designed for optimum query performance:

Because the outline contains attribute dimensions, the storage property for standard
dimensions and all standard dimension members is set as store data.

As the most-queried sparse dimension, the Product dimension is the first of the sparse
dimensions. Base dimensions are typically queried more than other dimensions.

Optimizing Calculation Performance


To optimize Essbase - Calculations performance, order the sparse dimensions in the outline
by their number of members, starting with the dimension that contains the fewest.
See Designing for Calculation Performance.
The outline in the Figure below is designed for optimum calculation performance:

The smallest standard dimension that is sparse, Market, is the first of the sparse
dimensions in the outline.

The largest standard dimension that is sparse, Product, is immediately above the first
attribute dimension. If the outline did not contain attribute dimensions, the Product
dimension would be at the end of the outline.

Meeting the Needs of Both Calculation and Retrieval


To determine the best outline sequence for a situation, prioritize the data retrieval
requirements of the users against the time needed to run calculations on the database. How
often do you expect to update and recalculate the database? What is the nature of user
queries? What is the expected volume of user queries?
A possible workaround is initially to position the dimensions in the outline to optimize
calculation. After you run the calculations, you can manually resequence the dimensions to
optimize retrieval. When you save the outline after you reposition its dimensions, choose to
restructure the database by index only. Before you run calculations again, resequence the
dimensions in the outline to optimize calculation.
