
http://www.sapnwnewbie.com/search/label/oracle
We had a problem with a badly designed database file layout.
The SAP application, Oracle data files and the redo logs were all stored on a single disk.
A higher-than-normal load on the SAP system caused the entire system to crawl. A quick check of a few transactions, including ST06, showed that disk response time was very high: more than 500 ms.
Before redistributing the data files and redo logs to a new disk, someone wanted to try a DB reorganization first. Reorganizing the database reduces data fragmentation, so a read touches fewer blocks, which in turn reduces the load on the disk.
After the DB was reorganized, the problem was gone!
Labels: basis, oracle, performance, sap

Oracle parameters and their descriptions


Here are some important Oracle parameters and information in the context of use with SAP
applications.
BACKGROUND_DUMP_DEST

Path for alert log and background trace files

COMPATIBLE

Defines the Oracle version whose features can be used to the greatest extent

As a rule, it must not be reset to an earlier release (see SAP Note 598470).

A value with three parts (such as 10.2.0) rather than five parts (such as 10.2.0.2.0) is
recommended to avoid changing the parameter as part of a patch set installation.

If an ORA-00201 error occurs when you try to convert the value with five parts
10.2.0.2.0 to 10.2.0, you can leave the value 10.2.0.2.0 (independent of the patch set
used).

CONTROL_FILES

Path and name of the control files that are used

CONTROL_FILE_RECORD_KEEP_TIME

Defines how many days historic data is retained in the control files

Historic data is required by RMAN, for example.

May cause control files to increase in size (see Note 904490)

CORE_DUMP_DEST

Path under which core files are stored

DB_BLOCK_SIZE

Size of an Oracle block

Can be set to a value higher than 8K in well-founded individual cases after it has been
approved by SAP Support (see Note 105047)

DB_CACHE_SIZE

Size of the Oracle data buffer (in bytes)

Optimal size depends on the available memory (see Notes 789011 and 617416)

DB_FILES

Maximum number of Oracle data files

DB_NAME

Name of the database

DB_WRITER_PROCESSES

Number of DBWR processes

EVENT

Activation of internal control mechanisms and functions

To set events in SPFILE, refer also to Note 596423.

If many events are set, data sources such as V$PARAMETER, DBA_HIST_PARAMETER, or "SHOW PARAMETER" may supply an incomplete value. This is only a display problem. The values that are included in V$PARAMETER2 are the relevant values.

FILESYSTEMIO_OPTIONS

Activation of file system functions (see Note 999524 and Note 793113)

If you previously used a large file system cache (>= 2 * Oracle buffer pool), performance may degrade after direct I/O is activated by setting FILESYSTEMIO_OPTIONS to SETALL. Therefore, it is important to enlarge the Oracle buffer pool to replace the file system cache that is no longer available.

HPUX_SCHED_NOAGE

Optimized scheduling policy for Oracle processes on HP-UX.

The privileges RTSCHED and RTPRIO must be assigned to the dba group to enable you
to use the functions (see Note 1285599).

LOG_ARCHIVE_DEST

Historic variant of LOG_ARCHIVE_DEST_1, which is not compatible with features such as the Flash Recovery Area and which should therefore no longer be used.

LOG_ARCHIVE_DEST_1

Path/prefix for offline redo logs

The syntax differs from LOG_ARCHIVE_DEST by an additional "LOCATION=" string; if this difference is ignored, ORA-16024 occurs for LOG_ARCHIVE_DEST_1.

LOG_ARCHIVE_FORMAT

Name format of the offline redo logs

To avoid the problems described in Note 132551, it must be set explicitly on Windows at least.

LOG_BUFFER

Minimum size of the Oracle redo buffer (in bytes)

Oracle internally determines the buffer's actual size, so it is normal for "SHOW
PARAMETER LOG_BUFFER" or a SELECT on V$PARAMETER to return values
between 1MB and 16MB.

LOG_CHECKPOINTS_TO_ALERT

Defines whether checkpoints are to be logged in the alert log

MAX_DUMP_FILE_SIZE

Maximum size of Oracle trace files (in operating system blocks)

A limitation is useful to avoid file system overflows and to reduce the duration of the
dump generation.

You can increase it temporarily if required.

OPEN_CURSORS

Maximum number of cursors opened in parallel by one session

OPTIMIZER_DYNAMIC_SAMPLING

Determines how much data is to be read to determine the access plan.

Level 2 (the default setting for Oracle 10g): Dynamic sampling is performed only if
tables do not have any statistics.

Level 6: Like level 2, but additionally includes dynamic sampling of 128 table blocks if literals are used and there are no bind variables.

OPTIMIZER_INDEX_COST_ADJ

Adjusts the calculated index costs; when there is a value of 20 (percent), index costs are
reduced by a factor of 5, for example.

A value lower than 100 is advisable so that index accesses are preferred instead of full
table scans.

PARALLEL_EXECUTION_MESSAGE_SIZE

Defines size of the memory area for parallel query messages (in bytes)

PARALLEL_MAX_SERVERS

Defines the maximum number of parallel execution processes (see Note 651060)

Based on the number of CPU Cores of the database server

The number of CPU Cores generally corresponds to the default value for the Oracle
parameter CPU_COUNT. If you are unsure in individual cases, you can use the value of
the parameter CPU_COUNT (for example, in transaction DB26).

If the database shares the server with other software (for example, SAP central instance,
other Oracle instances), you can also select a lower value (for example, 8 CPU Cores, the
SAP central instance and the Oracle database should share resources 50:50 ->
PARALLEL_MAX_SERVERS = 8 * 0.5 * 10 = 40).
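The arithmetic in the example above can be sketched as a small shell snippet (a sketch; the per-core factor of 10 and the 50% share are taken from the example itself, not from any Oracle default):

```shell
# Sizing PARALLEL_MAX_SERVERS when the database shares the host
# with an SAP central instance, as in the example above.
CPU_CORES=8          # number of CPU cores (the default of CPU_COUNT)
DB_SHARE_PCT=50      # share of the server given to the database (50:50 split)
FACTOR=10            # per-core factor used in the example

# integer arithmetic: 8 * 50 / 100 * 10 = 40
PARALLEL_MAX_SERVERS=$(( CPU_CORES * DB_SHARE_PCT / 100 * FACTOR ))
echo "PARALLEL_MAX_SERVERS = ${PARALLEL_MAX_SERVERS}"
```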

PARALLEL_THREADS_PER_CPU

Defines the number of parallel query processes that can be executed in parallel for each
CPU

Influences the DEFAULT level of parallel processing during a parallel execution (see
Note 651060).

PGA_AGGREGATE_TARGET

Checks the available PGA memory (see Notes 789011 and 619876)

PROCESSES

Defines the maximum number of Oracle processes that exist in parallel

The component relating to ABAP work processes is only relevant in systems with ABAP
stacks. The component relating to J2EE server processes is only relevant in systems with
Java stacks.

<max-connections> indicates the maximum number of connections (also called pool size)
of the J2EE system DataSource (sysDS.maximumConnections). You can set the value of
this parameter using the VisualAdmin tool or other J2EE administration tools.

QUERY_REWRITE_ENABLED

Defines whether query transformations are also factored in when the access path is
determined

RECYCLEBIN

Enables access later on to objects that have already been dropped

Not supported by SAP (see Note 105047)

REMOTE_OS_AUTHENT

Defines whether TCP database access via OPS$ users is allowed (see Note 400241)

REPLICATION_DEPENDENCY_TRACKING

Defines whether the system has to create replication information when the database is
accessed

Performance improves if it is deactivated

SESSIONS

Defines the maximum number of Oracle sessions that exist in parallel - must be
configured larger than PROCESSES, since single processes can serve several sessions
(for example, in the case of multiple database connections from work processes)

SHARED_POOL_SIZE

Defines the size of the Oracle shared pool (see Notes 690241 and 789011)

STAR_TRANSFORMATION_ENABLED

Specifies to what extent STAR transformations can be used

UNDO_MANAGEMENT

Defines whether automatic undo management is used (see Note 600141)

UNDO_TABLESPACE

Defines the undo tablespace to be used (see Note 600141)

USER_DUMP_DEST

Path for trace files of Oracle shadow processes

_B_TREE_BITMAP_PLANS

Defines whether data of a B*TREE index can be converted into a bitmap representation during a database access.

_BLOOM_FILTER_ENABLED

Determines whether bloom filters may be used during joins.

_DB_BLOCK_NUMA

Control use of NUMA optimizations.

_ENABLE_NUMA_OPTIMIZATION

Control use of NUMA optimizations.

_FIX_CONTROL

Activates or deactivates individual CBO fixes

To set _FIX_CONTROL, see Note 1455168.

If many _FIX_CONTROL values are set, data sources such as V$PARAMETER,


DBA_HIST_PARAMETER, or "SHOW PARAMETER" may supply an incomplete
value. This is only a display problem. The values that are included in
V$SYSTEM_FIX_CONTROL are the relevant values.

Note 1454675 describes a problem whereby the _FIX_CONTROL values do not work
despite being displayed correctly in V$PARAMETER.

_CURSOR_FEATURES_ENABLED

With a value of 10 and in connection with fix 6795880, sporadic hangs during parsing are prevented.

_FIRST_SPARE_PARAMETER

This is a generic parameter that can be used for different purposes in certain cases.

With Oracle 10.2.0.4 and fix 6904068, you use this parameter to introduce a break of
1/100 second between two "cursor: pin S" mutex requests instead of continually
executing requests. This may help to avoid critical CPU bottlenecks.

_INDEX_JOIN_ENABLED

Controls whether index joins can be used or not; within an index join, two indices of a
table are directly linked together.

_IN_MEMORY_UNDO

Controls whether the In Memory Undo feature (IMU) is used or not

_OPTIM_PEEK_USER_BINDS

Defines whether Oracle takes the contents of the bind variables into account during
parsing

May cause various problems (Notes 755342, 723879) if not set to FALSE.

_OPTIMIZER_BETTER_INLIST_COSTING

Controls the cost calculation for IN lists

If the parameter is set to OFF, long IN lists are evaluated very favorably.

The CBO performs a reasonable cost calculation for IN lists using the value ALL.

Therefore, you should always use the default value ALL (this means that you should not
set the parameter). If the CBO takes incorrect decisions in individual cases, these
incorrect decisions must be analyzed and corrected.

_OPTIMIZER_MJC_ENABLED

Controls whether Cartesian merge joins are used.

_PUSH_JOIN_UNION_VIEW

Controls whether join predicates may be used in a UNION ALL construct beyond the
view boundaries.

_SORT_ELIMINATION_COST_RATIO

Controls rule-based CBO decision in connection with the FIRST_ROWS hint and
ORDER BY (see Note 176754).

_TABLE_LOOKUP_PREFETCH_SIZE

Controls whether table prefetching is used (a value of zero means no table prefetching).

Labels: basis, oracle, sap, sap netweaver

How to check index storage quality in SAP


To check the index storage quality call the transaction DB02 > Detailed Analysis > (enter the
index name) > Detailed Analysis > Analyze Index > Storage Quality. If the quality is less than
50%, the index needs a reorg.
Labels: basis, oracle, sap, sap netweaver

Pulling information from an Oracle table and assigning it to a UNIX variable


Let us say you have a table (myemployeetable) containing employee names, IDs and so on, and you are writing a script with a variable meant to hold the total number of employees. You can get the total count of employees by running a SQL file and assigning its output to this variable.
This is, however, not that straightforward. A SQL command outputs a lot of extra information that has to be excluded before the result value is assigned to the variable.
First, you don't want to print column names, so turn the heading off.
set heading off
Next, in the output, you do not want messages like "x records selected". To do this, turn feedback
off.
set feedback off
Suppress all headers, page breaks, titles etc by setting the page size to 0.
set pagesize 0
If you are using a SQL variable, you have to suppress how a variable is substituted before being
sent to execution. This is done by turning verify off.
set verify off
By default, sqlplus echoes the commands it executes from a script; this can be suppressed by turning echo off.
set echo off;
So your sql file, say getemployeecount.sql, will look like this:
set heading off
set feedback off
set pagesize 0
set verify off
set echo off;
select count(*) from myemployeetable;

exit
When you execute this from sql prompt, you will get the number of employees returned.
When a script has to execute the sql file, it will first enter sqlplus, which returns banner and the
initial SQL prompt to the user. we don't need that either. Therefore, sqlplus is called "silently"
using -S option.
Now we assign the value of the total number of employees to a UNIX variable using command
substitution.
SUMEMP=`sqlplus -S user/pass @getemployeecount.sql`
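Because sqlplus is often not available where you test a script, the whitespace handling around the command substitution can be sketched offline. In this sketch, RAW stands in for what `sqlplus -S user/pass @getemployeecount.sql` would print (an assumption for illustration):

```shell
#!/bin/sh
# RAW simulates the raw sqlplus output: the count, possibly padded
# with spaces and a trailing newline.
RAW='   3000   '

# Unquoted expansion via echo collapses the surrounding whitespace,
# leaving a clean number in the variable.
SUMEMP=`echo $RAW`

echo "Total employees: $SUMEMP"
```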
Labels: basis, oracle, unix

Changing redo log files location or size online


You may want to change the redo log file size or its location online (for example, if you are facing space issues on the existing disk drive).
Before changing the redo logs, take a full backup.
Drop the first redo log using the SQL command
alter database drop logfile '<path to the redo log file>';
Now create it with your preferred location or size using the command
alter database add logfile '<preferred path and name of the log file>' size <preferred size>M;
Repeat this for the rest of the redo logs. You will encounter an ORA-01515 error when you try to drop a redo log file that is currently in use. You can wait, or skip that redo log file for now and come back to it later. You may also force a log switch to free up the current log file using the command
alter system switch logfile;
Labels: oracle

Finding Corrupt Indexes on Oracle Database


You can find all the indexes that are corrupted by matching the v$database_block_corruption view against dba_extents.
Let's begin with formatting the output:
set pagesize 50 linesize 170
col segment_name format a30
col partition_name format a30

Now the following SQL will list out the corrupt indexes:
select distinct file#,segment_name,segment_type, TABLESPACE_NAME,
partition_name from dba_extents a,v$database_block_corruption
b where a.file_id=b.file# and a.BLOCK_ID <= b.BLOCK#
and a.BLOCK_ID + a.BLOCKS >= b.BLOCK#;
Labels: oracle

SAP Oracle Database Refresh - Control file creation


An SAP system refresh requires the database to be restored and recovered on the target system.

One of the most important steps in Oracle DB restore/recovery is the control file creation on the
target system as the file locations and SID of the database change. Here are the steps to create
control file:
Generate the control file trace on the source system:
1. Ensure that the source DB is in open or mounted mode by running the following command
select open_mode from v$database;
The output should be MOUNTED or READ WRITE
2. Write the control file to trace by running the following command
alter database backup controlfile to trace;
3. Find out where the trace is written by running the following
show parameter dump;
The location is most likely /oracle/<SID>/saptrace/diag/rdbms/<sid>/<SID>/trace for an SAP Oracle database. Check the latest trace file.
4. Open the trace file and copy the section resembling the following into a new file, for example createcontrolfile.sql: remove all lines above STARTUP NOMOUNT, change REUSE to SET (because the SID is changing), and replace the production SID with the QA SID.

CREATE CONTROLFILE SET DATABASE "<SID>" RESETLOGS ARCHIVELOG


MAXLOGFILES 255
MAXLOGMEMBERS 3
MAXDATAFILES 1000
MAXINSTANCES 50
MAXLOGHISTORY 1168
LOGFILE
GROUP 1 (
'/oracle/<SID>/origlogA/log_g11m1.dbf',
'/oracle/<SID>/mirrlogA/log_g11m2.dbf'
) SIZE 200M BLOCKSIZE 512,
GROUP 2 (
'/oracle/<SID>/origlogB/log_g12m1.dbf',
'/oracle/<SID>/mirrlogB/log_g12m2.dbf'
) SIZE 200M BLOCKSIZE 512,
GROUP 3 (
'/oracle/<SID>/origlogA/log_g13m1.dbf',
'/oracle/<SID>/mirrlogA/log_g13m2.dbf'
) SIZE 200M BLOCKSIZE 512,
GROUP 4 (
'/oracle/<SID>/origlogB/log_g14m1.dbf',
'/oracle/<SID>/mirrlogB/log_g14m2.dbf'
) SIZE 200M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
'/oracle/<SID>/sapdata1/system_1/system.data1',
'/oracle/<SID>/sapdata1/sysaux_1/sysaux.data1',
'/oracle/<SID>/sapdata3/undo_1/undo.data1',
'/oracle/<SID>/sapdata1/sr3_1/sr3.data1',
'/oracle/<SID>/sapdata1/sr3_2/sr3.data2',
'/oracle/<SID>/sapdata1/sr3_3/sr3.data3',
'/oracle/<SID>/sapdata1/sr3_4/sr3.data4',
'/oracle/<SID>/sapdata1/sr3_5/sr3.data5',
'/oracle/<SID>/sapdata2/sr3_6/sr3.data6',
'/oracle/<SID>/sapdata2/sr3_7/sr3.data7',
'/oracle/<SID>/sapdata2/sr3_8/sr3.data8',
'/oracle/<SID>/sapdata2/sr3_9/sr3.data9',
'/oracle/<SID>/sapdata2/sr3_10/sr3.data10',
'/oracle/<SID>/sapdata3/sr3_11/sr3.data11',
'/oracle/<SID>/sapdata3/sr3_12/sr3.data12',
'/oracle/<SID>/sapdata3/sr3_13/sr3.data13',
'/oracle/<SID>/sapdata3/sr3_14/sr3.data14',
'/oracle/<SID>/sapdata3/sr3_15/sr3.data15',
'/oracle/<SID>/sapdata4/sr3_16/sr3.data16',
'/oracle/<SID>/sapdata4/sr3_17/sr3.data17',
'/oracle/<SID>/sapdata4/sr3_18/sr3.data18',
'/oracle/<SID>/sapdata4/sr3_19/sr3.data19',
'/oracle/<SID>/sapdata4/sr3_20/sr3.data20',
'/oracle/<SID>/sapdata1/sr3700_1/sr3700.data1',
'/oracle/<SID>/sapdata1/sr3700_2/sr3700.data2',
'/oracle/<SID>/sapdata1/sr3700_3/sr3700.data3',
'/oracle/<SID>/sapdata1/sr3700_4/sr3700.data4',
'/oracle/<SID>/sapdata2/sr3700_5/sr3700.data5',
'/oracle/<SID>/sapdata2/sr3700_6/sr3700.data6',
'/oracle/<SID>/sapdata2/sr3700_7/sr3700.data7',
'/oracle/<SID>/sapdata2/sr3700_8/sr3700.data8',
'/oracle/<SID>/sapdata3/sr3700_9/sr3700.data9',
'/oracle/<SID>/sapdata3/sr3700_10/sr3700.data10',
'/oracle/<SID>/sapdata3/sr3700_11/sr3700.data11',
'/oracle/<SID>/sapdata3/sr3700_12/sr3700.data12',
'/oracle/<SID>/sapdata4/sr3700_13/sr3700.data13',
'/oracle/<SID>/sapdata4/sr3700_14/sr3700.data14',
'/oracle/<SID>/sapdata4/sr3700_15/sr3700.data15',
'/oracle/<SID>/sapdata4/sr3700_16/sr3700.data16',
'/oracle/<SID>/sapdata1/sr3usr_1/sr3usr.data1'
CHARACTER SET UTF8
;

5. Adjust the SID and datafile locations as per the target system.
Run control file on the target system after datafiles are restored:
1. Start the database on NOMOUNT mode
startup nomount;
2. Run the createcontrolfile.sql file created in step 4 from the source system's trace
@createcontrolfile.sql
3. Check the status of the database to ensure that it is in MOUNTED state by running the SQL command select status from v$instance;
4. Recover the database using one of the following options (until a specific time, or until all redo logs in oraarch are applied)
recover database using backup controlfile until time '2013-08-17:11:56:00';
recover database until cancel using backup controlfile;
You will get the following prompt:
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Choose AUTO
Once the redo logs are applied, you will get the prompt again, this time choose CANCEL
5. Open the database
alter database open resetlogs;
6. Create the temporary tablespace. The trace file created in the source system carries the
command to recreate temporary tablespace. The command will resemble the following syntax:
ALTER TABLESPACE PSAPTEMP ADD TEMPFILE
'/oracle/<SID>/sapdata3/temp_1/temp.data1' SIZE 4000M REUSE
AUTOEXTEND ON NEXT 20971520 MAXSIZE 10000M;
Labels: basis basics, Basis interviews, oracle, r3, sap, sap netweaver

Oracle 11g Extended Statistics for SAP Tables


Extended Statistics is an attempt to fix one of the flaws in the CBO: it assumes that the values of different columns are not correlated.
Let us take an example of two columns in a table: one column contains a department code and the other an employee name. Assume there are 10 departments and 3000 employees. In real life, every employee does not belong to every department, but the CBO assumes exactly that, and hence estimates that 3000 * 10 = 30000 combinations of employee name and department code exist. In reality, the number lies between 3000 and 30000 (assuming each employee belongs to at least one department and can clock time for multiple departments).
The CBO is not intelligent enough to know these relations, and this assumption can have a serious performance impact on join operations.
In order to calculate better statistics, we can use extended statistics from Oracle 11g onwards.
SAP has provided these statistics for AUSP, BKPF, MSEG and HRP1001 tables as part of SAP
note 1020260.
You can run the following commands to define the extended statistics on the above listed tables
on SAP ERP application.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'AUSP', '(MANDT, KLART, ATINN)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'BKPF', '(MANDT, BUKRS, BSTAT)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'HRP1001', '(RELAT, SCLAS, OTYPE, PLVAR)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG', '(MANDT, MATNR, WERKS, LGORT)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG', '(MANDT, MBLNR, MJAHR)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG', '(MANDT, WERKS, BWART)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG', '(MANDT, WERKS, BWART, LGORT)') FROM DUAL;
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SAPR3', 'MSEG', '(MANDT, WERKS, LGORT)') FROM DUAL;

If you are aware of similar relationships, you can use the following syntax.
To define extended statistics:
SELECT DBMS_STATS.CREATE_EXTENDED_STATS ('<owner>',
'<table_name>', ' (<col1>, ..., <colN>)') FROM DUAL;
To define and create extended statistics:
EXEC DBMS_STATS.GATHER_TABLE_STATS('<owner>', '<table_name>',
METHOD_OPT => 'FOR COLUMNS (<col1>, ..., <colN>) SIZE 1');
Labels: basis, ecc6, oracle, r3, sap

Oracle Database Overview - Memory Areas


One of the most important aspects of understanding the Oracle system architecture is understanding how instance memory is divided. We will look at an overview of the memory areas that are relevant to database usage with an SAP application, since SAP does not make use of all the available memory areas.
There are two broad memory areas:
1. Memory shared by all the processes - the System Global Area in Oracle
2. Memory assigned to exactly one process - the Program Global Area in Oracle

System Global Area [SGA]

Database Buffer Cache


Also known as Buffer Pool or Data buffer or "Cache"
This portion of the SGA holds copies of the data blocks from the datafiles. SQL operations on data objects are first performed on these in-memory blocks, which are then written to the datafiles by the DB Writer processes.
The Buffer pool is further divided into the following parts

Free Buffer - free space

Pinned Buffer - holds data that is currently being accessed

Dirty Buffer - holds data that is modified, but not moved to Write List

Write List - holds data that is modified and ready to be written to disk

The Free Buffer, Pinned Buffer and Dirty Buffer form the LRU list of the buffer pool. The Free Buffer is at the LRU end of the list and the Dirty Buffer at the MRU end. Within each buffer, data is again ordered by last use.

When a process requires data, it first looks for it in the data buffer. If it finds the data, it is a cache hit; otherwise it is a cache miss. In the event of a cache miss, the Oracle process has to copy the data from the datafile into the LRU list.
Before copying, the process looks for free space in the LRU list, starting from the LRU end. As it scans for free space, whenever it hits dirty data it moves that data over to the Write List. It continues until it has found enough free space or until it hits a search threshold. If it hits the threshold, it asks the DBWn process to write some of the data blocks from the Write List to the datafiles and free up space. Once it has the required free space, it reads the data into the MRU end of the LRU list.
Whenever an Oracle process accesses any of the data in the LRU list (a cache hit), that data is moved to the MRU end. Over time, older data (except for data from full table scans) moves to the LRU end of the buffer.
The size of data buffer is defined by DB_BLOCK_BUFFERS (in blocks) or by
DB_CACHE_SIZE (in bytes) if using a dynamic SGA.
Redo Buffer
The Redo Buffer is a circular buffer. Its contents are periodically written to the active online redo log files by the LGWR process. Database operations such as INSERT, UPDATE, DELETE, CREATE, ALTER and DROP are logged into this buffer. These entries make it possible to redo the changes to the tables and are hence important when restoring the database in the event of a crash.

The size of the redo buffer is defined by LOG_BUFFER (in bytes). SAP recommends setting it to 1 MB or less.
Shared Pool
The Shared Pool is made up of various memory areas, of which the Dictionary Cache and the Shared SQL Area are the most important.
Dictionary cache (or Row Cache) contains meta information about database tables, views and
users. As the meta information is stored in the form of tables or views (principally rows),
Dictionary cache is also known as Row cache.
Shared SQL Area (or Shared Cursor Cache) is a part of Library Cache. Before any SQL
statement can be executed, the statement is first parsed and stored in the Library Cache along
with its execution plan. Each SQL statement has parts that are common to two or more users executing a similar statement, and bind variables that are private to each user. The Shared SQL Area stores the shared part of the statement.

The size of the shared pool is set by SHARED_POOL_SIZE (in bytes). SAP recommends setting this value to 600 MB or above.
The SGA comprises other areas such as the Java Pool, Large Pool and Streams Pool, which are not utilized by SAP. However, these memory areas cannot be set to 0 if you are using Oracle utilities (RMAN, Oracle VM, etc.).
It is recommended to limit the SGA to a quarter of the RAM.
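The rule of thumb above is easy to compute; a sketch in shell (the 64 GB figure is just an illustrative host size):

```shell
# Rule of thumb from the text: limit the SGA to a quarter of physical RAM.
RAM_MB=65536                     # example host with 64 GB of RAM
SGA_LIMIT_MB=$(( RAM_MB / 4 ))   # one quarter of RAM
echo "Suggested SGA limit: ${SGA_LIMIT_MB} MB"
```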

Program Global Area [PGA]


PGA memory is dynamically allocated to the server processes. The PGA is used to hold session data, query execution state and so on, and to carry out operations such as sorts, hash joins and bitmap operations. If the PGA is insufficient, the temporary tablespace (PSAPTEMP in SAP) is used.
The size of the PGA is determined by the parameters SORT_AREA_SIZE, HASH_AREA_SIZE, BITMAP_MERGE_AREA_SIZE and CREATE_BITMAP_AREA_SIZE, or by PGA_AGGREGATE_TARGET if automatic PGA administration is used.
Labels: basis basics, oracle, sap

Checking if an Environment Variable is Set in UNIX Shell


We had a task to write a script to check if certain environment variables, which should not be set,
were set. This task was straightforward with C shell:
if ( $?GARBAGEENV ) then
    echo "Unset GARBAGEENV on `hostname`"
endif
With K shell, we did not find anything built-in. So we used something innovative.
if test "isenvset${GARBAGEENV}" != "isenvset"; then
    echo "Unset GARBAGEENV on `hostname`"
fi
If GARBAGEENV was set to N, isenvset${GARBAGEENV} would evaluate to isenvsetN (which is not equal to isenvset). If it was not set, isenvset${GARBAGEENV} would evaluate to isenvset.
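For what it's worth, POSIX parameter expansion offers a built-in way to make the same distinction: ${VAR+word} expands to word only when VAR is defined, which avoids the sentinel-string trick. A sketch (GARBAGEENV is set here just to demonstrate the branch firing):

```shell
#!/bin/sh
# ${GARBAGEENV+set} expands to "set" when the variable is defined
# (even when it is defined but empty), and to "" when it is unset.
GARBAGEENV=N

if [ "${GARBAGEENV+set}" = set ]; then
    echo "Unset GARBAGEENV on `hostname`"
fi
```

Unlike the isenvset trick, this form also detects a variable that is set but empty.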
Labels: basis, oracle, sap, unix

Oracle Error Information Utility oerr


You can look up the explanation of an Oracle error code on the database server itself using the oerr utility on UNIX. Here is an example explanation of the ORA-01555 error.
sapadm> oerr ora 1555
01555, 00000, "snapshot too old: rollback segment number %s with name \"%s\" too small"
// *Cause: rollback records needed by a reader for consistent read are
//         overwritten by other writers
// *Action: If in Automatic Undo Management mode, increase undo_retention
//          setting. Otherwise, use larger rollback segments
Pretty handy if you are working for a third-class company that blocks Google search.
Labels: basis, oracle, sap

Job Scheduling Software


Enterprises using multiple ERP systems look for a job scheduler that can work with SAP and other ERP or DW vendors alike. Third-party tools are not advisable if the enterprise runs purely on SAP ERP: they increase the TCO, add delays in problem correction, create tight dependencies among technologies, and increase the chances of miscommunication.
If varied technologies are in use, investing in a third-party scheduler is useful to get a centralized scheduler. Here are some popular schedulers:

Control-M by BMC

Cronacle by Redwood

Dollar Universe by ORSYP

Tidal Enterprise Scheduler by Cisco

Tivoli Workload Scheduler (Maestro) by IBM

UC4 Global by UC4

Labels: ecc6, job, oracle, r3, sap, sap tools

ORA-00059 maximum number of DB_FILES exceeded


The number of datafiles per database is limited by the parameter DB_FILES.
You can check the current limit using the following SQL command:
show parameter db_files;
You can check the current number of datafiles using the following SQL command:
select count(*), max(file#) from v$datafile;
When the current number of datafiles equals DB_FILES, the next attempt to add a datafile will fail with error ORA-00059.
The limit can be increased (for example, to 300) using the following SQL command. The change requires a database restart.
alter system set db_files=300 scope=spfile;
Labels: basis, oracle

SAP Buffers and Buffer Synchronization



In a distributed architecture, the database and the instances can be on separate servers, so database access from the application instances has to go over the network. If the data is not changed very frequently and is not too big, caching it in the application's memory can improve access speed. Apart from improving access speed, caching reduces the load on the database. By reducing the database load, the need to add more CPU (and memory) can be avoided, which in turn reduces the licensing cost. SAP buffers are the part of application memory that realizes this concept.

Buffer Synchronization
SAP buffers are local to each instance. When a change is made, the application instance where
the transaction ran can be made aware of the change easily. However, it is very important to
ensure that the changes are communicated to other application instances to ensure validity of the
buffered information. This is realized by using the table DDLOG to centrally log and read the
changes. The inevitable need to synchronize data across the application instances is the biggest challenge, and it limits the type of data that can be buffered.
A change operation is first executed at the database level (but not committed yet), and if it is
successful, the change is applied to the buffer. All the changes made to buffered objects are
registered by the database interface of the work process in a main memory structure known as a
"notebook". At the end of the transaction, these registered synchronization requests are inserted into the NOTEBOOK field of the DDLOG table. When the insert operation on DDLOG completes, the database transaction is committed. The changes are not always triggered by a transaction; a change could also be triggered by transports (tp and R3trans), but the technical realization of logging changes to DDLOG remains the same. The newly created DDLOG record is identified by a sequence number (SEQNUMBER field) that is automatically assigned in ascending order by the database. In the case of an Oracle database, the DDLOG_SEQ sequence takes care of the sequencing.
The other application servers note the change as follows:
The dispatcher of the instance triggers buffer synchronization, which reads the new
synchronization requests (since the last synchronization) from the table DDLOG. Only those
sequence numbers that are higher than the previous sequence number are read to fetch the new
entries from DDLOG. The new synchronization requests are applied to the buffers. The delay
between two buffer synchronizations is determined by the parameter rdisp/bufreftime. This
parameter (default: 120 seconds) should not be changed. If synchronization is required at a more frequent interval, buffering of the specific objects should be disabled, or special access commands should be used to bypass the buffer.
Due to the delay in buffer synchronization (and performance reasons) transactional data should
never be buffered.
The chief parameters to control buffer synchronization are:
rdisp/bufrefmode - controls read and write on DDLOG table
rdisp/bufreftime (already discussed above) - controls the frequency of buffer synchronization
rdisp/bufrefmode defines (a) whether synchronization records are written to the table DDLOG
(possible values "sendon" or "sendoff") and (b) whether periodic buffer invalidation occurs by
reading synchronization records from the table DDLOG (possible values "exeauto" or "exeoff").
The parameter should be set to sendoff,exeauto if only one instance (the central instance) is
configured. If there is more than one instance (i.e. at least one dialog instance is installed in
addition to the central instance), the parameter should be set to sendon,exeauto.
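In profile syntax, the multi-instance setting discussed above would look like this (a sketch of an instance profile excerpt, using the default synchronization interval):

```
# Buffer synchronization in a system with at least one dialog instance
rdisp/bufrefmode = sendon,exeauto
# Interval between two buffer synchronizations in seconds (default)
rdisp/bufreftime = 120
```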
You may have noticed that writing to DDLOG can be switched on or off based on the number of
instances, but reading is always turned on. The reason is that changes can also be made using
R3trans or tp; such changes are always written to DDLOG and must always be read from it.
exeoff should only be used for testing: if you set the parameter to exeoff, changes to repository
objects in the database are not detected by the repository buffers.
Newer kernels and kernel patches automatically correct sendon/sendoff, since changing
rdisp/bufrefmode is often forgotten when a dialog instance is added to the central instance.
Error Situations in Buffer Synchronization

Let's look at a few error situations that may affect buffer synchronization:
1. rdisp/bufrefmode set incorrectly
This is no longer a threat with newer kernel releases and patches. If you are using an older kernel
and multiple instances, rdisp/bufrefmode should be set to "sendon,exeauto" on all application
instances, including the central instance.
2. DDLOG was not reset after a system copy
If you perform a system refresh using database tools and methods instead of SAP tools, check
DDLOG to ensure that it is empty. If it is not, shut down all SAP instances and delete the records
in the DDLOG table.
3. Issues with DDLOG_SEQ on an Oracle database
The SEQNUMBER field of each buffer synchronization record in the DDLOG table is populated
in ascending order by the DDLOG_SEQ sequence.
a. If this sequence is missing, insert operations on the DDLOG table fail with ORA-02289
(sequence does not exist). In such cases, stop all application instances; truncate the DDLOG
table; create the missing sequence using the following command; and start the application:
create sequence ddlog_seq minvalue -2147483640 maxvalue 2147483640 increment by 1 cache
50 order nocycle;
b. If the maximum value of the SEQNUMBER field is reached, insert operations on the DDLOG
table fail with ORA-08004 (sequence DDLOG_SEQ.NEXTVAL exceeds MAXVALUE and
cannot be instantiated).
To fix this error, stop the application instances; truncate the DDLOG table; drop the ddlog_seq
sequence; recreate the ddlog_seq sequence and start the application.
c. DDLOG_SEQ not updating the SEQNUMBER field in time order on Oracle RAC
The sequence creation in most SAP releases does not use the "order" keyword explicitly. Since
the default is noorder, the sequence is created with the flag N in the ORDER_FLAG column.
This is usually not a problem if the Oracle database runs as a single instance. With Oracle RAC,
however, noorder can result in sequence numbers not being assigned in time order, while the
buffer synchronization logic requires that the sequence numbers reflect the time line in which
they were created.
You can confirm the flag in use with the following SQL command:
select * from user_sequences where sequence_name = 'DDLOG_SEQ';
The fix for this problem is the same as the previous fix (3b).
4. Dispatcher or work process problems
Buffer synchronization is triggered by the dispatcher and executed by a dialog work process. If
the dispatcher does not get enough CPU time, or there are not enough dialog work processes,
buffer synchronization does not happen. In such cases, you have to increase the trace level,
reproduce the problem, and engage SAP.
To increase the trace level:
Add the following parameter using RZ11:
rdisp/TRACE_LOGGING = on, 150 m
Then go to SM50 on each server and increase the trace level via the menu Process --> Trace -->
Dispatcher --> Increase level.
Keep an eye on this blog for more posts on SAP buffers!
GRC RAR Row-prefetch value for Oracle Risk Analysis
If you are using an Oracle database with GRC, you can set the row-prefetch value for Oracle
Risk Analysis to tune the performance of Risk Analysis. The parameter uses the row-prefetching
feature of the Oracle JDBC driver to fetch multiple rows at a time (instead of the default of one
row), reducing the number of round trips to the database layer. The extra rows are stored in
buffers and used by later queries, which pick up rows already placed in the buffer.
To set the number of rows to prefetch, navigate to Configuration --> Risk Analysis (left pane) -->
Performance Tuning --> Row-prefetch value for Oracle Risk Analysis. A value of 10 is a good
starting point.
Oracle "connected to an idle instance" and ORA-01034
When you try to connect to the Oracle database and see "connected to an idle instance" as you
enter sqlplus, chances are that incorrect file permissions have been set.
Check that $ORACLE_HOME/bin has the permission "drwxr-xr-x".
The file "oracle" in $ORACLE_HOME/bin should have a non-zero size and the permission
"-rwsr-s--x".
If the permissions differ, set them correctly using the following command:
chmod 6751 oracle
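The check and fix above can be sketched as a small shell function; the path argument is supplied by the caller, and the octal mode 6751 corresponds to the permission string -rwsr-s--x:

```shell
# check_oracle_perms: report and repair the permissions expected on the
# oracle executable; pass the full path, e.g. "$ORACLE_HOME/bin/oracle".
check_oracle_perms() {
    bin="$1"
    mode=$(stat -c '%a' "$bin") || return 1   # current octal mode
    if [ "$mode" != "6751" ]; then
        echo "fixing $bin: mode was $mode"
        chmod 6751 "$bin"                     # -rwsr-s--x
    fi
}
```

Run it as the owner of the Oracle software, since changing the setuid/setgid bits requires ownership of the file.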
Creating Index in Oracle-based SAP Application
Here is a rule of thumb for creating indexes in Oracle:
For small tables (< 100 MB)
The creation of an index on such tables usually takes a few minutes. Therefore, when creating
the index, pick a time of day when a lock on the table (transactions will be slow due to the lock)
is acceptable for a few minutes.
For medium tables (100 MB - 1 GB)
Choose a weekend or a time frame during which no user will be running a report that uses this
table.
For large tables (> 1 GB)
Index creation on these tables is very time consuming, and you may have to do it during a
downtime. You will have to use DB-specific options to speed up the index creation; for example,
in Oracle you can use a parallel degree while creating it.
a. Create the index in the DEV system using SE11.
b. Take note of the exact name of the index.
c. Create the index at Oracle level in QA with the same name that was used in DEV using the
following syntax:
create index <index name> on <table name> (<field1>, <field2>, ..., <fieldn>)
nologging tablespace <tablespace name>
parallel (degree <number>)
pctfree 1
storage (<storage clause>)
online;
d. Transport the index to QA. As the index already exists, it won't be recreated; the transport will
only move the ABAP Dictionary information.
e. Reset the parallel degree to 1 and re-enable logging:
alter index <index name> noparallel logging;
f. Repeat steps c, d, and e in production.
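As a concrete illustration of step c (the index name, table, columns, tablespace, storage values, and degree below are hypothetical and must be replaced with your own):

```sql
-- Build the index with 4-way parallelism and without redo logging,
-- then return it to the normal settings once the build completes.
create index "ZTABLE~Z01" on ZTABLE (MANDT, FIELD1)
nologging tablespace PSAPSR3
parallel (degree 4)
pctfree 1
storage (initial 64K next 64K)
online;
alter index "ZTABLE~Z01" noparallel logging;
```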
If you are looking to reorganize an index, you must read
this: http://www.sapnwnewbie.com/2011/12/rebuilding-sap-indexes.html
Two or more aggregate functions in Oracle cause performance problems
There seems to be a bug in the Oracle database that makes the execution of two or more
aggregate functions in one SQL statement time consuming.
Example of two aggregate functions in the same query: SELECT MIN(XYZ), MAX(XYZ)
FROM TABLE_XYZ;
SAP XI has one such statement in the report SXMS_PF_REORG (reorganization of adapter
data).
As a workaround, SAP has split the two aggregate functions of one statement into two separate
statements. Implement SAP Note 1540734 if you are facing performance issues with this report.
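The workaround pattern looks like this, using the placeholder table from the example above:

```sql
-- Before: both aggregates in one statement (slow on affected Oracle versions)
SELECT MIN(XYZ), MAX(XYZ) FROM TABLE_XYZ;
-- After: one aggregate per statement, executed separately
SELECT MIN(XYZ) FROM TABLE_XYZ;
SELECT MAX(XYZ) FROM TABLE_XYZ;
```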
Database Access through O/JDBC Service Connection
You can now let SAP support personnel access your database through an SAP service connection.
Ensure that you maintain the saprouter table to permit access to the database server and listener
port.
1. Call service.sap.com/access-support
2. Click on Maintain connection
3. Choose the system to which you wish to open the connection
4. Click on the JDBC/ODBC Connection type entry and provide the port number used by the
listener
5. Now go to Open/Close connection and click on JDBC/ODBC Connection to open the
connection like any other SAP service connection
Disaster Recovery Plan
Disaster Recovery is the process, policies, and procedures for restoring operations critical to the
resumption of business, including regaining access to data (records, hardware, software, etc.),
communications (incoming, outgoing), workspace, and other business processes after a natural
or human-induced disaster.
Here are some common aspects of designing a Disaster Recovery environment for a business
application:
1. DR Server
The server that you intend to use in case of a disaster should run the same operating system
version and patch level as the primary server. Ensure that the disk allocation is similar to that of
the primary. You need not allocate identical server resources at the DR site, but you may need
them when you run the DR site as the primary in the event of a disaster.
2. Standby Database
Most database vendors provide an option to install and maintain a standby database, which keeps
itself cloned from the primary database; for example, Oracle provides Data Guard. You may also
explore third-party software if you are not happy with the vendor's solution.
The most common methods of keeping a cloned database are:
a. Log shipping from primary to DR database
In this method, the redo logs created on the primary database are copied to and applied on the
DR database. These logs may be applied immediately (synchronously) or with a
delay/independently (asynchronously).
b. Storage-level replication
In this method, the changes made to the files are written to both the primary storage and the DR
storage. A database transaction completes only when the write operations succeed on both
storage devices.
3. Synchronizing File Systems
Ensure that the file systems are synchronized, so that the application data files, profiles, scripts,
logs, software, and user environments are cloned at the DR site. Don't forget OS configuration
files, such as the services and hosts files, in the sync list. Server operating systems usually come
with utilities to help synchronize data remotely.
4. DNS/Firewall
When the DR server replaces the primary, the IP address behind the host name changes. The
DNS server and firewall rules should be updated with the new IP, and the users/applications
accessing the system via IP address or maintaining a hosts file should be informed of the change.
5. License Aspects
Some software vendors issue licenses based on the hardware used. For any hardware change,
you have to apply for a new license. In the case of a DR switch, from a license perspective, you
are essentially changing your hardware.
6. The Plan
When a disaster takes place, you will not have the peace of mind to make sound decisions.
Everyone will be confused about what to do to get the business application back to a functional
state. You can minimize the chaos by documenting and testing the procedure for getting the
cloned server and application to work as your primary. Incorporate the safety procedures, the
management approval process, and the communication process into these documents.