
André Bögelsack, Stephan Gradl, Manuel Mayer, Helmut Krcmar

SAP MaxDB Administration


Bonn Boston

Contents at a Glance
1  Introduction to SAP MaxDB ........................................  11
2  Overview of SAP MaxDB ............................................  21
3  SAP MaxDB and SAP ................................................  89
4  Administration Tasks ............................................. 115
5  Performance Tuning ............................................... 207
6  Problem Situations ............................................... 281
7  Summary and Outlook .............................................. 305
A  Command Reference dbmcli ......................................... 309
B  The Authors ...................................................... 317

Contents

1  Introduction to SAP MaxDB ........................................  11
   1.1  History ......................................................  11
   1.2  SAP MaxDB Features ..........................................  12
        1.2.1  General Features ......................................  12
        1.2.2  Flexibility during Operation ..........................  14
        1.2.3  SQL Modes and Interfaces ..............................  15
        1.2.4  Areas of Use ..........................................  16
   1.3  Useful Internet Sources ......................................  16
        1.3.1  Official SAP MaxDB Website ............................  17
        1.3.2  SAP MaxDB Wiki on the SAP Developer Network ...........  17
        1.3.3  SAP MaxDB FAQ .........................................  17
        1.3.4  SAP MaxDB Forum .......................................  17
   1.4  Structure of this Book .......................................  18

2  Overview of SAP MaxDB ............................................  21
   2.1  SAP MaxDB Instance Types .....................................  21
        2.1.1  OLTP and OLAP .........................................  21
        2.1.2  SAP liveCache .........................................  23
   2.2  SAP MaxDB Software ...........................................  26
        2.2.1  The X Server ..........................................  27
        2.2.2  Database Studio .......................................  27
        2.2.3  Database Manager GUI ..................................  30
        2.2.4  Database Manager CLI ..................................  34
        2.2.5  SQL Studio ............................................  35
        2.2.6  SQL CLI ...............................................  36
        2.2.7  Web SQL ...............................................  37
        2.2.8  Other Utilities .......................................  38
   2.3  SAP MaxDB User Concept .......................................  45
        2.3.1  MaxDB Users ...........................................  45
        2.3.2  Operating System Users ................................  52
        2.3.3  Security Aspects ......................................  54
   2.4  Database Concepts ............................................  55
        2.4.1  Kernel Threads ........................................  56
        2.4.2  Caches ................................................  63
        2.4.3  Data and Log Volumes ..................................  70
        2.4.4  Savepoints and Snapshots ..............................  76
        2.4.5  Locking ...............................................  78
        2.4.6  Directory Structure ...................................  80
        2.4.7  Operational States ....................................  83
        2.4.8  Database Parameters ...................................  84
        2.4.9  Configuration Files ...................................  86
   2.5  Summary ......................................................  87

3  SAP MaxDB and SAP ................................................  89
   3.1  SAP Architectures ............................................  89
        3.1.1  ABAP and Java Stack ...................................  90
        3.1.2  Architecture Levels ...................................  91
   3.2  Communication with SAP MaxDB .................................  93
        3.2.1  SAP MaxDB Interfaces ..................................  96
        3.2.2  Communication with SAP Systems ........................  99
   3.3  Important Transactions ....................................... 102
        3.3.1  Transaction DB50 (Database Assistant) ................. 103
        3.3.2  Transaction DB13 ...................................... 109
        3.3.3  Transaction RZ20 ...................................... 112
   3.4  Summary ...................................................... 114

4  Administration Tasks ............................................. 115
   4.1  Server Software Installation and Upgrade ..................... 115
        4.1.1  SDBINST/SDBSETUP ...................................... 116
        4.1.2  SDBUPD ................................................ 126
   4.2  Creating and Initializing the Database ....................... 130
        4.2.1  Planning the Database ................................. 130
        4.2.2  Creating the Database via the GUI ..................... 132
        4.2.3  Creating the Database via the dbmcli Tool ............. 140
        4.2.4  Interaction with SAPInst .............................. 144
   4.3  Configuring the Database ..................................... 146
        4.3.1  Adding and Deleting Data/Log Volumes .................. 146
        4.3.2  Configuring Log Volumes and Log Mode .................. 154
        4.3.3  Updating the System Tables ............................ 158
        4.3.4  Parameter Changes ..................................... 159
   4.4  Database Backup .............................................. 163
        4.4.1  Backup Concepts ....................................... 163
        4.4.2  Creating a Backup Medium .............................. 166
        4.4.3  Incremental and Complete Backup ....................... 171
        4.4.4  Log Backups ........................................... 174
        4.4.5  Snapshots ............................................. 177
        4.4.6  Checking Backups ...................................... 182
   4.5  Database Recovery ............................................ 186
        4.5.1  Recovery Types ........................................ 186
        4.5.2  Recovery Strategy ..................................... 187
        4.5.3  Recovery/Recovery with Initialization ................. 188
        4.5.4  Reintegrating Faulty Log Mirrors ...................... 192
        4.5.5  Bad Indexes ........................................... 193
   4.6  Consistency Checks ........................................... 196
        4.6.1  General Description ................................... 196
        4.6.2  Checking the Database Structure ....................... 197
   4.7  Deleting the Database ........................................ 200
        4.7.1  Deleting the Database ................................. 200
        4.7.2  Server Software Uninstallation ........................ 203
   4.8  Summary ...................................................... 206

5  Performance Tuning ............................................... 207
   5.1  Performance Optimization ..................................... 208
   5.2  Indexes ...................................................... 208
        5.2.1  B* Trees: Theory ...................................... 209
        5.2.2  Primary and Secondary Key ............................. 211
   5.3  The Database Optimizer ....................................... 218
        5.3.1  Basic Principles ...................................... 218
        5.3.2  Criteria for Selecting Specific Access Strategies ..... 223
   5.4  Caches ....................................................... 226
        5.4.1  Background ............................................ 226
        5.4.2  The Various Caches .................................... 227
        5.4.3  The Appropriate Size of the Caches .................... 230
        5.4.4  The Most Important Information in Caches .............. 230
        5.4.5  Critical Region Statistics ............................ 235
   5.5  Analysis Tools ............................................... 237
        5.5.1  Database Analyzer ..................................... 238
        5.5.2  Resource Monitor ...................................... 249
        5.5.3  Command Monitor ....................................... 253
        5.5.4  SQL Explain ........................................... 260
   5.6  Performance with SAP NetWeaver AS ............................ 263
        5.6.1  SAP NetWeaver AS Performance Analysis ................. 264
        5.6.2  Load Analysis ......................................... 266
        5.6.3  Database Analysis in SAP NetWeaver AS ................. 267
   5.7  Summary ...................................................... 280

6  Problem Situations ............................................... 281
   6.1  Diagnostic Files ............................................. 281
        6.1.1  Dev Traces ............................................ 282
        6.1.2  SQL Trace ............................................. 282
        6.1.3  SQLDBC Trace .......................................... 283
        6.1.4  X Server Log: xserver_<hostname>.prt .................. 284
        6.1.5  appldiag .............................................. 285
        6.1.6  dbm.prt ............................................... 285
        6.1.7  KnlMsg (knldiag) ...................................... 286
        6.1.8  KnlMsgArchive (knldiag.err, dbm.utl) .................. 287
        6.1.9  dbm.knl ............................................... 288
        6.1.10 dbm.ebp ............................................... 289
        6.1.11 dbm.ebl ............................................... 289
        6.1.12 rtedump ............................................... 290
        6.1.13 knltrace .............................................. 291
        6.1.14 knldump ............................................... 292
   6.2  Error Types and Analysis ..................................... 293
        6.2.1  Installation Problems ................................. 293
        6.2.2  Connection Problems ................................... 294
        6.2.3  Log Full/Data Full .................................... 295
        6.2.4  System Crash/System Error ............................. 299
        6.2.5  System Blockade ....................................... 300
        6.2.6  Backup/Recovery Error ................................. 302
        6.2.7  Hardware Error ........................................ 303
   6.3  Summary ...................................................... 304

7  Summary and Outlook .............................................. 305

Appendices ........................................................................ 307


A  Command Reference dbmcli ......................................... 309
B  The Authors ...................................................... 317

Index ............................................................................................. 319


Caches, indexes, and analysis tools, and how they're used efficiently: this chapter provides background information and describes how you can identify and eliminate the causes of performance bottlenecks.

5    Performance Tuning

Databases ensure both the persistence and integrity of data. That databases are widely used in nearly all IT areas is largely a result of the very fast and flexible access options to stored information. This chapter discusses the theoretical and technical principles that enable this high-performance access. Furthermore, it introduces the means and methods for recognizing, analyzing, and eliminating performance bottlenecks.

Section 5.1 describes the performance concept and defines the database administrator's options for optimizing performance. Section 5.2 introduces the structure of the database storage concept, the theoretical background of the search structure used in SAP MaxDB (the B* tree), and its characteristics for primary and secondary indexes. When and how these search structures are used when accessing data, and how you can provide the information necessary for optimized access, is explained in Section 5.3. This section also describes how you can accelerate the execution of slow SQL statements. The section on caches (Section 5.4) explains why accessing data on disk is, despite these search structures, considerably slower than reading data from main memory. You'll also learn how SAP MaxDB benefits from the speed advantage of main memory. Section 5.5 provides information on how you can monitor the database using the Database Analyzer. In addition, it illustrates how you can use the Resource Monitor to identify the SQL statements that cause the greatest load on the database, how you can use the Command Monitor to search for single expensive SQL statements, and how you can analyze these statements using the SQL Explain statement. The last section covers the analysis process with SAP NetWeaver AS. It describes how you can use the transactions of an SAP system to identify performance bottlenecks and to analyze and eliminate their causes.


5.1    Performance Optimization

A central aspect of database performance is the speed with which SQL statements are processed. The faster queries are processed, the greater the performance of the database system. That means that you can influence the performance of the database by ensuring that the database supports the expected queries in the best possible way. This way, SQL statements incur less cost, that is, they become less expensive.
Queries are Expensive

- if they query large datasets with a potentially high percentage of redundant data.
- if one or several tables need to be scanned for their execution.

The database developer is entirely responsible for the first scenario. You can only accelerate this type of query by physically clustering the data in background memory in a suitable way. This, however, isn't supported by many database systems for secondary indexes, including SAP MaxDB.
Optimization

The second type of cost-intensive query can be optimized, and thus made less expensive, by tuning the database appropriately. To do so, you first need to understand how the database system executes queries. This is done using an execution plan: the system generates a new execution plan for each request, and this plan defines the type of access to the data. Scanning the entire table is one intuitive option. Another option is to use data structures that can considerably accelerate the search, particularly for large tables. The following section discusses these data structures and their usage.
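As a quick sketch of what such a plan looks like in practice (the table and column names here are invented, not taken from the book; the EXPLAIN statement itself is covered in detail in Section 5.5.4), you can prefix a query with EXPLAIN in an SQL session:

   -- Displays the access strategy the optimizer would choose instead of
   -- executing the query (table and column names are illustrative only).
   EXPLAIN SELECT * FROM inhabitants WHERE city = 'Seattle'

Instead of a result set, the database returns the chosen strategy and an estimate of the page accesses, which is exactly the cost measure discussed in this chapter.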

5.2    Indexes

Due to the size of today's databases (in some cases they're several petabytes in size), you must store the data on hard disks, because it can't be held entirely in main memory. Because accessing data on hard disks is significantly slower than accessing data in main memory, the number of disk accesses that are required to read a data record is a main criterion for performance. Therefore, the SQL optimizers described in Section 5.3, The Database Optimizer, are implemented in all database systems to decide which access strategy requires the fewest disk accesses and consequently delivers the best performance. The following text first explains the theoretical concepts that enable you to read a data record in background memory with a guaranteed maximum number of disk accesses. After discussing the theoretical principles, it then describes the properties of B* trees in SAP MaxDB.

5.2.1    B* Trees: Theory

As the name implies, this data structure is a tree. A tree is a robust and powerful data structure. It's integrated with nearly all modern database systems, including the leading ones, and is often referred to as the data structure of the relational database model. In the internal nodes, the B* tree only uses reference keys, which don't have to correspond to real keys. For SAP MaxDB, the reference keys correspond to real keys, but this isn't a prerequisite for this data structure and depends on the implementation of the database manufacturer. Because each node occupies a complete page in the background memory, the system can store many of these reference keys in one node. Even for large datasets, there are thus only a few levels in the tree, so fewer disk accesses are required to find a data record. Real keys are assigned to the data at the lowest level, that is, at the leaf level. At this level, the system implements another optimization of the data structure for sequential reading: each background memory page contains additional references to the previous and the next page. That means that once you've found the entry point, you only have to follow the sequential references until the search predicate is no longer met (see Figure 5.1). The algorithms for adding and deleting data are structured in such a way that the tree is always balanced. This means that the distance from the root of the tree to any leaf, that is, to any data record, is always the same.

Structure

Balancing the value distribution


Figure 5.1  Schematic Structure of a B* Tree (index nodes with reference keys; linked leaf pages for sequential search)

The following illustrates the benefits of the B* tree by comparing it with the B tree. Because the B tree is an internal search tree that also stores data in its nodes, it's less well adapted to the properties of background memory than the B* tree, which, at the same height, references an even larger number of data pages. The lower the tree, the fewer disk accesses are required to find a data record.
Numerical example

If a tree has four levels, and each internal node can accommodate 200 reference keys, it references at least 1.6 x 10^7 items, that is, data records. For the same height, that is, for the same maximum number of disk accesses required to find a data record, this tree can grow to a size of 2.56 x 10^10 items without losing performance. Because a portion of an index is usually in the cache, the system frequently only needs two to three disk accesses to find a data record in a dataset of 10 billion data records. At 1 KB per data record, this corresponds to a table size of about 10 TB.


Having described the theoretical properties of the B* tree in this section, in the following section we'll describe where B* trees are used in SAP MaxDB and how you work with them.

5.2.2    Primary and Secondary Key


B* tree characteristics

SAP MaxDB uses B* trees in which the tables themselves are directly stored. Figure 5.2 shows the structure of such a B* tree in SAP MaxDB. The tree is created from the bottom, that is, from the leaf level. This lowest level contains the data records in ascending order according to the primary key. The index level nodes are determined from the values of the leaf level: whenever the system reaches the end of a page at the leaf level, it creates a new entry at the index level. This entry contains just enough characters to distinguish the first entry of the new page from the last entry of the previous page. In our example, this applies to Seattle: in the list of cities, Salem would be the last entry of the previous page; consequently, the entry at the index level is SEA. The creation of the primary index continues until all references fit on one page, the root page.
Figure 5.2  Sample Storage of a Table in a B* Tree
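As a minimal sketch of how such a tree comes into existence (the table and column names below are invented for illustration and do not appear in the book), it is enough to define a primary key; SAP MaxDB then builds and maintains the corresponding B* tree automatically:

   -- The PRIMARY KEY definition is what the B* tree of Figure 5.2 is built on;
   -- no separate CREATE INDEX is required for the primary key.
   CREATE TABLE cities
   (
     city        VARCHAR(40) NOT NULL,
     state       VARCHAR(30),
     inhabitants FIXED(10),
     PRIMARY KEY (city)
   )

The leaf level of the resulting tree holds the rows sorted by city; the index levels above it are derived as described in the text.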

If you add data records while you use the table and the references no longer fit on the page at the root level, the system splits the root page and converts the two resulting pages into index pages, which a new root page then references. Figures 5.3 and 5.4 illustrate an example of this.

Adding a data record


Figure 5.3  Situation Before the Data Records Have Been Added

Figure 5.4  Situation After the Data Records Have Been Added

All entries at all levels are linked via sequential links, which enable the system to execute range queries with high performance as well. The maximum number of table entries is limited because the B* tree of the primary index in SAP MaxDB is restricted to a height of four index levels and one root level. However, because a logical page has a size of 8 KB, sufficiently large tables can be managed. On the data pages, the entries aren't sorted according to the index; they are stored in insertion order in the initial area of the data page. The order with regard to the primary key is established using an item list, which is located at the end of each data page. This item list is filled from right to left, so the item list and the data continuously grow toward each other. If the system now searches for a data record, it can find and read the record using the item list. Figure 5.5 shows the schematic structure of a data page.


If the system is supposed to read a data record of the table using a request, the index only supports this request optimally if the WHERE condition references exactly the fields that are indexed by this index. Because SAP MaxDB creates a B* tree index for each primary key, a request for this example could be as follows:

Select * from inhabitants where city = 'Seattle'
Figure 5.5  Structure of a Data Page (unsorted data entries plus the sorted item list)

Figure 5.6 illustrates the access to a data record via a primary index. First, the system scans the root page. When the searched value is smaller than an entry on the root page, the system follows the reference of this entry to the next index level. The system then scans the node reached at this level using the same principle. If the system reaches the end of the page without having found an entry that is greater than the search term, it uses the last reference on this page. This procedure is repeated until the system reaches the leaf level and finds the value via the already mentioned item list on the data pages.

B* trees for LONG values

To store field content of the LONG type, the system uses specific B* trees, depending on the respective length. Here, you distinguish between two types of LONG values: short LONG values, which fit on one logical page, and long LONG values, which require more than one logical page. The system manages all short LONG values in one B* tree. As a result, the data page of the table contains a reference to this B* tree of the short LONG values instead of the value of the LONG field. If the content of the LONG field exceeds one logical page, the system creates a separate B* tree for this value. The entry on the data page then references the B* tree of this single value. Figure 5.7 shows a diagram of this concept.
Figure 5.6  Accessing a Data Record

Figure 5.7  Storing LONG Values
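A hedged sketch of such a table (the names are invented; the exact LONG type variant depends on your character set) could look as follows. The LONG values end up in the separate B* trees just described, while the data pages of the table only hold references to them:

   -- The info column holds LONG data; short values go to the shared B* tree
   -- for short LONG values, large values each get a B* tree of their own.
   CREATE TABLE city_information
   (
     zip  CHAR(5) NOT NULL,
     info LONG,
     PRIMARY KEY (zip)
   )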


The system automatically creates the previously mentioned indexes for each table in SAP MaxDB. That means that it creates the corresponding B* trees for the primary key of a table and for the LONG values. You can also add indexes for additional columns of a table. This is often done for secondary keys, because relational modeling logically links tables with other tables using these keys. This logical link would have a strong negative effect on performance if accesses to data records via secondary keys, and thus via B* trees, weren't supported. In general, the structure of a B* tree for additional indexes is identical to the structure of B* trees for primary keys. However, a difference exists when it comes to the relational modeling of tables: the field or fields of the primary key uniquely identify each data record, and the primary index relies on this condition. For secondary keys, this condition isn't guaranteed. The following illustrates this using address data as an example. Table 5.1 uses the ZIP code as the primary key, and the name of the city and a description as additional fields. This table was deliberately designed to be as simple as possible and lists every city only once, although, of course, larger cities have numerous ZIP codes.
ZIP     City
48217   Detroit
84113   Salt Lake City
97306   Salem
33149   Miami
77004   Houston
98104   Seattle
75201   Dallas
46205   Indianapolis
08079   Salem
12865   Salem
80216   Denver
94102   San Francisco
19118   Philadelphia
30316   Atlanta
74354   Miami
89044   Las Vegas
01106   Springfield
53227   Milwaukee

Table 5.1  Example for Data Records with Identical City Names and Different ZIP Codes

If the system should now also support access to the data records of this table via the City field, there may be several ZIP codes for one city name, because several cities have the same name. The system therefore uses inverted lists for this case, as shown in Table 5.2. These lists can be stored at the leaf level as long as they fit on one data page.
City            ZIP
Houston         77004
Dallas          75201
San Francisco   94102
Detroit         48217
Denver          80216
Philadelphia    19118
Miami           33149, 74354
Salem           97306, 08079, 12865
Seattle         98104
Indianapolis    46205
Salt Lake City  84113
Springfield     01106
Milwaukee       53227
Las Vegas       89044

Table 5.2 Inverted List for the Index via the Column City


Thus, this B* tree has a unique search criterion for the additional index. If the inverted list for a city becomes too long, the list is relocated and managed in a separate B* tree. In the original index that manages the inverted lists, the system then creates, in the data area defined for this entry, a reference to the B* tree created for this inverted list.
Figure 5.8  Additional Index
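As a sketch (assuming the table from Table 5.1 is called zip_codes, a name not used in the book), such an additional index is created with a plain CREATE INDEX statement; SAP MaxDB then builds and maintains the inverted list from Table 5.2 automatically:

   -- Secondary index on the City column of the assumed zip_codes table.
   CREATE INDEX idx_city ON zip_codes (city)
   -- A query like the following can then be answered from the inverted list,
   -- which for 'Salem' returns the primary keys 97306, 08079, and 12865:
   SELECT zip FROM zip_codes WHERE city = 'Salem'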

Important! Generally, SAP MaxDB stores data only in the B* tree of the primary key. In a B* tree of a secondary index, the inverted lists don't store the values again but contain references to the primary key. These references consist of the entire primary key of the referenced data record. This is particularly important for the selection of the access strategy and thus for the acceleration of data accesses.

The execution costs indicate how important it is to optimally support requests using high-performance, that is, selective, indexes. Without index support, execution can be more expensive, up to 1,000 times more in some cases. Conversely, this means that an expensive SQL statement may be reduced to a thousandth of its cost by optimizing the indexes and/or changing the statement. Note, however, that additional indexes also require resources, because when changes are made to the data, the indexes must also be maintained and stored in the data cache. As a result, you should first check the statement and the code of the application to see whether you can solve or alleviate the problem there.

Execution costs


5.3    The Database Optimizer

The maintenance and provision of effective indexes is important for high-performance queries. A program in the database, the optimizer, decides whether an index is used and, if there are multiple indexes, which index is used to search for data. Performance can therefore depend significantly on how the database processes requests. To illustrate these processes, the following sections first introduce the database optimizer, which is also often referred to as the SQL query optimizer. They describe the basic properties of the optimizer and explain which criteria are used to evaluate indexes. Furthermore, they introduce the most important strategies using typical examples of SQL queries and discuss why the optimizer chooses them.

5.3.1    Basic Principles

The execution plan is created by a database program, the database optimizer. Two types exist: the rule-based optimizer (RBO) and the cost-based optimizer (CBO). Of the database systems certified for use with SAP, only Oracle lets you use an RBO; all others use a CBO. The following sections therefore illustrate the steps and behavior of a CBO. A CBO decides which strategy is used to access data. The system first determines all possible access strategies and then their costs, which derive from the number of page accesses. Among others, the following criteria are used as a basis for the decision of whether an index is used:
- Storage on the physical medium
  How effective an index is depends on the distribution of the data across the storage medium. If the data is highly distributed, the system needs more slow read accesses than would be necessary if a large part of the required data could be read with one access.
- Distribution of the field content
  The database optimizer also considers the distribution of the searched field content within a table, because it's critical for the decision whether the content is evenly distributed across the table or stored in clusters.
- Number of different values of indexed fields
  The more different values an indexed field contains, the more efficient the corresponding index and the higher its selectivity. Selectivity refers to the number of different values of a column in relation to the total number of rows. The literature says that the database optimizer only uses indexes if this reduces the dataset to be scanned to around 5 to 10%.
- Table size
  If the tables are small, it may be less expensive to scan the entire table because this reduces the number of read accesses (that is, the costs).
Using Optimizer Statistics

The SQL database optimizer uses optimizer statistics only for joins or operations on views to select the appropriate execution strategy. Views are usually tables that are linked via particular columns; this means that, technically speaking, they are also joins.

In part, the database stores this information for optimizer statistics in the internal file directory itself. The creation and updating of additional statistical information on the existing database tables must be initiated by the database administrator; the information is then stored in the database catalog. You should update these statistics at least once a week, or, at the latest, when the content of a table has changed significantly. You can update the statistical information manually or automatically, using the Database Manager GUI or directly via the command line. Note that only the first 1,022 bytes of a column value are considered. This may lead to small inaccuracies if column values match in the first 1,022 bytes. The DBMGUI enables you to create these statistics for single tables or all required tables, as well as for all tables for which creating statistics is possible. Figure 5.9 shows the dialog box in which you can configure the necessary settings.

Optimizer statistics

Manual update in the DBMGUI

Figure 5.9 Settings for Updating the Optimizer Statistics in the DBMGUI

To navigate to the screen displayed in Figure 5.9 and update the optimizer statistics, proceed as follows:

1. In the DBMGUI, connect to the database instance.
2. Select Instance • Tuning • Optimizer Statistics.


3. Select the desired tables.
4. Start the search by selecting Search in the Actions menu.
5. Configure the update process.
6. Start the update via Actions • Execute.
Selection of the tables to be updated

The three columns, Search, Estimate, and Advanced serve to configure the update process of the optimizer statistics. If you use the default settings, the system lists all tables for which an update is required. However, if you want to display all tables that can be updated, you must select the Select From Tables option in the Advanced area. If you want to do this for single tables, you can search for the respective table or a single column via Search.

Configuring the update process

Depending on the size of the tables and the level of distribution, you may have to change the scope of the sample in the Estimate column. For tables with 1 billion data records or more, SAP recommends setting the sample to 20% to obtain a sufficiently reliable result. In rare cases, you may have to increase the size of the sample to 100%. If you want to exclude a table from the update run, you can do so by specifying a value of 0% for this field.

As already mentioned, you can also have the system schedule the update of the optimizer statistics automatically. Figure 5.10 shows the screen in which you can configure this setting. Perform the following steps:

1. In the DBMGUI, connect to the database instance.
2. Select Instance • Automatic Statistics Update.
3. Click on the On button. The columns and tables that are listed in the SYSUPDSTATWANTED system table are now updated in an event-controlled manner, that is, the optimizer statistics are updated automatically.

Scheduling in the DBMGUI

Figure 5.10 Automatically Updating the Optimizer Statistics in the DBMGUI


You can also carry out these functions manually at the command line. The update_statistics statement uses the parameters outlined in Table 5.3.

Manual update SQL statement

Parameter             Description
schema_name           Name of the database schema
table_name            Table name of a basis table
column_name           Column name
sample_definition     ESTIMATE <sample_definition> ::= SAMPLE <unsigned_integer> ROWS | SAMPLE <unsigned_integer> PERCENT
AS PER SYSTEM TABLE   Causes the statistics for all tables that are listed in the SYSUPDSTATWANTED system table to be updated
identifier            Name of a basis table

Table 5.3  update_statistics_statement Parameters

Note for this statement that a user can only update tables and fields for which he has access rights. You can now select the statistics values from the OPTIMIZERINFORMATION system table. Here, each row maps the statistics values of indexes, columns, or sizes of a table. To update the optimizer statistics for all basis tables, proceed as follows: 1. Connect to the database instance with:
/opt/sdb/programs/bin/dbmcli -u <SYSDBA user>,<password> -d <database> [-n <database_host>]

2. Update the statistics of all tables:


UPDATE STATISTICS *

You can manually control the number of data records that should be analyzed for each table by specifying a SAMPLE_DEFINITION for the Estimate parameter. This enables you to configure how many table rows or what percentage of the table or column values the system scans. If you don't specify a SAMPLE_DEFINITION, the system uses random values. The size of the sample may considerably affect the runtime of the update run. If you don't specify this parameter, the system takes the size of the sample from the definition of the table. You should therefore also consider this aspect when creating tables, because it's important for the performance of the database. Because tables and their usage can change over time, you can also change or correct this value retroactively using the ALTER TABLE statement. You can likewise exclude a table from the entire optimization run by setting the size of the sample to 0 using the ALTER TABLE statement. If you don't specify a value for the Estimate parameter, the system scans the entire table, which may lead to long runtimes for large tables. If you use the UPDATE STATISTICS AS PER SYSTEM TABLE option, the system updates the statistics of the tables that are listed in the SYSUPDSTATWANTED system table (similar to the variant with the DBMGUI). When this process completes successfully, the system deletes the table names from this system table.
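The following SQL sketch summarizes these options (the table name is invented; verify the exact syntax against the documentation of your SAP MaxDB version):

   -- Store a default sample size of 20 percent in the table definition:
   ALTER TABLE inhabitants SAMPLE 20 PERCENT
   -- Override the sample size for a single statistics run:
   UPDATE STATISTICS inhabitants ESTIMATE SAMPLE 1000000 ROWS
   -- Exclude the table from statistics runs by setting the sample size to 0:
   ALTER TABLE inhabitants SAMPLE 0 ROWS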
Automatic update SQL statement

To schedule the update of the optimizer statistics automatically via the command line, you can use the auto_update_statistics statement:

1. Connect to the database instance with:
/opt/sdb/programs/bin/dbmcli -u <SYSDBA user>,<password> -d <database> [-n <database_host>]

2. Start the automatic, event-controlled update process:


auto_update_statistics <mode>

Three modes are available for the update:

- On: Enables the automatic update function. Note that this is event-controlled and based on the frequently mentioned SYSUPDSTATWANTED system table. Because this DBM command also requires a separate event task, ensure that the size of the _MAXEVENTTASKS database parameter is sufficient.
- Off: Disables the automatic update function.
- Show: Returns the current status of the automatic update function; possible values include:
  - On: The automatic update function is enabled.
  - Off: The automatic update function is disabled.
  - Unknown: The system couldn't determine the status of the automatic update function.
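Put together, a complete session could look like the following sketch; the database name, user, and password are placeholders, and the mode keywords are the ones listed above:

   /opt/sdb/programs/bin/dbmcli -d <database> -u <DBM user>,<password>
   auto_update_statistics On
   auto_update_statistics Show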


5.3.2    Criteria for Selecting Specific Access Strategies

Up-to-date optimizer statistics are critical for the optimizer to select the correct access strategy only for join operations. This section illustrates several significant query examples and describes why the respective access strategy has been selected. Which access strategy is selected depends on numerous factors:

- What kind of query is it, that is, between which columns does the WHERE clause differentiate?
- Do indexes exist, and what selectivity do they have?

The optimizer considers all of these aspects when it selects the access strategy.

The Sample Table

A table (Table) with seven columns (Column1 to Column7) that has a primary key of three columns (Column1, Column2, Column3) and an additional index for the fifth column (Column5) will serve as an example. The columns of the primary key have different selectivity: Column1 has a very low selectivity, while Column3 has a very high selectivity. Column2 has an average selectivity. Column5, which has an additional index, has a very high selectivity, similar to Column3.

Access via the Primary Key

For queries on tables, you should, in general, use all fields of the primary key in the query:
select * from table where Column1 = 'John' AND Column2 = 'Doe' AND Column3 = '10/12/1970'

This query is executed with the equal condition for key column execution strategy; that is, the system accesses the required data record(s) via the primary key. Because the data is also physically stored according to the order of the primary key, the primary key is ideal for supporting queries that don't use all fields of the primary key:

select * from table where Column1 = 'John' AND Column2 = 'Doe'

Equal condition and range condition


For this query, the system also uses the primary key. In this case, due to the physical arrangement of the data according to the primary key, the system can access the data via the first two key fields and identify the required data records in the primary key index, which includes all fields of the primary key. The strategy that implements this behavior is called range condition for key column.

Primary Key versus Index
Only one client

However, the execution plan mentioned isn't necessarily effective. In many tables in the SAP environment, the client is part of the primary key. If a system only has one client, which is often the case for BI, a query for all users from client 800 with the street Main Street may result in a full table scan:

select * from table where Column1 = '800' AND Column4 = 'Main Street'

For this query, the range condition for key column strategy is used, but the system has to scan all data records of the table. You can accelerate this query significantly by using an additional index for the Column4 column; this index would likely have a high selectivity. A major advantage of an index for Column4 is the structure of secondary indexes: you can use the values of the primary key, which are stored in the secondary index, to select the data. In this example, if you create a secondary index for Column4, the access strategy wouldn't use the primary key. Instead, the access takes place via the index for Column4 with the equal condition for indexed column strategy. It's also possible that the system uses the index for Column4 for the access despite a presumably bad selectivity of that index and a very high selectivity of the column Column1. This is the case when, during the check of the various access strategies, the system determines that Column4 doesn't contain the searched value and that the result set therefore is empty.
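A sketch of the remedy described above (the index name is invented; <table> stands for the sample table used throughout this section):

   -- With this secondary index, the optimizer can use the
   -- equal condition for indexed column strategy instead of scanning the table.
   CREATE INDEX idx_column4 ON <table> (Column4)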
Access Strategies

This chapter has distinguished between two strategies so far: The equal condition for indexed column strategy is a search strategy that evaluates data in a comparison operation but uses an inverted list; this strategy directly addresses table entries. For the range condition for key column strategy, the system scans portions of the table sequentially. In addition to the search strategies discussed here, you can view additional strategies using the Explain statement.


Index versus Full Table Scan

The system uses a full table scan if the query isn't sufficiently supported by the primary key or additional indexes. A full table scan is also used if a table is very small and the system needs to load fewer pages than it would for access via an index; after all, accessing an index also incurs costs, and for small tables the system has to scan all data records anyway. You can often avoid full table scans even for queries that only use fields that, individually, have a very low selectivity:

select * from table where Column4 = 'Financial Accounting' AND Column6 = 'Team Lead'
Avoiding a full table scan

To considerably accelerate the execution of this statement, you can use a composite index for the columns Column4 and Column6. Individually, each column has a very low selectivity; in combination, however, they can represent an acceptable decision criterion. As a result, this index can provide a sufficiently high selectivity to increase performance compared to a full table scan when accessing data. For small tables, you can determine this by proceeding as follows:

1. Open an SQL dialog via SQL Studio or via the dbmcli tool.
2. Enter the following statement:
Select distinct Column4,Column6 from table

The statement provides all combinations of the values of the two columns, Column4 and Column6. If the result set contains many values, you can assume that an index for these columns has enough selectivity.
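If the check indicates sufficient combined selectivity, the composite index itself is created with a single statement (the index name is invented):

   -- Composite secondary index over the two individually unselective columns.
   CREATE INDEX idx_column4_column6 ON <table> (Column4, Column6)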
Joins

Joins are database queries that link several tables using the values of one or more columns. It would go far beyond the scope of this book to describe the execution strategies for joins or for queries on database views (which are equivalent to join queries). Remember that optimizer statistics assume a central role in selecting the execution strategy: although the statistics aren't used for accesses to basis tables, they form a critical basis for the decision on the execution strategy of joins. If you come across unexpected execution strategies when analyzing joins or queries on database views, obsolete optimizer statistics may be the reason. In this case, update the statistics of all tables that are used by the join or view.
Distinctive optimizer statistics


Indexes for join columns are critical

Furthermore, you should generally provide an index with sufficient selectivity for those table columns you want to use for a join. If no single column is selective enough, you can also create an index over several columns, which, due to the combination, provides sufficient selectivity. However, afterward you have to adapt the join condition to the new index. Unfortunately, there is no universal solution to this problem, because each case usually has several, often very individual, approaches to its solution.
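As a sketch with invented table and column names: before analyzing a slow join, make sure the join columns are indexed and the statistics of all involved tables are up to date, for example:

   -- Index on the join column of the (presumably larger) orders table:
   CREATE INDEX idx_orders_customer ON orders (customer_id)
   -- Refresh the optimizer statistics of both tables involved in the join:
   UPDATE STATISTICS customers
   UPDATE STATISTICS orders
   -- The join itself:
   SELECT c.name, o.order_date
     FROM customers c, orders o
    WHERE c.customer_id = o.customer_id
      AND c.city = 'Seattle'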

5.4    Caches

Among other things, the caching strategies used at the database level are responsible for the high access speeds of today's database systems. An incorrect configuration of these caches can have very negative effects on performance. This section introduces the various caches of SAP MaxDB and their use, and describes how you can analyze the hit ratios. In addition, it covers the problem of choosing the appropriate cache size.

5.4.1    Background

Disk access is excruciatingly slow. This statement from Database Principles, Programming, and Performance by O'Neil (2001) pinpoints the core problem responsible for the existence of caches. To read data from a hard disk, the read/write heads must first be positioned on the right track; this is called the seek time. Because of the rotation, the read head then has to wait until it's positioned above the correct page; this time is the rotational latency. It's followed by the read time, also called transfer time, during which the required pages are read. Because all of these processes are mechanical actions, the access time is painfully slow compared to main memory access time. To give a sense of scale: reading several thousand bytes from disk takes approximately 0.003 seconds, whereas the same amount of data can be loaded from main memory in about 0.00000001 seconds. It's thus beneficial to keep data you need frequently in main memory, in caches. However, main memory doesn't ensure persistent storage of data, because its content is lost in the event of power outages or when the computer is shut down. And because there is less space available in main memory than on hard disks, the question of how to optimally assign main memory space to different applications arises.

0.003 sec versus 0.00000001 sec


5.4.2    The Various Caches


SAP MaxDB uses three caches: the I/O buffer cache, the catalog cache, and the log I/O queue. These caches are divided into different regions to enable parallel access and thus increase the write rate. When a region is accessed, it's locked against usage by a different user task. Collisions during access to regions lead to wait times until the regions are released and indicate a heavy CPU load. Usually, these locks are released within 1 microsecond. However, if the processor is experiencing a high load, the operating system dispatcher may withdraw the CPU from the user kernel thread (UKT) while the UKT still holds a lock. This increases the risk of collisions or queues.

Data Cache

Due to the large data caches, more than 98% of the read and write accesses in today's live SAP MaxDB installations are processed via the cache. Because it's very likely that data in the cache will be modified again, the system performs all data changes in the cache and makes them persistent by writing an entry to the redo log. The system then writes the data records from the data cache to the data volumes, and thus to disk, at regular intervals. If the system can't find data in the cache, it reads the entire page from the data volumes and writes it to the data cache so that the page can be reused from there. Because access to data in the data volumes is very slow and consequently expensive, a maximum data cache hit rate is always beneficial. A hit rate of 99% or more is nevertheless not a sufficient criterion, because the large number of statements that are processed via the data cache can hide a transaction with low performance. If a single statement has to load 10 pages with 1,000 data records to read a record and can then process the next 990 queries from the cache, the hit rate is 99%; still, this single statement has low performance. As long as enough physical main memory is available, the size of the I/O buffer cache should be as large as possible, because the read times in a large cache do not differ from the read times in a small cache, while the risk of physical data accesses is reduced. Several reasons can exist for the data cache hit rate to be below 99% over a long period of time; in most cases, the cache is too small and/or the SQL statements are inefficient. Section 5.5, Analysis Tools, describes how you can determine the cause.

Converter Cache
Converter cache for the assignment table

Because the database works only with logical pages, a mechanism is required that assigns logical pages to physical pages on the hard disk. The converter is responsible for this. The system imports the entire assignment table into the cache when the instance starts; you can't configure the size of this cache, because the system automatically assigns the required size at startup. If memory requirements increase during operation because new data volumes were added dynamically, the I/O buffer cache assigns memory to this cache.

Catalog Cache

Catalog cache for SQL statements

The catalog cache stores SQL statement information. This includes information on the parse process, input parameters, and output values. Unless shared SQL is active (SHAREDSQL parameter), the system stores these values for each user individually; if the same SQL statement is triggered by various users, the system then stores the statement several times. For each user task, the system reserves a specific area in the catalog cache and releases it as soon as the user session is completed. If this cache has reached its maximum fill level, the system moves the information to the data cache. The catalog cache should have a hit rate of more than 90%.

OMS Cache

OMS Cache for liveCache instances

The OMS cache is only used in the MaxDB liveCache instance type. This cache stores and manages data in a heap data structure, which consists of several linked trees. The system stores local copies of the OMS data in this heap; they are written to the heap when the system accesses a consistent view for the first time. The database system copies the data of each OMS version to the heap when it's read. To read a persistent object, SAP MaxDB first scans this heap. If it doesn't find the object there, it scans the data cache. Finally, the system reads the searched data from the data area into the data cache and then into the heap. The heap serves as a working area where the data is changed and written back to the data cache when a COMMIT is triggered. Because this buffer assumes a central role for liveCache instances, you should provide it with memory generously, within the limits of your hardware.


Log I/O Queue

To avoid having to write data changes scattered across the data volumes, which has a negative effect on the performance of write processes, the system records data changes in a redo log. The system writes to this redo log sequentially, which leads to write processes with high performance. Because the system stores all data changes in the redo log, you must use high-performance disks for this log volume. To accelerate the write processes to the redo log, the system caches them in log queues. The MAX_LOG_QUEUE_COUNT parameter defines the maximum number of log queues; the database, or the administrator using the LOG_QUEUE_COUNT parameter, determines how many queues are actually used. The LOG_IO_QUEUE parameter defines the size of the log queue(s) in pages of 8 KB.

The problem of the appropriate memory size applies to this cache as well: it should be large enough to buffer peaks of write processes to the redo log. The Database Analyzer, described in a moment, enables you to determine whether log queue overflows have occurred. These indicate that the log queue is full before the system can write the data to the log volumes; such situations lead to performance bottlenecks. In this case, check the hardware speed. If the hardware speed is too low for the amount of data that should be processed, expanding the log queue only delays the overflow situation. To avoid it, you can use the MaxLogWriterTasks parameter to increase the number of tasks that can simultaneously write data to the log volumes. If you combine this with locating the log volumes on different hard disks, you increase performance and thus prevent log queue overflows. You can solve the log queue overflow problem by expanding the log queue only if the hardware on which the log volumes are located is fast enough overall and the overflows occur as a result of single peaks in the dataset that has to be processed. You can also determine the maximum number of log queue pages the system has used so far; this information indicates the quality of the configured log queue size. If this value is significantly below the number of available pages in the cache over a long period of time, you can release main memory for other applications or caches by decreasing the size of this cache. However, you should keep a margin of safety for possible load peaks.
Increasing performance

Appropriate cache size
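To check the current log queue configuration, you can read the parameters mentioned above with dbmcli, for example as in the following sketch (database name and user are placeholders; depending on the SAP MaxDB version, some parameters may carry different names):

   /opt/sdb/programs/bin/dbmcli -d <database> -u <DBM user>,<password>
   param_directget MAX_LOG_QUEUE_COUNT
   param_directget LOG_QUEUE_COUNT
   param_directget LOG_IO_QUEUE
   param_directget MaxLogWriterTasks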


5.4.3    The Appropriate Size of the Caches


Basic issues

An insufficient cache size has a negative effect on SAP MaxDB performance. As a rule of thumb, about 66% of the entire main memory should be used by caches. If you configure more cache than is physically available on the hardware, this leads to swapping. This situation should be avoided at all costs, because it decreases system performance. SAP MaxDB allocates the configured cache (the memory space in the main memory of the server) during startup, that is, at the beginning of the Admin phase. This means that the configured cache is no longer available for other applications; if you configure too much cache, this may lead to memory bottlenecks for other applications. In general, the following is true: As long as the system provides enough main memory, a cache that's too large doesn't do any harm. The duration of a search for a data record in main memory doesn't depend on the size of the cache.

5.4.4    The Most Important Information in Caches


This section is a reference to enable you to quickly obtain the necessary cache information. It explains how you can obtain critical cache values such as their size and hit rates in the SAP system, in the DBMGUI, and via dbmcli.
Transaction DB50 in the SAP system

In the SAP system, Transaction DB50 provides a useful tool to acquire a quick and detailed overview of the current cache states. This is also possible using the DBMGUI. Unfortunately, requesting the cache status via dbmcli isn't particularly convenient; nonetheless, it's described as a possible option.

Viewing Caches in Transaction DB50

Transaction DB50 (see Figure 5.11) provides detailed cache and cache utilization information. To navigate to this data, proceed as follows:

1. First, log on to the SAP system.
2. Call Transaction DB50 to display the current status of SAP MaxDB. Next, access the overview screen.
3. Now, follow the path Current Status • Memory Areas • Caches.

The top area of the overview displays the cache sizes in bytes and pages. These values are very useful because you can't explicitly configure the size of some caches.


Figure 5.11 Cache Information Overview

Viewing Caches in the DBMGUI

In the DBMGUI, you can find the same values (see Figure 5.12) as in Transaction DB50 described previously. The only difference relates to the unit of the cache sizes: the DBMGUI uses megabytes, rounded to two decimal places, whereas Transaction DB50 displays the values in kilobytes. At first glance, the values seem to be different; however, this is due to the rounding and conversion. The values are in fact identical.

Figure 5.12 The Most Critical Cache Values as Displayed in the DBMGUI
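To illustrate the conversion with purely fictitious numbers: a data cache that Transaction DB50 displays as 786,432 KB appears in the DBMGUI as 768.00 MB, because 786,432 / 1,024 = 768. A value that isn't an exact multiple of 1,024 KB is simply rounded to two decimal places, which explains the apparent differences.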



To obtain this information, perform the following steps in the DBMGUI:

1. Double-click on an instance to connect to the database.

2. Next, open the cache overview via the Information • Caches menu.

3. Use the Refresh button at the top to update the values because they may change during operation.

The DBMGUI outputs the same data as Transaction DB50.

Viewing the Caches via dbmcli

You can also view the cache data via dbmcli at the command line. This, however, involves more effort because the system writes the data mentioned in the two previous sections to tables, so you must query it using SQL commands. This process is less user-friendly than in SQL Studio. Nonetheless, this section introduces these queries and their results using the dbmcli tool. The following SQL statement illustrates that some of the values are included in the IOBUFFERCACHES table. Because you can't explicitly configure the sizes of the data and converter caches, you can't obtain these values by outputting parameters. Instead, the database must provide them using tables. To have the system display the cache data, proceed as follows:


1. Connect to the database:


/opt/sdb/programs/bin/dbmcli -d MAXDB -n <host> -u <user>,<password>

2. Execute the following SQL command, which outputs the cache data. You don't have to place the statement inside quotation marks; simply write it after the sql_execute command.
/opt/sdb/programs/bin/dbmcli ON MAXDB> sql_execute
Select TOTALSIZE AS IOBUFFERCACHE_kB,
round(TOTALSIZE/8,0) AS TOTALSIZE_Pages,
DATACACHEUSEDSIZE AS DATACACHE_kB,
round(DATACACHEUSEDSIZE/8,0) AS DATACACHE_Pages,
CONVERTERUSEDSIZE AS CONVERTERCACHE_kB,
round(CONVERTERUSEDSIZE/8,0) AS CONVERTERUSEDSIZE_Pages,
(TOTALSIZE-DATACACHEUSEDSIZE-CONVERTERUSEDSIZE) AS MISC_kB,
round((TOTALSIZE-DATACACHEUSEDSIZE-CONVERTERUSEDSIZE)/8,0) AS MISC_Pages
From IOBUFFERCACHES

Figure 5.13 shows sample output. It lists the individual selected values sequentially. However, this output is not very readable, and you must interpret the values yourself. You should therefore log the values at regular intervals so that you can create analyses and determine and eliminate bottlenecks at an early stage.


Figure 5.13 Result of an SQL Query on the Size of the Data and Converter Caches
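If you want to automate this regular logging, a small shell script that appends the query result to a file and is scheduled via cron (for example, every few minutes) is sufficient. The following is only a minimal sketch; the path, database name, user placeholders, and log file location are taken over from the examples above or freely chosen and must be adapted to your environment:

#!/bin/sh
# Append a timestamp and the current cache usage to a log file
LOGFILE=/var/log/maxdb_cache_usage.log
date >> $LOGFILE
# The SQL statement is quoted so that the shell passes it to dbmcli as one argument
/opt/sdb/programs/bin/dbmcli -d MAXDB -u <user>,<password> \
  "sql_execute select TOTALSIZE, DATACACHEUSEDSIZE, CONVERTERUSEDSIZE from IOBUFFERCACHES" >> $LOGFILE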

Reading Additional Caches via dbmcli

In addition to the caches already described, there are caches whose size you can configure directly. As shown in Figure 5.14, you can simply read their sizes from the database parameters. Proceed as follows:

1. Connect to the database:
/opt/sdb/programs/bin/dbmcli -d MAXDB -n <host> -u <user>,<password>

2. Execute the following commands to output the current sizes of the caches:
param_directget CAT_CACHE_SUPPLY
param_directget SEQUENCE_CACHE


Figure 5.14 Reading the Sizes of the Remaining Caches from the Database Parameters

Reading Cache Hit Rates via dbmcli

Reading the hit rates of the various caches is much easier. To do so, you again need SQL because the data changes dynamically during operation and is therefore provided by the database in tables. To make the data easier to evaluate, the system provides descriptions of the individual values: The DESCRIPTION column contains a brief description of the respective value. Proceed as follows:

1. Connect to the database:


/opt/sdb/programs/bin/dbmcli -d MAXDB -u control,control

2. Execute the following command to output the current cache hit rates:
sql_execute select * from monitor_caches

The result of this query is illustrated in Figure 5.15. In contrast to the previous statements, this statement doesn't involve additional calculation work because the system can determine the hit rate from the ratio of successful accesses to the total number of accesses. This result is stored in the monitor_caches system table. The values of the OMS caches indicate that this example is not a liveCache instance: For example, the size of the OMS cache is zero.


Figure 5.15 Cache Hit Rates from the monitor_caches Table
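If you're only interested in individual values from this table, you can narrow down the result with a WHERE condition on the DESCRIPTION column mentioned above. The exact wording of the descriptions depends on the MaxDB version, so the search pattern in the following sketch is an assumption that you may need to adapt after looking at the full output once:

sql_execute select * from monitor_caches where DESCRIPTION like '%hit%'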

5.4.5 Critical Region Statistics


The caches are divided into different access areas, also referred to as critical regions, to accelerate concurrent accesses that lock data areas. This section describes how you can analyze the critical regions using the most important tools and transactions.

Critical Regions in Transaction DB50

You can use Transaction DB50 to display the critical regions as a table. Figure 5.16 shows sample output.


Figure 5.16 Statistics of the Critical Regions in Transaction DB50

To navigate to an overview such as the one shown in Figure 5.16, proceed as follows:

1. Log on to the SAP system.

2. Start Transaction DB50.

3. Navigate to the overview of the critical regions via Current Status • Critical Regions.

If you determine that the collision rate shown in the overview is too high, you should take appropriate countermeasures, such as increasing the size of the cache.

Displaying Critical Regions via dbmcli

Like the data on cache sizes, the access statistics for critical regions aren't static; SAP MaxDB logs them continuously and provides them in aggregated form in the REGIONSTATISTICS table. To have the system display the region data via the command line, proceed as follows:

1. Connect to the database:
/opt/sdb/programs/bin/dbmcli -d MAXDB -u control,control

2. Execute the following command to output the access statistics of the critical regions:


dbmcli> sql_execute
select REGIONID AS ID,
REGIONNAME AS Name,
round((COLLISIONCOUNT*100)/ACCESSCOUNT,2) AS CollisionRate,
WAITCOUNT AS Waits,
ACCESSCOUNT AS Accesses
from REGIONSTATISTICS
where ACCESSCOUNT > 0

In this example, the WHERE condition excludes all rows that would result in a division by zero. This doesn't affect the information content: The system divides by the value of the ACCESSCOUNT column, and if this value is zero, the critical region hasn't been accessed at all and thus couldn't cause any wait times. Figure 5.17 shows the output of this SQL statement.


Figure 5.17 Critical Region Access Statistics
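If you nevertheless want the regions without any accesses to appear in the result, you can avoid the division by zero with a CASE expression instead of the WHERE condition. The following variant is only a sketch that reuses the column and table names from the statement above; it reports a collision rate of 0 for regions that haven't been accessed:

dbmcli> sql_execute
select REGIONID AS ID, REGIONNAME AS Name,
case when ACCESSCOUNT > 0
  then round((COLLISIONCOUNT*100)/ACCESSCOUNT,2)
  else 0 end AS CollisionRate,
WAITCOUNT AS Waits, ACCESSCOUNT AS Accesses
from REGIONSTATISTICS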

5.5 Analysis Tools

When you have correctly configured all indexes and sufficiently sized all caches, it may be possible that due to data growth and changes in usage


Index
32-bit, 14 Bad index, 193 Recognize, 193 Remove via the DBMGUI, 194 Resolve via the dbmcli, 195 Before image, 67 B* tree, 65, 210, 211

A
ABAP stack, 90 Absolute path, 81 ACTION, 126 ADABAS, 12 ADA_SQLDBC, 107 After image, 67 Analysis tool, 237 appldiag, 285 Application level, 91 Application server, 89 Area of use, 16 AS ABAP, 90 Asdev thread, 58 AS Java, 90 Automatic log backup, 139, 174

C
Cache, 63, 226, 230 Hit rate via the dbmcli, 234 Size, 70, 230 CacheMemorySize, 141 Catalog cache, 69, 227, 228 Central user administration, 54 Client-server architecture, 92 Command, 253 Command Monitor, 254, 256, 276 Configuration, 256 Command reference, 35 Component group, 116 Configuration file, 86, 87 Configuration type Custom, 133, 135 Desktop PC/Laptop, 133 Desktop PC/Laptop , 133 My Templates, 133 Configuration Type, 133 Configuration type My Templates, 139 Consistency check, 196 Database structure, 197 Consisteny check Check database structure via the dbmcli, 199 Console thread, 57 Control user, 128 Converter, 64, 65 Converter cache, 228 Cooperative multitasking, 60 Coordinator thread, 56 Critical region, 235 Cyclical writing, 73

B
Backup, 163 Backup strategy, 166 Backup types, 164 Call state, 173 Check last backup, 183 Check medium/template, 184 Check via the dbmcli, 184 Check via the DBMGUI, 182 Concepts, 163 Duration, 172 History, 165, 180, 191 Implement via the dbmcli, 172 Implement via the DBMGUI, 171 Incremental, 171 State of check, 184 Template, 166 Backup medium, 166 Create via the dbmcli, 169 Create via the DBMGUI, 167 Delete, 170 Delete via the dbmcli, 170 Delete via the DBMGUI, 170 Parallel medium, 168 Properties, 168 Types of media, 166

D
Data area extension, 298


Database Activate, 142 Assistant, 103 Configure, 146 Console, 106 Create on command line, 140 Create via GUI, 132 Create via script, 143 Delete via the dbmcli, 202 Delete via the Installation Manager, 200 Instance, 55, 56, 84 Level, 92 Operators, 47, 48 Parameter, 84 Plan, 130 Trace, 106 Database Analyzer, 42, 238, 268 Configuration file, 241 Log file, 247 Start via the command line, 240 Start via the dbmcli, 239 Start via the DBMGUI, 238 Database manager Operator, 46, 136 Database Manager CLI, 34, 84 GUI, 30, 83 Database Monitor, 112, 113 Database optimizer, 218 Cost based optimizer, 218 Optimizer statistics, 219 Rule based optimizer, 218 Size of the sample, 221 Updating optimizer statistics, 219 Database Studio, 27, 29, 83 Database system administrator, 138, 201 Data cache, 64, 227 Data export, 43 Data import, 43 Data record lock, 300 Data transport, 43 Data volume, 71, 131 Add via the dbmcli, 148 Add via the DBMGUI, 146 Adjust, 138 Create, 137 Create a dynamic data volume, 149 Delete, 153 Properties, 147 Volume restriction, 131 Data warehouse, 22

DB50, 230, 235, 268 DBA, 49 DBA history, 109 DBA Planning Calendar, 106, 110 dbmcli, 34 dbm.ebl, 289, 303 dbm.ebp, 289, 303 DBMGETF, 44 DBMGui, 30 dbm.knl, 288 DBM operator, 47 dbm.prt, 285 dbm.utl, 287 DB Time, 266 Dependent program path, 80, 81, 120 Dev thread, 57 Dev trace, 282 Diagnosis file, 281 Directory structure, 80, 82 Dispatcher, 90 Documentation, 17 Drill down, 22

E
Equal condition for index column, 224 Event, 62 Exclusive lock, 79 Execution costs, 217 Execution plan, 208 EXPLAIN, 260

F
FILE, 76 File directory, 64, 66 Full table scan, 225

G
Garbage collector, 25 GETDBROOT, 45

H
Hard disk, 55 History of origins, 11 Hot standby, 33

I
Independent data path, 80, 81, 120 Independent program path, 80, 81, 120, 125, 147


Indexes, 208 B* tree, 209 Execution costs, 217 Inverted list, 215 LONG values in B* trees, 213 Primary key, 211 Secondary key, 211 Installation In the background, 121 In the dialog, 116 Log file, 125 Manager, 123, 200 Phase, 119 Troubleshooting, 125 Type, 124 Installation profile, 117, 118 INSTALLER_INFO, 125 Instance type, 21 Interface, 15, 96 Inverted list, 215, 216 I/O buffer cache, 24, 64, 227 I/O worker thread, 58 IPC (Inter-Process Communication), 99 Isolation level, 79

J
Java stack, 90 JDBC interface, 38 JDBC (Java Database Connectivity), 97 Joins, 225

Loader, 42 Lock escalation, 79, 80 Lock list, 79 Log backup, 74, 111, 174 Automatic, 174 Implement via the dbmcli, 177 Implement via the DBMGUI, 175 Log file, 247 Log full, 75 Log I/O queue, 227, 229 Log mode Configuration, 156 Overwrite mode, 156 Redo log management, 157 Log partition, 73 Log queue, 67 Log segment, 75 Log volume, 72 Adjust, 138 Create, 137 Create via the DBMGUI, 150 Create via the dbmcli, 151 Mirror, 154 Overwrite mode, 142 Properties, 150 Reintegrate mirrors, 192 Log writer, 62

M
Main memory, 55 MaxCPUs, 59 Microsoft Windows, 14 Mirroring, 74 Monitor, 244 MSG, 126

K
Kernel, 55 Thread, 56 Trace, 291 Variant, 84 knldiag, 286 knldiag.err, 287 knldump, 292 KnlMsg, 286 KnlMsgArchive, 287 knltrace, 291

N
.NET Wrapper, 98

O
Object identifier, 24 ODBC, 96 Offline mode, 129 OLAP cubes, 22 OLAP (Online Analytical Processing), 16, 21, 22 OLTP (Online Transaction Processing), 16, 21 OMS cache, 228 OMS heap, 25

L
License, 12 LINK, 76 Linux, 14, 28 liveCache, 23, 228 Load analysis, 266 Load balancing, 62


OMS (Object Management System), 24 One-layer architecture, 92 Operating systems, 14 Operating system user, 52 Operational state, 33, 83 Optimistic lock, 79 Optimizer statistics, 219, 220, 225 Optimizer types, 218 Overwrite mode, 139

Log full situation, 295 System blockade, 300 System crash, 299 Python, 99

R
RAID, 71, 131 Range condition for key column, 224 RAW, 76 RAW device, 132 Recovery, 186 Implement via the dbmcli, 190 Implement via the DBMGUI, 188 Strategy, 187 Type, 186 With initialization, 188, 191 Without initialization, 191 Requestor thread, 56 Resource Monitor, 249, 250, 251, 271, 272 ROLAP (Relational OLAP), 22 Role concept, 50 Roll up, 22 root, 53 Root page, 211 rtedump, 290 RUNDIRECTORY, 163

P
Page, 24 Page chain, 24 Pager, 62 Parallelization, 61 Parameter Change, 159 Change via the dbmcli, 161 Change via the DBMGUI, 160 Commit, 141 Copy, 162 Copy to another database, 162 Group, 85 Initialize, 141 _ IOPROCS_ PER_ DEV, 130 Parameter category, 160 Session, 86 Start parameter session, 140 Parameter initialization, 136 Copy parameters from existing database, 136 Initialize parameters with default value, 136 Restore parameters from a backup, 136 Use current parameters, 137 Performance, 208 Perl, 99 pgm/kernel, 129 PHP, 99 Pointer, 24 Port, 100 Position index, 24 Preparing phase, 119 Presentation level, 91 Primary key, 223 Problem situation Connection problems, 294 Data full situation, 295, 297 Hardware error, 303

S
SAP, 94, 96 SAP architecture, 89 SAPCAR, 116 SAP CCMS, 103, 112 SAP Content Server, 16 SAP DB, 12 SAP Developer Network, 17 SAPInst, 72, 144 Error case, 146 Log file, 145 Log file for MaxDB installation, 145 Phases, 144 SAP landscape, 101 SAP NetWeaver AS, 263, 267 SAProuter, 101 SAP standard user, 51 Savepoint, 64, 71, 76 sdb, 53 sdba, 53 SDBINST, 115, 116, 121


SDBREGVIEW, 44 SDBSETUP, 115, 123, 204 SDBUPD, 126, 128, 129 Search criterion, 217 Security aspects, 54 Selectivity, 258 Sequence cache, 70 Server landscapes, 29 Server software Uninstallation, 203 Uninstallation via sdbunsint, 205 Server task, 61 Service session, 184 Servlet container, 38 Shadow page mechanism, 64 Shared lock, 79 Shared SQL cache, 69 Slice and dice, 22 Snapshot, 177 Create via the dbmcli, 179 Create via the DBMGUI, 178 Delete via the dbmcli, 181 Delete via the DBMGUI, 181 Functionality, 178 Revert via the dbmcli, 181 Revert via the DBMGUI, 180 Software component group, 117 sql6, 100 sql30, 100 SQL CLI, 36 SQLDBC, 97 SQLDBC trace, 283 SQLDBC Trace, 107 SQL editor, 29 SQL Explain, 260 SQL interface, 102 SQL mode, 15 SQL Studio, 35, 106 SQL trace, 282 SQL user, 49 Standard SAP user, 94 Star schema, 22 STDIN, 125 STDOUT, 125 Striping, 72 Support groups, 54 SYS, 125 SYSDBA user, 46, 50 SYSDB user, 45 SYSMONITOR, 254, 255

System table, 129, 142, 158 Load, 159 System table category, 158

T
Table editor, 29 TCP port, 32 Template, 134 Three-layer architecture, 92 Timer, 62 Timer thread, 57, 63 Tomcat, 38 Trace file, 108 Trace writer, 62 Transaction, 102 CCMS, 304 DB12, 108 DB13, 109 DB50, 103, 230, 267, 271, 276, 279 DBCO, 95 RZ20, 112 ST03N, 264, 266 Transaction profile, 264 Tutorial data, 139 Two-layer architecture, 92

U
Uninstallation Summary, 204 UNIX, 14 Update, 126 Upgrade, 129 User kernel thread, 61 User rights, 47, 48 User task, 60, 94 User type, 46 Utility, 62 Utility session, 177, 191 Open, 173

V
Version name, 14, 15 View, 242 Visual query editor, 29 Vwait, 301

W
Watchdog process, 58 Web Database Manager, 38


Web SQL, 37 Work process, 94

X
X_CONS, 39, 59 XINSTINFO, 44

X_PING, 45 X Server, 27 XServer, 100, 127, 128 X Server log, 284 XUSER, 40
