
IBM B2B INTEGRATOR TUNING

BEST PRACTICES
FOR DB2 PURESCALE
2014-11-27

Authors:
Steve McDuff (mcduffs@ca.ibm.com)
Mark DuFresne (mark_dufresne@us.ibm.com)
Deep Chatterji (dchatter@us.ibm.com)
Kostas Rakopoulos (kostasr@ca.ibm.com)
Steve Rees (srees@ca.ibm.com)

CONTENTS
Pushing the limits of B2B Integrator
What is DB2 pureScale?
Test Methodology
    Measurements
    Test description
        Test 1 : Peak Inbound Throughput
        Test 2 : Index and purge throughput
        Test 3 : Long duration test
        Test 4 : Planned failover
        Test 5 : Unplanned failover
    Hardware Setup
        B2B Integrator Hardware
        DB2 pureScale Hardware
    Software Setup
Test Results
    Test 1 : Peak Inbound Throughput
    Test 2 : Index and purge throughput
    Test 3 : Long duration test
    Test 4 : Planned failover
    Test 5 : Unplanned failover
Conclusion
B2Bi Database tuning recommendations
    Volatile cardinality
    Adaptive compression
    Append mode
    Added index
    Larger table space extent size for tables that grow quickly
    Table partitioning
    Random Indexes
    Inlining
Database tuning tools
    Query & verify JDBC driver version
    Total database size
    Identifying longest queries
    Explain tools

PUSHING THE LIMITS OF B2B INTEGRATOR


The performance of B2B Integrator depends heavily on the performance of the underlying database. Improving
the performance of the database is one of the easiest ways to scale the application. In this article, we evaluate
the impact of DB2 pureScale on B2B Integrator's scalability. We also share a few tools and techniques
we used to monitor and tune the application.

WHAT IS DB2 PURESCALE?


The DB2 pureScale Feature delivers a high level of availability and scalability, giving you the ability to scale out
your database over a number of servers in an active-active configuration. The DB2 pureScale Feature offers
virtually unlimited capacity for growth because you can dynamically add servers to a DB2 pureScale instance
with no disruption to cluster operation. The DB2 pureScale Feature also provides near-continuous availability
by being able to tolerate component failures while continuing to provide full access to data that does not have
to be recovered.
The DB2 pureScale Feature offers true application transparency. Applications require no knowledge of the
underlying database system topology and do not require any modifications to run on the DB2 pureScale Feature.
Applications need not be cluster-aware to scale, as can be the case with other scale-out architectures
that rely on a heavy network-based messaging infrastructure for sharing data in a cluster.

TEST METHODOLOGY
The performance of a B2Bi system depends on the business processes it runs. Some of them depend on the
CPU capacity of the B2Bi system while others rely heavily on the database. Since the goal was to test the limits
of B2Bi on DB2 pureScale, we opted to run a simple business process that reads the request, validates it,
stores the document in the database and sends a simple OK Message Disposition Notification (MDN).

MEASUREMENTS
The following characteristics of the system were measured during our tests:

The average response time of requests from the trading partner's perspective
The number of requests processed per second
The CPU load % on the B2Bi system
The CPU load % on the DB2 pureScale members

TEST DESCRIPTION

TEST 1 : PEAK INBOUND THROUGHPUT


In this test, we will simulate the largest number of concurrent clients we can reach. We will stop
increasing load when either the database CPU exceeds 75% utilization or the request response time
starts to degrade. All background processes will be disabled during this test.
To measure the impact of database clustering, this test will be run with DB2 pureScale configurations
of 1, 2 and 4 active members.

TEST 2 : INDEX AND PURGE THROUGHPUT


This test measures how fast documents can be indexed and purged out of the system, which determines
how many documents per day the system can handle. The speed of index and purge is typically linked to the
number of business process steps. To keep the results of this test consistent with the inbound throughput
test results, we opted to measure the result in terms of the time it took to index and purge a complete
client request, which usually translated to 4 business process steps.

TEST 3 : LONG DURATION TEST


This test measures the performance of B2Bi over a long duration. Taking the lower of the throughput
numbers from tests 1 and 2, we will push the system at 60% of that rate over a 3-day period.
All background processes will be enabled during this test.

TEST 4 : PLANNED FAILOVER


This test simulates taking a DB2 pureScale member offline to perform planned maintenance. It is done
by running 4 active members under load and taking one of them offline gracefully by using the
db2stop quiesce command.
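A minimal sketch of the command, assuming member 2 is the one being serviced (the member number and timeout here are illustrative, not values from our test):

db2stop member 2 quiesce 30

The QUIESCE option waits for in-flight work on that member to complete, up to the specified number of minutes, before the member is stopped.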

TEST 5 : UNPLANNED FAILOVER


This test simulates taking a DB2 pureScale member offline in an abrupt fashion. It is done by running 4
active members under load and taking one of them offline by shutting down the operating system.

HARDWARE SETUP
B2B INTEGRATOR HARDWARE

Cluster of 4 B2Bi Application Servers

x3550 M3 AC1
CPU : 2 x Intel Xeon X5660, 6 cores, 2.80 GHz
Memory : 48 GB 1333 MHz
HDD : 2 x 300 GB
OS : RHEL 6.2
Network : 10 Gigabit Ethernet

DB2 PURESCALE HARDWARE


We used a small IBM PureData System for Transactions (PDTX) configuration with the following specifications:

4 active DB2 pureScale members
CPU : 2 x Intel Xeon E5-2670 per host, 8 cores, 2.6 GHz
Memory : 256 GB per host
Storage : IBM Storwize V7000 storage system, RAID 10

SOFTWARE SETUP

JDBC workload balancing (see the connection example below)
Protocol : AS2
Request size : 1 KB
1 hour document retention
Minimum persistence level
DB2 pureScale automated online runstats
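For reference, workload balancing is driven by IBM Data Server JDBC (JCC) driver properties. A minimal connection URL sketch follows; the host, port and database name are placeholders, not our lab values:

jdbc:db2://pdtx-host:50001/B2BIDB:enableSysplexWLB=true;

With enableSysplexWLB=true, the driver distributes transactions across the active pureScale members based on server-provided workload information.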

TEST RESULTS
TEST 1 : PEAK INBOUND THROUGHPUT
To satisfy the testing criteria, we used the following number of parallel clients:

1 active DB2 pureScale member : 50 parallel clients
2 active DB2 pureScale members : 100 parallel clients
4 active DB2 pureScale members : 100 parallel clients

[Figure: Throughput by active pureScale member count - 285 documents per second with 1 member (50 clients), 380 documents per second with 2 members (100 clients, +32%) and 380 documents per second with 4 members (100 clients).]

[Figure: Average pureScale member CPU utilization by active member count - 67% with 1 member (50 clients), 73% with 2 members (100 clients) and 50% with 4 members (100 clients, -33%).]

[Figure: Average response time by active member count - 174 milliseconds with 1 member (50 clients), 262 milliseconds with 2 members (100 clients) and 262 milliseconds with 4 members (100 clients).]
From 1 to 2 active DB2 pureScale members :
The system throughput showed a 32% improvement when using a 2 member configuration,
while the response time went up by 50%.
When DB2 pureScale operates on a single member, there is no data contention with other
members. Depending on the data-sharing characteristics of the workload, adding another
member can force both members to exchange ownership of some of the data pages they need
to access. In this case, the need to exchange ownership of pages partly explains why the
average response time increased. Other possible causes include additional contention at the
disk storage layer, which provided a fixed amount of capacity in this test regardless of the
number of members used. However, the system can now handle a larger number of
requests in parallel.

From 2 to 4 active DB2 pureScale members :

While the throughput and response time remained the same, the average CPU load on the
system went down by 33%. Running on a 4 member system increases the system's resiliency:
if any of the 4 members goes offline, we know that the remaining members will be able to
handle the same traffic volume.

TEST 2 : INDEX AND PURGE THROUGHPUT


In our setup, we measured the indexing speed at 80 requests per second. The purge process speed ran
at 120 requests per second. Since both steps are required to eliminate a document from the system,
we took the lower of the two measurements.

TEST 3 : LONG DURATION TEST


Using results from tests 1 and 2, the lowest number we reached was 80 requests per second. We opted
to use 50 requests per second, which represents 62.5% of our system capacity. We ran this test for 3
days, resulting in over 13 million requests processed.

[Figure: 3 Days of AS2 Transactions - documents per second held steady at the target rate of 50 throughout the 3-day run.]

During this test, the system showed no signs of performance degradation.

TEST 4 : PLANNED FAILOVER

[Figure: Throughput and response time during a planned failover, from 15:10:32 to 15:25:33 - only a minor, brief dip in throughput at the member shutdown time.]

The degradation in throughput of the system was barely noticeable after the DB2 pureScale member
was taken down.

TEST 5 : UNPLANNED FAILOVER

When the DB2 pureScale member is abruptly shut down, it takes around 20 seconds for the JDBC driver
to notice that connections need to be rerouted.
Monitoring throughput and response time from the trading partner's perspective, we noticed that
throughput was impacted for about 30 seconds while the system rolled back in-flight transactions with
data changes on the failed member, resolved locks and rerouted connections. Most transactions are
not rolled back, but they may be delayed without error during this recovery interval. After the overall
recovery and reroute process completes, normal throughput and response time resume. The total
recovery time depends on the work in progress when the failure occurred and the nature of the workload.
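The detection and retry behavior can be influenced by the JCC automatic client reroute properties. The URL below is a sketch with illustrative values; these are not the settings used in this benchmark:

jdbc:db2://pdtx-host:50001/B2BIDB:enableSysplexWLB=true;enableSeamlessFailover=1;maxRetriesForClientReroute=3;retryIntervalForClientReroute=2;

enableSeamlessFailover lets the driver retry eligible work without surfacing an error to the application, while the retry count and interval (in seconds) bound how long the driver spends rerouting a failed connection.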

CONCLUSION
Our tests proved that DB2 pureScale is well suited to run high volume B2Bi deployments. Compared to
a traditional DB2 instance, DB2 pureScale handles increased throughput, increases the system's
availability and does a better job of leveraging the allocated hardware.

ADDITIONAL ACKNOWLEDGEMENTS
Special thanks to the following people who contributed to this benchmark:
Bryan Johnson (bryanjoh@us.ibm.com)
Kevin Yu (kevinyu@ca.ibm.com)
Yassir Warraich (warraich@ca.ibm.com)

B2BI DATABASE TUNING RECOMMENDATIONS


VOLATILE CARDINALITY
Tables that are used for transient data, that is, tables that go from zero rows to 50K rows and back to zero
rows, are good candidates for volatile cardinality. This is because the DB2 optimizer relies on statistics collected
with runstats. If statistics were taken when a table was empty and a query hits when the table is large, the
optimizer can choose an inappropriate or poorly performing access plan. Marking a table as volatile forces the
optimizer to use index lookup wherever possible. Note that even when the VOLATILE keyword is used, it's still
highly recommended to run runstats when there is some data in the table, to give the optimizer the best
information possible when choosing a plan.
Each table can be modified using a statement like the following:
ALTER TABLE "SIDBU1

"."WORKFLOW_CONTEXT" VOLATILE CARDINALITY;

ADAPTIVE COMPRESSION
Adaptive compression was introduced in DB2 10.1. It adds page-level compression on top of the usual
table-level compression, which leads to better compression ratios and fewer I/Os. The following B2Bi tables
were altered to add adaptive compression because they include BLOB columns:

ARCHIVE_INFO
CORRELATION_SET
DATA_TABLE
DOCUMENT
TRANS_DATA
WF_INST_S
WORKFLOW_CONTEXT
WORKFLOW_LINKAGE

Prior to DB2 10.1, compression had to be defined in the CREATE TABLE statement; as of 10.1 we can ALTER
tables. For example:

ALTER TABLE "SIDBU1"."WORKFLOW_CONTEXT" COMPRESS YES ADAPTIVE;

Adaptive compression was enabled in the CREATE TABLE statements for this project. This ensured that all
rows and pages were compressed from the start. If the ALTER TABLE statement is used, existing rows in the
table can be compressed by executing one of:

REORG
Online table move
Unload/load data
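As a sketch of how the REORG route might be done and then verified, using the DB2 10.1 form of the compression-estimation table function (column names per the DB2 10.1 documentation):

-- Rebuild the table so that existing rows are compressed
REORG TABLE SIDBU1.WORKFLOW_CONTEXT;

-- Compare the current savings against the estimated adaptive savings
SELECT TABNAME, ROWCOMPMODE, PCTPAGESSAVED_CURRENT, PCTPAGESSAVED_ADAPTIVE
FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('SIDBU1', 'WORKFLOW_CONTEXT'));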

APPEND MODE
Append mode forces the database to insert new data at the end of a table instead of looking for a free spot in
existing data pages. This can result in faster data insertion speed.
The following B2Bi tables were altered to enable append mode:

ACT_SESSION_GUID
ARCHIVE_INFO
DOCUMENT
MSGMDNCORRELATION
TRANS_DATA
WF_INST_S
WORKFLOW_CONTEXT
WORKFLOW_LINKAGE

Each table can be modified using a statement like the following:

ALTER TABLE "SIDBU1"."WORKFLOW_CONTEXT" APPEND ON;

ADDED INDEX
Analysis of long-running queries exposed the need for an additional index for our AS2 workload. Other B2Bi
workloads may or may not benefit from adding this index.
CREATE INDEX "SIDBU1 "."MAD_exp" ON "SIDBU1 "."WORKFLOW_CONTEXT"
("STEP_ID" RANDOM,
"ACTIVITYINFO_ID" RANDOM,
"WORKFLOW_ID" RANDOM)
PARTITIONED
COMPRESS NO
INCLUDE NULL KEYS ALLOW REVERSE SCANS;

LARGER TABLE SPACE EXTENT SIZE FOR TABLES THAT GROW QUICKLY
The table space extent size controls how many new data pages are added to a table when it needs to grow.
For tables that grow quickly, frequently extending the table can be avoided by moving the table into a
table space with a larger extent size. Two new table spaces with larger extent sizes (and their corresponding
buffer pools) were created, and the following tables were moved to those new table spaces:
Table name          Tablespace name
ACT_SESSION_GUID    extTS_4K
ARCHIVE_INFO        extTS_4K
DOCUMENT            extTS_4K
EDIINTDOC           extTS_4K
MSGMDNCORRELATION   extTS_4K
TRANS_DATA          extTS_32K
WF_INST_S           extTS_4K
WORKFLOW_CONTEXT    extTS_4K
WORKFLOW_LINKAGE    extTS_4K
Most of the tables in the above list were created with 4K page sizes; one table was created with a tablespace
using 32K page sizes. An example statement to create the new tablespaces follows:
CREATE LARGE TABLESPACE "extTS_32K" IN DATABASE PARTITION GROUP IBMDEFAULTGROUP
PAGESIZE 32768 MANAGED BY AUTOMATIC STORAGE
USING STOGROUP "IBMSTOGROUP"
AUTORESIZE YES
INITIALSIZE 64 M
MAXSIZE NONE
EXTENTSIZE 256
PREFETCHSIZE AUTOMATIC
BUFFERPOOL "BP_32K"
DATA TAG INHERIT
OVERHEAD INHERIT
TRANSFERRATE INHERIT
NO FILE SYSTEM CACHING
DROPPED TABLE RECOVERY ON;

Once the tablespaces have been created with larger extent sizes, the tables can be created or moved. See the
CREATE TABLE statement in the following section for an example of how the table is created in its new tablespace.
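Existing tables can also be relocated online with the ADMIN_MOVE_TABLE procedure. The call below is a sketch for one of the 4K tables, using the 11-parameter form with the optional arguments left empty:

CALL SYSPROC.ADMIN_MOVE_TABLE(
  'SIDBU1', 'DOCUMENT',       -- schema and table to move
  'extTS_4K',                 -- target data table space
  'extTS_4K',                 -- target index table space
  'extTS_4K',                 -- target LOB table space
  '', '', '', '', '',         -- MDC, partitioning, data partition, column defs, options
  'MOVE');                    -- perform all move phases in one call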

TABLE PARTITIONING
Table partitioning was introduced in DB2 pureScale in version 10.1. Each table partition is a separate database
object, unlike an unpartitioned table, which is a single database object. This helps the database perform
insertions in parallel. We used 40 partitions on the tables that were the most solicited during data insertion.
This change had the biggest impact on our insertion throughput.
A new column, WORKFLOW_ID_MOD, was added to each of the six tables listed below (see the sample DDL
that follows). The new column is calculated using the following formula, where the division is integer division:

WORKFLOW_ID_MOD = WF_ID - (WF_ID / 40) * 40

In other words, WORKFLOW_ID_MOD is WF_ID modulo 40, which spreads inserted rows across the 40 partitions.

The following tables were partitioned because they were the targets of many rapid insert statements during
our AS2 load.

ARCHIVE_INFO
DOCUMENT
TRANS_DATA
WF_INST_S
WORKFLOW_CONTEXT
WORKFLOW_LINKAGE

An example of the partitioning syntax used follows:


CREATE TABLE "SIDBU1 "."TRANS_DATA" (
"DATA_ID" VARCHAR(255 OCTETS) NOT NULL ,
"DATA_OBJECT" BLOB(200000000) INLINE LENGTH 1200 LOGGED NOT COMPACT NOT NULL ,
"PAGE_INDEX" INTEGER NOT NULL ,
"DATA_TYPE" INTEGER NOT NULL ,
"WF_ID" BIGINT ,
"REFERENCE_TABLE" VARCHAR(255 OCTETS) ,

"CUM_SIZE" BIGINT ,
"WORKFLOW_ID_MOD" BIGINT NOT NULL IMPLICITLY HIDDEN GENERATED ALWAYS AS (WF_ID WF_ID/40 * 40) )

COMPRESS YES ADAPTIVE


PARTITION BY RANGE("WORKFLOW_ID_MOD")
(PART "PARTneg" STARTING(-1) IN "extTS_32K",
PART "PART00" STARTING(0) IN "extTS_32K",
PART "PART01" STARTING(1) IN "extTS_32K",
PART "PART02" STARTING(2) IN "extTS_32K",
PART "PART03" STARTING(3) IN "extTS_32K",
PART "PART04" STARTING(4) IN "extTS_32K",
PART "PART05" STARTING(5) IN "extTS_32K",
PART "PART06" STARTING(6) IN "extTS_32K",
PART "PART07" STARTING(7) IN "extTS_32K",
PART "PART08" STARTING(8) IN "extTS_32K",
PART "PART09" STARTING(9) IN "extTS_32K",
PART "PART10" STARTING(10) IN "extTS_32K",
PART "PART11" STARTING(11) IN "extTS_32K",
PART "PART12" STARTING(12) IN "extTS_32K",
PART "PART13" STARTING(13) IN "extTS_32K",
PART "PART14" STARTING(14) IN "extTS_32K",
PART "PART15" STARTING(15) IN "extTS_32K",
PART "PART16" STARTING(16) IN "extTS_32K",
PART "PART17" STARTING(17) IN "extTS_32K",
PART "PART18" STARTING(18) IN "extTS_32K",
PART "PART19" STARTING(19) IN "extTS_32K",
PART "PART20" STARTING(20) IN "extTS_32K",
PART "PART21" STARTING(21) IN "extTS_32K",
PART "PART22" STARTING(22) IN "extTS_32K",
PART "PART23" STARTING(23) IN "extTS_32K",
PART "PART24" STARTING(24) IN "extTS_32K",
PART "PART25" STARTING(25) IN "extTS_32K",
PART "PART26" STARTING(26) IN "extTS_32K",
PART "PART27" STARTING(27) IN "extTS_32K",
PART "PART28" STARTING(28) IN "extTS_32K",
PART "PART29" STARTING(29) IN "extTS_32K",
PART "PART30" STARTING(30) IN "extTS_32K",
PART "PART31" STARTING(31) IN "extTS_32K",
PART "PART32" STARTING(32) IN "extTS_32K",
PART "PART33" STARTING(33) IN "extTS_32K",
PART "PART34" STARTING(34) IN "extTS_32K",
PART "PART35" STARTING(35) IN "extTS_32K",
PART "PART36" STARTING(36) IN "extTS_32K",
PART "PART37" STARTING(37) IN "extTS_32K",
PART "PART38" STARTING(38) IN "extTS_32K",
PART "PART39" STARTING(39) ENDING(39) IN "extTS_32K")
ORGANIZE BY ROW;

When creating indexes on a partitioned table you have the option of partitioning the indexes or not by adding
PARTITIONED to the statement that creates the index. An example can be seen in the following section.

RANDOM INDEXES
Indexes on the tables with high activity were created as random indexes. Random indexes randomize the
location of data inserted in the index tree, which reduces contention on index pages when multiple insert
operations occur simultaneously. This is especially useful for monotonically increasing or decreasing key
values such as identifiers and timestamps. An example of the syntax we used follows:
CREATE INDEX "SIDBU1 "."SCI_IDX_177" ON "SIDBU1 "."ARCHIVE_INFO"
("ARCHIVE_DATE" RANDOM,
"ARCHIVE_FLAG" RANDOM,
"GROUP_ID" RANDOM,
"WF_ID" RANDOM)
PARTITIONED
COMPRESS NO
INCLUDE NULL KEYS ALLOW REVERSE SCANS;

It should be noted that indexes created by primary key constraints cannot be made random. Because we
wanted the indexes on our key columns to be random, we created a UNIQUE INDEX using the same columns
as the primary key.
CREATE UNIQUE INDEX "SIDBU1 "."ARCHIVE_INFO_PK" ON "SIDBU1
("WF_ID" RANDOM,
"GROUP_ID" RANDOM)
NOT PARTITIONED IN "extTS_4K"

"."ARCHIVE_INFO"

COMPRESS YES
INCLUDE NULL KEYS ALLOW REVERSE SCANS;

INLINING
Normally, LOBs are stored separately from the rest of the columns in a table, resulting in more than one I/O
to read a single row. Inlining puts the LOB column in the same page, resulting in fewer I/Os.
We inlined a BLOB column in the TRANS_DATA table; this can also be seen in the previous DDL example.
Incidentally, this is why the TRANS_DATA table needed to be moved to a table space with a 32K page size
and a large extent size: the pages store only a few, potentially very long rows, so new pages need to be
created quickly.
An example of the syntax for inlining a column can be seen in the table partitioning section above.
"DATA_OBJECT" BLOB(200000000) INLINE LENGTH 1200 LOGGED NOT COMPACT NOT NULL

DATABASE TUNING TOOLS


The following section describes some methods we used to monitor the database performance.

QUERY & VERIFY JDBC DRIVER VERSION


To query the JDBC driver for version info you can use the following command.
java -cp ./db2jcc.jar com.ibm.db2.jcc.DB2Jcc version

Output of the above command on our systems:


IBM DB2 JDBC Universal Driver Architecture 3.66.46

To match JDBC driver versions to DB2 server versions, see:

http://www-01.ibm.com/support/docview.wss?uid=swg21363866

TOTAL DATABASE SIZE


This is an easy query to get an idea of the actual database size (not the space taken on the file system,
which can include the empty portions of the table spaces). The following was used before and after each
test run to watch growth or contraction.
CALL GET_DBSIZE_INFO(?, ?, ?, 0)

Value of output parameters
--------------------------
Parameter Name  : SNAPSHOTTIMESTAMP
Parameter Value : 2014-10-14-13.45.13.769981

Parameter Name  : DATABASESIZE
Parameter Value : 1295794176

Parameter Name  : DATABASECAPACITY
Parameter Value : 130802917376

Return Status = 0

IDENTIFYING LONGEST QUERIES


Time spent and CPU spent in queries can be collected in several different ways. We looked at queries using the
following:
SELECT
  num_executions,
  stmt_type_id,
  total_act_time,
  decimal(total_act_time / float(total_total_act_time) * 100, 5, 2) as pct_total_act_time,
  decimal((total_act_wait_time / float(total_act_time)) * 100, 5, 2) as pct_wait_time,
  stmt_text
FROM TABLE
(
  MON_GET_PKG_CACHE_STMT(null, null, null, -2)
),
(
  select sum(total_act_time) as total_total_act_time
  from table(MON_GET_PKG_CACHE_STMT(null, null, null, -2))
)
order by total_act_time desc
fetch first 100 rows only

The output identifies:

The number of executions of each statement. Statements with a high number of executions are good
candidates for contention investigation if the wait time is also large.
The percentage of the total workload time. Statements that take up a large portion of the whole
workload's time are good candidates for query optimization, index investigation and contention investigation.
The percent wait time. Queries with high wait time are good candidates for contention investigation and
query optimization.

In the following output, it can be seen that the bulk of the load was insert statements. The output also
shows which tables are hit and how often they're hit. Statements exhibiting high wait times should be
examined for additional indexes or possible contention.
MEMBER NUM_EXEC_WITH_METRICS TOTAL_ACT_TIME PCT_TOTAL_ACT_TIME PCT_WAIT_TIME STMT_TEXT
------ --------------------- -------------- ------------------ ------------- --------------------------------
     3               1703479       18635264              12.23         97.54 insert into TRANS_DATA (DATA_ID,
     1               1692051       18192128              11.94         97.52 insert into TRANS_DATA (DATA_ID,
     2               1678279       17710425              11.63         97.43 insert into TRANS_DATA (DATA_ID,
     0               1678317       17605416              11.56         97.48 insert into TRANS_DATA (DATA_ID,
     0               1499744        8220571               5.39         88.64 INSERT INTO WORKFLOW_CONTEXT (WF
     3               1517759        8190009               5.37         88.60 INSERT INTO WORKFLOW_CONTEXT (WF
     1               1502117        8177858               5.37         88.69 INSERT INTO WORKFLOW_CONTEXT (WF
     2               1481979        8059803               5.29         88.45 INSERT INTO WORKFLOW_CONTEXT (WF
     3               1180117        4181233               2.74         90.67 INSERT INTO DOCUMENT (DOC_ID, CR
     0               1185833        4074214               2.67         90.57 INSERT INTO DOCUMENT (DOC_ID, CR
     1               1131676        3922203               2.57         90.52 INSERT INTO DOCUMENT (DOC_ID, CR
     2               1002421        3209339               2.10         89.71 INSERT INTO DOCUMENT (DOC_ID, CR
     2                581728        1522273               0.99         92.90 insert into WORKFLOW_LINKAGE val
     1                564787        1494412               0.98         93.02 insert into WORKFLOW_LINKAGE val
     3                558297        1475840               0.96         93.10 insert into WORKFLOW_LINKAGE val
     0                545204        1467708               0.96         93.22 insert into WORKFLOW_LINKAGE val
     2                754675        1417982               0.93         88.50 INSERT INTO ARCHIVE_INFO (WF_ID,
     1                752351        1360368               0.89         88.13 INSERT INTO ARCHIVE_INFO (WF_ID,
     3                753894        1348136               0.88         88.06 INSERT INTO ARCHIVE_INFO (WF_ID,
     0                739205        1289502               0.84         87.86 INSERT INTO ARCHIVE_INFO (WF_ID,
     2                193060         851307               0.55         93.21 insert into DATA_TABLE (DATA_ID,
     1                188817         845193               0.55         93.18 insert into DATA_TABLE (DATA_ID,
     3                186228         841116               0.55         93.31 insert into DATA_TABLE (DATA_ID,
     0                181899         812421               0.53         93.38 insert into DATA_TABLE (DATA_ID,

EXPLAIN TOOLS
To analyze candidate queries, the standard Explain tools were used.
See the DB2 Knowledge Center page on the explain facility for more information:
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.sql.ref.doc/doc/r0000952.html?cp=SSEPGG_10.5.0%2F2-12-7-133
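As a sketch of the workflow (the statement below is illustrative, not one of the benchmark queries): after creating the explain tables with the EXPLAIN.DDL script shipped in sqllib/misc, capture and format an access plan as follows:

EXPLAIN PLAN FOR
SELECT WORKFLOW_ID FROM SIDBU1.WORKFLOW_CONTEXT WHERE STEP_ID = 1;

db2exfmt -d <database-name> -1 -o explain.txt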
