
ABC of Teradata System Performance

Analysis
Shaheryar Iqbal
Shaheryar.Iqbal@teradata.com
GCC Pakistan.
Date created: Feb 20, 2009.
Last Updated: Oct 12, 2009.
2 > 10 October 2011
Agenda
1) Co-existing systems parallel efficiency
2) CPU Reports
1) CPU Utilization
2) CPU Node Hours available
3) Percent Shift Busy
4) OS % of CPU
5) CPU Utilization among Vprocs
6) Nodes Parallel Efficiency
7) AMPs Parallel Efficiency
3) Memory Reports
1) Memory availability
2) Memory allocation failures
4) Disk I/O Reports
1) Disk busy %
2) I/O Wait
3) Measuring I/O
4) Disk Read Writes
5) Full cylinder Reads
6) Logical vs. Physical Reads
7) Mini Cyl Packs
5) AWT Usage
6) Buddy Backup Effectiveness
7) Host Utilities Traffic
8) Charting Teradata Manager Reports
Feedback
______________________________________________
From: Clark, Dave
Sent: Saturday, March 07, 2009 12:14 AM
To: Iqbal, Shaheryar
Subject: ABC of Teradata System Performance Analysis
Shaheryar-
I have approved your presentation for general viewing. Thank-you very much for your
effort in developing this. The information is very useful.
-dave.clark
(858)485-2177
- http://pc01.teradata.com/CKS
1) Co-existing systems parallel efficiency
Co-existing systems parallel efficiency
Calculation & Reconfiguration
Before reconfiguration:
Unusable node capacity = 14.23 - 13.25 = 0.98
Parallel efficiency = 93.09%
After reconfiguration:
Unusable node capacity = 14.23 - 14.20 = 0.03
Parallel efficiency = 99.80%
Gain in co-existing systems' parallel efficiency is the source of the biggest
gain in performance.
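The slide's arithmetic can be reproduced with a short Python sketch; the function name and structure are illustrative, assuming parallel efficiency is defined as average node CPU divided by maximum node CPU:

```python
def parallel_efficiency(max_node_cpu: float, avg_node_cpu: float):
    """Return (unusable node capacity, parallel efficiency %)."""
    unusable = max_node_cpu - avg_node_cpu           # capacity the slower nodes leave unused
    efficiency = avg_node_cpu / max_node_cpu * 100   # how evenly the nodes are loaded
    return unusable, efficiency

# Values from the slide, before and after reconfiguration:
print(parallel_efficiency(14.23, 13.25))   # unusable ~0.98, efficiency ~93.1%
print(parallel_efficiency(14.23, 14.20))   # unusable ~0.03, efficiency ~99.8%
```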
2) CPU Reports
CPU Utilization Chart
Generated with Higa Macro ResPmaTotal
Absolute CPU Node Hours available
ABS Avail Hours: time when the CPU was available to process a job but there was nothing to process
This graph gives the ratio of total CPU available to maximum CPU used, on a daily basis
Higa Macro
ResPmaThreeShifts
CPU Utilization: Percent Shift Busy
Shift Time 00-07, Mon-Fri
Shift Time 07-17, Mon-Fri
Shift Time 17-24, Mon-Fri
Shift Time 00-24, Sat-Sun
- Same graph from the previous slide, shown as percent shift busy
Higa Macro
ResPmaThreeShifts
OS % of CPU
Higa Macro
ResPmasec
Extremely degraded system performance:
- CPU 100% busy
- OS % of CPU less than 20%
- for more than 30 minutes
Inefficient use of CPU
CPU Utilization among Vprocs Example #1
Higa Macro
ResSvprCPUs
A normal breakdown of the total CPU utilization
among the AMP, PE and node vprocs
CPU Utilization among Vprocs Example #2
Higa Macro
ResSvprCPUs
Very high CPU usage by the PEs; non-optimized TPump sessions are one of the reasons.
PEs consuming 40% of node CPU are working at their maximum capacity
and tend to become a bottleneck.
Nodes Parallel Efficiency - Example 1
Higa Macro
ResPmaTotal
Extreme case
of Node skew
-Node Skew = Max Node CPU - Avg Node CPU
Nodes Parallel Efficiency - Example 2
Higa Macro
ResPmaTotal
Node Skew = Max Node CPU - Avg Node CPU
High Parallel Efficiency
among Nodes
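A minimal sketch of applying the slide's skew formula across per-interval samples; the interval data and the 20-point alert threshold are illustrative assumptions, not from the deck:

```python
def node_skew(node_cpu_pct):
    """Node Skew = Max Node CPU - Avg Node CPU, as defined on the slide."""
    return max(node_cpu_pct) - sum(node_cpu_pct) / len(node_cpu_pct)

intervals = {
    "10:00": [95, 94, 96, 95],   # well balanced across nodes
    "10:10": [98, 40, 42, 41],   # one hot node
}
for ts, cpus in intervals.items():
    skew = node_skew(cpus)
    if skew > 20:                # illustrative threshold
        print(f"{ts}: node skew {skew:.2f} -- poor parallel efficiency")
```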
AMPs Parallel Efficiency - Example 1
High parallel efficiency is present among the AMPs
Higa Macro
ResSvprCPUs
AMPs Parallel Efficiency - Example 2
Higa Macro
ResSvprCPUs
Parallel efficiency among AMPs is not good at times
AMPs Parallel Efficiency - Example 3
Higa Macro
ResSvprCPUs
Extreme case of skew among AMPs
3) Memory Reports
Memory Utilization
Average & Minimum Free Memory Available
[Chart: two "Mem Free" series (coexistence view), 0-1,200 MB, 12/18 through 01/16]
One of the node groups is experiencing a low-memory condition.
- Can FSG cache adjustment help?
Higa Macro
ResPma
Memory Utilization
Low-memory events and depletions are present, as marked with circles,
but their frequency of occurrence is not alarming.
Available free memory below 100 MB and 40 MB is termed a "memory depletion" and a "system panic" state, respectively.
Higa Macro
ResPma
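The thresholds quoted above can be expressed as a small classifier; the sample values are hypothetical:

```python
def memory_state(free_mb: float) -> str:
    """Classify free memory per the slide: <100 MB = depletion, <40 MB = panic."""
    if free_mb < 40:
        return "system panic"
    if free_mb < 100:
        return "memory depletion"
    return "normal"

for sample in (850, 72, 25):     # hypothetical free-memory samples, MB
    print(sample, "MB ->", memory_state(sample))
```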
Paging & Memory Allocation Fails
Memory allocation failures occur when free memory reaches zero.
Negligible memory allocation failures
occurred in the last month.
Higa Macro
ResPma
4) Disk I/O Reports
DISK IO: Disk % Busy
System is not I/O bound: disks do not remain 100% busy.
Higa Macro
ResSldvNode
-Disk Busy time is the amount of time in which there is at least one I/O request outstanding
I/O Wait % Example #1
Average CPU Busy vs. I/O Wait
[Chart: stacked area, I/O Wait % over Avg CPU busy, 0-120%, 03/01 through 03/13]
- Average CPU + I/O Wait = System Busy
- If Avg CPU + I/O Wait = 100%, the desirable split is >= 90% avg CPU to <= 10% I/O Wait
Ratio is well within target range.
Higa Macro
ResPmaTotal

I/O Wait % Example #2
Average CPU Busy vs. I/O Wait
[Chart: stacked area, I/O Wait % over Avg CPU busy, 0-120%, 1/9 through 1/23]
- Wait I/O should be 10% or less for a configuration optimally balanced for power and throughput
- 40%-60% Wait I/O shown here
Higa Macro
ResPmaTotal
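The rule of thumb from these two examples can be sketched as a small check; the function and sample values are illustrative, assuming percentages per sampling interval:

```python
def io_wait_check(avg_cpu_pct: float, wait_io_pct: float) -> str:
    """Avg CPU + I/O Wait = System Busy; at saturation, wait I/O should be <= 10%."""
    if avg_cpu_pct + wait_io_pct >= 100 and wait_io_pct > 10:
        return "I/O bound: wait I/O above the 10% target"
    return "within target range"

print(io_wait_check(92, 8))    # like Example #1: well within target
print(io_wait_check(55, 45))   # like Example #2: 40%-60% wait I/O
```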
Disk Read KByte and Write KByte
[Chart: stacked area, Disk RdKB/Sec and Disk WrKB/Sec, 0-300,000, 06/15 Thu 06:20-06:50]
Measuring I/O
On the system, differences in throughput were viewable with disk reads/writes,
compared against the actual rated bandwidth for the configuration.
Higa Macro
ResPmabyNode
Disk Position Reads, Pre-Reads and Writes
[Chart: stacked area, Total DB Wrts, Total Pre Rds, Total Position Rds, 0-12,000,000, 09/03 through 10/02]
Total Disk Reads/Writes
Pre-reads are a large proportion of total reads, so this system may
be a good candidate for raising
Full Cylinder Read slots.
Next step is to look at FCR denied cache.
If this is high, and there is often wait I/O,
then this system is a candidate for
higher FCR slots.
Higa Macro
ResPmaTotal
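The decision steps above can be sketched as a heuristic; the function name, thresholds, and inputs are illustrative assumptions, not Teradata recommendations:

```python
def fcr_slot_candidate(pre_reads: int, total_reads: int,
                       fcr_denied_cache_pct: float, frequent_wait_io: bool) -> bool:
    """Candidate for more Full Cylinder Read slots, per the slide's reasoning."""
    pre_read_share = pre_reads / total_reads   # large share of pre-reads?
    return pre_read_share > 0.5 and fcr_denied_cache_pct > 20 and frequent_wait_io

print(fcr_slot_candidate(7_000_000, 11_000_000, 35.0, True))    # True
print(fcr_slot_candidate(1_000_000, 11_000_000, 35.0, True))    # False
```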
Full Cylinder Reads Example #1
Higa Macro
ResFullCylReadTotal
Few FCR requests,
with a high success rate
Full Cylinder Reads Example #2
Higa Macro
ResFullCylReadTotal
Many FCR requests,
with a moderate success rate
Logical vs. Physical Reads
Higa Macro
SvprReadTotal
The more logical reads exceed
physical reads, the better
memory is being used as a cache
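One way to quantify this is the fraction of logical reads served from memory rather than disk; the numbers below are hypothetical:

```python
def cache_hit_ratio(logical_reads: int, physical_reads: int) -> float:
    """Fraction of logical reads satisfied from memory rather than disk."""
    return 1 - physical_reads / logical_reads

print(f"{cache_hit_ratio(1_000_000, 250_000):.0%}")   # 75%
```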
Mini-cyl-packs
Higa Macro
ResCylPackTotal
Mini cyl packs occur when only ten free cylinders are left on any
AMP, resulting in degraded system performance.
5) AWT Usage
Higa Macro
ResSvprQLenAvg
ByVproc
Potential indicator of a
"flow control" state:
message queue
length > 20
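A sketch of scanning per-vproc queue lengths against the slide's 20-message indicator; the vproc IDs and values are hypothetical:

```python
# vproc -> average message queue length (hypothetical samples)
queue_lengths = {0: 3, 1: 5, 2: 27, 3: 4}

# The slide's indicator: queue length > 20 suggests a flow-control state.
suspect_vprocs = [v for v, qlen in queue_lengths.items() if qlen > 20]
print(suspect_vprocs)   # [2]
```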
6) Buddy Backup Effectiveness
Higa Macro
ResPmaBkupHour
Total
When many complete and few partial blocks
are sent to the buddy, buddy backup
should be turned off
7) Host Utilities Traffic
Higa Macro
reshostTotalHour
Total read/write traffic per hour:
physical MBytes transferred
between host and node, i.e.,
how much data is read or
written by each utility
8) Charting Teradata Manager Reports
- Use Teradata Manager to record Active Session detail data
- Generate custom reports using pivot tables
Charting Teradata Manager Reports:
CPU utilized per User
Charting Teradata Manager Reports:
CPU utilization Per Group
CPU utilized per
Performance Group
Questions
The only bad question
is the question
never asked
