
Design and Analysis of Algorithms
Dr. Muhammad Safysn
Spring 2019
Introduction
 The word "algorithm" comes from the name of the Muslim
author Abu Ja'far Mohammad ibn Musa al-Khowarizmi.
 He was born in the eighth century at Khwarizm (Kheva), a
town south of the river Oxus in present-day Uzbekistan.
 Al-Khwarizmi's parents migrated to a place south of Baghdad
when he was a child.
 It has been established from his contributions that he
flourished under Khalifah Al-Mamun at Baghdad during 813
to 833 C.E. Al-Khwarizmi died around 840 C.E.
Definition
 An algorithm is a mathematical entity, which is
independent of a specific programming language,
machine, or compiler.
 Algorithm design is about the mathematical theory behind
the design of good programs.
 Good program design has multiple facets; it also requires
awareness of programming and machine issues.
 Two kinds of issues
 Micro-issues
 Macro-issues
Definition
Macro-issues
 Computation cost: a few lines of code (e.g. 100 lines out of a large
program) often take most of the execution time.
Micro-issues
 Deal with small critical sections of code, e.g. fine-tuning and debugging.
 To deal with the macro level, the code must be designed in an
efficient manner from the start.
 Conventional (poor) approach: codify a poor design
 and attempt to fine-tune its performance by applying clever
coding tricks, OR
 by implementing it on the most expensive and fastest machines
around to boost performance as much as possible.
Good Design
 The problem is that if the underlying design is bad,
then often no amount of fine-tuning is going to make a
substantial difference.
 Before you implement, first be sure you have a good design. This
course is about good design and good data structures.
 The two are integrated issues:
 The fastest algorithms are fast because they use fast data
structures, and vice versa.
 Applications: compilers, operating systems, databases, artificial
intelligence, the Semantic Web, computer vision, machine learning,
etc.
Analyzing Algorithms
 What are the criteria for a good algorithm?
 What are the criteria for measuring an algorithm?
 We measure algorithms in terms of the amount of
computational resources that the algorithm requires.
 This is also known as complexity analysis.
 The primary resources are (i) running time and (ii) memory.
 Others include: the number of disk accesses in a database program,
communication bandwidth in a networking application.
Computing resources
 Memory space: the space required to store the data processed by the
algorithm.
 CPU time: also known as running time; the time needed to
execute the operations of the algorithm.
 An efficient algorithm uses a minimum amount of
computing resources.
Efficiency Analysis
 There are two types of efficiency analysis:
  Space analysis: how much space does an algorithm require to store the
data in memory?
  Running time analysis: how fast does an algorithm run?
 The key observation in efficiency analysis is that the amount of
computing resources depends on the input size (problem size).
  Input size: the number of elements belonging to the input data.
 Hence the main question to be answered by efficiency analysis is:
how do the time and/or space needed by the algorithm depend on the
input size?
Space and time trade-off
 Often we have to make a compromise between space efficiency
and time efficiency.
 Example: adjacency matrix vs. adjacency list for graph
representation (see the sketch below).
  A sparse graph can be represented by an adjacency matrix, which is
time efficient for checking whether an edge exists, but at the cost of
space.
  A sparse graph can be represented by an adjacency list, which is
space efficient, but testing for the presence of an edge takes longer.
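 A minimal Python sketch of the two representations (added for illustration; the small graph and the variable names are my own, not from the slides):

# Two representations of the same undirected graph with vertices 0..3
# and edges (0,1), (1,2), (2,3).
n = 4
edges = [(0, 1), (1, 2), (2, 3)]

# Adjacency matrix: O(n^2) space, O(1) edge lookup.
matrix = [[0] * n for _ in range(n)]
for u, v in edges:
    matrix[u][v] = matrix[v][u] = 1

# Adjacency list: O(n + m) space, edge lookup costs O(degree).
adj = {u: [] for u in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

print(matrix[0][1] == 1)   # O(1) edge test with the matrix
print(2 in adj[1])         # O(degree) edge test with the list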
How can time efficiency be measured?
 Our efficiency measure for running time must be independent of:
 Machine
 Programming language
 Programmer
RAM: Computational Model
 To estimate running time we must use some computational model.
 A computational model is an abstract machine having some properties.
 For this purpose we will use the RAM (Random Access Machine)
computational model, which has the following properties:
 1. All processing steps are executed sequentially (there is no
parallelism in the execution of the algorithm).
RAM: Computational Model
 1. All processing steps are executed sequentially (there is no
parallelism in the execution of the algorithm).
 2. The time of executing the basic operations does not depend on
the values of the operands (there is no time difference between
computing 1 + 2 and computing 12433 + 4567).
 3. The time to access data does not depend on its address (there
are no differences between processing the first element of an
array and processing the last element).
 All basic operations (assignment, arithmetic, logical, relational)
take unit time.
Loop-holes
 The two operands may be numbers of any length, yet the model charges
unit cost for operating on them.
 Serialization: the model assumes strictly sequential execution.
Running Time Analysis
 We are concerned with measuring the execution time.
 We are also concerned with the space (memory) required by the
algorithm.
 Ignored when measuring running time:
 Speed of the computer
 Programming language
 Optimization by the compiler
 Instead, count the number of basic steps the algorithm executes.
 Different inputs of the same size may result in different running times.
Running Time Analysis
 Two criteria for measuring running time are worst-case
time and average-case time.
 Worst-case time
 The maximum running time over all (legal) inputs of size n. Let I denote an input instance, let |I|
denote its length, and let T(I) denote the running time of the algorithm on input I. Then

    T_worst(n) = max { T(I) : |I| = n }

 Average-case time is the average running time over all inputs of size n. Let p(I) denote the
probability of seeing input I. The average-case time is the weighted sum of running times
with weights being the probabilities:

    T_avg(n) = Σ_{|I| = n} p(I) · T(I)

 We will almost always work with worst-case time.
 Average-case time is more difficult to compute.
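 As a small illustration (added here, not part of the original slides), the sketch below contrasts worst-case and average-case cost for a simple sequential search; the function name and the assumption of uniformly random targets are mine:

import random

def linear_search_cost(a, v):
    # Count the comparisons made by a simple sequential search.
    count = 0
    for x in a:
        count += 1
        if x == v:
            break
    return count

n = 1000
a = list(range(n))
worst = linear_search_cost(a, n)            # target absent: n comparisons
trials = [linear_search_cost(a, random.randrange(n)) for _ in range(10000)]
average = sum(trials) / len(trials)          # about (n + 1) / 2 for uniform targets
print(worst, average)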
Example 1: running time
 Now we see how the running time expresses the dependence of
the number of executed operations on the input size.
 Example: swapping two variables

    statement      cost   iterations
    aux ← x        c1     1
    x ← y          c1     1
    y ← aux        c1     1

    T(n) = 3·c1

 where c1 is some constant and 3·c1 is also
some constant. We conclude that the running
time of this algorithm is constant; it is
independent of the input size.
Example 2: running time
 Example 2: Compute the sum of the series s = 1 + 2 + . . . + n
 Precondition: n ≥ 1   Postcondition: s = 1 + 2 + . . . + n

 input: n
 1: s ← 0
 2: i ← 1
 3: while i ≤ n do
 4:   s ← s + i
 5:   i ← i + 1
 6: end while

    line no.   cost   iterations
    1          c1     1
    2          c1     1
    3          c2     n + 1
    4          c3     n
    5          c3     n

 T(n) = 2c1 + c2(n + 1) + 2c3·n
 T(n) = (c2 + 2c3)·n + (2c1 + c2)
 T(n) = a·n + b

 The running time is a linear function of the input size.
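 A rough sketch (my own, not from the slides) that counts the basic operations of the summation loop above and confirms the count grows linearly with n:

def series_sum_ops(n):
    ops = 0
    s = 0; ops += 1          # line 1
    i = 1; ops += 1          # line 2
    while True:
        ops += 1             # line 3: loop test, executed n + 1 times
        if not (i <= n):
            break
        s = s + i; ops += 1  # line 4
        i = i + 1; ops += 1  # line 5
    return s, ops

for n in (10, 100, 1000):
    s, ops = series_sum_ops(n)
    print(n, s, ops)         # ops = 3n + 3, i.e. a·n + b with a = 3, b = 3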


Example 3: running time
 Example 3: Find the minimum in a non-empty array x[1 . . . n]
 P: n ≥ 1   Q: m = min{ x[j] | j = 1, 2, . . . , n }

 input: x[1..n]
 1: m ← x[1]
 2: for i ← 2, n do
 3:   if x[i] < m then
 4:     m ← x[i]
 5:   end if
 6: end for
 7: return m

    line no.   cost   iterations
    1          1      1
    2          1      2n
    3          1      n − 1
    4          1      h(n)

 T(n) = 1 + 2n + n − 1 + h(n)
 T(n) = 3n + h(n)

 The running time depends not only on n but also on the properties
of the input data.
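 A small sketch (added for illustration; the counter variable is mine) showing that h(n), the number of times line 4 executes, depends on the order of the input and not just on its size:

def find_min_with_count(x):
    m = x[0]
    h = 0
    for v in x[1:]:
        if v < m:
            m = v
            h += 1           # line 4 executed
    return m, h

n = 5
print(find_min_with_count(list(range(1, n + 1))))   # ascending:  h(n) = 0 (best case)
print(find_min_with_count(list(range(n, 0, -1))))   # descending: h(n) = n - 1 (worst case)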
Best-case analysis and worst-case analysis
 Whenever the analysis of an algorithm depends not only on the input size
but also on some property of the input data, we have to perform the
analysis in more detail.
 Worst-case analysis: gives the longest running time for any input of
size n.
  The worst-case running time of an algorithm gives us an upper bound on the
running time for any input.
  Knowing it provides a guarantee that the algorithm will never take any
longer.
 Best-case analysis: gives the minimum running time for any input
of size n.
  The best-case running time of an algorithm gives us a lower bound on the
running time for any input.
  Knowing it provides a guarantee that the algorithm will never take
less time.
Example 4: sequential search
 Preconditions: x[1..n], n ≥ 1, v a value
 Postconditions: found = TRUE when v ∈ x[1..n]
 Input: x[1..n], v

 Algorithm search
 1: found ← false
 2: i ← 1
 3: while (found = false) and (i ≤ n) do
 4:   if x[i] = v then
 5:     found ← true
 6:   else
 7:     i ← i + 1
 8:   end if
 9: end while

    line no.   cost
    1          1
    2          1
    3          f(n) + 1
    4          f(n)
    5          g(n)
    7          h(n)

 T(n) = 3 + 2f(n) + g(n) + h(n)
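 A sketch of the same search in Python (my own illustration): the number of loop iterations f(n) depends on where, and whether, the value v occurs in x:

def sequential_search_cost(x, v):
    found = False
    i = 0
    tests = 0
    while not found and i < len(x):
        tests += 1
        if x[i] == v:
            found = True
        else:
            i += 1
    return found, tests

x = list(range(1, 11))
print(sequential_search_cost(x, 1))    # best case: 1 test
print(sequential_search_cost(x, 10))   # worst case (value present): n tests
print(sequential_search_cost(x, 99))   # worst case (value absent): n tests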
Today's Agenda
 Role of the dominant term
 Order of growth
 Asymptotic analysis
Dominant term or leading term
 We used some simplifying abstractions to ease our analysis.
 The main aim of efficiency analysis is to find out how the running
time increases when the problem size increases.
 The running time is mostly affected by the dominant term.
 In most cases we do not require a detailed analysis of the
running time.
 We need only identify the dominant term, which helps to
find:
  The order of growth of the running time
  The efficiency class to which an algorithm belongs
Identify the dominant term
 In the expression of the running time, one of the terms
becomes significantly larger than the other ones when n
becomes large: this is the so-called dominant term.

    Running time                        Dominant term
    T1(n) = an + b                      an
    T2(n) = a·log n + b                 a·log n
    T3(n) = an²                         an²
    T4(n) = aⁿ + bn + c  (a > 1)        aⁿ
What is the order of growth?
 The order of growth expresses how the dominant term
of the running time increases with the input size.

    Running time                        Dominant term   Order of growth
    T1(n) = an + b                      an              Linear
    T2(n) = a·log n + b                 a·log n         Logarithmic
    T3(n) = an²                         an²             Quadratic
    T4(n) = aⁿ + bn + c  (a > 1)        aⁿ              Exponential
Order of growth vs. input size
 Between two algorithms, the one having a smaller order of growth
is considered more efficient.
 This is true only for large enough input sizes.
 Example:
    T1(n) = 10n + 10   (linear order of growth)
    T2(n) = n²         (quadratic order of growth)
 If n ≤ 10 then T1(n) > T2(n).
 In this case the order of growth is relevant only for n > 10.
 For larger input sizes n, the lower-order terms in a function are relatively
insignificant:
    n⁴ + 100n² + 10n + 50 ≈ n⁴
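 A quick numeric check of the crossover point between T1 and T2 (added illustration, not from the slides):

def T1(n):
    return 10 * n + 10

def T2(n):
    return n * n

for n in (1, 5, 10, 11, 20, 100):
    faster = "T1" if T1(n) < T2(n) else "T2"
    print(n, T1(n), T2(n), "smaller:", faster)
# For n <= 10, T1(n) > T2(n); from n = 11 onwards the linear function is
# smaller, and the gap keeps widening.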
A comparison of the order of growth

    log n   n       n·log n   n²           2ⁿ
    3.3     10      33        100          1024
    6.6     100     664       10000        ≈ 10^30
    10      1000    9965      1000000      ≈ 10^301
    13      10000   132877    100000000    ≈ 10^3010
Comparing order of growth
 The order of growth of two running times T1(n) and T2(n) can be
compared by computing the limit of T1(n)/T2(n) as n goes to
infinity:
  If the limit is 0, then T1(n) has a smaller order of growth than
T2(n).
  If the limit is a finite constant c (c > 0), then T1(n) and T2(n)
have the same order of growth.
  If the limit is infinity, then T1(n) has a larger order of growth
than T2(n).
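 For instance (a worked example added here for illustration, using the functions from the earlier slide):

    lim (n→∞) T1(n)/T2(n) = lim (n→∞) (10n + 10)/n² = lim (n→∞) (10/n + 10/n²) = 0

 so T1(n) = 10n + 10 has a smaller order of growth than T2(n) = n², matching the earlier observation that the linear function is smaller for all n > 10.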
Asymptotic Analysis
 When analyzing running time, this extra precision is often not
required.
 For large enough inputs, the multiplicative constants and
lower-order terms can be ignored.
 When we look at input sizes large enough to make only the
order of growth of the running time relevant, we are studying the
asymptotic efficiency of algorithms.
 That is, we are concerned with how the running time of an
algorithm increases with the size of the input in the limit,
as the size of the input increases without bound.
Asymptotic Analysis
 Asymptotic notation actually applies to functions.
  Recall that we characterized the running time of matrix initialization as
an² + bn + c.
  By writing this running time as O(n²), we have abstracted away
some details of the function.
 When applying asymptotic notation to running times, we need to
be clear about which running time we mean.
  Sometimes we are interested in the worst-case running time.
  Often we wish to characterize the running time no matter what the
input is.
Asymptotic Notations
 Big O: asymptotically less than or equal
  f(n) = O(g(n)) ⇒ f(n) ≤ g(n) asymptotically
 Big Ω: asymptotically greater than or equal
  f(n) = Ω(g(n)) ⇒ f(n) ≥ g(n) asymptotically
 Big Θ: asymptotic equality
  f(n) = Θ(g(n)) ⇒ f(n) = g(n) asymptotically
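 For reference, the standard formal definitions behind these informal statements (added here; they are not spelled out on the slide):

    O(g(n)) = { f(n) : there exist positive constants c and n₀ such that
                0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
    Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that
                0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }
    Θ(g(n)) = { f(n) : f(n) = O(g(n)) and f(n) = Ω(g(n)) }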
Big-O Notation
 We say f(n) = 30n + 8 is order n, or O(n): it is, at most,
roughly proportional to n.
 We say g(n) = n² + 1 is order n², or O(n²): it is, at most,
roughly proportional to n².
Big-O Notation
 [Figure: plot of f(n) = 30n + 8 and g(n) = n² + 1 for n from 0 to 100;
  the vertical axis shows T(n), scaled by 10⁴.]
Big-O: Examples
 2n² = O(n³): 2n² ≤ c·n³ ⇒ 2 ≤ c·n ⇒ c = 1 and n₀ = 2
 n² = O(n²): n² ≤ c·n² ⇒ 1 ≤ c ⇒ c = 1 and n₀ = 1
 1000n² + 1000n = O(n²): 1000n² + 1000n ≤ 1001n² ⇒ c = 1001 and n₀ = 1000
Big-O: Formal Example
 Show that 30n + 8 is O(n).
 For this we have to prove that
    ∃ c, n₀ : 30n + 8 ≤ c·n, ∀ n > n₀
 Proof:
 Let c = 31, n₀ = 8. Assume n > n₀; then 30n + 8 < 30n + n = 31n = c·n.
 There is no unique set of values for n₀ and c in proving the asymptotic bound.
 We must find some constants c and n₀ that satisfy the definition.
Big-Ω Notation
 Ω(g(n)) is the set of functions with a larger or
equal order of growth compared to g(n).
Example
 5n² = Ω(n):
    ∃ c, n₀ such that 0 ≤ c·n ≤ 5n² ⇒ c = 1 and n₀ = 1

 100n + 5 ≠ Ω(n²):
    Suppose ∃ c, n₀ such that 0 ≤ c·n² ≤ 100n + 5.
    100n + 5 ≤ 100n + 5n = 105n (∀ n ≥ 1)
    c·n² ≤ 105n ⇒ n(c·n − 105) ≤ 0
    Since n is positive ⇒ c·n − 105 ≤ 0 ⇒ n ≤ 105/c
    Contradiction: n cannot be bounded above by a constant.
Example
 n²/2 − n/2 = Θ(n²)

 First we prove that n²/2 − n/2 = O(n²), (∀ n ≥ n₀):
    n²/2 − n/2 ≤ n²/2        (since n/2 ≥ 0 for n ≥ 0)
 Hence n²/2 − n/2 = O(n²) with n₀ = 0 and c₂ = 1/2.

 Secondly we prove that n²/2 − n/2 = Ω(n²), (∀ n ≥ n₀):
    n²/2 − n/2 ≥ n²/4
    n²/4 ≥ n/2
    n ≥ 2
 Hence n²/2 − n/2 = Ω(n²) for all n ≥ 2, i.e. n₀ = 2 and c₁ = 1/4.

 Therefore n²/2 − n/2 = Θ(n²) with c₁ = 1/4, c₂ = 1/2, n₀ = 2.

 On the other hand, 6n³ ≠ Θ(n²).
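 A numeric sanity check of these bounds (added illustration, not from the slides):

def f(n):
    # The function being bounded: n²/2 − n/2.
    return n * n / 2 - n / 2

# Verify (1/4)·n² ≤ f(n) ≤ (1/2)·n² for every n from 2 up to 10000.
ok = all(0.25 * n * n <= f(n) <= 0.5 * n * n for n in range(2, 10001))
print(ok)   # True: the constants c1 = 1/4 and c2 = 1/2 work for all tested n ≥ 2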
Review: Running Time
 An Example: Insertion Sort
InsertionSort(A, n) {
for i = 2 to n {
key = A[i]
j = i - 1;
while (j > 0) and (A[j] > key) {
A[j+1] = A[j]
j = j - 1
}
A[j+1] = key
}
}

An Example: Insertion Sort (trace)
 Trace of InsertionSort on the array A = [30, 10, 40, 20] (indices 1–4).
 Each row shows the state after the step described in the last column.

    i   j   key   A[1..4]        step
    –   –   –     30 10 40 20    initial array
    2   1   10    30 10 40 20    key = A[2]; j = 1; A[j] = 30 > key, enter while loop
    2   1   10    30 30 40 20    A[j+1] = A[j]
    2   0   10    30 30 40 20    j = j − 1; j = 0, so the while loop exits
    2   0   10    10 30 40 20    A[j+1] = key
    3   2   40    10 30 40 20    key = A[3] = 40; j = 2; A[j] = 30 ≤ key, loop not entered
    3   2   40    10 30 40 20    A[j+1] = key (no change)
    4   3   20    10 30 40 20    key = A[4] = 20; j = 3; A[j] = 40 > key, enter while loop
    4   3   20    10 30 40 40    A[j+1] = A[j]
    4   2   20    10 30 40 40    j = j − 1; A[j] = 30 > key, loop continues
    4   2   20    10 30 30 40    A[j+1] = A[j]
    4   1   20    10 30 30 40    j = j − 1; A[j] = 10 ≤ key, loop exits
    4   1   20    10 20 30 40    A[j+1] = key. Done!
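 A Python transcription of the InsertionSort pseudocode above (added for convenience; it uses 0-based indices instead of the 1-based indices on the slides):

def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift elements of the sorted prefix that are greater than key.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([30, 10, 40, 20]))   # [10, 20, 30, 40]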
Asymptotic Notation
 What is an algorithm?
 A step-by-step procedure to solve a problem.
 Every program is the instantiation of some algorithm.
http://blog.kovyrin.net/wp-content/uploads/2006/05/algorithm_c.png
Model of Computation
 Another goal: the algorithm should be as independent as possible
of the variations in machine, operating system,
compiler, or programming language.
 Algorithms are to be understood by people, whereas programs are
understood by the machine.
 Flexibility: low-level detail may be omitted.
 For an algorithm to be understandable, we settle on a mathematical model of computation.
 The mathematical model is an abstraction of a standard generic single-processor machine; we call this
model a Random Access Machine (RAM).
 RAM:
 Infinite memory
 Instructions are executed serially, one after another
 Instructions perform basic operations
 Each basic operation takes the same constant time to execute.
