
technische universiteit eindhoven

2IL50 data structures

Assignment 1
Sergiu Marin
Student number: 0932191
s.c.i.marin@student.tue.nl
February 14, 2016

Solution to Exercise 1: The mentioned functions are ranked by their order of growth (starting from the slowest growing), where functions with the same order of growth are ranked equal.

1. $\frac{4^{\log 8}}{n^2}$
2. $\log n$ and $\log_2 n$
3. $n + 4\sqrt{n}$
4. $n\log n$
5. $n^2$ and $4^{\log n}$
6. $n^{\log 8}$
7. $2^n$
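As a sanity check on this ranking (not required by the exercise), the functions can be tabulated at a few growing values of n. The Python sketch below assumes base-2 logarithms, as used throughout this assignment, and reads the first list entry as $4^{\log 8}/n^2$; slower-growing entries should fall behind faster-growing ones as n increases.

    import math

    # Evaluate each ranked function at growing n (base-2 logarithms assumed).
    funcs = [
        ("4^(log 8) / n^2", lambda n: 4 ** math.log2(8) / n ** 2),
        ("log n",           lambda n: math.log2(n)),
        ("n + 4*sqrt(n)",   lambda n: n + 4 * math.sqrt(n)),
        ("n log n",         lambda n: n * math.log2(n)),
        ("n^2",             lambda n: n ** 2),
        ("4^(log n)",       lambda n: 4 ** math.log2(n)),
        ("n^(log 8)",       lambda n: n ** math.log2(8)),
        ("2^n",             lambda n: 2.0 ** n),
    ]
    for n in (2 ** 4, 2 ** 6, 2 ** 8):
        print(f"n = {n}")
        for name, f in funcs:
            print(f"  {name:16s} {f(n):.3e}")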
Solution to Exercise 2:
$f(n) = \sum_{i=1}^{n} [4i - 4] = \left[4\sum_{i=1}^{n} i\right] - \left[4\sum_{i=1}^{n} 1\right] = 4\,\frac{n(n+1)}{2} - 4n = 2n^2 + 2n - 4n = 2n^2 - 2n$

Now, $\forall n \ge 2$, we have that $\frac{1}{2}n^2 \le 2n^2 - 2n \le 2n^2$, where $2n^2 - 2n = f(n)$. Thus, by the definition of $\Theta$-notation, $f(n) \in \Theta(n^2)$, where $c_1 = \frac{1}{2}$, $c_2 = 2$ and $n_0 = 2$. This corresponds to choice (f.) $n^2$.
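A quick numerical cross-check of this derivation (illustrative only, not part of the required proof):

    # Check f(n) = sum_{i=1}^{n} (4i - 4) against the closed form 2n^2 - 2n,
    # and the sandwich (1/2)n^2 <= f(n) <= 2n^2 for n >= 2.
    for n in range(2, 200):
        f = sum(4 * i - 4 for i in range(1, n + 1))
        assert f == 2 * n * n - 2 * n
        assert 0.5 * n * n <= f <= 2 * n * n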

$g(n) = 2n + \sum_{i=1}^{\log n} n\,i = 2n + n\,\frac{\log n(\log n + 1)}{2} = 2n + \frac{n(\log n)^2 + n\log n}{2} = \frac{1}{2}n(\log n)^2 + \frac{1}{2}n\log n + 2n = \frac{1}{2}n(\log n)^2 + n\left(\frac{1}{2}\log n + 2\right)$

Now, $\forall n \ge 2$, we have that $\frac{1}{2}n(\log n)^2 \le \frac{1}{2}n(\log n)^2 + n\left(\frac{1}{2}\log n + 2\right) \le 4n(\log n)^2$, where $\frac{1}{2}n(\log n)^2 + n\left(\frac{1}{2}\log n + 2\right) = g(n)$. Thus, by the definition of $\Theta$-notation, $g(n) \in \Theta(n(\log n)^2)$, where $c_1 = \frac{1}{2}$, $c_2 = 4$ and $n_0 = 2$. This corresponds to choice (e.) $n(\log n)^2$.
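The algebra above can be checked numerically when n is a power of two, so that $\log n$ is an integer; a small illustrative sketch:

    import math

    # Check g(n) = 2n + sum_{i=1}^{log n} n*i against the closed form
    # (1/2) n (log n)^2 + n ((1/2) log n + 2), for n a power of two,
    # together with the bounds used in the Theta(n (log n)^2) argument.
    for k in range(1, 12):
        n = 2 ** k
        L = math.log2(n)                       # equals k exactly
        g = 2 * n + sum(n * i for i in range(1, int(L) + 1))
        closed = 0.5 * n * L ** 2 + n * (0.5 * L + 2)
        assert math.isclose(g, closed)
        assert 0.5 * n * L ** 2 <= g <= 4 * n * L ** 2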

$h(n) = \sum_{i=0}^{n}\left[\sum_{j=1}^{i} i\right] = \sum_{j=1}^{0} 0 + \sum_{j=1}^{1} 1 + \sum_{j=1}^{2} 2 + \ldots + \sum_{j=1}^{n} n = 1 + 2 + 2 + 3 + 3 + 3 + 4 + 4 + 4 + 4 + \ldots + \underbrace{n + n + \ldots + n}_{n\text{ times}} = 1^2 + 2^2 + 3^2 + 4^2 + \ldots + n^2 = \frac{n(n+1)(2n+1)}{6} = \frac{2n^3 + 3n^2 + n}{6} = \frac{1}{3}n^3 + \frac{1}{2}n^2 + \frac{1}{6}n$


Now, $\forall n \ge 2$, we have that $\frac{1}{3}n^3 \le \frac{1}{3}n^3 + \frac{1}{2}n^2 + \frac{1}{6}n \le n^3$, where $\frac{1}{3}n^3 + \frac{1}{2}n^2 + \frac{1}{6}n = h(n)$. Thus, by the definition of $\Theta$-notation, $h(n) \in \Theta(n^3)$, where $c_1 = \frac{1}{3}$, $c_2 = 1$ and $n_0 = 2$. This corresponds to choice (h.) $n^3$.
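The double sum, its sum-of-squares form, and the chosen bounds can likewise be checked numerically (illustrative sketch):

    # Check h(n) = sum_{i=0}^{n} sum_{j=1}^{i} i (i.e. 1^2 + 2^2 + ... + n^2)
    # against n(n+1)(2n+1)/6, and the sandwich (1/3)n^3 <= h(n) <= n^3 for n >= 2.
    for n in range(2, 100):
        h = sum(sum(i for j in range(1, i + 1)) for i in range(0, n + 1))
        assert h == n * (n + 1) * (2 * n + 1) // 6
        assert n ** 3 / 3 <= h <= n ** 3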

$i(n) = \sum_{i=0}^{n} 2^i = 2^0 + 2^1 + 2^2 + \ldots + 2^n = \frac{2^{n+1} - 1}{2 - 1} = 2^{n+1} - 1$

Now, $\forall n \ge 1$, we have that $2^n \le 2^{n+1} - 1 \le 2\cdot 2^n$, where $2^{n+1} - 1 = i(n)$. Thus, by the definition of $\Theta$-notation, $i(n) \in \Theta(2^n)$, where $c_1 = 1$, $c_2 = 2$ and $n_0 = 1$. This corresponds to choice (i.) $2^n$.
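Purely as an illustration, the geometric sum and the $\Theta(2^n)$ sandwich can be verified for small n:

    # Check i(n) = sum_{i=0}^{n} 2^i = 2^(n+1) - 1 and 2^n <= i(n) <= 2 * 2^n for n >= 1.
    for n in range(1, 40):
        s = sum(2 ** i for i in range(0, n + 1))
        assert s == 2 ** (n + 1) - 1
        assert 2 ** n <= s <= 2 * 2 ** n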

$j(n) = \sum_{i=1}^{\log n} n = n\sum_{i=1}^{\log n} 1 = n\log n$

Now, $\forall n \ge 1$, we have that $n\log n \le n\log n \le n\log n$, where $n\log n = j(n)$. Thus, by the definition of $\Theta$-notation, $j(n) \in \Theta(n\log n)$, where $c_1 = 1$, $c_2 = 1$ and $n_0 = 1$. This corresponds to choice (d.) $n\log n$.

$k(n) = \sum_{i=0}^{n} \frac{4}{n} = \frac{4}{n}\sum_{i=0}^{n} 1 = \frac{4(n+1)}{n} = 4 + \frac{4}{n}$

Now, $\forall n \ge 4$, we have that $1 \le 4 + \frac{4}{n} \le 5$, where $4 + \frac{4}{n} = k(n)$. Thus, by the definition of $\Theta$-notation, $k(n) \in \Theta(1)$, where $c_1 = 1$, $c_2 = 5$ and $n_0 = 4$. This corresponds to choice (a.) 1.
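The two remaining closed forms are simple enough to verify in the same way (illustrative sketch; for $j(n)$, n is restricted to powers of two so that $\log n$ is an integer):

    import math

    # j(n) = sum_{i=1}^{log n} n = n log n (n a power of two);
    # k(n) = sum_{i=0}^{n} 4/n = 4 + 4/n, with 1 <= k(n) <= 5 for n >= 4.
    for k in range(1, 12):
        n = 2 ** k
        j = sum(n for _ in range(1, int(math.log2(n)) + 1))
        assert j == n * k
    for n in range(4, 100):
        kn = sum(4 / n for _ in range(0, n + 1))
        assert math.isclose(kn, 4 + 4 / n)
        assert 1 <= kn <= 5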

Solution to Exercise 3: We may prove that the presented algorithm is correct by using the following loop invariant:
At the start of each iteration i of the for loop, the variable leastSmall's value is the largest integer in the subarray A[1..i-1] that is strictly smaller than x.
The following is a proof of correctness using the above-stated loop invariant:

Initialisation: Before the first iteration, i = 1. Since the current subarray is A[1..0], there is no element in this subarray that is strictly smaller than x. In accordance with the algorithm's description, leastSmall = 0. Thus, we have proved that the invariant is true prior to the first iteration of the loop.

Maintenance: The loop invariant is true before iteration i of the loop. We'll prove that it remains true before iteration i + 1.
At the start of iteration i, leastSmall contains the value of the largest integer in the subarray A[1..i-1] that is strictly smaller than x. Now, during iteration i, a conditional statement is used in order to determine whether the element at index i is the largest integer in A[1..i] that is strictly smaller than x. Thus, two cases emerge:
I. If the element at index i is indeed the largest element in the subarray A[1..i] that is strictly smaller than x, then the condition in the if statement evaluates to TRUE, which results in leastSmall having its value updated. In this case, leastSmall will contain the largest element strictly smaller than x in the subarray A[1..i] (after the execution of the if statement in lines 3-4).
II. If the element at index i is not the largest element in the subarray A[1..i] that is strictly smaller than x, then the condition of the if statement evaluates to FALSE, which results in leastSmall not having its value changed. In this case, leastSmall will still contain the largest element strictly smaller than x in the subarray A[1..i] (after the execution of the if statement in line 3; note that line 4 is skipped since the condition in the if statement evaluates to FALSE).

In conclusion, no matter which case applies to the value at the current index i, leastSmall still contains the value of the largest element strictly smaller than x in the subarray A[1..i]. Incrementing i for the next iteration of the for loop then preserves the loop invariant; that is, before iteration i + 1, leastSmall's value will be the largest element strictly smaller than x in the subarray A[1..(i + 1) - 1] = A[1..i].

Termination: The condition that causes the loop to terminate is that i > n. Because
each iteration increases i by 1, we must have i = n + 1 at that time. Substituting n + 1
for i in the loop invariant, we have that the variable leastSmall's value is the largest
integer in the subarray A[1..n] that is strictly smaller than x. Hence, the algorithm is
correct as the subarray A[1..n] is actually just the array A.
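For reference, the following Python sketch is a hypothetical reconstruction of the algorithm discussed above (the assignment's pseudocode is not reproduced in this write-up, so the exact statements and line numbers may differ). It assumes a 1-indexed array A[1..n] of positive integers, leastSmall initialised to 0, and an if statement that compares A[i] with both x and leastSmall; the loop invariant is asserted at the start of every iteration.

    # Hypothetical reconstruction: return the largest value in A[1..n] that is
    # strictly smaller than x, or 0 if no such value exists.
    def largest_smaller(A, x):
        n = len(A)
        A = [None] + A              # shift to 1-based indexing, as in A[1..n]
        least_small = 0             # corresponds to the initialisation leastSmall = 0
        for i in range(1, n + 1):
            # loop invariant: least_small is the largest integer in A[1..i-1]
            # that is strictly smaller than x (0 if there is none)
            assert least_small == max((a for a in A[1:i] if a < x), default=0)
            if least_small < A[i] < x:
                least_small = A[i]  # A[i] is the largest value < x seen so far
        return least_small

    print(largest_smaller([7, 3, 9, 5, 2], 6))   # prints 5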

By way of a loop invariant, we have proved that the algorithm is correct. We can now analyse
the running time of the presented algorithm. We denote the running time of the algorithm
as a function of n by T(n).
$T(n) = c_1 + (n + 1)\cdot c_2 + n\cdot c_3 + c_4 = n\cdot(c_2 + c_3) + c_1 + c_2 + c_4$
The above values are obtained by determining how many times each line of code gets executed (the statement on line i takes $c_i$ steps to execute). Line 1 gets executed once, so it contributes only a constant term to the final running time. The condition of the for loop gets evaluated n + 1 times, contributing $c_2\cdot(n + 1)$ to the running time. The statements inside the for loop get executed n times, contributing $c_3\cdot n$, and, finally, the statement in line 5 gets executed only once, also contributing a constant term to the final running time, namely $c_4$.
So, $T(n)$ is of the form $a\cdot n + b$, where $a > 0$ and $b \ge 0$. Since $a\cdot n \le a\cdot n + b \le a\cdot n + n = (a + 1)\cdot n$, $\forall n \ge \max(1, b)$, where $a\cdot n + b = T(n)$, by the definition of $\Theta$-notation, we may conclude that $T(n) \in \Theta(n)$, where $c_1 = a$, $c_2 = a + 1$ and $n_0 = \max(1, b)$. Therefore, the running time of the algorithm is $\Theta(n)$.

Solution to Exercise 4: We'll prove that $\sum_{k=0}^{n} x^k = \frac{x^{n+1} - 1}{x - 1}$, $\forall x > 1$, $\forall n \in \mathbb{N}$ $(\ast)$ using mathematical induction.

Base Case: $n = 0 \Rightarrow \sum_{k=0}^{0} x^k = x^0 = 1 = \frac{x - 1}{x - 1} = \frac{x^{0+1} - 1}{x - 1}$. Thus, $(\ast)$ holds for $n = 0$.

Induction Hypothesis (IH): We assume that $(\ast)$ holds for $n = m$, $m \in \mathbb{N}$.

Induction Step: We prove that $(\ast)$ holds for $n = m + 1$, $m \in \mathbb{N}$.

$\sum_{k=0}^{m+1} x^k = \sum_{k=0}^{m} x^k + x^{m+1} \stackrel{\text{IH}}{=} \frac{x^{m+1} - 1}{x - 1} + x^{m+1} = \frac{x^{m+1} - 1 + x^{m+2} - x^{m+1}}{x - 1} = \frac{x^{m+2} - 1}{x - 1}$

Thus, $\sum_{k=0}^{m+1} x^k = \frac{x^{m+2} - 1}{x - 1}$. So, $(\ast)$ holds for $n = m + 1$, $\forall m \in \mathbb{N}$.

Conclusion: By the Principle of Mathematical Induction, $\forall x > 1$ and $\forall n \in \mathbb{N}$, $\sum_{k=0}^{n} x^k = \frac{x^{n+1} - 1}{x - 1}$.
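As an illustrative cross-check of the formula just proved (not part of the induction proof itself), exact integer arithmetic confirms it for a few integer values of x > 1:

    # Check sum_{k=0}^{n} x^k == (x^(n+1) - 1) / (x - 1) exactly, for integer x > 1.
    for x in (2, 3, 5, 10):
        for n in range(0, 30):
            lhs = sum(x ** k for k in range(0, n + 1))
            rhs = (x ** (n + 1) - 1) // (x - 1)   # division is exact here
            assert lhs == rhs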

Solution to Exercise 5:
(a) $n + \sqrt{n} = \Theta(n)$ - This is true. Proof:
$\forall n \ge 1$, $n \le n + \sqrt{n} \le n + n = 2n$. So, $\exists c_1, c_2$ and $n_0$ such that $c_1 n \le n + \sqrt{n} \le c_2 n$, $\forall n \ge n_0$, where $c_1 = 1$, $c_2 = 2$ and $n_0 = 1$. This is indeed the definition of $\Theta$-notation, so we may conclude that $n + \sqrt{n} = \Theta(n)$.

(b) $n^3 - 2n^2 = O(n^2)$ - This is false. We can show this using a proof by contradiction. We assume that $n^3 - 2n^2 = O(n^2)$, and derive a contradiction from the statement.
So, $n^3 - 2n^2 = O(n^2)$. By the definition of $O$-notation, $\exists c, n_0$ positive constants, such that $0 \le n^3 - 2n^2 \le cn^2$, $\forall n \ge n_0$. The first part of the inequality holds for all $n \ge 2$, so we derive a contradiction from the second part of the inequality. So, $n^3 - 2n^2 \le cn^2$; since $n$ is positive, $n^2$ is positive, so we can divide by $n^2$. It follows that $n - 2 \le c$, so $n \le c + 2$, which is false when $n \to \infty$ since $c + 2$ is a constant. Therefore, the aforementioned inequality does not hold $\forall n \ge n_0$.
Thus, we have derived a contradiction. We may conclude that $n^3 - 2n^2 \ne O(n^2)$.


(c) $n^2 - 4n + 5 = \Omega(n^2)$ - This is true. Proof:
$\forall n \ge 2$, $0 \le 9n^2 - 40n + 50$, since the discriminant $\Delta = 1600 - 200\cdot 9 < 0$. So, $n^2 \le 10n^2 - 40n + 50$. Dividing by 10 we get: $\frac{1}{10}n^2 \le n^2 - 4n + 5$. Since $n \ge 2$ we get $0 \le \frac{1}{10}n^2 \le n^2 - 4n + 5$. Thus, $\exists c$ and $n_0$ such that $0 \le cn^2 \le n^2 - 4n + 5$, where $c = \frac{1}{10}$ and $n_0 = 2$. This is indeed the definition of $\Omega$-notation, so we may conclude that $n^2 - 4n + 5 = \Omega(n^2)$.
(d) $5n + 4 = \Theta(n\log n)$ - This is false. We can show this using a proof by contradiction. We assume that $5n + 4 = \Theta(n\log n)$, and derive a contradiction from the statement.
So, $5n + 4 = \Theta(n\log n)$. By the definition of $\Theta$-notation, $\exists c_1, c_2$ and $n_0$ positive constants, such that $c_1 n\log n \le 5n + 4 \le c_2 n\log n$, $\forall n \ge n_0$. From the first part of the inequality we have that $c_1 n\log n \le 5n + 4$. We can divide by $n$ since it is positive. We arrive at $c_1 \log n \le 5 + \frac{4}{n}$. We can divide by $c_1$ since it is positive. This results in $\log n \le \frac{5 + \frac{4}{n}}{c_1} \le \frac{5 + 4}{c_1} = \frac{9}{c_1}$ (using $\frac{4}{n} \le 4$ for $n \ge 1$). So, $\log n \le \frac{9}{c_1}$, which is not true when $n \to \infty$ since $\frac{9}{c_1}$ is a constant. Thus, $c_1 \log n \le 5 + \frac{4}{n}$ does not hold $\forall n \ge n_0$. Since $n$ is positive, $c_1 n\log n \le 5n + 4$ does not hold either. But we assumed in the beginning that $c_1 n\log n \le 5n + 4$ would hold.
Thus, we have derived a contradiction. We may conclude that $5n + 4 \ne \Theta(n\log n)$.

(e) $n^2\log n = O(n^2\sqrt{n})$ - This is true. Proof:
We know that $\lim_{n\to\infty} \frac{\sqrt{n}}{\log n} = \infty$. Thus, $\forall n \ge 16$, it holds that $0 \le \log n \le \sqrt{n}$. We now multiply the inequality by $n^2$ (which is positive) and arrive at $0 \le n^2\log n \le n^2\sqrt{n}$. That means $\exists c$ and $n_0$ positive constants, such that $0 \le n^2\log n \le c\cdot n^2\sqrt{n}$, where $c = 1$ and $n_0 = 16$. This is indeed the definition of $O$-notation, so we may conclude that $n^2\log n = O(n^2\sqrt{n})$.
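The specific threshold $n_0 = 16$ used above can be spot-checked numerically (base-2 logarithm assumed); an illustrative sketch:

    import math

    # Check that log2(n) <= sqrt(n) for n >= 16 (with equality at n = 16),
    # which gives n^2 log n <= 1 * n^2 sqrt(n) from n_0 = 16 onwards.
    assert math.isclose(math.log2(16), math.sqrt(16))
    for n in range(16, 10000):
        assert math.log2(n) <= math.sqrt(n) + 1e-12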

(f) If $f(n) = \Omega(g(n))$ and $g(n) = \Omega(f(n))$, then $f(n) = \Theta(g(n))$ - This is true. Since this is an implication, in order to prove it to be true, we assume that $f(n) = \Omega(g(n))$ and $g(n) = \Omega(f(n))$ are indeed true, and derive that $f(n) = \Theta(g(n))$.
$f(n) = \Omega(g(n))$, so, by the definition of $\Omega$-notation, $\exists c_3$ and $n_1$ positive constants, such that $0 \le c_3\cdot g(n) \le f(n)$, $\forall n \ge n_1$. (I)
$g(n) = \Omega(f(n))$, so, by the definition of $\Omega$-notation, $\exists c_4$ and $n_2$ positive constants, such that $0 \le c_4\cdot f(n) \le g(n)$, $\forall n \ge n_2$. (II)
From (II), we have that $0 \le f(n) \le \frac{1}{c_4}\cdot g(n)$, where $\frac{1}{c_4}$ is a positive constant, since $c_4$ is a positive constant. When we combine this inequality with (I), using the transitivity of $\le$, we arrive at $0 \le c_3\cdot g(n) \le f(n) \le \frac{1}{c_4}\cdot g(n)$, which holds $\forall n \ge n_0$, where $n_0 = \max(n_1, n_2)$. $(\ast)$
Thus, through $(\ast)$, we have proved that $\exists c_1, c_2$ and $n_0$ positive constants, such that $0 \le c_1\cdot g(n) \le f(n) \le c_2\cdot g(n)$, $\forall n \ge n_0$, where $c_1 = c_3$, $c_2 = \frac{1}{c_4}$ and $n_0 = \max(n_1, n_2)$. This is indeed the definition of $\Theta$-notation, so we may conclude that $f(n) = \Theta(g(n))$.


NOTE - there is a shorter way to prove this by using the properties of $\Theta$, $\Omega$ and $O$. This proof works as follows:
We know that $g(n) = \Omega(f(n))$. By the transpose-symmetry property of $\Omega$ and $O$, we have that $f(n) = O(g(n))$. But we also know that $f(n) = \Omega(g(n))$, so $\exists c_1, c_2, n_1$ and $n_2$ positive constants, such that $0 \le c_1\cdot g(n) \le f(n)$, $\forall n \ge n_1$, and $0 \le f(n) \le c_2\cdot g(n)$, $\forall n \ge n_2$. Combining the two inequalities, we have that $0 \le c_1\cdot g(n) \le f(n) \le c_2\cdot g(n)$, $\forall n \ge n_0$, where $n_0 = \max(n_1, n_2)$. Since this is indeed the definition of $\Theta$-notation, we may conclude that $f(n) = \Theta(g(n))$.

