
1. (8 points) Goal: Practice analysis of algorithms.

Consider the algorithm represented by the following program fragment.

MYSTERY(int x, int n)
{
    int sum = 0
    if (x <= 100) then {
        while (x > 10) {
            x = x - 1
        }
    } else {
        for (int i = 1; i <= n; i++) {
            sum = sum + i + x
        }
    }
    return x + sum
}

a) (2 points) What value (as a function of x and n) does MYSTERY compute?
b) (2 points) Assume that the value of x is constant, but unknown. Use only arithmetic operations as primitive operations. Compute the worst-case running time of MYSTERY as a function of x and n.
c) (2 points) Assume that x remains constant while n goes to infinity. Derive a tight, big-Oh expression (dependent on x) for the running time of MYSTERY. Justify your solution.
d) (2 points) Assume that n remains constant while x goes to infinity. Derive a tight, big-Oh expression (dependent on n) for the running time of MYSTERY. Justify your solution.

Solution:
a) f(x, n) =
     x,                     for x < 10
     10,                    for 10 ≤ x ≤ 100
     n(n+1)/2 + x(n+1),     otherwise
b) We count the following arithmetic operations:
     i.   x = x - 1
     ii.  i++
     iii. sum = sum + i + x
     iv.  x + sum

Then the worst-case running time is:

T(x, n) =
     1,          for x < 10
     x - 9,      for 10 ≤ x ≤ 100
     2n + 1,     otherwise

c) T(x, n) = O(1) for x ≤ 100, and T(x, n) = O(n) otherwise. For x ≤ 100 the running time does not depend on n at all, while for x > 100 it is 2n + 1 ∈ O(n).


d) T(x, n) = O(1). Since n is constant and x eventually exceeds 100, the running time is the constant 2n + 1, so it is bounded by a constant independent of x.
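
A quick sanity check of part a): the following Python sketch (ours, not part of the original solution) transcribes MYSTERY directly and compares it against the closed form derived above.

def mystery(x, n):
    total = 0
    if x <= 100:
        while x > 10:
            x = x - 1
    else:
        for i in range(1, n + 1):
            total = total + i + x
    return x + total

def f(x, n):
    # closed form from part a)
    if x < 10:
        return x
    if x <= 100:
        return 10
    return n * (n + 1) // 2 + x * (n + 1)

for x in (3, 10, 57, 100, 101, 500):
    for n in (0, 1, 7):
        assert mystery(x, n) == f(x, n)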

Question 2:

Solution:
a. The running time is determined by the for loop from lines H2 to H4. Asymptotically we get a running time of Θ(n).

H1  y = 0
H2  for (i = n; i >= 0; i--)
H3      y *= x
H4      y += a[i]

b. More naive pseudocode:

N1  y = 0
N2  for (i = n; i >= 0; i--)
N3      z = a[i]
N4      for (j = 0; j < i; j++)
N5          z *= x
N6      y += z

The running time is dominated by the nested for loop in lines N4 and N5. Asymptotically, we get a running time of Θ(n^2). This is a larger asymptotic running time than Horner's rule.
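
The two fragments above translate directly into the following runnable Python sketch (ours); both functions evaluate the polynomial a[0] + a[1]·x + ... + a[n]·x^n.

def horner(a, x):
    y = 0
    for i in range(len(a) - 1, -1, -1):   # H2: i = n down to 0
        y = y * x                          # H3
        y = y + a[i]                       # H4
    return y

def naive(a, x):
    y = 0
    for i in range(len(a) - 1, -1, -1):   # N2
        z = a[i]                           # N3
        for _ in range(i):                 # N4: multiply by x, i times
            z = z * x                      # N5
        y = y + z                          # N6
    return y

coeffs = [5, -1, 0, 2]                     # the polynomial 5 - x + 2x^3
assert horner(coeffs, 3) == naive(coeffs, 3) == 5 - 3 + 2 * 27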

c.
Loop invariant: at the start of each iteration of the for loop,
y = Σ_{k=0}^{n-(i+1)} a[k+i+1]·x^k.

Initialization: Prior to the first loop iteration we have i = n, so the sum goes from k = 0 to k = n - (n+1) = -1; it does not contain any term and so equals 0. This is identical to the initialization value of y.

Maintenance: Assume that before the iteration that takes i to i - 1 we have
y = Σ_{k=0}^{n-(i+1)} a[k+i+1]·x^k.
In line H3 we multiply y by x, resulting in
y = Σ_{k=0}^{n-(i+1)} a[k+i+1]·x^(k+1).
In line H4 we add a[i] to y, resulting in
y = a[i] + Σ_{k=0}^{n-(i+1)} a[k+i+1]·x^(k+1).
This equates to
y = Σ_{k=0}^{n-i} a[k+i]·x^k,
which is the invariant with i replaced by i - 1. Hence the loop invariant is preserved at the beginning of the next iteration with i - 1.

Termination: The algorithm terminates when i = -1. The loop invariant guarantees that at this time we have
y = Σ_{k=0}^{n} a[k]·x^k.

d. Firstly, line H1 guarantees that the loop invariant is initialized correctly for i = n. Then the loop invariant is maintained while the loop in line H2 simply counts down until the algorithm terminates with i = -1. At this time, the loop invariant guarantees that y is identical to the value of the polynomial we had to evaluate, i.e. y = Σ_{k=0}^{n} a[k]·x^k.
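
The invariant from part c can also be checked mechanically; the small Python sketch below (ours) asserts it before every iteration and at termination.

def horner_with_invariant(a, x):
    n = len(a) - 1
    y = 0
    for i in range(n, -1, -1):
        # invariant at the start of the iteration with index i
        expected = sum(a[k + i + 1] * x ** k for k in range(n - (i + 1) + 1))
        assert y == expected
        y = y * x + a[i]                   # lines H3 and H4
    # termination (i = -1): y is the full polynomial value
    assert y == sum(a[k] * x ** k for k in range(n + 1))
    return y

horner_with_invariant([5, -1, 0, 2], 3)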

3. Purpose: practice working with asymptotic notation. Please solve (12 points) 3-2 on page 61, (8 points) 3-4 [a-c] on page 62. For full marks justify your solutions.

3-2
Solution:

A            B            O     o     Ω     ω     Θ
lg^k(n)      n^ε          yes   yes   no    no    no
n^k          c^n          yes   yes   no    no    no
sqrt(n)      n^sin(n)     no    no    no    no    no
2^n          2^(n/2)      no    no    yes   yes   no
n^lg(c)      c^lg(n)      yes   no    yes   no    yes
lg(n!)       lg(n^n)      yes   no    yes   no    yes

a) Apply L'Hospital's rule k times:

lg^k(n) = (log_2 n)^k = ((ln n)/(ln 2))^k

lim_{n→∞} lg^k(n) / n^ε = (1/ln 2)^k · lim_{n→∞} (ln n)^k / n^ε
                        = (1/ln 2)^k · lim_{n→∞} k·(ln n)^(k-1) / (ε·n^ε)
                        = ...
                        = (k! / ((ln 2)^k · ε^k)) · lim_{n→∞} 1/n^ε
                        = 0

Hence lg^k(n) = o(n^ε).

b) Apply L'Hospital's rule k times:

lim_{n→∞} n^k / c^n = lim_{n→∞} k·n^(k-1) / (c^n · ln c)
                    = ...
                    = lim_{n→∞} k! / (c^n · (ln c)^k)
                    = (k! / (ln c)^k) · lim_{n→∞} 1/c^n
                    = 0

Hence n^k = o(c^n).

c) The value of sin(n) fluctuates between -1 and 1, so n^sin(n) oscillates between n^(-1) and n. Therefore, sqrt(n) and n^sin(n) are not asymptotically comparable.
d) lim_{n→∞} 2^n / 2^(n/2) = lim_{n→∞} 2^(n/2) = ∞

Hence 2^n = ω(2^(n/2)).
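
As a numeric illustration of the limits in parts a), b) and d) (our own sketch, not part of the solution), the Python snippet below evaluates the three ratios for growing n with sample constants k = 3, ε = 0.5, c = 2.

import math

k, eps, c = 3, 0.5, 2.0

# a) lg^k(n) / n^eps -> 0 (the decrease only becomes visible for fairly large n)
for n in (10**4, 10**8, 10**12, 10**16):
    print(n, math.log2(n) ** k / n ** eps)

# b) n^k / c^n -> 0, and d) 2^n / 2^(n/2) = 2^(n/2) -> infinity
for n in (10, 50, 250, 1000):
    print(n, n ** k / c ** n, 2.0 ** (n / 2))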


e) n^lg(c) = c^lg(n), since both equal 2^(lg(c)·lg(n)). Hence each function is Θ (and therefore O and Ω) of the other, but neither o nor ω.
f) lg(n!) = Σ_{i=1}^{n} lg(i) ≤ Σ_{i=1}^{n} lg(n) = n·lg(n) = lg(n^n).
Because Σ_{i=1}^{n} lg(i) ≤ n·lg(n), lg(n!) is Big-Oh of lg(n^n).
Also, lg(n!) = Σ_{i=1}^{n} lg(i) ≥ Σ_{i=n/2+1}^{n} lg(i) ≥ Σ_{i=n/2+1}^{n} lg(n/2) ≥ (n/2 - 1)·lg(n/2).
Because (n/2 - 1)·lg(n/2) = Ω(n·lg(n)), lg(n!) is Big-Omega of lg(n^n).

3-4
Solution:

a) f(n) = O(g(n)) implies g(n) = O(f(n)).
Let f(n) = n and g(n) = n^2; then we have n = O(n^2) but n^2 ≠ O(n).
=> False

b) f(n) + g(n) = Θ(min(f(n), g(n))).
Let f(n) = n and g(n) = n^2; then min(f(n), g(n)) = n^2 for n < 1 and n for n ≥ 1.
Therefore, for n ≥ 1 we have min(f(n), g(n)) = n = Θ(n).
However, f(n) + g(n) = n + n^2 ≠ Θ(n).
=> False

c) f(n) = O(g(n)) implies lg(f(n)) = O(lg(g(n))), where lg(g(n)) ≥ 1 and f(n) ≥ 1 for all sufficiently large n.
f(n) = O(g(n)) implies that for some c > 0 and all n ≥ n_0 ≥ 0 we have 0 ≤ f(n) ≤ c·g(n); we may assume c ≥ 1, since enlarging c preserves the inequality.
If 0 ≤ f(n) ≤ c·g(n), then lg(f(n)) ≤ lg(c·g(n)), i.e. lg(f(n)) ≤ lg(c) + lg(g(n)).
Since lg(g(n)) ≥ 1 and lg(c) ≥ 0, we have lg(c)·lg(g(n)) ≥ lg(c).
Therefore lg(f(n)) ≤ lg(c) + lg(g(n)) ≤ lg(c)·lg(g(n)) + lg(g(n)) = (1 + lg(c))·lg(g(n)), so lg(f(n)) = O(lg(g(n))).
=> True

4. Purpose: more practice. The following 10 functions are comparable by asymptotic growth.
(Here, lg := logarithm base 2)
4^lg(n), n^3, n^2 + n·lg^5(n), n^2 - n + 5, n^2·lg(n), n^3·lg^3(n), sqrt(n), n/2, lg(n), 2n

a) (5 points) Put them in increasing order of asymptotic growth.
Solution:
The order would be: 2n, lg(n), sqrt(n), n/2, n^2 - n + 5, 4^lg(n), n^2 + n·lg^5(n), n^2·lg(n), n^3, n^3·lg^3(n).
b) (2 points) The order is NOT unique. Which functions can be exchanged?
Solution:
4^lg(n) = 2^(2·lg(n)) = (2^lg(n))^2 = n^2, so 4^lg(n), n^2 + n·lg^5(n), and n^2 - n + 5 can be exchanged (they are all Θ(n^2)).
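
A rough numeric sanity check of the ordering in part a), in our own Python sketch: evaluate each function at a large n and sort by value. Only the functions whose form is unambiguous in this copy are included; the "2n" term from the statement is left out because its intended form is unclear here.

import math

n = 10**6
funcs = {
    "lg(n)":           math.log2(n),
    "sqrt(n)":         math.sqrt(n),
    "n/2":             n / 2,
    "n^2 - n + 5":     n**2 - n + 5,
    "4^lg(n)":         4 ** math.log2(n),
    "n^2 + n*lg^5(n)": n**2 + n * math.log2(n) ** 5,
    "n^2 * lg(n)":     n**2 * math.log2(n),
    "n^3":             n**3,
    "n^3 * lg^3(n)":   n**3 * math.log2(n) ** 3,
}
for name, value in sorted(funcs.items(), key=lambda kv: kv[1]):
    print(f"{name:16s} {value:.3e}")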

5. Purpose: reinforce your understanding of sorting algorithms and lower bounds. Please analyze the number of comparisons needed by Insertion Sort (2 bonus points), Merge Sort (2 bonus points), and Heap Sort (2 bonus points) in the special case when all the keys are 0s and 1s. Assume that the algorithm in question does not know that the keys are 0s and 1s, and that it behaves just as it would for any other set of keys. In each case, give a big-theta bound for the worst-case number of comparisons on n keys in each of three situations:
The number of 0s is a constant c > 0;
The number of 1s is a constant c > 0; and
The number of 0s is n/2 and the number of 1s is n/2.
(a) Insertion Sort
The number of comparisons for insertion sort is at most n - 1 + I and at least I, where I is the number of inversions in the array. Each inversion forces a comparison: if a[i] > a[j] and i < j, then a[i] and a[j] will be compared when a[j] is inserted. In addition, there are at most n - 1 comparisons performed when an element has found its correct place and is not the first element.
When there are only 0s and 1s, the maximum number of inversions occurs when all the 1s precede all the 0s. In this case, the number of inversions is k·(n - k), where k is the number of 0s. Take the following example:
1 1 1 1 1 0
The zero on the right has to be compared with all the existing 1s in the worst case; moving it to the front to obtain 0 1 1 1 1 1 requires 5 comparisons.
If the sequence was
1 1 1 1 1 0 0
the two zeros would require 5 + 6 = 11 comparisons, with the number of inversions equal to 10.

The number of 0s is a constant c > 0:
c·(n - c) ∈ Θ(n)
The number of 1s is a constant c > 0:
(n - c)·c ∈ Θ(n)
The number of 0s is n/2 and the number of 1s is n/2:
⌊n/2⌋·⌈n/2⌉ ∈ Θ(n^2)
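
These three bounds can be checked empirically; the following Python sketch (ours, not part of the submitted solution) counts the key comparisons performed by standard insertion sort on 0/1 inputs.

def insertion_sort_comparisons(a):
    a = list(a)
    count = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0:
            count += 1                 # one comparison of a[i] with key
            if a[i] > key:
                a[i + 1] = a[i]
                i -= 1
            else:
                break
        a[i + 1] = key
    return count

n, c = 4000, 5
print(insertion_sort_comparisons([1] * (n - c) + [0] * c))          # c zeros last:  ~ c*(n - c)
print(insertion_sort_comparisons([1] * c + [0] * (n - c)))          # c ones first:  ~ (n - c)*c
print(insertion_sort_comparisons([1] * (n // 2) + [0] * (n // 2)))  # half and half: ~ (n/2)^2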
(b) Merge Sort
In our implementation, merging two lists with a total of n elements requires n - 1 compares, independent of the specific elements. Hence, the number of comparisons at each level of the recursion tree is Θ(n), for a total of Θ(n·lg(n)) comparisons in all three situations.
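
A comparison-counting merge sort can illustrate this. The sketch below (ours) uses the usual early-exit merge, so each merge performs at most n - 1 comparisons rather than exactly n - 1, but the total still grows as Θ(n·lg(n)) on 0/1 inputs.

def merge_sort_comparisons(a):
    count = 0
    def sort(lst):
        nonlocal count
        if len(lst) <= 1:
            return lst
        mid = len(lst) // 2
        left, right = sort(lst[:mid]), sort(lst[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            count += 1                      # one comparison per merge step
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged
    sort(list(a))
    return count

for n in (2**10, 2**12, 2**14):
    print(n, merge_sort_comparisons([1] * (n // 2) + [0] * (n // 2)))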

(c) Heap Sort
The MakeHeap phase of Heapsort does not have any impact on the asymptotic number of comparisons. The sorting phase accounts for the following scenarios.

The number of 0s is a constant c > 0:
Each time a 0 becomes the root, it moves down one step at a time towards the leftmost position that did not have a 0 before. In other words, it continues to have only 0s to the left of it on its level. This also means that every time it trickles down, its depth decreases by 1: the 0s to the left of it keep it from coming back to the same level, and if it is the last 0 on its level, the depth of the tree decreases. So each 0 can move to the root at most lg(n) times (the number of times its depth can decrease), and each trickle-down takes at most 2·lg(n) comparisons to heapify. Any time a 1 moves to the root there are 2 comparisons. Thus the total number of comparisons is bounded by 2·c·lg^2(n) + (n - c) ∈ Θ(n).

The number of 1s is a constant c > 0:
The first c iterations of Heapsort's main loop remove the 1s from the heap, which then consists of only 0s. A heap with all 0s takes 2 comparisons per iteration. Therefore the answer is Θ(n).

The number of 0s is n/2 and the number of 1s is n/2:
The upper bound is O(n·lg(n)). To find the lower bound, take an example where the last row has all 0s and every non-leaf node is a 1. MakeHeap will not change this in any way. Assume n = 2^x - 1. After the first 2^(x-2) iterations, half of the bottom row, which consisted of 0s, will end up as part of the subheap on the far left side. Almost half of these will be in the last-but-one row of that subheap: the subheap has 2^(x-2) leaves and 2^(x-3) parents of leaves. Each of the 2^(x-3) 0s that end up as parents of leaves in the subheap traverses past x - 3 pairs of 1s. As a result, just these 2^(x-3) 0s account for 2·(x - 3)·2^(x-3) = (x - 3)·2^(x-2) comparisons, which is Ω(n·lg(n)) with a suitably small constant. If n ≠ 2^x - 1 we can simply apply this argument to the largest x for which 2^x - 1 < n. Any additional 0s will only increase the relative number of comparisons. The tight bound is therefore Θ(n·lg(n)).
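
As an empirical cross-check of these three Θ bounds (our own sketch, not part of the solution), the following Python code counts the comparisons made by a textbook max-heapsort on 0/1 inputs.

def heapsort_comparisons(a):
    a = list(a)
    n = len(a)
    count = 0

    def sift_down(heap_size, i):
        nonlocal count
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < heap_size:
                count += 1                 # compare a[left] with a[largest]
                if a[left] > a[largest]:
                    largest = left
            if right < heap_size:
                count += 1                 # compare a[right] with a[largest]
                if a[right] > a[largest]:
                    largest = right
            if largest == i:
                return
            a[i], a[largest] = a[largest], a[i]
            i = largest

    for i in range(n // 2 - 1, -1, -1):    # MakeHeap phase
        sift_down(n, i)
    for end in range(n - 1, 0, -1):        # sorting phase
        a[0], a[end] = a[end], a[0]
        sift_down(end, 0)
    return count

for n in (2**10, 2**12, 2**14):
    print(n,
          heapsort_comparisons([1] * (n - 3) + [0] * 3),            # constant number of 0s
          heapsort_comparisons([0] * (n - 3) + [1] * 3),            # constant number of 1s
          heapsort_comparisons([0] * (n // 2) + [1] * (n // 2)))    # n/2 zeros, n/2 ones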
6. (1 point) Purpose: even more practice. Show e^n = ω(n^t).
Using L'Hospital's Rule (applied t times):

lim_{n→+∞} e^n / n^t = lim_{n→+∞} e^n / t! = ∞

Hence e^n = ω(n^t).

7. (8 points) Purpose: practice algorithm design and algorithm analysis.
Give pseudocode for a Θ(log n) algorithm which computes a^n, given a > 0 and integer n. Justify the asymptotic running time of your algorithm. Do not assume that n is a power of 2.

Algorithm 1 Iterative algorithm to calculate power(a, n)

procedure FastPower(a, n)
    result ← 1
    if n == 0 then
        return result
    power_of_2 ← a
    k ← ⌊lg(n)⌋ + 1
    for i = 1 to k do
        if n is even then
            n ← n/2
        else
            n ← (n - 1)/2
            result ← result · power_of_2
        power_of_2 ← power_of_2 · power_of_2
    return result
This algorithm calculates a^n in ⌊lg(n)⌋ + 1 iterations of the for loop. Since the number of operations in each iteration is bounded by a constant, the algorithm has Θ(log(n)) worst-case running time. Assume n = a_k·2^k + a_{k-1}·2^(k-1) + ... + a_0·2^0 with a_j ∈ {0, 1}.
The algorithm successively calculates a^(a_0·2^0), a^(a_0·2^0 + a_1·2^1), ..., a^(a_0·2^0 + a_1·2^1 + ... + a_k·2^k) and stores these values in the variable result. For example, for n = 5 = 1·2^2 + 0·2^1 + 1·2^0, FastPower(a, 5) computes successively:
Initialization: result = 1, n = 5, power_of_2 = a
for-loop:
i = 1: result = a, n = 2, power_of_2 = a^2
i = 2: result = a, n = 1, power_of_2 = a^4
i = 3: result = a^5, n = 0, power_of_2 = a^8
The final value is in the result variable as shown above.
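
A direct Python transcription of Algorithm 1 (ours) reproduces the n = 5 trace above and uses Θ(lg n) multiplications.

def fast_power(a, n):
    result = 1
    if n == 0:
        return result
    power_of_2 = a
    k = n.bit_length()                     # floor(lg(n)) + 1 iterations
    for _ in range(k):
        if n % 2 == 0:
            n = n // 2
        else:
            n = (n - 1) // 2
            result = result * power_of_2
        power_of_2 = power_of_2 * power_of_2
    return result

assert fast_power(3, 5) == 3 ** 5
assert all(fast_power(7, n) == 7 ** n for n in range(20))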
In an alternate recursive version the recurrence could be formulated as:

T(n) = Θ(1)                  if n = 1 or n = 0
T(n) = T(⌊n/2⌋) + Θ(1)       otherwise

This can be solved using the Master Theorem with a = 1, b = 2 and f(n) = Θ(1) (Case 2), giving T(n) = Θ(lg n).
