
Analysis of Algorithms

Mar 5, 2017
Time and space

To analyze an algorithm means:
developing a formula for predicting how fast an algorithm is, based on the size of the input (time complexity), and/or
developing a formula for predicting how much memory an algorithm requires, based on the size of the input (space complexity)
Usually time is our biggest concern
Most algorithms require a fixed amount of space

Basics
Before we attempt to analyze an algorithm, we need to define two things:
How we measure the size of the input
How we measure the time (or space) requirements
Once we have done this, we find an equation that describes the time (or space) requirements in terms of the size of the input
We simplify the equation by discarding constants and discarding all but the fastest-growing term
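As an illustration of this process (my own example, not from the slides), here is a minimal Python sketch: pick an input-size measure (the length of a list), pick a characteristic operation (a comparison), and observe how the count grows with n. The function and its name are hypothetical.

```python
def count_inversions_naive(items):
    """Count out-of-order pairs; also count the comparisons performed."""
    inversions = comparisons = 0
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1            # the characteristic operation
            if items[i] > items[j]:
                inversions += 1
    return inversions, comparisons

for n in (10, 20, 40):
    _, comps = count_inversions_naive(list(range(n, 0, -1)))
    print(f"n = {n:2d}: {comps} comparisons")   # n(n-1)/2, i.e. quadratic in n
```

The comparison count is exactly n(n-1)/2, so the equation describing the time requirement is quadratic in the input size.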

What about the constants?
An added constant, f(n)+c, becomes less and less
important as n gets larger
A constant multiplier, k*f(n), does not get less
important, but...
Improving k gives a linear speedup (cutting k in half cuts the
time required in half)
Improving k is usually accomplished by careful code
optimization, not by better algorithms
We arent that concerned with only linear speedups!
Bottom line: Forget the constants!
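A small numeric demo of both claims (my own numbers, not from the slides): an added constant fades in relative importance as n grows, while a constant multiplier always gives the same linear speedup.

```python
def f(n):
    return n * n            # a stand-in complexity function

for n in (10, 100, 10_000):
    print(f"n = {n:6d}   f(n) = {f(n):>12}   "
          f"f(n)+1000 = {f(n) + 1000:>12}   "     # added constant c = 1000
          f"0.5*f(n) = {0.5 * f(n):>14}")         # multiplier k = 0.5
# At n = 10 the +1000 dwarfs f(n); at n = 10_000 it is negligible.
# The 0.5 multiplier cuts the value in half at every n: a linear speedup.
```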

Simplifying the formulae
Throwing out the constants is one of two things we
do in analysis of algorithms
By throwing out constants, we simplify 12n2 + 35 to
just n2
Our timing formula may be a polynomial, and may
have terms of various orders (constant, linear,
quadratic, cubic, etc.)
We usually discard all but the highest-order term

We simplify n2 + 3n + 5 to just n2
Bottom line: Throw out the lower order terms!
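A quick numeric check (illustrative, my own example): the ratio of n² + 3n + 5 to n² approaches 1, so the highest-order term alone predicts the growth.

```python
for n in (1, 10, 100, 1000):
    full = n**2 + 3*n + 5
    print(f"n = {n:4d}   n^2+3n+5 = {full:>8}   ratio to n^2 = {full / n**2:.4f}")
# The ratio tends to 1: for large n, the n^2 term is all that matters.
```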

[Graph: y = x² + 3x + 5, for x = 1..20]
Big O notation
When we have a polynomial that describes the time
requirements of an algorithm, we simplify it by:
Throwing out all but the highest-order term
Throwing out all the constants
If an algorithm takes 12n3+4n2+8n+35 time, we
simplify this formula to just n3
We say the algorithm requires O(n3) time
We call this Big O notation
(More accurately, its Big , but well talk about that shortly)
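To preview the formal definition given later, here is a rough sketch (the constant c = 13 is my own choice) showing that 12n³ + 4n² + 8n + 35 is eventually bounded above by a single constant multiple of n³, which is what "O(n³)" claims.

```python
def f(n):
    return 12*n**3 + 4*n**2 + 8*n + 35

c = 13
for n in (1, 5, 10, 50):
    print(f"n = {n:2d}  f(n) = {f(n):>9}  {c}*n^3 = {c*n**3:>9}  "
          f"f(n) < {c}*n^3? {f(n) < c*n**3}")
# False for small n, True once n is big enough; big-O only cares about large n.
```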

Measuring requirements
Our formulae are in terms of n, the size of the input
For a Turing Machine, there is an obvious measure
Measuring time is usually done by counting characteristic operations
A characteristic operation is some operation that is executed the most times
Example: Counting the comparisons needed in an array search
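A Python sketch of the slide's array-search example (the function name and return convention are my own): a linear search with an explicit comparison counter.

```python
def linear_search(arr, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1              # the characteristic operation
        if value == target:
            return i, comparisons
    return -1, comparisons

idx, comps = linear_search([7, 3, 9, 1, 4], 1)
print(f"found at index {idx} after {comps} comparisons")   # index 3, 4 comparisons
```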

Average, best, and worst cases
Usually we would like to find the average time to perform an algorithm
However,
Sometimes the average isn't well defined
Example: Sorting an average array
Time typically depends on how out of order the array is
How out of order is the average unsorted array?
Sometimes finding the average is too difficult
Often we have to be satisfied with finding the worst (longest) time required
Sometimes this is even what we want (say, for time-critical operations)
The best (fastest) case is seldom of interest
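For linear search (continuing the earlier sketch), the three cases are easy to work out; the arithmetic below is my own illustration and assumes the target is present and equally likely to be at any position.

```python
# Comparison counts for linear search of an n-element array.
n = 1000
best = 1                 # target is the first element examined
worst = n                # target is the last element (an absent target also costs n)
average = (n + 1) / 2    # target equally likely at each of the n positions
print(f"best = {best}, worst = {worst}, average = {average}")
```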

Big-O and friends
Informal definitions:
Given a complexity function f(n),
Ω(f(n)) is the set of complexity functions that are lower bounds on f(n)
O(f(n)) is the set of complexity functions that are upper bounds on f(n)
Θ(f(n)) is the set of complexity functions that, given the correct constants, correctly describe f(n)
Example: If f(n) = 17n³ + 4n − 12, then
Ω(f(n)) contains 1, n, n², log n, n log n, etc.
O(f(n)) contains n⁴, n⁵, 2ⁿ, etc.
Θ(f(n)) contains n³
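One informal way to see which set a candidate g(n) falls into is to watch the ratio f(n)/g(n) for large n; this is a heuristic of my own for building intuition, not the formal definition. A ratio that grows without bound suggests g is a lower bound (Ω), one that shrinks toward 0 suggests an upper bound (O), and one that settles near a positive constant suggests a tight bound (Θ).

```python
import math

def f(n):
    return 17*n**3 + 4*n - 12

for name, g in [("n",       lambda n: n),
                ("n log n", lambda n: n * math.log(n)),
                ("n**3",    lambda n: n**3),
                ("n**4",    lambda n: n**4)]:
    ratios = [f(n) / g(n) for n in (10, 100, 1000)]
    print(f"g(n) = {name:8s}  f/g at n = 10, 100, 1000: "
          + ", ".join(f"{r:.2f}" for r in ratios))
# f/g blows up for g = n and n log n (lower bounds), shrinks for n**4
# (upper bound), and hovers near 17 for n**3 (tight bound).
```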
Formal definition of Big-O*
A function f(n) is O(g(n)) if
there exist positive constants c and N
such that, for all n > N,
0 < f(n) < cg(n)
That is, if n is big enough (larger than N; we don't care about small problems), then cg(n) will be bigger than f(n)
Example: 5n² + 6 is O(n³) because
0 < 5n² + 6 < 2n³ whenever n > 3 (c = 2, N = 3)
We could just as well use c = 1, N = 6, or c = 50, N = 50
Of course, 5n² + 6 is also O(n⁴), O(2ⁿ), and even O(n²)
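A spot-check of the slide's own constants (c = 2, N = 3) over a finite range of n:

```python
c, N = 2, 3
for n in range(N + 1, 50):
    f = 5*n**2 + 6
    assert 0 < f < c * n**3, f"fails at n = {n}"
print("0 < 5n^2 + 6 < 2n^3 holds for n = 4..49")
```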
Formal definition of Big-Ω*
A function f(n) is Ω(g(n)) if
there exist positive constants c and N
such that, for all n > N,
0 < cg(n) < f(n)
That is, if n is big enough (larger than N; we don't care about small problems), then cg(n) will be smaller than f(n)
Example: 5n² + 6 is Ω(n) because
0 < 20n < 5n² + 6 whenever n > 4 (c = 20, N = 4)
We could just as well use c = 50, N = 50
Of course, 5n² + 6 is also Ω(log n), Ω(n), and even Ω(n²)
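The same kind of spot-check for the Ω example (c = 20, N = 4):

```python
c, N = 20, 4
for n in range(N + 1, 50):
    assert 0 < c * n < 5*n**2 + 6, f"fails at n = {n}"
print("0 < 20n < 5n^2 + 6 holds for n = 5..49")
```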
Formal definition of Big-Θ*
A function f(n) is Θ(g(n)) if
there exist positive constants c₁ and c₂ and N
such that, for all n > N,
0 < c₁g(n) < f(n) < c₂g(n)
That is, if n is big enough (larger than N), then c₁g(n) will be smaller than f(n) and c₂g(n) will be larger than f(n)
In a sense, Θ is the best complexity of f(n)
Example: 5n² + 6 is Θ(n²) because
n² < 5n² + 6 < 6n² whenever n > 5 (c₁ = 1, c₂ = 6)
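And a spot-check of the sandwich bounds for the Θ example (c₁ = 1, c₂ = 6, N = 5):

```python
c1, c2, N = 1, 6, 5
for n in range(N + 1, 50):
    f = 5*n**2 + 6
    assert c1 * n**2 < f < c2 * n**2, f"fails at n = {n}"
print("n^2 < 5n^2 + 6 < 6n^2 holds for n = 6..49")
```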
Graphs

[Three graphs: f(n) is O(g(n)), with cg(n) above f(n) for n > N; f(n) is Ω(g(n)), with cg(n) below f(n) for n > N; f(n) is Θ(g(n)), with f(n) between c₁g(n) and c₂g(n) for n > N]
Points to notice:
What happens near the beginning (n < N) is not important
cg(n) always passes through 0, but f(n) might not (why?)
In the third diagram, c₁g(n) and c₂g(n) have the same shape (why?)
Informal review
For any function f(n), and large enough values of n,
f(n) = O(g(n)) if cg(n) is greater than f(n),
f(n) = Θ(g(n)) if c₁g(n) is less than f(n) and c₂g(n) is greater than f(n),
f(n) = Ω(g(n)) if cg(n) is less than f(n),
...for suitably chosen values of c, c₁, and c₂

The End

The formal definitions were taken, with some slight modifications, from Introduction to Algorithms, by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein

