
UNIT: I FUNDAMENTALS OF ALGORITHMS

THE ANALYSIS OF ALGORITHMS

The analysis of algorithms considers both qualitative and quantitative aspects, with the aim of finding solutions that are economical in their use of computing and human resources and that improve an algorithm's performance. A good algorithm usually possesses the following qualities and capabilities.

1. They are simple but powerful and general solutions.

2. They are user-friendly.

3. They can be easily updated.

4. They are correct.

5. They can be understood on a number of levels.

6. They are economical in their use of computer time, storage and peripherals.

7. They are well documented.

8. They are not dependent on a particular computer to run.

9. They can be used as sub-procedures for other problems.

10. The solution is pleasing and satisfying to its designer.

Computational Complexity

The computational complexity can be measured in terms of space and time required by an
algorithm.

Space Complexity

The space complexity of an algorithm is the amount of memory it needs to run to completion.

The better the time complexity of an algorithm, the faster the algorithm will carry out its work in practice. Apart from time complexity, its space complexity is also important: this is essentially the number of memory cells the algorithm needs. A good algorithm keeps this number as small as possible, too.

There is often a time-space trade-off involved in a problem; that is, it cannot be solved with both little computing time and low memory consumption. One then has to make a compromise and trade computing time for memory consumption or vice versa, depending on which algorithm one chooses and how one parameterizes it.
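To make the trade-off concrete, here is a small illustration (not from the original notes): computing Fibonacci numbers by plain recursion uses almost no extra memory but exponential time, while a memoized version spends O(n) extra memory to bring the time down to O(n). A minimal Java sketch:

import java.util.HashMap;
import java.util.Map;

public class Fibonacci {
    // Plain recursion: almost no extra memory, but exponential time,
    // because the same subproblems are recomputed over and over.
    static long fibSlow(int n) {
        if (n <= 1) return n;
        return fibSlow(n - 1) + fibSlow(n - 2);
    }

    // Memoized recursion: O(n) extra memory buys O(n) running time.
    static final Map<Integer, Long> memo = new HashMap<>();

    static long fibFast(int n) {
        if (n <= 1) return n;
        Long cached = memo.get(n);
        if (cached != null) return cached;
        long result = fibFast(n - 1) + fibFast(n - 2);
        memo.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fibFast(50)); // instant; fibSlow(50) would take hours
    }
}

Which variant is preferable depends on exactly the compromise described above: how scarce memory is compared to computing time.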

Amortized analysis

Sometimes we find the statement in a manual that an operation takes amortized time O(f(n)). This means that the total time for n such operations is bounded asymptotically from above by a function g(n) with f(n) = O(g(n)/n). So the amortized time is (a bound for) the average time of an operation in the worst case.

The special case of an amortized time of O(1) signifies that a sequence of n such operations takes only O(n) time in total. One then refers to this as constant amortized time.

Such statements are often the result of an amortized analysis: not every one of the n operations takes equally much time; some of the operations are running-time intensive and do a lot of "pre-work" (or "post-work"). This, however, pays off: as a result of the pre-work done, the remaining operations can be carried out so fast that a total time of O(g(n)) is not exceeded. So the investment in the pre-work or post-work amortizes itself.
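A classic example (added here for illustration; it is not part of the original notes) is appending to a dynamic array that doubles its capacity when full. A single append occasionally costs O(n) for copying, but any sequence of n appends costs O(n) in total, so the amortized time per append is O(1). A minimal Java sketch:

public class DynamicArray {
    private int[] data = new int[1];
    private int size = 0;

    // Append one element. Usually O(1); occasionally O(n) when the array
    // is full and must be copied into a new array twice as large.
    public void append(int value) {
        if (size == data.length) {
            int[] bigger = new int[2 * data.length];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value;
    }

    // Over n appends, the copying ("pre-work") costs 1 + 2 + 4 + ... < n
    // element moves in total, so the amortized cost per append is O(1).
}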

Time Complexity

The time complexity of an algorithm is the amount of time it needs to run to completion. The time taken by a program is the sum of its compile time and its run time. To make a quantitative measure of an algorithm's performance, the computational model must capture the essence of the computation while at the same time being divorced from any programming language.

How long does this sorting program run? On large inputs (that is, many strings) it may take a very long time until the program has completed its work and gives a sign of life again. Sometimes it makes sense to be able to estimate the running time before starting a program; nobody wants to wait years for a sorted phone book! Obviously, the running time depends on the number n of strings to be sorted. Can we find a formula for the running time which depends on n?

Having a close look at the program, we notice that it consists of two nested for-loops. In both loops the variables run from 0 to n, but the inner variable starts right where the outer one currently stands. Inside the two loops reside an if with a comparison and some assignments that are not necessarily executed. A good measure for the running time is the number of executed comparisons. In the first iteration n - 1 comparisons take place, in the second n - 2, then n - 3, and so on. So (n - 1) + (n - 2) + ... + 1 comparisons are performed altogether. According to the well-known Gaussian sum formula, this is exactly ½·(n - 1)·n comparisons. Visualized as a staircase-shaped area of dots, the number of comparisons executed corresponds to approximately half the area of a square with side length n, so it amounts to approximately ½·n².
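The sorting program itself is not reproduced in these notes; the following Java sketch is a plausible version of sorting by minimum search that also counts the executed comparisons, so the formula ½·(n - 1)·n can be checked directly:

public class MinSort {
    // Sorts a by repeatedly searching for the minimum of the unsorted rest;
    // returns the number of comparisons executed.
    static long sortByMinimumSearch(String[] a) {
        long comparisons = 0;
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;                          // position of current minimum
            for (int j = i + 1; j < a.length; j++) {
                comparisons++;
                if (a[j].compareTo(a[min]) < 0) min = j;
            }
            String tmp = a[i]; a[i] = a[min]; a[min] = tmp;  // swap into place
        }
        return comparisons;
    }

    public static void main(String[] args) {
        String[] names = {"eve", "bob", "dan", "alice", "carol"};
        System.out.println(sortByMinimumSearch(names)); // prints 10 = ½·(5 - 1)·5
    }
}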

Running time analysis of sorting by minimum search

How should this expression be judged? Is this good or bad? If we double the number of strings to be sorted, the computing time quadruples! If we increase it ten-fold, it takes even 100 = 10² times longer until the program has terminated! All this is caused by the expression n². One says: sorting by minimum search has quadratic complexity. This gives us a first indication that this method is unsuitable for large amounts of data, because it simply takes far too much time.

So it would be a fallacy here to say: "For a lot of money, we'll simply buy a machine which is twice as fast; then we can sort twice as many strings (in the same time)." With quadratic complexity, a machine twice as fast only sorts about √2 ≈ 1.4 times as many strings in the same time. Theoretical running-time considerations offer protection against such fallacies.

The number of (machine) instructions which a program executes during its run is called its time complexity in computer science. This number depends primarily on the size of the program's input, that is, approximately on the number of strings to be sorted (and their length), and on the algorithm used. So, approximately, the time complexity of the program "sort an array of n strings by minimum search" is described by the expression c·n².

Here c is a constant which depends on the programming language used, on the quality of the compiler or interpreter, on the CPU, on the size of the main memory and the access time to it, on the knowledge of the programmer, and, last but not least, on the algorithm itself, which may require simple but also time-consuming machine instructions. (For the sake of simplicity we have absorbed the factor ½ into c here.) So while one can make c smaller by improving external circumstances (and thereby often investing a lot of money), the term n² always remains unchanged.

Analysis of Algorithms

An algorithm is a step-by-step procedure for solving a problem in a finite amount of time; it transforms an input into an output.

Running Time (§1.1)


Most algorithms transform input objects into output objects, and the running time of an algorithm typically grows with the input size. Average-case time is often difficult to determine, so we focus on the worst-case running time. This is easier to analyze and crucial to applications such as games, finance and robotics.

Experimental Studies
Write a program implementing the algorithm. Run the program with inputs of varying size and composition. Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time. Plot the results.
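A minimal Java sketch of such an experiment (the input sizes, the random seed and the use of Arrays.sort as the algorithm under test are arbitrary choices for illustration):

import java.util.Arrays;
import java.util.Random;

public class TimingExperiment {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Inputs of varying size: 100,000 up to 1,600,000 elements.
        for (int n = 100_000; n <= 1_600_000; n *= 2) {
            int[] a = rnd.ints(n).toArray();          // input of size n
            long start = System.currentTimeMillis();
            Arrays.sort(a);                           // algorithm under test
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(n + "\t" + elapsed + " ms");  // points to plot
        }
    }
}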

Limitations of Experiments
It is necessary to implement the algorithm, which may be difficult.
Results may not be indicative of the running time on other inputs not included in the experiment.
In order to compare two algorithms, the same hardware and software environments must be used.
Theoretical Analysis
Uses a high-level description of the algorithm instead of an implementation
Characterizes running time as a function of the input size, n.
Takes into account all possible inputs
Allows us to evaluate the speed of an algorithm independent of the hardware/software
environment
Pseudocode
High-level description of an algorithm
More structured than English prose
Less detailed than a program
Preferred notation for describing algorithms
Hides program design issues
Algorithm arrayMax(A, n)
  Input: array A of n integers
  Output: maximum element of A

  currentMax ← A[0]
  for i ← 1 to n - 1 do
    if A[i] > currentMax then
      currentMax ← A[i]

  return currentMax
Example: find max element of an array
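The pseudocode translates almost line for line into Java; a sketch added here for illustration (assuming, as the pseudocode does, that n ≥ 1):

public class ArrayMax {
    // Returns the maximum of the first n elements of A.
    static int arrayMax(int[] A, int n) {
        int currentMax = A[0];
        for (int i = 1; i <= n - 1; i++) {
            if (A[i] > currentMax) {
                currentMax = A[i];
            }
        }
        return currentMax;
    }

    public static void main(String[] args) {
        int[] a = {3, 1, 4, 1, 5, 9, 2, 6};
        System.out.println(arrayMax(a, a.length)); // prints 9
    }
}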
Control flow
if … then … [else …]
while … do …
repeat … until …
for … do …
Indentation replaces braces
Method declaration
Algorithm method (arg [, arg…])
Input …
Output …
Method call
var.method (arg [, arg…])
Return value
return expression
Expressions
← Assignment (like = in Java)
= Equality testing (like == in Java)
n² Superscripts and other mathematical formatting allowed
The Random Access Machine
(RAM) Model
A CPU
A potentially unbounded bank of memory cells, each of which can hold an arbitrary
number or character

Memory cells are numbered and accessing any cell in memory takes unit time.
Primitive Operations
o Basic computations performed by an algorithm
o Identifiable in pseudocode
o Largely independent from the programming language
o Exact definition not important
o Assumed to take a constant amount of time in the RAM model
o Examples:
o Evaluating an expression
o Assigning a value to a variable
o Indexing into an array
o Calling a method
o Returning from a method
Counting Primitive Operations

By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size.
Algorithm arrayMax(A, n)              # operations
  currentMax ← A[0]                     2
  for i ← 1 to n - 1 do                 2 + n
    if A[i] > currentMax then           2(n - 1)
      currentMax ← A[i]                 2(n - 1)
    { increment counter i }             2(n - 1)
  return currentMax                     1
                              Total     7n - 1
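Adding up the counts line by line confirms the total: 2 + (2 + n) + 2(n - 1) + 2(n - 1) + 2(n - 1) + 1 = 7n - 1.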
Estimating Running Time
Algorithm arrayMax executes 7n - 1 primitive operations in the worst case.
Define:
a = Time taken by the fastest primitive operation

b = Time taken by the slowest primitive operation
Let T(n) be worst-case time of arrayMax.
Then
a(7n - 1) ≤ T(n) ≤ b(7n - 1)
Hence, the running time T(n) is bounded by two linear functions
Growth Rate of Running Time
Changing the hardware/software environment affects T(n) by a constant factor but does not alter the growth rate of T(n). The linear growth rate of the running time T(n) is an intrinsic property of the algorithm arrayMax.

Asymptotic Notation

Asymptotic notations are methods used to estimate and represent the efficiency of an algorithm using simple formulas. They are useful for separating algorithms that lead to different amounts of work for large inputs.

Comparing or classifying functions while ignoring constant factors and small inputs is called the asymptotic growth rate, the asymptotic order, or simply the order of functions. The complexity of an algorithm is usually represented in O, o, Ω and Θ notations.

Big - oh notation (O)

This is a standard notation developed to represent functions which bound the computing time of algorithms. It is used to define the worst-case running time of an algorithm and is concerned with very large values of N.

Definition: T(N) = O(f(N)) if there are positive constants C and n0 such that T(N) ≤ C·f(N) when N ≥ n0.
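For example, T(N) = 3N + 2 is O(N): choosing C = 4 and n0 = 2 gives 3N + 2 ≤ 4N whenever N ≥ 2. The constants are not unique; any C > 3 works together with a suitable n0.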

BIG - OMEGA NOTATION (Ω)

This notation is used to describe the best case running time of algorithms and concerned
with very large values of N.

Definition: T(N) = Ω(f(N)) if there are positive constants C and n0 such that T(N) ≥ C·f(N) when N ≥ n0.

BIG - THETA NOTATION

This notation is used to describe the average case running time of algorithms and
concerned with very large values of n.

Definition: T(N) = Θ(F(N)) if there are positive constants C1, C2 and n0 such that C1·F(N) ≤ T(N) ≤ C2·F(N) for all N ≥ n0; equivalently, T(N) = O(F(N)) and T(N) = Ω(F(N)).
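For example, T(N) = 3N² + 2N is Θ(N²), since 3N² ≤ 3N² + 2N ≤ 5N² for all N ≥ 1; the constants C1 = 3, C2 = 5 and n0 = 1 witness the definition.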

Little - Oh Notation (o)

This notation describes an upper bound that is not asymptotically tight. Definition: T(N) = o(f(N)) if for every positive constant C there is an n0 such that T(N) ≤ C·f(N) when N ≥ n0; informally, T(N) grows strictly more slowly than f(N).

Basic Asymptotic Efficiency Classes

Computing Time    Name

O(1)              constant
O(log n)          logarithmic
O(n)              linear
O(n log n)        n-log-n
O(n²)             quadratic
O(n³)             cubic
O(2ⁿ)             exponential
O(n!)             factorial

WORST - CASE, BEST - CASE AND AVERAGE - CASE EFFICIENCIES

Worst - Case - Efficiency

The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input of size n for which the algorithm runs the longest among all possible inputs of that size.

Best - Case Efficiency

The best - case efficiency of an algorithm is its efficiency for the best case input of size n,
which is an input of size n for which the algorithm runs the fastest among all possible
inputs of that size.

Average - Case Efficiency

The average-case efficiency of an algorithm is its efficiency for a random (typical) input of size n; determining it requires some assumptions about the possible inputs of size n.

For example, let us consider sequential search

ALGORITHM

SequentialSearch(A[0..n-1], K)

// Input: An array A[0..n-1] and a search key K.
// Output: The index of the first element of A that matches K, or -1 if there are no matching elements.

i ← 0
while i < n and A[i] ≠ K do
  i ← i + 1
if i < n return i
else return -1

Here, the best-case efficiency is O(1), which occurs when the first element is itself the search key, and the worst-case efficiency is O(n), which occurs when the last element is the search key or the key does not appear in the array at all.
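A direct Java translation of the algorithm (added for illustration, using int keys for concreteness):

public class SequentialSearch {
    // Returns the index of the first element of A equal to key,
    // or -1 if no element matches.
    static int search(int[] A, int key) {
        int i = 0;
        while (i < A.length && A[i] != key) {
            i++;
        }
        return (i < A.length) ? i : -1;
    }

    public static void main(String[] args) {
        int[] a = {7, 3, 9, 3};
        System.out.println(search(a, 9));  // prints 2: key found early
        System.out.println(search(a, 5));  // prints -1: worst case, key absent
    }
}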

