
Dr. Issam Alhadid
Modification date: 23-2-2019
 Complexity (Worst, Best, and Average Cases)
 Practical vs. Impractical (Unrealistic) Complexities
 Mathematical Functions’ Properties:
◦ Floors & Ceilings
◦ Polynomials
◦ Exponents
◦ Logarithms
◦ Summation
◦ Factorials
 Complexity is the number of steps required
to solve a problem.
 The goal is to find the best algorithm: one that solves
the problem in the fewest steps.
 Complexity of Algorithms
◦ The size of the problem is a measure of the
quantity of the input data n.
◦ The time needed by an algorithm, expressed as a
function of the size of the problem (it solves), is
called the (time) complexity of the algorithm.
 Time complexity is the computational
complexity that describes the amount of time it
takes to run an algorithm.
 Computational complexity is the amount of
resources required to run an algorithm.
 Computational complexity depends on
the computer being used, so time is
generally expressed as the number of needed
elementary operations; these are assumed to
take a constant time on a given computer, and to
change only by a constant factor when one changes
computers.
 Running Time: Number of primitive steps that
are executed.
◦ most statements roughly require the same amount
of time
y=m*x+b
 c = 5 / 9 * (t - 32 )
 z = f(x) + g(y)
 Each algorithm performs a sequence of basic
operations:
◦ Arithmetic: (low + high)/2
◦ Comparison: if ( x > 0 ) …
◦ Assignment: temp = x
◦ Branching: while ( true ) { … }
◦ …
 Idea: count the number of basic operations
performed on the input.
 Difficulties:
◦ Which operations are basic?
◦ Not all operations take the same amount of time.
◦ Operations take different times on different
hardware or with different compilers.
 Let T(n) denote the number of operations
required by an algorithm to solve a given
class of problems.
 Often T(n) depends on the input, in such
cases one can talk about:
◦ Worst-case complexity,
◦ Best-case complexity,
◦ Average-case complexity of an algorithm.
 Alternatively, one can determine bounds
(upper, lower, or tight) on T(n).
 Worst-Case Running Time: the longest time for
any input size of n,
◦ provides an upper bound on running time for any input.
 Best-Case Running Time: the shortest time for
any input size of n,
◦ provides lower bound on running time for any input.
 Average-Case Behavior: the expected
performance averaged over all possible inputs,
◦ it is generally better than worst case behavior, but
sometimes it’s roughly as bad as worst case,
◦ difficult to compute.
 Worst-case running time:
◦ when x is not in the original array A,
◦ in this case, while loop needs 2(n + 1) comparisons
+ c other operations.
◦ So, T(n) = 2n + 4 + c → Linear complexity
 Best-case running time:
◦ when x is found in A[1],
◦ in this case, while loop needs 2 comparisons + c
other operations.
◦ So, T(n) = 4 + c → Constant complexity
Pr : probability
 Average-case running time:
◦ assume x is equally likely to equal A[1], A[2],
…, A[n],
◦ in this case, Pr[x = A[i]] = 1/n, for 1 ≤ i ≤ n;
then the average-case running time is
T(n) = Σ (i = 1 to n) Pr[x = A[i]] · i = (1/n) · (1 + 2 + … + n)
 Average-case running time (cont.…)
 Numbers of the form 1 + 2 + 3 + 4 + … + n are
called triangular numbers, and can be
expressed using the formula:

1 + 2 + 3 + … + n = n(n + 1)/2

 This formula was known to the Pythagoreans.
 Example: 1+2+3+4+5 = 15; also, 5(5+1)/2 = 15
 Therefore, T(n) = (1/n) · n(n+1)/2 = (n+1)/2
 So, T(n) = (n + 1)/2 → Linear complexity
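The closed form for triangular numbers is easy to spot-check; a minimal Python sketch that cross-checks it against the explicit sum:

```python
def triangular(n):
    """Closed form for the triangular number 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# cross-check the closed form against the explicit sum for small n
for n in range(1, 101):
    assert triangular(n) == sum(range(1, n + 1))

print(triangular(5))  # 15, matching the example above
```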

What is the T(n) of the Linear (Sequential)
Search:
1. Found = False
2. Loc = 1
3. While Loc <= n And Found == False Do
A. If Item = A[Loc]
B. Found = True
Else
C. Loc = Loc + 1
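A runnable Python sketch of this pseudocode (0-based indexing; the function and variable names are mine), which also returns the number of element comparisons so the best and worst cases can be checked directly:

```python
def linear_search(a, item):
    """Sequential search: return (index of item or -1, comparison count)."""
    comparisons = 0
    for loc, value in enumerate(a):
        comparisons += 1          # step A: If Item = A[Loc]
        if value == item:
            return loc, comparisons
    return -1, comparisons        # item not found: n comparisons made
```

Best case: the item sits in the first slot, so one comparison suffices. Worst case: the item is absent, and all n elements are compared.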
 What is the T(n) of the Linear (Sequential)
Search:

Why is the cost of step (B) counted as 0?
(Hint: step (B) executes at most once per search, and
not at all in the worst case, when x is not found.)
If it is not 0, what is the cost of step (C),
and what is the final value of T(n)?
T(n) = 3n + 3
The running time of sequential search is
proportional to the number of items: the
algorithm is said to take linear time,
which is why it is called linear search.
What is the T(n) of the following algorithm:
1. Read n
2. Sum = 0
3. i = 1
4. While i <= n do
A. Read Number
B. Sum = Sum + Number
C. i = i + 1
5. Calculate Mean = Sum / n
T(n) = 4n + 5
The running time is proportional to the
number of items: the algorithm is said to
take linear time.
What are the best-case and worst-case
running times?
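A runnable Python version of the averaging pseudocode above (0-based; the function name is my own):

```python
def mean_of_inputs(numbers):
    """Python version of the pseudocode above: sum n numbers, then divide.

    The loop body (steps A-C) runs n times, so the running time grows
    linearly with n, matching T(n) = 4n + 5 under the slide's counting.
    """
    total = 0.0
    for number in numbers:        # steps A-C, repeated n times
        total += number
    return total / len(numbers)   # step 5: Mean = Sum / n
```

For example, `mean_of_inputs([2, 4, 6])` returns 4.0.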
 What is the time complexity of the following
algorithm:
 INPUT: a sequence of n numbers,
◦ T is an array of n elements
◦ T[1], T[2], …, T[n]
 OUTPUT: the smallest number among them
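The algorithm itself is not reproduced here, so as a sketch, assuming the standard single left-to-right scan consistent with this input/output specification:

```python
def find_min(t):
    """Return the smallest of n numbers with a single left-to-right scan."""
    smallest = t[0]               # assume the first element is smallest
    for value in t[1:]:           # n - 1 comparisons in total
        if value < smallest:
            smallest = value
    return smallest
```

One pass over T[1..n] makes exactly n - 1 comparisons, so the running time is linear.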
 For very large input sizes, it is the rate of
growth, or order of growth, that matters
asymptotically.
 We can ignore the lower-order terms, since
they are relatively insignificant for very large n.
 We can also ignore the leading term’s constant
coefficient, since it is not as important for
the rate of growth in computational efficiency for
very large n.

 Functions of n with a higher order of growth are
normally considered less efficient, since their
running time increases faster as n grows.
 By dropping the less significant terms and the
constant coefficients, we can focus on the
important part of an algorithm's running
time, its rate of growth, without getting
involved in details that complicate our
understanding. When we drop the constant
coefficients and the less significant terms, we
use asymptotic notation.
 By now you should have an intuitive feel for
asymptotic (big-O) notation:
 What does O(n) running time mean? O(n²)?
O(n lg n)?
 Is O(n²) too much time?
 Is the algorithm practical?
At a CPU speed of 10⁹ instructions/second.
 Exponential functions increase rapidly, e.g.,
2ⁿ doubles whenever n is increased by 1.
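To make "practical" concrete, a small back-of-the-envelope script at the stated 10⁹ instructions/second (the input sizes are my own illustrative choices):

```python
import math

SPEED = 10**9          # instructions per second, as on the slide
n = 10**6              # an illustrative input size

# seconds needed for common growth rates at n = 10**6
print("n        :", n / SPEED, "s")                  # 0.001 s
print("n log2 n :", n * math.log2(n) / SPEED, "s")   # about 0.02 s
print("n**2     :", n**2 / SPEED, "s")               # 1000 s, about 17 min
# 2**n is hopeless even for modest n: already about 36 years at n = 60
print("2**60    :", 2**60 / SPEED, "s")
```

So O(n²) may still be usable at n = 10⁶, while exponential time is impractical for all but tiny inputs.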
 For any real number x, we denote the greatest
integer less than or equal to x by ⎣x⎦,
◦ read “the floor of x”
 For any real number x, we denote the least
integer greater than or equal to x by ⎡x⎤,
◦ read “the ceiling of x”
 For all real x, (example for x=4.2)
◦ x – 1 < ⎣x⎦ ≤ x ≤ ⎡x⎤ < x + 1
 For any integer n,
◦ ⎣n/2⎦ + ⎡n/2⎤ = n
◦ for odd n, ⎣n/2⎦ + 1 = ⎡n/2⎤ (for even n, ⎣n/2⎦ = ⎡n/2⎤)
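These identities are easy to spot-check in Python with the standard math module:

```python
import math

x = 4.2
assert math.floor(x) == 4 and math.ceil(x) == 5
# x - 1 < floor(x) <= x <= ceil(x) < x + 1
assert x - 1 < math.floor(x) <= x <= math.ceil(x) < x + 1

# floor(n/2) + ceil(n/2) == n for every integer n;
# floor(n/2) + 1 == ceil(n/2) only when n is odd
for n in range(-10, 11):
    assert math.floor(n / 2) + math.ceil(n / 2) == n
    if n % 2 != 0:
        assert math.floor(n / 2) + 1 == math.ceil(n / 2)
print("all identities hold")
```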
Θ (Theta) denotes an asymptotically tight bound on the
growth of a function. In terms of algorithms, we would like to be able to
say (when it is true) that a given characteristic, such as running time, grows no
better and no worse than a given function.
 Why do we need summation formulas?
For computing the running times of iterative
constructs (loops). (CLRS – Appendix A)
Example: Maximum Subvector
Given an array A[1…n] of numeric values (positive,
zero, or negative), determine the
subvector A[i…j] (1 ≤ i ≤ j ≤ n) whose sum of
elements is maximum over all subvectors.

Example: A = [1, -2, 2, 2]; the maximum subvector is [2, 2], with sum 4.
MaxSubvector(a, n) {
    maxsum = 0;
    for i = 1 to n {
        for j = i to n {
            sum = 0;
            for k = i to j {
                sum += a[k];
            }
            maxsum = max(sum, maxsum);
        }
    }
    return maxsum;
}
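A direct Python translation of the pseudocode (0-based indexing; behavior unchanged):

```python
def max_subvector(a):
    """Try every (i, j) pair, recomputing each subvector sum from scratch.

    The three nested loops make the running time cubic in n; an empty
    (zero-length) subvector counts as sum 0, as in the pseudocode.
    """
    n = len(a)
    maxsum = 0
    for i in range(n):
        for j in range(i, n):
            sub_sum = 0
            for k in range(i, j + 1):   # the k-loop from the pseudocode
                sub_sum += a[k]
            maxsum = max(sub_sum, maxsum)
    return maxsum

print(max_subvector([1, -2, 2, 2]))  # 4, from the subvector [2, 2]
```

Note the innermost loop is what makes this Θ(n³); keeping a running sum instead would reduce it to Θ(n²).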
