
DEPARTMENT
OF
COMPUTER SCIENCE AND ENGINEERING

ANALYSIS AND DESIGN OF ALGORITHMS


LAB MANUAL

IV SEM - 07CS46

R.V. Vidyaniketan Post, Mysore Road,


Bangalore - 560 059.
TABLE OF CONTENTS

1. Instructions
2. Question Bank
3. Viva-voce Questions
4. Introduction to vi Editor
5. Lab Schedule
6. Algorithms

INSTRUCTIONS

General Instructions:

1. Students are required to:

 Come prepared for viva.
 Read theory about the program to be executed.
 Sign in the login book.
 Use the same login assigned to them.
 Follow the lab exercise cycles as instructed by the department.
 Get the signature of the concerned staff after executing the program.
 Write the index and fill up all the details in the record.
 Submit the completed lab record and get it evaluated by the faculty
in charge.

2. Break-up of marks:

Preparedness and datasheet write-up = 01
Executing with proper input & output, record write-up = 07
Viva = 02
------------------------------------------------------------
Total = 10

Instructions for writing datasheets:

Students should come prepared with:

 At least two programs with comments (neatly written) per lab slot.
 Minimum three sets of input along with expected output for each
program.

Instructions for writing record:

 Program to be written on the right-hand side of the record.
 Algorithm to be written on the left-hand side of the record.
 Three sets of input-output to be written for each program.
 Stick the datasheets properly to the left-hand side of the record for
every program.
 Plot the time complexity graphs on graph sheets only.
 Consider at least five nodes for graph-related problems.

QUESTION BANK

Implement the following using C/C++:

1. Write a program providing an option to search a given key using the
following techniques:
a) Linear search
b) Recursive binary search
Find the time complexity and display error messages, if any.

2. Write a program to sort a given set of elements using the Heap sort
method. Find the time complexity.

3. Write a program to sort a given set of elements using the Merge sort
method and find the time required to sort the elements.

4. Write a program to check whether a given graph is connected or not
using the DFS method.

5. Write a program to sort a given set of elements using Selection sort
and find the time required to sort the elements.

6. Write a program to obtain the topological ordering of vertices in a
given digraph using the following techniques:
a) DFS traversal stack
b) Vertex deletion method

7. Write a program to sort a given set of elements using the Insertion
sort method and find the time complexity.

8. Write a program to implement the 0/1 Knapsack problem using
dynamic programming.

9. Write a program to find the shortest paths using Dijkstra's algorithm
for a weighted connected graph.

10. Write a program to sort a given set of elements using the Quick sort
method.

11. Write a program to find the minimum cost spanning tree of a given
undirected graph using Kruskal's algorithm.

12. Write a program to print all the nodes reachable from a given
starting node in a digraph using the Breadth First Search method.

13. Write a program to implement the all-pairs shortest paths problem
using Floyd's algorithm.

14. Write a program to find a subset of a given set S = {s1, s2, ..., sn} of
n positive integers whose sum is equal to a given positive integer d.
For example, if S = {1, 2, 5, 6, 8} and d = 9, there are two solutions,
{1, 2, 6} and {1, 8}. A suitable message is to be displayed if the given
problem instance does not have a solution.

15. a) Write a program to implement Horspool's algorithm for string
matching.
b) Write a program to find the binomial coefficient using dynamic
programming.

16. Write a program to find the minimum cost spanning tree of a given
undirected graph using Prim's algorithm.

17. a) Write a program to print all the nodes reachable from a given
starting node in a digraph using the Depth First Search method.
b) Write a program to compute the transitive closure of a given
directed graph using Warshall's algorithm.

18. Write a program to implement the N-Queens problem using
backtracking.

VIVA-VOCE QUESTIONS

1. Define asymptotic notation. Explain the various notations used.


2. What is the need for analyzing the complexity?
3. Briefly explain the Big-Oh notation and the asymptotic growth
rate.
4. Explain the difference between ___ and ___; ___ and ___; ___ and
___; and ___ and ___.

5. How does one calculate the running time of an algorithm?


6. How can we compare two different algorithms?
7. How do we know if an algorithm is optimal?
8. Give a characterization, in big-Oh terms, of the following loop:

p = 1;
for (i=0; i < 2*n; i++)
p = p*i;
9. Give a characterization, in big-Oh terms, of the following loop:

s = 0;
for (i=0; i < 2*n; i++)
for (j=0; j < i; j++)
s = s + i;
10. Give a characterization, in big-Oh terms, of the space complexity
of the following loop:

for (i=0; i < N; i++)


a[i] = malloc((i+1)*sizeof(blah));

11. Suppose that the running time of one algorithm is ___ and that the
running time of another algorithm is ___. What does this say about
their relative performance?
12. Suppose that the running time of one algorithm is about ___ and
that the running time of another algorithm is about ___. What does
this say about their relative performance?
13. Suppose we are told the best case complexity of an algorithm is
___. What can we say about the average case?
14. Suppose we are told the average case complexity of an algorithm
is ___. What can we say about the worst case?
15. Suppose we are told the worst case complexity of an algorithm is
___ and the best case is ___. What can we say about the average case?
16. Suppose we are told the best case complexity of an algorithm is
___ and the worst case is ___. What can we say about the average case?
17. Suppose we are told the best case complexity of an algorithm is
___ and the worst case is ___. What can we say about the average case?
18. Draw the binary search tree that results from inserting into an
initially empty tree records with the keys E A S Y Q U E S T I O N.
Then show the tree after deletion of the first S.
19. Which of selection sort and insertion sort runs faster on a
file which is already sorted? Which is faster if the file is in
reverse order?
20. How many comparisons are used by Shellsort to 7-sort,
then 3-sort the following keys:

E A S Y Q U E S T I O N
21. Show how simple quick sort sorts the following keys:

10 14 12 28 15 23 15 24 28 15 26 30
What is the maximum stack size?

22. Show how natural merge sort sorts the keys: 3,5,1,4,7,2,8,3.
23. Explain why a minimum cost subgraph must be a tree.
24. Explain the following representations with respect to the
graphs shown in figure
i. adjacency matrix
ii. packed adjacency list
iii. linked adjacency list


25. Solve the recurrence relation

T(n) = aT(n/c) + bn   for n > 1
T(n) = b              for n = 1

26. Which process determines the amount of work done by


Kruskal's algorithm? How much work is done in the worst case?
Explain.
27. In Kruskal's method for finding a minimum spanning tree,
how does the algorithm know when the addition of an edge will
generate a cycle?
28. Explain the essential difference between breadth first
search and Depth first search.
29. Briefly explain the depth first search of a graph and
illustrate it on the graph shown in fig below. Show the tree and
back edges.

30. Briefly explain an algorithm to find connected components


of a graph with an example
31. What is meant by greedy criterion?
32. Explain breadth first search algorithm. Find breadth first
search for the following graph.


33. What are the different measures to express the


greediness? Consider the knapsack instance.
No of objects (n) =3
Capacity of Knapsack (M) =20
Profits [p1, p2, p3] = [25, 24, 15]
Weights [w1, w2, w3] = [18, 15, 10] Find
optimal solution.
34. Show how minimum spanning tree is found using Kruskal's
algorithm. Using the same, Find minimum spanning tree for the
following graph.


35. Explain how the merge sort can be viewed as a recursive


application of the divide and conquer methodology. Trace its
applications to the following data set.
9, 4, 3, 8, 6, 2, 1, 5, 7
36. Explain the concept of 2-3 tree. How can keys be inserted
into it?
37. Explain the concept of hashing as a method of
implementing dictionaries. What are the two main methods of
resolving collisions?

38. Explain the merge sort and obtain its time and space
complexity.
39. What is a Huffman tree? Explain an algorithm to construct
the Huffman tree.
40. Explain the concept of decision trees for sorting algorithms.
41. Explain the Prim's algorithm to construct a minimum cost
spanning tree with respect to the following graph


42. Explain Warshall's algorithm to find the transitive


closure of a directed graph. Apply it to the following graph.
a b c d
a 0 1 0 0
b 0 0 0 1
c 0 0 0 0
d 1 0 1 0
43. State and explain Dijkstra's algorithm to find single source
shortest paths.
44. What is backtracking? Explain its usefulness with the help of an
algorithm. What are the specific areas of its application?
45. Give an algorithm to obtain the maximum clique of a
graph.
46. Briefly explain the method of "branch and bound".
47. Give a method for obtaining a "bound" in the travelling salesman
problem. Illustrate it on the matrix below.

v1 v2 v3 v4
V1 ∞ -5 3 8
V2 4 ∞ 6 5
V3 -2 8 ∞ 10
V4 8 13 14 ∞

Show the matrix after the edge (v3, v2) is selected and compute
the bound for the resulting matrix.
48. Briefly explain the concepts of polynomial reducibility and NP-
completeness.
49. Suggest a high level algorithm to solve a 0/1 Knapsack
problem using backtracking.
50. Explain Strassen's matrix multiplication method.

INTRODUCTION TO VI EDITOR

Vi (visual) is a display-oriented interactive text editor. When using
vi, the screen of your terminal acts as a window into the file which you
are editing. Changes which you make to the file are reflected in what
you see.

Given below is a list of important basic commands used in the editor:

Entering command mode
[Esc]       exit editing mode; keyboard keys are now interpreted as commands.

Moving the cursor
h (or left arrow)    move the cursor left.
l (or right arrow)   move the cursor right.
j (or down arrow)    move the cursor down.
k (or up arrow)      move the cursor up.
[Ctrl] f             move the cursor one page forward.
[Ctrl] b             move the cursor one page backward.
^                    move the cursor to the first non-white character in the current line.
$                    move the cursor to the end of the current line.
G                    go to the last line in the file.
nG                   go to line number n.
[Ctrl] G             display the name of the current file and the cursor position in it.

Entering editing mode
i    insert new text before the cursor.
a    append new text after the cursor.
o    start to edit a new line after the current one.
O    start to edit a new line before the current one.

Replacing characters, lines and words
r    replace the current character (does not enter edit mode).
s    enter edit mode and substitute the current character by several ones.
cw   enter edit mode and change the word after the cursor.
C    enter edit mode and change the rest of the line after the cursor.

Copying and pasting
yy   copy (yank) the current line to the copy/paste buffer.
p    paste the copy/paste buffer after the current line.
P    paste the copy/paste buffer before the current line.

Deleting characters, words and lines
x    delete the character at the cursor location.
dw   delete the current word.
D    delete the remainder of the line after the cursor.
dd   delete the current line.

Repeating commands
.    repeat the last insertion, replacement or delete command.

Searching for strings
/string   find the first occurrence of string after the cursor.
?string   find the first occurrence of string before the cursor.
n         find the next occurrence in the last search.

Replacing strings
:n,ps/str1/str2/g   between line numbers n and p, substitute all (g: global)
                    occurrences of str1 by str2.
:1,$s/str1/str2/g   in the whole file ($: last line), substitute all occurrences
                    of str1 by str2.

Miscellaneous
[Ctrl] l   redraw the screen.
J          join the current line with the next one.

Exiting and saving
ZZ        save the current file and exit vi.
:w        write (save) to the current file.
:w file   write (save) to the file named file.
:wq       quit vi after saving the changes.
:q!       quit vi without saving changes.

Applying a command several times - examples
5j     move the cursor 5 lines down.
30dd   delete 30 lines.
4cw    change 4 words from the cursor.
1G     go to the first line in the file.

LAB SCHEDULE

Students are required to strictly follow the following order:

Lab Session   Programs to be completed
1             Introduction
2             1, 5
3             3, 10
4             7, 4
5             17(a), 6
6             2, 15(b)
7             8, 13
8             17(b), 12
9             11, 15(a)
10            16, 9
11            14, 18

ALGORITHMS

1. Write a program providing an option to search a given key using the
following techniques:
a) Linear search
b) Recursive binary search
Find the time complexity and display error messages, if any.

Linear Search

 This algorithm works by comparing a search key with the elements of
the given array.
 If they match, the search is successful; otherwise it is unsuccessful.

Algorithm: LinearSearch (A[0…n-1], key, i)
//Searches for a key element using the recursive linear search method; the initial call
//is made with i = 0
//Input: An array A[0…n-1], a key and the current index i
//Output: An index of the array's element that is equal to key, or -1 if there is no such
//element
{
if i=n
return -1
if key=A[i]
return i
else
return LinearSearch(A[0…n-1], key, i+1)
}
Complexity: The complexity is obtained by setting up and solving the
recurrence relation for the above algorithm. The complexity depends
on the size and the type of input.
1. Best Case : Ω(1)
2. Worst Case : Ө (n)
3. Average Case : a) Successful search : (n+1)/2
b) Unsuccessful search: n
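A possible C sketch of the recursive linear search above; the sample array in main is only illustrative:

#include <stdio.h>

/* Recursive linear search: returns the index of key in a[i..n-1], or -1. */
int linear_search(int a[], int i, int n, int key)
{
    if (i == n)
        return -1;                        /* the array is exhausted          */
    if (a[i] == key)
        return i;                         /* key found at position i         */
    return linear_search(a, i + 1, n, key);
}

int main(void)
{
    int a[] = {7, 3, 9, 1, 5};
    printf("index of 9 = %d\n", linear_search(a, 0, 5, 9));   /* prints 2  */
    printf("index of 4 = %d\n", linear_search(a, 0, 5, 4));   /* prints -1 */
    return 0;
}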

Binary Search

 It is an efficient algorithm for searching in a sorted array.


 Works by comparing a search key k with the array's middle
element A[m].
 If they match, the algorithm stops; otherwise, the same operation is
repeated recursively for the first half of the array if k<A[m] or for the
second half if k>A[m].

Algorithm: BinarySearch (A[low…high], key)
//Searches for a key element using the recursive binary search method; the initial call
//is made with low = 0 and high = n-1
//Input: An array A[0…n-1] with elements in ascending order and a search key
//Output: The position of the first element in A whose value is equal to key, or -1
//if no such element is found
{
if low>high
return -1
mid←(low+high)/2
if key=A[mid]
return mid
if key<A[mid]
return BinarySearch(A[low….mid-1],key)
else
return BinarySearch(A[mid+1…high],key)
}

17
Complexity: The complexity is analysed using the Master Theorem. The
complexity depends on the size and the type of input.
1. Best Case: Ω (1)
2. Worst Case: Ө (log n)
3. Average Case: Ө (log n)
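A possible C sketch of the recursive binary search above, assuming the array is already sorted; the sample data are illustrative:

#include <stdio.h>

/* Recursive binary search on a sorted array: returns an index of key in
 * a[low..high], or -1 if the key is absent.                              */
int binary_search(int a[], int low, int high, int key)
{
    int mid;
    if (low > high)
        return -1;
    mid = (low + high) / 2;
    if (a[mid] == key)
        return mid;
    if (key < a[mid])
        return binary_search(a, low, mid - 1, key);
    return binary_search(a, mid + 1, high, key);
}

int main(void)
{
    int a[] = {1, 3, 5, 7, 9, 11};
    printf("index of 7 = %d\n", binary_search(a, 0, 5, 7));   /* prints 3  */
    printf("index of 8 = %d\n", binary_search(a, 0, 5, 8));   /* prints -1 */
    return 0;
}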

Note: The program is to be executed for various sizes of input, and the
tables given below are to be filled up as part of the observation.

Linear Search
        Best Case                      Worst Case
Size    Experimental  Theoretical      Experimental  Theoretical
100
200
400
800
1600
3200

Binary Search
        Best Case                      Worst Case
Size    Experimental  Theoretical      Experimental  Theoretical
100
200
400
800
1600
3200

2. Write a program to sort a given set of elements using the Heap sort
method. Find the time complexity.

 This is a two-stage algorithm that works as follows:
 Stage 1 [Heap construction]: Construct a heap for the given array.
 Stage 2 [Maximum deletions]: Apply the root-deletion operation n-1
times to the remaining heap.
 As a result, the array elements are eliminated in decreasing order.
Under the array implementation of heaps, an element being deleted is
placed last, so the resulting array is exactly the original array sorted in
ascending order.

Algorithm: Heap Bottom-up (H [1…n])


//Constructs a heap from the elements of the given array by the bottom-up algorithm.
//Input: An array H [1….n] of orderable elements.
//Output : A heap H [1…n]
{
for i←└n/2┘ downto 1 do
{
k ← i; v ←H[k]
heap ←false
while not heap and 2*k<=n do
{
j←2*k
if j<n //there are two children
if H[j] < H[j+1] j←j+1
if v>=H[j]
heap ← true
else
{
H[k] ←H[j];
k←j
}
}
H[k] ←v
}
}
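A possible C sketch of the full two-stage heap sort. It adapts the 1-based pseudocode above to C's 0-based arrays (so the children of node k are 2k+1 and 2k+2); the sample array in main is illustrative:

#include <stdio.h>

/* Sift the element at index k down into the heap a[0..n-1]. */
static void sift_down(int a[], int n, int k)
{
    int v = a[k];
    while (2 * k + 1 < n) {
        int j = 2 * k + 1;                 /* left child                  */
        if (j + 1 < n && a[j] < a[j + 1])
            j++;                           /* pick the larger child       */
        if (v >= a[j])
            break;                         /* heap property restored      */
        a[k] = a[j];
        k = j;
    }
    a[k] = v;
}

void heap_sort(int a[], int n)
{
    int i, t;
    for (i = n / 2 - 1; i >= 0; i--)       /* stage 1: bottom-up build    */
        sift_down(a, n, i);
    for (i = n - 1; i > 0; i--) {          /* stage 2: n-1 root deletions */
        t = a[0]; a[0] = a[i]; a[i] = t;
        sift_down(a, i, 0);
    }
}

int main(void)
{
    int a[] = {5, 2, 9, 1, 7, 3}, i;
    heap_sort(a, 6);
    for (i = 0; i < 6; i++) printf("%d ", a[i]);   /* 1 2 3 5 7 9 */
    printf("\n");
    return 0;
}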

Complexity: The Complexity of heap construction stage of the


algorithm is in O (n) and maximum deletion stage is in O (nlogn).
1. Worst Case: Ө (n log n)
2. Average Case: Ө (n log n)
Note: Program to be executed for various sizes of input. Fill the given
table where g(n)= n log n. Obtaining a constant value in the column
t(n)/g(n) would prove that the complexity of heap sort is n log n.

Size n   Ascending              Descending             Random Order
         Count t(n)  t(n)/g(n)  Count t(n)  t(n)/g(n)  Count t(n)  t(n)/g(n)
256
512
1024
2048
4096
8192

3. Write a program to sort a given set of elements using the Merge sort
method and find the time required to sort the elements.
Merge sort is a perfect example of a successful application of the


divide-and-conquer technique.
1. Split array A[1..n] in two and make copies of each half in arrays
B[1..└n/2┘] and C[1..┌n/2┐]
2. Sort arrays B and C
3. Merge sorted arrays B and C into array A as follows:
a) Repeat the following until no elements remain in one of
the arrays:
i. compare the first elements in the remaining
unprocessed portions of the arrays
ii. copy the smaller of the two into A, while
incrementing the index indicating the unprocessed
portion of that array
b) Once all elements in one of the arrays are processed,
copy the remaining unprocessed elements from the other
array into A.
Algorithm: MergeSort (A [0...n-1])
//This algorithm sorts array A [0...n-1] by recursive mergesort.
//Input: An array A [0...n-1] of orderable elements.
//Output: Array A [0...n-1] sorted in non-decreasing order
{
if n>1
{
Copy A[0…└n/2┘-1] to B[0…└n/2┘-1]
Copy A[└n/2┘…n-1] to C[0…┌n/2┐-1]
MergeSort (B[0…└n/2┘-1])
MergeSort (C[0…┌n/2┐-1])
Merge (B, C, A)
}
}

Algorithm: Merge (B [0…p-1], C [0...q-1], A [0...p+q-1])
//Merges two sorted arrays into one sorted array.
//Input: Arrays B [0…p-1] and C [0...q-1] both sorted.
//Output: Sorted array A [0...p+q-1] of the elements of B and C.
{
i ← 0; j←0; k←0
while i<p and j<q do
{
if B[i] <= C[j]
A[k] ← B[i]; i← i+1
else
A[k] ← C[j]; j← j+1
k ← k+1
}
if i=p
copy C [j…q-1] to A[k…p+q-1]
else
copy B [i…p-1] to A[k…p+q-1]
}
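A possible C sketch of the recursive merge sort above; the temporary halves are allocated with malloc, and the sample data in main (taken from viva question 35) are illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Merge sorted halves b[0..p-1] and c[0..q-1] back into a[0..p+q-1]. */
static void merge(const int b[], int p, const int c[], int q, int a[])
{
    int i = 0, j = 0, k = 0;
    while (i < p && j < q)
        a[k++] = (b[i] <= c[j]) ? b[i++] : c[j++];
    while (i < p) a[k++] = b[i++];
    while (j < q) a[k++] = c[j++];
}

void merge_sort(int a[], int n)
{
    int h, *b, *c;
    if (n <= 1)
        return;
    h = n / 2;
    b = malloc(h * sizeof(int));           /* first half  A[0..h-1]  */
    c = malloc((n - h) * sizeof(int));     /* second half A[h..n-1]  */
    memcpy(b, a, h * sizeof(int));
    memcpy(c, a + h, (n - h) * sizeof(int));
    merge_sort(b, h);
    merge_sort(c, n - h);
    merge(b, h, c, n - h, a);
    free(b);
    free(c);
}

int main(void)
{
    int a[] = {9, 4, 3, 8, 6, 2, 1, 5, 7}, i;
    merge_sort(a, 9);
    for (i = 0; i < 9; i++) printf("%d ", a[i]);   /* 1 2 3 ... 9 */
    printf("\n");
    return 0;
}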

Complexity:

• All cases have same efficiency: Θ( n log n)


• Number of comparisons is close to theoretical minimum for
comparison-based sorting:
⌈log2 n!⌉ ≈ n log2 n - 1.44n
• Space requirement: Θ( n ) (NOT in-place)

Note: Program to be executed for various sizes of input. Fill the given
table where g(n)= n log n. Obtaining a constant value in the column
t(n)/g(n) would prove that the complexity of merge sort is n log n.

Size n   Ascending              Descending             Random Order
         Count t(n)  t(n)/g(n)  Count t(n)  t(n)/g(n)  Count t(n)  t(n)/g(n)
256
512
1024
2048
4096
8192

4. Write a program to check whether a given graph is
connected or not using DFS method.

Depth-first search starts visiting vertices of a graph at an arbitrary


vertex by marking it as having been visited. On each iteration, the
algorithm proceeds to an unvisited vertex that is adjacent to the one it
is currently in. This process continues until a vertex with no adjacent
unvisited vertices is encountered. At a dead end, the algorithm backs
up one edge to the vertex it came from and tries to continue visiting
unvisited vertices from there. The algorithm eventually halts after
backing up to the starting vertex, with the latter being a dead end.

DFS can be implemented with graphs represented as Adjacency


matrices: Θ(V2) or Adjacency linked lists: Θ(V+E)

Algorithm: GraphConnectivity( G)
// Checks for connectivity of a Graph using DFS
//Input: Graph G = (V, E)
//Output: Returns true for a connected graph, false otherwise.
//visited [] is global
{
n ← |V|
for i←0 to n-1 do
visited [i] ← 0
DFS (v) // any arbitrary vertex v
for i ← 0 to n-1 do
if ( visited [i] = 0 )
return false
return true
}

Algorithm: DFS (v)


//this algorithm visits all vertices reachable from v, a vertex of a Graph G = (V, E)
{
visited [v] ← 1
for each vertex w adjacent from v do
{
if ( visited [w] = 0) DFS (w)
}
}
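A possible C sketch of the connectivity check above, assuming an adjacency-matrix representation with at most MAXV vertices; the sample graph in main is illustrative:

#include <stdio.h>

#define MAXV 20

int adj[MAXV][MAXV];     /* adjacency matrix of the graph */
int visited[MAXV];
int n;                   /* number of vertices            */

/* Recursive DFS from vertex v, marking every reachable vertex. */
void dfs(int v)
{
    int w;
    visited[v] = 1;
    for (w = 0; w < n; w++)
        if (adj[v][w] && !visited[w])
            dfs(w);
}

/* Returns 1 if the undirected graph is connected, 0 otherwise. */
int is_connected(void)
{
    int i;
    for (i = 0; i < n; i++)
        visited[i] = 0;
    dfs(0);                               /* start from an arbitrary vertex */
    for (i = 0; i < n; i++)
        if (!visited[i])
            return 0;
    return 1;
}

int main(void)
{
    n = 4;                                /* path 0-1-2 plus isolated vertex 3 */
    adj[0][1] = adj[1][0] = 1;
    adj[1][2] = adj[2][1] = 1;
    printf("connected = %d\n", is_connected());   /* prints 0 */
    return 0;
}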

Complexity: For the adjacency matrix representation, the traversal


time efficiency is in Θ(|V|2) and for the adjacency linked list
representation, it is in Θ(|V|+|E|), where |V| and |E| are the number of
graph’s vertices and edges respectively.

5. Write a program to sort a given set of elements using Selection sort
and find the time required to sort the elements.

Selection sort works by scanning the entire given list to find its
smallest element and exchange it with the first element, putting the
smallest element in its final position in the sorted list. Then we
scan the list, starting with the second element, to find the smallest
among the last n-1 elements and exchange it with the second element,
putting the second smallest element in its final position. On the ith
pass through the list, which we number from 0 to n-2, the algorithm
searches for the smallest item among the last n-i elements and swaps
it with A[i]. After n-1 passes, the list is sorted.

Algorithm: SelectionSort (A[0…n-1])


//This algorithm sorts a given array by selection sort
//Input: An array A [0...n-1] of orderable elements
//Output: Array A [0..n-1] sorted in ascending order
{
for i←0 to n-2 do
{
min←i
for j←i+1 to n-1 do
{
if A[j]<A[min]
min← j
}
swap A[i] and A[min]
}
}
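A possible C sketch of the selection sort pseudocode above; the sample array in main is illustrative:

#include <stdio.h>

/* Selection sort: on pass i, place the smallest of a[i..n-1] at position i. */
void selection_sort(int a[], int n)
{
    int i, j, min, t;
    for (i = 0; i <= n - 2; i++) {
        min = i;
        for (j = i + 1; j <= n - 1; j++)
            if (a[j] < a[min])
                min = j;
        t = a[i]; a[i] = a[min]; a[min] = t;   /* swap A[i] and A[min] */
    }
}

int main(void)
{
    int a[] = {64, 25, 12, 22, 11}, i;
    selection_sort(a, 5);
    for (i = 0; i < 5; i++) printf("%d ", a[i]);   /* 11 12 22 25 64 */
    printf("\n");
    return 0;
}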

Complexity: The Time Complexity of Selection Sort is: Ө(n2)

Note: Program to be executed for various sizes of input. Fill the below
table where g(n)=n2. Obtaining a constant value in the column
t(n)/g(n) would prove that the complexity of selection sort for all cases
is Ө(n2).

Size n   Ascending              Descending             Random Order
         Count t(n)  t(n)/g(n)  Count t(n)  t(n)/g(n)  Count t(n)  t(n)/g(n)
256
512
1024
2048
4096
8192

6. Write a program to obtain the topological ordering of vertices in a
given digraph using the following techniques:
a) DFS traversal stack
b) Vertex deletion method

Topological sort, or topological ordering of a directed acyclic graph
(DAG), is a linear ordering of its nodes in which each node comes before
all nodes to which it has outbound edges. Every DAG has one or more
topological orderings.

DFS traversal stack

This algorithm is the simplest application of depth-first search.

Algorithm: DFS (G)
//Implements a depth-first search traversal of a given graph
//Input: Graph G = (V, E)
//Output: Graph G with its vertices marked with consecutive integers in the order they
//have been first encountered by the DFS traversal. array a[] is global
{
n ← |V|
mark each vertex in V with 0 as a mark of being “unvisited”.
count ← 0
for each vertex v in V do
{
if v is marked with 0
dfs(v)
}
for i ← n-1 downto 0 do
print a[i]
}
Algorithm: dfs(v)
//visits recursively all the unvisited vertices connected to vertex v by a path and
//numbers them in the order they are encountered via global variable count
{
count ← count+1
mark v with count
for each vertex w in V adjacent to v do
{
if w is marked with 0
dfs(w)
}
write v to array a //v is appended only after every vertex reachable from it is done
}
Vertices deletion method

Vertex deletion method is based on a direct implementation of the


decrease-and-conquer technique. It picks vertices from a DAG in a
sequence such that there are no other vertices preceding them. That
is, if a vertex has in-degree 0 then it can be next in the topological
order. We remove this vertex and look for another vertex of in-degree 0
in the resulting DAG. We repeat until all vertices have been added to
the topological order.

Algorithm: TopologicalSorting ()
// algorithm to print the topological ordering of a graph
L ← Empty list that will contain the sorted elements
S ← Set of all nodes with no incoming edges
while S is non-empty do
remove a node n from S
insert n into L
for each node m with an edge e from n to m do
remove edge e from the graph
if m has no other incoming edges then
insert m into S
if graph has edges then
output error message (graph has at least one cycle)
else
output message (proposed topologically sorted order: L)

Complexity: The algorithms for topological sorting have running time
linear in the number of nodes plus the number of edges O(|V|+|E|).
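A possible C sketch of the vertex deletion (source removal) method above, assuming an adjacency-matrix digraph with at most MAXV vertices; the sample digraph in main is illustrative:

#include <stdio.h>

#define MAXV 20

int adj[MAXV][MAXV];     /* adjacency matrix of the digraph */
int n;                   /* number of vertices              */

/* Repeatedly output a vertex of in-degree 0 and delete its outgoing edges;
 * report a cycle if no such vertex remains.                               */
void topological_sort(void)
{
    int indeg[MAXV], done[MAXV] = {0};
    int i, j, k;

    for (j = 0; j < n; j++) {
        indeg[j] = 0;
        for (i = 0; i < n; i++)
            indeg[j] += adj[i][j];
    }
    for (k = 0; k < n; k++) {
        int v = -1;
        for (i = 0; i < n; i++)
            if (!done[i] && indeg[i] == 0) { v = i; break; }
        if (v == -1) {
            printf("graph has at least one cycle\n");
            return;
        }
        printf("%d ", v);
        done[v] = 1;
        for (j = 0; j < n; j++)
            if (adj[v][j])
                indeg[j]--;               /* delete edges v -> j */
    }
    printf("\n");
}

int main(void)
{
    n = 4;
    adj[0][1] = adj[0][2] = adj[1][3] = adj[2][3] = 1;
    topological_sort();                   /* prints 0 1 2 3 */
    return 0;
}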

7. Write a program to sort a given set of elements using the Insertion
sort method and find the time complexity.

Insertion sort method involves scanning the sorted sub array from left
to right until the first element greater than or equal to A [n-1] is
encountered and then insert A[n-1] right before that element. Another
alternative is to scan the sorted sub array from right to left until the
first element smaller than or equal to A [n-1] is encountered and then
insert A [n-1] right after that element. Though insertion sort is clearly
based on a recursive idea, it is more efficient to implement this
algorithm bottom up iteratively.

Algorithm: InsertionSort (A[0…n-1])


//Sorts a given array by insertion sort
//Input: An array A [0...n-1] of n orderable elements
//Output: Array A [0...n-1] sorted in ascending order
{
for i←1 to n-1 do
{
v←A[i]
j←i-1
while j>=0 and A[j]>v do
{
A[j+1] ←A[j]
j←j-1
}
A [j+1] ←v
}
}
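A possible C sketch of the insertion sort pseudocode above; the sample array in main is illustrative:

#include <stdio.h>

/* Insertion sort: insert a[i] into the already-sorted prefix a[0..i-1]. */
void insertion_sort(int a[], int n)
{
    int i, j, v;
    for (i = 1; i <= n - 1; i++) {
        v = a[i];
        j = i - 1;
        while (j >= 0 && a[j] > v) {      /* shift larger elements right */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = v;
    }
}

int main(void)
{
    int a[] = {89, 45, 68, 90, 29, 34, 17}, i;
    insertion_sort(a, 7);
    for (i = 0; i < 7; i++) printf("%d ", a[i]);   /* 17 29 34 45 68 89 90 */
    printf("\n");
    return 0;
}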

Complexity: The Time complexity of Insertion sort is:


1. Best Case: n-1 ∈ Ө(n)
2. Worst Case: (n-1)n/2 ∈ Ө(n2)
3. Average Case: n2/4 ∈ Ө(n2)
Note: Program to be executed for various sizes of input. Fill the below
table, where g1(n)=n and g2(n)=n2. Obtaining a constant value in the
column t(n)/g(n) would confirm the corresponding complexity of
insertion sort for each case.

Size n   Ascending               Descending              Random Order
         Count t(n)  t(n)/g1(n)  Count t(n)  t(n)/g2(n)  Count t(n)  t(n)/g2(n)
256
512
1024
2048
4096
8192

8. Write a program to implement the 0/1 Knapsack problem using
dynamic programming.

Given: A set S of n items, with each item i having
• bi - a positive benefit
• wi - a positive weight
Goal: Choose a subset T of the items with maximum total benefit but
with weight at most W, i.e.
• Objective: maximize ∑i∈T bi
• Constraint: ∑i∈T wi ≤ W

Example: a knapsack of capacity 9 in with five items:

Item:    1     2     3     4     5
Weight:  4 in  2 in  2 in  6 in  2 in
Benefit: $20   $3    $6    $25   $80

Solution: items 5 (2 in), 3 (2 in) and 1 (4 in).

Algorithm: 0/1Knapsack(S, W)
//Input: set S of n items with benefit bi and weight wi; maximum weight W
//Output: benefit of the best subset with weight at most W
// Sk: set of items numbered 1 to k
// B[w]: the best benefit achievable with total weight at most w from the items
// considered so far (one-dimensional version of the table B[k,w])
{
for w ← 0 to W do
B[w] ← 0
for k ← 1 to n do
{
for w ← W downto wk do
{
if B[w-wk]+bk > B[w] then
B[w] ← B[w-wk]+bk
}
}
}
Complexity: The time efficiency of the dynamic programming 0/1
Knapsack algorithm is Ө(nW); the full table-based version also uses
Ө(nW) space, while the one-dimensional array used above needs only Ө(W).
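A possible C sketch of the one-dimensional dynamic programming solution above; MAXW and the sample instance in main (the example above) are illustrative assumptions:

#include <stdio.h>

#define MAXW 100

/* B[w] = best benefit achievable with capacity w, updated item by item. */
int knapsack(int n, int W, const int wt[], const int b[])
{
    int B[MAXW + 1] = {0};
    int k, w;
    for (k = 0; k < n; k++)
        for (w = W; w >= wt[k]; w--)      /* go downward so item k is used once */
            if (B[w - wt[k]] + b[k] > B[w])
                B[w] = B[w - wt[k]] + b[k];
    return B[W];
}

int main(void)
{
    int wt[] = {4, 2, 2, 6, 2};           /* weights of the example items   */
    int b[]  = {20, 3, 6, 25, 80};        /* benefits of the example items  */
    printf("best benefit = %d\n", knapsack(5, 9, wt, b));   /* prints 106 */
    return 0;
}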

9. Write a program to find the shortest paths using Dijkstra's algorithm
for a weighted connected graph.

Single Source Shortest Paths Problem: For a given vertex, called
the source, in a weighted connected graph, find the shortest paths to
all its other vertices. Dijkstra's algorithm is the best known algorithm
for the single source shortest paths problem. This algorithm is
applicable to graphs with nonnegative weights only and finds the
shortest paths to a graph's vertices in order of their distance from a
given source. It finds the shortest path from the source to a vertex
nearest to it, then to a second nearest, and so on. It is applicable to
both undirected and directed graphs.

Algorithm: Dijkstra(G, s)
//Dijkstra's algorithm for single-source shortest paths
//Input: A weighted connected graph G=(V,E) with nonnegative weights and its vertex s
//Output: The length dv of a shortest path from s to v and its penultimate vertex pv for
//every v in V
{
Initialise(Q) // Initialise vertex priority queue to empty
for every vertex v in V do
{
dv←∞; pv←null
Insert(Q,v,dv) //Initialise vertex priority in the priority queue
}
ds←0; Decrease(Q,s,ds) //Update priority of s with ds
Vt←Ø
for i←0 to |V|-1 do
{
u* ← DeleteMin(Q) //delete the minimum priority element
Vt ←Vt U {u*}
for every vertex u in V-Vt that is adjacent to u* do
{
if du* + w(u*,u)<du
{
du←du* + w(u*, u); pu←u*
Decrease(Q,u,du)
}
}
}
}
Complexity: The time efficiency is Θ(|V|2) for graphs represented by
their weight matrix with the priority queue implemented as an
unordered array, and O(|E| log |V|) for graphs represented by their
adjacency lists with the priority queue implemented as a min-heap.
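A possible C sketch of Dijkstra's algorithm using the weight-matrix representation and an unordered-array "priority queue" (a simple linear scan); the sample graph in main is illustrative:

#include <stdio.h>

#define V 5
#define INF 1000000

/* Single-source shortest paths; w[u][v] = 0 means no edge. */
void dijkstra(int w[V][V], int s, int d[V])
{
    int in_tree[V] = {0};
    int i, u, v;
    for (i = 0; i < V; i++)
        d[i] = INF;
    d[s] = 0;
    for (i = 0; i < V; i++) {
        u = -1;
        for (v = 0; v < V; v++)           /* pick the closest fringe vertex */
            if (!in_tree[v] && (u == -1 || d[v] < d[u]))
                u = v;
        in_tree[u] = 1;
        for (v = 0; v < V; v++)           /* relax edges going out of u     */
            if (w[u][v] && d[u] + w[u][v] < d[v])
                d[v] = d[u] + w[u][v];
    }
}

int main(void)
{
    int w[V][V] = {
        {0, 3, 0, 7, 0},
        {3, 0, 4, 2, 0},
        {0, 4, 0, 5, 6},
        {7, 2, 5, 0, 4},
        {0, 0, 6, 4, 0}
    };
    int d[V], i;
    dijkstra(w, 0, d);
    for (i = 0; i < V; i++)
        printf("d(0,%d) = %d\n", i, d[i]);   /* 0 3 7 5 9 */
    return 0;
}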

10. Write a program to sort a given set of elements using the Quick sort
method.

Quick Sort divides the array according to the value of its elements. It
rearranges elements of a given array A[0..n-1] to achieve its partition,
where the elements before position s are smaller than or equal to A[s]
and all the elements after position s are greater than or equal to A[s]:

A[0]…A[s-1]       A[s]      A[s+1]…A[n-1]
all are <= A[s]             all are >= A[s]

Algorithm: QUICKSORT(A[l..r])
//Sorts a subarray by quicksort
//Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
{
if l<r
{
s ← Partition(A[l..r]) //s is a split position
QUICKSORT(A[l..s-1])
QUICKSORT(A[s+1..r])
}
}

Algorithm : Partition(A[l..r])
//Partition a subarray by using its first element as its pivot
//Input:A subarray A[l..r] of A[0..n-1],defined by its left and right indices l and r (l<r)
//Output:A partition of A[l..r],with the split position returned as this function’s value
{
p ← A[l]
i ← l; j ← r+1
repeat
{ repeat i ← i+1 until A[i] >=p
repeat j ← j-1 until A[j] <=p
swap(A[i],A[j])
} until i>=j
swap(A[i],A[j]) // undo last swap when i>=j
swap(A[l],A[j])
return j
}
Complexity:
Best Case: Cbest(n) = 2Cbest(n/2) + n for n>1, Cbest(1) = 0, which gives Cbest(n) ∈ Ө(n log2 n)
Worst Case: Cworst(n) ∈ Ө(n2)
Average Case: Cavg(n) ≈ 1.38 n log2 n
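A possible C sketch of quicksort with the first element as pivot. The partition mirrors the pseudocode above but adds a bound check so that i cannot run past r; the sample array in main is illustrative:

#include <stdio.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[l..r] around the pivot a[l]; returns the split index. */
static int partition(int a[], int l, int r)
{
    int p = a[l];
    int i = l, j = r + 1;
    while (1) {
        do { i++; } while (i <= r && a[i] < p);   /* scan from the left  */
        do { j--; } while (a[j] > p);             /* scan from the right */
        if (i >= j)
            break;
        swap(&a[i], &a[j]);
    }
    swap(&a[l], &a[j]);                   /* put the pivot in its place */
    return j;
}

void quick_sort(int a[], int l, int r)
{
    if (l < r) {
        int s = partition(a, l, r);
        quick_sort(a, l, s - 1);
        quick_sort(a, s + 1, r);
    }
}

int main(void)
{
    int a[] = {5, 3, 1, 9, 8, 2, 4, 7}, i;
    quick_sort(a, 0, 7);
    for (i = 0; i < 8; i++) printf("%d ", a[i]);   /* 1 2 3 4 5 7 8 9 */
    printf("\n");
    return 0;
}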
Note: Program to be executed for various sizes of input. Fill the below
table, where g1(n)=n2 and g2(n)=n log n. Obtaining a constant value in
the column t(n)/g(n) would confirm the corresponding complexity of
quick sort for each case.

Size n   Ascending               Descending              Random Order
         Count t(n)  t(n)/g1(n)  Count t(n)  t(n)/g1(n)  Count t(n)  t(n)/g2(n)
256
512
1024
2048
4096
8192

11. Write a program to find Minimum cost spanning tree of a
given undirected graph using Kruskal’s algorithm.

Kruskal’s algorithm finds the minimum spanning tree for a weighted


connected graph G=(V,E) to get an acyclic subgraph with |V|-1 edges
for which the sum of edge weights is the smallest. Consequently the
algorithm constructs the minimum spanning tree as an expanding
sequence of subgraphs, which are always acyclic but are not
necessarily connected on the intermediate stages of algorithm. The
algorithm begins by sorting the graph’s edges in non decreasing order
of their weights. Then starting with the empty subgraph, it scans the
sorted list, adding the next edge on the list to the current subgraph if
such an inclusion does not create a cycle and simply skipping the edge
otherwise.

Algorithm: Kruskal(G)
//Kruskal's algorithm for constructing a minimum spanning tree
//Input: A weighted connected graph G=(V,E)
//Output: ET, the set of edges composing a minimum spanning tree of G
{
Sort E in nondecreasing order of the edge weights: w(ei1) <= … <= w(ei|E|)
ET ← ∅ ; ecounter ← 0 //Initialize the set of tree edges and its size
k ← 0 //initialize the number of processed edges
while ecounter <|V|-1 do
{
k← k+1
if ET U {eik} is acyclic
ET ← ET U {eik}; ecounter ← ecounter+1
}
return ET
}
Complexity: With an efficient sorting algorithm, the time efficiency of
Kruskal's algorithm will be in O(|E| log |E|).
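A possible C sketch of Kruskal's algorithm. Here the "is acyclic" test is realised with a simple union-find structure (a common implementation choice, not spelled out in the pseudocode); the edge list in main is illustrative:

#include <stdio.h>
#include <stdlib.h>

#define MAXV 20

typedef struct { int u, v, w; } Edge;

int parent[MAXV];

/* Union-find with path compression: detects whether an edge closes a cycle. */
int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

int cmp(const void *a, const void *b)
{
    return ((const Edge *)a)->w - ((const Edge *)b)->w;
}

/* Sort the edges, then accept every edge that joins two different components.
 * Prints the MST edges and returns the total cost.                           */
int kruskal(Edge e[], int m, int n)
{
    int i, cost = 0, taken = 0;
    for (i = 0; i < n; i++)
        parent[i] = i;
    qsort(e, m, sizeof(Edge), cmp);
    for (i = 0; i < m && taken < n - 1; i++) {
        int ru = find(e[i].u), rv = find(e[i].v);
        if (ru != rv) {                   /* acyclic: accept the edge */
            parent[ru] = rv;
            printf("(%d,%d) w=%d\n", e[i].u, e[i].v, e[i].w);
            cost += e[i].w;
            taken++;
        }
    }
    return cost;
}

int main(void)
{
    Edge e[] = {{0,1,4}, {0,2,3}, {1,2,2}, {1,3,5}, {2,3,6}};
    printf("MST cost = %d\n", kruskal(e, 5, 4));   /* 2 + 3 + 5 = 10 */
    return 0;
}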

12. Write a program to print all the nodes reachable from a given
starting node in a digraph using the Breadth First Search method.

BFS explores the graph by moving across to all the neighbours of the
last visited vertex, i.e., it proceeds in a concentric manner by visiting
all the vertices that are adjacent to a starting vertex, then all unvisited
vertices two edges apart from it and so on, until all the vertices in the
same connected component as the starting vertex are visited. Instead
of a stack, BFS uses a queue.

Algorithm: BFS(G)
//Implements a breadth-first search traversal of a given graph
//Input: Graph G = (V, E)
//Output: Graph G with its vertices marked with consecutive integers in the order they
//have been visited by the BFS traversal
{
mark each vertex with 0 as a mark of being “unvisited”
count ← 0
for each vertex v in V do
{
if v is marked with 0
bfs(v)
}
}

Algorithm: bfs(v)
//visits all the unvisited vertices connected to vertex v and assigns them numbers
//in the order they are visited via global variable count
{
count ← count + 1
mark v with count and initialize the queue with v
while queue is not empty do
{
a := front of queue
for each vertex w adjacent to a do
{
if w is marked with 0
{
count ← count + 1
mark w with count
add w to the end of the queue
}
}
remove a from the front of the queue
}
}
Complexity: BFS has the same efficiency as DFS: it is Θ (V2) for
Adjacency matrix representation and Θ (V+E) for Adjacency linked list
representation.
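A possible C sketch of BFS from a given start vertex, using an adjacency matrix and a simple array-based queue; the sample digraph in main is illustrative:

#include <stdio.h>

#define MAXV 20

int adj[MAXV][MAXV];
int visited[MAXV];
int n;

/* BFS from vertex s: prints every vertex reachable from s in visiting order. */
void bfs(int s)
{
    int queue[MAXV], front = 0, rear = 0;
    int v, w;
    visited[s] = 1;
    queue[rear++] = s;
    while (front < rear) {
        v = queue[front++];
        printf("%d ", v);
        for (w = 0; w < n; w++)
            if (adj[v][w] && !visited[w]) {
                visited[w] = 1;
                queue[rear++] = w;
            }
    }
    printf("\n");
}

int main(void)
{
    n = 5;                                /* digraph: 0->1, 0->2, 1->3 */
    adj[0][1] = adj[0][2] = adj[1][3] = 1;
    bfs(0);                               /* prints 0 1 2 3 (vertex 4 is unreachable) */
    return 0;
}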
13. Write a program to implement the all-pairs shortest paths problem
using Floyd's algorithm.

Floyd's algorithm is applicable to both directed and undirected
graphs provided that they do not contain a cycle of negative length. It
is convenient to record the lengths of the shortest paths in an n-by-n
matrix D called the distance matrix. The element dij in the ith row and
jth column of the matrix indicates the length of the shortest path from
the ith vertex to the jth vertex (1<=i, j<=n). The element in the ith row
and jth column of the current matrix D(k-1) is replaced by the sum of
the element in the same row i and the kth column and the element in
the kth row and the same column j if and only if the latter sum is
smaller than its current value.

Algorithm Floyd(W[1..n,1..n])
//Implements Floyd’s algorithm for the all-pairs shortest paths problem
//Input: The weight matrix W of a graph
//Output: The distance matrix of shortest paths length
{
D←W
for k←1 to n do
{
for i ← 1 to n do
{
for j ← 1 to n do
{
D[i,j] ← min (D[i, j], D[i, k]+D[k, j] )
}
}
}
return D
}

Complexity: The time efficiency of Floyd’s algorithm is cubic i.e. Θ (n3)
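A possible C sketch of Floyd's algorithm; a large constant INF stands in for "no edge" (∞), and the sample weight matrix in main is illustrative:

#include <stdio.h>

#define V 4
#define INF 10000

/* d starts as the weight matrix and ends as the all-pairs distance matrix. */
void floyd(int d[V][V])
{
    int i, j, k;
    for (k = 0; k < V; k++)
        for (i = 0; i < V; i++)
            for (j = 0; j < V; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
}

int main(void)
{
    int d[V][V] = {
        {0,   INF, 3,   INF},
        {2,   0,   INF, INF},
        {INF, 7,   0,   1  },
        {6,   INF, INF, 0  }
    };
    int i, j;
    floyd(d);
    for (i = 0; i < V; i++) {             /* print the distance matrix */
        for (j = 0; j < V; j++)
            printf("%6d", d[i][j]);
        printf("\n");
    }
    return 0;
}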

14. Write a program to find a subset of a given set S = {s1, s2, …, sn} of
n positive integers whose sum is equal to a given positive integer d.
For example, if S = {1, 2, 5, 6, 8} and d = 9, then there are two
solutions, {1, 2, 6} and {1, 8}. A suitable message is to be displayed if
the given problem instance doesn't have a solution.

Subset-Sum Problem is to find a subset of a given set S= {s1, s2… sn}


of n positive integers whose sum is equal to a given positive integer d.
It is assumed that the set’s elements are sorted in increasing order.
The state-space tree can then be constructed as a binary tree and, by
applying the backtracking algorithm, the solutions can be obtained.
Some instances of the problem may have no solutions.

Algorithm SumOfSub(s, k, r)
//Finds all subsets of w[1…n] that sum to m. The values of x[j], 1<=j<k, have already
//been determined. s = ∑j=1..k-1 w[j]*x[j] and r = ∑j=k..n w[j]. The w[j]'s are in
//ascending order.

{
x[k] ← 1 //generate left child
if (s+w[k] = m)
write (x[1...n]) //subset found
else if ( s + w[k]+w[k+1] <= m)
SumOfSub( s + w[k], k+1, r-w[k])
//Generate right child
if( (s + r - w[k] >= m) and (s + w[k+1] <= m) )
{
x[k] ← 0
SumOfSub( s, k+1, r-w[k] )
}
}

Complexity: The subset-sum problem solved using backtracking
generates at most two new subtrees at each step, and the running
time of the bounding function is linear, so the running time is O(2n).
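A possible C sketch of the backtracking SumOfSub routine above, specialised to the example instance S = {1, 2, 5, 6, 8}, d = 9; the extra k+1 < n checks are added so the array bounds are respected:

#include <stdio.h>

int w[] = {1, 2, 5, 6, 8};                /* set elements in ascending order */
int x[5];
int n = 5, d = 9;
int found = 0;

/* s = sum of the chosen elements so far, r = w[k] + ... + w[n-1]. */
void sum_of_sub(int s, int k, int r)
{
    int i;
    x[k] = 1;                             /* left child: include w[k]        */
    if (s + w[k] == d) {
        found = 1;
        printf("{ ");
        for (i = 0; i <= k; i++)
            if (x[i]) printf("%d ", w[i]);
        printf("}\n");
    } else if (k + 1 < n && s + w[k] + w[k + 1] <= d) {
        sum_of_sub(s + w[k], k + 1, r - w[k]);
    }
    /* right child: exclude w[k], if a solution is still possible */
    if (s + r - w[k] >= d && k + 1 < n && s + w[k + 1] <= d) {
        x[k] = 0;
        sum_of_sub(s, k + 1, r - w[k]);
    }
}

int main(void)
{
    int r = 0, i;
    for (i = 0; i < n; i++) r += w[i];
    if (w[0] <= d && r >= d)
        sum_of_sub(0, 0, r);              /* prints { 1 2 6 } and { 1 8 }    */
    if (!found)
        printf("no solution\n");
    return 0;
}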

15. a) Write a program to implement Horspool's algorithm for string
matching.
b) Write a program to find the binomial coefficient using dynamic
programming.

Horspool algorithm

Step 1: For a given pattern of length m and the alphabet used in both
the pattern and text, construct the shift table.
Step 2: Align the pattern against the beginning of text
Step 3 Repeat the following until either a matching substring is found
or the pattern reaches beyond the last character of text. Starting
with the last character in the pattern, compare the corresponding
characters in the pattern and text until either all m characters are
matched or a mismatching pair is encountered. In the latter case,
retrieve the entry t(c) from the c’s column of the shift table where
c is the text character currently aligned against the last character
of the pattern and shift the pattern by t(c) characters to the right
along the text.

Algorithm HorspoolMatching (P[0..m-1],T[0..n-1])


//Implements Horspool’s algorithm for string matching
//Input: Pattern P [0...m-1] and text T [0...n-1]
//Output: The index of the left end of the first matching substring or
//–1 if there are no matches
{
ShiftTable(P[0..m-1]) //generate Table of shifts
i ← m-1
while i<=n-1 do
{
k← 0
while k<=m-1 and P[m-1-k] =T[i-k]
{
k ← k+1
}
if k=m return i-m+1
else i ← i + Table[T[i]]
}
return –1
}

The algorithm for computing shift table entries. Initialize all the entries
to the pattern’s length m and scan the pattern left to right repeating
the following step m-1 times, for the jth character of the
pattern(0<=j<=m-2).
Algorithm Shift Table (P [0...m-1])
//Fills the shift table used by Horspool’s algorithm
//Input: Pattern P[0..m-1] and an alphabet of possible characters
//Output: Table [0...size-1] indexed by the alphabet’s characters
//and filled with shift sizes
{
initialize all the elements of Table with m
for j ←0 to m-2 do Table[P[j]] ← m-1-j
return Table
}
Complexity: The worst case efficiency of Horspool algorithm is Θ
(nm). But for random texts, it is in Θ (n).
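A possible C sketch of Horspool's algorithm following the pseudocode above, with a 256-entry shift table for single-byte characters; the sample strings in main are illustrative:

#include <stdio.h>
#include <string.h>

#define ALPHABET 256

/* Every character gets shift m; each of the first m-1 pattern characters
 * gets its distance from the last pattern position.                       */
void shift_table(const char *p, int m, int table[ALPHABET])
{
    int i, j;
    for (i = 0; i < ALPHABET; i++)
        table[i] = m;
    for (j = 0; j <= m - 2; j++)
        table[(unsigned char)p[j]] = m - 1 - j;
}

/* Returns the index of the left end of the first match of p in t, or -1. */
int horspool(const char *p, const char *t)
{
    int table[ALPHABET];
    int m = strlen(p), n = strlen(t);
    int i = m - 1, k;
    shift_table(p, m, table);
    while (i <= n - 1) {
        k = 0;
        while (k <= m - 1 && p[m - 1 - k] == t[i - k])
            k++;
        if (k == m)
            return i - m + 1;
        i += table[(unsigned char)t[i]];
    }
    return -1;
}

int main(void)
{
    printf("%d\n", horspool("BARBER", "JIM_SAW_ME_IN_A_BARBERSHOP"));  /* 16 */
    printf("%d\n", horspool("XYZ", "JIM_SAW_ME_IN_A_BARBERSHOP"));     /* -1 */
    return 0;
}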
Computing a Binomial coefficient
Computing a Binomial coefficient is a standard example of applying
dynamic programming. Of the numerous properties of binomial co-
efficient, we concentrate on two:
1) C(n,k)=C(n-1,k-1)+C(n-1,k) for n>k>0
2) C(n,0)=C(n,n)=1

Algorithm Binomial(n,k)
// Computes C (n, k) by the dynamic programming algorithm
//Input: A pair of non negative integers n>=k>=0
//Output: The value of C (n, k)
{
for i← 0 to n do
{
for j ← 0 to min(i, k)do
{
if j=0 or j=i
C[i,j] ←1
else
C[i,j] ← C[i-1,j-1]+ C[i-1,j]
}
}
return C[n,k]
}

Complexity: A(n,k)∈ θ (nk), where A(n,k) is the total number of


additions done in the algorithm
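A possible C sketch of the dynamic programming computation of C(n, k) above; the fixed MAXN bound and the values in main are illustrative:

#include <stdio.h>

#define MAXN 40

/* Builds the Pascal-triangle table row by row and returns C(n, k). */
long binomial(int n, int k)
{
    long c[MAXN + 1][MAXN + 1];
    int i, j;
    for (i = 0; i <= n; i++)
        for (j = 0; j <= (i < k ? i : k); j++)
            if (j == 0 || j == i)
                c[i][j] = 1;              /* C(i,0) = C(i,i) = 1            */
            else
                c[i][j] = c[i - 1][j - 1] + c[i - 1][j];
    return c[n][k];
}

int main(void)
{
    printf("C(5,2)  = %ld\n", binomial(5, 2));    /* 10  */
    printf("C(10,3) = %ld\n", binomial(10, 3));   /* 120 */
    return 0;
}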

17. a) Write a program to print all the nodes reachable from a given
starting node in a digraph using the Depth First Search method.

b) Write a program to compute the transitive closure of a
given directed graph using Warshall’s algorithm.

Depth First Search:

Depth-first search starts visiting vertices of a graph at an arbitrary


vertex by marking it as having been visited. On each iteration, the
algorithm proceeds to an unvisited vertex that is adjacent to the one it
is currently in. This process continues until a vertex with no adjacent
unvisited vertices is encountered. At a dead end, the algorithm backs
up one edge to the vertex it came from and tries to continue visiting
unvisited vertices from there. The algorithm eventually halts after
backing up to the starting vertex, with the latter being a dead end.

Algorithm : DFS(G)
//Implements a depth-first search traversal of a given graph
//Input : Graph G = (V,E)
//Output : Graph G with its vertices marked with consecutive integers in the order they
//have been first encountered by the DFS traversal
{
mark each vertex in V with 0 as a mark of being “unvisited”.
count ← 0
for each vertex v in V do
if v is marked with 0
dfs(v)
}

Algorithm : dfs(v)
//visits recursively all the unvisited vertices connected to vertex v by a path
//and numbers them in the order they are encountered via global variable count
{
count ← count+1
mark v with count
for each vertex w in V adjacent to v do
if w is marked with 0
dfs(w)
}

Complexity: For the adjacency matrix representation, the traversal


time efficiency is in Θ(|V|2) and for the adjacency linked list
representation, it is in Θ(|V|+|E|), where |V| and |E| are the number of
graph’s vertices and edges respectively.
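A possible C sketch of printing all nodes reachable from a given start node by DFS, again assuming an adjacency-matrix digraph; the sample digraph in main is illustrative:

#include <stdio.h>

#define MAXV 20

int adj[MAXV][MAXV];     /* adjacency matrix of the digraph */
int visited[MAXV];
int n;

/* Print every node reachable from v via depth-first search. */
void dfs(int v)
{
    int w;
    visited[v] = 1;
    printf("%d ", v);
    for (w = 0; w < n; w++)
        if (adj[v][w] && !visited[w])
            dfs(w);
}

int main(void)
{
    n = 5;                                /* digraph: 0->1, 1->2, 0->3 */
    adj[0][1] = adj[1][2] = adj[0][3] = 1;
    dfs(0);                               /* prints 0 1 2 3 (vertex 4 is unreachable) */
    printf("\n");
    return 0;
}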

Warshall’s algorithm:

The transitive closure of a directed graph with n vertices can be


defined as the n-by-n boolean matrix T={tij}, in which the element in
the ith row(1<=i<=n) and jth column(1<=j<=n) is 1 if there exists a
non trivial directed path from ith vertex to jth vertex, otherwise, tij is 0.

Warshall’s algorithm constructs the transitive closure of a given


digraph with n vertices through a series of n-by-n boolean matrices
R(0), …, R(k-1), R(k), …, R(n), where R(0) is the adjacency matrix of the digraph and
R(1) contains the information about paths that use the first vertex as
intermediate. In general, each subsequent matrix in series has one
more vertex to use as intermediate for its path than its predecessor.
The last matrix in the series R(n) reflects paths that can use all n
vertices of the digraph as intermediate and finally transitive closure is
obtained. The central point of the algorithm is that we compute all the
elements of each matrix R(k) from its immediate predecessor R (k-1) in
series.

Algorithm Warshall(A[1..n,1..n])
//Implements Warshall‘s algorithm for computing the transitive closure
//Input: The Adjacency matrix A of a digraph with n vertices
//Output: The transitive closure of digraph
{
R(0) ← A
for k ← 1 to n do
{
for i ← 1 to n do
{
for j ← 1 to n do
{
R(k)[i,j] ← R(k-1) [i,j] or R(k-1) [i,k] and R(k-1) [k,j]
}
}
}
return R(n)
}

Complexity: The time efficiency of Warshall’s algorithm is in Θ (n3).
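A possible C sketch of Warshall's algorithm; the sample adjacency matrix in main is the digraph from viva question 42 and is only illustrative:

#include <stdio.h>

#define V 4

/* r starts as the adjacency matrix and ends as the transitive closure
 * (r[i][j] = 1 iff there is a directed path from i to j).               */
void warshall(int r[V][V])
{
    int i, j, k;
    for (k = 0; k < V; k++)
        for (i = 0; i < V; i++)
            for (j = 0; j < V; j++)
                r[i][j] = r[i][j] || (r[i][k] && r[k][j]);
}

int main(void)
{
    int r[V][V] = {
        {0, 1, 0, 0},
        {0, 0, 0, 1},
        {0, 0, 0, 0},
        {1, 0, 1, 0}
    };
    int i, j;
    warshall(r);
    for (i = 0; i < V; i++) {             /* print the transitive closure */
        for (j = 0; j < V; j++)
            printf("%d ", r[i][j]);
        printf("\n");
    }
    return 0;
}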

18. Write a program to implement the N-Queens problem using
backtracking.

The n-queens problem

The n-queens problem consists of placing n queens on an n x n checker


board in such a way that they do not threaten each other, according to
the rules of the game of chess. Every queen on a checker square can
reach the other squares that are located on the same horizontal,
vertical, and diagonal line. So there can be at most one queen at each
horizontal line, at most one queen at each vertical line, and at most
one queen at each of the 4n-2 diagonal lines. Furthermore, since we
want to place as many queens as possible, namely exactly n queens,
there must be exactly one queen at each horizontal line and at each
vertical line.

The concept behind the backtracking algorithm used to solve this
problem is to successively place the queens in columns. When it is
impossible to place a queen in a column (it is on the same diagonal,
row, or column as another queen), the algorithm backtracks and
adjusts a preceding queen.

Algorithm: NQueens (k, n)
//Using backtracking, this procedure prints all possible placements of n queens
//on an n x n chessboard so that they are non-attacking
{
for i ← 1 to n do
{
if(Place(k,i) )
{
x[k] ← i
if (k=n)
write ( x[1...n])
else
Nqueens (k+1, n)
}
}
}

Algorithm Place( k, i)
//Returns true if a queen can be placed in kth row and ith column. Otherwise it
//returns false. x[] is a global array whose first (k-1) values have been set.
//Abs(r) returns the absolute value of r.
{
for j ← 1 to k-1 do
{
if ( x[j]=i or Abs(x[j]-i) = Abs(j-k) )
{
return false
}
}
return true
}
Complexity: Because the number of candidate solutions of the
n-queens problem is n! and the bounding function takes a linear
amount of time to evaluate, the running time of the n-queens problem
is O(n!).
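A possible C sketch of the backtracking N-Queens solution above; N is fixed to 8 here for illustration, and the program prints one placement plus the total number of solutions:

#include <stdio.h>
#include <stdlib.h>

#define N 8

int x[N + 1];            /* x[k] = column of the queen in row k (1-based) */
int solutions = 0;

/* Can a queen be placed in row k, column i, given rows 1..k-1? */
int place(int k, int i)
{
    int j;
    for (j = 1; j <= k - 1; j++)
        if (x[j] == i || abs(x[j] - i) == abs(j - k))
            return 0;                     /* same column or same diagonal */
    return 1;
}

/* Backtracking: try every column of row k, recurse on row k+1. */
void nqueens(int k, int n)
{
    int i, j;
    for (i = 1; i <= n; i++) {
        if (place(k, i)) {
            x[k] = i;
            if (k == n) {
                solutions++;
                if (solutions == 1) {     /* print the first placement */
                    for (j = 1; j <= n; j++)
                        printf("%d ", x[j]);
                    printf("\n");
                }
            } else {
                nqueens(k + 1, n);
            }
        }
    }
}

int main(void)
{
    nqueens(1, N);
    printf("total solutions for n=%d: %d\n", N, solutions);   /* 92 */
    return 0;
}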
