
Q 1. Describe the characteristics of algorithms. Explain procedure and recursion in algorithms. Ans.

Characteristics of Algorithms: While designing an algorithm as a solution to a given problem, we must take care of the following five important characteristics of an algorithm.

Finiteness: An algorithm must terminate after a finite number of steps, and each step must be executable in a finite amount of time. In order to establish a sequence of steps as an algorithm, it should be shown that it terminates (in a finite number of steps) on all allowed inputs.

Definiteness (no ambiguity): Each step of an algorithm must be precisely defined; the action to be carried out must be rigorously and unambiguously specified for each case.

Inputs: An algorithm has zero or more, but only a finite number of, inputs.

Outputs: An algorithm has one or more outputs. The requirement of at least one output is essential, because otherwise we cannot know the answer/solution provided by the algorithm. The outputs have a specific relation to the inputs, where the relation is defined by the algorithm.

Effectiveness: An algorithm should be effective. This means that each of the operations to be performed in an algorithm must be sufficiently basic that it can, in principle, be done exactly and in a finite length of time by a person using pencil and paper. Note that the FINITENESS condition is a special case of EFFECTIVENESS: if a sequence of steps is not finite, then it cannot be effective either.

There are two advanced control structures: Procedure and Recursion.

Procedure: Among the terms used instead of procedure are subprogram and even function. It may happen that a sequence of instructions occurs frequently, either repeatedly in different parts of the same algorithm or in different algorithms.

In such cases, writing the same sequence repeatedly is wasteful. A procedure is a mechanism that checks this wastage: the sequence of instructions expected to be used repeatedly in one or more algorithms is written only once, outside of and independent of the algorithms of which the sequence would otherwise have been a part.

Syntax:

Procedure <name> (<parameter list>) [: <type>]
    <declarations>
    <sequence of instructions expected to occur repeatedly>
end;

For instance:

Procedure sum-square (a, b : integer) : integer;
{the inputs a and b are integers and the output is also an integer}
S : integer; {to store the required number}
begin
    S <- a^2 + b^2
    return (S)
end;

Program diagonal-length
{The program finds the lengths of the diagonals (hypotenuses) of right-angled triangles whose side lengths are given as integers. The program terminates when the length of any side is not a positive integer.}
L1, L2 : integer; {given side lengths}
D : real; {to store the diagonal length}
read (L1, L2)
while (L1 > 0 and L2 > 0) do
begin
    D <- square-root (sum-square (L1, L2))
    write ('For sides of given lengths', L1, L2, 'the required diagonal length is', D)
    read (L1, L2)
end
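The procedure and calling program above can be sketched in Java as follows; the class and method names (DiagonalLength, sumSquare, diagonal) are my own choices, not part of the original pseudocode:

```java
public class DiagonalLength {

    // Corresponds to the procedure sum-square(a, b): returns a^2 + b^2.
    static int sumSquare(int a, int b) {
        return a * a + b * b;
    }

    // Diagonal (hypotenuse) length for a right-angled triangle
    // with side lengths l1 and l2.
    static double diagonal(int l1, int l2) {
        return Math.sqrt(sumSquare(l1, l2));
    }

    public static void main(String[] args) {
        // Fixed sample inputs stand in for the read(L1, L2) loop.
        int[][] sides = { {3, 4}, {5, 12} };
        for (int[] s : sides) {
            if (s[0] > 0 && s[1] > 0) {
                System.out.println("For sides " + s[0] + ", " + s[1]
                        + " the diagonal length is " + diagonal(s[0], s[1]));
            }
        }
    }
}
```

The procedure is written once and called from the loop, exactly the wastage-checking role described above.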

Recursion: A procedure that can call itself is said to be a recursive procedure/algorithm. For example, the factorial function can be defined recursively:

factorial (1) = 1
factorial (n) = n * factorial (n - 1)

Similarly, a procedure that sums the first n natural numbers:

Procedure SUM (n : integer) : integer;
s : integer;
begin
    if n = 0 then
        return (0)
    s <- n + SUM (n - 1)
    return (s)
end;
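These recursive definitions translate directly into Java (class and method names are my own):

```java
public class RecursionDemo {

    // factorial(1) = 1; factorial(n) = n * factorial(n - 1)
    static long factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    // sum(0) = 0; sum(n) = n + sum(n - 1), i.e. 0 + 1 + ... + n
    static int sum(int n) {
        if (n == 0) return 0;
        return n + sum(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
        System.out.println(sum(10));      // 55
    }
}
```

Each call reduces the argument toward the base case, which guarantees the finiteness property discussed in the characteristics above.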

Q 2. If f(n) = n^3 / 2 and g(n) = 37n^2 + 120n + 17, then show that g = O(f) and f is not O(g).

Ans.

g(n) = 37n^2 + 120n + 17 and f(n) = n^3 / 2.

Solution:

(i) For g = O(f), we need constants C and k such that g(n) <= C . f(n) for all n >= k. Take C = 53 and k = 3. For n >= 1 we have g(n) = 37n^2 + 120n + 17 <= 37n^2 + 120n^2 + 17n^2 = 174n^2, and 174n^2 <= 53 . (n^3 / 2) whenever n >= 7; the inequality g(n) <= 53 . f(n) can be checked directly for n = 3, 4, 5, 6 (for example, g(3) = 710 <= 53 . f(3) = 715.5). Therefore g = O(f).

(ii) To show f is not O(g), suppose to the contrary that there exist C and k such that n^3 / 2 <= C . g(n) for all n >= k. Since g(n) <= 174n^2 for all n >= 1, this would imply n^3 / 2 <= 174 C n^2, i.e., n <= 348 C, for all n >= k. But for n = max {348 C + 1, k} this statement is false. This contradiction shows that f is not O(g).
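The witness constants C = 53 and k = 3 from part (i) can be spot-checked numerically over a finite range. This is only a sanity check, not a substitute for the proof; the class and method names are mine:

```java
public class BigOCheck {

    static double f(int n) { return n * n * n / 2.0; }             // f(n) = n^3 / 2
    static double g(int n) { return 37.0 * n * n + 120 * n + 17; } // g(n)

    public static void main(String[] args) {
        double c = 53; // witness constant
        int k = 3;     // witness threshold
        boolean holds = true;
        for (int n = k; n <= 10000; n++) {
            if (g(n) > c * f(n)) {
                holds = false;
                System.out.println("Bound fails at n = " + n);
            }
        }
        if (holds) {
            System.out.println("g(n) <= 53 * f(n) holds for 3 <= n <= 10000");
        }
    }
}
```

No finite check can establish an asymptotic bound, but a failure here would immediately disprove the chosen constants.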

Q 3. Explain the concept of bubble sort and also write the algorithm for bubble sort. Ans.

Bubble Sort: Bubble sort is a simple and well-known sorting algorithm. It is rarely used in practice; its main application is as an introduction to sorting algorithms. Bubble sort belongs to the O(n^2) sorting algorithms, which makes it quite inefficient for sorting large data volumes. Bubble sort is stable and adaptive.

Algorithm:
1. Compare each pair of adjacent elements from the beginning of the array and, if they are in reversed order, swap them.
2. If at least one swap has been done, repeat step 1.

You can imagine that on every pass the big "bubbles" float to the surface and stay there. When a pass moves no bubble, sorting stops. Let us see an example to make the idea of bubble sort clearer. Example: sort {5, 1, 12, -5, 16} using bubble sort.

Complexity analysis: The average and worst case complexity of bubble sort is O(n^2), and it makes O(n^2) swaps in the worst case. Bubble sort is adaptive, which means that for an almost-sorted array it runs in O(n) time. Avoid implementations which don't check on every pass whether any swaps were made; this check is necessary in order to preserve the adaptive property.

Turtles and rabbits: Another problem of bubble sort is that its running time depends badly on the initial order of the elements. Big elements ("rabbits") move up fast, while small ones ("turtles") move down very slowly. This problem is solved in cocktail sort. Turtle example: though the array {2, 3, 4, 5, 1} is almost sorted, it takes O(n^2) iterations to sort. Element {1} is a turtle.


Rabbit example: the array {6, 1, 2, 3, 4, 5} is almost sorted too, but it takes only O(n) iterations to sort. Element {6} is a rabbit. This example demonstrates the adaptive property of bubble sort.
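The cocktail sort mentioned above alternates forward and backward passes, so turtles move toward the front as fast as rabbits move toward the back. A minimal sketch, whose implementation details are my own rather than from the original text:

```java
public class CocktailSort {

    // Cocktail sort: bubble sort with alternating forward and
    // backward passes over a shrinking window [start, end].
    public static void sort(int[] arr) {
        boolean swapped = true;
        int start = 0, end = arr.length - 1;
        while (swapped) {
            swapped = false;
            for (int i = start; i < end; i++) {      // forward: rabbits go right
                if (arr[i] > arr[i + 1]) { swap(arr, i); swapped = true; }
            }
            end--;                                    // largest element is in place
            for (int i = end - 1; i >= start; i--) { // backward: turtles go left
                if (arr[i] > arr[i + 1]) { swap(arr, i); swapped = true; }
            }
            start++;                                  // smallest element is in place
        }
    }

    private static void swap(int[] arr, int i) {
        int tmp = arr[i]; arr[i] = arr[i + 1]; arr[i + 1] = tmp;
    }

    public static void main(String[] args) {
        int[] turtle = {2, 3, 4, 5, 1}; // the turtle example above
        sort(turtle);
        System.out.println(java.util.Arrays.toString(turtle)); // [1, 2, 3, 4, 5]
    }
}
```

On the turtle array {2, 3, 4, 5, 1}, the first backward pass carries the 1 all the way to the front, instead of one position per pass as in plain bubble sort.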

Code snippets: There are several ways to implement bubble sort. Notice that the "swapped" check is absolutely necessary in order to preserve the adaptive property.

Java:

public void bubbleSort(int[] arr) {
    boolean swapped = true;
    int j = 0;
    int tmp;
    while (swapped) {
        swapped = false;
        j++;
        for (int i = 0; i < arr.length - j; i++) {
            if (arr[i] > arr[i + 1]) {
                tmp = arr[i];
                arr[i] = arr[i + 1];
                arr[i + 1] = tmp;
                swapped = true;
            }
        }
    }

}

Q 4. Prove that if n >= 1, then for any n-key B-tree T of height h and minimum degree t >= 2, h <= log_t ((n + 1) / 2). Ans.

In computer science, a B-tree is a tree data structure that keeps data sorted and allows searches, sequential access, insertions, and deletions in logarithmic amortized time. The B-tree is a generalization of a binary search tree in that more than two paths may diverge from a single node. Unlike self-balancing binary search trees, the B-tree is optimized for systems that read and write large blocks of data. It is commonly used in databases and file systems.

Proof: If a B-tree has height h, the root contains at least one key and all other nodes contain at least t - 1 keys. Thus there are at least 2 nodes at depth 1, at least 2t nodes at depth 2, at least 2t^2 nodes at depth 3, and so on, until at depth h there are at least 2t^(h-1) nodes.
Counting the keys, we get

n >= 1 + (t - 1) . sum over i = 1 to h of 2t^(i-1)
  = 1 + 2(t - 1) . (t^h - 1) / (t - 1)
  = 2t^h - 1.

By simple algebra, t^h <= (n + 1) / 2. Taking base-t logarithms of both sides gives h <= log_t ((n + 1) / 2), which proves the theorem.
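The bound can be illustrated numerically: for the minimally filled tree the key count is exactly n = 2t^h - 1, and the theorem's bound log_t((n + 1) / 2) then equals h. A small Java check (method names are my own; this illustrates the algebra, it is not part of the proof):

```java
public class BTreeHeightBound {

    // Fewest keys a B-tree of height h and minimum degree t can hold: 2t^h - 1.
    static long minKeys(int t, int h) {
        return 2 * (long) Math.pow(t, h) - 1;
    }

    // The bound from the theorem: log_t((n + 1) / 2).
    static double heightBound(long n, int t) {
        return Math.log((n + 1) / 2.0) / Math.log(t);
    }

    public static void main(String[] args) {
        // For the minimally filled tree the bound is attained exactly.
        for (int t = 2; t <= 5; t++) {
            for (int h = 1; h <= 6; h++) {
                long n = minKeys(t, h);
                System.out.printf("t=%d h=%d n=%d log_t((n+1)/2)=%.3f%n",
                        t, h, n, heightBound(n, t));
            }
        }
    }
}
```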

Q 5. Explain briefly the concept of breadth-first search (BFS). Ans.

In graph theory, breadth-first search (BFS) is a graph search algorithm that begins at the root node and explores all the neighboring nodes. Then, for each of those nearest nodes, it explores their unexplored neighbor nodes, and so on, until it finds the goal. BFS is an uninformed search method: it expands and examines all nodes of the graph systematically, without considering the goal until it finds it, and it does not use a heuristic. From the standpoint of the algorithm, all child nodes obtained by expanding a node are added to a FIFO (First In, First Out) queue. In typical implementations, nodes that have not yet been examined for their neighbors are placed in some container (such as a queue or linked list) called "open", and once examined they are placed in the container "closed".

Algorithm (informal):
1. Enqueue the root node.
2. Dequeue a node and examine it. If the element sought is found in this node, quit the search and return a result. Otherwise, enqueue any successors (the direct child nodes) that have not yet been discovered.
3. If the queue is empty, every node in the graph has been examined: quit the search and return "not found".

If the queue is not empty, repeat from step 2.

Note: Using a stack instead of a queue would turn this algorithm into a depth-first search.

Pseudocode:

procedure BFS(Graph, source):
    create a queue Q
    enqueue source onto Q
    mark source
    while Q is not empty:
        dequeue an item from Q into v
        for each edge e incident on v in Graph:
            let w be the other end of e
            if w is not marked:
                mark w
                enqueue w onto Q
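The pseudocode can be rendered in Java roughly as follows; the adjacency-map graph representation and the names are my own choices:

```java
import java.util.*;

public class BFS {

    // Breadth-first search over an adjacency-list graph.
    // Returns the vertices in the order they are visited.
    public static List<Integer> bfs(Map<Integer, List<Integer>> graph, int source) {
        List<Integer> order = new ArrayList<>();
        Set<Integer> marked = new HashSet<>();   // the "mark" from the pseudocode
        Queue<Integer> q = new ArrayDeque<>();   // FIFO queue
        marked.add(source);
        q.add(source);
        while (!q.isEmpty()) {
            int v = q.remove();                  // dequeue an item into v
            order.add(v);
            for (int w : graph.getOrDefault(v, List.of())) {
                if (marked.add(w)) {             // add() is false if already marked
                    q.add(w);                    // enqueue unmarked neighbor
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> g = Map.of(
                1, List.of(2, 3),
                2, List.of(4),
                3, List.of(4),
                4, List.of());
        System.out.println(bfs(g, 1)); // [1, 2, 3, 4]
    }
}
```

Swapping the ArrayDeque's queue operations for stack operations (push/pop) would turn this into a depth-first search, as the note above observes.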

Q 6. Explain Kruskal's algorithm. Ans.

Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a connected weighted graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. If the graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for each connected component). Kruskal's algorithm is an example of a greedy algorithm. The algorithm first appeared in Proceedings of the American Mathematical Society, pp. 48-50, in 1956, and was written by Joseph Kruskal. Other algorithms for this problem include Prim's algorithm, the reverse-delete algorithm, and Borůvka's algorithm.

Description:
- create a forest F (a set of trees), where each vertex in the graph is a separate tree

- create a set S containing all the edges in the graph
- while S is nonempty and F is not yet spanning:
  - remove an edge with minimum weight from S
  - if that edge connects two different trees, then add it to the forest, combining two trees into a single tree
  - otherwise discard that edge

At the termination of the algorithm, the forest has only one component and forms a minimum spanning tree of the graph.

Performance: Where E is the number of edges in the graph and V is the number of vertices, Kruskal's algorithm can be shown to run in O(E log E) time, or equivalently O(E log V) time, with simple data structures. These running times are equivalent because:
- E is at most V^2, and log(V^2) = 2 log V, which is O(log V).
- If we ignore isolated vertices, which will each be their own component of the minimum spanning forest, V <= E + 1, so log V is O(log E).

We can achieve this bound as follows: first sort the edges by weight using a comparison sort in O(E log E) time; this allows the step "remove an edge with minimum weight from S" to operate in constant time. Next, we use a disjoint-set data structure (union-find) to keep track of which vertices are in which components. We need to perform O(E) operations: two "find" operations and possibly one "union" for each edge. Even a simple disjoint-set data structure, such as disjoint-set forests with union by rank, can perform O(E) operations in O(E log V) time. Thus the total time is O(E log E) = O(E log V). Provided that the edges are either already sorted or can be sorted in linear time (for example with counting sort or radix sort), the algorithm can use a more sophisticated disjoint-set data structure to run in O(E alpha(V)) time, where alpha is the extremely slowly growing inverse of the single-valued Ackermann function.

Pseudocode:

function Kruskal(G = <N, A> : graph; length : A -> R+) : set of edges
    Define an elementary cluster C(v) <- {v}
    Initialize a priority queue Q to contain all edges in G, using the weights as keys.

    Define a forest T          // T will ultimately contain the edges of the MST
    // n is the total number of vertices
    while T has fewer than n - 1 edges do
        // edge (u, v) is the minimum-weight edge remaining in Q
        (u, v) <- Q.removeMin()
        // prevent cycles in T: add (u, v) only if T does not
        // already contain a path between u and v
        Let C(v) be the cluster containing v, and let C(u) be the cluster containing u.
        if C(v) != C(u) then
            Add edge (v, u) to T.
            Merge C(v) and C(u) into one cluster, that is, union C(v) and C(u).
    return tree T
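A compact Java sketch of Kruskal's algorithm using a disjoint-set forest, sorting the edge list up front instead of using a priority queue (as the performance discussion above suggests); the names and the edge representation are my own:

```java
import java.util.*;

public class Kruskal {

    static int[] parent; // disjoint-set forest: parent[v] == v for a root

    // Find with path halving keeps the trees shallow.
    static int find(int v) {
        while (parent[v] != v) {
            parent[v] = parent[parent[v]];
            v = parent[v];
        }
        return v;
    }

    // Each edge is {u, v, weight}; vertices are 0 .. vertices-1.
    // Returns the total weight of the minimum spanning tree.
    static int mstWeight(int vertices, int[][] edges) {
        parent = new int[vertices];
        for (int i = 0; i < vertices; i++) parent[i] = i; // each vertex its own cluster
        Arrays.sort(edges, Comparator.comparingInt(e -> e[2])); // sort edges by weight
        int total = 0, used = 0;
        for (int[] e : edges) {
            int ru = find(e[0]), rv = find(e[1]);
            if (ru != rv) {          // edge joins two different clusters: no cycle
                parent[ru] = rv;     // merge the clusters (union)
                total += e[2];
                if (++used == vertices - 1) break; // tree is spanning
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] edges = { {0,1,4}, {0,2,1}, {1,2,2}, {1,3,5}, {2,3,8} };
        System.out.println(mstWeight(4, edges)); // 8 (edges of weight 1, 2 and 5)
    }
}
```

Greedy correctness shows in the trace: the two cheapest edges are taken, the weight-4 edge is discarded because it would close a cycle, and the weight-5 edge completes the tree.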

