
Assignment II:

Graph Algorithm

Submitted By
Mekha Krishnan M
17PHD1027
Contents

1 Breadth First Search Procedure
  1.1 Problem
  1.2 Solution
      1.2.1 Problem Description
      1.2.2 Algorithm
      1.2.3 Time complexity
      1.2.4 Proof of Correctness
      1.2.5 Real Time Application of Breadth First Search

2 Depth First Search Procedure
  2.1 Problem
  2.2 Solution
      2.2.1 Problem Description
      2.2.2 Algorithm
      2.2.3 Time complexity
      2.2.4 Proof of Correctness
      2.2.5 Real Time Application of Depth First Search

3 Topological Sorting on a Directed Acyclic Graph
  3.1 Problem
  3.2 Solution
      3.2.1 Problem Description
      3.2.2 Algorithm
      3.2.3 Time complexity
      3.2.4 Proof of Correctness
      3.2.5 Real Time Application of Topological Sorting for a Directed Acyclic Graph

4 Minimum Spanning Tree
  4.1 Problem I
  4.2 Solution I
      4.2.1 Problem Description
      4.2.2 Algorithm
      4.2.3 Time complexity
      4.2.4 Proof of Correctness
  4.3 Problem II
  4.4 Solution II
      4.4.1 Problem Description
      4.4.2 Algorithm
      4.4.3 Time complexity
      4.4.4 Proof of Correctness
  4.5 Real Time Application of Minimum Spanning Tree

5 Single Source Shortest Path
  5.1 Problem
  5.2 Solution
      5.2.1 Problem Description
      5.2.2 Algorithm
      5.2.3 Time complexity
      5.2.4 Proof of Correctness
      5.2.5 Real Time Application of Dijkstra's algorithm
1 Breadth First Search Procedure
Breadth First Search (BFS) searches breadth-wise in the problem space.
Given a graph G=(V,E) and a distinguished source vertex s, breadth-first
search systematically explores the edges of G to "discover" every vertex that
is reachable from s. It computes the distance (smallest number of edges) from
s to each reachable vertex. It also produces a "breadth-first tree" with root
s that contains all reachable vertices. For any vertex v reachable from s, the
simple path in the breadth-first tree from s to v corresponds to a "shortest
path" from s to v in G, that is, a path containing the smallest number of
edges. The algorithm works on both directed and undirected graphs.
Breadth-first search is like traversing a tree where each node is a state
that may be a potential candidate for a solution. It expands nodes from
the root of the tree and then generates one level of the tree at a time until
a solution is found. It is easily implemented by maintaining a queue of
nodes. Initially the queue contains just the root. In each iteration, the node
at the head of the queue is removed and expanded, and the generated child
nodes are added to the tail of the queue.
Advantages:
• Breadth-first search will never get trapped exploring a useless path forever.

• If there is a solution, BFS will definitely find it.

• If there is more than one solution, BFS can find the minimal one, that is, the one requiring the fewest steps.
Disadvantages:
• The main drawback of breadth-first search is its memory requirement.
Since each level of the tree must be saved in order to generate the
next level, and the amount of memory is proportional to the number
of nodes stored, the space complexity of BFS is O(b^d), where b is the
branching factor and d is the depth of the shallowest solution. As a result,
BFS is severely space-bound in practice and can exhaust the memory
available on typical computers in a matter of minutes.

• If the solution lies far from the root, breadth-first search will consume
a lot of time.

1.1 Problem:
Shortest Path Problem

1.2 Solution:
1.2.1 Problem Description:
Breadth-first search finds the distance to each reachable vertex in a graph G
= (V,E) from a given source vertex s ∈ V . Define the shortest-path distance

δ(s,v) from s to v as the minimum number of edges in any path from vertex
s to vertex v; if there is no path from s to v, then δ(s,v)=∞. We call a path
of length δ(s,v) from s to v a shortest path from s to v.

1.2.2 Algorithm:
BFS(G,s)
 1.  for each vertex u ∈ G.V - {s}
 2.      u.color = WHITE
 3.      u.d = ∞
 4.      u.π = NIL
 5.  s.color = GRAY
 6.  s.d = 0
 7.  s.π = NIL
 8.  Q = empty queue
 9.  ENQUEUE(Q,s)
10.  while Q is not empty
11.      u = DEQUEUE(Q)
12.      for each v adjacent to u
13.          if v.color == WHITE
14.              v.color = GRAY
15.              v.d = u.d + 1
16.              v.π = u
17.              ENQUEUE(Q,v)
18.      u.color = BLACK
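
A minimal Python sketch of the same procedure is given below; the adjacency-dictionary representation, the function name and the returned (distance, predecessor) maps are illustrative assumptions, not part of the assignment.

from collections import deque

def bfs(adj, s):
    """Breadth-first search from source s over an adjacency-list dict
    {vertex: [neighbours]}; returns (d, pi) with edge distances and predecessors."""
    d = {s: 0}            # discovered vertices and their distances
    pi = {s: None}        # breadth-first tree predecessors
    q = deque([s])        # GRAY vertices: discovered but not yet expanded
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in d:            # v is WHITE: discover it
                d[v] = d[u] + 1
                pi[v] = u
                q.append(v)
    return d, pi

# Example usage on a small undirected graph
graph = {'s': ['a', 'b'], 'a': ['s', 'c'], 'b': ['s', 'c'], 'c': ['a', 'b']}
print(bfs(graph, 's')[0])    # {'s': 0, 'a': 1, 'b': 1, 'c': 2}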

1.2.3 Time complexity:


The operations of enqueuing and dequeuing take O(1) time, so the total time
devoted to queue operations is O(V). Since the sum of the lengths of all
the adjacency lists is Θ(E), the total time spent in scanning adjacency lists
is O(E). The overhead for initialization is O(V), and thus the total running
time of the BFS procedure is O(V+E). Thus, breadth-first search runs in
time linear in the size of the adjacency-list representation of G.

1.2.4 Proof of Correctness:
Lemma 1:
Let G = (V,E) be a directed or undirected graph, and suppose that BFS is
run on G from a given source vertex s∈V. Then upon termination, for each
vertex v ∈V , the value v.d computed by BFS satisfies v.d ≥δ(s,v).
Corollary 1:
Suppose that vertices vi and vj are enqueued during the execution of BFS,
and that vi is enqueued before vj. Then vi.d ≤ vj.d at the time that vj is
enqueued.

Theorem of Correctness:
Let G=(V,E) be a directed or undirected graph, and suppose that BFS is
run on G from a given source vertex s ∈ V . Then, during its execution, BFS
discovers every vertex v∈V that is reachable from the source s, and upon
termination, v.d = δ(s,v) for all v ∈ V. Moreover, for any vertex v ≠ s that is
reachable from s, one of the shortest paths from s to v is a shortest path from
s to v.π followed by the edge (v.π,v).
Proof:
Assume, for the purpose of contradiction, that some vertex receives a d value
not equal to its shortest-path distance. Let v be the vertex with minimum
δ(s,v) that receives such an incorrect d value; clearly v ≠ s. By Lemma 1,
v.d ≥ δ(s,v), and thus we have that v.d > δ(s,v). Vertex v must be reachable
from s, for if it is not, then δ(s,v) = ∞ ≥ v.d. Let u be the vertex immediately
preceding v on a shortest path from s to v, so that δ(s,v) = δ(s,u) + 1. Because
δ(s,u) < δ(s,v) and because of how we chose v, we have u.d = δ(s,u).
Putting these properties together, we have
v.d > δ(s,v) = δ(s,u) + 1 = u.d + 1.    (a)
Now consider the time when BFS chooses to dequeue vertex u from Q in
line 11. At this time, vertex v is either white, gray, or black. We shall show
that in each of these cases, we derive a contradiction to the inequality (a).
If v is white, then line 15 sets v.d= u.d+1, contradicting inequality (a). If
v is black, then it was already removed from the queue and, by Corollary
1, we have v.d ≤u.d, again contradicting inequality (a). If v is gray, then it
was painted gray upon dequeuing some vertex w, which was removed from Q
earlier than u and for which v.d=w.d+1. By Corollary 1, however, w.d ≤u.d,
and so we have v.d = w.d + 1 ≤ u.d + 1, once again contradicting inequality (a).
Thus we conclude that v.d=δ(s,v) for all v∈V . All vertices v reachable from
s must be discovered, for otherwise they would have ∞=v.d>δ(s,v). To con-
clude the proof of the theorem, observe that if v.π=u, then v.d= u.d+1.
Thus, we can obtain a shortest path from s to v by taking a shortest path
from s to v.π and then traversing the edge (v.π,v).

1.2.5 Real Time Application of Breadth First Search:


GPS navigation systems: Navigation systems such as Google Maps, which
give directions to travel from one place to another, can use BFS. They
take your location to be the source node and your destination to be the desti-
nation node on the graph. (A city can be represented as a graph by taking
landmarks as nodes and the roads that connect them as edges.) BFS is applied
and the shortest route is generated, which is used to give directions or
real-time navigation.

2 Depth First Search Procedure

Depth-first search explores edges out of the most recently discovered ver-
tex v that still has unexplored edges leaving it. Once all of v’s edges have
been explored, the search backtracks to explore edges leaving the vertex from
which v was discovered. This process continues until we have discovered all
the vertices that are reachable from the original source vertex. If any undis-
covered vertices remain, then depth-first search selects one of them as a new
source, and it repeats the search from that source. The algorithm repeats
this entire process until it has discovered every vertex.

2.1 Problem:
Given a graph G=(V,E), find all the vertices reachable from each u ∈ V and
paint them BLACK (initially all vertices are WHITE).

2.2 Solution:
2.2.1 Problem Description:
Let G = (V, E) be an undirected connected graph, and let s ∈ V be some
node of G. If we start a depth first search (DFS) at s, then this DFS visits all
nodes of G. Each edge e ∈ E can exhibit one of two possible characteristics:

• The DFS visits a so far unvisited node w using the edge e = {v, w};
in this case, we call e a tree edge which is considered being directed
(downwards) from v to w.

• The DFS considers the edge e but the node on the other side of the
edge is already visited; in this case, we call e a back edge.

2.2.2 Algorithm:
DFS(G)
 1.  for each vertex u ∈ G.V
 2.      u.color = WHITE
 3.      u.π = NIL
 4.  time = 0
 5.  for each vertex u ∈ G.V
 6.      if u.color == WHITE
 7.          DFS-VISIT(G,u)

DFS-VISIT(G,u)
 1.  u.color = GRAY
 2.  time = time + 1
 3.  u.d = time
 4.  for each vertex v ∈ G.Adj[u]
 5.      if v.color == WHITE
 6.          v.π = u
 7.          DFS-VISIT(G,v)
 8.  u.color = BLACK
 9.  time = time + 1
10.  u.f = time
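
A compact recursive Python sketch of DFS and DFS-VISIT follows; the adjacency-dictionary input and the returned discovery/finishing-time maps are assumptions made for illustration.

def dfs(adj):
    """Depth-first search over an adjacency-list dict {vertex: [neighbours]};
    returns discovery times d, finishing times f and predecessors pi."""
    color = {u: 'WHITE' for u in adj}
    d, f, pi = {}, {}, {u: None for u in adj}
    time = 0

    def visit(u):                      # corresponds to DFS-VISIT(G,u)
        nonlocal time
        color[u] = 'GRAY'
        time += 1
        d[u] = time                    # discovery time
        for v in adj.get(u, []):
            if color.get(v, 'WHITE') == 'WHITE':
                pi[v] = u
                visit(v)
        color[u] = 'BLACK'
        time += 1
        f[u] = time                    # finishing time

    for u in adj:                      # restart from every undiscovered vertex
        if color[u] == 'WHITE':
            visit(u)
    return d, f, pi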

2.2.3 Time complexity:


The loops on lines 1-3 and lines 5-7 of DFS take Θ(V) time, exclusive of the
time to execute the calls to DFS-VISIT. As we did for breadth-first search, we
use aggregate analysis. The procedure DFS-VISIT is called exactly once for
each vertex v ∈ V, since the vertex u on which DFS-VISIT is invoked must be
white and the first thing DFS-VISIT does is paint vertex u gray. During an
execution of DFS-VISIT(G,v), the loop on lines 4-7 executes |Adj[v]| times.
Since Σ_{v∈V} |Adj[v]| = Θ(E),
the total cost of executing lines 4-7 of DFS-VISIT is Θ(E). The running time
of DFS is therefore Θ(V+E).

2.2.4 Proof of Correctness:


In order to prove the correctness of this algorithm, we need to prove that when
DFS-VISIT(G,s) terminates, the only nodes DFS-VISIT will have been called
on are nodes on which DFS-VISIT had already been called, plus the nodes
reachable from s. If we prove this for DFS-VISIT(G,s), it holds for DFS-
VISIT(G,u) for each vertex u ∈ G.V.
Theorem 1:
When DFS-VISIT(G,s) is called on node s, no nodes reachable from s will
be WHITE when DFS-VISIT(G,s) and all ancestor calls return.
Proof:
By induction on the distance of nodes from s.
Base case: Consider all nodes at distance 0 from s. This is just s itself. When
DFS-VISIT(G,s) is called, DFS-VISIT(G,s) will color s gray, then black.

Induction hypothesis: Suppose the claim holds for all nodes at distance n
from s.
Inductive step: We'll prove it holds for all nodes at distance n + 1 from s.
Take any node v at distance n + 1 from s; v is adjacent to some node u at
distance n from s. By our IH, u will not be WHITE when DFS-VISIT(G,s)
and its ancestor calls return, so DFS-VISIT(G,u) must have been called at
some point. This call must have called DFS-VISIT on each of u’s WHITE
neighbors. If v was WHITE at this time, DFS-VISIT(G,v) must have been
called on v, coloring v GRAY and then BLACK. Otherwise, v was already
not colored WHITE. Since our choice of v was arbitrary, no nodes at distance
n + 1 will be WHITE when DFS-VISIT(G,s) and its ancestor calls return,
completing the induction.
Theorem 2:
When DFS-VISIT(G,s) is called on a node s, no recursive calls will be made
on nodes not reachable from s.
Proof:
By contradiction; assume a recursive call is made on at least one node not
reachable from s. There must be a first node visited this way; call it v. v
can’t be s, since s is trivially reachable from itself. Thus DFS-VISIT(G,v)
must have been recursively invoked by DFS-VISIT(G,u) for some node u ≠ v.
This means the edge (u, v) must exist.
Now, we consider two cases:
Case 1: u is reachable from s. But then v is reachable from s, because we can
take the path from s to u and follow edge (u, v), which makes v reachable
from s.
Case 2: u is not reachable from s. But then v was not the first node not
reachable from s to have DFS-VISIT called on it.
In either case, we reach a contradiction, so our assumption was wrong. Thus
DFS-VISIT(G,s) never makes recursive calls on nodes not reachable from s.

Taken together, the two theorems we have proven show the following:

• When DFS-VISIT(G,s) terminates, every node reachable from s will
have had DFS-VISIT called on it, though the call to DFS-VISIT(G,s)
might not have initiated those other calls.

• When DFS-VISIT(G,s) terminates, it will never have called DFS-VISIT
on a node not reachable from s.

Thus, when DFS-VISIT(G,s) terminates, the only nodes DFS-VISIT will have
been called on are nodes on which DFS-VISIT had already been called, plus
the nodes reachable from s; hence the claim holds for DFS-VISIT(G,u) for
each vertex u ∈ G.V.

2.2.5 Real Time Application of Depth First Search:


Path finding: We can specialize the DFS algorithm to find a path between
two given vertices u and z, as sketched in the code after the following steps.

1. Call DFS(G, u) with u as the start vertex.

2. Use a stack S to keep track of the path between the start vertex and
the current vertex.

3. As soon as the destination vertex z is encountered, return the path as the
contents of the stack.
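
A small Python sketch of this specialization is shown below; it uses the recursion stack in place of an explicit stack S, and the graph representation and function name are illustrative assumptions.

def dfs_path(adj, u, z, visited=None):
    """Return a list of vertices forming some path from u to z, or None.
    The call stack plays the role of the stack S in the steps above."""
    if visited is None:
        visited = set()
    visited.add(u)
    if u == z:
        return [u]
    for v in adj.get(u, []):
        if v not in visited:
            rest = dfs_path(adj, v, z, visited)
            if rest is not None:
                return [u] + rest       # prepend u to the path found deeper
    return None

graph = {'u': ['a', 'b'], 'a': ['z'], 'b': [], 'z': []}
print(dfs_path(graph, 'u', 'z'))        # ['u', 'a', 'z']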

3 Topological Sorting on a Directed Acyclic Graph

A topological sort of a directed acyclic graph G=(V,E) is a linear ordering
of all its vertices such that if G contains an edge (u,v), then u appears before
v in the ordering. (If the graph contains a cycle, then no linear ordering is
possible.) We can view a topological sort of a graph as an ordering of its
vertices along a horizontal line so that all directed edges go from left to right.

3.1 Problem:
Order the vertices of a directed acyclic graph linearly such that for every
directed edge (u,v), vertex u comes before v in the ordering.

3.2 Solution:
3.2.1 Problem Description:
The above problem is basically topological sorting on a directed acyclic
graph. We can modify DFS to find a topological sort of a graph. In DFS,
we start from a vertex, first print it, and then recursively call DFS for its
adjacent vertices. In topological sorting, we use a temporary stack. We don't
print the vertex immediately; we first recursively call topological sorting for
all its adjacent vertices, then push the vertex onto a stack. Finally, we print
the contents of the stack. Note that a vertex is pushed onto the stack only
when all of its adjacent vertices (and their adjacent vertices, and so on) are
already in the stack.

3.2.2 Algorithm:
TOPOLOGICAL-SORT(G)

1. Compute the Depth First Search algorithm on the directed acyclic graph G to compute finishing times v.f for each vertex

2. As each vertex is finished, insert it onto a stack

3. Return the stack of vertices
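
A short Python sketch of this DFS-based procedure follows; the adjacency-dictionary input (edges pointing from a task to the tasks that depend on it) and the function name are assumptions for illustration.

def topological_sort(adj):
    """Return the vertices of a DAG in topologically sorted order."""
    visited = set()
    order = []                       # acts as the stack of finished vertices

    def visit(u):
        visited.add(u)
        for v in adj.get(u, []):
            if v not in visited:
                visit(v)
        order.append(u)              # push u only after all its descendants

    for u in adj:
        if u not in visited:
            visit(u)
    return order[::-1]               # reverse order of finishing times

deps = {'compile': ['link'], 'link': ['run'], 'run': [], 'test_data': ['run']}
print(topological_sort(deps))        # ['test_data', 'compile', 'link', 'run']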

3.2.3 Time complexity:


We can perform a topological sort in time Θ(V+E), since depth-first search
takes Θ(V+E) time and it takes O(1) time to insert each of the |V| vertices
onto the stack.

3.2.4 Proof of Correctness:
Lemma:
A directed graph G is acyclic if and only if a depth-first search of G yields
no back edges.
Proof :
To prove the direct part:
Suppose that a depth-first search produces a back edge (u,v). Then vertex v
is an ancestor of vertex u in the depth-first forest. Thus, G contains a path
from v to u, and the back edge (u,v) completes a cycle.
To prove the converse part:
Suppose that G contains a cycle c. We show that a depth-first search of G
yields a back edge. Let v be the first vertex to be discovered in c, and let
(u,v) be the preceding edge in c. At time v.d, the vertices of c form a path
of white vertices from v to u. By the white-path theorem, vertex u becomes
a descendant of v in the depth-first forest. Therefore, (u,v) is a back edge.
(White-path theorem: In a depth-first forest of a (directed or undirected)
graph G=(V,E), vertex v is a descendant of vertex u if and only if at the
time u.d that the search discovers u, there is a path from u to v consisting
entirely of white vertices.)
Theorem:
TOPOLOGICAL-SORT produces a topological sort of the directed acyclic
graph provided as its input.
Proof :
Suppose that DFS is run on a given directed acyclic graph G=(V,E) to
determine finishing times for its vertices. It suffices to show that for any pair
of distinct vertices u,v∈V, if G contains an edge from u to v, then v.f<u.f .
Consider any edge (u,v) explored by the depth-first search procedure. When
this edge is explored, v cannot be gray, since then v would be an ancestor of u
and (u,v) would be a back edge, contradicting the above Lemma. Therefore,
v must be either white or black. If v is white, it becomes a descendant of
u, and so v.f<u.f. If v is black, it has already been finished, so that v.f has
already been set. Because we are still exploring from u, we have yet to assign
a timestamp to u.f , and so once we do, we will have v.f<u.f as well. Thus,
for any edge (u,v) in the directed acyclic graph, we have v.f<u.f , proving
the theorem.

3.2.5 Real Time Application of Topological Sorting for a Directed Acyclic Graph:
Topological sorting is mainly used for scheduling jobs from the given depen-
dencies among jobs. In computer science, applications of this type arise in
instruction scheduling, ordering of formula cell evaluation when recomput-
ing formula values in spreadsheets, logic synthesis, determining the order of
compilation tasks to perform in makefiles, data serialization, and resolving
symbol dependencies in linkers.

4 Minimum Spanning Tree

A minimum spanning tree (MST) or minimum weight spanning tree is
a subset of the edges of a connected, edge-weighted undirected graph that
connects all the vertices together, without any cycles and with the minimum
possible total edge weight.
The cost of the spanning tree is the sum of the weights of all the edges in
the tree. A graph can have many spanning trees; a minimum spanning tree is
one whose cost is minimum among all of them, and a graph can also have
more than one minimum spanning tree.
There are two famous algorithms for finding the Minimum Spanning Tree:

1. Kruskal’s Algorithm

2. Prim’s algorithm

4.1 Problem I:
Find a minimum spanning tree of a graph G=(V,E) using Kruskal's algorithm.

4.2 Solution I:
4.2.1 Problem Description:
Kruskal’s Algorithm uses a disjoint-set data structure to maintain several
disjoint sets of elements. Each set contains the vertices in one tree of the
current forest. The operation MAKE-SET(v) creates a new tree whose only
member is v. The operation FIND-SET(u) returns a representative element
from the set that contains u. Thus, we can determine whether two vertices
u and v belong to the same tree by testing whether FIND-SET(u) equals
FIND-SET(v). To combine trees, Kruskal’s algorithm calls the UNION pro-
cedure. Lines 1-3 initialize the set A to the empty set and create |V| trees, one
containing each vertex. The for loop in lines 5-8 examines edges in order of
weight, from lowest to highest. The loop checks, for each edge (u,v), whether
the endpoints u and v belong to the same tree. If they do, then the edge
(u,v) cannot be added to the forest without creating a cycle, and the edge
is discarded. Otherwise, the two vertices belong to different trees. In this
case, line 7 adds the edge (u,v) to A, and line 8 merges the vertices in the
two trees.
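
To make these operations concrete, here is a small Python sketch of a disjoint-set (union-find) structure with path compression and union by rank; the class and method names are assumptions made for this write-up.

class DisjointSet:
    """Disjoint-set forest supporting MAKE-SET, FIND-SET and UNION."""
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def make_set(self, v):                 # MAKE-SET(v)
        self.parent[v] = v
        self.rank[v] = 0

    def find_set(self, v):                 # FIND-SET(v) with path compression
        if self.parent[v] != v:
            self.parent[v] = self.find_set(self.parent[v])
        return self.parent[v]

    def union(self, u, v):                 # UNION(u,v) by rank
        ru, rv = self.find_set(u), self.find_set(v)
        if ru == rv:
            return
        if self.rank[ru] < self.rank[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru
        if self.rank[ru] == self.rank[rv]:
            self.rank[ru] += 1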

4.2.2 Algorithm:
MST-KRUSKAL(G,w)
 1.  A = ∅
 2.  for each vertex v ∈ G.V
 3.      MAKE-SET(v)
 4.  sort the edges of G.E into nondecreasing order by weight w
 5.  for each edge (u,v) ∈ G.E, taken in nondecreasing order by weight
 6.      if FIND-SET(u) ≠ FIND-SET(v)
 7.          A = A ∪ {(u,v)}
 8.          UNION(u,v)
 9.  return A
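
The following self-contained Python sketch mirrors MST-KRUSKAL over an edge list of (weight, u, v) tuples; it inlines a simplified union-find (in practice the DisjointSet structure sketched above, with union by rank, would be used), and the names and edge representation are assumptions for illustration.

def mst_kruskal(vertices, edges):
    """Kruskal's algorithm; `edges` is a list of (weight, u, v) tuples.
    Returns the list A of MST edges as (u, v, weight)."""
    parent = {v: v for v in vertices}       # MAKE-SET for every vertex

    def find_set(v):                        # FIND-SET with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    A = []
    for w, u, v in sorted(edges):           # nondecreasing order by weight
        ru, rv = find_set(u), find_set(v)
        if ru != rv:                        # endpoints lie in different trees
            A.append((u, v, w))
            parent[rv] = ru                 # UNION the two trees
    return A

edges = [(1, 'a', 'b'), (4, 'b', 'c'), (3, 'a', 'c'), (2, 'c', 'd')]
print(mst_kruskal(['a', 'b', 'c', 'd'], edges))
# [('a', 'b', 1), ('c', 'd', 2), ('a', 'c', 3)]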

4.2.3 Time complexity:


Initializing the set A in line 1 takes O(1) time, and the time to sort the
edges in line 4 is O(E lg E). The for loop of lines 2-3 performs |V| MAKE-SET
operations, taking O(V) time, and the for loop of lines 5-8 performs O(E)
FIND-SET and UNION operations. Thus the total time is O(1) + O(E lg E) +
O(V) plus the cost of the disjoint-set operations, and the sorting step
dominates. Since |E| < |V|², we have lg|E| = O(lg V), so we can restate
the running time of Kruskal's algorithm as O(E lg V).

4.2.4 Proof of Correctness:


We need to prove that Kruskal's algorithm finds a minimum spanning tree.
Proof:
We prove by induction that A is always a subgraph of some minimum
spanning tree.
Basis case: This is obviously true at the beginning, since A is empty.
Induction hypothesis: Suppose it is true at some point in our algorithm: A
is currently a subgraph of some MST M, and Kruskal's algorithm adds an
edge e.
Inductive step: We want to show that there is some minimum spanning
tree M' such that A ∪ {e} is a subgraph of M'. If e ∈ M then this is trivially
true: we set M' = M, and since A ⊆ M and e ∈ M we know that A ∪ {e} ⊆
M. Otherwise, consider what happens when we add e to M: this creates a
cycle. Since Kruskal's algorithm added e, adding e to A did not create any
cycles, so the two endpoints of e must be in different trees of A. Following
this cycle, there must be some other edge e' whose two endpoints also lie in
different trees of A (not necessarily the same two trees as e). This means
that Kruskal's algorithm could have added e', but instead chose to add e, so
w(e) ≤ w(e'). So if we remove e' from M and add e to get a new tree M', we
know that the weight of M' is at most the weight of M; M' is thus a minimum
spanning tree, and A ∪ {e} is a subgraph of M'. This maintains the induction.
Hence Kruskal's algorithm always finds a minimum spanning tree.

4.3 Problem II:
Find a minimum spanning tree of a graph G=(V,E) using Prim's algorithm.

4.4 Solution II:


4.4.1 Problem Description:
Prim's algorithm has the property that the edges in the set A always form a
single tree. Prim's algorithm also uses a greedy approach to find the minimum
spanning tree. In Prim's algorithm we grow the spanning tree from a starting
position: unlike Kruskal's, which adds an edge at each step, Prim's adds a
vertex to the growing spanning tree.
We start with an arbitrary node u and include it in the tree T. We then find
the lowest-weight edge incident on u and add it (with its other endpoint) to T.
We then repeat, always adding the minimum-weight edge that has exactly one
endpoint in T. Slightly more formally, Prim's algorithm is the following:
1. Pick some arbitrary start node u as the initial tree T. Initialize Q as the
set of all remaining vertices of the graph G.
2. Repeatedly add the lowest-weight edge incident to T (the lowest-weight
edge that has exactly one vertex in T and one vertex in Q), moving its
other endpoint from Q into T, until T spans all the nodes.
In the pseudocode below, Q is kept as a min-priority queue keyed by v.key,
the weight of the lightest edge connecting v to the tree; EXTRACT-MIN(Q)
deletes the element from Q whose key is minimum, returning a pointer to the
element.

4.4.2 Algorithm:
MST-PRIM(G,w,r)
 1.  for each u ∈ G.V
 2.      u.key = ∞
 3.      u.π = NIL
 4.  r.key = 0
 5.  Q = G.V
 6.  while Q is not empty
 7.      u = EXTRACT-MIN(Q)
 8.      for each v ∈ G.Adj[u]
 9.          if v ∈ Q and w(u,v) < v.key
10.              v.π = u
11.              v.key = w(u,v)
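
A Python sketch of MST-PRIM using heapq as the min-priority queue is given below; since heapq has no DECREASE-KEY operation, the sketch pushes duplicate entries and skips stale ones, a common workaround. The weighted adjacency-dictionary representation is an assumption.

import heapq

def mst_prim(adj, r):
    """Prim's algorithm from root r; `adj` maps u -> list of (v, weight).
    Returns the MST as a list of (parent, vertex, weight) edges."""
    in_tree = {r}
    mst = []
    # Heap entries are (key, parent, vertex); duplicates stand in for DECREASE-KEY.
    heap = [(w, r, v) for v, w in adj[r]]
    heapq.heapify(heap)
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)       # EXTRACT-MIN
        if v in in_tree:
            continue                        # stale entry: v already in the tree
        in_tree.add(v)
        mst.append((u, v, w))
        for x, wx in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (wx, v, x))
    return mst

adj = {'a': [('b', 1), ('c', 3)], 'b': [('a', 1), ('c', 4)],
       'c': [('a', 3), ('b', 4), ('d', 2)], 'd': [('c', 2)]}
print(mst_prim(adj, 'a'))    # [('a', 'b', 1), ('a', 'c', 3), ('c', 'd', 2)]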

4.4.3 Time complexity:
Lines 1-5 take O(V) time. The body of the while loop executes |V| times,
and since each EXTRACT-MIN operation takes O(lg V) time with a binary
min-heap, the total time for all calls to EXTRACT-MIN is O(V lg V). The for
loop in lines 8-11 executes O(E) times in total, and the assignment to v.key
in line 11 involves an implicit DECREASE-KEY operation, which takes
O(lg V) time. Thus, the total time for Prim's algorithm is
O(V lg V + E lg V) = O(E lg V).

4.4.4 Proof of Correctness:


We need to prove that Prim's algorithm finds a minimum spanning tree.
Proof:
We will prove this by induction. The induction hypothesis is that after
each iteration, the tree T built so far is a subgraph of some minimum spanning
tree M.
Basis case: This is trivially true at the start, since initially T is just a single
node and no edges.
Induction hypothesis: Now suppose that at some point in the algorithm we
have a tree T which is a subgraph of M, and Prim's algorithm tells us to add
the edge e.
Inductive step: We need to prove that T ∪ {e} is also a subtree of some
minimum spanning tree. If e ∈ M then this is clearly true, since by induction
T is a subtree of M and e ∈ M, and thus T ∪ {e} is a subtree of M. So suppose
that e ∉ M. Consider what happens when we add e to M: this creates a cycle.
Since e has one endpoint in T and one endpoint not in T (since Prim's
algorithm is adding it), there has to be some other edge e' in this cycle that
has exactly one endpoint in T. So Prim's algorithm could have added e' but
instead chose to add e, which means that w(e) ≤ w(e'). So if we add e to M
and remove e', we are left with a new tree M' whose total weight is at most
the weight of M, and which contains T ∪ {e}. This maintains the induction.
Hence Prim's algorithm always finds a minimum spanning tree.

4.5 Real Time Application of Minimum Spanning Tree:


The minimum spanning tree has direct applications in the design of networks. It
is used in algorithms approximating the travelling salesman problem, the multi-
terminal minimum cut problem and minimum-cost weighted perfect match-
ing. Other practical applications are:

1. Cluster Analysis

2. Handwriting recognition

3. Image segmentation

5 Single Source Shortest Path
In the single-source shortest-paths problem, given a weighted graph G = (V,E),
we want to find a shortest path from a given source vertex s ∈ V to each
vertex v ∈ V.

5.1 Problem:
Solve the single-source shortest-paths problem using Dijkstra's algorithm.

5.2 Solution:
5.2.1 Problem Description:
Dijkstra's algorithm solves the single-source shortest-paths problem on a
weighted, directed graph G=(V,E) for the case in which all edge weights are
nonnegative. Dijkstra's algorithm maintains a set S of vertices whose final
shortest-path weights from the source s have already been determined. The
algorithm repeatedly selects the vertex u∈V-S with the minimum shortest-
path estimate, adds u to S, and relaxes all edges leaving u. In the following
implementation, we use a min-priority queue Q of vertices, keyed by their d
values.

5.2.2 Algorithm:
DIJKSTRA(G,w,s)
 1.  INITIALIZE-SINGLE-SOURCE(G,s)
 2.  S = ∅
 3.  Q = G.V
 4.  while Q is not empty
 5.      u = EXTRACT-MIN(Q)
 6.      S = S ∪ {u}
 7.      for each vertex v ∈ G.Adj[u]
 8.          RELAX(u,v,w)

RELAX(u,v,w)
 1.  if v.d > u.d + w(u,v)
 2.      v.d = u.d + w(u,v)
 3.      v.π = u

INITIALIZE-SINGLE-SOURCE(G,s)
 1.  for each vertex v ∈ G.V
 2.      v.d = ∞
 3.      v.π = NIL
 4.  s.d = 0
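
The Python sketch below implements the same idea with heapq; as with the Prim sketch earlier, stale heap entries are skipped instead of performing a DECREASE-KEY. The adjacency representation (every vertex appears as a key) and the function name are assumptions.

import heapq

def dijkstra(adj, s):
    """Single-source shortest paths with nonnegative weights.
    `adj` maps u -> list of (v, weight); returns (d, pi)."""
    d = {u: float('inf') for u in adj}    # INITIALIZE-SINGLE-SOURCE
    pi = {u: None for u in adj}
    d[s] = 0
    done = set()                          # the set S of finished vertices
    heap = [(0, s)]                       # min-priority queue keyed by d
    while heap:
        du, u = heapq.heappop(heap)       # EXTRACT-MIN
        if u in done:
            continue                      # stale entry
        done.add(u)
        for v, w in adj[u]:
            if d[u] + w < d[v]:           # RELAX(u, v, w)
                d[v] = d[u] + w
                pi[v] = u
                heapq.heappush(heap, (d[v], v))
    return d, pi

adj = {'s': [('a', 2), ('b', 5)], 'a': [('b', 1)], 'b': []}
print(dijkstra(adj, 's')[0])    # {'s': 0, 'a': 2, 'b': 3}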

5.2.3 Time complexity:


The time complexity of Dijkstra's algorithm is O(V²) with a simple array
implementation of the min-priority queue; with a binary min-heap it drops to
O((V + E) lg V), which is O(E lg V) when every vertex is reachable from the
source.

5.2.4 Proof of Correctness:


Dijkstra's algorithm, run on a weighted, directed graph G=(V,E) with non-
negative weight function w and source vertex s, terminates with u.d = δ(s,u)
for all vertices u ∈ V.
Proof:
We use the following loop invariant: at the start of each iteration of the
while loop of lines 4-8, v.d = δ(s,v) for each vertex v ∈ S. It suffices to show,
for each vertex u ∈ V, that u.d = δ(s,u) at the time when u is added to set
S. Once we show that u.d = δ(s,u), we rely on the upper-bound property to
show that the equality holds at all times thereafter.
Initialization: Initially, S = ∅, and so the invariant is trivially true.
Maintenance: We wish to show that in each iteration, u.d = δ(s,u) for the
vertex added to set S. For the purpose of contradiction, let u be the first
vertex for which u.d ≠ δ(s,u) when it is added to set S. We shall focus our
attention on the situation at the beginning of the iteration of the while loop
in which u is added to S and derive the contradiction that u.d = δ(s,u) at
that time by examining a shortest path from s to u. We must have u ≠ s
because s is the first vertex added to set S and s.d = δ(s,s) = 0 at that time.
Because u ≠ s, we also have that S ≠ ∅ just before u is added to S. There
must be some path from s to u, for otherwise u.d = δ(s,u) = ∞ by the no-path
property, which would violate our assumption that u.d ≠ δ(s,u). Because
there is at least one path, there is a shortest path p from s to u. Prior to
adding u to S, path p connects a vertex in S, namely s, to a vertex in V-S,
namely u. Let us consider the first vertex y along p such that y ∈ V-S, and
let x ∈ S be y's predecessor along p, so that p decomposes into a subpath p1
from s to x, the edge (x,y), and a subpath p2 from y to u. (Either of paths
p1 or p2 may have no edges.)
We claim that y.d = δ(s,y) when u is added to S. To prove this claim, observe
that x ∈ S. Then, because we chose u as the first vertex for which u.d ≠ δ(s,u)
when it is added to S, we had x.d = δ(s,x) when x was added to S. Edge (x,y)
was relaxed at that time, and the claim follows from the convergence prop-
erty.
We can now obtain a contradiction to prove that u.d = δ(s,u). Because y
appears before u on a shortest path from s to u and all edge weights are
nonnegative (notably those on path p2), we have δ(s,y) ≤ δ(s,u), and thus
y.d = δ(s,y) ≤ δ(s,u) ≤ u.d (by the upper-bound property).    (a)
But because both vertices u and y were in V-S when u was chosen in line
5, we have u.d ≤ y.d. Thus, the two inequalities in (a) are in fact equalities,
giving y.d = δ(s,y) = δ(s,u) = u.d. Consequently, u.d = δ(s,u), which contradicts
our choice of u. We conclude that u.d = δ(s,u) when u is added to S, and that
this equality is maintained at all times thereafter.
Termination: At termination, Q = ∅ which, along with our earlier invariant
that Q = V-S, implies that S = V. Thus, u.d = δ(s,u) for all vertices u ∈ V.

5.2.5 Real Time Application of Dijkstra’s algorithm:


Dijkstra's algorithm finds shortest paths in a graph and hence is used in many
fields, including computer networking (routing systems). It is also applied in
Google Maps to find the shortest possible path from one location to another.
In biology it is used in network models of the spread of an infectious disease.

