
UNIT IV

1. What are the issues in the design of a code generator? Explain in detail.

The design of a code generator depends on the target language and the operating system, but the following issues are inherent in all code-generation problems:
- Input to the code generator
- Target programs
- Instruction selection
- Memory management
- Register allocation
- Evaluation order [16]

Input to the Code Generator

The input to the code generator is the intermediate representation of the source program, together with a symbol table that can provide the run-time addresses of the data objects. The intermediate representation may be:
- Postfix notation
- Three-address representation
- Stack machine code
- Syntax trees
- DAGs

Target code
The output of the code generator is the target program, which may be absolute machine language, relocatable machine language, or assembly language.
- Absolute machine language can be placed in a fixed location in memory and immediately executed (e.g., PL/C).
- Relocatable machine language allows subprograms to be compiled separately; a set of relocatable object modules can be linked together and loaded for execution by a linker.
- Producing assembly language makes the process of code generation somewhat easier.

Instruction Selection
The nature of the instruction set of the target machine determines the difficulty of instruction selection. Uniformity and completeness of the instruction set are important factors; instruction speeds also matter. For example, x = y + z might be translated as:

    MOV y, R0
    ADD z, R0
    MOV R0, x

The quality of the generated code is determined by its speed and size, and the cost difference between alternative implementations may be significant. For a = a + 1:

    MOV a, R0
    ADD #1, R0
    MOV R0, a

If the target machine has an increment instruction (INC), we can instead write the single instruction INC a.

Memory Management
Mapping names in the source program to addresses of data objects in run-time memory is done cooperatively by the front end and the code generator. As machine code is generated, labels in three-address statements have to be converted to addresses of instructions.

Register allocation
Instructions involving registers are shorter and faster than those involving memory operands. There are two sub-problems:
- Register allocation: select the set of variables that will reside in registers at a point in the program.
- Register assignment: pick the specific register in which each such variable will reside.
Certain machines require register pairs for some operands and results.

E.g., on IBM System/370 machines, integer multiplication and division involve register pairs.

Evaluation order
The order in which computations are performed can affect the efficiency of the target code. Some computation orders require fewer registers to hold intermediate results than others.
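The effect of evaluation order on register demand can be made concrete with the classical Sethi-Ullman labelling. The following is a small sketch (not from the text; the tree encoding is illustrative) in which the label of a node is the minimum number of registers needed to evaluate its subtree without spilling to memory.

```python
# Sethi-Ullman labelling sketch: a leaf counts 1 as a left child and 0
# as a right child (a right-hand leaf can be used directly as an
# operand). Trees are tuples (op, left, right) or leaf name strings.

def label(node, is_left=True):
    if isinstance(node, str):                 # a leaf: variable or constant
        return 1 if is_left else 0
    _, left, right = node
    l = label(left, True)
    r = label(right, False)
    # Equal demands force one subtree's result to be held in a register
    # while the other is evaluated; unequal demands let the heavier
    # subtree be evaluated first at no extra cost.
    return l + 1 if l == r else max(l, r)

# (a+b) * (c-d): each operand subtree needs one register, the product two.
print(label(("*", ("+", "a", "b"), ("-", "c", "d"))))  # -> 2
```

Evaluating the subtree with the larger label first is exactly the "good" evaluation order the text refers to.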

2. Discuss the run-time storage management of a code generator.


RUN-TIME STORAGE MANAGEMENT

Information needed during an execution of a procedure is kept in a block of storage called an activation record; names local to the procedure are also stored in the activation record. We discuss what code to generate to manage activation records at run time. Two standard storage-allocation strategies are presented:
- Static allocation: the position of an activation record in memory is fixed at compile time.
- Stack allocation: a new activation record is pushed onto the stack for each execution of a procedure.

In static allocation the position of an activation record in memory is fixed at compile time. Consider the code needed to implement static allocation. Run-time allocation and deallocation of activation records occurs as part of the procedure call and return sequences, built from the intermediate statements call, return, halt, and action (a placeholder for other statements). Run-time memory is divided into areas for code, static data, and the stack.

Static allocation:
A call statement in intermediate code is implemented by a sequence of two target-machine instructions:

    MOV #here + 20, callee.static_area   /* save the return address */
    GOTO callee.code_area                /* transfer control to the target code for the called procedure */

The attributes callee.static_area and callee.code_area are constants referring to the address of the activation record and of the first instruction for the called procedure. For example:

    /* code for c */
    100: ACTION1
    120: MOV #140, 364    /* save return address 140 */
    132: GOTO 200         /* call p */
    140: ACTION2
    160: HALT
    /* code for p */
    200: ACTION3
    220: GOTO *364        /* return to address saved in location 364 */
    /* 300-363 hold the activation record for c */
    300:                  /* return address */
    304:                  /* local data for c */
    /* 364-451 hold the activation record for p */
    364:                  /* return address */
    368:                  /* local data for p */

Stack Allocation
The position of the record for an activation of a procedure is not known until run time. This position is usually stored in a register called SP (stack pointer), which points to the beginning of the activation record on top of the stack. When a procedure call occurs, the calling procedure increments SP and transfers control to the called procedure. After control returns to the caller, it decrements SP, thereby deallocating the activation record of the called procedure.

The code for the first procedure initializes the stack by setting SP to the start of the stack area in memory:

    MOV #stackstart, SP
    ...code for the first procedure...
    HALT

A procedure call sequence increments SP, saves the return address, and transfers control to the called procedure:

    ADD #caller.recordsize, SP   /* increment SP; recordsize is the size of the activation record */
    MOV #here + 16, *SP          /* save return address */
    GOTO callee.code_area

The return sequence has two parts. First, the called procedure transfers control to the return address using

    GOTO *0(SP)

The second part of the return sequence is in the caller, which decrements SP, thereby restoring SP to its previous value:

    SUB #caller.recordsize, SP

Stack allocation example:

Three-address code:

    /* code for s */
    action1
    call q
    action2
    halt
    /* code for p */
    action3
    return
    /* code for q */
    action4
    call p
    action5
    call q
    action6
    call q
    return

Target code:

    /* code for s */
    100: MOV #600, SP     /* initialize the stack */
    108: ACTION1
    128: ADD #ssize, SP   /* call sequence begins */
    136: MOV #152, *SP    /* push return address */
    144: GOTO 300         /* call q */
    152: SUB #ssize, SP   /* restore SP */
    160: ACTION2
    180: HALT
    /* code for p */
    200: ACTION3
    220: GOTO *0(SP)      /* return */
    /* code for q */
    300: ACTION4
    320: ADD #qsize, SP
    328: MOV #344, *SP
    336: GOTO 200         /* call p */
    344: SUB #qsize, SP
    352: ACTION5
    372: ADD #qsize, SP
    380: MOV #396, *SP
    388: GOTO 300         /* call q */
    396: SUB #qsize, SP
    404: ACTION6
    424: ADD #qsize, SP
    432: MOV #448, *SP
    440: GOTO 300         /* call q */
    448: SUB #qsize, SP
    456: GOTO *0(SP)      /* return */

    600:                  /* stack starts here */

3. a. Explain transformations on basic blocks.

Basic Blocks and Flow Graphs

A flow graph is a graph representation of three-address statements:
- Nodes in the flow graph represent computations.
- Edges represent the flow of control.


Basic block: a basic block is a sequence of consecutive statements in which flow of control enters at the beginning and leaves at the end, without halting or the possibility of branching except at the end. The following sequence of three-address statements forms a basic block:

    t1 := a*a
    t2 := a*b
    t3 := 2*t2
    t4 := t1+t3
    t5 := b*b
    t6 := t4+t5

A three-address statement x := y + z is said to define x and to use y and z. A name in a basic block is said to be live at a given point if its value is used after that point in the program, perhaps in another basic block.


Partition into basic blocks

Input: a sequence of three-address statements.
Output: a sequence of basic blocks.
Method: we first determine the leaders:
1. The first statement is a leader.
2. Any statement that is the target of a conditional or unconditional goto is a leader.
3. Any statement that immediately follows a conditional or unconditional goto statement is a leader.
For each leader, its basic block consists of the leader and all statements up to, but not including, the next leader or the end of the program.

Example: consider the following fragment of source code, which computes the dot product of two vectors a and b of length 20:

    begin
        prod := 0;
        i := 1;
        do begin
            prod := prod + a[i]*b[i];
            i := i + 1;
        end
        while i <= 20
    end [06]
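The leader rules can be put in executable form. The sketch below is illustrative (not from the text): statements are encoded as dictionaries, and jump targets are assumed to be given as 0-based statement indices.

```python
def basic_blocks(code):
    """code: list of three-address statements; a statement that jumps
    carries its target as a 0-based index under the key "goto".
    Returns the list of basic blocks (each a slice of the program)."""
    leaders = {0}                                   # rule 1: first statement
    for i, stmt in enumerate(code):
        if "goto" in stmt:
            leaders.add(stmt["goto"])               # rule 2: the jump target
            if i + 1 < len(code):
                leaders.add(i + 1)                  # rule 3: the fall-through
    starts = sorted(leaders)
    ends = starts[1:] + [len(code)]
    return [code[s:e] for s, e in zip(starts, ends)]

# The dot-product fragment, flattened into statement records:
prog = [
    {"op": "prod := 0"},
    {"op": "i := 1"},
    {"op": "prod := prod + a[i]*b[i]"},   # loop body starts here
    {"op": "i := i + 1"},
    {"op": "i := i"},                     # placeholder statement
    {"op": "if i <= 20", "goto": 2},
]
blocks = basic_blocks(prog)
print(len(blocks))   # -> 2: statements 0-1, then the loop body 2-5
```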

b. Explain the directed graph (flow graph) of a program.

Flow Graphs

The nodes of the flow graph are the basic blocks. One node is distinguished as initial: it is the block whose leader is the first statement. There is a directed edge from block B1 to block B2 if B2 can immediately follow B1 in some execution sequence, that is, if
1. there is a conditional or unconditional jump from the last statement of B1 to the first statement of B2, or
2. B2 immediately follows B1 in the order of the program, and B1 does not end in an unconditional jump.
We say that B1 is a predecessor of B2, and B2 is a successor of B1.


    t1 := 4*i
    t2 := a[t1]
    t3 := 4*i
    t4 := b[t3]
    t5 := t2*t4
    t6 := prod+t5
    prod := t6
    t7 := i+1
    i := t7
    if i <= 20 goto B2



A loop is a collection of nodes in a flow graph such that:
1. All nodes in the collection are strongly connected; that is, from any node in the loop to any other there is a path of length one or more, wholly within the loop.
2. The collection of nodes has a unique entry: a node in the loop such that the only way to reach a node of the loop from a node outside the loop is to first go through the entry. (In order to reach a node within the loop, one must first go through its entry.)

4. Explain the process of constructing a DAG for basic blocks, with an example.

DAG representation of basic blocks:


Directed acyclic graphs (DAGs) are useful data structures for implementing transformations on basic blocks. A DAG gives a picture of how the value computed by each statement in a basic block is used in subsequent statements of the block. A DAG for a basic block is a directed acyclic graph with the following labels on nodes:
1. Leaves are labeled by unique identifiers, either variable names or constants.
2. Interior nodes are labeled by an operator symbol.
3. Nodes are also optionally given a sequence of identifiers as labels.

Three-address code of basic block B2:

    (1)  t1 := 4*i
    (2)  t2 := a[t1]
    (3)  t3 := 4*i
    (4)  t4 := b[t3]
    (5)  t5 := t2*t4
    (6)  t6 := prod+t5
    (7)  prod := t6
    (8)  t7 := i+1
    (9)  i := t7
    (10) if i <= 20 goto (1)

DAG construction: to construct a DAG for a basic block, we process each statement of the block in turn.

[Figure: DAG for block B2. The root + node carries the labels t6 and prod; its operands are prod0 and the * node labeled t5. The operands of t5 are the two indexing nodes [ ], labeled t2 and t4, which index the leaves a and b with the shared * node labeled t1, t3 (operands 4 and i0). A + node labeled t7 and i adds i0 and 1, and a <= node compares i with 20.]


Algorithm: DAG construction for a basic block

Input: a basic block.
Output: a DAG for the basic block, containing the following information:
1. A label for each node: for leaves, an identifier (constants permitted); for interior nodes, an operator symbol.
2. For each node, a (possibly empty) list of attached identifiers (constants not permitted here).
Method: process each statement of the block in turn, creating leaves for the initial values of names and reusing an existing node whenever an operator is applied to the same children.

Applications of DAGs: arrays, pointers, procedure calls.
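The node-reuse step at the heart of the algorithm can be sketched as follows (an illustrative encoding, not from the text: statements are (dst, op, arg1, arg2) tuples, and reuse is keyed on operator plus child nodes, as in value numbering).

```python
def build_dag(block):
    """Returns (nodes, labels): nodes maps a node id to (label, kids);
    labels maps each name to the node currently holding its value."""
    nodes, index, labels = {}, {}, {}

    def node_for(name):
        if name not in labels:                 # a leaf: initial value of name
            nid = len(nodes)
            nodes[nid] = (name, ())
            labels[name] = nid
        return labels[name]

    for dst, op, a1, a2 in block:
        kids = tuple(node_for(a) for a in (a1, a2) if a is not None)
        key = (op, kids)
        if key not in index:                   # reuse common subexpressions
            index[key] = len(nodes)
            nodes[index[key]] = key
        labels[dst] = index[key]               # attach dst to that node
    return nodes, labels

block = [
    ("t1", "*", "4", "i"),
    ("t3", "*", "4", "i"),    # same computation as t1
]
nodes, labels = build_dag(block)
print(labels["t1"] == labels["t3"])   # -> True: one shared node
```

This is exactly why t1 and t3 label the same * node in the DAG for block B2 above.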

5. a. Explain peephole optimization with an example.

Optimization


It may be possible to restructure the parse tree to reduce its size, or to present a parse tree to the code generator from which it can produce more efficient code. Some optimizations that can be applied to the parse tree are illustrated below using source code rather than the parse tree.

Constant folding:

    I := 4 + J - 5;  -->  I := J - 1;
or
    I := 3; J := I + 2;  -->  I := 3; J := 5;

Loop-constant code motion. From:

    while (COUNT < LIMIT) do
        INPUT SALES;
        VALUE := SALES * (MARK_UP + TAX);
        OUTPUT := VALUE;
        COUNT := COUNT + 1;
    end;

to:

    TEMP := MARK_UP + TAX;
    while (COUNT < LIMIT) do
        INPUT SALES;
        VALUE := SALES * TEMP;
        OUTPUT := VALUE;
        COUNT := COUNT + 1;
    end;

Induction variable elimination: most program time is spent in the bodies of loops, so loop optimization can result in significant performance improvement. Often the induction variable of a for loop is used only within the loop; in this case it may be stored in a register rather than in memory. And when the induction variable is referenced only as an array subscript, it may be initialized to the initial address of the array, incremented by the element size, and used only for address calculation.

From:

    For I := 1 to 10 do A[I] := A[I] + E

to:

    For I := address of first element in A
        to address of last element in A
        increment by size of an element of A
    do A[I] := A[I] + E

Common subexpression elimination. From:

    A := 6 * (B+C);
    D := 3 + 7 * (B+C);
    E := A * (B+C);

to:

    TEMP := B + C;
    A := 6 * TEMP;
    D := 3 + 7 * TEMP;
    E := A * TEMP;

Strength reduction:

    2*x  -->  x + x
    2*x  -->  shift left x

Mathematical identities:

    a*b + a*c  -->  a*(b+c)
    a - b      -->  a + (-b)

We do not illustrate an optimizer in the parser for Simple.

Peephole Optimization

After code generation there are further optimizations that are possible. The code is scanned a few instructions at a time (the peephole), looking for combinations of instructions that may be replaced by more efficient combinations. Typical optimizations performed by a peephole optimizer include copy propagation across register loads and stores, strength reduction in arithmetic operators and memory access, and branch chaining.

A peephole optimizer for Simple (left column: original stack code; right column: after peephole optimization):

    x := x + 1      ld x        ld x
                    inc         inc
                    store x     dup

    y := x + 3      ld x        ld 3
                    ld 3        add
                    add         store y
                    store y

    x := x + z      ld x        ld z
                    ld z        add
                    add         store x
                    store x

The dup keeps the newly computed value of x on the stack, so the following ld x instructions can be deleted.
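The scanning idea itself is a single pass over the instruction list, replacing matched patterns. A minimal sketch (the two rewrite rules and instruction spellings are illustrative, not a complete optimizer):

```python
def peephole(code):
    """One pass over a list of stack-machine instructions, applying two
    rewrites: a repeated 'ld x' becomes 'dup', and 'ld 1; add' becomes
    'inc' (a strength reduction)."""
    out = []
    for instr in code:
        if out and instr.startswith("ld ") and out[-1] == instr:
            out.append("dup")              # reload of the same name -> dup
        elif instr == "add" and out and out[-1] == "ld 1":
            out[-1] = "inc"                # add of constant 1 -> inc
        else:
            out.append(instr)
    return out

print(peephole(["ld x", "ld 1", "add", "store x"]))
# -> ['ld x', 'inc', 'store x']
```

A real peephole optimizer repeats such passes until no pattern matches, since one rewrite can expose another.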

Basic block: a basic block is a sequence of consecutive statements in which flow of control enters at the beginning and leaves at the end without halting or the possibility of branching except at the end. The following sequence of three-address statements forms a basic block:

    t1 := a*a
    t2 := a*b
    t3 := 2*t2
    t4 := t1+t3
    t5 := b*b
    t6 := t4+t5

A three-address statement x := y + z is said to define x and to use y and z.

Algorithm: partition into basic blocks.
Input: a sequence of three-address statements.
Output: a list of basic blocks, with each three-address statement in exactly one block.
Method:
1. We first determine the set of leaders, the first statements of basic blocks. The rules we use are the following:

   i) The first statement is a leader.
   ii) Any statement that is the target of a conditional or unconditional goto is a leader.
   iii) Any statement that immediately follows a goto or conditional goto statement is a leader.
2. For each leader, its basic block consists of the leader and all statements up to, but not including, the next leader or the end of the program.

Transformations on basic blocks

A number of transformations can be applied to a basic block without changing the set of expressions computed by the block.

Structure-preserving transformations. The primary structure-preserving transformations on basic blocks are:
1. Common subexpression elimination
2. Dead-code elimination
3. Renaming of temporary variables
4. Interchange of two independent adjacent statements

Algebraic transformations. Algebraic transformations can be used to change the set of expressions computed by a basic block into an algebraically equivalent set.

b. Explain briefly the target machine concept. [08]

Target Machine

A target machine may be another general-purpose computer, a special-purpose device employing a single-board computer, or any other intelligent device. Usually the target machine is not able to host all the development tools, which is why a host computer is used to complete development work. Debugging is an important issue in cross-platform development: since you are usually not able to execute the binary files on the host machine, they must be run on the target machine, and the debugger running on the host machine has to talk to the program running on the target machine. The GNU debugger is capable of connecting to remote processes. Some CPUs have a JTAG or BDM connector, which can also be used to connect to processes on remote systems.

Optimizing for a Target Machine or Class of Machines

Target-machine options are options that instruct the compiler to generate code for optimal execution on a given processor or architecture family. By default, the compiler generates code that runs on all supported systems, but perhaps suboptimally on a given system. By selecting appropriate target-machine options, you can optimize your application to suit the broadest possible selection of target processors, a range of processors within a given family, or a specific processor. Compiler options such as -qarch, -qtune, and -qcache control optimizations affecting individual aspects of the target machine.

Getting the most out of target-machine options:
Try to specify with -qarch the smallest family of machines that can be expected to run your code reasonably well. -qarch=auto generates code that may take advantage of instructions available only on the compiling machine (or similar machines). The default is -qarch=ppcv. Specifying a -qarch option that is not compatible with your hardware may cause undefined behaviour even if your program appears to work, because the compiler may emit instructions not available on that hardware.

Try to specify with -qtune the machine where performance should be best. Before using the -qcache option, look at the options section of the listing (using -qlist) to see whether the current settings are satisfactory; the settings appear in the listing itself when the -qlistopt option is specified. Modifying the cache geometry may be useful in cases where the systems have configurable L2 or L3 cache options, or where the execution mode reduces the effective size of a shared level of cache. If you decide to use -qcache, use -qhot along with it.


6. Write brief notes on generating code from DAGs.


Code Generation — topics:
- Introduction
- Code generation issues
- Basic blocks and flow graphs
- Next-use information
- Register allocation and assignment
- The DAG representation for basic blocks
- Generating code from expression trees
- Generating code from DAGs

Introduction

Typical structure of the compiler:

    source code --> FRONT END --> intermediate code --> CODE OPTIMIZER
                --> intermediate code --> CODE GENERATOR --> target code

Code generation is extremely machine specific, but can often be described in a machine-independent manner (e.g., GNU C). Machines are often idiosyncratic, e.g., in their constraints on register and memory operands.

Code Generation Issues

- The input to the code generator: DAG, expression tree, three-address code, or postfix.
- The target program: absolute binary, relocatable binary, or symbolic assembly (see section 9.2, page 519 of the text for an example).

Instruction selection is affected by:
- orthogonality of the instruction set;
- instruction speeds: a shift by N is faster than a multiply by 2**N, and "clr R0" may be faster than "move 0, R0";
- machine idioms, e.g., incr(x) may be faster than x = x + 1.

Register allocation:
- a register reference is MUCH faster than a memory reference, e.g.

      #ifdef FAST
      register
      #endif
      int i;
      for (i = 0; i < MAX_INT; i++) ;

- instructions using registers may be shorter;
- two-address instructions often require a register for (at least) one operand, especially on RISC machines;
- some instructions require register pairs (multiply and divide), which complicates code generation and optimization.

Choice of expression evaluation order:
- some orders require fewer registers, e.g., a + b * c.

Approaches:
- simple, dumb, small, and fast; or
- data-flow analysis: complicated, smart, big, and slow.

Basic Blocks and Flow Graphs

- A definition of a variable: any assignment to that variable.
- A use of a variable: any reference to the value in that variable.
- A variable is live at a point if its value is used after that point.
- A basic block is a sequence of statements where flow enters the top and leaves the end, i.e., one entry, one exit.
- A flow graph is a graph whose nodes represent computations (typically basic blocks) and whose edges represent flow of control.
- A loop has one unique entry, called the loop header; nodes within the loop are strongly connected, i.e., any node in the loop can potentially reach any other node; an inner loop contains no other loops.
- Programs spend most of their time in loops; therefore, it pays to optimize loops.

Basic block construction: algorithm on page 529. E.g., for

    FOR (result=0, i=1; i<10; i++) result += a[i] * b[i];

    1  result := 0
    2  i := 1
    3  t1 := 4 * i        -- 4 is the element size of a
    4  t2 := a[t1]
    5  t3 := 4 * i        -- 4 is the element size of b
    6  t4 := b[t3]
    7  t5 := t2 * t4
    8  t6 := result + t5
    9  result := t6
    10 t7 := i + 1
    11 i := t7
    12 IF i < 10 GOTO 3

Next-Use Information

- Next-use information is useful for optimization, e.g., it can detect dead code.
- Make a backward pass over each basic block, storing the information in the symbol table. For each instruction i of the form x := y op z:
  - set x to "not live" and "no next use";
  - then set y and z to "live", with next use i.
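The backward pass can be sketched directly. In this illustrative encoding (not from the text) each statement is a (dst, src1, src2) triple of names, and the per-statement snapshot is taken before the table is updated.

```python
def next_use(block):
    """For each statement index, records the (live, next-use) status of
    the names it mentions at that point, computed backwards from the
    end of the block (everything is assumed dead on block exit)."""
    table = {}                          # name -> (live?, next-use index)
    info = [None] * len(block)
    for i in range(len(block) - 1, -1, -1):
        dst, s1, s2 = block[i]
        # Attach the current table entries to statement i first...
        info[i] = {n: table.get(n, (False, None)) for n in (dst, s1, s2)}
        # ...then update: the definition kills dst, the uses revive y and z.
        table[dst] = (False, None)
        table[s1] = (True, i)
        table[s2] = (True, i)
    return info

block = [("t", "a", "b"), ("u", "a", "t")]
info = next_use(block)
print(info[0]["t"])   # -> (True, 1): t is live, next used at statement 1
```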

Register Allocation and Assignment

- Global register allocation: assigning variables to registers across basic-block boundaries; however, this may tie up too many registers.
- Usage counts: sum up the references and allocate registers to the most frequently referenced variables, according to some heuristic such as

      FOR block B IN loop L LOOP
          count += use(x, B) + 2*live(x, B);
      END LOOP;

  The same idea applies to outer loops, but only after the inner loop has been assigned registers.
- Register allocation by graph coloring: a register-interference graph is generated for each procedure; graph coloring then assigns a set of registers to temporaries so as to minimize spilling (dumping the value of a register to memory).
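A greedy colouring of the interference graph illustrates the idea (a sketch only: real allocators order nodes more carefully, e.g., by Chaitin-style simplification, and handle spilling instead of giving up).

```python
def color(interference, k):
    """interference maps each temporary to the set of temporaries that
    are live at the same time. Returns name -> register number, or
    None when k registers do not suffice (a spill would be needed)."""
    assignment = {}
    for name in sorted(interference):        # a fixed, simple ordering
        taken = {assignment[n] for n in interference[name] if n in assignment}
        free = [r for r in range(k) if r not in taken]
        if not free:
            return None                      # would need to spill
        assignment[name] = free[0]           # lowest-numbered free register
    return assignment

graph = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(color(graph, 2))   # -> {'a': 0, 'b': 1, 'c': 0}: a and c share R0
```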

The DAG Representation for Basic Blocks

The DAG representation is not a flow graph:
- it does not contain control-flow information;
- it does contain dependency information, i.e., parents depend on children.

It is similar to DAGs for expression trees:
- leaves are labeled by unique identifiers (variable names or constants) and represent initial values of names;
- interior nodes represent operations;
- nodes may be labeled by identifiers which receive the value computed by that node.

DAGs are useful for:
- detecting common subexpressions;
- providing data-flow information useful for optimizations;
- eliminating useless or dead statements.

Constructing a DAG for

    FOR (result=0, i=1; i<10; i++) result += a[i] * b[i];

    3  t1 := SIZEOF *a * i
    4  t2 := a[t1]
    5  t3 := SIZEOF *b * i
    6  t4 := b[t3]
    7  t5 := t2 * t4
    8  t6 := result + t5
    9  result := t6
    10 t7 := i + 1
    11 i := t7
    12 IF i < 10 GOTO 3

Generating Code from Expression Trees

- Generating code from expression trees produces provably optimal register usage for expression trees that have no common subexpressions.
- Evaluate the operand subtree that uses the most registers first, then the other.
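A simplified sketch of tree code generation follows (it always evaluates the left subtree first, omitting the label-driven reordering of the full algorithm, and assumes the "OP src, dst" machine of the earlier examples, where dst := dst OP src; enough registers are assumed available).

```python
def gencode(node, regs, out):
    """Emit code for an expression tree (tuples (op, left, right) or
    leaf name strings) into the registers in `regs`; the result lands
    in regs[0]. Simplified: no reordering, no spilling."""
    if isinstance(node, str):                     # a leaf
        out.append(f"MOV {node}, {regs[0]}")
        return
    op, left, right = node
    gencode(left, regs, out)                      # left result -> regs[0]
    gencode(right, regs[1:], out)                 # right result -> regs[1]
    out.append(f"{op} {regs[1]}, {regs[0]}")      # regs[0] := regs[0] op regs[1]

out = []
gencode(("ADD", "a", "b"), ["R0", "R1"], out)
print(out)   # -> ['MOV a, R0', 'MOV b, R1', 'ADD R1, R0']
```

The optimal algorithm differs only in step order: it compares the Sethi-Ullman labels of the two subtrees and recurses into the heavier one first, which is what makes the register usage provably minimal.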

Generating code from DAGs

- Rearrange the order to minimize register usage. Since generating optimal code from DAGs is NP-hard, we use heuristics, e.g., a variant of topological sorting.
- Common subexpressions can be treated as separate trees; replace each reference to a subexpression with a temporary containing its value.

Code-Generator Generators

7. Generate code for the following statements for the target machine:
a. x = x + 1
b. x = a + b + c
c. x = a/(b+c) - d*(e+f)

a. x = x + 1

Three-address statements:
    t1 = x + 1
    x = t1

Code:
    MOV x, R0
    ADD #1, R0
    MOV R0, x


b. x = a + b + c

Three-address statements:
    t1 = a + b
    t2 = t1 + c
    x = t2

Code:
    MOV a, R0
    ADD b, R0
    MOV c, R1
    ADD R0, R1
    MOV R1, x

c. x = a/(b+c) - d*(e+f)

Three-address statements:
    t1 = b + c
    t2 = a / t1
    t3 = e + f
    t4 = d * t3
    t5 = t2 - t4
    x = t5

Target code (with the convention OP src, dst meaning dst := dst OP src):
    MOV a, R0     /* R0 = a */
    MOV b, R1
    ADD c, R1     /* R1 = b + c */
    DIV R1, R0    /* R0 = a/(b+c) */
    MOV e, R1
    ADD f, R1     /* R1 = e + f */
    MUL d, R1     /* R1 = d*(e+f) */
    SUB R1, R0    /* R0 = t2 - t4 */
    MOV R0, x

8. Write about the dynamic-programming code-generation algorithm, with a neat diagram and example.

Dynamic programming


The word "programming" in "dynamic programming" has no particular connection to computer programming; it comes from "mathematical programming", a synonym for optimization. The "program" is the optimal plan of action that is produced; for instance, a finalized schedule of events at an exhibition is sometimes called a program. Programming, in this sense, means finding an acceptable plan of action.

Optimal substructure means that optimal solutions of subproblems can be used to find the optimal solution of the overall problem. For example, the shortest path to a goal from a vertex in a graph can be found by first computing the shortest path to the goal from all adjacent vertices, and then using this to pick the best overall path, as shown in Figure 1. In general, we can solve a problem with optimal substructure using a three-step process:
1. Break the problem into smaller subproblems.
2. Solve these subproblems optimally, using this three-step process recursively.
3. Use these optimal solutions to construct an optimal solution for the original problem.
The subproblems are themselves solved by dividing them into sub-subproblems, and so on, until we reach some simple case that is easy to solve. Dynamic programming makes use of:

- Overlapping subproblems
- Optimal substructure

Dynamic programming usually takes one of two approaches:
- Top-down approach: the problem is broken into subproblems, and these subproblems are solved and the solutions remembered, in case they need to be solved again. This is recursion and memoization combined.
- Bottom-up approach: all subproblems that might be needed are solved in advance and then used to build up solutions to larger problems. This approach is slightly better in stack space and number of function calls, but it is sometimes not intuitive to figure out all the subproblems needed for solving the given problem.

Dynamic-programming code-generation algorithm: it consists of three phases.
1. Compute bottom-up, for each node n of the expression tree T, an array C of costs, in which the i-th component C[i] is the optimal cost of computing the subtree S rooted at n into i registers. The zeroth component of the cost vector is the optimal cost of computing the subtree S into memory.
2. Traverse T, using the cost vectors to determine which subtrees of T must be computed into memory.
3. Traverse each tree, using the cost vectors and associated instructions to generate the final target code.
The last two phases can be implemented to run in time linearly proportional to the size of the expression tree.

Fibonacci sequence: a naive implementation of a function finding the nth member of the Fibonacci sequence, based directly on the mathematical definition:

    function fib(n)
        if n = 0 or n = 1
            return 1
        return fib(n - 1) + fib(n - 2)

Notice that if we call, say, fib(5), we produce a call tree that calls the function on the same value many different times:

    fib(5)
    fib(4) + fib(3)
    (fib(3) + fib(2)) + (fib(2) + fib(1))
    ((fib(2) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))
    (((fib(1) + fib(0)) + fib(1)) + (fib(1) + fib(0))) + ((fib(1) + fib(0)) + fib(1))

In particular, fib(2) was calculated twice from scratch. In larger examples many more values of fib, or subproblems, are recalculated, leading to an exponential-time algorithm.

Now suppose we have a simple map object m, which maps each value of fib that has already been calculated to its result, and we modify our function to use and update it. The resulting function requires only O(n) time instead of exponential time:

    var m := map(0 -> 1, 1 -> 1)
    function fib(n)
        if map m does not contain key n
            m[n] := fib(n - 1) + fib(n - 2)
        return m[n]

This technique of saving values that have already been calculated is called memoization; this is the top-down approach, since we first break the problem into subproblems and then calculate and store values. In the bottom-up approach we calculate the smaller values of fib first, then build larger values from them. This method also uses linear (O(n)) time, since it contains a loop that repeats n - 1 times, but it takes only constant (O(1)) space, in contrast to the top-down approach, which requires O(n) space to store the map:

    function fib(n)
        var previousFib := 0, currentFib := 1
        repeat n - 1 times
            var newFib := previousFib + currentFib
            previousFib := currentFib
            currentFib := newFib
        return currentFib

In both these examples, we only calculate fib(2) once, and then use it to calculate both fib(4) and fib(3), instead of computing it every time either of them is evaluated.
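The pseudocode above translates directly to Python; a sketch of both approaches (the standard library lru_cache decorator plays the role of the map m):

```python
from functools import lru_cache

@lru_cache(maxsize=None)               # memoization: the map m
def fib_top_down(n):
    return 1 if n < 2 else fib_top_down(n - 1) + fib_top_down(n - 2)

def fib_bottom_up(n):
    prev, cur = 0, 1                   # O(1) space: only two values kept
    for _ in range(n):
        prev, cur = cur, prev + cur
    return cur

print(fib_top_down(5), fib_bottom_up(5))   # -> 8 8
```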

This problem exhibits optimal substructure: the solution to the entire problem relies on solutions to subproblems. Define a function q(i, j) as:

    q(i, j) = the minimum cost to reach square (i, j)

If we can find the values of this function for all the squares at rank n, we pick the minimum and follow that path backwards to get the shortest path. q(i, j) is equal to the minimum cost to get to any of the three squares below it (since those are the only squares that can reach it), plus c(i, j). For instance:

    function minCost(i, j)
        if j < 1 or j > n
            return infinity
        else if i = 1
            return c(i, j)
        else
            return min(minCost(i-1, j-1), minCost(i-1, j), minCost(i-1, j+1)) + c(i, j)

Note that this function computes only the path cost, not the actual path. Like the Fibonacci example, it is horribly slow, since it recomputes the same shortest paths over and over. We can compute it much faster bottom-up by using a two-dimensional array q[i, j] instead of a function, because then we can choose what values to compute first. To recover the actual path, we also use a predecessor array p[i, j], which says where paths come from. Consider the following code:

    function computeShortestPathArrays()
        for x from 1 to n
            q[1, x] := c(1, x)
        for y from 1 to n
            q[y, 0] := infinity
            q[y, n + 1] := infinity
        for y from 2 to n
            for x from 1 to n
                m := min(q[y-1, x-1], q[y-1, x], q[y-1, x+1])
                q[y, x] := m + c(y, x)
                if m = q[y-1, x-1]
                    p[y, x] := -1
                else if m = q[y-1, x]
                    p[y, x] := 0
                else
                    p[y, x] := 1

Now the rest is a simple matter of finding the minimum and printing it.

    function computeShortestPath()
        computeShortestPathArrays()
        minIndex := 1
        min := q[n, 1]
        for i from 2 to n
            if q[n, i] < min
                minIndex := i
                min := q[n, i]
        printPath(n, minIndex)

    function printPath(y, x)
        print(x)
        print("<-")
        if y = 2
            print(x + p[y, x])
        else
            printPath(y-1, x + p[y, x])

Sequence alignment is an important application where dynamic programming is essential. Typically, the problem consists of transforming one sequence into another using edit operations that replace, insert, or remove an element. Each operation has an associated cost, and the goal is to find the sequence of edits with the lowest total cost. The problem can be stated naturally as a recursion: a sequence A is optimally edited into a sequence B by either
1. inserting the first character of B, and performing an optimal alignment of A and the tail of B;
2. deleting the first character of A, and performing an optimal alignment of the tail of A and B; or
3. replacing the first character of A with the first character of B, and performing optimal alignments of the tails of A and B.
The partial alignments can be tabulated in a matrix, where cell (i, j) contains the cost of the optimal alignment of A[1..i] to B[1..j]. The cost in cell (i, j) can be calculated by adding the cost of the relevant operations to the costs of its neighboring cells and selecting the optimum.
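With unit costs for all three operations, the alignment matrix described above is the classic edit-distance table; a bottom-up sketch:

```python
def edit_distance(a, b):
    """Minimum number of insert/delete/replace operations turning a
    into b (unit costs). Cell d[i][j] holds the cost of aligning
    a[:i] with b[:j], filled bottom-up."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                       # delete everything from a
    for j in range(n + 1):
        d[0][j] = j                       # insert everything from b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            same = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete a[i-1]
                          d[i][j - 1] + 1,         # insert b[j-1]
                          d[i - 1][j - 1] + same)  # replace (or keep)
    return d[m][n]

print(edit_distance("kitten", "sitting"))   # -> 3
```

As with the shortest-path example, keeping a predecessor entry per cell would recover the actual sequence of edits, not just its cost.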

9. Write a code-generation algorithm. Explain the descriptors and the function getreg(). Give an example.


Code generation In computer science, code generation is the process by which a compiler's code generator converts some internal representation of source code into a form (e.g., machine code) that can be readily executed by a machine (often a computer). The input to the code generator typically consists of a parse tree or an abstract syntax tree. The tree is converted into a linear sequence of instructions, usually in an intermediate language such as three address code. Further stages of compilation may or may not be referred to as "code generation", depending on whether they involve a significant change in the representation of the program. (For example, a peephole optimization pass would not likely be called "code generation", although a code generator might incorporate a peephole optimization pass. Code Generation Algorithm the code-generation algorithm takes a sequence of three-address statements constituting a basic block as the input . for each three-address steamiest of the form x:=y opz ,we performs the following actions. 1. invoke a functions getreg to determine the location L where the result of the computations y op z should be stored L will usually be register 2. consult the address descriptor for y to determine y, the current locations of y ,L to place a copy of y in L. 3. generate the instructions op Z ,L where z is a current locations of z 4. updates the address descriptions of x ,to indicate that x is in locations L. 5. if the current values of y and /or z have no next uses and are in registers after the register describe to indicate that Register and address descriptions The code generation algorithm uses descriptors to keep track of register constents and address for names. Register descriptors A register descriptor is a pointer to a list containing information about the currect contents of each register .Initially all the register are empty. 
Address descriptors: an address descriptor keeps track of the locations where the current value of a name can be found at run time. This information can be stored in the symbol table.

The function getreg()
The function getreg(), when called upon to return a location where the computation specified by the three-address statement x := y op z should be performed, returns a location L as described above.

Example: the assignment d := (a-b) + (a-c) + (a-c) might be translated into the following three-address code sequence:
t := a - b
u := a - c
v := t + u
d := v + u

The code sequence for the example is:

Statement    Code generated    Register descriptor             Address descriptor
t := a - b   MOV a,R0          R0 contains t                   t in R0
             SUB b,R0
u := a - c   MOV a,R1          R0 contains t, R1 contains u    t in R0, u in R1
             SUB c,R1
v := t + u   ADD R1,R0         R0 contains v, R1 contains u    u in R1, v in R0
d := v + u   ADD R1,R0         R0 contains d                   d in R0
             MOV R0,d                                          d in R0 and memory
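The descriptor-based algorithm above can be sketched in a few lines. This is a minimal Python model, not the full dragon-book algorithm: the tuple form (x, y, op, z) for "x := y op z", the two-register machine, the MOV/ADD/SUB mnemonics, and the simplified getreg (reuse the register already holding y, else any empty register) are all illustrative assumptions, and spilling is not modelled.

```python
def operand(z, reg_desc):
    """Return the best current location of z: a register if one holds z."""
    for r, name in reg_desc.items():
        if name == z:
            return f"R{r}"
    return z                                   # fall back to memory

def gen_block(statements, live_on_exit, n_regs=2):
    code, reg_desc = [], {}                    # register -> name it holds
    def getreg(y):
        for r, name in reg_desc.items():       # reuse the register holding y
            if name == y:
                return r
        for r in range(n_regs):                # otherwise any empty register
            if r not in reg_desc:
                return r
        raise RuntimeError("spilling is not modelled in this sketch")
    for x, y, op, z in statements:             # x := y op z
        L = getreg(y)
        if reg_desc.get(L) != y:
            code.append(f"MOV {y},R{L}")       # place a copy of y in L
        code.append(f"{op} {operand(z, reg_desc)},R{L}")
        reg_desc[L] = x                        # update descriptors: x is in L
    for r, name in reg_desc.items():           # store names live on exit
        if name in live_on_exit:
            code.append(f"MOV R{r},{name}")
    return code

stmts = [("t", "a", "SUB", "b"), ("u", "a", "SUB", "c"),
         ("v", "t", "ADD", "u"), ("d", "v", "ADD", "u")]
print(gen_block(stmts, live_on_exit={"d"}))
```

Run on the d := (a-b) + (a-c) + (a-c) sequence, this sketch reproduces the seven-instruction sequence of the table above.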

10. Explain about code generation for statement and register allocation concepts. Code Generation for Statements


The function genc(code) generates code for a statement. There are only a few kinds of statements:
1. PROGN: for each argument statement, generate code.
2. := : generate the right-hand side into a register using genarith, then store the register into the location specified by the left-hand side.
3. GOTO: generate a Branch to the label number.
4. LABEL: generate a Label with the label number.
5. FUNCALL: compile short intrinsic functions in-line; for others, generate subroutine calls.

Register Management

Register allocation: which variables will reside in registers?
Register assignment: which specific register will a variable be placed in?

Registers may be:

1. general purpose (usually means integer)
2. float
3. special purpose (condition code, processor state)
4. paired in various ways

Simple Register Allocation
Note that there may be several classes of registers, e.g., integer data registers, index registers, floating-point registers. A very simple register allocation algorithm is:
1. At the beginning of a statement, mark all registers as not used.
2. When a register is requested:
   a. If there is an unused register, mark it used and return the register number.
   b. Otherwise, punt.
On a machine with 8 or more registers, this algorithm will almost always work. However, we need to handle the case of running out of registers.
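The two-step scheme above is small enough to sketch directly. The class and method names below are illustrative assumptions; the "punt" case simply raises an error, as the text suggests it is unhandled.

```python
class SimpleAllocator:
    """Minimal sketch of the simple per-statement register allocator."""
    def __init__(self, n_regs=8):
        self.n_regs = n_regs
        self.used = [False] * n_regs
    def begin_statement(self):
        self.used = [False] * self.n_regs      # step 1: mark all unused
    def request(self):
        for r in range(self.n_regs):           # step 2a: first unused register
            if not self.used[r]:
                self.used[r] = True
                return r
        raise RuntimeError("out of registers")  # step 2b: the "punt" case

alloc = SimpleAllocator()
alloc.begin_statement()
print(alloc.request(), alloc.request())        # two distinct registers
```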

Heuristic for Expressions
The likelihood of running out of registers can be reduced by using a heuristic in generating code for expressions: generate code for the most complicated operand first. The ``most complicated'' operand can be found by determining the size of each subtree. However, simply generating code for a subtree that is an operation before a subtree that is a simple operand is usually sufficient. If a machine allows arithmetic instructions to be used with a full address, the operation may be combined with the last load.

Improving Register Allocation
The simple register allocation algorithm can be improved in two ways:

Handle the case of running out of available registers. This can be done by storing some register into a temporary variable in memory. Remember what is contained in registers and reuse it when appropriate. This can save some load instructions.

Register Allocation
The register table records three fields per register: Used, Use Number, and Token.

An improved register allocation algorithm, which handles the case of running out of registers, is:
1. At the beginning of a statement, mark all registers as not used; set the use number to 0.
2. When an operand is loaded into a register, record a pointer to its token in the register table.
3. When a register is requested:
   a. If there is an unused register: mark it used, set its use number to the current use number, increment the use number, and return the register number.
   b. Otherwise, find the register with the smallest use number. Get a temporary data cell. Generate a Store instruction (spill code) to save the register contents into the temporary. Change the token to indicate the temporary.
Now it will be necessary to test whether an operand is a temporary before doing an operation, and if so, to reload it. Note that temporaries must be part of the stack frame.

Reusing Register Contents
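The improved algorithm with use numbers and spill code can be sketched as follows. The class name, the `ST` store mnemonic, and the `tempN` naming are assumptions for illustration; rewriting the spilled operand's token to name the temporary is only noted in a comment.

```python
class SpillingAllocator:
    """Sketch of the improved allocator: spill the register with the
    smallest use number when no register is free."""
    def __init__(self, n_regs=2):
        self.n_regs = n_regs
        self.begin_statement()
        self.spill_code = []
        self.n_temps = 0
    def begin_statement(self):
        self.token = [None] * self.n_regs   # operand held by each register
        self.use_no = [None] * self.n_regs  # None means "not used"
        self.counter = 0
    def request(self, tok):
        for r in range(self.n_regs):
            if self.use_no[r] is None:      # a free register is available
                break
        else:                               # spill the smallest use number
            r = min(range(self.n_regs), key=lambda i: self.use_no[i])
            self.n_temps += 1
            temp = f"temp{self.n_temps}"
            self.spill_code.append(f"ST R{r},{temp}")
            # here the spilled token would be changed to name the temporary
        self.token[r], self.use_no[r] = tok, self.counter
        self.counter += 1
        return r

a = SpillingAllocator(n_regs=2)
a.begin_statement()
print(a.request("x"), a.request("y"), a.request("z"))
print(a.spill_code)
```

With two registers, the third request reuses R0 (the least recently used) after emitting the store that saves its contents.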

The register table now records, for each register: Used and Contents.

Many instructions can be eliminated by reusing variable values that are already in registers:
1. Initially, set the contents of each register to NULL.
2. When a simple variable is loaded, set the contents of the register to point to its symbol table entry.
3. When a register is requested, if possible choose an unused register that has no contents marked.
4. When a variable is to be loaded, if it is contained in an unused register, just mark the register used. This saves a Load instruction.
5. When a register is changed by an operation, set its contents to NULL.
6. When a value is stored into a variable, set the contents of any register whose contents is that variable to NULL. Then mark the register from which it was stored as containing that variable.
7. When a Label is encountered, set the contents of all registers to NULL.
8. The condition code contents can be reused also.

UNIT V
1. Write brief notes on the following code optimization topics.
a. Criteria for code-improving transformations.
The best program transformations are those that yield the most benefit for the least effort. The transformations provided by an optimizing compiler should have several properties. First, a transformation must preserve the meaning of programs: an optimization must not change the output produced by a program for a given input, or cause an error, such as a division by zero, that was not present in the original version of the source program. Second, a transformation must, on the average, speed up programs by a measurable amount. Third, a transformation must be worth the effort: some transformations can only be applied after a detailed, often time-consuming, analysis of the source program, so there is little point in applying them to programs that will be run only a few times.
b. Getting better performance [06]


Getting better performance
Improvements in running time from a few hours to a few seconds are usually obtained by improving the program at all levels, from the source level to the target level.

source code --> Front end --> intermediate code --> Code generator --> target code

User can: profile the program, change the algorithm, transform loops.
Compiler can (on intermediate code): improve loops, procedure calls, address calculations.
Compiler can (during code generation): use registers, select instructions, do peephole transformations.

Algorithmic transformations occasionally produce spectacular improvements in running time.

void quicksort(m, n)
int m, n;
{
    int i, j;
    int v, x;
    if (n <= m) return;
    /* fragment begins here */
    i = m - 1; j = n; v = a[n];
    while (1) {
        do i = i + 1; while (a[i] < v);
        do j = j - 1; while (a[j] > v);
        if (i >= j) break;
        x = a[i]; a[i] = a[j]; a[j] = x;
    }
    x = a[i]; a[i] = a[n]; a[n] = x;
    /* fragment ends here */
    quicksort(m, j);
    quicksort(i + 1, n);
}

It may not be possible to perform certain code-improving transformations at the level of the source language.

c. An organization for an optimizing compiler. [06]
The code-improvement phase consists of control-flow and data-flow analysis followed by the application of transformations. It works on intermediate code of the sort produced by the techniques discussed earlier: the operations needed to implement high-level constructs are made explicit in the intermediate code, so it is possible to optimize them.

Organization of the code optimizer:

Front end --> Code optimizer --> Code generator

where the code optimizer consists of:
Control-flow analysis --> Data-flow analysis --> Transformations

Three-address code for the fragment:
1) i := m - 1
2) j := n
3) t1 := 4 * n
4) v := a[t1]
5) i := i + 1

The intermediate code can be independent of the target machine, so the optimizer does not have to change much if the code generator is replaced.

2. Explain the principal sources of optimization and give examples. [16]
Sources of Optimization
A transformation of a program is called local if it can be performed by looking only at the statements in a basic block. Common subexpression elimination, copy propagation, dead-code elimination, and constant folding are common examples of function-preserving transformations.

The other transformations come up primarily when global optimizations are performed, and we shall discuss each in turn. Frequently a program will include several calculations of the same value, such as an offset in an array.

Before:              After:
B5: t6 := 4*i        B5: t6 := 4*i
    x := a[t6]           x := a[t6]
    t7 := 4*i            t8 := 4*j
    t8 := 4*j            t9 := a[t8]
    t9 := a[t8]          a[t6] := t9
    a[t7] := t9          a[t8] := x
    t10 := 4*j           goto B2
    a[t10] := x
    goto B2

Common subexpressions
An expression E is called a common subexpression if E was previously computed, and the values of the variables in E have not changed since the previous computation.

In the block above, the assignments to t7 and t10 compute the common subexpressions 4*i and 4*j, respectively, so they can be eliminated as shown on the right.

Copy propagation
One class of statements concerns assignments of the form f := g, called copy statements, or copies for short. Had we gone into more detail in the example, copies would have arisen much sooner, because the algorithm for eliminating common subexpressions introduces them. The idea behind the copy-propagation transformation is to use g for f, wherever possible, after the copy statement f := g. For example, applying copy propagation in B5 yields:

x := t3
a[t2] := t5
a[t4] := t3
goto B2

and common-subexpression elimination itself introduces copies, e.g.:

a := d + e          t := d + e
b := d + e   into   a := t
c := d + e          t := d + e
                    b := t


Copies introduced during common-subexpression elimination.
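Local common-subexpression elimination of this kind can be sketched over three-address tuples. The representation (dst, op, a, b), the "copy" pseudo-op, and the single-block scope are assumptions for illustration; a repeated computation is replaced by a copy from the name that already holds its value, and available expressions are killed when an operand or holder is reassigned.

```python
def local_cse(block):
    """Eliminate common subexpressions within one basic block."""
    avail = {}                          # (op, a, b) -> name holding its value
    out = []
    for dst, op, a, b in block:
        key = (op, a, b)
        if key in avail:                # reuse the earlier computation
            out.append((dst, "copy", avail[key], None))
            hit = True
        else:
            out.append((dst, op, a, b))
            hit = False
        # dst has a new value: kill expressions using dst or held in dst
        avail = {k: v for k, v in avail.items()
                 if v != dst and dst not in (k[1], k[2])}
        if not hit and dst not in (a, b):
            avail[key] = dst
    return out

block = [("t6", "*", "4", "i"), ("t7", "*", "4", "i"),
         ("t8", "*", "4", "j"), ("t10", "*", "4", "j")]
for stmt in local_cse(block):
    print(stmt)
```

On the B5-style block, t7 and t10 become copies from t6 and t8, matching the transformation shown above.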

Dead-Code Elimination
A variable is live at a point in a program if its value can be used subsequently; otherwise it is dead at that point. Consider

if (debug) print ...

By data-flow analysis it may be possible to deduce that each time the program reaches this statement the value of debug is false, because

debug := false

is the last assignment to debug prior to the test. The print statement is then dead code and can be eliminated. For example,

a[t2] := t5
a[t4] := t3
goto B2

is a further improvement of block B5.

Loop optimizations
A very important place for optimizations is loops, especially the inner loops, where programs tend to spend the bulk of their time. The running time of a program may be improved if we decrease the number of instructions in an inner loop, even if we increase the amount of code outside that loop. Reduction in strength replaces an expensive operation by a cheaper one, such as a multiplication by an addition.
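Dead-code elimination inside a basic block can be sketched with a single backward scan: an assignment whose target is not live afterwards is dropped. The (dst, op, a, b) tuple format and the given live-on-exit set are assumptions for illustration.

```python
def eliminate_dead(block, live_out):
    """Drop assignments to variables that are dead at that point."""
    live, kept = set(live_out), []
    for dst, op, a, b in reversed(block):
        if dst in live:
            kept.append((dst, op, a, b))
            live.discard(dst)                 # killed by this definition
            live.update(x for x in (a, b)     # its operands become live
                        if x is not None and not x.isdigit())
        # otherwise dst is dead here and the statement is dropped
    return list(reversed(kept))

block = [("t", "+", "a", "b"), ("u", "+", "t", "c"), ("x", "+", "a", "1")]
print(eliminate_dead(block, live_out={"u"}))
```

Here x is never used after the block, so its assignment is removed, while t survives because u depends on it.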

3.a. Explain about Loop optimization technique with suitable example


We use the notion of one node dominating another to define natural loops and the important special class of reducible flow graphs.

Dominators
A node d of a flow graph dominates a node n if every path from the initial node of the flow graph to n goes through d.
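Dominator sets can be computed iteratively from the equation dom(n) = {n} union the intersection of dom(p) over all predecessors p of n, solved to a fixed point. This is a minimal Python sketch; the four-node flow graph with a back edge is an assumed example.

```python
def dominators(succ, entry):
    """Iteratively compute dom(n) for every node of a flow graph."""
    nodes = set(succ)
    pred = {n: set() for n in nodes}
    for n, ss in succ.items():
        for s in ss:
            pred[s].add(n)
    dom = {n: set(nodes) for n in nodes}   # start from "all nodes"
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = ({n} | set.intersection(*(dom[p] for p in pred[n]))
                   if pred[n] else {n})
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# 1 -> 2 -> 3 -> 4, with a back edge 4 -> 2
succ = {1: [2], 2: [3], 3: [4], 4: [2]}
print(dominators(succ, 1))
```

In this graph node 2 is the header of the natural loop of the back edge 4 -> 2, and 2 dominates both 3 and 4.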

A useful way of presenting dominator information is in a tree called the dominator tree.

[Figure: a flow graph on nodes 1 to 10 and its dominator tree]

Natural loops
One important application of dominator information is in determining the loops of a flow graph suitable for improvement. There are two essential properties of such loops:

1. A loop must have a single entry point, called the header. This entry point dominates all nodes in the loop; otherwise it would not be the sole entry to the loop.
2. There must be at least one way to iterate the loop, i.e., at least one path back to the header.

Inner loops
If we take the natural loops as "the loops", then we have the useful property that unless two loops have the same header, they are either disjoint or one is entirely contained in the other. In the example flow graph containing the test "if a = 10 goto B2", the loop {B0, B1, B3} would probably be the inner loop.





Two loops with the same header.

Pre-header
Several transformations require us to move statements before the header. We therefore begin the treatment of a loop L by creating a new block, called the pre-header. The pre-header has only the header as successor, and all edges that formerly entered the header of L from outside L enter the pre-header instead.

[Figure: introduction of the pre-header]

Reducible Flow Graphs
Exclusive use of structured flow-of-control statements such as if-then-else, while-do, continue, and break produces programs whose flow graphs are always reducible. A flow graph G is reducible if its edges can be partitioned into two disjoint groups, forward edges and back edges, with the following properties:

1. The forward edges form an acyclic graph in which every node can be reached from the initial node of G.
2. The back edges consist only of edges whose heads dominate their tails.

b. What is global data-flow analysis? Explain in detail. [08]
Global data-flow analysis
A compiler needs to collect information about the program as a whole and to distribute this information to each block in the flow graph. The information an optimizing compiler collects is gathered by a process known as data-flow analysis. A typical data-flow equation has the form

out[S] = gen[S] U (in[S] - kill[S])

The notions of generation and killing depend on the desired information, i.e., on the data-flow analysis problem to be solved. Since data flows along control paths, data-flow analysis is affected by the control constructs in a program.

There are subtleties that go along with such statements as procedure calls , assignment through pointer variables ,and even assignments to array variables.

Points and paths
Within a basic block, we talk of the point between two adjacent statements, as well as the point before the first statement and after the last. For example:

B1: d1: i := m - 1
    d2: j := n
    d3: a := u1
B2: d4: i := i + 1
    d5: j := j - 1
B3: d6: a := u2



Let us take a global view and consider all the points in all the blocks. A path from p1 to pn is a sequence of points p1, p2, ..., pn such that for each i between 1 and n-1, either pi is the point immediately preceding a statement and pi+1 is the point immediately following that statement in the same block, or pi is the end of some block and pi+1 is the beginning of a successor block.

Reaching definitions
A definition of a variable x is a statement that assigns, or may assign, a value to x. The most common forms of definition are assignments to x and statements that read a value from an I/O device and store it in x.

if a = b then a := 2 else if a = b then a := 4

To decide in general whether each path in a flow graph can be taken is an undecidable problem.

Data-flow analysis of structured programs
Flow graphs for control-flow constructs such as do-while statements have a useful property: each has a single point of entry and a single point of exit. We consider the following syntax:

S -> id := E | S ; S | if E then S else S | do S while E
E -> id + id | id

Expressions in this language are similar to those in intermediate code.

[Figures: flow graphs for the constructs S1 ; S2, "if E then S1 else S2", and "do S1 while E"]

Conservative estimation of data-flow information:
1. The conditional expressions E in if and while statements are treated as if they could make their branches go either way.
2. Computation of in and out.
3. Dealing with loops.
4. Representation of sets.
5. Local reaching definitions.

4. Define iterative algorithm. Explain the iterative solution of data-flow equations. [16]

Iterative solution
The method of the last section is simple and efficient when it applies to solving data-flow problems. The equations in the last section for reaching definitions are forward equations, in the sense that the out sets are computed in terms of the in sets. We shall also see data-flow problems that are backward, in that the in sets are computed from the out sets.

Iterative algorithm for reaching definitions, assuming that gen and kill have been computed for each block. If a flow graph has n basic blocks, we get 2n equations; the 2n equations can be solved by treating them as recurrences for computing the in and out sets.

Algorithm
Input: a flow graph for which kill[B] and gen[B] have been computed for each block B.
Output: in[B] and out[B] for each block B.

for each block B do out[B] := gen[B];
change := true;
while change do begin
    change := false;
    for each block B do begin
        in[B] := U out[P], over the predecessors P of B;
        oldout := out[B];
        out[B] := gen[B] U (in[B] - kill[B]);
        if out[B] <> oldout then change := true
    end
end

Available Expressions
An expression x+y is available at a point p if every path (not necessarily cycle-free) from the initial node to p evaluates x+y, and after the last such evaluation prior to reaching p there are no subsequent assignments to x or y. These notions of kill and generate obey the same laws as they do for reaching definitions.
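The iterative reaching-definitions algorithm above translates almost line for line into Python. The three-block flow graph and its gen/kill sets below are assumptions chosen to exercise the loop back edge.

```python
def reaching_definitions(blocks, pred, gen, kill):
    """Solve the forward equations out[B] = gen[B] U (in[B] - kill[B])."""
    out = {B: set(gen[B]) for B in blocks}   # out[B] := gen[B]
    inn = {B: set() for B in blocks}
    change = True
    while change:
        change = False
        for B in blocks:
            inn[B] = (set().union(*(out[P] for P in pred[B]))
                      if pred[B] else set())
            oldout = out[B]
            out[B] = gen[B] | (inn[B] - kill[B])
            if out[B] != oldout:
                change = True
    return inn, out

blocks = ["B1", "B2", "B3"]
pred = {"B1": [], "B2": ["B1", "B3"], "B3": ["B2"]}
gen  = {"B1": {"d1", "d2"}, "B2": {"d3"}, "B3": {"d4"}}
kill = {"B1": {"d3"}, "B2": {"d1"}, "B3": {"d2"}}
inn, out = reaching_definitions(blocks, pred, gen, kill)
print(inn["B2"], out["B2"])
```

Because B3 feeds back into B2, d4 reaches B2 only after a second pass of the while loop, which is exactly why the change flag is needed.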

[Figure: two flow graphs illustrating whether the expression 4*i, computed as t1 := 4*i in block B1, is available at the entry of block B3; where i is redefined along one path, 4*i must be recomputed, e.g., t0 := 4*i in B2]
For a statement x := y + z, the set A of available expressions is updated as follows:
1. Add to A the expression y + z.
2. Delete from A any expression involving x.
We can easily compute the set of generated expressions for each point in a block.

Live-variable analysis
A number of code-improving transformations depend on information computed in the direction opposite to the flow of control. In live-variable analysis we wish to know, for a variable x and a point p, whether the value of x at p could be used along some path in the flow graph starting at p.

Algorithm
Input: a flow graph with def and use computed for each block.
Output: out[B], the set of variables live on exit from each block B of the flow graph.

for each block B do in[B] := {};
while changes to any of the in's occur do
    for each block B do begin
        out[B] := U in[S], over the successors S of B;
        in[B] := use[B] U (out[B] - def[B])
    end

Definition-use chains
A calculation done in virtually the same manner as live-variable analysis is definition-use chaining. The equations for computing du-chaining information look exactly like those above, with appropriate substitutions for use and def.

5. Discuss code-improving transformations with suitable examples. [16]

Code-improving transformations
The code-improving transformations introduced here rely on data-flow information. We consider common-subexpression elimination, copy propagation, and transformations for moving loop-invariant computations out of loops and for eliminating induction variables. For many languages, significant improvements in running time can be achieved by such global transformations, which use information about the program as a whole. When we perform global common-subexpression elimination, we shall only be concerned with whether an expression is generated by a block, and not with whether it is recomputed several times within a block.

Elimination of global common subexpressions
The available-expressions data-flow problem discussed in the last section allows us to determine if an expression at a point p in a flow graph is a common subexpression.

Algorithm: global common-subexpression elimination.
Input: a flow graph with available-expressions information.
Output: a revised flow graph.

Before:             After (with u := 4*i):    After (with value numbers):
t2 := 4*i           u := 4*i                  (15) := 4*i
t3 := a[t2]         t2 := u                   (18) := a[(15)]
                    t3 := a[t2]
...                 ...                       ...
t6 := 4*i           t6 := u                   t6 := (15)
t7 := a[t6]         t7 := a[t6]               t7 := (18)

Eliminating the common subexpression.

Copy propagation
It is sometimes possible to eliminate the copy statement s: x := y if we determine all places where this definition of x is used. We may then substitute y for x in all these places, provided the following conditions are met by every such use u of x:
1. Statement s must be the only definition of x reaching u.
2. On every path from s to u, including paths that go through u several times, there are no assignments to y.
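Within a single basic block the two conditions above are easy to check, so local copy propagation can be sketched directly. The (dst, op, a, b) tuples with a "copy" pseudo-op are an assumed representation; an active copy x := y is forgotten as soon as x or y is reassigned.

```python
def propagate_copies(block):
    """Replace uses of x by y after a copy x := y, within one block."""
    copy_of = {}                          # x -> y for an active copy x := y
    out = []
    for dst, op, a, b in block:
        a = copy_of.get(a, a)             # substitute operands
        b = copy_of.get(b, b)
        # dst is reassigned: kill any copy involving dst
        copy_of = {x: y for x, y in copy_of.items()
                   if x != dst and y != dst}
        if op == "copy":
            copy_of[dst] = a
        out.append((dst, op, a, b))
    return out

block = [("t", "copy", "y", None), ("z", "+", "t", "c")]
print(propagate_copies(block))
```

After propagation, z := t + c becomes z := y + c; if t then has no other uses, dead-code elimination can remove the copy entirely.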







[Figure: copy propagation]

6.a. Write brief notes on symbolic debugging of optimized code. [08]

The Symbolic Debugger The GCC-1750 symbolic debugger is based on the GNU debugger GDB, and supports all the GDB features that are useful for an embedded system. The instruction set simulator is built in as the default executable target. This means that 1750 programs can be run and debugged on the host computer where comparatively powerful low-level debugging aids are available.

Debug commands include single stepping through/over statements/instructions, breakpoints and watchpoints, print and modify variables, examine types, write log file etc.

The debugger also supports a host-target link using either an RS232 serial interface or a network connection. In this case the target computer must be equipped with suitable communications software. GCC-1750 includes a debug monitor that may be customized for your target.

Support for Symbolic Debugging
The compiler lets you generate code to support symbolic debugging when the -O1, -O2, or -O3 optimization options are specified on the command line along with -g. However, you can receive these unexpected results:

If you specify the -O1, -O2, or -O3 options with the -g option, some of the debugging information returned may be inaccurate as a side-effect of optimization. If you specify the -O1, -O2, or -O3 options, the -fp option (IA-32 only) will be disabled.

dbx Symbolic Debug Program Overview
The dbx symbolic debug program allows you to debug a program at two levels: the source level and the assembler-language level. Source-level debugging allows you to debug your C, C++, Pascal, or FORTRAN language program. Assembler-language-level debugging allows you to debug executable programs at the machine level. The commands used for machine-level debugging are similar to those used for source-level debugging. Using the dbx debug program, you can step through the program you want to debug one line at a time or set breakpoints in the object program that will stop the debug program. You can also search through and display portions of the source files for a program. The following sections contain information on how to perform a variety of tasks with the dbx debug program:

Using the dbx Debug Program Displaying and Manipulating the Source File with the dbx debug Program Examining Program Data Debugging at the Machine Level with dbx Customizing the dbx Debugging Environment

Debugging Programs
There are several debug programs available for debugging your programs: the adb, dbx, dex, softdb, and kernel debug programs. The adb program enables you to debug executable binary files and examine non-ASCII data files. The dbx program enables source-level debugging of C, C++, Pascal, and FORTRAN language programs, as well as assembler-language debugging of executable programs at the machine level. The dex program provides an X interface for the dbx debug program, providing windows for viewing the source, context, and variables of the application program. The softdb debug program works much like the dex debug program, but softdb is used with the AIX Software Development Environment Workbench. The kernel debug program is used to help determine errors in code running in the kernel.

b. Discuss a tool for a data-flow analysis system.

Data-flow analysis



Data-flow analysis is a technique for gathering information about the possible set of values calculated at various points in a computer program. A program's control flow graph (CFG) is used to determine those parts of a program to which a particular value assigned to a variable might propagate. The information gathered is often used by compilers when optimizing a program. A canonical example of a data-flow analysis is reaching definitions. A simple way to perform data-flow analysis of programs is to set up data-flow equations for each node of the control flow graph and solve them by repeatedly calculating the output from the input locally at each node until the whole system stabilizes, i.e., reaches a fixpoint.

Basic principles
Data-flow analysis attempts to obtain particular information at each point in a procedure. Usually, it is enough to obtain this information at the boundaries of basic blocks, since from that it is easy to compute the information at points within the basic block. In forward flow analysis, the exit state of a block is a function of the block's entry state. This function is the composition of the effects of the statements in the block. The entry state of a block is a function of the exit states of its predecessors. This yields a set of data-flow equations. For each block b:

out_b = trans_b(in_b)
in_b = join over the predecessors p of b of out_p

In this, trans_b is the transfer function of the block b. It works on the entry state in_b, yielding the exit state out_b. The join operation combines the exit states of the predecessors of b, yielding the entry state of b. After solving this set of equations, the entry and/or exit states of the blocks can be used to derive properties of the program at the block boundaries. The transfer function of each statement separately can be applied to get information at a point inside a basic block. Each particular type of data-flow analysis has its own specific transfer function and join operation.

Some data-flow problems require backward flow analysis. This follows the same plan, except that the transfer function is applied to the exit state yielding the entry state, and the join operation works on the entry states of the successors to yield the exit state.

The entry point (in forward flow) plays an important role: since it has no predecessors, its entry state is well-defined at the start of the analysis. For instance, the set of local variables with known values is empty.

An iterative algorithm
The most common way of solving the data-flow equations is by using an iterative algorithm. It starts with an approximation of the in-state of each block. The out-states are then computed by applying the transfer functions on the in-states. From these, the in-states are updated by applying the join operations. The latter two steps are repeated until we reach the so-called fixpoint: the situation in which the in-states (and the out-states in consequence) don't change. A basic algorithm for solving data-flow equations is the round-robin iterative algorithm:

for i := 1 to N
    initialize node i
while (sets are still changing)
    for i := 1 to N
        recompute sets at node i

Convergence To be usable, the iterative approach should actually reach a fixpoint. This can be guaranteed by imposing constraints on the combination of the value domain of the states, the transfer functions and the join operation.

The work-list approach
It is easy to improve on the algorithm above by noticing that the in-state of a block will not change if the out-states of its predecessors don't change. Therefore, we introduce a work list: a list of blocks that still need to be processed. Whenever the out-state of a block changes, we add its successors to the work list. In each iteration, a block is removed from the work list and its out-state is computed. If the out-state changed, the block's successors are added to the work list. For efficiency, a block should not be in the work list more than once. The algorithm is started by putting the entry point in the work list. It terminates when the work list is empty.

The order matters
The efficiency of iteratively solving data-flow equations is influenced by the order in which local nodes are visited. Furthermore, it depends on whether the data-flow equations are used for forward or backward data-flow analysis over the CFG. Intuitively, in a forward flow problem, it would be fastest if all predecessors of a block have been processed before the block itself, since then the iteration will use the latest information. In the absence of loops it is possible to order the blocks in such a way that the correct out-states are computed by processing each block only once.

random order
This iteration order is not aware of whether the data-flow equations solve a forward or backward data-flow problem. Therefore, the performance is relatively poor compared to specialized iteration orders.
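The work-list formulation above can be sketched as follows, using reaching definitions (gen/kill) as the transfer function. The three-block flow graph and its sets are assumptions for illustration; the `in_list` set enforces the rule that a block appears in the work list at most once.

```python
from collections import deque

def worklist_solve(blocks, succ, pred, gen, kill):
    """Work-list solution of the forward reaching-definitions equations."""
    out = {B: set() for B in blocks}
    work = deque(blocks)                  # start with every block queued
    in_list = set(blocks)                 # a block appears at most once
    while work:
        B = work.popleft()
        in_list.discard(B)
        inn = (set().union(*(out[P] for P in pred[B]))
               if pred[B] else set())
        new_out = gen[B] | (inn - kill[B])
        if new_out != out[B]:
            out[B] = new_out
            for S in succ[B]:             # successors must be revisited
                if S not in in_list:
                    work.append(S)
                    in_list.add(S)
    return out

blocks = ["B1", "B2", "B3"]
succ = {"B1": ["B2"], "B2": ["B3"], "B3": ["B2"]}
pred = {"B1": [], "B2": ["B1", "B3"], "B3": ["B2"]}
gen  = {"B1": {"d1"}, "B2": {"d2"}, "B3": set()}
kill = {"B1": set(), "B2": set(), "B3": {"d1"}}
print(worklist_solve(blocks, succ, pred, gen, kill))
```

Only B2 is re-queued (once, via the back edge from B3), so the algorithm does strictly less work than the round-robin version on the same graph.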

postorder
This is a typical iteration order for backward data-flow problems. In postorder iteration a node is visited after all its successor nodes have been visited. Typically, the postorder iteration is implemented with the depth-first strategy.

reverse postorder
This is a typical iteration order for forward data-flow problems. In reverse-postorder iteration a node is visited before all its successor nodes have been visited, except when the successor is reached by a back edge. (Note that this is not the same as preorder.)

Initialization The initial value of the in-states is important to obtain correct and accurate results. If the results are used for compiler optimizations, they should provide conservative information, i.e. when applying the information, the program should not change semantics. The iteration of the fixpoint algorithm will take the values in the direction of the maximum element. Initializing all blocks with the maximum element is therefore not useful. At least one block starts in a state with a value less than the maximum. The details depend on the data flow problem. If the minimum element represents totally conservative information, the results can be used safely even during the data flow iteration. If it represents the most accurate information, fixpoint should be reached before the results can be applied.

Examples
The following are examples of properties of computer programs that can be calculated by data-flow analysis. Note that the properties calculated by data-flow analysis are typically only approximations of the real properties. This is because data-flow analysis operates on the syntactical structure of the CFG without simulating the exact control flow of the program. However, to still be useful in practice, a data-flow analysis algorithm is typically designed to calculate an upper respectively lower approximation of the real program properties.

Forward analysis: reaching definitions
The reaching-definitions analysis calculates, for each program point, the set of definitions that may potentially reach this program point.

1: if b == 4 then
2:     a = 5;
3: else
4:     a = 3;
5: endif
6:
7: if a < 4 then
8:     ...

The reaching definitions of variable a at line 7 are the assignments a = 5 at line 2 and a = 3 at line 4.

7.a. Write a Pascal program for reading and sorting integers, with output of the procedure trace. [08]

Pascal program
(1)  program sort(input, output);
(2)    var a : array[0..10] of integer;
(3)    procedure readarray;
(4)      var i : integer;
(5)      begin
(6)        for i := 1 to 9 do read(a[i])
(7)      end;
(8)    function partition(y, z : integer) : integer;
(9)      var i, j, x, v : integer;
(10)     begin ... end;
(11)   procedure quicksort(m, n : integer);
(12)     var i : integer;
(13)     begin
(14)       if n > m then begin
(15)         i := partition(m, n);
(16)         quicksort(m, i - 1);
(17)         quicksort(i + 1, n)
(18)       end
(19)     end;
(20) begin { main program }
(21)   a[0] := -9999; a[10] := 9999;
(22)   readarray;
(23)   quicksort(1, 9)
(24) end.

Output
Execution begins.
Enter readarray
Leave readarray
Enter quicksort(1,9)
Enter partition(1,9)
Leave partition(1,9)
Enter quicksort(5,9)
Leave quicksort(5,9)
Leave quicksort(1,9)
Execution terminated.

b. Write any two storage allocation strategy with suitable examples.


STORAGE ALLOCATION STRATEGIES:
The storage allocation strategies used in the three data areas, namely the static data area, the stack, and the heap, are different:
1. Static allocation lays out storage for all data objects at compile time.
2. Stack allocation manages the run-time storage as a stack.
3. Heap allocation allocates and deallocates storage as needed at run time from a data area known as the heap.

STATIC ALLOCATION:
In static allocation, names are bound to storage as the program is compiled, so there is no need for a run-time support package. Since the bindings do not change at run time, every time a procedure is activated, its names are bound to the same storage locations. This property allows the values of local names to be retained across activations of a procedure. That is, when control returns to a procedure, the values of the locals are the same as they were when control left the last time. However, some limitations go along with using static allocation alone.
1. The size of a data object and constraints on its position in memory must be known at compile time.

2. Recursive procedures are restricted, because all activations of a procedure use the same bindings for local names.
3. Data structures cannot be created dynamically, since there is no mechanism for storage allocation at run time.
Since the sizes of the executable code and the activation records are known at compile time, memory organizations other than the one shown in the figure are possible. A Fortran compiler might place the activation record for a procedure together with the code for that procedure. On some computer systems, it is feasible to leave the relative position of the activation records unspecified and allow the link editor to link activation records and executable code.

STACK ALLOCATION: Stack allocation is based on the idea of a control stack; storage is organized as a stack, and activation records are pushed and popped as activations begin and end, respectively. Storage for the locals in each call of a procedure is contained in the activation record for that call. Thus locals are bound to fresh storage in each activation, because a new activation record is pushed onto the stack when a call is made. Furthermore, the values of locals are deleted when the activation ends; that is, the values are lost because the storage for locals disappears when the activation record is popped.

Consider the case in which the sizes of all activation records are known at compile time. Suppose that register top marks the top of the stack. At run time, an activation record can be allocated and deallocated by incrementing and decrementing top, respectively, by the size of the record. If procedure q has an activation record of size a, then top is incremented by a just before the target code of q is executed. When control returns from q, top is decremented by a.

Calling sequences: Procedure calls are implemented by generating what are known as calling sequences in the target code. A call sequence allocates an activation record and enters information into its fields. A return sequence restores the state of the machine so the calling procedure can continue execution. Calling sequences and activation records differ, even for implementations of the same language. The code in a calling sequence is often divided between the calling procedure and the procedure it calls. There is no exact division of run-time tasks between the caller and the callee: the source language, the target machine, and the operating system impose requirements that may favor one solution over another. A possible call sequence is:
1. The caller evaluates the actual parameters.

2. The caller stores a return address and the old value of top_sp into the callee's activation record. The caller then increments top_sp to the position shown in figure (d); that is, top_sp is moved past the caller's local data and temporaries and the callee's parameter and status fields.
3. The callee saves register values and other status information.
4. The callee initializes its local data and begins execution.

A possible return sequence is:
1. The callee places a return value next to the activation record of the caller.
2. Using the information in the status field, the callee restores top_sp and other registers and branches to the return address in the caller's code.
3. Although top_sp has been decremented, the caller can copy the returned value into its own activation record and use it to evaluate an expression.

These calling sequences allow the number of arguments of the called procedure to depend on the call. At compile time, the target code of the caller knows the number of arguments it is supplying to the callee; hence the caller knows the size of the parameter field. However, the target code of the callee must be prepared to handle other calls as well, so it waits until it is called and then examines the parameter field. With this organization, information describing the parameters must be placed next to the status field so the callee can find it.

Variable-length data: A common strategy for handling variable-length data is suggested in figure (e), where procedure p has two local arrays. The storage for these arrays is not part of the activation record for p; only a pointer to the beginning of each array appears in the activation record. The relative addresses of these pointers are known at compile time, so the target code can access array elements through the pointers. Also shown is a procedure q called by p. The activation record for q begins after the arrays of p, and the variable-length arrays of q begin beyond that.
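The push/pop discipline above can be sketched in a few lines. This is a simplified model, not the book's exact record layout: a single `top` pointer is incremented by the (compile-time-known) record size on a call and decremented on return, so every activation gets fresh storage.

```python
# Minimal sketch of stack allocation with one `top` register. Record sizes
# are assumed known at compile time; fields inside the record are omitted.

class RuntimeStack:
    def __init__(self):
        self.top = 0
        self.records = []          # (procedure, base address), for illustration

    def call(self, proc, size):
        base = self.top            # the new activation record starts here
        self.records.append((proc, base))
        self.top += size           # allocate: increment top by the record size
        return base

    def ret(self, size):
        self.top -= size           # deallocate on return: decrement top
        self.records.pop()

st = RuntimeStack()
st.call("p", 24)     # p's record occupies addresses [0, 24)
st.call("q", 16)     # q's record occupies addresses [24, 40)
st.ret(16)           # control returns from q; top is back to 24
```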
Access to data on the stack is through two pointers, top and top_sp. The first of these marks the actual top of the stack; it points to the position at which the next activation record will begin. The second is used to find local data.

DANGLING REFERENCES: Whenever storage can be deallocated, the problem of dangling references arises. A dangling reference occurs when there is a reference to storage that has been deallocated. It is a logical error to use dangling references, since the value of deallocated storage is undefined according to the semantics of most languages. Worse, since that storage may later be allocated to another datum, mysterious bugs can appear in programs with dangling references.

HEAP ALLOCATION:

The stack allocation strategy cannot be used if either of the following is possible:
1. The values of local names must be retained when an activation ends.
2. A called activation outlives the caller.
In each of these cases, the deallocation of activation records need not occur in a last-in first-out fashion, so storage cannot be organized as a stack. Heap allocation parcels out pieces of contiguous storage, as needed for activation records or other objects. Pieces may be deallocated in any order, so over time the heap will consist of alternating areas that are free and in use.

1. For each size of interest, keep a linked list of free blocks of that size.
2. If possible, fill a request for size s with a block of size s', where s' is the smallest size greater than or equal to s. When the block is eventually deallocated, it is returned to the linked list it came from.
3. For large blocks of storage, use the general heap manager.
This approach results in fast allocation and deallocation of small amounts of storage, since taking and returning a block from a linked list are efficient operations. For large amounts of storage we expect the computation to take some time to use up the storage, so the time taken by the allocator is often negligible compared with the time taken to do the computation.
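The three rules above can be sketched as a segregated free-list allocator. The block sizes and the `LARGE` cutoff below are invented for illustration; real allocators track actual memory, not just sizes.

```python
# Hedged sketch of segregated free lists: small requests are rounded up to
# the smallest kept size, large requests fall through to the general heap
# manager (modeled here as just a tagged result).

LARGE = 1024          # illustrative cutoff for "large blocks of storage"

class HeapManager:
    def __init__(self, sizes):
        # One free list per size of interest, scanned smallest-first.
        self.free_lists = {s: [] for s in sorted(sizes)}

    def allocate(self, s):
        if s >= LARGE:
            return ("heap", s)                 # rule 3: general heap manager
        for size, blocks in self.free_lists.items():
            if size >= s:                      # rule 2: smallest size >= s
                if blocks:
                    return ("reused", blocks.pop())
                return ("fresh", size)
        return ("heap", s)                     # bigger than any kept size

    def deallocate(self, block_size):
        if block_size in self.free_lists:      # rule 1/2: back on its list
            self.free_lists[block_size].append(block_size)

hm = HeapManager([16, 32, 64])
kind, sz = hm.allocate(20)    # filled with the smallest kept size >= 20
```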

8.a. Explain the language facilities for dynamic storage allocation.
Language facilities: The allocation itself can be either explicit or implicit.


Explicit allocation is performed using the standard procedure new. Implicit allocation occurs when evaluation of an expression results in storage being obtained to hold the value of the expression. The following Pascal program builds a linked list of cells allocated with new:

program table(input, output);
type link = ^cell;
     cell = record
              key, info: integer;
              next: link
            end;
var head: link;
procedure insert(k, i: integer);
var p: link;
begin
  new(p); p^.key := k; p^.info := i;
  p^.next := head; head := p
end;
begin
  head := nil;
  insert(7,1); insert(4,2); insert(76,3);
  writeln(head^.key, head^.info);
  writeln(head^.next^.key, head^.next^.info);
  writeln(head^.next^.next^.key, head^.next^.next^.info)
end.

Dynamic allocation of cells using new in Pascal.

Garbage: Dynamically allocated storage can become unreachable. For example, after insert(7,1); insert(4,2); insert(76,3), if the program overwrites head, the cells already in the list can no longer be referred to. Storage that a program allocates but can no longer refer to is called garbage.

Dangling references: An additional complication can arise with explicit deallocation. If a cell reachable from head is explicitly deallocated while pointers to it still exist, those pointers become dangling references.
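The Pascal fragment above can be mirrored in Python to show the same reachability notion. CPython reclaims unreachable cells automatically, so the "garbage" created by dropping the only pointer to a cell is collected rather than leaked, but what counts as reachable is the same.

```python
# Python analogue of the Pascal linked-list program: insert(k, i) plays the
# role of new(p) followed by filling the fields and linking at the front.

class Cell:
    def __init__(self, key, info, next_cell):
        self.key, self.info, self.next = key, info, next_cell

head = None

def insert(k, i):
    global head
    head = Cell(k, i, head)      # new(p); p.key := k; p.info := i; link in

insert(7, 1); insert(4, 2); insert(76, 3)
# head -> (76,3) -> (4,2) -> (7,1)
# Rebinding head (e.g. head = None) would make all three cells unreachable:
# in Pascal that storage becomes garbage; CPython would simply collect it.
```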

b. Write brief notes on storage allocation in FORTRAN programs. The EQUIVALENCE algorithm:


Input: a list of equivalence-defining statements of the form EQUIVALENCE A, B+dist.
Output: a collection of trees such that, for any name mentioned in the input list of equivalences, we may determine its offset within its storage-sharing group by following the path from that name's node to the root and summing the offsets found along the path.
Algorithm:

begin
  let p and q point to the nodes for A and B, respectively;
  c := 0; d := 0;
  while parent(p) <> null do begin
    c := c + offset(p);
    p := parent(p)
  end;
  while parent(q) <> null do begin
    d := d + offset(q);
    q := parent(q)
  end;
  if p = q then begin
    if c - d <> dist then error
  end
  else begin
    parent(p) := q;
    offset(p) := d - c + dist
  end
end
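The algorithm above is a union-find over trees whose edges carry offsets. A runnable sketch in Python (dictionary-based, names illustrative):

```python
# Union-find with offsets for EQUIVALENCE A, B+dist: offset[x] is x's
# displacement relative to parent[x]; the root anchors one sharing group.

parent, offset = {}, {}

def node(x):
    if x not in parent:
        parent[x], offset[x] = None, 0
    return x

def find(x):
    """Return (root, displacement of x relative to the root)."""
    c = 0
    while parent[x] is not None:
        c += offset[x]
        x = parent[x]
    return x, c

def equivalence(a, b, dist):
    """Process EQUIVALENCE A, B+dist (A's position = B's position + dist)."""
    p, c = find(node(a))
    q, d = find(node(b))
    if p == q:
        if c - d != dist:
            raise ValueError("inconsistent equivalence")
    else:
        parent[p] = q             # hang A's root under B's root
        offset[p] = d - c + dist  # so A's total displacement becomes d + dist

equivalence("A", "B", 4)          # A sits 4 past B
equivalence("B", "C", 8)          # B sits 8 past C, so A sits 12 past C
```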

9. Consider the matrix multiplication program

begin
  for i := 1 to n do
    for j := 1 to n do
      c[i,j] := 0;
  for i := 1 to n do
    for j := 1 to n do
      for k := 1 to n do
        c[i,j] := c[i,j] + a[i,k] * b[k,j]
end

Produce three-address statements for the program (taking n = 10 and 4-byte array elements, so element [i,j] of a 1-indexed array lies at addr - 44 + (i*10 + j)*4):

(1)  i = 1
(2)  if i <= 10 goto (4)
(3)  goto (18)
(4)  j = 1
(5)  if j <= 10 goto (7)
(6)  goto (15)
(7)  t1 = i * 10
(8)  t1 = t1 + j
(9)  t1 = t1 * 4
(10) t2 = addr(c) - 44
(11) t2[t1] = 0
(12) t3 = j + 1
(13) j = t3
(14) goto (5)
(15) t4 = i + 1
(16) i = t4
(17) goto (2)
(18) i = 1
(19) if i <= 10 goto (21)
(20) goto (54)
(21) j = 1
(22) if j <= 10 goto (24)
(23) goto (51)
(24) k = 1
(25) if k <= 10 goto (27)
(26) goto (48)
(27) t5 = i * 10
(28) t5 = t5 + k
(29) t5 = t5 * 4
(30) t6 = addr(a) - 44
(31) t7 = t6[t5]
(32) t8 = k * 10
(33) t8 = t8 + j
(34) t8 = t8 * 4
(35) t9 = addr(b) - 44
(36) t10 = t9[t8]
(37) t11 = t7 * t10
(38) t1 = i * 10
(39) t1 = t1 + j
(40) t1 = t1 * 4
(41) t2 = addr(c) - 44
(42) t12 = t2[t1]
(43) t13 = t12 + t11
(44) t2[t1] = t13
(45) t14 = k + 1
(46) k = t14
(47) goto (25)
(48) t15 = j + 1
(49) j = t15
(50) goto (22)
(51) t16 = i + 1
(52) i = t16
(53) goto (19)
(54) (end)
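The address arithmetic used throughout the listing can be sanity-checked with a small sketch: for a 10x10 array of 4-byte elements with both subscripts starting at 1, element [i,j] lives at base + (i*10 + j)*4 - 44, where the -44 cancels the bias (1*10 + 1)*4 introduced by 1-based indexing.

```python
# Sketch of the row-major addressing from the three-address code above.
# The base address 1000 is an arbitrary example value.

WIDTH = 4            # bytes per element
N = 10               # row length

def address(base, i, j):
    # mirrors: t1 = i*10; t1 = t1 + j; t1 = t1*4; t2 = base - 44; t2[t1]
    return base - 44 + (i * N + j) * WIDTH

# The first element [1,1] lands exactly at the array's base address:
first = address(1000, 1, 1)
# Row-major neighbours in the same row are WIDTH bytes apart,
# and consecutive rows are N*WIDTH bytes apart.
row_stride = address(1000, 2, 1) - address(1000, 1, 1)
```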

10. Explain the concept of dealing with aliases in data-flow analysis and give examples.
DEALING WITH ALIASES


If two or more expressions denote the same memory address, we say that the expressions are aliases of one another. The presence of pointers makes data-flow analysis more complex, since they cause uncertainty regarding what is defined and used. If we know nothing about where pointer p can point, the only safe assumption is that an indirect assignment through a pointer can potentially change any variable.

Similarly, an indirect reference through a pointer, e.g., x := *p, can potentially use any variable. As with assignments through pointer variables, when we come to a procedure call we need not make the worst-case assumption that everything can be changed, provided we can compute the set of variables that the procedure might change. A SIMPLE POINTER LANGUAGE

For specificity, let us consider a language in which there are elementary data types (e.g., integers and reals) requiring one word each, and arrays of these types. We shall be content to know that a pointer p is pointing somewhere in array a, without concerning ourselves with what particular element of a is being pointed to. Typically, pointers are used as cursors to run through an entire array, so a more detailed data-flow analysis, if we could accomplish it at all, would often tell us only that at a particular point in the program p might be pointing to any one of the elements of a anyway. If pointer p points to a primitive (one-word) data element, then any arithmetic operation on p produces a value that may be an integer, but not a pointer. If p points to an array, then addition or subtraction of an integer leaves p pointing somewhere in the same array, while other arithmetic operations on pointers produce a value that is not a pointer.

The Effects of Pointer Assignments
We shall refer to all variables that may hold pointer values simply as pointers. The effects of assignments on them are:
1. If there is an assignment statement s: p := &a, then immediately after s, p points only to a.
2. If there is an assignment statement s: p := q + c, where c is an integer other than zero and p and q are pointers, then immediately after s, p can point to any array that q could point to before s, but to nothing else.
3. If there is an assignment s: p := q, then immediately after s, p can point to whatever q could point to before s.
4. After any other assignment to p, there is no object that p could point to; such an assignment is probably (depending on the semantics of the language) meaningless.
5. After any assignment to a variable other than p, p points to whatever it did before the assignment. Note that this rule assumes that no pointer can point to a pointer. Relaxing this assumption does not make matters particularly more difficult, and we leave the generalization to the reader.
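The assignment rules above can be sketched as transfer functions on points-to sets. This is an illustrative model only: variable names, the statement encoding, and the `array_` naming convention for array objects are all invented here.

```python
# Sketch of the pointer-assignment rules as a transfer function trans(s):
# points_to[p] is the set of objects p may point to after the statement.

def trans(points_to, stmt):
    pt = {v: s.copy() for v, s in points_to.items()}   # rule 5: others keep
    kind = stmt[0]
    if kind == "addr":            # rule 1: p := &a
        _, p, a = stmt
        pt[p] = {a}
    elif kind == "arith":         # rule 2: p := q + c, c != 0
        _, p, q = stmt
        # p may point into any array q could point to, but nothing else
        pt[p] = {x for x in pt.get(q, set()) if x.startswith("array_")}
    elif kind == "copy":          # rule 3: p := q
        _, p, q = stmt
        pt[p] = set(pt.get(q, set()))
    elif kind == "other":         # rule 4: any other assignment to p
        _, p = stmt
        pt[p] = set()
    return pt

pt = {"q": {"array_a", "x"}}
pt = trans(pt, ("arith", "p", "q"))   # p can now point only into array_a
```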

Making Use of Pointer Information

Suppose we know in[B], the pointer information holding at the beginning of block B, and that there is a reference to pointer p inside B. Starting with in[B], apply trans_s for each statement s of block B that precedes the reference to p. This computation tells us what p could point to at the particular statement where that information is important. In each case we must consider in which direction errors are conservative, and we must use pointer information in such a way that only conservative errors are made. To see how this choice is made, consider two examples: reaching definitions and live-variable analysis. To calculate reaching definitions we can use Algorithm 10.2, but we need to know the values of kill and gen for a block.

Interprocedural Data-Flow Analysis
Until now, we have spoken of programs that are single procedures and therefore single flow graphs. We shall now see how to gather information from many interacting procedures. In interprocedural data-flow analysis we shall have to deal with aliases set up by parameters in procedure calls.

A Model of Code with Procedure Calls
To illustrate how aliasing might be dealt with, let us consider a language that permits recursive procedures, any of which may refer to both local and global variables. We require that every procedure have a flow graph with a single entry (the initial node) and a single return node, which causes control to pass back to the calling routine. If we are interested in computing reaching definitions, available expressions, or any of a number of other data-flow analyses, then at a call q(u,v) we must know whether the call might change the value of some variable. Note that we say might change, rather than will change.

Computing Aliases
Before we can answer the question of what variables might change in a given procedure, we must develop an algorithm for finding aliases. The approach we shall use here is a simple one.

Algorithm 10.12. Simple alias computation.
Input: A collection of procedures and global variables.
Output: An equivalence relation ≡ with the property that whenever there is a position in the program where x and y are aliases of one another, x ≡ y; the converse need not be true.
Method:
1. Rename variables, if necessary, so that no two procedures use the same formal parameter or local variable identifier, nor does a local, a formal, or a global share an identifier.
2. If there is a procedure p(x1, x2, ..., xn) and an invocation p(y1, y2, ..., yn) of that procedure, set xi ≡ yi for all i. That is, each formal parameter can be an alias of any of its corresponding actual parameters.
3. Take the reflexive and transitive closure of the actual-formal correspondences by adding
   a) x ≡ y whenever y ≡ x;
   b) x ≡ z whenever x ≡ y and y ≡ z for some y.
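Since ≡ is just the equivalence closure of the formal-actual pairs, Algorithm 10.12 can be sketched with union-find; two names are possible aliases exactly when they end up in the same class. The procedure and variable names below are illustrative.

```python
# Sketch of Algorithm 10.12: union each formal parameter with every actual
# it is bound to; the union-find structure gives the reflexive, symmetric,
# transitive closure for free.

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving for efficiency
        x = parent[x]
    return x

def union(x, y):                        # record "x may be an alias of y"
    parent[find(x)] = find(y)

# Suppose procedure p(x1) is invoked both as p(g) and as p(h): the formal
# x1 can alias either actual, so g, h, and x1 fall into one class.
union("x1", "g")
union("x1", "h")
aliased = find("g") == find("h")        # g and h are possible aliases
```

Note the conservatism the algorithm's output statement warns about: g and h need never actually denote the same location at run time; the closure only guarantees that genuine aliases are never missed.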