1.1: Compilers
A Compiler is a translator from one language, the input or source language, to another language, the output or target language. Often, but not always, the target language is an assembler language or the machine language for a computer processor.
Syntax Trees
Often the intermediate form produced by the front end is a syntax tree. In simple cases, such as that shown to the right
file:///D:/cc pdf/Compilers Class Notes.htm 4/133
1/26/12
this tree has constants and variables (and nil) for leaves and operators for internal nodes. The back end traverses the tree in Euler-tour order and generates code for each node. (This is quite oversimplified.)
We will be primarily focused on the second element of the chain, the compiler. Our target language will be assembly language.

Preprocessors

Preprocessors are normally fairly simple, as in the C language, providing primarily the ability to include files and expand macros. There are exceptions, however. IBM's PL/I, another Algol-like language, had quite an extensive preprocessor, which made available at preprocessor time much of the PL/I language itself (e.g., loops and, I believe, procedure calls). Some preprocessors essentially augment the base language to add additional capabilities. One could consider them compilers in their own right, having as source this augmented language (say, fortran augmented with statements for multiprocessor execution in the guise of fortran comments) and as target the original base language (in this case fortran). Often the preprocessor inserts procedure calls to implement the extensions at runtime.

Assemblers

Assembly code is a mnemonic version of machine code in which names, rather than binary values, are used for machine instructions and memory addresses. Some processors have fairly regular operations and as a result assembly code for them can be fairly natural and not too hard to understand. Other processors, in particular Intel's x86 line, have, let us charitably say, more interesting instructions, with certain registers used for certain things. My laptop has one of these latter processors (pentium 4) so my gcc compiler produces code that from a pedagogical viewpoint is less than ideal. If you have a mac with a ppc processor (newest macs are x86), your assembly language is cleaner. NYU's ACF features sun computers with sparc processors, which also have regular instruction sets.
Two pass assembly
No matter what the assembly language is, an assembler needs to assign memory locations to symbols (called identifiers) and use the numeric location address in the target machine language produced. Of course the same address must be used for all occurrences of a given identifier and two different identifiers must (normally) be assigned two different locations. The conceptually simplest way to accomplish this is to make two passes over the input (read it once, then read it again from the beginning). During the first pass, each time a new identifier is encountered, an address is assigned and the pair (identifier, address) is stored in a symbol table. During the second pass, whenever an identifier is encountered, its address is looked up in the symbol table and this value is used in the generated machine instruction.
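The two-pass idea can be sketched in C on a toy input format. The format is invented for illustration: a line is either a label definition such as "loop:" or an instruction, and every instruction occupies exactly one location.

```c
#include <assert.h>
#include <string.h>

#define MAXSYMS 32
static char sym_name[MAXSYMS][16];
static int  sym_addr[MAXSYMS];
static int  nsyms;

/* Return the address recorded for an identifier, or -1 if undefined. */
static int lookup(const char *s) {
    for (int i = 0; i < nsyms; i++)
        if (strcmp(sym_name[i], s) == 0) return sym_addr[i];
    return -1;
}

/* Pass 1: walk the lines once, maintaining a location counter and
   recording an (identifier, address) pair for each label definition. */
static void pass1(const char *lines[], int n) {
    int loc = 0;
    for (int i = 0; i < n; i++) {
        size_t len = strlen(lines[i]);
        if (len > 0 && lines[i][len-1] == ':') {    /* a label definition */
            strncpy(sym_name[nsyms], lines[i], len-1);
            sym_name[nsyms][len-1] = '\0';
            sym_addr[nsyms++] = loc;                /* label gets current loc */
        } else {
            loc++;                                  /* one slot per instruction */
        }
    }
}

/* Pass 2 would read the same lines again; each symbolic reference is
   simply replaced by lookup(identifier) in the emitted machine code. */
```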
A Trivial Assembler Program
Consider the following trivial C program that computes and returns the xor of the characters in a string.
    int xor(char s[])
    {
        int ans = 0;
        int i = 0;
        while (s[i] != 0) {
            ans = ans ^ s[i];
            i = i + 1;
        }
        return ans;
    }    /* native C speakers say char s */
You should be able to follow everything from xor: to ret. Indeed most of the rest can be omitted (.globl xor is needed). That is, the following assembly program gives the same results.
            .globl  xor
    xor:
            subl    $8, %esp
            movl    $0, 4(%esp)
            movl    $0, (%esp)
    .L2:
            movl    (%esp), %eax
            addl    12(%esp), %eax
            cmpb    $0, (%eax)
            je      .L3
            movl    (%esp), %eax
            addl    12(%esp), %eax
            movsbl  (%eax), %edx
            leal    4(%esp), %eax
            xorl    %edx, (%eax)
            movl    %esp, %eax
            incl    (%eax)
            jmp     .L2
    .L3:
            movl    4(%esp), %eax
            addl    $8, %esp
            ret
What is happening in this program?
1. The stack pointer originally points to the (unused) frame pointer. Just above is the one parameter S.
2. Allocate (by moving the stack pointer) and initialize the local variables.
3. Add S (an address) to I (giving the address of S[I]).
4. Compare the contents of the above address (i.e., S[I]) to 0 and break out if appropriate.
5. Calculate S[I] xor ans (not using the address just calculated, since I did not ask for optimization). The code actually performs the calculation in the memory location containing ans (ppc and sparc would do this in a register).
6. Increment I and loop.
7. When the loop ends, store the return value (in eax), restore the stack pointer, and return.

Lab assignment 1 is available on the class web site. The programming is trivial; you are just doing inclusive (i.e., normal) OR rather than the XOR I just did. The point of the lab is to give you a chance to become familiar with your compiler and assembler.

Linkers

Linkers, a.k.a. linkage editors, combine the output of the assembler for several different compilations. That is, the horizontal line of the diagram above should really be a collection of lines converging on the linker. The linker has another input, namely libraries, but to the linker the libraries look like other programs compiled and assembled. The two primary tasks of the linker are
1. Relocating relative addresses.
2. Resolving external references (such as the procedure xor() above).
Relocating relative addresses
The assembler processes one file at a time. Thus the symbol table produced while processing file A is independent of the symbols defined in file B, and conversely. Thus, it is likely that the same address will be used for different symbols in each program. The technical term is that the (local) addresses in the symbol table for file A are relative to file A; they must be relocated by the linker. This is accomplished by adding the starting address of file A (which in turn is the sum of the lengths of all the files processed previously in this run) to the relative address.
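In code, relocation is just this addition applied to every relative address recorded for the file. A sketch with invented data:

```c
#include <assert.h>

/* Relocation sketch: the addresses in a file's symbol table are
   relative to the start of that file; adding the file's load base
   (the sum of the lengths of all earlier files) makes them absolute. */
static void relocate(int addrs[], int n, int base) {
    for (int i = 0; i < n; i++)
        addrs[i] += base;               /* relative -> absolute */
}
```

For example, if earlier files occupy 100 locations, file A's relative addresses 0, 4, and 12 become 100, 104, and 112.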
Resolving external references
Assume procedure f, in file A, and procedure g, in file B, are compiled (and assembled) separately. Assume also that f invokes g. Since the compiler and assembler do not see g when processing f, it appears impossible for procedure f to know where in memory to find g. The solution is for the compiler to indicate in the output of the file A compilation that the address of g is needed. This is called a use of g. When processing file B, the compiler outputs the (relative) address of g. This is called the definition of g. The assembler passes this information to the linker. The simplest linker technique is to again make two passes. During the first pass, the linker records in its external symbol table (a table of external symbols, not a symbol table that is stored externally) all the definitions encountered.
During the second pass, every use can be resolved by access to the table. I will be covering the linker in more detail tomorrow at 5pm in 2250, OS Design.

Loaders

After the linker has done its work, the resulting executable file can be loaded by the operating system into central memory. The details are OS dependent. With early single-user operating systems all programs would be loaded into a fixed address (say 0) and the loader simply copies the file to memory. Today it is much more complicated since (parts of) many programs reside in memory at the same time. Hence the compiler/assembler/linker cannot know the real location for an identifier. Indeed, this real location can change. More information is given in any OS course (e.g., 2250 given Wednesdays at 5pm).
but not
x3 := y + 3 ;
would be grouped into
1. The identifier x3.
2. The assignment symbol :=.
3. The identifier x.
4. The plus sign.
5. The number 3.
6. The semicolon.
Note that non-significant blanks are normally removed during scanning. In C, most blanks are non-significant. Blanks inside strings are an exception. Note that identifiers, numbers, and the various symbols and punctuation can be defined without recursion (compare with parsing below).
would be parsed into the tree on the right. This parsing would result from a grammar containing rules such as
Note the recursive definition of expression (expr). Note also the hierarchical decomposition in the figure on the right. The division between scanning and parsing is somewhat arbitrary, but invariably if a recursive definition is involved, it is considered parsing not scanning.

Often we utilize a simpler tree called the syntax tree with operators as interior nodes and operands as the children of the operator. The syntax tree on the right corresponds to the parse tree above it.

(Technical point.) The syntax tree represents an assignment expression not an assignment statement. In C an assignment statement includes the trailing semicolon. That is, in C (unlike in Algol) the semicolon is a statement terminator not a statement separator.
Semantic analysis
There is more to a front end than simply syntax. The compiler needs semantic information, e.g., the types (integer, real, pointer to array of integers, etc.) of the objects involved. This enables checking for semantic errors and inserting type conversion where necessary. For example, if y was declared to be a real and x3 an integer, we need to insert (unary, i.e., one operand) conversion operators inttoreal and realtoint as shown on the right.
We just examined the first three phases. Modern, high-performance compilers are dominated by their extensive "optimization" phases, which occur before, during, and after code generation. Note that optimization is most assuredly an inaccurate, albeit standard, terminology, as the resulting code is not optimal.
Symbol-table management
As we have seen when discussing assemblers and linkers, a symbol table is used to maintain information about symbols. The compiler uses a symbol table to maintain information across phases as well as within each phase. One key item stored with each symbol is the corresponding type, which is determined during semantic analysis and used (among other places) during code generation.
x3 := y + 3 ;
into
id1 := id2 + 3 ;
where id is short for identifier. This is processed by the parser and semantic analyzer to produce the two trees shown above here and here. On some systems, the tree would not contain the symbols themselves as shown in the figures. Instead the tree would contain leaves of the form idi which in turn would refer to the corresponding entries in the symbol table.
We see that three-address code can include instructions with fewer than 3 operands. Sometimes three-address code is called quadruples because one can view the previous code sequence as
    inttoreal  temp1  3      -
    add        temp2  id2    temp1
    realtoint  temp3  temp2  -
    assign     id1    temp3  -
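One plausible in-memory representation of such quadruples (the struct layout and field names are illustrative, not from the notes) is an array of records, one per instruction, with "-" marking an unused operand slot:

```c
#include <assert.h>
#include <string.h>

/* A quadruple: an operator plus up to three operand names.
   The order (op, result, arg1, arg2) mirrors the listing above. */
struct quad {
    const char *op, *result, *arg1, *arg2;
};

static const struct quad code[] = {
    { "inttoreal", "temp1", "3",     "-"     },
    { "add",       "temp2", "id2",   "temp1" },
    { "realtoint", "temp3", "temp2", "-"     },
    { "assign",    "id1",   "temp3", "-"     },
};
```

Later phases (optimization, code generation) can then walk this array rather than re-reading text.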
Code optimization
This is a very serious subject, one that we will not really do justice to in this introductory course. Some optimizations are fairly easy to see. 1. Since 3 is a constant, the compiler can perform the int to real conversion and replace the first two quads with
    add  temp2  id2  3.0
Code generation
Modern processors have only a limited number of registers. Although some processors, such as the x86, can perform operations directly on memory locations, we will for now assume only register operations. Some processors (e.g., the MIPS architecture) use three-address instructions. However, some processors permit only two addresses; the result overwrites the second source. With these assumptions, code something like the following would be produced for our example, after first assigning memory locations to id1 and id2.
    MOVE  id2, R1
    ADD   #3.0, R1
    RTOI  R1, R2
    MOVE  R2, id1
Passes
The term pass is used to indicate that the entire input is read during this activity. So two passes means that the input is read twice. We have discussed two pass approaches for both assemblers and linkers. If we implement each phase separately and use multiple passes for some of them, the compiler will perform a large number of I/O operations, an expensive undertaking. As a result, techniques have been developed to reduce the number of passes. We will see in the next chapter how to combine the scanner, parser, and semantic analyzer into one phase. Consider the parser. When it needs another token, rather than reading the input file (presumably produced by the scanner), the parser calls the scanner instead. At selected points during the production of the syntax tree, the parser calls the code generator, which performs semantic analysis as well as generating a portion of the intermediate code.
We will study tools that generate scanners and parsers. This will involve us in some theory, regular expressions for scanners and various grammars for parsers. These techniques are fairly successful. One drawback can be that they do not execute as fast as hand-crafted scanners and parsers. We will also see tools for syntax-directed translation and automatic code generation. The automation in these cases is not as complete. Finally, there is the large area of optimization. This is not automated; however, a basic component of optimization is dataflow analysis (how are values transmitted between parts of a program) and there are tools to help with this task.
2.1: Overview
The source language is infix expressions consisting of digits, +, and -; the target is postfix expressions with the same components. The compiler will convert 7+4-5 to 74+5-. Actually, our simple compiler will handle a few other operators as well. We will "tokenize" the input (i.e., write a scanner), model the syntax of the source, and let this syntax direct the translation.
Example:
    Terminals: 0 1 2 3 4 5 6 7 8 9 + -
    Nonterminals: list digit
    Productions:
        list → list + digit
        list → list - digit
        list → digit
        digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
    Start symbol: list
Watch how we can generate the input 7+4-5 starting with the start symbol, applying productions, and stopping when no productions are possible (we have only terminals).
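The animation is missing from this copy; one possible derivation of 7+4-5, always expanding a nonterminal until only terminals remain, is:

```latex
list \Rightarrow list - digit        % apply list -> list - digit
     \Rightarrow list - 5            % apply digit -> 5
     \Rightarrow list + digit - 5    % apply list -> list + digit
     \Rightarrow list + 4 - 5        % apply digit -> 4
     \Rightarrow digit + 4 - 5       % apply list -> digit
     \Rightarrow 7 + 4 - 5           % apply digit -> 7
```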
It is important that you see that this context-free grammar generates precisely the set of infix expressions with digits (so 25 is not allowed) as operands and + and - as operators. The way you get different final expressions is that you make different choices of which production to apply. There are 3 productions you can apply to list and 10 you can apply to digit. The input cannot have blanks since blank is not a terminal. The empty string is not a legal input since, starting from list, we cannot get to the empty string. If we wanted to include the empty string, we would add the production
    list → ε
Homework: 2.1a, 2.1c, 2.2a-c (don't worry about justifying your answers).
Parse trees
The compiler front end runs the above procedure in reverse! It starts with the string 7+4-5 and gets back to list (the start symbol). Reaching the start symbol means that the string is in the language generated by the grammar. While running the procedure in reverse, the front end builds up the parse tree on the right. You can read off the productions from the tree. For any internal (i.e., non-leaf) tree node, its children give the right hand side (RHS) of a production having the node itself as the LHS. The leaves of the tree, read from left to right, are called the yield of the tree. We call the tree a derivation of its yield from its root. The tree on the right is a derivation of 7+4-5 from list. Homework: 2.1b
Ambiguity
An ambiguous grammar is one in which there are two or more parse trees yielding the same final string. We wish to avoid such grammars. The grammar above is not ambiguous. For example 1+2+3 can be parsed only one way; the arithmetic must be done left to right. Note that I am not giving a rule of arithmetic, just of this grammar. If you reduced 2+3 to list you would be stuck since it is impossible to generate 1+list. Homework: 2.3 (applied only to parts a, b, and c of 2.2)
Associativity of operators
Our grammar gives left associativity. That is, if you traverse the tree in postorder and perform the indicated arithmetic you will evaluate the string left to right. Thus 8-8-8 would evaluate to -8. If you wished to generate right associativity (normally exponentiation is right associative, so 2**3**2 gives 512 not 64), you would change the first two productions to

    list → digit + list
    list → digit - list
Precedence of operators
We normally want * to have higher precedence than +. We do this by using an additional nonterminal to indicate the items that have been multiplied. The example below gives the four basic arithmetic operations their normal precedence unless overridden by parentheses. Redundant parentheses are permitted. Equal precedence operations are performed left to right.
    expr → expr + term | expr - term | term
    term → term * factor | term / factor | factor
    factor → digit | ( expr )
    digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
We use | to indicate that a nonterminal has multiple possible right hand sides. So

    A → B | C
Do the examples 1+2/3-4*5 and (1+2)/3-4*5 on the board. Note how the precedence is enforced by the grammar; slick!

Statements

Keywords are very helpful for distinguishing statements from one another.
    stmt → id := expr
         | if expr then stmt
         | if expr then stmt else stmt
         | while expr do stmt
         | begin opt-stmts end
    opt-stmts → stmt-list | ε
    stmt-list → stmt-list ; stmt | stmt
Remark: opt-stmts stands for optional statements. The begin-end block can be empty in some languages. The epsilon stands for the empty string. The use of epsilon productions will add complications. Some languages do not permit empty blocks; e.g., Ada has a null statement, which does nothing when executed, for this purpose. The above grammar is ambiguous! The notorious dangling else problem. How do you parse if x then if y then z=1 else z=2? Homework: 2.16a, 2.16b
Postfix notation
Operator after operand. Parentheses are not needed. The normal notation we use is called infix. If you start with an infix expression, the following algorithm will give you the equivalent postfix expression.
1. Variables and constants are left alone.
2. E op F becomes E' F' op, where E' and F' are the postfix of E and F.
3. ( E ) becomes E', where E' is the postfix of E.
One question is, given say 1+2-3, what is E, F and op? Does E=1+2, F=3, and op=-? Or does E=1, F=2-3 and op=+? This is the issue of precedence mentioned above. To simplify the present discussion we will start with fully parenthesized infix expressions.

Example: 1+2/3-4*5
1. Start with 1+2/3-4*5
2. Parenthesize (using standard precedence) to get (1+(2/3))-(4*5)
3. Apply the above rules to calculate P{(1+(2/3))-(4*5)}, where P{X} means convert the infix expression X to postfix.
   A. P{(1+(2/3))-(4*5)}
   B. P{(1+(2/3))} P{(4*5)} -
   C. P{1+(2/3)} P{4*5} -
   D. P{1} P{2/3} + P{4} P{5} * -
   E. 1 P{2} P{3} / + 4 5 * -
   F. 1 2 3 / + 4 5 * -

Example: Now do (1+2)/3-4*5
1. Parenthesize to get ((1+2)/3)-(4*5)
2. Calculate P{((1+2)/3)-(4*5)}
   A. P{((1+2)/3)} P{(4*5)} -
   B. P{(1+2)/3} P{4*5} -
   C. P{(1+2)} P{3} / P{4} P{5} * -
   D. P{1+2} 3 / 4 5 * -
   E. P{1} P{2} + 3 / 4 5 * -
   F. 1 2 + 3 / 4 5 * -
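The three rules translate directly into a recursive C function, assuming the expression is fully parenthesized including the outermost operator (e.g., "((1+(2/3))-(4*5))"). Redundant parentheses are not handled in this sketch.

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

static const char *src;     /* next unread input character */
static char *out;           /* next free output position   */

static void P(void) {                   /* P{X} from the notes */
    if (isdigit((unsigned char)*src)) {
        *out++ = *src++;                /* a digit is its own postfix */
    } else {                            /* must be ( E op F )  */
        src++;                          /* consume '('         */
        P();                            /* E'                  */
        char op = *src++;               /* remember the operator */
        P();                            /* F'                  */
        src++;                          /* consume ')'         */
        *out++ = op;                    /* emit the operator last */
    }
}

static void topostfix(const char *infix, char *postfix) {
    src = infix;
    out = postfix;
    P();
    *out = '\0';
}
```

Running it on "((1+(2/3))-(4*5))" reproduces the result 123/+45*- worked out above.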
Syntax-directed definitions
We want to decorate the parse trees we construct with annotations that give the value of certain attributes of the corresponding node of the tree. We will do the example of translating infix to postfix with 1+2/3-4*5. We use the following grammar, which follows the normal arithmetic terminology where one multiplies and divides factors to obtain terms, which in turn are added and subtracted to form expressions.
    expr → expr + term | expr - term | term
    term → term * factor | term / factor | factor
    factor → digit | ( expr )
    digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
This grammar supports parentheses, although our example does not use them. On the right is a movie in which the parse tree is built from this example. The attribute we will associate with the nodes is the text to be used to print the postfix form of the string in the leaves below the node. In particular the value of this attribute at the root is the postfix form of the entire source.
The book does a simpler grammar (no *, /, or parentheses) for a simpler example. You might find that one easier. The book also does another grammar describing commands to give a robot to move north, east, south, or west by one unit at a time. The attributes associated with the nodes are the current position (for some nodes, including the root) and the change in position caused by the current command (for other nodes). Definition: A syntax-directed definition is a grammar together with a set of semantic rules for computing the attribute values. A parse tree augmented with the attribute values at each node is called an annotated parse tree.
Synthesized Attributes
For the bottom-up approach I will illustrate now, we annotate a node after having annotated its children. Thus the attribute values at a node can depend on the children of the node but not the parent of the node. We call these synthesized attributes, since they are formed by synthesizing the attributes of the children. In chapter 5, when we study top-down annotations as well, we will introduce inherited attributes that are passed down from parents to children. We specify how to synthesize attributes by giving the semantic rules together with the grammar. That is, we give the syntax-directed definition.

    Production              Semantic Rule
    expr → expr1 + term     expr.t := expr1.t || term.t || '+'
    expr → expr1 - term     expr.t := expr1.t || term.t || '-'
    expr → term             expr.t := term.t
    term → term1 * factor   term.t := term1.t || factor.t || '*'
    term → term1 / factor   term.t := term1.t || factor.t || '/'
    term → factor           term.t := factor.t
    factor → digit          factor.t := digit.t
    factor → ( expr )       factor.t := expr.t
    digit → 0               digit.t := '0'
    digit → 1               digit.t := '1'
    digit → 2               digit.t := '2'
    digit → 3               digit.t := '3'
    digit → 4               digit.t := '4'
    digit → 5               digit.t := '5'
    digit → 6               digit.t := '6'
    digit → 7               digit.t := '7'
    digit → 8               digit.t := '8'
    digit → 9               digit.t := '9'
We apply these rules bottom-up (starting with the geographically lowest productions, i.e., the lowest lines on the page) and get the annotated graph shown on the right. The annotations are drawn in green. Homework: Draw the annotated graph for (1+2)/3-4*5.
Depth-first traversals
As mentioned in this chapter we are annotating bottom-up. This corresponds to doing a depth-first traversal of the (unannotated) parse tree to produce the annotations. It is often called a postorder traversal because a parent is visited after (i.e., post) its children are visited.
Translation schemes
The bottom-up annotation scheme generates the final result as the annotation of the root. In our infix to postfix example we get the result desired by printing the root annotation. Now we consider another technique that produces its results incrementally. Instead of giving semantic rules for each production (and thereby generating annotations) we can embed program fragments called semantic actions within the productions themselves. In diagrams the semantic action is connected to the node with a distinctive, often dotted, line. The placement of the actions determines the order they are performed. Specifically, one executes the actions in the order they are encountered in a postorder traversal of the tree. Definition: A syntax-directed translation scheme is a context-free grammar with embedded semantic actions. For our infix to postfix translator, the parent either just passes on the attribute of its (only) child or concatenates them left to right and adds something at the end. The equivalent semantic actions would be either to print the new item or print nothing.
Emitting a translation
Here are the semantic actions corresponding to a few of the rows of the table above. Note that the actions are enclosed in {}.
    expr → expr + term     { print('+') }
    expr → expr - term     { print('-') }
    term → term / factor   { print('/') }
    term → factor          { null }
    digit → 3              { print('3') }
The diagram for 1+2/3-4*5 with attached semantic actions is shown on the right. Given an input, e.g. our favorite 1+2/3-4*5, we just do a depth first (postorder) traversal of the corresponding diagram and perform the semantic actions as they occur. When these actions are print statements as above, we can be said to be emitting the translation. Do a depth first traversal of the diagram on the board, performing the semantic actions as they occur, and confirm that the
translation emitted is in fact 123/+45*-, the postfix version of 1+2/3-4*5.

Homework: Produce the corresponding diagram for (1+2)/3-4*5.

Prefix to infix translation

When we produced postfix, all the prints came at the end (so that the children were already printed). The { actions } do not need to come at the end. We illustrate this by producing infix arithmetic (ordinary) notation from a prefix source. In prefix notation the operator comes first so +1-23 evaluates to zero. Consider the following grammar. It translates prefix to infix for the simple language consisting of addition and subtraction of digits between 1 and 3 without parentheses (prefix notation and postfix notation do not use parentheses). The resulting parse tree for +1-23 is shown on the right. Note that the output language (infix notation) has parentheses.
    rest → + term rest | - term rest | term
    term → 1 | 2 | 3
The table below shows the semantic actions or rules needed for our translator.

    Production with Semantic Action                                    Semantic Rule
    rest → { print('(') } + term { print('+') } rest { print(')') }    rest.t := '(' || term.t || '+' || rest.t || ')'
    rest → { print('(') } - term { print('-') } rest { print(')') }    rest.t := '(' || term.t || '-' || rest.t || ')'
    rest → term                                                        rest.t := term.t
    term → 1 { print('1') }                                            term.t := '1'
    term → 2 { print('2') }                                            term.t := '2'
    term → 3 { print('3') }                                            term.t := '3'

Homework: 2.8.

Simple syntax-directed definitions

If the semantic rules of a syntax-directed definition all have the property that the new annotation for the left hand side (LHS) of the production is just the concatenation of the annotations for the nonterminals on the RHS in the same order as the nonterminals appear in the production, we call the syntax-directed definition simple. It is still called simple if new strings are interleaved with the original annotations. So the example just done is a simple syntax-directed definition. Remark: We shall see later that, in many cases, a simple syntax-directed definition permits one to execute the semantic actions while parsing and not construct the parse tree at all.
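The translation scheme can be sketched in C: one procedure per nonterminal, with each { action } becoming an output statement at the corresponding point. Instead of printing, this sketch writes into a buffer so the output can be inspected.

```c
#include <assert.h>
#include <string.h>

static const char *in;   /* next prefix character */
static char *outp;       /* output cursor         */

static void emit(char c) { *outp++ = c; }

static void term(void) {                 /* term -> 1 | 2 | 3 */
    emit(*in++);                         /* { print(digit) }  */
}

static void rest(void) {
    if (*in == '+' || *in == '-') {      /* rest -> op term rest */
        char op = *in++;
        emit('(');                       /* { print('(') } */
        term();
        emit(op);                        /* { print(op) }  */
        rest();
        emit(')');                       /* { print(')') } */
    } else {
        term();                          /* rest -> term   */
    }
}

static void prefix_to_infix(const char *prefix, char *infix) {
    in = prefix;
    outp = infix;
    rest();
    *outp = '\0';
}
```

On the prefix input +1-23 this produces the parenthesized infix (1+(2-3)), matching the parse tree described above.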
2.4: Parsing
Objective: Given a string of tokens and a grammar, produce a parse tree yielding that string (or at least determine if such a tree exists). We will learn both top-down (begin with the start symbol, i.e. the root of the tree) and bottom up (begin with the leaves) techniques. In the remainder of this chapter we just do top down, which is easier to implement by hand, but is less general. Chapter 4 covers both approaches. Tools (so called parser generators) often use bottom-up techniques. In this section we assume that the lexical analyzer has already scanned the source input and converted it into a sequence of tokens.
Top-down parsing
Consider the following simple language, which derives a subset of the types found in the (now somewhat dated) programming language Pascal. I am using the same example as the book so that the compiler code they give will be applicable. We have two nonterminals, type, which is the start symbol, and simple, which represents the simple types. There are 8 terminals, which are tokens produced by the lexer and correspond closely with constructs in pascal itself. I do not assume you know pascal. (The authors appear to assume the reader knows pascal, but do not assume knowledge of C.) Specifically, we have
1. integer and char
2. id for identifier
3. array and of, used in array declarations
4. ↑, meaning pointer to
5. num for a (positive whole) number
6. dotdot for .. (used to give a range like 6..9)
Parsing is easy in principle and for certain grammars (e.g., the two above) it actually is easy. The two fundamental steps (we start at the root since this is top-down parsing) are
1. At the current (nonterminal) node, select a production whose LHS is this nonterminal and whose RHS matches the input at this point. Make the RHS the children of this node (one child per RHS symbol).
2. Go to the next node needing a subtree.
When programmed this becomes a procedure for each nonterminal that chooses a production for the node and calls procedures for each nonterminal in the RHS. Thus it is recursive in nature and descends the parse tree. We call these parsers recursive descent. The big problem is what to do if the current node is the LHS of more than one production. The small problem is what do we mean by the next node needing a subtree. The easiest solution to the big problem would be to assume that there is only one production having a given nonterminal as LHS. There are two possibilities
1. No circularity. For example
    expr → term + term - 9
    term → factor / factor
    factor → digit
    digit → 7
2. Circularity
    expr → term + term
    term → factor / factor
    factor → ( expr )
This is even worse; there are no (finite) sentences. Only an infinite sentence beginning (((((((((. So this won't work. We need to have multiple productions with the same LHS. How about trying them all? We could do this! If we get stuck where the current tree cannot match the input we are trying to parse, we would backtrack. Instead, we will look ahead one token in the input and only choose productions that can yield a result starting with this token. Furthermore, we will (in this section) restrict ourselves to predictive parsing in which there is only one production that can yield a result starting with a given token. This solution to the big problem also solves the small problem. Since we are trying to match the next token in the input, we must choose the leftmost (nonterminal) node to give children to.
Predictive parsing
Let's return to the pascal array type grammar and consider the three productions having type as LHS. Even when I write the short form

    type → simple | ↑ id | array [ simple ] of type
I view it as three productions. For each production P we wish to consider the set FIRST(P) consisting of those tokens that can appear as the first symbol of a string derived from the RHS of P. We actually define FIRST(RHS) rather than FIRST(P), but I often say first set of the production when I should really say first set of the RHS of the production.

Definition: Let r be the RHS of a production P. FIRST(r) is the set of tokens that can appear as the first symbol in a string derived from r.

To use predictive parsing, we make the following

Assumption: Let P and Q be two productions with the same LHS. Then FIRST(P) and FIRST(Q) are disjoint.

Thus, if we know both the LHS and the token that must be first, there is (at most) one production we can apply. BINGO!

An example of predictive parsing

This table gives the FIRST sets for our pascal array type example.

    Production                        FIRST
    type → simple                     { integer, char, num }
    type → ↑ id                       { ↑ }
    type → array [ simple ] of type   { array }
    simple → integer                  { integer }
    simple → char                     { char }
    simple → num dotdot num           { num }
The three productions with type as LHS have disjoint FIRST sets. Similarly the three productions with simple as LHS have disjoint FIRST sets. Thus predictive parsing can be used. We process the input left to right and call the current token lookahead since it is how far we are looking ahead in the input to determine the production to use. The movie on the right shows the process in action. Homework: A. Construct the corresponding table for
rest → + term rest
     | - term rest
     | term
term → 1 | 2 | 3
ε-productions
Not all grammars are as friendly as the last example. The first complication is when ε occurs in a RHS. If this happens, or if the RHS can generate ε, then ε is included in FIRST. But ε would always match the current input position! The rule is that if lookahead is not in FIRST of any production with the desired LHS, we use the (unique!) production (with that LHS) that has ε as RHS. The second edition, which I just obtained, does a C instead of a pascal example. The productions are
stmt → expr ;
     | if ( expr ) stmt
     | for ( optexpr ; optexpr ; optexpr ) stmt
     | other
optexpr → ε
     | expr
For completeness, on the right is the beginning of a movie for the C example. Note the use of the ε-production at the end, since no other entry in FIRST will match ;
Left Recursion
Another complication. Consider
expr → expr + term
expr → term
For the first production the RHS begins with the LHS. This is called left recursion. If a recursive descent parser would pick
this production, the result would be that the next node to consider is again expr and the lookahead has not changed. An infinite loop occurs. Consider instead
expr → term rest
rest → + term rest
rest → ε
Both pairs of productions generate the same possible token strings, namely
term + term + ... + term
The second pair is called right recursive since the RHS ends with (has on the right) the LHS. If you draw the parse trees generated, you will see that, for left recursive productions, the tree grows to the left; whereas, for right recursive, it grows to the right. Note also that, according to the trees generated by the first pair, the additions are performed right to left; whereas, for the second pair, they are performed left to right. That is, for
term + term + term
the tree from the first pair has the left + at the top (why?); whereas, the tree from the second pair has the right + at the top. In general, for any A, R, α, and β, we can replace the pair

A → A α | β
One problem that we must solve is that this grammar is left recursive.
with the right-recursive pair

A → β R
R → α R | ε
This time we have actions so, for example, we have

rest → + term { print('+') } rest

However, the formulas still hold and we get
expr → term rest
rest → + term { print('+') } rest
     | - term { print('-') } rest
     | ε
term → 0 { print('0') }
     ...
     | 9 { print('9') }
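The grammar with actions above translates directly into recursive-descent C code. The sketch below (function and variable names are mine, not from the notes) appends to a buffer instead of printing, so the postfix output is easy to inspect; it assumes single-digit operands exactly as in the grammar.

```c
/* Recursive-descent translator for:
   expr -> term rest
   rest -> + term {print('+')} rest | - term {print('-')} rest | epsilon
   term -> 0 {print('0')} | ... | 9 {print('9')}                        */
static const char *in;   /* remaining input  */
static char *out;        /* output cursor    */

static void emitc(char c) { *out++ = c; *out = '\0'; }

static void term(void) {                 /* term -> digit, print it */
    if (*in >= '0' && *in <= '9') emitc(*in++);
}

static void rest(void) {
    if (*in == '+')      { in++; term(); emitc('+'); rest(); }
    else if (*in == '-') { in++; term(); emitc('-'); rest(); }
    /* else: the epsilon production, emit nothing */
}

char *expr_to_postfix(const char *s) {   /* expr -> term rest */
    static char buf[64];
    in = s; out = buf; *out = '\0';
    term(); rest();
    return buf;
}
```

Note how each semantic action { print(...) } becomes an emitc() call placed at the same position in the code as in the production.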
2.6.3: Constants
This chapter considers only numerical integer constants. They are computed one digit at a time by value=10*value+digit. The parser will therefore receive the token num rather than a sequence of digits. Recall that our previous parsers considered only one digit numbers. The value of the constant is stored as the attribute of the token num. Indeed <token,attribute> pairs are passed from the scanner to the parser.
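A minimal sketch of this digit-at-a-time accumulation (the function name is my own):

```c
/* Accumulate the attribute of a num token one digit at a time using
   value = 10*value + digit; stops at the first non-digit character. */
int scan_num(const char *s) {
    int value = 0;
    while (*s >= '0' && *s <= '9') {
        value = 10 * value + (*s - '0');   /* shift in the next digit */
        s++;
    }
    return value;   /* this becomes the attribute of the token num */
}
```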
contains 4 tokens. The scanner will convert the input into id = id + id ; (id standing for identifier). Although there are three id tokens, the first and second represent the lexeme sum; the third represents x. These must be distinguished. Many language keywords, for example then, are syntactically the same as identifiers. These also must be distinguished. The symbol table will accomplish these tasks. Care must be taken when one lexeme is a proper subset of another. Consider x < y versus x <= y. When the < is read, the scanner needs to read another character to see if it is an =. But if that second character is y, the current token is < and the y must be pushed back onto the input stream so that the configuration is the same after scanning < as it is after scanning <=. Also consider then versus thenvalue; one is a keyword and the other an id.
Interface
As indicated the scanner reads characters and occasionally pushes one back to the input stream. The downstream interface is to the parser to which <token,attribute> pairs are passed.
In anticipation of other operators with higher precedence, we introduce factor and, for good measure, include parentheses for overriding the precedence. So our grammar becomes.
expr → expr + term          { print('+') }
expr → expr - term          { print('-') }
expr → term
term → factor
factor → ( expr ) | num     { print(num.value) }
The factor() procedure follows the familiar recursive descent pattern: find a production with lookahead in FIRST and do what the RHS says.
Interface
insert(s,t) returns the index of a new entry storing the pair (lexeme s, token t).
lookup(s) returns the index of the entry for s, or 0 if s is not there.
Reserved keywords
Simply insert them into the symbol table prior to examining any input. Then they can be found when used correctly and,
since their corresponding token will not be id, any use of them where an identifier is required can be flagged.
insert("div", div)
Implementation
Probably the simplest would be
struct symtableType {
    char lexeme[BIGNUMBER];
    int  token;
} symtable[ANOTHERBIGNUMBER];
The space inefficiency of having a fixed size entry for all lexemes is poor, so the authors use a (standard) technique of concatenating all the strings into one big string and storing pointers to the beginning of each of the substrings.
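A rough sketch of this string-pool technique, with my own names and fixed sizes. Unlike the book's lookup, this one returns -1 rather than 0 when the lexeme is absent.

```c
#include <string.h>

/* All lexemes live concatenated ('\0'-separated) in one pool; each
   symbol-table entry stores only the offset of its lexeme and a token. */
#define POOLSIZE 1000
#define MAXSYMS  100

static char pool[POOLSIZE];
static int  poolused = 0;
static struct { int lexptr; int token; } symtable[MAXSYMS];
static int  nsyms = 0;

int insert(const char *s, int token) {     /* returns index of new entry */
    strcpy(&pool[poolused], s);
    symtable[nsyms].lexptr = poolused;
    symtable[nsyms].token  = token;
    poolused += (int)strlen(s) + 1;        /* keep the terminating '\0' */
    return nsyms++;
}

int lookup(const char *s) {                /* index of s, or -1 if absent */
    for (int i = 0; i < nsyms; i++)
        if (strcmp(&pool[symtable[i].lexptr], s) == 0) return i;
    return -1;
}
```

Each entry now costs a fixed small struct plus exactly the characters of its lexeme, instead of a worst-case fixed-size array.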
Arithmetic instructions
An instruction for each simple op (e.g., add, mul). Complicated ops (e.g., sqrt) require several instructions. We assume an instruction exists for each of the ops we use. The instruction consumes one or two operands from the tos and places the result on the tos.
Stack manipulation
push v      push v (onto stack)
rvalue l    push contents of (location) l
lvalue l    push address of l
pop         pop
:=          r-value on tos put into the location specified by l-value 2nd on the stack; both are popped
Translating expressions
Machine instructions to evaluate an expression mimic the postfix form of the expression. That is, we generate code to evaluate the left operand, then code to evaluate the right operand, and finally the code to evaluate the operation itself. For example, y := 7 * x + 6 * (z + w) becomes
lvalue y
push 7
rvalue x
*
push 6
rvalue z
rvalue w
+
*
+
:=
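To check that this sequence really computes y := 7 * x + 6 * (z + w), here is a toy interpreter for the abstract stack machine. The memory layout (one slot per lowercase variable) and all names are my own illustration, not the notes'.

```c
/* Toy abstract stack machine: memory locations 0..25 stand for the
   lowercase variables a..z. */
#define NLOC 26
static int mem[NLOC];
static int stack[100], sp = 0;

static void push(int v)     { stack[sp++] = v; }
static void rvalue(int loc) { push(mem[loc]); }     /* push contents  */
static void lvalue(int loc) { push(loc); }          /* push address   */
static void add(void) { int b = stack[--sp]; stack[sp-1] += b; }
static void mul(void) { int b = stack[--sp]; stack[sp-1] *= b; }
static void assign(void) {   /* := store r-value through the l-value below it */
    int r = stack[--sp], l = stack[--sp];
    mem[l] = r;
}

int run_example(int x, int z, int w) {   /* the emitted code for y := 7*x + 6*(z+w) */
    mem['x'-'a'] = x; mem['z'-'a'] = z; mem['w'-'a'] = w;
    lvalue('y'-'a');
    push(7); rvalue('x'-'a'); mul();
    push(6); rvalue('z'-'a'); rvalue('w'-'a'); add(); mul();
    add();
    assign();
    return mem['y'-'a'];
}
```

With x=2, z=3, w=4 the machine leaves 7*2 + 6*(3+4) = 56 in y.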
To say this more formally, we define two attributes. For any nonterminal, the attribute t gives its translation and, for the terminal id, the attribute lexeme gives its string representation. Assuming we have already given the semantic rules for expr (i.e., assuming that the annotation expr.t is known to contain the translation for expr), then the semantic rule for the assignment statement is
stmt → id := expr    { stmt.t := 'lvalue' || id.lexeme || expr.t || ':=' }
Control flow
There are several ways of specifying conditional and unconditional jumps. We choose the following 5 instructions. The simplifying assumption is that the abstract machine supports symbolic labels. The back end of the compiler would have to translate this into machine instructions for the actual computer, e.g., absolute or relative jumps (jump 3450 or jump +500).

label l     target of jump
goto l
gofalse l   pop stack; jump if value is false
gotrue l    pop stack; jump if value is true
halt
Emitting a translation
Rewriting the above as a semantic action (rather than a rule) we get the following, where emit() is a function that prints its arguments in whatever form is required for the abstract machine (e.g., it deals with line length limits, required whitespace, etc).
stmt → if expr { out := newlabel; emit('gofalse', out) }
       then stmt1 { emit('label', out) }
Don't forget that expr is itself a nonterminal. So by the time we reach out := newlabel, we will have already parsed expr and thus will have done any associated actions, such as emit()'ing instructions. These instructions will have left a boolean on the tos. It is this boolean that is tested by the emitted gofalse. More precisely, the action written to the right of expr will be the third child of stmt in the tree. Since a postorder traversal visits the children in order, the second child expr will have been visited (just) prior to visiting the action. Pseudocode for stmt (fig 2.34). Look how simple it is! Don't forget that the FIRST sets for the productions having stmt as LHS are disjoint!
procedure stmt
    integer test, out;
    if lookahead = id then
        // first set is {id} for assignment
        emit('lvalue', tokenval);   // pushes lvalue of lhs
        match(id);                  // move past the lhs
        match(':=');                // move past the :=
        expr;                       // pushes rvalue of rhs on tos
        emit(':=');                 // do the assignment (Omitted in book)
    else if lookahead = 'if' then
        match('if');                // move past the if
        expr;                       // pushes boolean on tos
        out := newlabel();
        emit('gofalse', out);       // out is integer, emit makes a legal label
        match('then');              // move past the then
        stmt;                       // recursive call
        emit('label', out)          // emit again makes out legal
    else if ...                     // while, repeat/do, etc.
    else error();
end stmt;
Description
The grammar with semantic actions is as follows. All the actions come at the end since we are generating postfix; this is not always the case.
start → list eof
list → expr ; list
     | ε
expr → expr + term          { print('+') }
     | expr - term          { print('-') }
     | term
term → term * factor        { print('*') }
     | term / factor        { print('/') }
     | term div factor      { print('DIV') }
     | term mod factor      { print('MOD') }
     | factor
factor → ( expr )
     | id
     | num
lexer.c

Contains lexan(), the lexical analyzer, which is called by the parser to obtain the next token. The attribute value is assigned to tokenval and white space is stripped.

lexeme                                         token       attribute value
white space                                    -           -
sequence of digits                             NUM         numeric value
div                                            DIV         -
mod                                            MOD         -
other seq of a letter then letters and digits  ID          index into symbol table
eof char                                       DONE        -
other char                                     that char   NONE
parser.c
Using a recursive descent technique, one writes routines for each nonterminal in the grammar. In fact the book combines term and morefactors into one routine.
term() {
    int t;
    factor();
    // now we should call morefactors(), but instead code it inline
    while (true)                      // morefactor nonterminal is right recursive
        switch (lookahead) {          // lookahead set by match()
        case '*': case '/': case DIV: case MOD:   // all the same
            t = lookahead;            // needed for emit() below
            match(lookahead);         // skip over the operator
            factor();                 // see grammar for morefactors
            emit(t, NONE);
            continue;                 // C semantics for case
        default:                      // the epsilon production
            return;
        }
}
The insert(s,t) and lookup(s) routines described previously are in symbol.c The routine init() preloads the symbol table with the defined keywords.
error.c
Does almost nothing. The only help is that the line number, calculated by lexan() is printed.
Two Questions
1. How come this compiler was so easy?
2. Why isn't the final exam next week?

One reason is that much was deliberately simplified. Specifically note that:

No real machine code generated (no back end).
No optimizations (improvements to generated code).
FIRST sets disjoint.
No semantic analysis.
Input language very simple.
Output language very simple and closely related to input.

Also, I presented the material way too fast to expect full understanding.
The lexer also might do some housekeeping such as eliminating whitespace and comments. Some call these tasks scanning, but others call the entire task scanning. After the lexer, individual characters are no longer examined by the compiler; instead tokens (the output of the lexer) are used.
do when the input is erroneous and we get to a point where no production can be applied? The simplest solution is to abort the compilation stating that the program is wrong, perhaps giving the line number and location where the parser could not proceed. We would like to do better and at least find other errors. We could perhaps skip input up to a point where we can begin anew (e.g. after a statement ending semicolon), or perhaps make a small change to the input around lookahead so that we can proceed.
3.2.2: Sentinels
A useful programming improvement is to combine testing for the end of a buffer with determining the character read.
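A small sketch of the sentinel idea in C: a sentinel character (here '\0', standing in for eof) sits just past the buffered text, so the inner loop tests only the character itself rather than also comparing an index against the buffer length. Names and sizes are my own.

```c
#include <string.h>

#define BUFSZ 16

static char buf[BUFSZ + 1];      /* one extra slot for the sentinel */
static int  forward;

void load_buffer(const char *src) {
    strncpy(buf, src, BUFSZ);
    buf[BUFSZ] = '\0';           /* sentinel is always present */
    forward = 0;
}

/* Count letters: only one comparison per character in the common case;
   the end-of-buffer check happens only when the sentinel is hit.     */
int count_letters(void) {
    int n = 0;
    for (;;) {
        char c = buf[forward++];
        if (c == '\0') break;    /* sentinel: refill buffer or stop */
        if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')) n++;
    }
    return n;
}
```

In a real lexer the sentinel branch would check whether this is a true eof or merely the end of one buffer half needing a refill.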
over unicode.

Definition: The concatenation of strings s and t is the string formed by appending the string t to s. It is written st.

Example: εs = sε = s for any string s.

We view concatenation as a product (see Monoid in wikipedia http://en.wikipedia.org/wiki/Monoid). It is thus natural to define s⁰ = ε and sⁱ⁺¹ = sⁱs.

Example: s¹ = s, s⁴ = ssss.

More string terminology

A prefix of a string is a portion starting from the beginning and a suffix is a portion ending at the end. More formally,

Definitions: A prefix of s is any string obtained from s by removing (possibly zero) characters from the end of s. A suffix is defined analogously, and a substring of s is obtained by deleting a prefix and a suffix.

Example: If s is 123abc, then (1) s itself and ε are each a prefix, suffix, and substring. (2) 12 and 123a are prefixes. (3) 3abc is a suffix. (4) 23a is a substring.

Definitions: A proper prefix of s is a prefix of s other than ε and s itself. Similarly, proper suffixes and proper substrings of s do not include ε and s.

Definition: A subsequence of s is formed by deleting (possibly zero) positions from s. We say positions rather than characters since s may for example contain 5 occurrences of the character Q and we only want to delete a certain 3 of them.

Example: issssii is a subsequence of Mississippi.

Homework: 3.1b, 3.5 (c and e are optional).
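The prefix and suffix definitions translate directly into C (the helper names are my own):

```c
#include <string.h>

/* A prefix of s removes characters from the end of s. */
int is_prefix(const char *p, const char *s) {
    return strncmp(p, s, strlen(p)) == 0;
}

/* A suffix of s removes characters from the beginning of s. */
int is_suffix(const char *x, const char *s) {
    size_t lx = strlen(x), ls = strlen(s);
    return lx <= ls && strcmp(s + ls - lx, x) == 0;
}
```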
Definition: The positive closure of L, denoted L⁺, is L¹ ∪ L² ∪ ...

Example: {0,1,2,3,4,5,6,7,8,9}⁺ gives all unsigned integers, but with some ugly versions. It has 3, 03, 000003. {0} ∪ ( {1,2,3,4,5,6,7,8,9} ({0,1,2,3,4,5,6,7,8,9}* ) ) seems better.

In these notes I may write the * and ⁺ operators inline rather than as superscripts, but that is strictly speaking wrong and I will not do it on the board or on exams or on lab assignments.

Example: {a,b}* is {ε,a,b,aa,ab,ba,bb,aaa,aab,aba,abb,baa,bab,bba,bbb,...}. {a,b}⁺ is {a,b,aa,ab,ba,bb,aaa,aab,aba,abb,baa,bab,bba,bbb,...}. {ε,a,b}* is {ε,a,b,aa,ab,ba,bb,...}. {ε,a,b}⁺ is the same as {ε,a,b}*.

The book gives other examples based on L={letters} and D={digits}, which you should read.
Parentheses, if present, control the order of operations. Without parentheses the following precedence rules apply. The postfix unary operator * has the highest precedence. The book mentions that it is left associative. (I don't see how a postfix unary operator can be right associative or how a prefix unary operator such as unary - could be left associative.) Concatenation has the second highest precedence and is left associative. | has the lowest precedence and is left associative. The book gives various algebraic laws (e.g., associativity) concerning these operators. The reason we don't need to include the positive closure is that, for any RE r, r⁺ = rr*.

Homework: 3.6 a and b.
where the d's are unique and not in Σ, and ri is a regular expression over Σ ∪ {d1,...,di-1}. Note that each di can depend on all the previous d's.

Example: C identifiers can be described by the following regular definition
letter_ → A | B | ... | Z | a | b | ... | z | _
digit → 0 | 1 | ... | 9
CId → letter_ ( letter_ | digit )*
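Coding the regular definition for CId directly (the function name is my own):

```c
#include <ctype.h>

/* CId matches letter_ followed by any number of letter_ or digit.
   Returns 1 iff the whole string s is a C identifier.           */
int is_c_identifier(const char *s) {
    if (!(isalpha((unsigned char)*s) || *s == '_')) return 0;  /* letter_ */
    for (s++; *s; s++)                                         /* (letter_|digit)* */
        if (!(isalnum((unsigned char)*s) || *s == '_')) return 0;
    return 1;
}
```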
Homework: 3.8 for the C language (you might need to read a C manual first to find out all the numerical constants in C), 3.10a.
Goal is to perform the lexical analysis needed for the following grammar.
stmt → if expr then stmt
     | if expr then stmt else stmt
     | ε
expr → term relop term    // relop is relational operator =, >, etc.
     | term
term → id
     | number
Recall that the terminals are the tokens, the nonterminals produce terminals. A regular definition for the terminals is
digit → [0-9]
digits → digit+
number → digits (. digits)? (E[+-]? digits)?
letter → [A-Za-z]
id → letter ( letter | digit )*
if → if
then → then
else → else
relop → < | > | <= | >= | = | <>
Lexemes       Token      Attribute value
any ws        -          -
if            if         -
then          then       -
else          else       -
any id        id         pointer to table entry
any number    number     pointer to table entry
<             relop      LT
<=            relop      LE
=             relop      EQ
<>            relop      NE
>             relop      GT
>=            relop      GE
where blank, tab, and newline are symbols used to represent the corresponding ascii characters. Recall that the lexer will be called by the parser when the latter needs a new token. If the lexer then recognizes the token ws, it does not return it to the parser but instead goes on to recognize the next token, which is then returned. Note that you can't have two consecutive ws tokens in the input because, for a given token, the lexer will match the longest lexeme starting at the current position that yields this token. The table on the right summarizes the situation. For the parser all the relational ops are to be treated the same so they are all the same token, relop. Naturally, other parts of the compiler will need to distinguish between the various relational ops so that appropriate code is generated. Hence, they have distinct attribute values.
since you have used it for the current lexeme. If the first character was =, you return (relop,EQ).
Note again the star affixed to the final state. Two questions remain.

1. How do we distinguish between identifiers and keywords such as "then", which also match the pattern in the transition diagram?
2. What is (gettoken(), installID())?

We will continue to assume that the keywords are reserved, i.e., may not be used as identifiers. (What if this is not the case, as in PL/I, which had no reserved words? Then the lexer does not distinguish between keywords and identifiers and the parser must.) We will use the method mentioned last chapter and have the keywords installed into the symbol table prior to any invocation of the lexer. The symbol table entry will indicate that the entry is a keyword. installID() checks if the lexeme is already in the table. If it is not present, the lexeme is installed as an id token. In either case a pointer to the entry is returned. gettoken() examines the lexeme and returns the token name, either id or a name corresponding to a reserved keyword. Both installID() and gettoken() access the buffer to obtain the lexeme of interest. The text also gives another method to distinguish between identifiers and keywords.
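A sketch of this scheme in C: the keywords are installed before any input is examined, and gettoken() then reports either the keyword's own token or id. Token codes and table layout are my own illustration, not the notes'.

```c
#include <string.h>

enum { ID = 256, IF = 257, THEN = 258, ELSE = 259 };

static struct { const char *lexeme; int token; } table[100];
static int n = 0;

static int install(const char *s, int tok) {
    table[n].lexeme = s; table[n].token = tok;
    return n++;
}

void init_keywords(void) {           /* called before any input is read */
    install("if", IF); install("then", THEN); install("else", ELSE);
}

/* Return the keyword's token if the lexeme is a preloaded keyword;
   otherwise install it (if new) and report an ordinary identifier. */
int gettoken(const char *lexeme) {
    for (int i = 0; i < n; i++)
        if (strcmp(table[i].lexeme, lexeme) == 0) return table[i].token;
    install(lexeme, ID);
    return ID;
}
```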
The "delim" in the diagram represents any of the whitespace characters, say space, tab, and newline. The final star is there because we needed to find a non-whitespace character in order to know when the whitespace ends and this character begins the next token. There is no action performed at the accepting state. Indeed the lexer does not return to the parser, but starts again from its beginning as it still must find the next token. Recognizing Numbers The diagram below is from the second edition. It is essentially a combination of the three diagrams in the first edition.
This certainly looks formidable, but it is not that bad; it follows from the regular expression. In class go over the regular expression and show the corresponding parts in the diagram. When an accepting state is reached, action is required but is not shown on the diagram. Just as identifiers are stored in a symbol table and a pointer is returned, there is a corresponding number table in which numbers are stored. These numbers are needed when code is generated. Depending on the source language, we may wish to indicate in the table whether this is a real or an integer. A similar, but more complicated, transition diagram could be produced if the language permitted complex numbers as well.

Homework: Write transition diagrams for the regular expressions in problems 3.6 a and b, 3.7 a and b.
Accepting states often need to take some action and return to the parser. Many of these accepting states (the ones with stars) need to restore one character of input. This is called retract() in the code. What should the code for a particular diagram do if, at one state, the character read is not one of those for which a next state has been defined? That is, what if the character read is not the label of any of the outgoing arcs? This means that we have failed to find the token corresponding to this diagram. The code calls fail(). This is not an error case. It simply means that the current input does not match this particular token. So we need to go to the code section for another diagram, after restoring the input pointer so that we start the next diagram at the point where this failing diagram started. If we have tried all the diagrams, then we have a real failure and need to print an error message and perhaps try to repair the input. Note that the order in which the diagrams are tried is important. If the input matches more than one token, the first one tried will be chosen.
TOKEN getRelop()                       // TOKEN has two components
    TOKEN retToken = new(RELOP);       // First component set here
    while (true)
        switch(state)
            case 0: c = nextChar();
                if (c == '<') state = 1;
                else if (c == '=') state = 5;
                else if (c == '>') state = 6;
                else fail();
                break;
            case 1: ...
            ...
            case 8: retract();               // an accepting state with a star
                retToken.attribute = GT;     // second component
                return(retToken);
Second edition additions The description above corresponds to the one given in the first edition. The newer edition gives two other methods for combining the multiple transition-diagrams (in addition to the one above). 1. Unlike the method above, which tries the diagrams one at a time, the first new method tries them in parallel. That is, each character read is passed to each diagram (that hasn't already failed). Care is needed when one diagram has accepted the input, but others still haven't failed and may accept a longer prefix of the input. 2. The final possibility discussed, which appears to be promising, is to combine all the diagrams into one. That is easy for the example we have been considering because all the diagrams begin with different characters being matched. Hence we just have one large start with multiple outgoing edges. It is more difficult when there is a character that can begin more than one diagram.
3.5.1: Use of Lex
Let us pretend I am writing a compiler for a language called pink. I produce a file, call it lex.l, that describes pink in a
manner shown below. I then run the lex compiler (a normal program), giving it lex.l as input. The lex compiler output is always a file called lex.yy.c, a program written in C. One of the procedures in lex.yy.c (call it pinkLex()) is the lexer itself, which reads a character input stream and produces a sequence of tokens. pinkLex() also sets a global value yylval that is shared with the parser. I then compile lex.yy.c together with the parser (typically the output of lex's cousin yacc, a parser generator) to produce, say, pinkfront, which is an executable program that is the front end for my pink compiler.
The lex program for the example we have been working with follows (it is typed in straight from the book).
%{
    /* definitions of manifest constants
       LT, LE, EQ, NE, GT, GE,
       IF, THEN, ELSE, ID, NUMBER, RELOP */
%}

/* regular definitions */
delim    [ \t\n]
ws       {delim}+
letter   [A-Za-z]
digit    [0-9]
id       {letter}({letter}|{digit})*
number   {digit}+(\.{digit}+)?(E[+-]?{digit}+)?

%%

{ws}      { /* no action and no return */ }
if        { return(IF); }
then      { return(THEN); }
else      { return(ELSE); }
{id}      { yylval = (int) installID(); return(ID); }
{number}  { yylval = (int) installNum(); return(NUMBER); }
"<"       { yylval = LT; return(RELOP); }
"<="      { yylval = LE; return(RELOP); }
"="       { yylval = EQ; return(RELOP); }
"<>"      { yylval = NE; return(RELOP); }
">"       { yylval = GT; return(RELOP); }
">="      { yylval = GE; return(RELOP); }

%%

int installID() { /* function to install the lexeme, whose first character
                     is pointed to by yytext, and whose length is yyleng,
                     into the symbol table and return a pointer thereto */
}

int installNum() { /* similar to installID, but puts numerical constants
                      into a separate table */
}
The first, declaration, section includes variables and constants as well as the all-important regular definitions that define the building blocks of the target language, i.e., the language that the generated lexer will analyze. The next, translation rules, section gives the patterns of the lexemes that the lexer will recognize and the actions to be performed upon recognition. Normally, these actions include returning a token name to the parser and often returning other information about the token via the shared variable yylval.
If a return is not specified, the lexer continues executing and finds the next lexeme present.

Comments on the Lex Program

Anything between %{ and %} is not processed by lex, but instead is copied directly to lex.yy.c. So we could have had statements like
#define LT 12
#define LE 13
The regular definitions are mostly self explanatory. When a definition is later used it is surrounded by {}. A backslash \ is used when a special symbol like * or . is to be used to stand for itself, e.g., if we wanted to match a literal star in the input for multiplication. Each rule is fairly clear: when a lexeme is matched by the left, pattern, part of the rule, the right, action, part is executed. Note that the value returned is the name (an integer) of the corresponding token. For simple tokens like the one named IF, which correspond to only one lexeme, no further data need be sent to the parser. There are several relational operators, so a specification of which lexeme matched RELOP is saved in yylval. For id's and number's, the lexeme is stored in a table by the install functions and a pointer to the entry is placed in yylval for future use. Everything in the auxiliary function section is copied directly to lex.yy.c. Unlike declarations enclosed in %{ %}, however, auxiliary functions may be used in the actions.
is an if/then statement and IF is a keyword. Sometimes the lack of reserved words makes lexical disambiguation impossible; however, in this case the slash / operator of lex is sufficient to distinguish the two cases. Consider
IF / \( .* \) {letter}
This only matches IF when it is followed by a ( some text a ) and a letter. The only FORTRAN statements that match this are the if/then shown above; so we have found a lexeme that matches the if token. However, the lexeme is just the IF and not the rest of the pattern. The slash tells lex to put the rest back into the input and match it for the next and subsequent tokens. Homework: 3.11. Homework: Modify the lex program in section 3.5.2 so that: (1) the keyword while is recognized, (2) the comparison operators are those used in the C language, (3) the underscore is permitted as another letter (this problem is easy).
the symbols in the word in order) ends at an accepting state. It essentially tries all such paths at once and accepts if any end at an accepting state. Patterns like (a|b)*abb are useful regular expressions! If the alphabet is ascii, consider *.java. Homework: For the NFA to the right, indicate all the paths labeled aabb.
We now consider a much more realistic model, a DFA.

Definition: A deterministic finite automaton or DFA is a special case of an NFA having the restrictions

1. No edge is labeled with ε.
2. For any state s and symbol a, there is exactly one edge leaving s with label a.

This is realistic. We are at a state and examine the next character in the string; depending on the character we go to exactly one new state. Looks like a switch statement to me. Minor point: when we write a transition table for a DFA, the entries are elements not sets, so there are no {} present.

Simulating a DFA

Indeed a DFA is so reasonable there is an obvious algorithm for simulating it (i.e., reading a string and deciding whether or not it is in the language accepted by the DFA). We present it now. The second edition has switched to C syntax: = is assignment, == is comparison. I am going to change to this notation since I strongly suspect that most of the class is much more familiar with C/C++/java/C# than with algol60/algol68/pascal/ada (the last is my personal favorite). As I revisit past sections of the notes to fix errors, I will change the examples from algol to C usage of =. I realize that this makes the notes incompatible with the edition you have, but hope and believe that this will not cause any serious problems.
s = s0;            // start state.  NOTE: = is assignment
c = nextChar();    // a "priming" read
while (c != eof) {
    s = move(s, c);
    c = nextChar();
}
if (s is in F, the set of accepting states) return "yes";
else return "no";
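Instantiating the algorithm above for the classic DFA accepting (a|b)*abb. The transition table is the standard 4-state machine for this language; the particular encoding below is my own.

```c
/* DFA for (a|b)*abb: states 0..3, state 3 accepting.
   Input is assumed to contain only 'a' and 'b'.      */
static int move(int s, char c) {
    static const int delta[4][2] = {
        /* a  b */
        {  1, 0 },   /* state 0 */
        {  1, 2 },   /* state 1 */
        {  1, 3 },   /* state 2 */
        {  1, 0 },   /* state 3 */
    };
    return delta[s][c == 'b'];
}

int dfa_accepts(const char *w) {
    int s = 0;                      /* s = s0 */
    for (; *w; w++) s = move(s, *w);
    return s == 3;                  /* is s in F? */
}
```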
3.7: From Regular Expressions to Automata

3.7.0: Not losing sight of the forest due to the trees
This is not from the book. Do not forget the goal of the chapter is to understand lexical analysis. We saw, when looking at Lex, that regular expressions are a key in this task. So we want to recognize regular expressions (say the ones representing tokens). We are going to see two methods.

1. Convert the regular expression to an NFA and simulate the NFA.
2. Convert the regular expression to an NFA, convert the NFA to a DFA, and simulate the DFA.

So we need to learn 4 techniques.

1. Convert a regular expression to an NFA.
2. Simulate an NFA.
3. Convert an NFA to a DFA.
4. Simulate a DFA.
The list I just gave is in the order the algorithms would be applied, but you would use either 2 or (3 and 4). The two editions differ in the order the techniques are presented, but neither does it in the order I just gave. Indeed, we just did item #4. I will follow the order of the 2nd edition but give pointers to the first edition where they differ.
Next we want the a-successor of D0, i.e., the D-state that occurs when we start at D0 and move along an edge labeled a. We call this successor D1. Since D0 consists of the N-states corresponding to ε, D1 is the N-states corresponding to ε"a" = "a". We compute the a-successor of all the N-states in D0 and then form the ε-closure. Next we compute the b-successor of D0 the same way and call it D2. (From the table at the right: D3 = {1,2,4,5,6,7,9} and D4 = {1,2,4,5,6,7,10}.)
We continue forming a- and b-successors of all the D-states until no new D-states result (there is only a finite number of subsets of all the N-states, so this process does indeed stop). This gives the table on the right. D4 is the only D-accepting state as it is the only D-state containing the (only) N-accepting state 10. Theoretically, this algorithm is awful since, for a set with k elements, there are 2ᵏ subsets. Fortunately, normally only a small fraction of the possible subsets occur in practice.

Homework: Convert the NFA from the homework for section 3.6 to a DFA.
Instead of producing the DFA, we can run the subset algorithm as a simulation itself. This is item #2 in my list of techniques
S = ε-closure(s0);
c = nextChar();
while (c != eof) {
    S = ε-closure(move(S, c));
    c = nextChar();
}
if (S ∩ F != ∅) return "yes";
else return "no";
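The same loop works in C if S is represented as a set of states; below is a compact version using bitmasks. The machine wired in is a small NFA for (a|b)*abb, with one artificial ε-edge added (from a new start state 4 to state 0) so that ε-closure has work to do. The wiring and all names are my own example, not the notes'.

```c
/* NFA states: 4 -eps-> 0; 0 -a,b-> 0; 0 -a-> 1; 1 -b-> 2; 2 -b-> 3.
   State 3 is accepting.  State sets are bitmasks over states 0..4. */
typedef unsigned Set;

static Set eps_closure(Set S) {
    if (S & (1u << 4)) S |= 1u << 0;   /* the only eps edge: 4 -> 0 */
    return S;
}

static Set move_set(Set S, char c) {
    Set T = 0;
    if (S & 1u) {                      /* state 0 */
        T |= 1u;                       /* 0 -a,b-> 0 */
        if (c == 'a') T |= 1u << 1;    /* 0 -a-> 1   */
    }
    if ((S & (1u << 1)) && c == 'b') T |= 1u << 2;   /* 1 -b-> 2 */
    if ((S & (1u << 2)) && c == 'b') T |= 1u << 3;   /* 2 -b-> 3 */
    return T;
}

int nfa_accepts(const char *w) {
    Set S = eps_closure(1u << 4);      /* S = eps-closure({s0}) */
    for (; *w; w++) S = eps_closure(move_set(S, *w));
    return (S & (1u << 3)) != 0;       /* does S intersect F? */
}
```

In general ε-closure must iterate to a fixed point; one pass suffices here because there is a single ε-edge.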
1. We begin by constructing the three NFAs. To save space, the third NFA is not the one that would be constructed by our algorithm, but is an equivalent smaller one. For example, some unnecessary ε-transitions have been eliminated. If one views the lex executable as a compiler transforming lex source into NFAs, this would be considered an "optimization".
2. We introduce a new start state and ε-transitions as in the previous section.
3. We start at the ε-closure of the start state, which is {0,1,3,7}.
4. The first a (remember the input is aaba) takes us to {2,4,7}. This includes an accepting state and indeed we have matched the first pattern. However, we do not stop since we may find a longer match.
5. The next a takes us to {7}.
6. The b takes us to {8}.
7. The next a fails since there are no a-transitions out of state 8. So we must back up to before trying the last a.
8. We are back in {8} and ask if one of these N-states (I know there is only one, but there could be more) is an accepting state.
9. Indeed state 8 is accepting for the third pattern. If there were more than one accepting state in the list, we would choose the one in the earliest listed pattern.
10. Action3 would now be performed.
Skipped
4.1: Introduction
4.1.1: The role of the parser
Conceptually, the parser accepts a sequence of tokens and produces a parse tree. As we saw in the previous chapter, the parser calls the lexer to obtain the next token. In practice this might not occur.

1. The source program might have errors.
2. Instead of explicitly constructing the parse tree, the actions that the downstream components of the front end would do on the tree can be integrated with the parser and done incrementally on components of the tree.

There are three classes of grammar-based parsers.

1. universal
2. top-down
3. bottom-up

The universal parsers are not used in practice as they are inefficient. As expected, top-down parsers start from the root of the tree and proceed downward; whereas, bottom-up parsers start from the leaves and proceed upward. The commonly used top-down and bottom-up parsers are not universal. That is, there are grammars that cannot be used with them. The LL and LR parsers are important in practice. Hand-written parsers are often LL. Specifically, the predictive parsers we looked at in chapter two are for LL grammars. The LR grammars form a larger class. Parsers for this class are usually constructed with the aid of automatic tools.
F → ( E ) | id