UNIT - I
Language Design Issues: History - Role of Programming Languages - Environments -
Impact of Machine Architectures - Language Translation Issues: Programming Language
Syntax - Stages in Translation - Formal Translation Models - Recursive Descent Parsing
UNIT - II
Modeling Language Properties: Formal Properties of Languages - Language Semantics -
Elementary Data Types: Properties of Types and Objects - Scalar Data Types - Composite
Data Types
UNIT - III
Encapsulation: Structured Data Types - Abstract Data Types - Encapsulation by
Subprograms - Type Definitions - Inheritance - Polymorphism
UNIT - IV
Functional Programming: Programs as Functions - Functional Programming in an
Imperative Language - LISP - Functional Programming with Static Typing - Delayed
Evaluation - Mathematical Functional Programming - Recursive Functions and Lambda
Calculus - Logic Programming: Logic and Logic Programs - Horn Clauses - Prolog -
Problems with Logic Programming
UNIT - V
Formal Semantics: Sample Small Language - Operational Semantics - Denotational
Semantics - Axiomatic Semantics - Program Correctness - Parallel Programming: Parallel
Processing and Programming Languages - Threads - Semaphores - Monitors - Message
Passing - Parallelism - Non-Imperative Languages
TEXT BOOKS:
1. Terrence W. Pratt, Marvin V. Zelkowitz, "Programming Languages: Design and
Implementation," PHI Publications, 4th edition, 2008
2. Kenneth C. Louden, "Programming Languages: Principles and Practices," Cengage
Learning Publications, 2nd edition, 2008
1. UNIT-I
1.1 Language Design Issues
Software Architectures
a) Mainframe era
b) Personal computers
c) Networking era
Naturalness for the application - the program structure reflects the logical structure
of the algorithm
Support for abstraction - program data reflects problem being solved
Ease of program verification - verifying that program correctly performs its
required function
Programming environment - external support for the language: editors, testing
programs.
Portability of programs - transportability of the resulting programs from the
computer on which they are developed to other computer systems
Cost of use - program execution, program translation, program creation, and
program maintenance
c) Imperative languages
Statement oriented languages that change machine state (C, Pascal, FORTRAN,
COBOL)
Computation: a sequence of machine states (contents of memory)
Syntax: S1, S2, S3.
d) Applicative (functional) languages
Function composition is major operation (ML, LISP)
Syntax: P1(P2(P3(X)))
Programming consists of building the function that computes the answer
e) Rule-based languages
Actions are specified by rules that check for the presence of certain enabling
conditions. (Prolog)
The order of execution is determined by the enabling conditions, not by the order
of the statements.
Syntax: Answer → specification rule
f) Object-oriented languages
Imperative languages that merge applicative design with imperative statements
(Java, C++, Smalltalk)
Syntax: Set of objects (classes) containing data (imperative concepts) and
methods (applicative concepts)
Usually, a language is designed to support a given programming paradigm.
However, a language can be used to implement paradigms other than the
intended one.
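The contrast between the imperative and applicative styles described above can be sketched in Python, which supports both. The task (summing the squares of a list) is an illustrative choice, not taken from the text.

```python
# A hypothetical task: sum the squares of a list of numbers,
# written in the two styles described above.
nums = [1, 2, 3, 4]

# Imperative style: a sequence of statements S1, S2, S3 that
# change machine state (here, the variables total and n).
total = 0
for n in nums:
    total = total + n * n

# Applicative (functional) style: function composition P1(P2(P3(X))),
# i.e. building the function that computes the answer.
def square_all(xs):
    return [x * x for x in xs]

result = sum(square_all(nums))

assert total == result == 30
```

Both compute the same answer; the imperative version is a sequence of state changes, while the applicative version composes functions.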
Language standardization
The need for standards - to increase portability of programs. Language standard
defined by national standards bodies:
ISO - International Organization for Standardization
IEEE - Institute of Electrical and Electronics Engineers
ANSI - American National Standards Institute
Working group of volunteers set up to define standard
Agree on features for new standard
Vote on standard
If approved by working group, submitted to parent organization for
approval.
Problem: When to standardize a language?
If too late - many incompatible versions - FORTRAN in 1960s was already a
de facto standard, but no two were the same
If too early - no experience with language - Ada in 1983 had no running
compilers
Problem: What happens with the software developed before the standardization?
Ideally, new standards have to be compatible with older standards.
Internationalization
Some of the internationalization issues:
What character codes to use?
Collating sequences? - How do you alphabetize various languages?
Dates? - What date is 10/12/01? Is it a date in October or December?
Time? - How do you handle time zones, summer time in Europe, and daylight
saving time in the US? The Southern hemisphere is 6 months out of phase with
the Northern hemisphere, and the date of the change from summer to standard
time is not consistent.
Currency? - How to handle dollars, pounds, marks, francs, etc.
Markup
Not a programming language per se, but used to specify the layout of information
in Web documents
Components:
1. Data - types of elementary data items and data structures to be manipulated
2. Primitive Operations - operations to manipulate data
3. Sequence Control - mechanisms for controlling the sequence of operations
4. Data Access - mechanisms to supply data to the operations
5. Storage Management - mechanisms for memory allocation
6. Operating Environment - mechanisms for I/O (i.e. communication)
Instructions executed by the CPU are machine language instructions. Some of them
are implemented in hardware - e.g. incrementing a register by 1. Some are
implemented in firmware - e.g. adding the contents of two registers.
E. Firmware
A set of machine-language instructions implemented by programs,
called microprograms, stored in programmable read-only memory (PROM) in the
computer.
Example: The language in problem #4, p.42 consists of very low-level
instructions. It would be implemented in hardware (hardwired). The instruction
a = a + b could be implemented in firmware.
Advantages:
Flexibility - by replacing the PROM component we can increase the set of
machine instructions, e.g. we can implement vector multiplication as a machine
operation.
Lower hardware cost - the simpler the instructions, the easier it is to hardwire
them.
F. Software
a) Translators and virtual architectures
Input: high-level language program Output: machine language code
Types of translators (high to low level):
1. Preprocessor:
Input: extended form of high-level language Output: standard form of high-level
Language
2. Compiler:
Input: high-level language program. Output (object language): assembler code
3. Assembler:
Input: assembly language. Output: machine language code (in one-to-one
correspondence with the input)
4. Loader/Link editor:
Input: assembler/machine language relocatable program
Output: executable program (absolute addresses are assigned)
G. Software simulation - Interpreters
The program is not translated to machine language code. Instead, it is executed
by another program.
Example: Prolog interpreter written in C++
Advantages of interpreted languages:
very easy to implement
easy to debug
flexibility of language - easy to modify the interpreter
Portability - as long as the interpreter is portable, the language is also
portable.
Disadvantages of interpreted languages: slow execution.
H. Virtual machines
Program: data + operations on these data
Computer: implementation of data structures + implementation of operations
Hardware computer: elementary data items, very simple operations
Firmware computer: elementary data items, machine language instructions
Software computer: each programming environment defines specific
software computer.
E.g. the operating system is one specific virtual computer
A programming language also defines a virtual computer.
Basic Hierarchy:
1. Software
2. Firmware
3. Hardware
Software sub-hierarchy - depends on how a given programming environment is
implemented
Example of virtual machines hierarchy:
Java applets
Java virtual machine - used to implement the Java applets
C virtual machine - used to implement Java
Operating system - used to implement C
Firmware - used to implement machine language
Hardware - used to implement firmware micro programs
I) Binding and binding times
Binding - fixing a feature to have a specific value among a set of possible values.
E.g. your program may be named in different ways, and when you choose a
particular name you have done a binding.
Different programming features have different binding times, depending on:
The nature of the feature, e.g. you choose the names of the variables
in the source code; the operating system chooses the physical address of the
variables.
The implementation of the feature - in certain cases the programmer has a
choice to specify the binding time for a given feature.
Binding occurs at:
At language definition - concerns available data types and language structures,
e.g. in C++ the assignment statement is =, while in Pascal it is :=
At language implementation - concerns representation of data structures and
operations, e.g. representation of numbers and arithmetic operations
At translation -Chosen by the programmer - variable types and assignments
Chosen by the compiler - relative locations of variables and arrays
Chosen by the loader - absolute locations
At execution -Memory contents on entry to a subprogram (copying arguments to
parameter locations) At arbitrary points (when executing assignment statements)
Recursive programs Dynamic libraries
Example: X = X+10 (see p. 62) Discuss what happens at each stage listed above.
Importance of binding times
1. If done at translation time - more efficiency is gained
2. If done at execution time - more flexibility is gained.
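The efficiency/flexibility trade-off between early and late binding can be sketched in Python. The names `FACTOR`, `scale_early`, and `scale_late` are illustrative assumptions, not from the text.

```python
# A sketch of early vs late binding of a value to a name.

FACTOR = 10

# Early binding: the default value is bound once, when the function
# definition is executed - analogous to translation-time binding.
def scale_early(x, factor=FACTOR):
    return x * factor

# Late binding: the global name FACTOR is looked up on every call -
# analogous to execution-time binding, trading speed for flexibility.
def scale_late(x):
    return x * FACTOR

FACTOR = 20
assert scale_early(5) == 50   # still uses the value bound earlier
assert scale_late(5) == 100   # sees the new binding at execution time
```

The early-bound version is cheaper per call but fixed; the late-bound version adapts to changes made after the definition, mirroring the efficiency vs flexibility point above.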
3. Verifiability – ability to prove program correctness (a very difficult issue)
4. Translatability – ease of translating the program into executable form.
5. Lack of ambiguity – the syntax should make it easy to avoid ambiguous
structures.
Basic syntactic concepts in a programming language
1. Character set – the alphabet of the language.
Several different character sets are used: ASCII, EBCDIC, and Unicode.
2. Identifiers – strings of letters or digits, usually beginning with a letter
3. Operator Symbols – e.g. + - * /
4. Keywords or Reserved Words – used as a fixed part of the syntax of a
statement.
5. Noise words – optional words inserted into statements to improve readability.
6. Comments – used to improve readability and for documentation purposes.
Comments are usually enclosed by special markers.
7. Blanks – rules vary from language to language. Usually only significant in
literal strings.
8. Delimiters – used to denote the beginning and the end of syntactic constructs.
9. Expressions – functions that access data objects in a program and return a
value
10. Statements – these are the sentences of the language, describe a task to be
performed.
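Several of the syntactic concepts above (identifiers, operator symbols, keywords, comments, blanks) can be illustrated with a minimal tokenizer sketch. The token categories and the tiny keyword set are illustrative assumptions, not a real language's lexer.

```python
import re

# A minimal tokenizer sketch covering some of the syntactic concepts above.
KEYWORDS = {"if", "then", "else", "while"}   # reserved words: fixed parts of the syntax

TOKEN_RE = re.compile(r"""
      (?P<comment>    \#[^\n]*              )  # comments: readability only, skipped
    | (?P<identifier> [A-Za-z][A-Za-z0-9]*  )  # a letter followed by letters/digits
    | (?P<number>     [0-9]+                )
    | (?P<operator>   [+\-*/=<>]            )  # operator symbols
    | (?P<blank>      \s+                   )  # blanks: not significant here
""", re.VERBOSE)

def tokenize(source):
    tokens = []
    for m in TOKEN_RE.finditer(source):
        kind = m.lastgroup
        if kind in ("blank", "comment"):
            continue                       # discarded by the lexer
        if kind == "identifier" and m.group() in KEYWORDS:
            kind = "keyword"
        tokens.append((kind, m.group()))
    return tokens

assert tokenize("if x1 > 0 then y = y + 1  # bump") == [
    ("keyword", "if"), ("identifier", "x1"), ("operator", ">"),
    ("number", "0"), ("keyword", "then"), ("identifier", "y"),
    ("operator", "="), ("identifier", "y"), ("operator", "+"),
    ("number", "1"),
]
```

Note how blanks and comments are recognized but dropped, while keywords are distinguished from ordinary identifiers only after the string is matched.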
b) Syntactic analysis (parsing) – determines the structure of the program,
as defined by the language grammar.
c) Semantic analysis - assigns meaning to the syntactic structures
Example:
int variable1;
The meaning is that the program needs 4 bytes in the memory to serve
as a location for variable1. Further on, a specific set of operations only can be
used with variable1, namely integer operations.
The semantic analysis builds the bridge between analysis and synthesis.
d) Basic semantic tasks:
1. Symbol–table maintenance
2. Insertion of implicit information
3. Error detection
4. Macro processing and compile-time operations
The result of the semantic analysis is an internal representation, suitable to be
used for code optimization and code generation
Synthesis of the object program
The final result is the executable code of the program. It is obtained in three main steps:
Optimization - Code optimization involves the application of rules and algorithms
applied to the intermediate and/ or assembler code with the purpose to make it more
efficient, i.e. faster and smaller.
Code generation - generating assembler commands with relative memory addresses for
the separate program modules - obtaining the object code of the program.
Linking and loading - resolving the addresses - obtaining the executable code of the
program.
Bootstrapping
1. The compiler for a given language can be written in the same language.
The process is based on the notion of a virtual machine.
2. A virtual machine is characterized by the set of operations, assumed to be
executable by the machine. For example, the set of operations in Homework 1
can be the set of operations for a virtual machine.
3. A real machine (at the lowest level with machine code operations implemented
in hardware)
4. A firmware machine (next level - its set is the assembler language operations and
the program that translates them into machine operations is stored in a special
read-only memory)
5. A virtual machine for some internal representation (this is the third level, and
there is a program that translates each operation into assembler code)
6. A compiler for the language L (some language) written in L (the same language)
a. The translation of the compiler into the internal representation is done
manually - the programmer manually re-writes the compiler into the
internal representation. This is done once and though tedious, it is not
difficult - the programmer uses the algorithm that is encoded into the
compiler.
b. From there on the internal representation is translated into assembler and
then into machine language.
1.8 Formal Translation Models
1. Syntax is concerned with the structure of programs. The formal description of the
syntax of a language is called a grammar.
2. Grammars consist of rewriting rules and may be used for both recognition and
generation of sentences.
3. Grammars are independent of the syntactic analysis.
There are infinitely many sentences and we cannot write a rule for each
individual sentence. We need these categories in order to describe the structural
patterns of the sentences. For example, the basic sentence pattern in English is a noun
phrase followed by a verb phrase.
A sequence of words that constitutes a given category is called a constituent. For
example, the boldface parts in each of the sentences below correspond to a constituent
called verb phrase.
BNF notation
1. Grammars for programming languages use a special notation called BNF (Backus-
Naur form):
2. The non-terminal symbols are enclosed in < >. Instead of →, the symbol ::= is used.
The vertical bar | is used in the same way, meaning choice. Square brackets are used
to represent optional constituents.
3. BNF notation is equivalent to the first notation in the examples above.
BNF notation:
E → E + T
T → T * P
P → (E)
Derivations, Parse trees, Ambiguity
Rule 1: S → SS
Rule 2: S → (S)
Rule 3: S → ( )
S ⇒ (S) by Rule 2
⇒ (SS) by Rule 1
⇒ (( )S) by Rule 3
⇒ (( )( )) by Rule 3
The strings obtained at each step are called sentential forms. They may contain both
terminal and non-terminal symbols. The last string obtained in the derivation contains
only terminal symbols. It is called a sentence in the language.
This derivation is performed in a leftmost manner. That is, at each step the leftmost
variable in the sentential form is replaced.
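The leftmost derivation above can be reproduced mechanically. The sketch below encodes the three rules of the parenthesis grammar and replaces the leftmost non-terminal at each step; the rule sequence shown yields the sentence derived above.

```python
# Sketch: leftmost derivation for the grammar
#   Rule 1: S -> SS   Rule 2: S -> (S)   Rule 3: S -> ()
RULES = {1: "SS", 2: "(S)", 3: "()"}

def apply_leftmost(sentential_form, rule):
    """Replace the leftmost S in the sentential form using the given rule."""
    i = sentential_form.index("S")
    return sentential_form[:i] + RULES[rule] + sentential_form[i + 1:]

form = "S"
for rule in (2, 1, 3, 3):          # the derivation steps: Rule 2, 1, 3, 3
    form = apply_leftmost(form, rule)

# The final string contains only terminals: a sentence in the language.
assert form == "(()())"
```

Each intermediate value of `form` is a sentential form; only the last one, with no `S` left, is a sentence.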
Parsing is the process of syntactic analysis of a string to determine whether
the string is a correct sentence or not. It can be displayed in the form of a parse tree:
Each non-terminal symbol is expanded by applying a grammar rule that contains the
symbol in its left-hand side. Its children are the symbols in the right-hand side of the
rule.
Note: The order of applying the rules depends on the symbols to be expanded.
At each tree level we may apply several different rules corresponding to the nodes to be
expanded.
Ambiguity: The case when the grammar rules can generate two possible parse trees for
one and the same sequence of terminal symbols.
Example:
Following the grammar rules above, a conditional statement beginning with
"if a < b then" admits two possible interpretations, each with its own parse tree.
A grammar that contains such rules is called ambiguous.
It is very important that grammars for programming languages are not ambiguous.
Types of grammars
There are 4 types of grammars depending on the rule format.
1. Regular grammars (Type 3)
1. A → a
2. A → aB
2. Context-free grammars (Type 2)
A → any string consisting of terminals and non-terminals
3. Context-sensitive grammars (Type 1)
1. String1 → String2
2. String1 and String2 are any strings consisting of terminals and non-
terminals, provided that the length of String1 is not greater than the length of
String2.
4. General grammars (Type 0)
String1 → String2, no restrictions.
Regular grammars and regular expressions
Strings of symbols may be composed of other strings by means of
concatenation - appending two strings - and the Kleene star operation - zero or
more repetitions of the string. E.g. a* can be empty, a, aa, or aaaaaaa, etc.
Given an alphabet ∑, regular expressions consist of string concatenations
combined with the symbols U and *, possibly using '(' and ')'.
There is one special symbol used to denote an empty expression: Ø
Formal definition:
1. Ø and each member of ∑ is a regular expression.
2. If α and β are regular expressions, then (α β) is a regular expression.
3. If α and β are regular expressions, then α U β is a regular expression.
4. If α is a regular expression, then α* is a regular expression.
5. Nothing else is a regular expression.
Example:
0 U 1, (0 U 1)1*
(0 U 1)*01
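The last expression above can be tried out directly with Python's `re` module, writing the union symbol U as `|`:

```python
import re

# Sketch: the regular expression (0 U 1)*01 in Python's re syntax.
pattern = re.compile(r"(0|1)*01")

assert pattern.fullmatch("01")            # shortest sentence
assert pattern.fullmatch("11001")         # any 0/1 prefix, ending in 01
assert pattern.fullmatch("10") is None    # does not end in 01
```

`fullmatch` requires the whole string to be generated by the expression, matching the formal definition of the language denoted by a regular expression.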
LL(1) Parser
Input buffer
– Our string to be parsed. We will assume that its end is marked with a
special symbol $.
Output
– A production rule representing a step of the derivation sequence (left-
most derivation) of the string in the input buffer.
Stack
o contains the grammar symbols
o at the bottom of the stack, there is a special end marker symbol $.
o Initially the stack contains only the symbol $ and the starting symbol S.
$S initial stack
o When the stack is emptied (i.e. only $ is left in the stack), the parsing is
completed.
Parsing table
o a two-dimensional array M[A,a]
o each row is a non-terminal symbol
o each column is a terminal symbol or the special symbol $
o Each entry holds a production rule.
Parser Actions
The symbol at the top of the stack (say X) and the current symbol in the input
string (say a) determine the parser action.
There are four possible parser actions.
1. If X and a are $ → parser halts (successful completion)
2. If X and a are the same terminal symbol (different from $)
→ Parser pops X from the stack, and moves to the next symbol in the input buffer.
3. If X is a non-terminal
→ Parser looks at the parsing table entry M[X, a]. If M[X, a] holds a production
rule X → Y1Y2...Yk, it pops X from the stack and pushes Yk, Yk-1, ..., Y1 onto the
stack. The parser also outputs the production rule X → Y1Y2...Yk to represent a
step of the derivation.
4. none of the above → error
– All empty entries in the parsing table are errors.
– If X is a terminal symbol different from a, this is also an error case.
5. The construction of a predictive LL(1) parser is based on two very important
functions: FIRST and FOLLOW.
6. For the construction:
7. Compute the FIRST and FOLLOW functions
8. Construct the predictive parsing table using the FIRST and FOLLOW functions
9. Parse the input string with the help of the predictive parsing table
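The stack-and-table scheme described above can be sketched as a small table-driven parser. The toy grammar S → (S)S | ε and its parsing table are illustrative assumptions (a precomputed table for balanced parentheses), not from the text.

```python
# A table-driven LL(1) parser sketch for the toy grammar
#   S -> ( S ) S | epsilon
# following the four parser actions described above.
TABLE = {
    ("S", "("): ["(", "S", ")", "S"],   # S -> (S)S
    ("S", ")"): [],                     # S -> epsilon
    ("S", "$"): [],                     # S -> epsilon
}

def ll1_parse(string):
    tokens = list(string) + ["$"]       # end of input is marked with $
    stack = ["$", "S"]                  # $ at the bottom, start symbol on top
    i = 0
    while stack:
        x, a = stack[-1], tokens[i]
        if x == "$" and a == "$":       # action 1: successful completion
            return True
        if x == a:                      # action 2: match a terminal, advance input
            stack.pop()
            i += 1
        elif (x, a) in TABLE:           # action 3: expand X by the rule in M[X, a]
            stack.pop()
            stack.extend(reversed(TABLE[(x, a)]))   # push Yk ... Y1
        else:                           # action 4: error (empty table entry)
            return False
    return False

assert ll1_parse("()()")
assert ll1_parse("(())")
assert not ll1_parse("(()")
assert not ll1_parse(")(")
```

Pushing the right-hand side in reverse keeps Y1 on top of the stack, so the leftmost non-terminal is always expanded first, producing a leftmost derivation.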
--------------------------------------------UNIT-1 COMPLETED----------------------------------------
2. UNIT-II
3. Extended BNF
Introduction
Rule Attribute
b) Operational models
c) Applicative models
d) Denotational semantics
e) Lambda calculus
f) Axiomatic models
g) Specification models
Describe the relationship among various functions that implement a program.
Example of a specification model:
Algebraic data types - describe the meaning in terms of algebraic operations, e.g.
pop(push(S, x)) = S
1. Type: determines the set of data values that the object may take and the
applicable operations.
2. Name: the binding of a name to a data object.
3. Component: the binding of a data object to one or more data objects.
4. Location: the storage location in memory assigned by the system.
5. Value: the assignment of a bit pattern to a name.
Data objects are created and exist during the execution of the program. Some
data objects exist only while the program is running. They are called transient data
objects. Other data objects continue to exist after the program terminates, e.g. data files.
They are called persistent data objects. In certain applications, e.g. transaction-based
systems, the data and the programs coexist practically indefinitely, and they need a
mechanism to indicate that an object is persistent. Languages that provide such
mechanisms are called persistent languages.
B. Data types
A data type is a class of data objects with a set of operations for creating and
manipulating them.
The values that a data object of that type may have are determined by the type of the
object; this set is usually ordered, i.e. it has a least and a greatest value. The operations
define the possible manipulations of data objects of that type.
Primitive - specified as part of the language definition
Programmer-defined (as subprograms, or class methods)
An operation is defined by:
1. Domain - set of possible input arguments
2. Range - set of possible results
3. Action - how the result is produced
The domain and the range are specified by the operation signature
the number, order, and data types of the arguments in the domain,
the number, order, and data type of the resulting range mathematical notation
for the specification:
op name: arg type x arg type x … x arg type → result type
The action is specified in the operation implementation
Sources of ambiguity in the definition of programming language operations
Subtypes: a data type that is part of a larger class. Examples: in C, C++ int, short,
long and char are variations of integers.
The operations available to the larger class are available to the subtype.
This can be implemented using inheritance.
Declarations provide information about the name and type of data objects
needed during program execution.
Explicit – programmer defined
Implicit – system defined
e.g. in FORTRAN - the first letter in the name of the variable determines the type
Perl - the variable is declared by assigning a value:
$abc = 'a string'   # $abc is a string variable
$abc = 7            # $abc is an integer variable
Operation declarations: prototypes of the functions or subroutines that are programmer-
defined.
Example: declaration: float Sub(int, float); signature: Sub: int x float → float
Purpose of declaration
Choice of storage representation
Storage management
Declaration determines the lifetime of a variable, and allows for more efficient
memory usage.
Specifying polymorphic operations.
Depending on the data types operations having same name may have different
meaning, e.g. integer addition and float addition
In most languages +, -, *, / are overloaded. Ada allows the programmer to
overload subprograms; ML has full polymorphism.
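Operator overloading of the kind described here can be sketched in Python, where `+` already has different meanings for different operand types and can also be overloaded for programmer-defined types. The `Vec` class is an illustrative assumption.

```python
# The same operator name with different meanings depending on the types.
assert 2 + 3 == 5               # integer addition
assert 2.0 + 3.0 == 5.0         # float addition, same symbol +
assert "ab" + "cd" == "abcd"    # + is also string concatenation

# Programmer-defined overloading, comparable to Ada's overloaded subprograms.
class Vec:
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):      # defines the meaning of + for Vec operands
        return Vec(self.x + other.x, self.y + other.y)

v = Vec(1, 2) + Vec(3, 4)
assert (v.x, v.y) == (4, 6)
```

The language resolves which meaning of `+` to use from the operand types, which is exactly the ambiguity a static type checker must sort out.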
Declarations provide for static type checking
D. Type checking and type conversion
Type checking: checking that each operation executed by a program receives
the proper number of arguments of the proper data types.
Static type checking is done at compilation
Dynamic type checking is done at run-time.
Dynamic type checking – Perl and Prolog Implemented by storing a type tag in each
data object
Advantages: Flexibility
Disadvantages:
Difficult to debug
Type information must be kept during execution
Software implementation required as most hardware does not provide support
Concern for static type checking affects language aspects:
Declarations, data-control structures, provisions for separate compilation of
subprograms
Strong typing: all type errors can be statically checked
Type inference: implicit data types, used if the interpretation is unambiguous.
Used in ML
E. Type Conversion
Explicit type conversion: routines to change from one data type to another.
Pascal: the function round converts a real type into an integer. C: a cast, e.g. (int) X for
float X, converts the value of X to type integer.
Coercion: implicit type conversion, performed by the system. Pascal: + on an integer and
a real, the integer is converted to real. Java permits implicit coercions if the operation is
widening; in C++ an explicit cast must be given.
Two opposite approaches to type coercions:
1. No coercions, any type mismatch is considered an error : Pascal, Ada
2. Coercions are the rule. Only if no conversion is possible, error is reported.
Advantages of coercions: they free the programmer from some low-level concerns,
such as adding real numbers and integers.
Disadvantages: may hide serious programming errors.
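Both kinds of conversion, and the danger coercion can hide, can be shown in Python terms (a sketch; Python's built-ins stand in for Pascal's `round` and C's cast):

```python
# Coercion: the system implicitly widens the int operand to float
# before a mixed-mode addition.
x = 2 + 0.5
assert x == 2.5 and isinstance(x, float)

# Explicit conversion: routines the programmer calls, comparable to
# Pascal's round or a C cast such as (int) X.
assert round(2.7) == 3      # real -> integer, rounding
assert int(2.7) == 2        # real -> integer, truncation

# The hidden error coercion can cause: widening a large integer to
# float silently loses precision.
big = 2**53 + 1
assert int(float(big)) != big
```

The last assertion is exactly the kind of error the text warns about: the coercion succeeds without complaint, but the value is no longer the same.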
F. Assignment and Initialization
Assignment - the basic operation for changing the binding of a value to a data object.
For each named object, its position on the right-hand side of the assignment operator
(=) is a contents-of access, and its position on the left-hand side of the assignment
operator is an address-of access.
address-of is an L-value
contents-of is an R-value
Value, by itself, generally means R-value
G. Initialization
Uninitialized data object - a data object has been created, but no value is assigned,
i.e. only allocation of a block storage has been performed.
Scalar data types represent a single object, i.e. only one value can be derived.
In general, scalar objects follow the hardware architecture of a computer.
1. Numeric data types
Maximal and minimal values - depending on the hardware. In some languages these
values are represented as defined constants.
Operations
Arithmetic
Relational
Assignment
Bit operations
2. Floating-point real numbers
Specification
Example: enum StudentClass {Fresh, Soph, Junior, Senior}; a variable of type
StudentClass may accept only one of the four listed values.
5. Booleans
Specification: Two values: true and false. Can be given explicitly as enumeration,
as in Pascal and Ada. Basic operations: and, or, not.
1. Use a particular bit for the value, e.g. the last bit; 1 - true, 0 -false.
2. Use the entire storage; a zero value would then be false, otherwise - true.
Characters
a) Character strings:
Data objects that are composed of a sequence of characters
Fixed declared length - storage allocation at translation time. The data object is
always a character string of the declared length; strings longer than the declared
length are truncated.
Variable length to a declared bound - storage allocation at translation time. An
upper bound for the length is set, and any string over that length is truncated.
Unbounded length - storage allocation at run time; strings can be of any length.
In C, strings are arrays of characters, there is no string type declaration, and a
null character determines the end of a string.
1. Concatenation – appending two strings one after another
2. Relational operation on strings – equal, less than, greater than
3. Substring selection using positioning subscripts
4. Substring selection using pattern matching
5. Input/output formatting
6. Dynamic strings - the string is evaluated at run time.
7. Perl: "$ABC" will be evaluated as a name of a variable, and the contents of the
variable will be used.
Implementation
Fixed declared length: a packed vector of characters
Variable length to a declared bound: a descriptor that contains the maximum
length and the current length
Unbounded length: either a linked storage of fixed-length data objects or a
contiguous array of characters with dynamic run time storage allocation.
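The string operations listed above can be sketched in Python; the numbered comments follow the list (the sample strings are illustrative):

```python
import re

s = "programming"

assert "pro" + "gramming" == s               # 1. concatenation
assert "abc" < "abd"                         # 2. relational operations (lexicographic)
assert s[0:7] == "program"                   # 3. substring selection by position
assert re.search(r"gram", s).start() == 3    # 4. substring selection by pattern matching
assert "value = %d" % 42 == "value = 42"     # 5. input/output formatting
name = "world"
assert f"hello {name}" == "hello world"      # 6-7. dynamic strings: the variable's
                                             #      contents are substituted, like Perl's "$ABC"
```

Python strings are unbounded-length objects, so all of these operations allocate storage at run time, corresponding to the third implementation category above.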
b) Pointers and programmer-constructed objects
1. Pointers are variables that contain the location of other data objects
2. They allow the construction of complex data objects.
3. They are used to link together the components of complex data objects.
Specification:
Pointers may reference data objects of only a single type - C, Pascal, and Ada.
Pointers may reference data objects of any type - Smalltalk.
C, C++: pointers are data objects and can be manipulated by the program
Java: pointers are hidden data structures, managed by the language
implementation
Operations:
Creation operation:
Allocates a block of storage for the new data object, and returns its address to be
stored in the pointer variable. No name of the location is necessary as the
reference would be by the pointer.
Selection operation: the contents of the pointer are used as an address in the
memory.
Implementation
Methods:
Absolute addresses stored in the pointer. Allows for storing the new object
anywhere in the memory
Relative addresses: offset with respect to some base address. Requires initial
allocation of a block of storage to be used by the data objects. The address of each
object is relative to the address of the block.
Advantages: the entire block can be moved to another location without
invalidating the addresses in the pointers, as they are relative, not absolute.
Implementation problems:
Direct Access Files: any single component can be accessed at random just as in an
array.
Key: the subscript to access a component.
Implementation: a key table is kept in main memory
Indexed Sequential Files: similar to direct access files using a key combined
with capability to process the file sequentially. The file must be ordered by the
key
-------------------------------------------------UNIT-II COMPLETED-------------------------------------
3. UNIT - III
3.1 Encapsulation
If a data member is private it means it can only be accessed within the same
class. No outside class can access private data member (variable) of other class.
1. Number of components
2. Fixed size – arrays; variable size – stacks, lists (a pointer is used to link
components)
Type of each component
1. Homogeneous – all components are of the same type
2. Heterogeneous – components are of different types
2. Component selection: sequential or random
3. Insertion/deletion of components
4. Whole-data-structure operations: creation/destruction of data structures
B. Implementation of data structure types
Storage representation
Includes:
1. storage for the components
2. optional descriptor - to contain some or all of the attributes
Sequential representation:
The data structure is stored in a single contiguous block of storage that includes
both descriptor and components. Used for fixed-size structures, homogeneous
structures (arrays, character strings)
Linked representation:
The data structure is stored in several noncontiguous blocks of storage, linked
together through pointers. Used for variable-size structured (trees, lists) Stacks, queues,
lists can be represented in either way. Linked representation is more flexible and
ensures true variable size; however it has to be software simulated.
Implementation of operations on data structures
Component selection in sequential representation:
Base address plus offset calculation. Add component size to current location to
move to next component.
Component selection in linked representation:
Move from address location to address location following the chain of pointers.
Storage management
Access paths to a structured data object - to ensure access to the object for its
processing. Created using a name or a pointer.
Two central problems:
Garbage – the data object is still bound, but the access path to it has been destroyed;
its memory cannot be unbound.
Dangling references – the data object is destroyed, but the access path still exists.
C. Declarations and type checking for data structures
Access - can be implemented efficiently if the length of the components of the array
is known at compilation time. The address of each selected element can be
computed using an arithmetic expression.
Whole array operations, e.g. copying an array - may require much memory.
Associative arrays
Instead of using an integer index, elements are selected by a key value, that is a
part of the element. Usually the elements are sorted by the key and binary search is
performed to find an element in the array.
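A hedged sketch of key-based selection with binary search, in Python (the table contents and names are invented for illustration):

```python
import bisect

# A sorted list of (key, value) elements; selection is by key, not by integer index.
table = [("ada", 1983), ("c", 1972), ("lisp", 1958), ("prolog", 1972)]
keys = [k for k, _ in table]  # kept sorted by key

def lookup(key):
    """Binary search for the element whose key matches."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return table[i][1]
    raise KeyError(key)
```

Languages with built-in associative arrays (Perl hashes, Python dicts) hide this search behind the indexing syntax.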
E. Records
1. Number of components
2. Data type of each component
3. Selector used to name each component.
Implementation:
Storage: single sequential block of memory where the components are stored
sequentially.
Selection: provided the type of each component is known, the location can be
computed at translation time.
F. Other structured data objects
3.3 Abstract Data Types
Information hiding
Information hiding is the term used for the central principle in the design of
programmer-defined abstract data types.
1. By providing a virtual computer that is simpler to use and more powerful than
the actual underlying hardware computer.
2. The language provides facilities that aid the programmer to construct
abstractions.
3.4 Encapsulation by subprograms
A. Subprograms as abstract operations
Implementation of a subprogram:
The body is encapsulated; its components cannot be accessed separately by the user of
the subprogram. The interface with the user (the calling program) is accomplished by
means of arguments and returned results.
Subprogram activation
A data structure (record) created upon invoking the subprogram. It exists while
the subprogram is being executed. After execution the activation record is destroyed.
C. Implementation of subprogram definition and invocation
A better approach:
The executable statements and constants are invariant part of the subprogram
they do not need to be copied for each execution of the subprogram. A single
copy is used for all activations of the subprogram. This copy is called code
segment. This is the static part.
The activation record contains only the parameters, results and local data.
This is the dynamic part. It has same structure, but different values for the
variables.
On the left is the subprogram definition. On the right is the activation record created
during execution. It contains the types and number of variables used by the
subprogram and the assigned memory locations at each execution of the subprogram.
The definition serves as a template to create the activation record (the use of the word
template is different from the keyword template in class definitions in C++, though its
generic meaning is the same - a pattern, a frame to be filled in with particular values. In
class definitions the binding refers to the data types and it is performed at compilation
time, while here the binding refers to memory locations and data values, and it is
performed at execution time.)
Generic subprograms: have a single name but several different definitions –
overloaded.
a) Basics
Type definitions are used to define new data types. Note, that they do not define a
complete abstract data type, because the definitions of the operations are not included.
Examples:
typedef int key_type;
key_type key1, key2;
These statements will be processed at translation time and the type of key1 and key2
will be set to integer.
struct rational_number {
    int numerator, denominator;
};
Name equivalence: two data types are considered equivalent only if they have the
same name.
Issues
Every object must have an assigned type; there can be no anonymous types.
A single type definition must serve all or large parts of a program.
Structural equivalence: two data types are considered equivalent if they define data
objects that have the same internal components.
Issues
Do components need to be exact duplicates? Can field order differ in
records? Can field sizes vary?
Data object equality
We can consider two objects to be equal if each member in one object is
identical to the corresponding member of the other object. However there still may be a
problem. Consider for example the rational numbers 1/2 and 3/6.
Are they equal according to the above definition?
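They are not equal component-wise, although they denote the same rational number. One common resolution, sketched here in Python (not prescribed by the text), is to normalize both values to lowest terms before comparing:

```python
from math import gcd

def normalize(num, den):
    """Reduce a rational number to lowest terms."""
    g = gcd(num, den)
    return (num // g, den // g)

def rat_equal(a, b):
    # Compare after normalization, so that 1/2 and 3/6 are recognized as equal.
    return normalize(*a) == normalize(*b)

assert (1, 2) != (3, 6)           # component-wise comparison says "not equal"
assert rat_equal((1, 2), (3, 6))  # mathematical equality says "equal"
```

This is exactly the kind of decision an abstract data type can hide: the equality operation, not the client, decides what "equal" means.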
Basic idea: The data components and the programs that implement the operations are
hidden from the external world. The object is encapsulated.
E.g. private section: accessible only to the class functions (class functions are also
called methods); public section: contains the methods to be used by other programs.
Generic abstract data types - use templates
This is the case when the data components may be of different type, however the
operations stay the same, e.g. a list of integers, a list of characters.
Instantiation occurs at compile time
1. Derived classes
Generalization and specialization: Down the hierarchy the objects become more
specialized, up the hierarchy - more generalized.
Derived classes inherit data components and/or methods; further on, they can specify
their own data components and their own specific methods. The specific parts may
have the same names as in the parent – they override the definition in the parent class.
Implementation
Copy-based approach (Direct encapsulation) - each instance of a class object has its
own data storage containing all data components - specific plus inherited.
Delegation-based approach (Indirect encapsulation) – the object uses the data storage
of the base class.
2. Multiple inheritance
4. Inheritance of methods
class Figure {
public:
    Figure();
    virtual void draw();
    virtual void erase();
    void center();
    void set_color(TColor);
    void position_center();
};

void Figure::center()
{
    erase();
    position_center();
    draw();
}

class Box : public Figure {
public:
    Box();
    void draw();
    void erase();
};

int main()
{
    Box a_box;
    a_box.draw();       // overrides the base-class draw
    a_box.set_color(C); // inherits the method
    a_box.center();     // makes use of virtual functions
}
Abstract Classes – can serve only as templates; no data objects can be declared
with the name of the class. Specified by null (pure) virtual functions.
Example:
When printing a person's name, we may want to print the full name, or to print only the
first and the last name. We may use two functions that have the same name but
different number of arguments:
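C++ would resolve the two versions by overloading on the number of arguments. Python has no such overloading, so this sketch (the names are illustrative, not from the text) uses an optional argument to the same effect:

```python
def print_name(first, last, middle=None):
    """One callable serving both 'overloads':
    with a middle name, or with only first and last."""
    if middle is None:
        return f"{first} {last}"
    return f"{first} {middle} {last}"

print_name("Ada", "Lovelace")          # two-argument version
print_name("Ada", "Lovelace", "King")  # three-argument version
```

In a language with true overloading, the compiler would instead select between two separate function bodies based on the argument count.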
--------------------------------------UNIT-III COMPLETED--------------------------------------
4. UNIT- IV
If we ignore the “how” and focus on the result, or the “what” of the
computation, the program becomes a virtual black box that transforms input
into output
Independent variable: the x in f(x), representing any value from the set X
Dependent variable: the y from the set Y, defined by y=f(x)
Assignment statements allow memory locations to be reset with new values
o Its value depends only on the values of its arguments (and possibly
nonlocal variables)
Referential transparency: the property whereby a function’s value depends
only on the values of its variables (and nonlocal variables)
Examples:
o rand function is not because it depends on the state of the machine and
previous calls to itself
Value semantics: semantics in which names are associated only with values, not
memory locations
Composition: essential operation on functions
o (g o f)(x) = g(f(x))
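A minimal sketch of composition in Python (function names are illustrative):

```python
def compose(g, f):
    """(g o f)(x) = g(f(x))"""
    return lambda x: g(f(x))

inc = lambda x: x + 1
double = lambda x: 2 * x
h = compose(double, inc)  # h(x) = 2 * (x + 1)
```

Note that composition is not commutative: compose(inc, double)(3) gives 7, while h(3) gives 8.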
Common characteristics of functional programming languages:
Some functional languages are pure, i.e. they contain no imperative features at
all. Examples: Haskell, Miranda, Gofer.
Impure languages may have assignment statements, gotos, while-loops, etc.
Examples: LISP, ML, Scheme.
Applications of Functional Languages
Haskell is statically typed (the signature of all functions and the types of all
variables are known prior to execution);
Haskell uses lazy rather than eager evaluation (expressions are only evaluated
when needed);
Haskell uses type inference to assign types to expressions, freeing the
programmer from having to give explicit types;
Haskell is pure (it has no side-effects).
The advantage of lazy evaluation is that it allows us to construct infinite objects
piece by piece as necessary
Consider the following function which can be used to produce infinite lists of
integer values:
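The function itself does not appear in these notes; as an illustrative stand-in (not the original Haskell listing), a Python generator produces the same kind of on-demand infinite sequence:

```python
from itertools import islice

def ints_from(n):
    """Lazily produce the infinite sequence n, n+1, n+2, ...
    Values are computed piece by piece, only when demanded."""
    while True:
        yield n
        n += 1

# Demand only five elements of the conceptually infinite list:
first_five = list(islice(ints_from(3), 5))
```

As with Haskell's lazy lists, the infinite object is never built in full: each element is produced only when a consumer asks for it.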
D. Haskell – Currying
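The body of this section is not reproduced in these notes; as an illustrative sketch (in Python rather than Haskell), currying turns a two-argument function into nested one-argument functions:

```python
def curry2(f):
    """Turn a two-argument function into a function of one argument
    that returns a function of the second argument."""
    return lambda x: lambda y: f(x, y)

add = lambda x, y: x + y
add3 = curry2(add)(3)  # partially applied: a function that adds 3
```

In Haskell every function is curried by default, so partial application like `add3` needs no special machinery.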
4.3 Functional Programming in an Imperative Language
Comparing Functional and Imperative Languages
Imperative Languages:
Efficient execution
Complex semantics
Complex syntax
Concurrency is programmer designed
Functional Languages:
Simple semantics
Simple syntax
Inefficient execution
Programs can automatically be made concurrent
Sequence of instructions
Modification of variables (memory cells)
Test of variables (memory cells)
Transformation of states (automata)
Construction of programs
Describe what has to be computed
Organize the sequence of computations into steps
Organize the variables
Correctness
Specifications by pre/post-conditions
Loop invariants
Symbolic execution
Expressions: f(z) + x / 2 can be different from x / 2 + f(z), namely when f modifies
the value of x (by side effect).
Variables: the assignment x := x + 1 modifies a memory cell as a side effect.
4.4 LISP
LISP is the first functional programming language. Its expressions take two
forms:
o Data object types: originally only atoms and lists
o List form: parenthesized collections of sublists and/or atoms, e.g., (A B (C D) E)
Fundamentals of Functional Programming Languages:
The objective of the design of a FPL is to mimic mathematical functions to the
greatest extent possible. The basic process of computation is fundamentally
different in a FPL than in an imperative language
In an imperative language, operations are done and the results are stored in
variables for later use
In an FPL, the evaluation of a function always produces the same result given the
same parameters
Scheme:
A mid-1970s dialect of LISP, designed to be a cleaner, more modern, and simpler
version than the contemporary dialects of LISP. Uses only static scoping.
Functions are first-class entities: they can be the values of expressions and
elements of lists, they can be assigned to variables, and they can be passed as
parameters.
Functional programming languages are based on the lambda-calculus. The
lambda-calculus grew out of an attempt by Alonzo Church and Stephen Kleene
in the early 1930s to formalize the notion of computability (also known
as constructability and effective calculability).
It is a formalization of the notion of functions as rules (as opposed to functions as
tuples). As with mathematical expressions, it is characterized by the principle
that the value of an expression depends only on the values of its sub
expressions.
The lambda-calculus is a simple language with few constructs and a simple
semantics. But, it is expressive; it is sufficiently powerful to express all
computable functions.
As an informal example of the lambda-calculus, consider the function defined by
the polynomial expression
x2 + 3x - 5.
The variable x is a parameter. In the lambda-calculus, the notation λx.M is used
to denote a function with parameter x and body M. That is, x is mapped to M.
We rewrite our function in this format
λx.(x2+ 3x - 5)
and read it as ``the function of x whose value is defined by x2 + 3x - 5''. The lambda-
calculus uses prefix form, so we rewrite the body in prefix form:
λx.((- ((+ ((* x) x)) ((* 3) x))) 5)
Applying this function to the argument 1 reduces, step by step, to
((- 4) 5)
and finally the subtraction gives
-1.
We say that the variable x is bound in the lambda-expression λx. A variable occurring in
the lambda-expression which is not bound is said to be free. The variable x is free in the
lambda-expression λy.((+ x) y). The scope of the variable introduced (or bound) by
lambda is the entire body of the lambda-abstraction.
The lambda-notation extends readily to functions of several arguments. Functions of
more than one argument can be curried to produce functions of single arguments. For
example, the polynomial expression xy can be written as
λx. λy. xy
When the lambda-abstraction λx.λy.xy is applied to a single argument, as in ((λx.λy.xy)
5), the result is λy.5y, a function which multiplies its argument by 5. A function of more
than one argument is regarded as a function of one variable whose value is a function
of the remaining variables – in this case, ``multiply by a constant.''
The pure lambda-calculus does not have any built-in functions or constants. Therefore,
it is appropriate to speak of the lambda-calculi as a family of languages for computation
with functions. Different languages are obtained for different choices of functions and
constants.
(+ (* 5 6) (* 8 3)) → (+ 30 (* 8 3))
→ (+ 30 24)
→ 54
When the expression is the application of a lambda-abstraction to a term, the term is
substituted for the bound variable. This substitution is called β-reduction. In the
following sequence of reductions, the first step is an example of β-reduction; the second
step is the reduction required by the addition operator.
((λx.((+ 3) x)) 4)
→ ((+ 3) 4)
→ 7
The pure lambda-calculus has just three constructs: primitive symbols, function
application, and function creation. Figure N.1 gives the syntax of the lambda-calculus.
Example: (if a b c)
Using applicative order evaluation for functions makes semantics and
implementation easier
Languages in which functions are strict are easier to implement, although
non-strictness can be a desirable property
When called as p(true, 1 div 0), it returns 1 since y is never reached in the code of
p
Such “shell” procedures are sometimes referred to as pass by name thunks, or
just thunks
In Scheme and ML, the lambda and fn function value constructors can be used to
surround parameters with function shells
Example:
Delayed evaluation can introduce inefficiency when the same delayed expression
is repeatedly evaluated
Scheme uses a memoization process to store the value of the delayed object the
first time it is forced, and then returns this value for each subsequent call to force
This can be achieved in a functional language without explicit calls to delay and
force
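A sketch of delay/force with memoization in Python (the names delay and force follow Scheme's usage; the implementation here is only illustrative):

```python
def delay(thunk):
    """Wrap a computation so it is not evaluated until forced.
    The result is memoized: the thunk runs at most once."""
    cell = {"done": False, "value": None}
    def promise():
        if not cell["done"]:
            cell["value"] = thunk()
            cell["done"] = True
        return cell["value"]
    return promise

def force(promise):
    return promise()

calls = []
p = delay(lambda: calls.append("eval") or 42)
assert force(p) == 42 and force(p) == 42
assert calls == ["eval"]  # the delayed expression was evaluated only once
```

The memoization is what avoids the inefficiency noted above: repeated forcing of the same promise does not repeat the computation.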
Required runtime rules for lazy evaluation:
o All bindings of local names in let and letrec expressions are delayed
Same-fringe problem for lists: two lists have the same fringe if they contain the
same non-null atoms in the same order
To determine if two lists have the same fringe, must flatten them to just lists of
their atoms
flatten function: can be viewed as a filter; reduces a list to a list of its atoms
Lazy evaluation will compute only enough of the flattened lists as necessary
before their elements disagree
Side effects, in particular assignment, do not mix well with lazy evaluation
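A sketch of the same-fringe comparison using Python generators as a stand-in for lazy evaluation (names are illustrative, not from the text):

```python
from itertools import zip_longest

def flatten(tree):
    """Generator-based flatten: reduces a nested list to its non-null atoms,
    lazily, one atom at a time."""
    for item in tree:
        if isinstance(item, list):
            yield from flatten(item)
        else:
            yield item

def same_fringe(t1, t2):
    # Compare the two atom streams; generation stops as soon as they disagree.
    sentinel = object()
    return all(a == b for a, b in
               zip_longest(flatten(t1), flatten(t2), fillvalue=sentinel))
```

Because flatten is a generator, only as many atoms are produced as the comparison actually inspects, mirroring the lazy-evaluation behavior described above.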
3. Apply-to-all
A functional form that takes a single function as a parameter and yields a list of
values obtained by applying the given function to each element of a list of parameters.
Form: α
For h(x) ≡ x * x * x, α(h, (3, 2, 4)) yields (27, 8, 64). This looks like map().
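The form α corresponds directly to map; a minimal Python illustration:

```python
def h(x):
    return x * x * x

# Apply-to-all: alpha(h, (3, 2, 4)) is exactly map in Python.
result = tuple(map(h, (3, 2, 4)))
```

Here `result` is (27, 8, 64), matching the worked example above.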
Now consider the formula as a set equation: fact = fact0 U fact', where fact' is
the function formed for each n by the formula n * fact (n-1).
But consider what would happen if we used fact0 as an approximation for fact
in the formula for fact':
So apply the equation again, this time using fact1 as an approximation for fact
in fact'. We get yet another point! Call this function fact2 . Continue.
If we try the process once again, we find that we get no new points: fact =
fact0 U fact '.
The function fact is said to be a fixed point of the recursive equation for fact.
Indeed, fact is the smallest such set with this property and is essentially
unique.
So it makes sense to define the fact function to be this least fixed point: we say
that the recursive definition has least-fixed-point semantics.
Not all sets allow least fixed point solutions to recursive equations. Sets that
do are called domains.
Domain theory tells us when recursive definitions will work (and when they
won't).
FAC = Y H
Y is called a fixed-point combinator.
With the function Y, this definition of FAC does not use recursion. From the previous
two definitions, the function Y has the property that
Y H = H (Y H)
As an example, here is the computation of FAC 1 using the Y combinator.
FAC 1 = (Y H) 1
= H (Y H) 1
= (λfac.λn.(if (= n 0) 1 (* n (fac (- n 1))))) (Y H) 1
= (λn.(if (= n 0) 1 (* n ((Y H) (- n 1))))) 1
= if (= 1 0) 1 (* 1 ((Y H) (- 1 1)))
= (* 1 ((Y H) (- 1 1)))
= (* 1 ((Y H)0))
= (* 1 (H (Y H) 0))
...
= (* 1 1)
= 1
The function Y can be defined in the lambda-calculus:
Y = λh.((λx.(h (x x))) (λx.(h (x x))))
It is especially interesting because it is defined as a lambda-abstraction without using
recursion. To show that this lambda-expression properly defines the Y combinator,
here it is applied to H.
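The construction can be tried out in Python, with one caveat: under Python's eager (applicative-order) evaluation the pure Y above loops forever, so this sketch uses the eta-expanded, strict-language variant usually called Z:

```python
# Z combinator: the strict-language variant of Y. The inner lambda v delays
# the self-application x(x) until a real argument arrives.
Z = lambda h: (lambda x: h(lambda v: x(x)(v)))(lambda x: h(lambda v: x(x)(v)))

# H from the text: one "unrolling" of factorial.
H = lambda fac: lambda n: 1 if n == 0 else n * fac(n - 1)

FAC = Z(H)  # a recursive factorial defined without any named recursion
```

FAC(5) evaluates to 120: recursion emerges purely from self-application, with no function referring to itself by name.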
λxy.x + y instead of λx.λy.x + y
Issues such as delayed evaluation, recursion, and scope can be studied with
mathematical precision in the lambda calculus.
This expression:
A reduction rule permits 2 to be substituted for x in the lambda, yielding this:
The set of constants and the set of variables are not specified by the grammar
In the expression
Typed lambda calculus: more restrictive form that includes the notion of data
type, thus reducing the set of expressions that are allowed
o Must change the name of the variable in the inner lambda abstraction
(alpha-conversion)
Syntax for lambda calculus
Two operations: function abstraction and function application.
Grammar:
lexpr → λ variable . lexpr | lexpr lexpr
      | ( lexpr ) | variable | constant
Examples:
λx. + ((λy. (λx. x y) 2) x) y
(λx. x y) y
The set of variables is unspecified, but doesn't matter very much, as long as it
is (countably) infinite.
The set of constants isn't specified either, and this can make a difference in
terms of what you want to express. This set may be infinite (all integers) or
finite or even empty (pure lambda calculus). Thus, there are really many
kinds of lambda calculus.
(λh.(λx. h (x x)) (λx. h (x x)))
Some expressions are equivalent to others in lambda calculus even though they
are syntactically distinct:
o λx. x is equivalent to λy. y
o λx. y x is equivalent to y
o (λx. x) y is equivalent to y
Conversion operations depend on the notion of the scope (or binding) of a
variable in an abstraction.
The variable x in the expression (λx.E) is said to be bound by the lambda. The
scope of the binding is the expression E.
Execution is initiated by a query or goal, which the system attempts to prove true
or false, based on the existing set of assertions.
For this reason, logic programming systems are sometimes called deductive
databases.
Computing ancestors:
A parent is an ancestor.
If A is an ancestor of B, and B is an ancestor of C, then A is an ancestor of C (a
relation of this kind is called transitive).
A mother is a parent.
A father is a parent.
Bill is the father of Jill.
Jill is the mother of Sam.
Bob is the father of Sam.
A. Horn Clauses
The null clause: 0 positive and 0 negative literals. Appears only as the end of a
resolution proof.
Drop the quantifiers (i.e., assume them implicitly). Distinguish variables from
constants, predicates, and functions by upper/lower case:
Modified Horn clause syntax: write the clauses backward, with :- as the
(backward) arrow, comma as "and" and semicolon as "or":
ancestor(X,Y) :- parent(X,Y).
ancestor(X,Y) :-
    ancestor(X,Z), ancestor(Z,Y).
parent(X,Y) :- mother(X,Y).
parent(X,Y) :- father(X,Y).
father(bill, jill).
mother(jill, sam).
father(bob, sam).
factorial(0,1).
factorial(N,N*M) :- factorial(N-1,M).
parent(X,Y) → ancestor(X,Y).
ancestor(A,B) and ancestor(B,C) → ancestor(A,C).
mother(X,Y) → parent(X,Y).
father(X,Y) → parent(X,Y).
father(bill, jill).
mother(jill, sam).
father(bob, sam).
factorial(0,1).
factorial(N-1,M) → factorial(N,N*M).
B. Prolog
PROLOG was developed in France and England in the late 1970s. The intent was
to provide a language that could accommodate logic statements. It has largely
been used in AI, but also, to a lesser extent, as a database language or to solve
database-related problems.
Elements of Prolog
Terms – constant, variable, structure. Constants are atoms or integers (atoms are
like the symbols found in Lisp). Variables are not bound to types, but are bound
to values when instantiated (via unification); an instantiation lasts as long as it
takes to complete a goal – proving something true, or reaching a dead end with
the current instantiation. Structures are predicates, represented as
functor(parameter list), where functor is the name of the predicate.
– Headless clauses are statements that are always true; in reality, a headless
clause is a rule whose condition is always true.
RULES
All rules are stated in Horn clause form. The consequence comes first; the symbol :-
separates the consequence from the antecedent. And is denoted by a comma (,) and Or
by a semicolon (;), or by splitting the rule into two separate rules. Variables in
rules are indicated with upper-case letters. Rules end with a period (.).
o examples:
o we can use _ as a wildcard meaning this is true if we can find any clause
that fits
Advantages
Deficiencies of Prolog
– Prolog offers built-in control of resolution and unification; you often have
to force a problem into the resolution mold to solve it with Prolog (most
problems cannot or should not be solved in this way)
Inefficiencies of resolution
Given a query or goal, Prolog tries to pattern match the goal with the left-hand
sides of all clauses, in a sequential top-down fashion.
Any lhs that matches causes the terms to be set up sequentially as sub goals,
which Prolog immediately tries to match in turn with lhs terms.
-------------------------------------UNIT-IV COMPLETED-----------------------------------------------
5. UNIT -V
Another semantics for the example is (1 + 2) + (4 + 5) ⇒ (1 + 2) + 9 ⇒ 3 + 9 ⇒ 12. The
outcome is the same, and a set of rules that has this property is called confluent [25].
A structural operational semantics is a term-rewriting system plus a set of
inference rules that state precisely the context in which a computation step can
be undertaken. (A structural operational semantics is sometimes called
“small-step semantics,” because each computation step is a small step towards
the final answer.) Say that we demand left-to-right computation of arithmetic
expressions. This is encoded as follows:

N1 + N2 ⇒ N′   where N′ is the sum of N1 and N2

E1 ⇒ E1′
-------------------
E1 + E2 ⇒ E1′ + E2

E2 ⇒ E2′
-------------------
N + E2 ⇒ N + E2′
The first rule goes as before; the second rule states, if the left operand of an
addition expression can be rewritten, then do this. The third rule is the crucial
one: if the right operand of an addition expression can be rewritten and the left
operand is already a numeral (completely evaluated), then rewrite the right
operand. Working together, the three rules force left-to-right evaluation.
Each computation step must be deduced by the rules. For (1 + 2) + (4 + 5), we
deduce this
initial computation step:
1 + 2 ⇒ 3
---------------------------------
(1 + 2) + (4 + 5) ⇒ 3 + (4 + 5)
Thus, the first step is (1+2)+(4+5) ⇒ 3+(4+5); note that we cannot deduce that
(1+2)+(4+5) ⇒
(1 + 2) + 9. The next computation step is justified by this deduction:
4 + 5 ⇒ 9
-----------------------
3 + (4 + 5) ⇒ 3 + 9
The last deduction is simply 3 + 9 ⇒ 12, and we are finished. The example shows
why the semantics is “structural”: each computation step is explicitly embedded
into the structure of the overall program.
Operational semantics is often used to expose implementation concepts, like
instruction counters, storage vectors, and stacks. For example, say our semantics
of arithmetic must show how a stack holds intermediate results. We use a state of
form ⟨s, c⟩, where s is the stack and c is the arithmetic expression to evaluate. A
stack containing n items is written v1 :: v2 :: ... :: vn :: nil,
where v1 is the topmost item and nil marks the bottom of the stack. The c
component will be written as a stack as well. The initial state for an arithmetic
expression, p, is written ⟨nil, p :: nil⟩, and computation proceeds until the state
appears as ⟨v :: nil, nil⟩; we say that the result is v.
meaning, 12. The implementation that computes the 12 is a separate issue,
perhaps addressed by an operational semantics.
The assignment of meaning to programs is performed compositionally: the
meaning of a phrase is built from the meanings of its subphrases. We now see
this in the denotational semantics of the arithmetic language. First, we assert that
meanings of arithmetic expressions must be taken from the domain (“set”) of
natural numbers, Nat = {0, 1, 2, ...}, and there is a binary mathematical function,
plus : Nat × Nat → Nat, which maps a pair of natural numbers to their sum.
The denotational semantics definition of arithmetic is simple and elegant:
E : Expression →Nat
E[[N]] = N
E[[E1 + E2]] = plus(E[[E1]], E[[E2]])
The first line states that E is the name of the function that maps arithmetic
expressions to
their meanings. Since there are two BNF constructions for expressions, E is
completely defined by the two equational clauses. (This is a Tarksi-style
interpretation, as used in symbolic logic to give meaning to logical propositions .
The interesting clause is the one for E1 +E2; it says that the meanings of E1 and
E2 are combined compositionally by plus. Here is the denotation semantics of
our example program:
E[[(1 + 2) + (4 + 5)]] = plus(E[[1 + 2]], E[[4 + 5]])
= plus(plus(E[[1]], E[[2]]), plus(E[[4]], E[[5]]))
= plus(plus(1, 2), plus(4, 5)) = plus(3, 9) = 12
Read the above as follows: the meaning of (1+2)+(4+5) equals the meanings of
1+2 and 4+5 added together. Since the meaning of 1 + 2 is 3, and the meaning of
4 + 5 is 9, the meaning of the overall expression is 12. This reading says nothing
about order of evaluation or run-time data structures – it states only
mathematical meaning.
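The valuation function E can be transcribed almost literally into code; a sketch in Python (the tuple representation of expressions is an assumption for illustration, not from the text):

```python
# Expressions are nested tuples ("+", e1, e2), or a bare numeral.
def plus(m, n):
    """The mathematical function plus : Nat x Nat -> Nat."""
    return m + n

def E(expr):
    if isinstance(expr, int):   # E[[N]] = N
        return expr
    op, e1, e2 = expr           # E[[E1 + E2]] = plus(E[[E1]], E[[E2]])
    assert op == "+"
    return plus(E(e1), E(e2))

# E[[(1 + 2) + (4 + 5)]]
meaning = E(("+", ("+", 1, 2), ("+", 4, 5)))
```

Here `meaning` is 12. Note the compositionality: E recurses on subphrases only, and no evaluation order is imposed beyond what the mathematics requires.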
Here is an alternative way of understanding the semantics: write a set of
simultaneous equations based on the denotational definition:
Motivation
A correct program is one that does exactly what it is intended to do, no more and
no less.
This requires a language for specifying precisely what the program is intended to
do.
o Hoare invented “
Theorem provers
o PVS
Modeling languages
Specification languages
o JML
o Eiffel
o Java
o Spark/Ada
Specification Methodology
o Design by contract
Partial correctness
There is no guarantee that an arbitrary program will terminate normally; for some
inputs, it may enter an infinite loop, or fail in some other way.
E.g., consider a C-like factorial function n! whose parameter n and result are int
values. Passing 21 as an argument should return 21! = 51090942171709440000,
which does not fit in an int.
A program s is partially correct with respect to P and Q if, whenever s begins in a
state that satisfies P, it terminates in a state that satisfies Q.
Issues to be addressed:
Variable definitions
o Mutable: values may be assigned to the variables and changed
during program execution (as in sequential languages).
o Definitional: variable may be assigned a value only once.
Parallel composition: A parallel statement, which causes additional threads of
control
to begin executing.
Execution models (Program structure)
Communication
Shared memory with common data objects accessed by each parallel program;
5.9 Threads
Threads are lightweight tasks (i.e., they share the same address space); they
communicate using shared data (and the join method)
Concurrent units
1. The run() method of various objects
a) descendants of the Thread class
b) classes implementing Runnable
2. The main method
class MyThread extends Thread {
    public void run() { ... }
}
...
Thread myTh = new MyThread();
myTh.start();

start – begins execution of the thread
yield – voluntarily yields the processor
sleep – postpones execution for a specified time (in ms), after which the thread
rejoins the ready queue
join – waits for a (different) thread to finish
5.10 Semaphore
A semaphore is a variable that has an integer value.
It may be initialized to a nonnegative number.
The wait operation decrements the semaphore value.
The signal operation increments the semaphore value.
Each task performs:
wait(s);
/* critical section */
signal(s);
Producer: Consumer:
108
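The producer/consumer listings are not reproduced in these notes; as an illustrative sketch (not the original pseudocode), here is a bounded buffer guarded by semaphores in Python:

```python
import threading
from collections import deque

N = 4
buffer = deque()
empty = threading.Semaphore(N)   # counts free slots
full = threading.Semaphore(0)    # counts filled slots
mutex = threading.Semaphore(1)   # protects the buffer itself

def producer(items):
    for item in items:
        empty.acquire()          # wait(empty): block if buffer is full
        mutex.acquire()          # wait(mutex): enter critical section
        buffer.append(item)
        mutex.release()          # signal(mutex)
        full.release()           # signal(full): one more item available

def consumer(n, out):
    for _ in range(n):
        full.acquire()           # wait(full): block if buffer is empty
        mutex.acquire()
        out.append(buffer.popleft())
        mutex.release()
        empty.release()

out = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, out))
p.start(); c.start(); p.join(); c.join()
```

The counting semaphores enforce the buffer bound, while the binary semaphore `mutex` gives the wait/critical-section/signal pattern shown above.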
wait(s)
  if s.counter > 0 then
    s.counter--
  else
    (s.queue).insert(P)
    suspend P’s execution
  end
signal(s)
  if isempty(s.queue) then
    s.counter++
  else
    P = (s.queue).remove()
    mark P as ready
  end
5.11 Monitors
Monitors solve some of the problems of semaphores. They were formalized in
1973 (Hansen) and named in 1974 (Hoare). They use encapsulation: shared data
are packaged together with the operations on these data, and the representation
is hidden – basically a kind of ADT. The access mechanism is part of the monitor:
if some subroutine of the monitor is running, all other calling tasks are put into a
queue. Monitors guarantee mutual exclusion; cooperation is the programmer’s
responsibility. They can be implemented using semaphores (and vice versa),
e.g. synchronized in Java.
A monitor is a programming language construct that supports controlled access
to shared data; synchronization code is added by the compiler.
A monitor encapsulates:
o shared data structures
Data can only be accessed from within the monitor, using the provided
procedures; this protects the data from unstructured access.
Only one thread can be executing inside at any time; thus, synchronization is
implicitly associated with the monitor – it “comes for free.” If a second thread
tries to execute a monitor procedure, it blocks until the first has left the monitor.
Condition Variables
“Required” for monitors. So useful that they are often provided even when
monitors aren’t available.
Three operations on condition variables
wait(c)
signal(c)
broadcast(c)
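These three operations map directly onto Python's threading.Condition methods wait(), notify(), and notify_all(); a sketch (the scenario is invented for illustration):

```python
import threading

cond = threading.Condition()  # condition variable plus its monitor lock
ready = False
events = []

def waiter(name):
    with cond:                # enter the monitor
        while not ready:      # re-test the condition after each wakeup
            cond.wait()       # wait(c): release the lock, sleep, reacquire
        events.append(name)

threads = [threading.Thread(target=waiter, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

with cond:
    ready = True
    cond.notify_all()         # broadcast(c): wake every waiting thread

for t in threads:
    t.join()
```

notify() would correspond to signal(c), waking a single waiter; the while-loop re-test guards against spurious or early wakeups.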
Message passing requires the cooperation of two processes: the process that
owns the data and the process that wants to access the data.
This constraint, while onerous, makes the underlying costs very explicit to the
programmer
Message-passing programs are written using the asynchronous or loosely
synchronous paradigms
o All concurrent tasks execute asynchronously.
o Tasks or subsets of tasks synchronize to perform interactions.
Languages
o Send/Receive statements, specific constructs to define communication
channels, etc., as an integral part of the language. OCCAM is an old MP
language, whose MP constructs were directly supported by the
Transputer machine language
o Languages permit compile-time analyses, type checking, deadlock
checking, etc.
Libraries offer a set of MP primitives, and are linkable to many sequential
languages (C/C++, F77/F90, etc.)
o MPI (Message Passing Interface)
o PVM (Parallel Virtual Machine)
o Optimized version for specialized networks, but there exist versions that
work for TCP/IP over an Ethernet network
MPI (Message Passing Interface)
o standard for parallel programming
o Universities, research centers, and industry were involved
o there exist public-domain implementations
o mpich – maintained by Argonne National Laboratories
PVM (Parallel Virtual Machine)
o first MP library that has been largely adopted
o Homogeneous high-performance clusters, but also heterogeneous
distributed architectures composed of remote hosts over Internet
o developed by Oak Ridge National Laboratories
o public domain
Process creation
o At loading time: the number of processes is decided at loading time; used in
MPI (mpich), now available in PVM too. SPMD (Single Program Multiple Data):
the same code is executed by all the process copies.
o At run time: given an executable code, create a process executing that code;
used in PVM (spawn), now available in MPI too. In principle, processes can
execute different codes (MPMD – Multiple Programs Multiple Data).
o At compile time: the old approach – everything is decided statically (number
and mapping of processes); e.g. OCCAM on the Meiko CS-1, Transputer-based.
MPI defines a standard library for message-passing that can be used to develop
portable message-passing programs using either C/C++ or Fortran.
The MPI standard defines both the syntax as well as the semantics of a core set of
library routines.
Vendor implementations of MPI are available on almost all commercial parallel
computers.
It is possible to write fully-functional message-passing programs by using only
six routines.
MPI_Init Initializes MPI.
MPI_Finalize Terminates MPI.
MPI_Comm_size determines the number of processes.
MPI_Comm_rank determines the label of calling process.
MPI_Send sends a message.
MPI_Recv receives a message.
MPI_Init is called prior to any calls to other MPI routines. Its purpose is to
initialize the MPI environment.
MPI_Finalize is called at the end of the computation, and it performs various
clean-up tasks to terminate the MPI environment.
The prototypes of these two functions are:
int MPI_Init(int *argc, char ***argv)
int MPI_Finalize()
MPI_Init also strips off any MPI related command-line arguments.
All MPI routines, data-types, and constants are prefixed by “MPI_”.
The return code for successful completion is MPI_SUCCESS.
Parallelism – Non-Imperative Languages
The execution of more than one program/subprogram simultaneously.
A subprogram that can execute concurrently with other subprograms is called a
task or a process.
Hardware supported:
o multiprocessor systems
o distributed computer systems
Software simulated: time-sharing
Variable definitions
Mutable: values may be assigned to the variables and changed during program
execution (as in sequential languages).
Definitional: a variable may be assigned a value only once.
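The contrast between mutable and definitional variables can be sketched as follows (the `Definitional` class is our own illustration of a write-once cell, not a built-in language feature):

```python
# Sketch: a write-once ("definitional") cell versus an ordinary mutable variable.
class Definitional:
    def __init__(self):
        self._set = False
        self._value = None

    def define(self, value):
        # A definitional variable accepts exactly one assignment.
        if self._set:
            raise RuntimeError("definitional variable already assigned")
        self._set = True
        self._value = value

    @property
    def value(self):
        return self._value

x = 1          # mutable: may be reassigned during execution
x = x + 1      # fine in an imperative language

d = Definitional()
d.define(42)   # first (and only) assignment
try:
    d.define(43)
except RuntimeError as e:
    err = str(e)
```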
Parallel composition: A parallel statement, which causes additional threads of control
to begin executing
Execution models (Program structure)
Transformational: transforms input into output, e.g. parallel matrix multiplication
Reactive: reacts to external events
Communication:
o shared memory, with common data objects accessed by each parallel program
o message passing
Synchronization:
Parallel programs must be able to coordinate their actions.
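A minimal sketch of such coordination in Python's threading module: a lock ensures that two threads updating a shared counter do not interfere with each other.

```python
# Sketch: two threads coordinate access to a shared counter with a lock.
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # only one thread in the critical section at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 with the lock; without it, updates could be lost
```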
In the itemized list below we describe the main properties of the imperative
paradigm.
Characteristics:
o Discipline and idea: digital hardware technology and the ideas of Von
Neumann
o Incremental change of the program state as a function of time.
o Execution of computational steps in an order governed by control
structures. We call the steps commands.
o Straightforward abstractions of the way a traditional Von Neumann
computer works
o Similar to descriptions of everyday routines, such as food recipes and car
repair
o Typical commands offered by imperative languages
Assignment, IO, procedure calls
o Language representatives
Fortran, Algol, Pascal, Basic, C
o The natural abstraction is the procedure
Abstracts one or more actions into a procedure, which can be called as a single
command.
"Procedural programming"
We use several names for the computational steps in an imperative language. The word
statement is often used with the special computer-science meaning 'an elementary
instruction in a source language'. The word instruction is another possibility; we prefer
to reserve this word for the computational steps performed at the machine level. We will
use the word 'command' for the imperatives in a high-level imperative programming
language.
------------------------------------------------UNIT-V COMPLETED-----------------------------------
I M.Sc
OBJECTIVE TEST-I
a) Object code b) machine code
c) source program d) interactive program
8. Programming language 'FORTRAN' stands for
a) formula translator b) formula translation
c) Free translator d) free translation
9. Programming language which is used for scientific purposes and work is to be done
in batches is
a) PASCAL b) FORTRAN
c) LOGO d) COMAL
10. Programming language 'COMAL' stands for
a) common algorithmic language
b) common arithmetic language
c) common arithmetic learning
d) common algorithmic learning
M.Sc (CS) Degree examination
PRINCIPLES OF PROGRAMMING LANGUAGES
Time: Three hours maximum: 75 marks
SECTION A-(05X05=25 marks)
Answer ALL the Questions
1) a) Write short notes on language design issues (or)
b) Discuss in detail about history of programming languages
2) a) Explain about elementary data types in detail. (or)
b) Write short notes on scalar data types.
3) a) Write short notes on encapsulation and structured data types. (or)
b) Write short notes on type definitions.
4) a) Explain in detail about functional programming. (or)
b) Write short notes on programs as functions.
5) a) Explain about formal semantics with a sample small language. (or)
b) Write short notes on operational semantics.
SECTION B-(05X10=50 marks)
Answer ALL the Questions
6) a) Explain in detail about formal translation models (or)
b) Explain briefly stages in translation.
7) a) Explain about composite data types and scalar data types (or)
b) Explain about formal properties of languages
8) a) Explain briefly about encapsulation by subprograms. (or)
b) Explain about polymorphism and inheritance.
9) a) Explain on recursive functions and lambda calculus (or)
b) Explain briefly on logic and logic programs
10) a) Discuss about message passing and monitors (or)
b) Explain about semaphore and threads.
M.Sc (CS) Degree examination
PRINCIPLES OF PROGRAMMING LANGUAGES
Time: Three hours maximum: 75 marks
SECTION A-(05X05=25 marks)
Answer ALL the Questions
1) a) Write short notes on programming environments (or)
b) Explain about recursive descent parsing
2) a) Write short notes on language semantics (or)
b) Discuss about properties of types and objects.
3) a) What is inheritance? Explain the concepts of inheritance. (or)
b) Write short notes on implicit passing of information.
4) a) Explain about mathematical functional programming. (or)
b) Discuss delayed evaluation.
5) a) Explain in detail about program correctness. (or)
b) Explain about monitors and threads.
SECTION B-(05X10=50 marks)
Answer ALL the Questions
6) a) Explain in detail on impact of machine architectures (or)
b) Explain briefly stages in translation.
7) a) Discuss about modeling language properties. (or)
b) Explain about composite data types.
8) a) Explain briefly about encapsulation by subprograms. (or)
b) Explain about polymorphism and inheritance.
9) a) Explain briefly on LISP (or)
b) Explain briefly on functional programming with static typing
10) a) Discuss about denotation semantics (or)
b) Explain the parallel processing and programming languages.