1
20IT509
PRINCIPLES OF
PROGRAMMING
LANGUAGES
M.Saranya
AP/IT
UNIT I SYNTAX AND
SEMANTICS
• Evolution of programming languages – describing syntax –
context-free grammars – attribute grammars – describing
semantics – lexical analysis – parsing – recursive-descent –
bottom up parsing
2
3
OBJECTIVES
• To understand and describe syntax and semantics of
programming languages
• To understand data, data types, and basic statements
• To understand call-return architecture and ways of
implementing them
• To understand object-orientation, concurrency, and
event handling in programming languages
• To develop programs in non-procedural programming
paradigms
4
• Evolution of programming languages
• Describing syntax
– Context-free grammars
– Attribute grammars
• Describing semantics
• Lexical analysis
– Parsing
• Recursive-descent
• Bottom up parsing
5
Improved background for choosing
appropriate languages
• C vs. Modula-3 vs. C++ for systems programming
• Fortran vs. APL vs. Ada for numerical computations
• Ada vs. Modula-2 for embedded systems
• Common Lisp vs. Scheme vs. Haskell for symbolic data
manipulation
• Java vs. C/CORBA for networked PC programs
6
Increased ability to learn new languages
• Easy to walk down language family tree
• Concepts are similar across languages
• If you think in terms of iteration, recursion,
abstraction (for example), you will find it easier
to assimilate the syntax and semantic details of
a new language than if you try to pick it up in a
vacuum
• Analogy to human languages: good grasp of
grammar makes it easier to pick up new
languages
7
Increased capacity to express ideas
Figure out how to do things in languages that don't support them:
• lack of suitable control structures in Fortran
  • use comments and programmer discipline for control structures
• lack of recursion in Fortran, CSP, etc.
  • write a recursive algorithm, then use mechanical recursion
    elimination (even for things that aren't quite tail recursive)
• lack of named constants and enumerations in Fortran
  • use variables that are initialized once, then never changed
• lack of modules in C and Pascal
  • use comments and programmer discipline
• lack of iterators in just about everything
  • fake them with (member?) functions
8
What makes a language successful?
• Easy to learn (BASIC, Pascal, LOGO, Scheme)
• Easy to express things, easy to use once fluent,
"powerful" (C, Common Lisp, APL, Algol-68, Perl)
• Easy to implement (BASIC, Forth)
• Possible to compile to very good (fast/small) code
(Fortran)
• Backing of a powerful sponsor (COBOL, PL/1,
Ada, Visual Basic)
• Wide dissemination at minimal cost (Pascal,
Turing, Java)
9
What makes a successful language?
The following key characteristics:
– Simplicity and readability
– Clarity about binding
– Reliability
– Support
– Abstraction
– Orthogonality
– Efficient implementation
10
Simplicity and Readability
• Small instruction set
– E.g., Java vs Scheme
• Simple syntax
– E.g., C/C++/Java vs Python
• Benefits:
– Ease of learning
– Ease of programming
11
Clarity about Binding
A language element is bound to a property at the time that property is
defined for it. So a binding is the association between an object and a
property of that object
– Examples:
• a variable and its type
• a variable and its value
– Early binding takes place at compile time
– Late binding takes place at run time
12
Reliability
A language is reliable if:
– Program behavior is the same on different
platforms
• E.g., early versions of Fortran
– Type errors are detected
• E.g., C vs Haskell
– Semantic errors are properly trapped
• E.g., C vs C++
– Memory leaks are prevented
• E.g., C vs Java
13
Language Support
• Accessible (public domain)
compilers/interpreters
• Good texts and tutorials
• Wide community of users
• Integrated with development environments
(IDEs)
14
Abstraction in Programming
• Data
– Programmer-defined types/classes
– Class libraries
• Procedural
– Programmer-defined functions
– Standard function libraries
15
Orthogonality
A language is orthogonal if its features are
built upon a small, mutually independent set
of primitive operations.
• Fewer exceptional rules = conceptual
simplicity
– E.g., restricting types of arguments to a function
• Tradeoffs with efficiency
16
Efficient implementation
• Embedded systems
– Real-time responsiveness (e.g., navigation)
– Failures of early Ada implementations
• Web applications
– Responsiveness to users (e.g., Google search)
• Corporate database applications
– Efficient search and updating
• AI applications
– Modeling human behaviors
17
What is a language for?
• Why do we have programming languages?
– way of thinking---way of expressing algorithms
• languages from the user's point of view
– abstraction of virtual machine---way of specifying what you want
the hardware to do without getting down into the bits
• languages from the implementor's point of view
18
Genealogy of common high-level programming languages
19
History
• Early History : The first programmers
• The 1940s: Von Neumann and Zuse
• The 1950s: The First Programming Language
• The 1960s: An Explosion in Programming
languages
• The 1970s: Simplicity, Abstraction, Study
• The 1980s: Consolidation and New Directions
• The 1990s: Internet and the Web
• The 2000s: tbd
20
Early History: The First
Programmer
• Jacquard loom of early 1800s
– Translated card patterns into cloth designs
• Charles Babbage’s analytical engine (1830s &
40s)
Programs were cards with data and operations
• Ada Lovelace – first programmer
“The engine can arrange and combine its
numerical quantities exactly as if they were
letters or any other general symbols; and in
fact might bring out its results in algebraic
notation, were provision made.”
21
The 1940s: Von Neumann and
Zuse
• Konrad Zuse (Plankalkul)
– in Germany - in isolation because of the war
– defined Plankalkul (program calculus) circa 1945 but
never implemented it.
– Wrote algorithms in the language, including a program
to play chess.
– His work finally published in 1972.
– Included some advanced data type features such as
• Floating point, used two's complement and hidden bits
• Arrays
• records (that could be nested)
22
Plankalkul notation
A(7) := 5 * B(6)
| 5 * B => A
V | 6 7 (subscripts)
S | 1.n 1.n (data types)
23
Machine Code (1940's)
• Initial computers were programmed in raw
machine code.
• These were entirely numeric.
• What was wrong with using machine code?
Everything!
• Poor readability
• Poor modifiability
• Expression coding was tedious
• Inherited the deficiencies of the hardware, e.g., no
indexing or floating point numbers
24
Pseudocodes (1949)
• Short Code or SHORTCODE - John Mauchly, 1949.
• Pseudocode interpreter for math problems, on
Eckert and Mauchly's BINAC and later on
UNIVAC I and II.
• Possibly the first attempt at a higher level language.
• Expressions were coded, left to right, e.g.:
X0 = sqrt(abs(Y0))
00 X0 03 20 06 Y0
• Some operations:
01 –       06 abs     1n (n+2)nd power
02 )       07 +       2n (n+2)nd root
03 =       08 pause   4n if <= n
04 /       09 (       58 print & tab
25
More
Pseudocodes
Speedcoding; 1953-4
• A pseudocode interpreter for math on IBM 701, IBM 650.
• Developed by John Backus
• Pseudo ops for arithmetic and math functions
• Conditional and unconditional branching
• Autoincrement registers for array access
• Slow but still dominated by slowness of s/w math
• Interpreter left only 700 words for the user program
Laning and Zierler System - 1953
• Implemented on the MIT Whirlwind computer
• First "algebraic" compiler system
• Subscripted variables, function calls, expression translation
• Never ported to any other machine
26
The 1950s: The First
Programming Language
• Pseudocodes: interpreters for assembly
language like
• Fortran: the first higher level programming
language
• COBOL: the first business oriented language
• Algol: one of the most influential programming
languages ever designed
• LISP: the first language to depart from the
procedural paradigm
• APL: a language for programming mathematical computations
27
Fortran (1954-57)
• FORmula TRANslator
• Developed at IBM under the guidance of John Backus
primarily for scientific programming
• Dramatically changed forever the way computers were used
• Has continued to evolve, adding new features & concepts.
– FORTRAN II, FORTRAN IV, FORTRAN 66, FORTRAN 77, FORTRAN
90
• Always among the most efficient compilers, producing fast
code
• Still popular, e.g. for supercomputers
28
Fortran 0 and 1
FORTRAN 0 – 1954 (not implemented)
FORTRAN I - 1957
Designed for the new IBM 704, which had index registers and
floating point hardware
Environment of development:
• Computers were small and unreliable
• Applications were scientific
• No programming methodology or tools
• Machine efficiency was most important
Impact of environment on design:
• No need for dynamic storage
• Need good array handling and counting loops
• No string handling, decimal arithmetic, or powerful
input/output (commercial stuff)
29
Fortran I Features
• Names could have up to six characters
• Post-test counting loop (DO)
• Formatted I/O
• User-defined subprograms
• Three-way selection statement (arithmetic IF)
IF (ICOUNT-1) 100, 200, 300
• No data typing statements: variables beginning with
i, j, k, l, m or n were integers, all else floating point
• No separate compilation
• Programs larger than 400 lines rarely compiled
correctly, mainly due to the IBM 704's poor reliability
• Code was very fast
• Quickly became widely used
30
Fortran II, IV and 77
FORTRAN II - 1958
• Independent compilation
• Fix the bugs
FORTRAN IV - 1960-62
• Explicit type declarations
• Logical selection (IF) statement
• Subprogram names could be parameters
• ANSI standard in 1966
FORTRAN 77 - 1978
• Character string handling
• Logical loop control (WHILE) statement
• IF-THEN-ELSE statement
31
Fortran 90 (1990)
Added many features of more modern programming
languages, including:
• Pointers
• Recursion
• CASE statement
• Parameter type checking
• A collection of array operations: DOTPRODUCT,
MATMUL, TRANSPOSE, etc.
• Dynamic allocation and deallocation of arrays
• A form of records (called derived types)
• Module facility (similar to Ada's packages)
32
COBOL
• COmmon Business Oriented Language
• Principal mentor: (Rear Admiral Dr.) Grace Murray
Hopper (1906-1992)
• Based on FLOW-MATIC which had such features as:
• Names up to 12 characters, with
embedded hyphens
• English names for arithmetic operators
• Data and code were completely separate
• Verbs were first word in every statement
• CODASYL committee (Conference on Data Systems
Languages) developed a programming language by the
name of COBOL
33
COBOL
First CODASYL Design Meeting - May 1959
Design goals:
• Must look like simple English
• Must be easy to use, even if that means it will be
less powerful
• Must broaden the base of computer users
• Must not be biased by current compiler problems
Design committee members were all from computer manufacturers
and DoD branches
Design problems: arithmetic expressions? subscripts?
Fights among manufacturers
34
COBOL
Contributions:
- First macro facility in a high-level language
- Hierarchical data structures (records)
- Nested selection statements
- Long names (up to 30 characters), with hyphens
- Data Division
Comments:
• First language required by DoD; would have
failed without DoD
• Still the most widely used business applications
language
35
BASIC (1964)
• Beginner's All-purpose Symbolic Instruction Code
• Designed by Kemeny & Kurtz at Dartmouth for the GE
225 with the goals:
• Easy to learn and use for non-science students and as a path to
Fortran and Algol
• Must be "pleasant and friendly"
• Fast turnaround for homework
• Free and private access
• User time is more important than computer time
• Well-suited for implementation on the first PCs, e.g., Gates
and Allen's 4K Basic interpreter for the MITS Altair
personal computer (circa 1975)
• Current popular dialect: Visual BASIC
36
LISP (1959)
• LISt Processing language (Designed at MIT by McCarthy)
• AI research needed a language that:
• Process data in lists (rather than arrays)
• Handles symbolic computation (rather than numeric)
• One universal, recursive data type: the s-expression
• An s-expression is either an atom or a list of zero or more
s-expressions
• Syntax is based on the lambda calculus
• Pioneered functional programming
• No need for variables or assignment
• Control via recursion and conditional expressions
• Status
• Still the dominant language for AI
• COMMON LISP and Scheme are contemporary dialects
• ML, Miranda, and Haskell are related languages
37
Algol
Environment of development:
1. FORTRAN had (barely) arrived for IBM 70x
2. Many other languages were being developed, all for
specific machines
3. No portable language; all were machine-dependent
4. No universal language for communicating
algorithms
ACM and GAMM met for four days for design
Goals of the language:
1. Close to mathematical notation
2. Good for describing algorithms
3. Must be translatable to machine code
38
Algol 58
Features
• Concept of type was formalized
• Names could have any length
• Arrays could have any number of subscripts
• Parameters were separated by mode (in & out)
• Subscripts were placed in brackets
• Compound statements (begin ... end)
• Semicolon as a statement separator
• Assignment operator was :=
• if had an else-if clause
Comments:
•Not meant to be implemented, but variations of it were
(MAD, JOVIAL)
•Although IBM was initially enthusiastic, all support was
dropped by mid-1959
39
Algol 60
Modified ALGOL 58 at 6-day meeting in Paris adding such
new features as:
• Block structure (local scope)
• Two parameter passing methods
• Subprogram recursion
• Stack-dynamic arrays
• Still no I/O and no string handling
Successes:
• The standard way to publish algorithms for over 20
years
• All subsequent imperative languages are based on it
• First machine-independent language
• First language whose syntax was formally defined
(BNF)
40
Algol 60 (1960)
Failure: never widely used, especially in the U.S.,
mostly because:
1. No I/O and the character set made
programs nonportable
2. Too flexible--hard to implement
3. Entrenchment of FORTRAN
4. Formal syntax description
5. Lack of support by IBM
41
APL
• A Programming Language
• Designed by K. Iverson at Harvard in the late
1950's
• A language for programming mathematical
computations
– especially those using matrices
• Functional style and many whole array
operations
• Drawback is requirement of special keyboard
42
The 1960s: An Explosion in
Programming Languages
• The development of hundreds of programming languages
• PL/I designed in 1963-4
– supposed to be all purpose
– combined features of FORTRAN, COBOL and Algol 60 and more!
– translators were slow, huge and unreliable
– some say it was ahead of its time......
• Algol 68
• SNOBOL
• Simula
• BASIC
43
PL/I
• Computing situation in 1964 (IBM's point of view)
Scientific computing
• IBM 1620 and 7090 computers
• FORTRAN
• SHARE user group
Business computing
• IBM 1401, 7080 computers
• COBOL
• GUIDE user group
• IBM’s goal: develop a single computer (IBM 360) and a
single programming language (PL/I) that would be good
for scientific and business applications.
• Eventually grew to include virtually every idea in current
practical programming languages.
44
PL/I
PL/I contributions:
1. First unit-level concurrency
2. First exception handling
3. Switch-selectable recursion
4. First pointer data type
5. First array cross sections
Comments:
• Many new features were poorly designed
• Too large and too complex
• Was (and still is) actually used for both scientific
and business applications
• Subsets (e.g. PL/C) developed which were more
manageable
45
Simula (1962-67)
• Designed and built by Ole-Johan Dahl and Kristen
Nygaard at the Norwegian Computing Centre (NCC) in
Oslo between 1962 and 1967
• Originally designed and implemented as a language for
discrete event simulation
• Based on ALGOL 60
Primary Contributions:
• Coroutines - a kind of subprogram
• Classes (data plus methods) and objects
• Inheritance
• Dynamic binding
=> Introduced the basic ideas that developed into object-
oriented programming.
46
From the continued development of ALGOL 60, but it is not
a superset of that language
• Design is based on the concept of orthogonality
• Contributions:
• User-defined data structures
• Reference types
• Dynamic arrays (called flex arrays)
• Comments:
• Had even less usage than ALGOL 60
• Had strong influence on subsequent languages,
especially Pascal, C, and Ada
Algol 68
47
The 1970s: Simplicity,
Abstraction, Study
• Algol-W - Niklaus Wirth and C. A. R. Hoare
– reaction against 1960s
– simplicity
• Pascal
– small, simple, efficient structures
– for teaching programming
• C - 1972 - Dennis Ritchie
– aims for simplicity by reducing restrictions of the type system
– allows access to underlying system
– interface with O/S - UNIX
48
Pascal (1971)
• Designed by Wirth, who quit the ALGOL 68
committee (didn't like the direction of that
work)
• Designed for teaching structured programming
• Small, simple
• Introduces some modest improvements, such as
the case statement
• Was widely used for teaching programming ~
1980-1995.
49
C (1972-)
• Designed for systems programming at Bell
Labs by Dennis Ritchie and colleagues.
• Evolved primarily from B, but also ALGOL 68
• Powerful set of operators, but poor type
checking
• Initially spread through UNIX and the
availability of high quality, free compilers,
especially gcc.
50
Other descendants of ALGOL
• Modula-2 (mid-1970s by Niklaus Wirth at ETH)
• Pascal plus modules and some low-level
features designed for systems programming
• Modula-3 (late 1980s at Digital & Olivetti)
• Modula-2 plus classes, exception handling,
garbage collection, and concurrency
• Oberon (late 1980s by Wirth at ETH)
• Adds support for OOP to Modula-2
• Many Modula-2 features were deleted (e.g., for
statement, enumeration types, with statement,
non-integer array indices)
51
The 1980s: Consolidation and
New Paradigms
• Ada
– US Department of Defence
– European team led by Jean Ichbiah (Sam Lomonaco was also on
the Ada team)
• Functional programming
– Scheme, ML, Haskell
• Logic programming
– Prolog
• Object-oriented programming
– Smalltalk, C++, Eiffel
52
Ada
• In a study done in 1973-74 it was determined that the
DoD was spending $3B annually on software, over
half on embedded computer systems.
• The Higher Order Language Working Group was
formed and initial language requirements compiled
and refined in 75-76 and existing languages
evaluated.
• In 1977, it was concluded that none were suitable,
though Pascal, ALGOL 68 or PL/I would be a good
starting point.
• Language DoD-1 was developed through a series of
competitive contracts.
53
Ada
• Renamed Ada in May 1979.
• Reference manual, Mil. Std. 1815 approved 10
December 1980. (Ada Byron was born 10/12/1815.)
• “mandated” for use in DoD work during late 80’s and
early 90’s.
• Ada95, a joint ISO and ANSI standard, accepted in
February 1995 and included many new features.
• The Ada Joint Program Office (AJPO) closed 1
October 1998 (the same day ISO/IEC 14882:1998 (C++)
was published!)
54
Ada
Contributions:
1. Packages - support for data abstraction
2. Exception handling - elaborate
3. Generic program units
4. Concurrency - through the tasking model
Comments:
• Competitive design
• Included all that was then known about software
engineering and language design
• First compilers were very difficult; the first really
usable compiler came nearly five years after the
language design was completed
• Very difficult to mandate programming technology
55
Logic Programming: Prolog
• Developed at the University of Aix-Marseille, by
Colmerauer and Roussel, with some help from
Kowalski at the University of Edinburgh
• Based on formal logic
• Non-procedural
• Can be summarized as being an intelligent
database system that uses an inferencing
process to infer the truth of given queries
56
Functional Programming
• Common Lisp: consolidation of LISP dialects
spurred practical use, as did the development of Lisp
Machines.
• Scheme: a simple and pure LISP like language used
for teaching programming.
• Logo: Used for teaching young children how to
program.
• ML: (MetaLanguage) a strongly-typed functional
language first developed by Robin Milner in the 70’s
• Haskell: polymorphically typed, lazy, purely
functional language.
57
Smalltalk (1972-80)
• Developed at Xerox PARC by Alan Kay and
colleagues (esp. Adele Goldberg) inspired by
Simula 67
• The first implementation, in 1972, was written on a bet to
come up with "the most powerful language in the
world" in "a single page of code".
• In 1980, Smalltalk 80, a uniformly object-oriented
programming environment became available as the
first commercial release of the Smalltalk language
• Pioneered the graphical user interface everyone
now uses
• Industrial use continues to the present day
58
C++ (1985)
• Developed at Bell Labs by Stroustrup
• Evolved from C and SIMULA 67
• Facilities for object-oriented programming, taken
partially from SIMULA 67, were added to C
• Also has exception handling
• A large and complex language, in part because it
supports both procedural and OO programming
• Rapidly grew in popularity, along with OOP
• ANSI standard approved in November 1997
59
Eiffel
•Eiffel - a related language that supports OOP
- (Designed by Bertrand Meyer - 1992)
- Not directly derived from any other
language
- Smaller and simpler than C++, but still has
most of the power
60
1990’s: the Internet and Web
During the 90's, object-oriented languages (mostly C++)
became widely used in practical applications
The Internet and Web drove several phenomena:
– Adding concurrency and threads to existing
languages
– Increased use of scripting languages such as Perl
and Tcl/Tk
– Java as a new programming language
61
Java
• Developed at Sun in the early 1990s
with original goal of a language for
embedded computers
• Principals: Bill Joy, James Gosling, Mike
Sheridan, Patrick Naughton
• Original name, Oak, changed for copyright reasons
• Based on C++ but significantly simplified
• Supports only OOP
• Has references, but not pointers
• Includes support for applets and a form of concurrency
(i.e. threads)
62
The future
• In the 60’s, the dream was a single all-purpose
language (e.g., PL/I, Algol)
• The 70s and 80s dream expressed by Winograd
(1979)
“Just as high-level languages allow the programmer to
escape the intricacies of the machine, higher level
programming systems can provide for manipulating
complex systems. We need to shift away from algorithms
and towards the description of the properties of the
packages that we build. Programming systems will be
declarative not imperative”
• Will that dream be realised?
• Programming is not yet obsolete
Syntax and Semantics
• Introduction
• Syntax: the form or structure of the expressions, statements, and
program units
• Semantics: the meaning of the expressions, statements, and program
units
• Syntax and semantics provide a language's definition
– Users of a language definition
– Other language designers
– Implementers
– Programmers (the users of the language)
63
The General Problem of
Describing Syntax
• A sentence is a string of characters over some alphabet
• A language is a set of sentences
• A lexeme is the lowest level syntactic unit of a language (e.g., *, sum, begin)
• A token is a category of lexemes (e.g., identifier)
• Language recognizers
– A recognition device reads input strings of the language and decides whether the
input strings belong to the language
– Example: the syntax analysis part of a compiler
• Language generators
– A device that generates sentences of a language
– One can determine if the syntax of a particular sentence is correct by comparing
it to the structure of the generator
64
Formal Methods of Describing
Syntax
• Backus-Naur Form and Context-Free Grammars
– Most widely known method for describing programming language syntax
• Extended BNF
– Improves readability and writability of BNF
• Grammars and Recognizers
• Context-Free Grammars
– Developed by Noam Chomsky in the mid-1950s
– Language generators, meant to describe the syntax of natural languages
– Define a class of languages called context-free languages
• Backus-Naur Form (BNF)
– Backus-Naur Form (1959)
– Invented by John Backus to describe ALGOL 58
– BNF is equivalent to context-free grammars
– BNF is a metalanguage used to describe another language
– In BNF, abstractions are used to represent classes of syntactic structures;
they act like syntactic variables (also called nonterminal symbols)
65
BNF Fundamentals
• Non-terminals: BNF abstractions
• Terminals: lexemes and tokens
• Grammar: a collection of rules
– Examples of BNF rules:
<ident_list> → identifier | identifier, <ident_list>
<if_stmt> → if <logic_expr> then <stmt>
• BNF Rules
– A rule has a left-hand side (LHS) and a right-hand side (RHS), and consists of
terminal and nonterminal symbols
– A grammar is a finite nonempty set of rules
– An abstraction (or nonterminal symbol) can have more than one RHS
<stmt> → <single_stmt>
       | begin <stmt_list> end
66
• Describing Lists
• Syntactic lists are described using recursion
<ident_list> → ident
             | ident, <ident_list>
• A derivation is a repeated application of rules, starting with the start
symbol and ending with a sentence (all terminal symbols)
• An Example Grammar
<program> → <stmts>
<stmts> → <stmt> | <stmt> ; <stmts>
<stmt> → <var> = <expr>
<var> → a | b | c | d
<expr> → <term> + <term> | <term> - <term>
<term> → <var> | const
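For instance, a leftmost derivation of the sentence a = b + const using this grammar:
<program> => <stmts>
          => <stmt>
          => <var> = <expr>
          => a = <expr>
          => a = <term> + <term>
          => a = <var> + <term>
          => a = b + <term>
          => a = b + const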
67
Parse Tree
• A hierarchical representation of a
derivation
68
Derivation
• Every string of symbols in the derivation is a sentential form
• A sentence is a sentential form that has only terminal symbols
• A leftmost derivation is one in which the leftmost nonterminal in each
sentential form is the one that is expanded
• A derivation may be neither leftmost nor rightmost
69
Ambiguity in Grammars
• A grammar is ambiguous iff it generates a sentential form that has two
or more distinct parse trees
• An Unambiguous Expression Grammar
• If we use the parse tree to indicate precedence levels of the operators, we
cannot have ambiguity
<expr> → <expr> - <term> | <term>
<term> → <term> / const | const
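With this grammar, a sentence such as const - const / const has only one parse tree; the derivation below forces the division lower in the tree than the subtraction, so / takes precedence over -:
<expr> => <expr> - <term>
       => <term> - <term>
       => const - <term>
       => const - <term> / const
       => const - const / const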
70
71
Extended Backus-Naur Form
(EBNF)
• Optional parts are placed in brackets ([ ])
<proc_call> → ident [ ( <expr_list> ) ]
• Alternative parts of RHSs are placed inside parentheses and separated
via vertical bars
<term> → <term> (+ | -) const
• Repetitions (0 or more) are placed inside braces ({ })
<ident> → letter {letter | digit}
72
BNF and EBNF
• BNF
<expr> → <expr> + <term>
       | <expr> - <term>
       | <term>
<term> → <term> * <factor>
       | <term> / <factor>
       | <factor>
• EBNF
<expr> → <term> {(+ | -) <term>}
<term> → <factor> {(* | /) <factor>}
73
Attribute Grammars
• Context-free grammars (CFGs) cannot describe all of the syntax of
programming languages
• Additions to CFGs to carry some semantic info along parse trees
• Primary value of attribute grammars (AGs):
– Static semantics specification
– Compiler design (static semantics checking)
• Definition
• An attribute grammar is a context-free grammar G = (S, N, T, P) with
the following additions:
– For each grammar symbol x there is a set A(x) of attribute values
74
• Each rule has a set of functions that define certain attributes of the
nonterminals in the rule
– Each rule has a (possibly empty) set of predicates to check for attribute
consistency
– Let X0 → X1 ... Xn be a rule
– Functions of the form S(X0) = f(A(X1), ..., A(Xn)) define synthesized
attributes
– Functions of the form I(Xj) = f(A(X0), ..., A(Xn)), for 1 <= j <= n, define
inherited attributes
– Initially, there are intrinsic attributes on the leaves
75
Example
• Syntax
<assign> → <var> = <expr>
<expr> → <var> + <var> | <var>
<var> → A | B | C
• actual_type: synthesized for <var> and <expr>
• expected_type: inherited for <expr>
• Syntax rule: <expr> → <var>[1] + <var>[2]
• Semantic rule: <expr>.actual_type ← <var>[1].actual_type
• Predicate: <var>[1].actual_type == <var>[2].actual_type
             <expr>.expected_type == <expr>.actual_type
• Syntax rule: <var> → id
• Semantic rule: <var>.actual_type ← lookup(<var>.string)
76
How are attribute values
computed?
– If all attributes were inherited, the tree could be decorated in top-down
order.
– If all attributes were synthesized, the tree could be decorated in bottom-up
order.
– In many cases, both kinds of attributes are used, and it is some
combination of top-down and bottom-up that must be used.
<expr>.expected_type ← inherited from parent
<var>[1].actual_type ← lookup(A)
<var>[2].actual_type ← lookup(B)
<var>[1].actual_type =? <var>[2].actual_type
<expr>.actual_type ← <var>[1].actual_type
<expr>.actual_type =? <expr>.expected_type
77
Describing the Meanings of
Programs:Dynamic Semantics
• There is no single widely acceptable notation or formalism for describing semantics
• Operational Semantics
– Describe the meaning of a program by executing its statements on a machine, either
simulated or actual. The change in the state of the machine (memory, registers,
etc.) defines the meaning of the statement
• To use operational semantics for a high-level language, a virtual machine is needed
• A hardware pure interpreter would be too expensive
• A software pure interpreter also has problems:
– The detailed characteristics of the particular computer would make actions
difficult to understand
– Such a semantic definition would be machine-dependent
78
Operational Semantics
and Axiomatic Semantics
• A better alternative: a complete computer simulation
The process:
– Build a translator (translates source code to the machine code of an idealized
computer)
– Build a simulator for the idealized computer
Evaluation of operational semantics:
– Good if used informally (language manuals, etc.)
– Extremely complex if used formally (e.g., VDL, which was used for describing the
semantics of PL/I)
Axiomatic Semantics
– Based on formal logic (predicate calculus)
– Original purpose: formal program verification
– Approach: define axioms or inference rules for each statement type in the language (to
allow transformations of expressions to other expressions)
– The expressions are called assertions
79
Axiomatic Semantics
• An assertion before a statement (a precondition) states the relationships and
constraints among variables that are true at that point in execution
• An assertion following a statement is a postcondition
• A weakest precondition is the least restrictive precondition that will guarantee the
postcondition
• Pre-post form: {P} statement {Q}
• An example: a = b + 1 {a > 1}
• One possible precondition: {b > 10}
• Weakest precondition: {b > 0}
• Program proof process: the postcondition for the whole program is the desired result.
Work back through the program to the first statement. If the precondition on the first
statement is the same as the program spec, the program is correct.
• An axiom for assignment statements (x = E):
{Q(x→E)} x = E {Q}, where Q(x→E) is Q with every occurrence of x replaced by E
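Applying the assignment axiom to the example above: for a = b + 1 {a > 1}, substituting b + 1 for a in the postcondition gives {b + 1 > 1}, i.e., {b > 0}, which is exactly the weakest precondition stated.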
80
• An inference rule for sequences
– For a sequence S1; S2: from {P1} S1 {P2} and {P2} S2 {P3} we can infer
{P1} S1; S2 {P3}
• An inference rule for logical pretest loops
– For the loop construct {P} while B do S end {Q}, an assertion I, the loop
invariant, is needed
• Characteristics of the loop invariant: I must meet the following conditions:
– P => I (the loop invariant must be true initially)
– {I} B {I} (evaluation of the Boolean must not change the validity of I)
– {I and B} S {I} (I is not changed by executing the body of the loop)
– (I and (not B)) => Q (if I is true and B is false, Q is implied)
– The loop terminates (this can be difficult to prove)
• The loop invariant I is a weakened version of the loop postcondition, and it
is also a precondition.
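A small worked example (not from the original slides): for {x <= 10} while x < 10 do x = x + 1 end {x = 10}, take I: x <= 10. Then P => I trivially; {I and B} x = x + 1 {I} holds, since x < 10 implies x + 1 <= 10; (I and (not B)) => Q, since x <= 10 together with x >= 10 gives x = 10; and the loop terminates because x increases toward 10.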
81
• Evaluation of Axiomatic Semantics:
– Developing axioms or inference rules for all of the statements in a
language is difficult
– It is a good tool for correctness proofs, and an excellent framework for
reasoning about programs
– Its usefulness in describing the meaning of a programming language is
limited for language users or compiler writers
82
Denotational Semantics
• Based on recursive function theory
– The most abstract semantics description method
– Originally developed by Scott and Strachey (1970)
– The process of building a denotational spec for a language (not necessarily easy):
– Define a mathematical object for each language entity
– Define a function that maps instances of the language entities onto instances of the
corresponding mathematical objects
– The meaning of language constructs is defined by only the values of the
program's variables
– The difference between denotational and operational semantics: in operational semantics, the
state changes are defined by coded algorithms; in denotational semantics, they are defined by
rigorous mathematical functions
– The state of a program is the values of all its current variables
s = {<i1, v1>, <i2, v2>, …, <in, vn>}
– Let VARMAP be a function that, when given a variable name and a state,
returns the current value of the variable
VARMAP(ij, s) = vj
83
Expressions
• Map expressions onto Z ∪ {error}
• We assume expressions are decimal numbers, variables, or binary expressions having
one arithmetic operator and two operands, each of which can be an expression
• Assignment Statements
– Maps state sets to state sets
• Logical Pretest Loops
– Maps state sets to state sets
• The meaning of the loop is the value of the program variables after the statements in the
loop have been executed the prescribed number of times, assuming there have been no
errors
• In essence, the loop has been converted from iteration to recursion, where the
recursive control is mathematically defined by other recursive state mapping functions
• Recursion, when compared to iteration, is easier to describe with mathematical rigor
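A minimal sketch in Python of these mappings (the names Me, Ma and VARMAP follow the slides; representing a state as a dictionary and expressions as nested tuples is an assumption made here):

# A state s = {<i1, v1>, ..., <in, vn>} is represented as a dict.
def VARMAP(name, state):
    # Return the current value of the variable, or 'error' if unbound.
    return state.get(name, 'error')

def Me(expr, state):
    # Map an expression onto Z ∪ {'error'}: expressions are decimal
    # numbers, variables, or binary (+ or *) expressions, represented
    # as ints, strings, or (op, left, right) tuples.
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return VARMAP(expr, state)
    op, left, right = expr
    l, r = Me(left, state), Me(right, state)
    if l == 'error' or r == 'error':
        return 'error'
    return l + r if op == '+' else l * r

def Ma(var, expr, state):
    # Assignment statements map states to states.
    v = Me(expr, state)
    if v == 'error':
        return 'error'
    return {**state, var: v}

For example, Me(('+', 'x', 2), {'x': 3}) is 5, and Ma('y', ('+', 'x', 2), {'x': 3}) is the new state {'x': 3, 'y': 5}.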
84
• Evaluation of denotational semantics
– Can be used to prove the correctness of programs
– Provides a rigorous way to think about programs
– Can be an aid to language design
– Has been used in compiler generation systems
– Because of its complexity, it is of little use to language users
85
86
LEXICAL ANALYSIS
• ROLE OF THE LEXICAL ANALYZER
– The main function is to read the input and produce the output as
a sequence of tokens that the parser uses for syntax analysis
– On the command "get next token" from the parser, the lexical
analyzer reads input characters until it can identify the next
token
– It also performs user interface tasks
– It also correlates error messages from the compiler with the
source program. The two phases of lexical analysis are:
• Scanning (simple task)
• Lexical analysis (complex task)
87
Tokens Patterns and Lexemes
• Token represents a logically cohesive sequence of characters
• The set of strings is described by a rule called a pattern, associated with
the token
• The character sequence forming a token is called lexeme for the
token
• Tokens are keywords, operators, identifiers, constants and
punctuations
• Pattern is a rule describing the set of lexeme that can represent a
particular token in the program
• Lexeme matched by the pattern for the token represents strings of
characters
88
TOKEN      LEXEME               PATTERN
const      const                const
relation   <, <=, =, >, >=, <>  < or <= or = or > or >= or <>
num        3.14, 6.2            any numeric constant
id         pi, count            letter followed by letters and digits
89
Specification of Patterns for Tokens:
Regular Definitions
• Example:
letter → A | B | … | Z | a | b | … | z
digit → 0 | 1 | … | 9
id → letter ( letter | digit )*
90
• We frequently use the following shorthands:
r+ = rr*
r? = r | ε
[a-z] = a | b | c | … | z
• For example:
digit → [0-9]
num → digit+ (. digit+)? (E (+|-)? digit+)?
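These regular definitions translate directly into, for example, Python regular expressions (a sketch; the exact token set is an assumption):

import re

# id  -> letter ( letter | digit )*
ID = re.compile(r'[A-Za-z][A-Za-z0-9]*')
# num -> digit+ (. digit+)? (E (+|-)? digit+)?
NUM = re.compile(r'[0-9]+(\.[0-9]+)?(E[+-]?[0-9]+)?')

assert ID.fullmatch('count')
assert NUM.fullmatch('3.14') and NUM.fullmatch('6E-2')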
91
Regular Definitions and Grammars
stmt  if expr then stmt
| if expr then stmt else stmt
| 
expr  term relop term
| term
term  id
| num
if  if
then  then
else  else
relop  < | <= | <> | > | >= | =
id  letter ( letter | digit )*
num  digit+
(. digit+
)? ( E (+|-)? digit+
)?
Grammar
Regular definitions
92
Implementing a Scanner Using Transition
Diagrams
[Transition diagrams, reconstructed from the figure. For relop: from start
state 0, the edge < leads to state 1; from state 1, = gives state 2 with
return(relop, LE), > gives state 3 with return(relop, NE), and any other
character gives state 4* with return(relop, LT). The edge = gives state 5
with return(relop, EQ). The edge > leads to state 6; from state 6, = gives
state 7 with return(relop, GE), and any other character gives state 8* with
return(relop, GT). A * marks states where the lookahead character is
retracted. For identifiers: from start state 9, a letter leads to state 10,
which loops on letter or digit; any other character leads to state 11* with
return(gettoken(), install_id()).]
relop → < | <= | <> | > | >= | =
id → letter ( letter | digit )*
93
Transition Graph
• An NFA can be diagrammatically represented
by a labeled directed graph called a transition
graph
[Transition graph, reconstructed from the figure: state 0 is the start state
and loops on both a and b; an a edge leads from 0 to 1, a b edge from 1 to 2,
and a b edge from 2 to the accepting state 3, giving an NFA for (a|b)*abb.]
S = {0, 1, 2, 3}
Σ = {a, b}
s0 = 0
F = {3}
94
Transition Table
• The mapping  of an NFA can be represented
in a transition table
State
Input
a
Input
b
0 {0, 1} {0}
1 {2}
2 {3}
(0,a) = {0,1}
(0,b) = {0}
(1,b) = {2}
(2,b) = {3}
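Simulating this NFA is a matter of tracking the set of states reachable on the input read so far (a Python sketch of the transition table above; this NFA accepts strings over {a, b} ending in abb):

# delta[state][symbol] -> set of next states
delta = {
    0: {'a': {0, 1}, 'b': {0}},
    1: {'b': {2}},
    2: {'b': {3}},
}
ACCEPT = {3}

def nfa_accepts(s):
    states = {0}                                  # start state s0
    for ch in s:
        states = set().union(*(delta.get(q, {}).get(ch, set())
                               for q in states))
    return bool(states & ACCEPT)

assert nfa_accepts('aabb') and not nfa_accepts('ab')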
95
From Regular Expression to NFA (Thompson's
Construction)
[Construction rules, reconstructed from the figure; each case builds an NFA
fragment with a single start state i and a single final state f:
• ε: i --ε--> f
• a: i --a--> f
• r1 | r2: a new start state i with ε-edges into N(r1) and N(r2), and
ε-edges from their final states to a new final state f
• r1 r2: N(r1) followed by N(r2), with N(r1)'s final state merged into
N(r2)'s start state
• r*: new states i and f, with ε-edges from i into N(r) and from N(r) to f,
an ε-edge from i directly to f, and an ε-edge from N(r)'s final state back
to its start state]
96
Combining the NFAs of a Set of Regular
Expressions
a    { action1 }
abb  { action2 }
a*b+ { action3 }
[Figure, reconstructed: each pattern gets its own NFA: 1 --a--> 2 for a;
3 --a--> 4 --b--> 5 --b--> 6 for abb; and 7 --b--> 8 for a*b+, with 7
looping on a and 8 looping on b. A new start state 0 is added with ε-edges
to states 1, 3, and 7, combining the three NFAs into a single NFA.]
97
Simulating the Combined NFA: Example 1
[Figure, reconstructed: the combined NFA from the previous slide is
simulated on the input a a b a. Starting from the ε-closure {0, 1, 3, 7},
reading a yields {2, 4, 7} (state 2 accepts action1), reading the second a
yields {7}, and reading b yields {8}, an accepting state for a*b+. The
final a allows no further move.]
Must find the longest match:
• Continue until no further moves are possible
• When the last state set reached contains an accepting state, execute its
action (here, action3)
Parsing
• Parsing is the process of determining whether the start symbol can derive the
program. If parsing is successful, the program is valid; otherwise the
program is invalid.
• There are generally two types of Parsers:
• Top-Down Parsers:
– In this Parsing technique we expand the start symbol to the whole
program.
– Recursive Descent and LL parsers are the Top-Down parsers.
• Bottom-Up Parsers:
– In this Parsing technique we reduce the whole program to start symbol.
– Operator Precedence Parser, LR(0) Parser, SLR Parser, LALR Parser
and CLR Parser are the Bottom-Up parsers.
98
99
PARSING TECHNIQUES
PARSER
• TOP DOWN PARSER
  – Backtracking or recursive descent parser
  – Predictive parser
• BOTTOM UP PARSER
  – Shift reduce parser
  – Operator precedence parsing
  – LR parser
    • SLR parser
    • LALR parser
    • CLR parser
100
TOP DOWN Vs BOTTOM UP
S.No  TOP DOWN PARSER                                  BOTTOM UP PARSER
1     Parse tree is built from root to leaves          Parse tree is built from leaves to root
2     Simple to implement                              Complex to implement
3     Less efficient; problems that occur with         When a bottom-up parser handles an
      top-down techniques are ambiguity, left          ambiguous grammar, conflicts occur in
      recursion, and left factoring                    the parse table
4     Applicable to a small class of languages         Applicable to a broad class of languages
5     Parsing techniques: i. recursive descent         Parsing techniques: i. shift reduce,
      parser, ii. predictive parser                    ii. operator precedence, iii. LR parser
101
RECURSIVE DESCENT PARSER
• A parser that uses collection of recursive
procedures for parsing the given input string is
called Recursive Descent parser
• The CFG is used to build the recursive routines
• The RHS of the production rule is directly
converted to a program.
• For each NT a separate procedure is written, and
the body of the procedure is the RHS of the
corresponding NT's rule.
RECURSIVE DESCENT PARSER
• It is a kind of Top-Down Parser. A top-down parser builds the parse tree
from the top to down, starting with the start non-terminal. A Predictive
Parser is a special case of Recursive Descent Parser, where no Back
Tracking is required.
• By carefully writing the grammar, that is, eliminating left recursion and
left factoring from it, the resulting grammar can be
parsed by a recursive descent parser.
102
103
Basic steps of construction of RD Parser
• The RHS of the rule is directly converted into program code
symbol by symbol
1. If the input symbol is NT then a call to the procedure
corresponding the non-terminal is made.
2. If the input is terminal then it is matched with the
lookahead from input. The lookahead pointer has to be
advanced on matching of the input symbol
3. If the production rule has many alternates, then all these
alternates have to be combined into a single
procedure body.
4. The parser should be activated by a procedure
corresponding to the start symbol.
104
Example
A  aBe | cBd | C
B  bB | 
C  f
proc A {
  case of the current token {
    a: - match the current token with a, and move to the next token;
       - call B;
       - match the current token with e, and move to the next token;
    c: - match the current token with c, and move to the next token;
       - call B;
       - match the current token with d, and move to the next token;
    f: - call C
  }
}
proc B {
  case of the current token {
    b: - match the current token with b, and move to the next token;
       - call B
    ε: - do nothing
  }
}
proc C {
  - match the current token with f, and move to the next token;
}
• Take the grammar for a straightforward
arithmetic language, for instance:
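The grammar itself did not survive in these notes; a plausible reconstruction matching the discussion below (an assumption, not the original slide) is:
expression → term { (+ | -) term }
term → factor { (* | /) factor }
factor → number | ( expression )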
105
Parsing:
• The fundamental principle of recursive descent is to write a collection of
recursive functions, one for each nonterminal symbol in the grammar.
Each function is assigned to a grammar rule and must parse a series of
symbols that matches that rule.
• The expression function, which is invoked with the input string, is where
the recursive descent parser begins. Depending on whether the symbol is a
number or an opening parenthesis, the function analyses the first symbol of
the input and chooses which alternative of the term rule to apply.
• The factor function is used to parse the symbol's value if it is a number.
The expression function is used recursively to parse the expression inside
the parentheses if the symbol is an opening parenthesis. The term function
is invoked recursively to parse any subsequent multiplication or division
signs and factors after the factor or expression function has returned.
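A minimal recursive-descent parser in Python for the reconstructed grammar above (a sketch; tokenization is simplified to single-character symbols and one-digit numbers):

def parse(src):
    toks = list(src.replace(' ', '')) + ['$']   # '$' marks end of input
    pos = 0

    def peek():
        return toks[pos]

    def match(t):
        nonlocal pos
        if toks[pos] != t:
            raise SyntaxError(f'expected {t!r}, got {toks[pos]!r}')
        pos += 1

    def expression():          # expression -> term { (+|-) term }
        v = term()
        while peek() in '+-':
            op = peek(); match(op)
            v = v + term() if op == '+' else v - term()
        return v

    def term():                # term -> factor { (*|/) factor }
        v = factor()
        while peek() in '*/':
            op = peek(); match(op)
            v = v * factor() if op == '*' else v / factor()
        return v

    def factor():              # factor -> number | ( expression )
        if peek() == '(':
            match('('); v = expression(); match(')')
            return v
        if peek().isdigit():
            v = int(peek()); match(peek())
            return v
        raise SyntaxError(f'unexpected {peek()!r}')

    v = expression()
    match('$')                 # the whole input must be consumed
    return v

assert parse('2 + 3 * (4 - 1)') == 11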
106
107
• The parser first calls the expression function with the supplied string.
• When "2" is the first input symbol, the expression function calls the term
function (and, through it, the factor function), which consumes the "2" and
returns.
• The expression function then reads the next symbol, a plus sign, and calls
the term function again on the remaining input.
108
Recursive descent parsing has
the following benefits:
• Ease of use: because recursive descent parsing closely mimics the
grammar rules of the language being parsed, it is simple to comprehend
and use.
• Readability: the parsing code is usually set up in a structured and modular
way, which makes it easier to read and maintain.
• Error reporting: recursive descent parsers can produce descriptive error
messages, which makes it simpler to find and detect syntax mistakes in
the input.
• Predictability: the predictable behavior of recursive descent parsers makes
the parsing process deterministic and clear.
109
Recursive descent parsing,
however, also has certain
drawbacks:
• Recursive descent parsers encounter difficulties with left-recursive
grammar rules, since these can result in unbounded recursion. Care must
be taken to avoid left recursion, or methods like memoization must be
employed to handle it effectively.
• Recursive descent parsers rely on backtracking when internal alternatives
to a grammar rule are unsuccessful. This could result in inefficiencies,
especially if the grammar contains a lot of ambiguity or options.
• Recursive descent parsers frequently adhere to the LL(1) constraint, which
requires that they only use one token of lookahead. The grammar's
expressiveness is constrained by this restriction because it is unable to
handle some ambiguous or context-sensitive languages.
110
An outline of the Recursive
Descent Parsing algorithm is
provided below:
• Grammar: The first step in parsing a language is to define its grammar. A
set of production rules that outline the language's syntactic structure makes
up the grammar. Each rule is made up of a series of terminal and
nonterminal symbols on the right side and a nonterminal symbol on the left
side.
• Create parsing functions: For each nonterminal symbol in the grammar,
create a parsing function. The task of identifying and parsing the linguistic
expressions corresponding to each nonterminal symbol will fall to each
function.
• Input tokens read: Read the input tokens that came from the tokenizer or
lexical analyzer. The IDs, keywords, operators, and other components of
the input language are represented by these tokens.
111
• Implement parsing functions: Recursively implement each parsing
function. These steps should be followed by each function:
– Verify if the current token matches the nonterminal's anticipated symbol.
– If the nonterminal has numerous production rules, handle each alternative
using an if-else or switch statement. Each possibility ought to be represented
by a different function call or block of code.
– Recursively invoke the parsing routines for each alternative's matching
nonterminals in the rule. The parsing procedure will continue until all of the
input has been processed thanks to this recursive call.
– Take care of any additional nonterminal-specific logic, such as parse tree
construction or semantic actions.
• Start parsing: Launch the parsing operation by invoking the parsing
function that corresponds to the grammar's start symbol. The recursive
descent parsing procedure will get started with this function.
• Handle errors: implement error-handling procedures to handle unusual
input or report syntax mistakes. Give the user clear error messages when
one happens so they may comprehend and fix the issue.
112
Bottom up parsing
• Bottom-up parsers / shift reduce parsers build the parse tree from leaves
to root. Bottom-up parsing can be defined as an attempt to reduce the input
string w to the start symbol of the grammar by tracing out the rightmost
derivation of w in reverse.
113
• A general shift reduce parsing is LR parsing. The L stands for scanning the
input from left to right and R stands for constructing a rightmost derivation
in reverse.
114
115
Predictive Parsing - LL(1) Parser
• This top-down parsing algorithm is non-recursive.
• In this type a parsing table is built.
• LL(1):
– uses only one input symbol to predict the parsing
process
– leftmost derivation
– input scanned from left to right
116
• The data structures used by LL(1) are:
– Input buffer (stores the input tokens)
– Stack (holds the left sentential form)
– Parsing table (rows of non-terminals, columns of terminals)
[Figure: the LL(1) parser reads the input tokens, uses the stack and the
parsing table, and produces the output.]
117
LL(1) Parser
input buffer
– our string to be parsed. We will assume that its end is marked with a special symbol $.
output
– a production rule representing a step of the derivation sequence (left-most derivation) of the string
in the input buffer.
stack
– contains the grammar symbols
– at the bottom of the stack, there is a special end marker symbol $.
– initially the stack contains only the symbol $ and the starting symbol S ($S is the initial stack)
– when the stack is emptied (i.e. only $ is left in the stack), the parsing is completed.
parsing table
– a two-dimensional array M[A,a]
– each row is a non-terminal symbol
– each column is a terminal symbol or the special symbol $
– each entry holds a production rule.
118
LL(1) Parser – Parser Actions
• The symbol at the top of the stack (say X) and the current symbol in the input
string (say a) determine the parser action.
• There are four possible parser actions.
1. If X and a are $ → parser halts (successful completion)
2. If X and a are the same terminal symbol (different from $)
→ parser pops X from the stack and moves to the next symbol in the input buffer.
3. If X is a non-terminal
→ parser looks at the parsing table entry M[X,a]. If M[X,a] holds a production
rule X → Y1Y2...Yk, it pops X from the stack and pushes Yk, Yk-1, ..., Y1 onto the
stack. The parser also outputs the production rule X → Y1Y2...Yk to represent a
step of the derivation.
4. None of the above → error
– all empty entries in the parsing table are errors.
– If X is a terminal symbol different from a, this is also an error case.
119
• The construction of predictive LL(1) parser is
based on two very important functions and those
are FIRST and FOLLOW.
• For the construction
1. Computation of FIRST and FOLLOW function
2. Construction the predictive parsing table using
FIRST and FOLLOW functions
3. Parse the input string with the help of predictive
parsing table
120
FIRST function
• FIRST(α) is the set of terminal symbols that appear first in strings
derived from α.
• If α derives ε, then ε is also in FIRST(α)
• The following rules are used to compute the FIRST function:
– 1. For a terminal symbol a, FIRST(a) = {a}
– 2. If there is a rule X → ε, then ε is in FIRST(X)
– 3. For a rule A → X1X2X3…Xk, FIRST(A) contains FIRST(X1) without ε;
if ε is in FIRST(X1), …, FIRST(Xj-1), it also contains FIRST(Xj) without ε;
and if ε is in FIRST(Xi) for every i, then ε is in FIRST(A)
121
FOLLOW function
• FOLLOW(A) is defined as the set of terminal symbols that can appear
immediately to the right of A in some sentential form.
• FOLLOW(A) = { a | S ⇒* αAaβ, where α and β are some strings of grammar
symbols, terminal or non-terminal }
• The rules for computing the FOLLOW function are given below:
1. For the start symbol S, place $ in FOLLOW(S)
2. If there is a production A → αBβ, then everything in FIRST(β)
without ε is placed in FOLLOW(B)
3. If there is a production A → αB, or A → αBβ where ε is in FIRST(β),
then everything in FOLLOW(A) is in FOLLOW(B)
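A compact Python sketch of the FIRST computation for the grammar of the next slide (the grammar encoding is an assumption; FOLLOW is computed analogously from rules 1-3):

# E -> T E'; E' -> + T E' | eps; T -> F T'; T' -> * F T' | eps; F -> ( E ) | id
GRAMMAR = {
    'E':  [['T', "E'"]],
    "E'": [['+', 'T', "E'"], []],      # [] is the eps production
    'T':  [['F', "T'"]],
    "T'": [['*', 'F', "T'"], []],
    'F':  [['(', 'E', ')'], ['id']],
}

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                     # iterate to a fixed point
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                add, nullable = set(), True
                for sym in prod:       # rule 3: scan while nullable
                    f = first.get(sym, {sym})   # terminal: FIRST(a) = {a}
                    add |= f - {'eps'}
                    if 'eps' not in f:
                        nullable = False
                        break
                if nullable:           # eps production, or all Xi nullable
                    add.add('eps')
                if not add <= first[nt]:
                    first[nt] |= add
                    changed = True
    return first

# first_sets(GRAMMAR)['F'] == {'(', 'id'}; first_sets(GRAMMAR)["E'"] == {'+', 'eps'}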
122
FIRST AND FOLLOW EXAMPLE
E → TE'; E' → +TE' | ε; T → FT';
T' → *FT' | ε; F → (E) | id
• FIRST(E) = FIRST(T) = FIRST(F)
• Here, F → (E) and F → id
• So, FIRST(F) = {(, id}, and hence FIRST(E) = FIRST(T) = {(, id}
• FIRST(E') = {+, ε}, since E' → +TE' | ε
• FIRST(T') = {*, ε}, since T' → *FT' | ε
123
• FOLLOW(E)
• For F→(E)
– As there is F → (E), the symbol ) appears immediately after E, so ) will be in
FOLLOW(E)
– By rule A → αBβ, we can map this with F → (E); then FOLLOW(E)
contains FIRST( ) ) = { ) }
• Since E is the start symbol, $ will be in FOLLOW(E)
– Hence, FOLLOW(E) = { ), $ }
• FOLLOW(E’)
• For E →TE’ By rule A→αBβ, we can map this with E →TE’ then
FOLLOW(E) is in FOLLOW(E’)
– FOLLOW(E’)={),$}
• For E’→+TE’ FOLLOW(E’) is in FOLLOW(E’)
– FOLLOW(E’)={),$}
• FOLLOW(T)
• For E →TE’
– By rule A → αBβ, FOLLOW(B) = {FIRST(β) – ε}, so FOLLOW(T) =
{FIRST(E’)-ε} = {+}
• For E’→ +TE’
– By rule A → αBβ, FOLLOW(T)=FOLLOW(E’). so, FOLLOW(T)={),$}
– Hence FOLLOW(T) = {+, ), $}
124
• FOLLOW(T’)
– For T →FT’
• By A→αBβ, then FOLLOW(T’) = FOLLOW(T) = {+,),$}
– For T → *FT’
• By A→αBβ, then FOLLOW(T’) = FOLLOW(T) = {+,),$}
• Hence FOLLOW(T’)={+,),$}
• FOLLOW(F)
– For T →FT’
• By A→αBβ, then FOLLOW(F)={FIRST(T’) – ε}
• FOLLOW(F) = {*}
– For T → *FT’
• By A→αBβ, then FOLLOW(F)=FOLLOW(T’) = {+,),$}
• Hence, FOLLOW(F) = {+, * , ) , $}
125
Predictive parsing table construction
• For the rule A →α of grammar G
1. For each a in FIRST(α) create M[A,a] = A →α
where a is a terminal symbol
2. If ε is in FIRST(α), create the entry M[A,b] = A → α
for each symbol b in FOLLOW(A)
3. If ε is in FIRST(α) and $ is in FOLLOW(A) then
create entry in the table M[A,$] = A →α
4. All the remaining entries in the table M are
marked as ERROR
126
PARSING TABLE
        id        +         *         (         )        $
E       E→TE'                         E→TE'
E'                E'→+TE'                       E'→ε     E'→ε
T       T→FT'                         T→FT'
T'                T'→ε      T'→*FT'             T'→ε     T'→ε
F       F→id                          F→(E)
Let's parse the input string id+id*id using the above table. In the initial
configuration the stack contains the start symbol E, and the input string is
placed in the input buffer, terminated by $.
127
Stack      Input        Action
$E         id+id*id$
$E'T       id+id*id$    E → TE'
$E'T'F     id+id*id$    T → FT'
$E'T'id    id+id*id$    F → id
$E'T'      +id*id$      match id
$E'        +id*id$      T' → ε
$E'T+      +id*id$      E' → +TE'
$E'T       id*id$       match +
$E'T'F     id*id$       T → FT'
$E'T'id    id*id$       F → id
$E'T'      *id$         match id
$E'T'F*    *id$         T' → *FT'
$E'T'F     id$          match *
$E'T'id    id$          F → id
$E'T'      $            match id
$E'        $            T' → ε
$          $            E' → ε; accept
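The same parse can be driven by a few lines of Python (a sketch; TABLE encodes the parsing table above, and the input is pre-tokenized):

TABLE = {
    ('E', 'id'): ['T', "E'"],      ('E', '('): ['T', "E'"],
    ("E'", '+'): ['+', 'T', "E'"], ("E'", ')'): [], ("E'", '$'): [],
    ('T', 'id'): ['F', "T'"],      ('T', '('): ['F', "T'"],
    ("T'", '+'): [], ("T'", '*'): ['*', 'F', "T'"],
    ("T'", ')'): [], ("T'", '$'): [],
    ('F', 'id'): ['id'],           ('F', '('): ['(', 'E', ')'],
}
NONTERMINALS = {'E', "E'", 'T', "T'", 'F'}

def ll1_parse(tokens):
    tokens = tokens + ['$']
    stack = ['$', 'E']                 # $S is the initial stack
    i = 0
    while True:
        top, a = stack.pop(), tokens[i]
        if top == '$' and a == '$':
            return True                # successful completion
        if top not in NONTERMINALS:    # terminal on top: must match input
            if top != a:
                raise SyntaxError(f'expected {top}, got {a}')
            i += 1
        else:                          # non-terminal: consult M[X, a]
            rhs = TABLE.get((top, a))
            if rhs is None:
                raise SyntaxError(f'no rule for ({top}, {a})')
            stack.extend(reversed(rhs))  # push Yk, ..., Y1

assert ll1_parse(['id', '+', 'id', '*', 'id'])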
128
BOTTOM UP PARSING
• The input string is taken first, and we try to reduce
this string with the help of grammar and try to
obtain the start symbol
• The process of parsing halts successfully as soon
as we reach the start symbol
• Handle pruning
– A handle is a substring that can be reduced to an appropriate
non-terminal
– A handle is a substring that matches the right side of a production
and that we can reduce
– In other words, handle pruning is the process of detecting handles
and using them in reductions
129
HANDLE PRUNING
• Consider the grammar E→E+E; E→id
• RMD for the string id+id+id:
E => E+E
  => E+E+E
  => E+E+id
  => E+id+id
  => id+id+id
The substrings reduced at each step (shown in bold on the original slide) are the handles.
130
SHIFT REDUCE PARSER
• It attempts to construct parse tree from leaves
to root.
• It requires the following data structures
– The input buffer storing the input string
– A stack for storing and accessing the LHS and
RHS of rules
[Figure: the input buffer holds the input string followed by $ (w$); the
stack holds $ at the bottom, growing toward $S at acceptance.]
131
PARSING OPERATIONS
• SHIFT
– Moving of the symbols from input buffer onto the stack
• REDUCE
– If the handles present in the top of the stack then reduction
of it by appropriate rule. RHS is popped and LHS is pushed
• ACCEPT
– If the stack contains start symbol only and input buffer is
empty at the same time that action is called accept
• ERROR
– A situation in which parser cannot either shift or reduce the
symbols
132
• Two rules followed
– If the incoming operator has more priority than in
stack operator then perform SHIFT
– If in stack operator has same or less priority than
the priority of incoming operators then perform
REDUCE
Viable prefixes: the prefixes of right sentential forms that can appear
on the stack of a shift/reduce parser are called viable prefixes. It is always possible
to add terminals to the end of a viable prefix to obtain a right sentential form.
Consider the grammar E→ E-E; E → E*E; E → id. Perform shift-
reduce parsing of the input string id-id*id
133
STACK INPUT BUFFER PARSING ACTION
$ id-id*id$ Shift
$id -id*id$ Reduce by E→ id
$E -id*id$ Shift
$E- id*id$ Shift
$E-id *id$ Reduce by E→ id
$E-E *id$ Shift
$E-E* id$ Shift
$E-E*id $ Reduce E→ id
$E-E*E $ Reduce E→ E*E
$E-E $ Reduce E→ E-E
$E $ Accept
134
OPERATOR PRECEDENCE
PARSER
• A grammar G is said to be an operator precedence
grammar if it possesses the following properties:
– No production rule on the right side is ε
– There should not be any production rule possessing
two adjacent non-terminals at the right hand side
• Parsing method
– Construct OPP relations(table)
– Identify the handles
– Implementation using stack
135
• Advantage of OPP
– Simple to implement
• Disadvantages of OPP
– The minus operator has two different
precedences (unary and binary); hence it is hard to
handle tokens like the minus sign
– This can be applicable to only small class of
grammars
• Application
– The operator precedence parsing is done in a
language having operators.
136
LR Parsers
• The most powerful shift-reduce parsing (yet efficient) is:
LR(k) parsing.
L: left-to-right scanning; R: rightmost derivation (in reverse);
k: lookahead symbols (when k is omitted, it is 1)
• LR parsing is attractive because:
– LR parsing is most general non-backtracking shift-reduce parsing, yet it is
still efficient.
– The class of grammars that can be parsed using LR methods is a proper
superset of the class of grammars that can be parsed with predictive parsers.
LL(1) grammars ⊂ LR(1) grammars
– An LR parser can detect a syntactic error as soon as it is possible to do so
on a left-to-right scan of the input.
137
LR Parsers
• LR-Parsers
– covers wide range of grammars.
– SLR – simple LR parser
– LR – most general LR parser
– LALR – intermediate LR parser (lookahead LR
parser)
– SLR, LR and LALR work the same (they use the
same algorithm); only their parsing tables are
different.
138
LR Parsing Algorithm
[Figure: model of an LR parser. The stack holds alternating states and
grammar symbols, S0 X1 S1 ... Xm-1 Sm-1 Xm Sm, with state Sm on top. The
input is a1 ... ai ... an $. The LR parsing algorithm (the driver) consults
an Action table, indexed by state and terminal (or $), each entry holding
one of four actions, and a Goto table, indexed by state and non-terminal,
each entry holding a state; it produces the output.]
139
Parsing method
• Initialize the stack with start symbol and invokes
scanner to get next token
• It determines Sj the state currently on the top of the
stack and ai the current input symbol
• It consults the parsing table for action[Sj, ai], which
can have one of four values:
– si means shift to state i
– rj means reduce by rule j
– accept means successful parsing is done
– error indicates a syntactical error
140
Simple LR parsing (SLR) definitions
• LR(0) items
– The LR(0) item for grammar G is production rule in which symbol . Is
inserted at some position in RHS of the rule.
• Example
S→.ABC
S→A.BC
S→AB.C
S→ABC.
• Augmented grammar
– If a grammar G is having start symbol S then augmented grammar is a
new grammar G’ in which S’ is a new start symbol such that S’→S
– The purpose of this grammar is to indicate acceptance of the input: when
the parser is about to reduce by S'→S, it reaches the acceptance state
141
• Kernel items
– It is a collection of items S’→.S and all the items whose dots are not at
the leftmost end of RHS of the rule
– Non-kernel items
• The collection of all the items in which the . is at the left end of the RHS
of the rule
• Functions
– Closure
– Goto
– These are two important functions required to create collection of
canonical set of items
• Viable prefix-
– set of prefixes in the right sentential form of production A→α. This set
can appear on the stack during shift/reduce action
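As a quick worked example (added for clarity): in the item set I0 constructed below for the expression grammar, E’→.E is the kernel item, while the closure-added items E→.E+T, E→.T, T→.T*F, T→.F, F→.(E) and F→.id are the non-kernel items.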
142
Closure operation
• For a CFG G, if I is a set of items, then the function closure(I) can be constructed using the following rules
– Initially, every item in I is added to closure(I)
– If A → α.Bβ is a rule in closure(I) and B → γ is a production of G,
then add the item B → .γ:
– Closure(I) :
• A → α.Bβ
• B → .γ
143
• This rule is applied until no more new items can
be added to closure(I).
• The meaning of the rule A → α.Bβ is that
during derivation of the input string, at
some point we may require strings
derivable from B as input.
• A non-terminal immediately to the right of the '.'
indicates that it has to be expanded shortly
144
Goto operation
• If there is an item A → α.Bβ then
goto(A → α.Bβ, B) = A → αB.β
• This simply means shifting the '.' one
position ahead over the grammar
symbol (T or NT)
• If the item A → α.Bβ is in I, then the same
goto function can be written as goto(I,B)
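A compact Python sketch (illustrative; the item representation is an assumption) of closure and goto over LR(0) items, using the expression grammar of the next slide. An item A → α.β is stored as (A, rhs, dot):

GRAMMAR = {
    "E'": [('E',)],
    'E': [('E', '+', 'T'), ('T',)],
    'T': [('T', '*', 'F'), ('F',)],
    'F': [('(', 'E', ')'), ('id',)],
}

def closure(items):
    items = set(items)
    changed = True
    while changed:                            # until no new item can be added
        changed = False
        for lhs, rhs, dot in list(items):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:   # '.' before a NT B
                for gamma in GRAMMAR[rhs[dot]]:
                    item = (rhs[dot], gamma, 0)          # add B -> .gamma
                    if item not in items:
                        items.add(item)
                        changed = True
    return frozenset(items)

def goto(items, X):
    # shift the '.' one position ahead over the grammar symbol X (T or NT)
    return closure({(l, r, d + 1) for (l, r, d) in items
                    if d < len(r) and r[d] == X})

I0 = closure({("E'", ('E',), 0)})             # matches I0 on the next slide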
145
• Construct the SLR(1) parsing table for
1 E→E+T
2 E →T
3 T →T*F
4 T →F
5 F →(E)
6 F →id
146
I0:
E’→.E
E →.E+T
E →.T
T →.T*F
T →.F
F →.(E)
F →.id
Goto(I0,E)
I1: E’→E.
E → E.+T
Goto(I0,T)
I2: E →T.
T →T.*F
Goto(I0,F)
I3: T →F.
Goto(I0,()
I4: F →(.E)
E →.E+T
E →.T
T →.T*F
T →.F
F →.(E)
F →.id
Goto(I0, id)
I5: F →id.
Goto(I2, *)
I7: T →T*.F
F →.(E)
F →.id
Goto(I4, E)
I8: F →(E.)
E →E.+T
Goto(I6, T)
I9: E →E+T.
T →T.*F
Goto(I7, F)
I10: T →T*F.
Goto(I8, ))
I11: F →(E).
Goto(I1, +)
I6: E →E+.T
T →.T*F
T →.F
F →.(E)
F →.id
147
• FOLLOW(E’) = {$}
• FOLLOW(E) = {+,),$}
• FOLLOW(T) = {+,*,),$}
• FOLLOW(F) = {+,*,),$}
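These FOLLOW sets determine where the reduce actions are placed in the SLR(1) table below: a reduce action for a rule A→α goes in row i under exactly the terminals in FOLLOW(A). For example, r2 (reduce by E→T) appears in state 2 under +, ) and $, which is precisely FOLLOW(E).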
148
state id + * ( ) $ E T F
0 s5 s4 1 2 3
1 s6 acc
2 r2 s7 r2 r2
3 r4 r4 r4 r4
4 s5 s4 8 2 3
5 r6 r6 r6 r6
6 s5 s4 9 3
7 s5 s4 10
8 s6 s11
9 r1 s7 r1 r1
10 r3 r3 r3 r3
11 r5 r5 r5 r5
Action Table Goto Table
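As a usage sketch (an assumed encoding, matching the lr_parse driver given earlier), the table above can be written out as Python dictionaries and used to parse id*id+id:

def s(i): return ('shift', i)
def r(j): return ('reduce', j)

RULES = {1: ('E', 3), 2: ('E', 1), 3: ('T', 3),   # rule j -> (LHS, |RHS|)
         4: ('T', 1), 5: ('F', 3), 6: ('F', 1)}

ACTION = {
    (0, 'id'): s(5), (0, '('): s(4),
    (1, '+'): s(6), (1, '$'): ('accept', None),
    (2, '+'): r(2), (2, '*'): s(7), (2, ')'): r(2), (2, '$'): r(2),
    (3, '+'): r(4), (3, '*'): r(4), (3, ')'): r(4), (3, '$'): r(4),
    (4, 'id'): s(5), (4, '('): s(4),
    (5, '+'): r(6), (5, '*'): r(6), (5, ')'): r(6), (5, '$'): r(6),
    (6, 'id'): s(5), (6, '('): s(4),
    (7, 'id'): s(5), (7, '('): s(4),
    (8, '+'): s(6), (8, ')'): s(11),
    (9, '+'): r(1), (9, '*'): s(7), (9, ')'): r(1), (9, '$'): r(1),
    (10, '+'): r(3), (10, '*'): r(3), (10, ')'): r(3), (10, '$'): r(3),
    (11, '+'): r(5), (11, '*'): r(5), (11, ')'): r(5), (11, '$'): r(5),
}
GOTO = {(0, 'E'): 1, (0, 'T'): 2, (0, 'F'): 3,
        (4, 'E'): 8, (4, 'T'): 2, (4, 'F'): 3,
        (6, 'T'): 9, (6, 'F'): 3, (7, 'F'): 10}

lr_parse(['id', '*', 'id', '+', 'id'], ACTION, GOTO, RULES)

This call reproduces the reduce sequence of the trace on the next slide.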
149
STACK INPUT BUFFER ACTION TABLE GOTO TABLE PARSING ACTION
$0 id*id+id$ [0,id]=s5 Shift
$0id5 *id+id$ [5,*]=r6 [0,F]=3 Reduce by F→id
$0F3 *id+id$ [3,*]=r4 [0,T]=2 Reduce by T→F
$0T2 *id+id$ [2,*]=s7 Shift
$0T2*7 id+id$ [7,id]=s5 Shift
$0T2*7id5 +id$ [5,+]=r6 [7,F]=10 Reduce by F→id
$0T2*7F10 +id$ [10,+]=r3 [0,T]=2 Reduce by T→T*F
$0T2 +id$ [2,+]=r2 [0,E]=1 Reduce by E→T
$0E1 +id$ [1,+]=s6 Shift
$0E1+6 id$ [6,id]=s5 Shift
$0E1+6id5 $ [5,$]=r6 [6,F]=3 Reduce by F→id
$0E1+6F3 $ [3,$]=r4 [6,T]=9 Reduce by T→F
$0E1+6T9 $ [9,$]=r1 [0,E]=1 Reduce by E→E+T
$0E1 $ [1,$]=acc Accept
150
CLR PARSING or LR(1)
PARSING
• Construction of the canonical set of items along with lookaheads
• For the grammar G, initially add [S’→.S, $] to the set of items C
• For each set of items Ii in C and for each grammar symbol X (T or
NT), add goto(Ii,X) to C. This process is repeated by applying
goto(Ii,X) for each X in Ii such that goto(Ii,X) is not empty and not
already in C. Sets of items are constructed until no more
can be added to C
• The goto function can be computed as: for each item
[A→α.Xβ, a] in I, add [A→αX.β, a] to goto(I,X); the result is
closed under the LR(1) closure, which for each item [A→α.Bβ, a]
and production B→γ adds [B→.γ, b] for every terminal b in FIRST(βa)
• This process is repeated until no more sets of items can be added to
the collection C
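The LR(1) closure can be sketched in Python as follows (an assumed implementation; FIRST is taken as precomputed, with FIRST(a) = {a} for a terminal a, and epsilon written as 'eps'). An item [A → α.Bβ, a] is stored as (A, rhs, dot, a):

def closure_lr1(items, grammar, first):
    items = set(items)
    work = list(items)
    while work:
        lhs, rhs, dot, la = work.pop()
        if dot < len(rhs) and rhs[dot] in grammar:   # '.' before a NT B
            beta = rhs[dot + 1:]
            lookaheads = set()                       # compute FIRST(beta a)
            for sym in beta:
                lookaheads |= first[sym] - {'eps'}
                if 'eps' not in first[sym]:
                    break
            else:                                    # beta derives epsilon
                lookaheads.add(la)
            for gamma in grammar[rhs[dot]]:
                for b in lookaheads:
                    item = (rhs[dot], gamma, 0, b)   # add [B -> .gamma, b]
                    if item not in items:
                        items.add(item)
                        work.append(item)
    return frozenset(items)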
151
CONSTRUCTION OF CLR PARSING TABLE
• Construct the set of items C={I0,I1,I2,...In}, where C is the collection of
sets of LR(1) items for the input grammar G’.
• The parsing actions are based on each item set Ii.
– If [A→α.aβ, b] is in Ii and goto(Ii, a)=Ij, then create an entry in the action
table action[i,a]=shift j.
– If there is an item [A→α., a] in Ii, then in the action table
action[i,a]=reduce by A→α. Here A should not be S’.
– If the item [S’→S., $] is in Ii, then action[i,$]=accept.
• The goto part of the LR table can be filled as: the goto transitions for
state i are considered for NT only. If goto(Ii,A)=Ij, then
goto[i,A]=j
• All other entries are defined as ERROR
152
EXAMPLES
• Construct CLR for the grammar
E→E+T/T
T →T*F/F
F →(E)/id.
• FIRST(E)={(,id} FOLLOW(E) = {+,),$}
• FIRST(T)={(,id} FOLLOW(T) = {+,*,),$}
• FIRST(F)={(,id} FOLLOW(F) = {+,*,),$}
153
• Augmented grammar
E’ → E
E →E+T
E →T
T →T*F
T →F
F →(E)
F→id
•LR(0) items
E’ →. E
E →.E+T
E →.T
T →.T*F
T →.F
F →.(E)
F→.id
•LR(1) items
E’ →.E, $
E →.E+T, $/+
E →.T, $/+
T →.T*F, $/+/*
T →.F, $/+/*
F →.(E), $/+/*
F→.id, $/+/*
154
Goto(I0, E)
I1 : E’→ E. , $
E →E.+T, $/+
Goto(I0,T)
I2: E →T.,$/+
T →T.*F,$/+/*
Goto(I0,F)
I3: T →F., $/+/*
Goto(I0,( )
I4: F →(.E), $/+/*
E →.E+T, )/+
E →.T, )/+
T →.T*F, )/+/*
T →.F, )/+/*
F →.(E), )/+/*
F →.id, )/+/*
Goto(I0, id)
I5: F →id. , $/+/*
Goto(I1,+)
I6: E →E+.T, $/+
T →.T*F, $/+/*
T →.F, $/+/*
F →.(E), $/+/*
F →.id, $/+/*
155
STATE + * ( ) id $ E T F
0 S4 S5 1 2 3
1 S6 ACC
2 R2 S7 R2
3 R4 R4 R4
4 S11 S12 8 9 10
5 R6 R6 R6
6 S4 S5 13 3
7 S4 S5 14
8 S16 S15
9 R2 S17 R2
10 R4 R4 R4
156
11 S11 S12 18 9 10
12 R6 R6 R6
13 R1 S7 R1
14 R3 R3 R3
15 R5 R5 R5
16 S11 S12 19 10
17 S11 S12 20
18 S16 S21
19 R1 S17 R1
20 R3 R3 R3
21 R5 R5 R5
157
STACK INPUT BUFFER ACTION
$0 id+id*id$ Shift s5
$0id5 +id*id$ R6
$0F3 +id*id$ R4
$0T2 +id*id$ R2
$0E1 +id*id$ S6
$0E1+6 id*id$ S5
$0E1+6 id 5 *id$ R6
$0E1+6 F 3 *id$ R4
$0E1+6 T13 *id$ S7
$0E1+6T13*7 id$ S5
$0E1+6T13*7id 5 $ R6
$0E1+6T13*7F14 $ R3
$0E1+6T13 $ R1
$0E1 $ ACC
158
LALR PARSING
• Construction of the LALR parsing table
• Construct the LR(1) items
• Merge two states Ii and Ij if their first components (the core items) match, and create
a new state replacing the older states: Iij = Ii U Ij
• The parsing actions are based on each item set Ii.
– If [A→α.aβ, b] is in Ii and goto(Ii,a)=Ij, then create an entry in the action table
action[i,a]= shift j
– If there is an item [A→α., a] in Ii, then in the action table action[i,a]=reduce
by A→α. Here A should not be S’.
– If the item [S’→S., $] is in Ii, then action[i,$]=accept
• The goto part: the goto transitions for state i are considered for NT only.
If goto(Ii,A)=Ij, then goto[i,A]=j
• If the parsing actions conflict, then the grammar is not LALR(1). All
other entries are ERROR
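The merge step can be sketched as follows (assumed representation: a state is a frozenset of LR(1) items (A, rhs, dot, lookahead)); states with equal cores are unioned, exactly as in the merged states listed on the next slide:

def lalr_merge(lr1_states):
    def core(state):                     # the item set with lookaheads dropped
        return frozenset((l, r, d) for (l, r, d, _a) in state)
    merged = {}
    for st in lr1_states:                # union lookaheads of equal-core states
        merged.setdefault(core(st), set()).update(st)
    return [frozenset(s) for s in merged.values()]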
159
LALR STATES FROM CLR
I2,9: E→T., $/+/)
T → T.*F, $/+/)/*
I3,10: T →F. , $/+/)/*
I4,11: F →(.E) , $/+/)/*
E →.E+T, )/+
E →.T,)/+
T →.T*F, )/+/*
T →.F, )/+/*
F →.(E), )/+/*
F →.id, )/+/*
I5,12: F →id. , $/+/)/*
I6,16: E →E+.T, $/)/+
T →.T*F, $/)/+/*
T → .F, $/+/)/*
F →.(E), $/)/+/*
F →.id, $/)/+/*
I7,17: T →T*.F, $/+/)/*
F →.(E), $/+/)/*
F →.id, $/+/*/)
I8,18: F →(E.), $/+/)/*
E →E.+T, )/+
I13,19: E →E+T., $/)/+
T →T.*F, $/)/+/*
I14,20: T →T*F., $/+/)/*
I15, 21: F →(E). , $/+/)/*
160
STATE + * ( ) id $ E T F
0 S4,11 S5,12 1 2,9 3,10
1 S6,16 ACC
2,9 R2 S7,17 R2 R2
3,10 R4 R4 R4 R4
4,11 S4,11 S5,12 8,18 2,9 3,10
5,12 R6 R6 R6 R6
6,16 S4,11 S5,12 13,19 3,10
7,17 S4,11 S5,12 14,20
8,18 S6,16 S15,21
13,19 R1 S7,17 R1 R1
14,20 R3 R3 R3 R3
15,21 R5 R5 R5 R5

PPL unit 1 syntax and semantics- evolution of programming language lexical analysis

  • 1.
  • 2.
    UNIT I SYNTAXAND SEMANTICS • Evolution of programming languages – describing syntax – context-free grammars – attribute grammars – describing semantics – lexical analysis – parsing – recursive-descent – bottom up parsing 2
  • 3.
    3 OBJECTIVES • To understandand describe syntax and semantics of programming languages • To understand data, data types, and basic statements • To understand call-return architecture and ways of implementing them • To understand object-orientation, concurrency, and event handling in programming languages • To develop programs in non-procedural programming paradigms
  • 4.
    4 • Evolution ofprogramming languages • Describing syntax – Context-free grammars – Attribute grammars • Describing semantics • Lexical analysis – Parsing • Recursive-decent • Bottom up parsing
  • 5.
    5 Improved background forchoosing appropriate languages • C vs. Modula-3 vs. C++ for systems programming • Fortran vs. APL vs. Ada for numerical computations • Ada vs. Modula-2 for embedded systems • Common Lisp vs. Scheme vs. Haskell for symbolic data manipulation • Java vs. C/CORBA for networked PC programs
  • 6.
    6 Increased ability tolearn new languages • Easy to walk down language family tree • Concepts are similar across languages • If you think in terms of iteration, recursion, abstraction (for example), you will find it easier to assimilate the syntax and semantic details of a new language than if you try to pick it up in a vacuum • Analogy to human languages: good grasp of grammar makes it easier to pick up new languages
  • 7.
    7 Increased capacity toexpress ideas Figure out how to do things in languages that don't support them: • lack of suitable control structures in Fortran use comments and programmer discipline for control structures lack of recursion in Fortran, CSP, etc • write a recursive algorithm then use mechanical recursion elimination (even for things that aren't quite tail recursive) • lack of named constants and enumerations in Fortran • use variables that are initialized once, then never changed • lack of modules in C and Pascal • use comments and programmer discipline • lack of iterators in just about everything • fake them with (member?) functions
  • 8.
    8 What makes alanguage successful? • Easy to learn (BASIC, Pascal, LOGO, Scheme) • Easy to express things, easy use once fluent, "powerful” (C, Common Lisp, APL, Algol-68, Perl) • Easy to implement (BASIC, Forth) • Possible to compile to very good (fast/small) code (Fortran) • Backing of a powerful sponsor (COBOL, PL/1, Ada, Visual Basic) • Wide dissemination at minimal cost (Pascal, Turing, Java)
  • 9.
    9 What makes asuccessful language? The following key characteristics: – Simplicity and readability – Clarity about binding – Reliability – Support – Abstraction – Orthogonality – Efficient implementation
  • 10.
    10 Simplicity and Readability •Small instruction set – E.g., Java vs Scheme • Simple syntax – E.g., C/C++/Java vs Python • Benefits: – Ease of learning – Ease of programming
  • 11.
    11 A language elementis bound to a property at the time that property is defined for it. So a binding is the association between an object and a property of that object – Examples: • a variable and its type • a variable and its value – Early binding takes place at compile-time – Late binding takes place at run time Clarity about Binding
  • 12.
    12 Reliability A language isreliable if: – Program behavior is the same on different platforms • E.g., early versions of Fortran – Type errors are detected • E.g., C vs Haskell – Semantic errors are properly trapped • E.g., C vs C++ – Memory leaks are prevented • E.g., C vs Java
  • 13.
    13 Language Support • Accessible(public domain) compilers/interpreters • Good texts and tutorials • Wide community of users • Integrated with development environments (IDEs)
  • 14.
    14 Abstraction in Programming •Data – Programmer-defined types/classes – Class libraries • Procedural – Programmer-defined functions – Standard function libraries
  • 15.
    15 Orthogonality A language isorthogonal if its features are built upon a small, mutually independent set of primitive operations. • Fewer exceptional rules = conceptual simplicity – E.g., restricting types of arguments to a function • Tradeoffs with efficiency
  • 16.
    16 Efficient implementation • Embeddedsystems – Real-time responsiveness (e.g., navigation) – Failures of early Ada implementations • Web applications – Responsiveness to users (e.g., Google search) • Corporate database applications – Efficient search and updating • AI applications – Modeling human behaviors
  • 17.
    17 • Why dowe have programming languages? – way of thinking---way of expressing algorithms • languages from the user's point of view – abstraction of virtual machine---way of specifying what you want the hardware to do without getting down into the bits • languages from the implementor's point of view What is a language for?
  • 18.
    18 Genealogy of commonhigh-level programming languages
  • 19.
    19 History • Early History: The first programmers • The 1940s: Von Neumann and Zuse • The 1950s: The First Programming Language • The 1960s: An Explosion in Programming languages • The 1970s: Simplicity, Abstraction, Study • The 1980s: Consolidation and New Directions • The 1990s: Internet and the Web • The 2000s: tbd
  • 20.
    20 Early History: TheFirst Programmer • Jacquard loom of early 1800s – Translated card patterns into cloth designs • Charles Babbage’s analytical engine (1830s & 40s) Programs were cards with data and operations • Ada Lovelace – first programmer “The engine can arrange and combine its numerical quantities exactly as if they were letters or any other general symbols; And in fact might bring out its results in algebraic notation, were provision made.”
  • 21.
    21 The 1940s: VonNeumann and Zuse • Konrad Zuse (Plankalkul) – in Germany - in isolation because of the war – defined Plankalkul (program calculus) circa 1945 but never implemented it. – Wrote algorithms in the language, including a program to play chess. – His work finally published in 1972. – Included some advanced data type features such as • Floating point, used twos complement and hidden bits • Arrays • records (that could be nested)
  • 22.
    22 Plankalkul notation A(7) :=5 * B(6) | 5 * B => A V | 6 7 (subscripts) S | 1.n 1.n (data types)
  • 23.
    23 • Initial computerswere programmed in raw machine code. • These were entirely numeric. • What was wrong with using machine code? Everything! • Poor readability • Poor modifiability • Expression coding was tedious • Inherit deficiencies of hardware, e.g., no indexing or floating point numbers Machine Code (1940’s)
  • 24.
    24 • Short Codeor SHORTCODE - John Mauchly, 1949. • Pseudocode interpreter for math problems, on Eckert and Mauchly’s BINAC and later on UNIVAC I and II. • Possibly the first attempt at a higher level language. • Expressions were coded, left to right, e.g.: X0 = sqrt(abs(Y0)) 00 X0 03 20 06 Y0 • Some operations: 01 – 06 abs 1n (n+2)nd power 02 ) 07 + 2n (n+2)nd root 03 = 08 pause 4n if <= n 04 / 09 ( 58 print & tab Pseudocodes (1949)
  • 25.
    25 More Pseudocodes Speedcoding; 1953-4 • Apseudocode interpreter for math on IBM 701, IBM 650. • Developed by John Backus • Pseudo ops for arithmetic and math functions • Conditional and unconditional branching • Autoincrement registers for array access • Slow but still dominated by slowness of s/w math • Interpreter left only 700 words left for user program Laning and Zierler System - 1953 • Implemented on the MIT Whirlwind computer • First "algebraic" compiler system • Subscripted variables, function calls, expression translation • Never ported to any other machine
  • 26.
    26 The 1950s: TheFirst Programming Language • Pseudocodes: interpreters for assembly language like • Fortran: the first higher level programming language • COBOL: he first business oriented language • Algol: one of the most influential programming languages ever designed • LISP: the first language to depart from the procedural paradigm • APL:
  • 27.
    27 Fortran (1954-57) • FORmulaTRANslator • Developed at IBM under the guidance of John Backus primarily for scientific programming • Dramatically changed forever the way computers used • Has continued to evolve, adding new features & concepts. – FORTRAN II, FORTRAN IV, FORTRAN 66, FORTRAN 77, FORTRAN 90 • Always among the most efficient compilers, producing fast code • Still popular, e.g. for supercomputers
  • 28.
    28 FORTRAN 0 –1954 (not implemented) FORTRAN I - 1957 Designed for the new IBM 704, which had index registers and floating point hardware Environment of development: Computers were small and unreliable Applications were scientific No programming methodology or tools Machine efficiency was most important Impact of environment on design • No need for dynamic storage • Need good array handling and counting loops • No string handling, decimal arithmetic, or powerful input/output (commercial stuff) Fortran 0 and 1
  • 29.
    29 • Names couldhave up to six characters • Post-test counting loop (DO) • Formatted I/O • User-defined subprograms • Three-way selection statement (arithmetic IF) IF (ICOUNT-1) 100, 200, 300 • No data typing statements variables beginning with i, j, k, l, m or n were integers, all else floating point • No separate compilation • Programs larger than 400 lines rarely compiled correctly, mainly due to IBM 704’s poor reliability • Code was very fast • Quickly became widely used Fortran I Features
  • 30.
    30 Fortran II, IVand 77 FORTRAN II - 1958 • Independent compilation • Fix the bugs FORTRAN IV - 1960-62 • Explicit type declarations • Logical selection (IF) statement • Subprogram names could be parameters • ANSI standard in 1966 FORTRAN 77 - 1978 • Character string handling • Logical loop control (WHILE) statement • IF-THEN-ELSE statement
  • 31.
    31 Added many featuresof more modern programming languages, including • Pointers • Recursion • CASE statement • Parameter type checking • A collection of array operations, DOTPRODUCT, MATMUL, TRANSPOSE, etc • dynamic allocations and deallocation of arrays • a form of records (called derived types) • Module facility (similar Ada’s package) Fortran 90 (1990)
  • 32.
    32 COBOL • COmmon BusinessOriented Language • Principal mentor: (Rear Admiral Dr.) Grace Murray Hopper (1906-1992) • Based on FLOW-MATIC which had such features as: • Names up to 12 characters, with embedded hyphens • English names for arithmetic operators • Data and code were completely separate • Verbs were first word in every statement • CODASYL committee (Conference on Data Systems Languages) developed a programming language by the name of COBOL
  • 33.
    33 First CODASYL DesignMeeting - May 1959 Design goals: • Must look like simple English • Must be easy to use, even if that means it will be less powerful • Must broaden the base of computer users • Must not be biased by current compiler problems Design committee were all from computer manufacturers and DoD branches Design Problems: arithmetic expressions? subscripts? Fights among manufacturers COBOL
  • 34.
    34 COBOL Contributions: - First macrofacility in a high-level language - Hierarchical data structures (records) - Nested selection statements - Long names (up to 30 characters), with hyphens - Data Division Comments: • First language required by DoD; would have failed without DoD • Still the most widely used business applications language
  • 35.
    35 • Beginner's Allpurpose Symbolic Instruction Code • Designed by Kemeny & Kurtz at Dartmouth for the GE 225 with the goals: • Easy to learn and use for non-science students and as a path to Fortran and Algol • Must be ”pleasant and friendly" • Fast turnaround for homework • Free and private access • User time is more important than computer time • Well-suited for implementation on first PCs, e.g., Gates and Allen’s 4K Basic interpreter for the MITS Altair personal computer (circa 1975) • Current popular dialects: Visual BASIC BASIC (1964)
  • 36.
    36 LISP (1959) • LIStProcessing language (Designed at MIT by McCarthy) • AI research needed a language that: • Process data in lists (rather than arrays) • Handles symbolic computation (rather than numeric) • One universal, recursive data type: the s-expression • An s-expression is either an atom or a list of zero or more s-expressions • Syntax is based on the lambda calculus • Pioneered functional programming • No need for variables or assignment • Control via recursion and conditional expressions • Status • Still the dominant language for AI • COMMON LISP and Scheme are contemporary dialects • ML, Miranda, and Haskell are related languages
  • 37.
    37 Environment of development: 1.FORTRAN had (barely) arrived for IBM 70x 2. Many other languages were being developed, all for specific machines 3. No portable language; all were machine-dependent 4. No universal language for communicating algorithms ACM and GAMM met for four days for design - Goals of the language: 1. Close to mathematical notation 2. Good for describing algorithms 3. Must be translatable to machine code Algol
  • 38.
    38 Algol 58 Features • Conceptof type was formalized • Names could have any length • Arrays could have any number of subscripts • Parameters were separated by mode (in & out) • Subscripts were placed in brackets • Compound statements (begin ... end) • Semicolon as a statement separator • Assignment operator was := • if had an else-if clause Comments: •Not meant to be implemented, but variations of it were (MAD, JOVIAL) •Although IBM was initially enthusiastic, all support was dropped by mid-1959
  • 39.
    39 Algol 60 Modified ALGOL 58at 6-day meeting in Paris adding such new features as: • Block structure (local scope) • Two parameter passing methods • Subprogram recursion • Stack-dynamic arrays • Still no I/O and no string handling Successes: • The standard way to publish algorithms for over 20 years • All subsequent imperative languages are based on it • First machine-independent language • First language whose syntax was formally defined (BNF)
  • 40.
    40 Failure: Never widelyused, especially in U.S., mostly because 1. No I/O and the character set made programs nonportable 2. Too flexible--hard to implement 3. Entrenchment of FORTRAN 4. Formal syntax description 5. Lack of support by IBM Algol 60 (1960)
  • 41.
    41 APL • A ProgrammingLanguage • Designed by K.Iverson at Harvard in late 1950’s • A language for programming mathematical computations – especially those using matrices • Functional style and many whole array operations • Drawback is requirement of special keyboard
  • 42.
    42 The 1960s: AnExplosion in Programming Languages • The development of hundreds of programming languages • PL/I designed in 1963-4 – supposed to be all purpose – combined features of FORTRAN, COBOL and Algol 60 and more! – translators were slow, huge and unreliable – some say it was ahead of its time...... • Algol 68 • SNOBOL • Simula • BASIC
  • 43.
    43 PL/I • Computing situationin 1964 (IBM's point of view) Scientific computing • IBM 1620 and 7090 computers • FORTRAN • SHARE user group Business computing • IBM 1401, 7080 computers • COBOL • GUIDE user group • IBM’s goal: develop a single computer (IBM 360) and a single programming language (PL/I) that would be good for scientific and business applications. • Eventually grew to include virtually every idea in current practical programming languages.
  • 44.
    44 PL/I PL/I contributions: 1. Firstunit-level concurrency 2. First exception handling 3. Switch-selectable recursion 4. First pointer data type 5. First array cross sections Comments: • Many new features were poorly designed • Too large and too complex • Was (and still is) actually used for both scientific and business applications • Subsets (e.g. PL/C) developed which were more manageable
  • 45.
    45 Simula (1962- 67) • Designedand built by Ole-Johan Dahl and Kristen Nygaard at the Norwegian Computing Centre (NCC) in Oslo between 1962 and 1967 • Originally designed and implemented as a language for discrete event simulation • Based on ALGOL 60 Primary Contributions: • Coroutines - a kind of subprogram • Classes (data plus methods) and objects • Inheritance • Dynamic binding => Introduced the basic ideas that developed into object- oriented programming.
  • 46.
    46 From the continueddevelopment of ALGOL 60, but it is not a superset of that language • Design is based on the concept of orthogonality • Contributions: • User-defined data structures • Reference types • Dynamic arrays (called flex arrays) • Comments: • Had even less usage than ALGOL 60 • Had strong influence on subsequent languages, especially Pascal, C, and Ada Algol 68
  • 47.
    47 The 1970s: Simplicity, Abstraction,Study • Algol-W - Nicklaus Wirth and C.A.R.Hoare – reaction against 1960s – simplicity • Pascal – small, simple, efficient structures – for teaching program • C - 1972 - Dennis Ritchie – aims for simplicity by reducing restrictions of the type system – allows access to underlying system – interface with O/S - UNIX
  • 48.
    48 Pascal (1971) • Designedby Wirth, who quit the ALGOL 68 committee (didn't like the direction of that work) • Designed for teaching structured programming • Small, simple • Introduces some modest improvements, such as the case statement • Was widely used for teaching programming ~ 1980-1995.
  • 49.
    49 C (1972-) • Designedfor systems programming at Bell Labs by Dennis Ritchie and colleagues. • Evolved primarily from B, but also ALGOL 68 • Powerful set of operators, but poor type checking • Initially spread through UNIX and the availability of high quality, free compilers, especially gcc.
  • 50.
    50 Other descendants ofALGOL • Modula-2 (mid-1970s by Niklaus Wirth at ETH) • Pascal plus modules and some low-level features designed for systems programming • Modula-3 (late 1980s at Digital & Olivetti) • Modula-2 plus classes, exception handling, garbage collection, and concurrency • Oberon (late 1980s by Wirth at ETH) • Adds support for OOP to Modula-2 • Many Modula-2 features were deleted (e.g., for statement, enumeration types, with statement, non-integer array indices)
  • 51.
    51 The 1980s: Consolidationand New Paradigms • Ada – US Department of Defence – European team lead by Jean Ichbiah. (Sam Lomonaco was also on the ADA team ) • Functional programming – Scheme, ML, Haskell • Logic programming – Prolog • Object-oriented programming – Smalltalk, C++, Eiffel
  • 52.
    52 Ada • In studydone in 73-74 it was determined that the DoD was spending $3B annually on software, over half on embedded computer systems. • The Higher Order Language Working Group was formed and initial language requirements compiled and refined in 75-76 and existing languages evaluated. • In 1997, it was concluded that none were suitable, though Pascal, ALGOL 68 or PL/I would be a good starting point. • Language DoD-1 was developed through a series of competitive contracts.
  • 53.
    53 Ada • Renamed Adain May 1979. • Reference manual, Mil. Std. 1815 approved 10 December 1980. (Ada Bryon was born 10/12/1815) • “mandated” for use in DoD work during late 80’s and early 90’s. • Ada95, a joint ISO and ANSI standard, accepted in February 1995 and included many new features. • The Ada Joint Program Office (AJPO) closed 1 October 1998 (Same day as ISO/IEC 14882:1998 (C+ +) published!)
  • 54.
    54 Ad a Contributions: 1. Packages -support for data abstraction 2. Exception handling - elaborate 3. Generic program units 4. Concurrency - through the tasking model Comments: • Competitive design • Included all that was then known about software engineering and language design • First compilers were very difficult; the first really usable compiler came nearly five years after the language design was completed • Very difficult to mandate programming technology
  • 55.
    55 • Developed atthe University of Aix Marseille, by Comerauer and Roussel, with some help from Kowalski at the University of Edinburgh • Based on formal logic • Non-procedural • Can be summarized as being an intelligent database system that uses an inferencing process to infer the truth of given queries Logic Programming: Prolog
  • 56.
    56 Functional Programming • CommonLisp: consolidation of LISP dialects spurred practical use, as did the development of Lisp Machines. • Scheme: a simple and pure LISP like language used for teaching programming. • Logo: Used for teaching young children how to program. • ML: (MetaLanguage) a strongly-typed functional language first developed by Robin Milner in the 70’s • Haskell: polymorphicly typed, lazy, purely functional language.
  • 57.
    57 Smalltalk (1972- 80) • Developedat Xerox PARC by Alan Kay and colleagues (esp. Adele Goldberg) inspired by Simula 67 • First compilation in 1972 was written on a bet to come up with "the most powerful language in the world" in "a single page of code". • In 1980, Smalltalk 80, a uniformly object-oriented programming environment became available as the first commercial release of the Smalltalk language • Pioneered the graphical user interface everyone now uses • Industrial use continues to the present day
  • 58.
    58 • Developed atBell Labs by Stroustrup • Evolved from C and SIMULA 67 • Facilities for object-oriented programming, taken partially from SIMULA 67, added to C • Also has exception handling • A large and complex language, in part because it supports both procedural and OO programming • Rapidly grew in popularity, along with OOP • ANSI standard approved in November, 1997 C++ (1985)
  • 59.
    59 Eiffel •Eiffel - arelated language that supports OOP - (Designed by Bertrand Meyer - 1992) - Not directly derived from any other language - Smaller and simpler than C++, but still has most of the power
  • 60.
    60 1990’s: the Internetand Web During the 90’s, Object-oriented languages (mostly C+ +) became widely used in practical applications The Internet and Web drove several phenomena: – Adding concurrency and threads to existing languages – Increased use of scripting languages such as Perl and Tcl/Tk – Java as a new programming language
  • 61.
    61 Java • Developed atSun in the early 1990s with original goal of a language for embedded computers • Principals: Bill Joy, James Gosling, Mike Sheradin, Patrick Naughton • Original name, Oak, changed for copyright reasons • Based on C++ but significantly simplified • Supports only OOP • Has references, but not pointers • Includes support for applets and a form of concurrency (i.e. threads)
  • 62.
    62 The future • Inthe 60’s, the dream was a single all-purpose language (e.g., PL/I, Algol) • The 70s and 80s dream expressed by Winograd (1979) “Just as high-level languages allow the programmer to escape the intricacies of the machine, higher level programming systems can provide for manipulating complex systems. We need to shift away from algorithms and towards the description of the properties of the packages that we build. Programming systems will be declarative not imperative” • Will that dream be realised? • Programming is not yet obsolete
  • 63.
    Syntax and Semantics •Introduction •  Syntax: the form or structure of the expressions, statements, and program • units •  Semantics: the meaning of the expressions, statements, and program units •  Syntax and semantics provide a language‘s definition • – Users of a language definition • – Other language designers • – Implementers • – Programmers (the users of the language) 63
  • 64.
    The General Problemof Describing Syntax • A sentence is a string of characters over some alphabet •  A language is a set of sentences •  A lexeme is the lowest level syntactic unit of a language (e.g., *, sum, begin) •  A token is a category of lexemes (e.g., identifier) •  Languages Recognizers • – A recognition device reads input strings of the language and decides whether the input strings belong to the language • – Example: syntax analysis part of a compiler •  Languages Generators • – A device that generates sentences of a language • – One can determine if the syntax of a particular sentence is correct by comparing it to the structure of the generator 64
  • 65.
    Formal Methods ofDescribing Syntax • Backus-Naur Form and Context-Free Grammars • – Most widely known method for describing programming language syntax •  Extended BNF • – Improves readability and writability of BNF •  Grammars and Recognizers • Backus-Naur Form and Context-Free Grammars •  Context-Free Grammars •  Developed by Noam Chomsky in the mid-1950s •  Language generators, meant to describe the syntax of natural languages •  Define a class of languages called context-free languages • Backus-Naur Form (BNF) •  Backus-Naur Form (1959) • – Invented by John Backus to describe ALGOL 58 • – BNF is equivalent to context-free grammars • – BNF is a metalanguage used to describe another language • – In BNF, abstractions are used to represent classes of syntactic structures-- they act like syntactic variables (also called nonterminal symbols) 65
  • 66.
    BNF Fundamentals • Non-terminals:BNF abstractions •  Terminals: lexemes and tokens •  Grammar: a collection of rules • – Examples of BNF rules: • ident_list> → identifier | identifer, <ident_list> • <if_stmt> → if <logic_expr> then <stmt> • BNF Rules •  A rule has a left-hand side (LHS) and a right-hand side (RHS), and consists of terminal and nonterminal symbols •  A grammar is a finite nonempty set of rules •  An abstraction (or nonterminal symbol) can have more than one RHS • <stmt> → <single_stmt> • | begin <stmt_list> end 66
  • 67.
    • Describing Lists • Syntactic lists are described using recursion <ident_list> → ident | ident, <ident_list> •  A derivation is a repeated application of rules, starting with the start symbol • and ending with a sentence (all terminal symbols) • An Example Grammar <program> → <stmts> <stmts> → <stmt> | <stmt> ; <stmts> <stmt> → <var> = <expr> <var> → a | b | c | d <expr> → <term> + <term> | <term> - <term> <term> → <var> | const 67
  • 68.
    Parse Tree • Ahierarchical representation of a derivation 68
  • 69.
    Derivation •  Everystring of symbols in the derivation is a sentential form •  A sentence is a sentential form that has only terminal symbols •  A leftmost derivation is one in which the leftmost nonterminal in each sentential form is the one that is expanded •  A derivation may be neither leftmost nor rightmost 69
  • 70.
    Ambiguity in Grammars • A grammar is ambiguous iff it generates a sentential form that has two or more distinct parse trees • An Unambiguous Expression Grammar • If we use the parse tree to indicate precedence levels of the operators, we cannot have ambiguity • <expr> → <expr> - <term>|<term> • <term> → <term> / const|const 70
  • 71.
  • 72.
    Extended Backus-Naur Form (EBNF) •Optional parts are placed in brackets ([ ]) • <proc_call> → ident [(<expr_list>)] •  Alternative parts of RHSs are placed inside parentheses and separated via vertical bars • <term> → <term> (+|-) const •  Repetitions (0 or more) are placed inside braces ({ }) • <ident> → letter {letter|digit} 72
  • 73.
    BNF and EBNF • BNF <expr> → <expr> + <term> | <expr> - <term> | <term> <term> → <term> * <factor> | <term> / <factor> | <factor> •  EBNF <expr> → <term> {(+ | -) <term>} <term> → <factor> {(* | /) <factor>} 73
  • 74.
    Attribute Grammars • Context-freegrammars (CFGs) cannot describe all of the syntax of programming languages •  Additions to CFGs to carry some semantic info along parse trees •  Primary value of attribute grammars (AGs): • – Static semantics specification • – Compiler design (static semantics checking) • Definition •  An attribute grammar is a context-free grammar G = (S, N, T, P) with the following additions: • – For each grammar symbol x there is a set A(x) of attribute values 74
  • 75.
    • Each rulehas a set of functions that define certain attributes of the nonterminals in the rule • – Each rule has a (possibly empty) set of predicates to check for attribute consistency • – Let X0 X1 ... Xn be a rule • – Functions of the form S(X0) = f(A(X1), ... , A(Xn)) define synthesized attributes • – Functions of the form I(Xj) = f(A(X0), ... , A(Xn)), for i <= j <= n, define inherited attributes • – Initially, there are intrinsic attributes on the leaves 75
  • 76.
    Example •  Syntax <assign>→ <var> = <expr> <expr> → <var> + <var> | <var> <var> → A | B | C •  actual_type: synthesized for <var> and <expr> •  expected_type: inherited for <expr> • Syntax rule :<expr> → <var>[1] + <var>[2] • Semantic rules :<expr>.actual_type → <var>[1].actual_type • Predicate :<var>[1].actual_type == <var>[2].actual_type • <expr>.expected_type == <expr>.actual_type • Syntax rule :<var> → id • Semantic rule :<var>.actual_type  lookup (<var>.string) 76
  • 77.
    How are attributevalues computed? • – If all attributes were inherited, the tree could be decorated in top-down order. • – If all attributes were synthesized, the tree could be decorated in bottom- up order. • – In many cases, both kinds of attributes are used, and it is some combination of top-down and bottom-up that must be used. <expr>.expected_type  inherited from parent <var>[1].actual_type  lookup (A) <var>[2].actual_type  lookup (B) <var>[1].actual_type =? <var>[2].actual_type <expr>.actual_type  <var>[1].actual_type <expr>.actual_type =? <expr>.expected_type 77
  • 78.
    Describing the Meaningsof Programs:Dynamic Semantics • There is no single widely acceptable notation or formalism for describing semantics • Operational Semantics – Describe the meaning of a program by executing its statements on a machine, either simulated or actual. The change in the state of the machine (memory, registers, etc.) defines the meaning of the statement • To use operational semantics for a high-level language, a virtual machine is needed • A hardware pure interpreter would be too expensive • A software pure interpreter also has problems: • – The detailed characteristics of the particular computer would make actions difficult to understand – Such a semantic definition would be machine- dependent 78
  • 79.
    Operational Semantics and AxiomaticSemantics • A better alternative: A complete computer simulation The process: • – Build a translator (translates source code to the machine code of an idealized computer) • – Build a simulator for the idealized computer Evaluation of operational semantics: • – Good if used informally (language manuals, etc.) • – Extremely complex if used formally (e.g., VDL), it was used for describing semantics of PL/I. Axiomatic Semantics • – Based on formal logic (predicate calculus) • – Original purpose: formal program verification • – Approach: Define axioms or inference rules for each statement type in the language (to allow transformations of expressions to other expressions) • – The expressions are called assertions 79
  • 80.
    Axiomatic Semantics • Anassertion before a statement (a precondition) states the relationships and • constraints among variables that are true at that point in execution • An assertion following a statement is a postcondition • A weakest precondition is the least restrictive precondition that will guarantee the postcondition • Pre-post form: {P} statement {Q} • An example: a = b + 1 {a > 1} • One possible precondition: {b > 10} • Weakest precondition: {b > 0} • Program proof process: The postcondition for the whole program is the desired result. Work back through the program to the first statement. If the precondition on the first statement is the same as the program spec, the program is correct. • An axiom for assignment statements (x = E): {Qx->E} x = E {Q} 80
  • 81.
    • An inferencerule for sequences • – For a sequence S1;S2: • – {P1} S1 {P2} • – {P2} S2 {P3} •  An inference rule for logical pretest loops • For the loop construct: • {P} while B do S end {Q} • Characteristics of the loop invariant • I must meet the following conditions: • – P => I (the loop invariant must be true initially) • – {I} B {I} (evaluation of the Boolean must not change the validity of I) • – {I and B} S {I} (I is not changed by executing the body of the loop) • – (I and (not B)) => Q (if I is true and B is false, Q is implied) • – The loop terminates (this can be difficult to prove) •  The loop invariant I is a weakened version of the loop postcondition, and it is also a precondition. 81
  • 82.
    • Evaluation ofAxiomatic Semantics: • – Developing axioms or inference rules for all of the statements in a language is difficult • – It is a good tool for correctness proofs, and an excellent framework for reasoning about programs, but it is not as useful for language users and compiler writers • – Its usefulness in describing the meaning of a programming language is limited for language users or compiler writers 82
  • 83.
    Denotational Semantics • Basedon recursive function theory • – The most abstract semantics description method • – Originally developed by Scott and Strachey (1970) • – The process of building a denotational spec for a language (not necessarily easy): • – Define a mathematical object for each language entity • – Define a function that maps instances of the language entities onto instances of the corresponding mathematical objects • – The meaning of language constructs are defined by only the values of the • program's variables • – The difference between denotational and operational semantics: In operational semantics, the state changes are defined by coded algorithms; in denotational semantics, they are defined by rigorous mathematical functions • – The state of a program is the values of all its current variables • s = {<i1, v1>, <i2, v2>, …, <in, vn>} • – Let VARMAP be a function that, when given a variable name and a state, • returns the current value of the variable • VARMAP(ij, s) = vj 83
  • 84.
    Expressions • Map expressionsonto Z {error}  •  We assume expressions are decimal numbers, variables, or binary expressions having one arithmetic operator and two operands, each of which can be an expression •  Assignment Statements • – Maps state sets to state sets •  Logical Pretest Loops • – Maps state sets to state sets  The meaning of the loop is the value of the program variables after the statements in the loop have been executed the prescribed number of times, assuming there have been no errors •  In essence, the loop has been converted from iteration to recursion, where the recursive control I s mathematically defined by other recursive state mapping functions •  Recursion, when compared to iteration, is easier to describe with mathematical rigor 84
  • 85.
    • Evaluation ofdenotational semantics – Can be used to prove the correctness of programs – Provides a rigorous way to think about programs – Can be an aid to language design – Has been used in compiler generation systems – Because of its complexity, they are of little use to language users 85
  • 86.
    86 86 LEXICAL ANALYSIS • ROLEOF THE LEXICAL ANALYZER – The main function is to read the input and produce the output as a sequence of tokens that the parser uses for syntax analysis – The command namely “get next token” is used by the lexical analyzer to read the input characters until it can identify the next token – It also performs the user interface task’s – It also correlate error messages from compiler. The two phases of LA are • Scanning (simple task) • Lexical Analysis ( complex task)
  • 87.
    87 87 Tokens Patterns andLexemes • Token represents a logically cohesive sequence of characters • The set of string is described by a rule called pattern associated with the token • The character sequence forming a token is called lexeme for the token • Tokens are keywords, operators, identifiers, constants and punctuations • Pattern is a rule describing the set of lexeme that can represent a particular token in the program • Lexeme matched by the pattern for the token represents strings of characters
  • 88.
    88 88 TOKEN LEXEME PATTERN constconst const Relation <,<=,=,>,>=,<> < or <= or = or > or >= or <> Num 3.14,6.2 Any constant Id Pi, count Letter followed by letters and digits
  • 89.
    89 Specification of Patternsfor Tokens: Regular Definitions • Example: letter  A | B | … | Z | a | b | … | z digit  0 | 1 | … | 9 id  letter ( letter | digit )*
  • 90.
    90 • We frequentlyuse the following shorthands: r+ = rr* r? = r |  [a-z] = a | b | c | … | z • For example: digit  [0-9] num  digit+ (. digit+ )? ( E (+|-)? digit+ )?
  • 91.
    91 Regular Definitions andGrammars stmt  if expr then stmt | if expr then stmt else stmt |  expr  term relop term | term term  id | num if  if then  then else  else relop  < | <= | <> | > | >= | = id  letter ( letter | digit )* num  digit+ (. digit+ )? ( E (+|-)? digit+ )? Grammar Regular definitions
  • 92.
    92 Implementing a ScannerUsing Transition Diagrams 0 2 1 6 3 4 5 7 8 return(relop, LE) return(relop, NE) return(relop, LT) return(relop, EQ) return(relop, GE) return(relop, GT) start < = > = > = other other * * 9 start letter 10 11 * other letter or digit return(gettoken(), install_id()) relop  < | <= | <> | > | >= | = id  letter ( letter | digit )*
  • 93.
    93 Transition Graph • AnNFA can be diagrammatically represented by a labeled directed graph called a transition graph 0 start a 1 3 2 b b a b S = {0,1,2,3}  = {a,b} s0 = 0 F = {3}
  • 94.
    94 Transition Table • Themapping  of an NFA can be represented in a transition table State Input a Input b 0 {0, 1} {0} 1 {2} 2 {3} (0,a) = {0,1} (0,b) = {0} (1,b) = {2} (2,b) = {3}
  • 95.
    95 N(r2) N(r1) From Regular Expressionto NFA (Thompson’s Construction) f i  f a i f i N(r1) N(r2) start start start     f i start N(r) f i start    a r1 | r2 r1r2 r*  
  • 96.
    96 Combining the NFAsof a Set of Regular Expressions 2 a 1 start 6 a 3 start 4 5 b b 8 b 7 start a b a { action1 } abb { action2 } a*b+ { action3 } 2 a 1 6 a 3 4 5 b b 8 b 7 a b 0 start   
  • 97.
    97 Simulating the CombinedNFA Example 1 2 a 1 6 a 3 4 5 b b 8 b 7 a b 0 start    0 1 3 7 2 4 7 7 8 Must find the longest match: Continue until no further moves are possible When last state is accepting: execute action action1 action2 action3 a b a a none action3
  • 98.
    Parsing • Parsing isthe process to determine whether the start symbol can derive the program or not. If the Parsing is successful then the program is a valid program otherwise the program is invalid. • There are generally two types of Parsers: • Top-Down Parsers: – In this Parsing technique we expand the start symbol to the whole program. – Recursive Descent and LL parsers are the Top-Down parsers. • Bottom-Up Parsers: – In this Parsing technique we reduce the whole program to start symbol. – Operator Precedence Parser, LR(0) Parser, SLR Parser, LALR Parser and CLR Parser are the Bottom-Up parsers. 98
  • 99.
    99 PARSING TECHNIQUES PARSER TOP DOWNPARSER BOTTOM UP PARSER BACKTRACKING or RECURSIVE DESCENT PARSER PREDICTIVE PARSER SHIFT REDUCE PARSER LR PARSER SLR PARSER LALR PARSER CLR PARSER OPERATOR PRECEDENCE PARSING
  • 100.
    100 TOP DOWN VsBOTTOM UP SNo TOP DOWN PARSER BOTTOM UP PARSER 1 Parse tree can be built from root to leaves Parse tree can be built from leaves to root 2 This is simple to implement This is complex 3 Less efficient. Various problems that occurs during top down techniques are ambiguity, left recursion, left factoring When the bottom up parser handles ambiguous grammar conflicts occur in parse table 4 It is applicable to small class of languages It is applicable to a broad class of languages 5 Parsing techniques i. Recursive descent parser ii. Predictive parser Parsing techniques. i. shift reduce, ii. Operator precedence, iii. LR parser
  • 101.
    101 RECURSIVE DESCENT PARSER •A parser that uses collection of recursive procedures for parsing the given input string is called Recursive Descent parser • The CFG is used to build the recursive routines • The RHS of the production rule is directly converted to a program. • For each NT a separate procedure is written and body of the procedure is RHS of the corresponding NT.
  • 102.
    RECURSIVE DESCENT PARSER •It is a kind of Top-Down Parser. A top-down parser builds the parse tree from the top to down, starting with the start non-terminal. A Predictive Parser is a special case of Recursive Descent Parser, where no Back Tracking is required. • By carefully writing a grammar means eliminating left recursion and left factoring from it, the resulting grammar will be a grammar that can be parsed by a recursive descent parser. 102
  • 103.
    103 Basic steps ofconstruction of RD Parser • The RHS of the rule is directly converted into program code symbol by symbol 1. If the input symbol is NT then a call to the procedure corresponding the non-terminal is made. 2. If the input is terminal then it is matched with the lookahead from input. The lookahead pointer has to be advanced on matching of the input symbol 3. If the production rule has many alternates then all these alternates has to be combined into a single body of procedure. 4. The parser should be activated by a procedure corresponding to the start symbol.
  • 104.
    104 Example A  aBe| cBd | C B  bB |  C  f proc C { match the current token with f, proc A { and move to the next token; } case of the current token { a: - match the current token with a, and move to the next token; proc B { - call B; case of the current token { - match the current token with e, b: - match the current token with b, and move to the next token; and move to the next token; c: - match the current token with c, - call B and move to the next token; ε : do nothing - call B; } - match the current token with d, } and move to the next token; f: - call C } }
  • 105.
    • Take thegrammar for a straightforward arithmetic language, for instance: 105
  • 106.
    Parsing: • The fundamentalprinciple of recursive descent is Writing a collection of recursive functions, one for each nonterminal symbol in the grammar, is the process of parsing. A series of symbols that matches a particular rule must be parsed by each function, which is assigned to a grammar rule. • The expression function, which is invoked with the input string, is where the recursive descent parser begins. Depending on whether the symbol is a number or an opening parenthesis, the function analyses the first symbol of the input and chooses which alternative of the term rule to apply. • The factor function is used to parse the symbol's value if it is a number. The expression function is used recursively to parse the expression inside the parentheses if the symbol is an opening parenthesis. The term function is invoked recursively to parse any subsequent multiplication or division signs and factors after the factor or expression function has returned. 106
  • 107.
  • 108.
    • The parserfirst calls the expression function with the supplied string. • When "2" is provided as input, the function calls another function, which then executes the data and returns • The expression function then reads the next symbol, a plus sign. It again calls the term function with the input ",". 108
  • 109.
    Recursive descent parsinghas the following benefits: • Ease of use: Because recursive descent parsing closely mimics the grammar rules of the language being parsed, it is simple to comprehend and use. • Readability: The parsing code is usually set up in a structured and modular way, which makes it easier to read and maintain. • Recursive descent parsers can produce descriptive error messages, which make it simpler to find and detect syntax mistakes in the input. 3. Error reporting. • Predictability: The predictable behavior of recursive descent parsers makes the parsing process deterministic and clear. 109
  • 110.
    Recursive descent parsing, however,also has certain drawbacks: • Recursive descent parsers encounter difficulties with left-recursive grammar rules since they can result in unbounded recursion. To effectively handle left recursion, care must be made to avoid it or employ methods like memoization. • Recursive descent parsers rely on backtracking when internal alternatives to a grammar rule are unsuccessful. This could result in inefficiencies, especially if the grammar contains a lot of ambiguity or options. • Recursive descent parsers frequently adhere to the LL(1) constraint, which requires that they only use one token of lookahead. The grammar's expressiveness is constrained by this restriction because it is unable to handle some ambiguous or context-sensitive languages. 110
  • 111.
    An outline ofthe Recursive Descent Parsing algorithm is provided below: • Grammar: The first step in parsing a language is to define its grammar. A set of production rules that outline the language's syntactic structure makes up the grammar. Each rule is made up of a series of terminal and nonterminal symbols on the right side and a nonterminal symbol on the left side. • Create parsing functions: For each nonterminal symbol in the grammar, create a parsing function. The task of identifying and parsing the linguistic expressions corresponding to each nonterminal symbol will fall to each function. • Input tokens read: Read the input tokens that came from the tokenizer or lexical analyzer. The IDs, keywords, operators, and other components of the input language are represented by these tokens. 111
  • 112.
    • Implement parsingfunctions: Recursively implement each parsing function. These steps should be followed by each function: – Verify if the current token matches the nonterminal's anticipated symbol. – If the nonterminal has numerous production rules, handle each alternative using an if-else or switch statement. Each possibility ought to be represented by a different function call or block of code. – Recursively invoke the parsing routines for each alternative's matching nonterminals in the rule. The parsing procedure will continue until all of the input has been processed thanks to this recursive call. – Take care of any additional nonterminal-specific logic, such as parse tree construction or semantic actions. • Start parsing: Launch the parsing operation by invoking the parsing function that corresponds to the grammar's start symbol. The recursive descent parsing procedure will get started with this function. • Implement error-handling procedures to handle unusual input or notify syntax mistakes. Give the user clear error messages when one happens so they may comprehend and fix the issue. 112
  • 113.
    Bottom up parsing •Bottom-up Parsers / Shift Reduce Parsers Build the parse tree from leaves to root. Bottom-up parsing can be defined as an attempt to reduce the input string w to the start symbol of grammar by tracing out the rightmost derivations of w in reverse. Eg. 113
  • 114.
    • A generalshift reduce parsing is LR parsing. The L stands for scanning the input from left to right and R stands for constructing a rightmost derivation in reverse. 114
  • 115.
    115 Predictive Parsing -LL(1) Parser • This top-down parsing algorithm is of non- recursive type. • In this type parsing table is built • For LL(1) Uses only one input symbol tp predict the parsing process Left most derivation Input scanned from left to right
  • 116.
    116 • The datastructures used by LL(1) are – Input buffer (store the input tokens) – Stack (hold left sentential form) – Parsing table (row of NT, column of T) Input token Stack Output Parsing table LL(1) parser
  • 117.
    117 LL(1) Parser input buffer –our string to be parsed. We will assume that its end is marked with a special symbol $. output – a production rule representing a step of the derivation sequence (left-most derivation) of the string in the input buffer. stack – contains the grammar symbols – at the bottom of the stack, there is a special end marker symbol $. – initially the stack contains only the symbol $ and the starting symbol S. $S  initial stack – when the stack is emptied (ie. only $ left in the stack), the parsing is completed. parsing table – a two-dimensional array M[A,a] – each row is a non-terminal symbol – each column is a terminal symbol or the special symbol $ – each entry holds a production rule.
  • 118.
    118 LL(1) Parser –Parser Actions • The symbol at the top of the stack (say X) and the current symbol in the input string (say a) determine the parser action. • There are four possible parser actions. 1. If X and a are $ → parser halts (successful completion) 2. If X and a are the same terminal symbol (different from $) → parser pops X from the stack, and moves the next symbol in the input buffer. 3. If X is a non-terminal → parser looks at the parsing table entry M[X,a]. If M[X,a] holds a production rule XY1Y2...Yk, it pops X from the stack and pushes Yk,Yk-1,...,Y1 into the stack. The parser also outputs the production rule XY1Y2...Yk to represent a step of the derivation. 4. none of the above → error – all empty entries in the parsing table are errors. – If X is a terminal symbol different from a, this is also an error case.
  • 119.
    119 • The constructionof predictive LL(1) parser is based on two very important functions and those are FIRST and FOLLOW. • For the construction 1. Computation of FIRST and FOLLOW function 2. Construction the predictive parsing table using FIRST and FOLLOW functions 3. Parse the input string with the help of predictive parsing table
  • 120.
120 FIRST function • FIRST(α) is the set of terminal symbols that can appear first in some derivation from α. • If α ⇒ ε then ε is also in FIRST(α). • The following rules are used to compute FIRST: – 1. For a terminal symbol a, FIRST(a) = {a} – 2. If there is a rule X → ε then ε is in FIRST(X) – 3. For a rule A → X1X2...Xk, FIRST(A) contains FIRST(X1) − {ε}; it also contains FIRST(Xj) − {ε} whenever ε is in FIRST(Xi) for every i ≤ j−1; and if ε is in FIRST(Xi) for all i ≤ k, then ε is in FIRST(A)
121 FOLLOW function • FOLLOW(A) is defined as the set of terminal symbols that can appear immediately to the right of A in some sentential form. • FOLLOW(A) = { a | S ⇒* αAaβ, where α and β are strings of grammar symbols (terminal or non-terminal) } • The rules for computing FOLLOW are given below (a computation sketch follows this slide): 1. For the start symbol S, place $ in FOLLOW(S) 2. If there is a production A → αBβ, then everything in FIRST(β) except ε is placed in FOLLOW(B) 3. If there is a production A → αB, or a production A → αBβ with ε in FIRST(β), then everything in FOLLOW(A) is placed in FOLLOW(B)
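A sketch of the usual fixed-point computation of FIRST and FOLLOW in Python. The grammar encoding (a dict from nonterminal to a list of alternatives, with [] for the ε alternative and the string 'eps' standing for ε in the sets) is an assumption for illustration:

GRAMMAR = {                  # the expression grammar from these slides
    'E':  [['T', "E'"]],
    "E'": [['+', 'T', "E'"], []],     # [] is the epsilon alternative
    'T':  [['F', "T'"]],
    "T'": [['*', 'F', "T'"], []],
    'F':  [['(', 'E', ')'], ['id']],
}
NT = set(GRAMMAR)

def first_of_seq(seq, first):
    # FIRST of a sequence of grammar symbols; 'eps' stands for epsilon
    out = set()
    for X in seq:
        if X not in NT:              # terminal: FIRST(a) = {a}
            out.add(X)
            return out
        out |= first[X] - {'eps'}
        if 'eps' not in first[X]:    # X cannot vanish, so stop here
            return out
    out.add('eps')                   # the whole sequence can derive epsilon
    return out

def compute_first_follow(start):
    first = {A: set() for A in NT}
    follow = {A: set() for A in NT}
    follow[start].add('$')           # rule 1: $ in FOLLOW(start)
    changed = True
    while changed:                   # iterate until a fixed point
        changed = False
        for A, alts in GRAMMAR.items():
            for alt in alts:
                f = first_of_seq(alt, first)
                if not f <= first[A]:
                    first[A] |= f; changed = True
                for i, B in enumerate(alt):
                    if B not in NT:
                        continue
                    rest = first_of_seq(alt[i + 1:], first)
                    add = rest - {'eps'}       # rule 2: FIRST(beta) - eps
                    if 'eps' in rest:
                        add = add | follow[A]  # rule 3: FOLLOW(A) into FOLLOW(B)
                    if not add <= follow[B]:
                        follow[B] |= add; changed = True
    return first, follow

first, follow = compute_first_follow('E')
print(sorted(first["E'"]))   # ['+', 'eps']
print(sorted(follow['T']))   # ['$', ')', '+']

The printed sets agree with the worked example on the next slides.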
122 FIRST AND FOLLOW EXAMPLE E → TE'; E' → +TE' | ε; T → FT'; T' → *FT' | ε; F → (E) | id • From E → TE'; T → FT'; F → (E) | id: • FIRST(E) = FIRST(T) = FIRST(F) • Here, F → (E) and F → id • So, FIRST(F) = {(, id} • FIRST(E') = {+, ε} since E' → +TE' | ε • FIRST(T') = {*, ε} since T' → *FT' | ε
123 • FOLLOW(E) • For F → (E) – the symbol ) appears immediately after E, so ) will be in FOLLOW(E) – by rule A → αBβ mapped onto F → (E), FIRST()) = {)} goes into FOLLOW(E) • Since E is the start symbol, $ will be in FOLLOW(E) – Hence, FOLLOW(E) = {), $} • FOLLOW(E') • For E → TE' – by the rule for A → αB, everything in FOLLOW(E) is in FOLLOW(E') – FOLLOW(E') = {), $} • For E' → +TE' – FOLLOW(E') ⊆ FOLLOW(E'), which adds nothing new – FOLLOW(E') = {), $} • FOLLOW(T) • For E → TE' – by rule A → αBβ, FIRST(E') − {ε} = {+} goes into FOLLOW(T) • For E' → +TE' – by the rule for A → αB, FOLLOW(T) ⊇ FOLLOW(E') = {), $} – Hence FOLLOW(T) = {+, ), $}
124 • FOLLOW(T') – For T → FT' • by the rule for A → αB, FOLLOW(T') ⊇ FOLLOW(T) = {+, ), $} – For T' → *FT' • FOLLOW(T') ⊆ FOLLOW(T'), which adds nothing new • Hence FOLLOW(T') = {+, ), $} • FOLLOW(F) – For T → FT' • by rule A → αBβ, FIRST(T') − {ε} = {*} goes into FOLLOW(F) – For T' → *FT' • by the rule for A → αB (and since ε is in FIRST(T')), FOLLOW(F) ⊇ FOLLOW(T') = {+, ), $} • Hence, FOLLOW(F) = {+, *, ), $}
125 Predictive parsing table construction • For each rule A → α of grammar G (a construction sketch follows below): 1. For each terminal a in FIRST(α), create the entry M[A,a] = A → α 2. If ε is in FIRST(α), create the entry M[A,b] = A → α for every symbol b in FOLLOW(A) 3. If ε is in FIRST(α) and $ is in FOLLOW(A), create the entry M[A,$] = A → α 4. All the remaining entries in the table M are marked as ERROR
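Continuing the FIRST/FOLLOW sketch above (it reuses GRAMMAR, first, follow and first_of_seq from that sketch), the three rules on this slide translate almost directly. The duplicate-entry assert doubles as an LL(1) test: two rules landing in one cell would be a conflict.

def build_table(grammar, first, follow):
    M = {}
    for A, alts in grammar.items():
        for alt in alts:
            f = first_of_seq(alt, first)          # FIRST(alpha)
            # rule 1 gives the terminals of FIRST(alpha); rules 2 and 3
            # add FOLLOW(A), including $, when epsilon is in FIRST(alpha)
            targets = (f - {'eps'}) | (follow[A] if 'eps' in f else set())
            for a in targets:
                assert (A, a) not in M, "grammar is not LL(1)"
                M[A, a] = alt                     # absent entries are ERRORs
    return M

M = build_table(GRAMMAR, first, follow)
print(M["E'", '$'])     # []  i.e. E' -> epsilon
print(M['F', '('])      # ['(', 'E', ')']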
126 PARSING TABLE

        id       +         *         (        )        $
E     E→TE'                        E→TE'
E'             E'→+TE'                      E'→ε     E'→ε
T     T→FT'                        T→FT'
T'             T'→ε      T'→*FT'            T'→ε     T'→ε
F     F→id                         F→(E)

Let's parse the input string id+id*id using the table above. In the initial configuration the stack contains $ and the start symbol E, and the input buffer holds the input string followed by $.
127
Stack      Input        Action
$E         id+id*id$    E → TE'
$E'T       id+id*id$    T → FT'
$E'T'F     id+id*id$    F → id
$E'T'id    id+id*id$    match id
$E'T'      +id*id$      T' → ε
$E'        +id*id$      E' → +TE'
$E'T+      +id*id$      match +
$E'T       id*id$       T → FT'
$E'T'F     id*id$       F → id
$E'T'id    id*id$       match id
$E'T'      *id$         T' → *FT'
$E'T'F*    *id$         match *
$E'T'F     id$          F → id
$E'T'id    id$          match id
$E'T'      $            T' → ε
$E'        $            E' → ε
$          $            accept
128 BOTTOM UP PARSING • The input string is taken first, and we try to reduce this string with the help of the grammar until we obtain the start symbol. • Parsing halts successfully as soon as we reach the start symbol. • Handle pruning – finding the substring that can be reduced to an appropriate non-terminal. – A handle is a substring that matches the right side of a production and whose reduction represents one step of the rightmost derivation in reverse. – In other words, bottom-up parsing is a process of detecting handles and using them in reductions.
129 HANDLE PRUNING • Consider the grammar E → E+E; E → id • Rightmost derivation for the string id+id+id: – E ⇒ E+E – ⇒ E+E+E – ⇒ E+E+id – ⇒ E+id+id – ⇒ id+id+id • The substrings rewritten at each step are the handles; the parser reduces them in reverse order.
130 SHIFT REDUCE PARSER • It attempts to construct the parse tree from the leaves to the root. • It requires the following data structures – an input buffer storing the input string w$ – a stack for storing and accessing the LHS and RHS of rules, with $ at the bottom (the stack holds $S on acceptance)
131 PARSING OPERATIONS • SHIFT – moving a symbol from the input buffer onto the stack • REDUCE – if a handle is present on the top of the stack, reduce it by the appropriate rule: the RHS is popped and the LHS is pushed • ACCEPT – if the stack contains only the start symbol (above $) and the input buffer is empty at the same time, the action is accept • ERROR – a situation in which the parser can neither shift nor reduce the symbols
132 • Two rules are followed – If the incoming operator has higher priority than the operator in the stack, perform SHIFT – If the operator in the stack has the same or higher priority than the incoming operator, perform REDUCE • Viable prefixes – the prefixes of right sentential forms that can appear on the stack of a shift/reduce parser. It is always possible to add terminals to the end of a viable prefix to obtain a right sentential form. • Consider the grammar E → E-E; E → E*E; E → id. Perform shift-reduce parsing of the input string id-id*id (trace on the next slide; a code sketch follows it).
133
STACK     INPUT BUFFER   PARSING ACTION
$         id-id*id$      Shift
$id       -id*id$        Reduce by E → id
$E        -id*id$        Shift
$E-       id*id$         Shift
$E-id     *id$           Reduce by E → id
$E-E      *id$           Shift
$E-E*     id$            Shift
$E-E*id   $              Reduce by E → id
$E-E*E    $              Reduce by E → E*E
$E-E      $              Reduce by E → E-E
$E        $              Accept
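A sketch of this shift-reduce strategy for the grammar E → E-E | E*E | id. Handle detection is simplified to pattern checks on the top of the stack, which is enough for this example but is an assumption, not the general algorithm:

PREC = {'-': 1, '*': 2}          # '*' binds tighter than '-'

def top_operator(stack):
    # the operator nearest the top of the stack, if any
    for sym in reversed(stack):
        if sym in PREC:
            return sym
    return None

def shift_reduce(tokens):
    stack, buf = ['$'], tokens + ['$']
    while True:
        print(f"{''.join(stack):<10} {''.join(buf)}")
        if stack[-1] == 'id':                    # handle on top: E -> id
            stack[-1] = 'E'
            continue
        if stack[-3:] in (['E', '-', 'E'], ['E', '*', 'E']):
            op, incoming = top_operator(stack), buf[0]
            # reduce when the in-stack operator has the same or higher
            # priority than the incoming one ('$' counts as lowest)
            if PREC[op] >= PREC.get(incoming, 0):
                stack[-3:] = ['E']               # handle: E -> E op E
                continue
        if buf[0] == '$':
            if stack == ['$', 'E']:
                print("accept")
                return
            raise SyntaxError("cannot shift or reduce")
        stack.append(buf.pop(0))                 # shift

shift_reduce(['id', '-', 'id', '*', 'id'])

Running this reproduces the trace above: the parser shifts past '-' because '*' has higher priority, reduces E*E first, and only then reduces E-E.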
134 OPERATOR PRECEDENCE PARSER • A grammar G is said to be operator precedence if it possesses the following properties: – no production rule has ε on the right side – no production rule has two adjacent non-terminals on the right-hand side • Parsing method – construct the operator precedence relations (table) – identify the handles – implement using a stack
135 • Advantage of OPP – simple to implement • Disadvantages of OPP – the operator minus has two different precedences (unary and binary), so tokens like the minus sign are hard to handle – it is applicable only to a small class of grammars • Application – operator precedence parsing is used for languages built around operators.
136 LR Parsers • The most powerful (yet efficient) shift-reduce parsing method is LR(k) parsing: L – left-to-right scanning of the input, R – constructing a rightmost derivation in reverse, k – number of lookahead symbols (when k is omitted, it is 1). • LR parsing is attractive because: – it is the most general non-backtracking shift-reduce parsing method, yet it is still efficient – the class of grammars that can be parsed using LR methods is a proper superset of the class of grammars that can be parsed with predictive parsers: LL(1) grammars ⊂ LR(1) grammars – an LR parser can detect a syntactic error as soon as it is possible to do so on a left-to-right scan of the input.
137 LR Parsers • LR parsers cover a wide range of grammars. – SLR – simple LR parser – LR – most general LR parser – LALR – intermediate LR parser (lookahead LR parser) – SLR, LR and LALR work the same way (they use the same algorithm); only their parsing tables are different.
138 LR Parsing Algorithm [Figure: the stack holds S0 X1 S1 ... Xm-1 Sm-1 Xm Sm (grammar symbols interleaved with states, Sm on top); the input buffer holds a1 ... ai ... an $; the parser produces the output. The parsing table has two parts: the Action table, whose rows are states and whose columns are terminals and $, each entry holding one of four actions, and the Goto table, whose rows are states and whose columns are non-terminals, each entry being a state.]
139 Parsing method • Initialize the stack with the initial state and invoke the scanner to get the next token. • Determine Sj, the state currently on the top of the stack, and ai, the current input symbol. • Consult the parsing table for action[Sj, ai], which can have one of four values: – si means shift to state i – rj means reduce by rule j – accept means parsing finished successfully – error indicates a syntactical error
140 Simple LR parsing (SLR) definitions • LR(0) items – an LR(0) item of grammar G is a production rule in which the symbol '.' is inserted at some position in the RHS of the rule. • Example: S → .ABC, S → A.BC, S → AB.C, S → ABC. • Augmented grammar – if a grammar G has start symbol S, the augmented grammar is a new grammar G' with a new start symbol S' and the added rule S' → S. – The purpose of this grammar is to indicate acceptance of the input: when the parser is about to reduce by S' → S, it has reached the acceptance state.
141 • Kernel items – the item S' → .S together with all items whose dots are not at the leftmost end of the RHS of the rule • Non-kernel items – all items whose dots are at the left end of the RHS of the rule • Functions – Closure – Goto – these are the two important functions required to create the collection of canonical sets of items • Viable prefix – a prefix of a right sentential form that can appear on the stack during shift/reduce parsing
142 Closure operation • For a CFG G, if I is a set of items, the function closure(I) is constructed using the following rules: – every item in I is initially added to closure(I) – if A → α.Bβ is in closure(I) and B → γ is a production, then add the item B → .γ, so that closure(I) contains: • A → α.Bβ • B → .γ
143 • This rule is applied until no more new items can be added to closure(I). • The meaning of an item A → α.Bβ is that, at some point during derivation of the input string, we may require strings derivable from B as input. • A non-terminal immediately to the right of the dot indicates that it has to be expanded shortly.
144 Goto operation • If A → α.Bβ is an item in I, then goto(I, B) contains the item A → αB.β. • This simply means shifting the dot one position ahead over the grammar symbol (T or NT). • A sketch of both functions over LR(0) items follows below.
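A sketch of closure and goto over LR(0) items in Python. An item is encoded as a tuple (lhs, rhs, dot), where dot is the position of the dot in the RHS; this encoding is an assumption for illustration:

GRAMMAR = [                          # the SLR example grammar below
    ("E'", ('E',)),
    ('E', ('E', '+', 'T')), ('E', ('T',)),
    ('T', ('T', '*', 'F')), ('T', ('F',)),
    ('F', ('(', 'E', ')')), ('F', ('id',)),
]
NT = {lhs for lhs, _ in GRAMMAR}

def closure(items):
    items = set(items)
    changed = True
    while changed:                   # repeat until no new item can be added
        changed = False
        for lhs, rhs, dot in list(items):
            if dot < len(rhs) and rhs[dot] in NT:   # dot before nonterminal B
                for B, gamma in GRAMMAR:
                    if B == rhs[dot] and (B, gamma, 0) not in items:
                        items.add((B, gamma, 0)); changed = True
    return frozenset(items)

def goto(items, X):
    # shift the dot one position ahead over the grammar symbol X
    moved = {(lhs, rhs, dot + 1)
             for lhs, rhs, dot in items
             if dot < len(rhs) and rhs[dot] == X}
    return closure(moved)

I0 = closure({("E'", ('E',), 0)})
print(len(I0))             # 7 items, matching I0 on the slide after next
I1 = goto(I0, 'E')
print(sorted(I1))          # E' -> E.  and  E -> E.+T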
145 • Construct the SLR(1) parsing table for 1 E → E+T 2 E → T 3 T → T*F 4 T → F 5 F → (E) 6 F → id
146
I0: E'→.E, E→.E+T, E→.T, T→.T*F, T→.F, F→.(E), F→.id
Goto(I0,E) = I1: E'→E., E→E.+T
Goto(I0,T) = I2: E→T., T→T.*F
Goto(I0,F) = I3: T→F.
Goto(I0,() = I4: F→(.E), E→.E+T, E→.T, T→.T*F, T→.F, F→.(E), F→.id
Goto(I0,id) = I5: F→id.
Goto(I1,+) = I6: E→E+.T, T→.T*F, T→.F, F→.(E), F→.id
Goto(I2,*) = I7: T→T*.F, F→.(E), F→.id
Goto(I4,E) = I8: F→(E.), E→E.+T
Goto(I6,T) = I9: E→E+T., T→T.*F
Goto(I7,F) = I10: T→T*F.
Goto(I8,)) = I11: F→(E).
147 • FOLLOW(E') = {$} • FOLLOW(E) = {+, ), $} • FOLLOW(T) = {+, *, ), $} • FOLLOW(F) = {+, *, ), $}
148 SLR parsing table (Action part: id + * ( ) $; Goto part: E T F)

state   id    +     *     (     )     $    |  E    T    F
0       s5                s4               |  1    2    3
1             s6                      acc  |
2             r2    s7          r2    r2   |
3             r4    r4          r4    r4   |
4       s5                s4               |  8    2    3
5             r6    r6          r6    r6   |
6       s5                s4               |       9    3
7       s5                s4               |            10
8             s6                s11        |
9             r1    s7          r1    r1   |
10            r3    r3          r3    r3   |
11            r5    r5          r5    r5   |
149
STACK            INPUT BUFFER   ACTION TABLE   GOTO TABLE   PARSING ACTION
$0               id*id+id$      [0,id]=s5                   Shift
$0 id5           *id+id$        [5,*]=r6       [0,F]=3      Reduce F→id
$0 F3            *id+id$        [3,*]=r4       [0,T]=2      Reduce T→F
$0 T2            *id+id$        [2,*]=s7                    Shift
$0 T2 *7         id+id$         [7,id]=s5                   Shift
$0 T2 *7 id5     +id$           [5,+]=r6       [7,F]=10     Reduce F→id
$0 T2 *7 F10     +id$           [10,+]=r3      [0,T]=2      Reduce T→T*F
$0 T2            +id$           [2,+]=r2       [0,E]=1      Reduce E→T
$0 E1            +id$           [1,+]=s6                    Shift
$0 E1 +6         id$            [6,id]=s5                   Shift
$0 E1 +6 id5     $              [5,$]=r6       [6,F]=3      Reduce F→id
$0 E1 +6 F3      $              [3,$]=r4       [6,T]=9      Reduce T→F
$0 E1 +6 T9      $              [9,$]=r1       [0,E]=1      Reduce E→E+T
$0 E1            $              [1,$]=acc                   Accept
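A sketch of the table-driven LR driver running the SLR(1) table above on id*id+id. The dict encoding of the Action and Goto tables is an assumption for illustration; only states are kept on the stack, since the grammar symbols are implied by the states:

RULES = [None, ('E', 3), ('E', 1), ('T', 3), ('T', 1), ('F', 3), ('F', 1)]
# rule j -> (LHS, length of RHS): 1 E->E+T  2 E->T  3 T->T*F  4 T->F
#                                 5 F->(E)  6 F->id
ACTION = {
    (0,'id'):'s5', (0,'('):'s4', (1,'+'):'s6', (1,'$'):'acc',
    (2,'+'):'r2', (2,'*'):'s7', (2,')'):'r2', (2,'$'):'r2',
    (3,'+'):'r4', (3,'*'):'r4', (3,')'):'r4', (3,'$'):'r4',
    (4,'id'):'s5', (4,'('):'s4',
    (5,'+'):'r6', (5,'*'):'r6', (5,')'):'r6', (5,'$'):'r6',
    (6,'id'):'s5', (6,'('):'s4', (7,'id'):'s5', (7,'('):'s4',
    (8,'+'):'s6', (8,')'):'s11',
    (9,'+'):'r1', (9,'*'):'s7', (9,')'):'r1', (9,'$'):'r1',
    (10,'+'):'r3', (10,'*'):'r3', (10,')'):'r3', (10,'$'):'r3',
    (11,'+'):'r5', (11,'*'):'r5', (11,')'):'r5', (11,'$'):'r5',
}
GOTO = {(0,'E'):1, (0,'T'):2, (0,'F'):3, (4,'E'):8, (4,'T'):2, (4,'F'):3,
        (6,'T'):9, (6,'F'):3, (7,'F'):10}

def lr_parse(tokens):
    stack = [0]                          # the stack holds states
    tokens = tokens + ['$']
    i = 0
    while True:
        act = ACTION.get((stack[-1], tokens[i]))
        if act is None:
            raise SyntaxError(f"error at token {tokens[i]!r}")
        if act == 'acc':
            print("accept"); return
        if act[0] == 's':                # shift: push the new state
            stack.append(int(act[1:])); i += 1
        else:                            # reduce by rule j
            lhs, rhs_len = RULES[int(act[1:])]
            del stack[-rhs_len:]         # pop |RHS| states
            stack.append(GOTO[stack[-1], lhs])
            print(f"reduce by rule {act[1:]} ({lhs})")

lr_parse(['id', '*', 'id', '+', 'id'])

The sequence of reduces it prints (6, 4, 6, 3, 2, 6, 4, 1) matches the trace above.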
150 CLR PARSING or LR(1) PARSING • Construct the canonical collection of sets of LR(1) items, where each item carries a lookahead: [A → α.Xβ, a]. • For grammar G, initially add [S' → .S, $] to the collection C and take its closure. • The closure function is computed as: for each item [A → α.Bβ, a] in I and each production B → γ, add [B → .γ, b] for every terminal b in FIRST(βa); repeat until no more items can be added. • goto(Ii, X) is the closure of the set of items [A → αX.β, a] such that [A → α.Xβ, a] is in Ii. • For each set of items Ii in C and each grammar symbol X (T or NT), add goto(Ii, X) if it is not empty and not already in C. The sets of items are constructed until no more can be added to C.
151 CONSTRUCTION OF CLR PARSING TABLE • Construct the set of items C = {I0, I1, I2, ..., In}, where C is the collection of sets of LR(1) items for the input grammar G'. • The parsing actions are based on each item set Ii: – If [A → α.aβ, b] is in Ii (a a terminal) and goto(Ii, a) = Ij, then create the entry action[i, a] = shift j. – If [A → α., a] is in Ii, then create the entry action[i, a] = reduce by A → α; the reduce entry is made only for the lookahead a. Here A should not be S'. – If [S' → S., $] is in Ii, then action[i, $] = accept. • The goto part of the LR table is filled as follows: the goto transitions for state i are considered for NT only; if goto(Ii, A) = Ij, then goto[i, A] = j. • All other entries are defined as ERROR.
152 EXAMPLE • Construct the CLR parsing table for the grammar E → E+T | T; T → T*F | F; F → (E) | id. • FOLLOW(E) = {+, ), $} FIRST(E) = {(, id} • FOLLOW(T) = {+, *, ), $} FIRST(T) = {(, id} • FOLLOW(F) = {+, *, ), $} FIRST(F) = {(, id}
153 • Augmented grammar: E' → E; E → E+T; E → T; T → T*F; T → F; F → (E); F → id
• LR(0) items (I0, without lookaheads): E' → .E, E → .E+T, E → .T, T → .T*F, T → .F, F → .(E), F → .id
• LR(1) items (I0, with lookaheads):
E' → .E, $
E → .E+T, $/+
E → .T, $/+
T → .T*F, $/+/*
T → .F, $/+/*
F → .(E), $/+/*
F → .id, $/+/*
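A sketch of LR(1) closure for this grammar: an item [A → α.Bβ, a] contributes [B → .γ, b] for every production B → γ and every b in FIRST(βa). To keep the sketch short, the FIRST sets are hard-coded from the slide above, and first_of exploits the fact that no nonterminal here derives ε:

GRAMMAR = [("E'", ('E',)), ('E', ('E','+','T')), ('E', ('T',)),
           ('T', ('T','*','F')), ('T', ('F',)),
           ('F', ('(','E',')')), ('F', ('id',))]
NT = {lhs for lhs, _ in GRAMMAR}
FIRST = {'E': {'(', 'id'}, 'T': {'(', 'id'}, 'F': {'(', 'id'}}

def first_of(seq, lookahead):
    # FIRST(beta a); since no nonterminal of this grammar derives epsilon,
    # only an empty beta falls through to the lookahead
    for X in seq:
        return FIRST[X] if X in NT else {X}
    return {lookahead}

def closure(items):
    items = set(items)
    changed = True
    while changed:
        changed = False
        for lhs, rhs, dot, a in list(items):
            if dot < len(rhs) and rhs[dot] in NT:
                for b in first_of(rhs[dot + 1:], a):
                    for B, gamma in GRAMMAR:
                        if B == rhs[dot] and (B, gamma, 0, b) not in items:
                            items.add((B, gamma, 0, b)); changed = True
    return items

I0 = closure({("E'", ('E',), 0, '$')})
for lhs, rhs, dot, a in sorted(I0, key=str):
    print(lhs, '->', ' '.join(rhs[:dot]) + '.' + ' '.join(rhs[dot:]), ',', a)
# prints the 17 items that the slide groups as $/+ and $/+/* lookaheads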
154
Goto(I0, E) I1: E' → E., $ ; E → E.+T, $/+
Goto(I0, T) I2: E → T., $/+ ; T → T.*F, $/+/*
Goto(I0, F) I3: T → F., $/+/*
Goto(I0, () I4: F → (.E), $/+/* ; E → .E+T, )/+ ; E → .T, )/+ ; T → .T*F, )/+/* ; T → .F, )/+/* ; F → .(E), )/+/* ; F → .id, )/+/*
Goto(I0, id) I5: F → id., $/+/*
Goto(I1, +) I6: E → E+.T, $/+ ; T → .T*F, $/+/* ; T → .F, $/+/* ; F → .(E), $/+/* ; F → .id, $/+/*
155 CLR parsing table (Action part: + * ( ) id $; Goto part: E T F)

state    +      *      (      )      id     $    |  E    T    F
0                      s4            s5          |  1    2    3
1        s6                                 acc  |
2        r2     s7                          r2   |
3        r4     r4                          r4   |
4                      s11           s12         |  8    9    10
5        r6     r6                          r6   |
6                      s4            s5          |       13   3
7                      s4            s5          |            14
8        s16                  s15                |
9        r2     s17           r2                 |
10       r4     r4            r4                 |
156
state    +      *      (      )      id     $    |  E    T    F
11                     s11           s12         |  18   9    10
12       r6     r6            r6                 |
13       r1     s7                          r1   |
14       r3     r3                          r3   |
15       r5     r5                          r5   |
16                     s11           s12         |       19   10
17                     s11           s12         |            20
18       s16                  s21                |
19       r1     s17           r1                 |
20       r3     r3            r3                 |
21       r5     r5            r5                 |
157
STACK                INPUT BUFFER   ACTION
$0                   id+id*id$      Shift s5
$0 id5               +id*id$        r6 (F→id)
$0 F3                +id*id$        r4 (T→F)
$0 T2                +id*id$        r2 (E→T)
$0 E1                +id*id$        Shift s6
$0 E1 +6             id*id$         Shift s5
$0 E1 +6 id5         *id$           r6 (F→id)
$0 E1 +6 F3          *id$           r4 (T→F)
$0 E1 +6 T13         *id$           Shift s7
$0 E1 +6 T13 *7      id$            Shift s5
$0 E1 +6 T13 *7 id5  $              r6 (F→id)
$0 E1 +6 T13 *7 F14  $              r3 (T→T*F)
$0 E1 +6 T13         $              r1 (E→E+T)
$0 E1                $              Accept
158 LALR PARSING • Construction of the LALR parsing table • Construct the LR(1) items. • Merge two states Ii and Ij if their first components (the LR(0) cores) match, and create a new state replacing the older ones: Iij = Ii ∪ Ij (a merge sketch follows below). • The parsing actions are based on each item set Ii: – If [A → α.aβ, b] is in Ii and goto(Ii, a) = Ij, then create the entry action[i, a] = shift j. – If [A → α., a] is in Ii, then create the entry action[i, a] = reduce by A → α. Here A should not be S'. – If [S' → S., $] is in Ii, then action[i, $] = accept. • The goto part: the goto transitions for state i are considered for NT only; if goto(Ii, A) = Ij, then goto[i, A] = j. • If the parsing actions conflict, the grammar is not LALR(1). All other entries are ERROR.
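A sketch of the merge step only. States are sets of (lhs, rhs, dot, lookahead) tuples as in the LR(1) closure sketch earlier; the two example states below carry just their E → T. core items, which is a simplification to keep the illustration short:

def merge_states(lr1_states):
    merged = {}                           # LR(0) core -> union of item sets
    for state in lr1_states:
        core = frozenset((l, r, d) for l, r, d, _ in state)
        merged.setdefault(core, set()).update(state)
    return list(merged.values())

# I2 and I9 share the core E -> T. (and T -> T.*F) but differ in
# lookaheads, so they merge into the single LALR state I2,9 below
states = [
    {('E', ('T',), 1, '$'), ('E', ('T',), 1, '+')},   # from I2
    {('E', ('T',), 1, ')'), ('E', ('T',), 1, '+')},   # from I9
]
print(merge_states(states))
# one state whose lookaheads are {'$', '+', ')'}  ->  I2,9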
159 LALR STATES FROM CLR
I2,9: E → T., $/+/) ; T → T.*F, $/+/)/*
I3,10: T → F., $/+/)/*
I4,11: F → (.E), $/+/)/* ; E → .E+T, )/+ ; E → .T, )/+ ; T → .T*F, )/+/* ; T → .F, )/+/* ; F → .(E), )/+/* ; F → .id, )/+/*
I5,12: F → id., $/+/)/*
I6,16: E → E+.T, $/)/+ ; T → .T*F, $/)/+/* ; T → .F, $/+/)/* ; F → .(E), $/)/+/* ; F → .id, $/)/+/*
I7,17: T → T*.F, $/+/)/* ; F → .(E), $/+/)/* ; F → .id, $/+/)/*
I8,18: F → (E.), $/+/)/* ; E → E.+T, )/+
I13,19: E → E+T., $/)/+ ; T → T.*F, $/)/+/*
I14,20: T → T*F., $/+/)/*
I15,21: F → (E)., $/+/)/*
160 LALR parsing table (Action part: + * ( ) id $; Goto part: E T F)

state     +       *       (       )        id      $    |  E     T      F
0                         s4,11            s5,12        |  1     2,9    3,10
1         s6,16                                    acc  |
2,9       r2      s7,17           r2               r2   |
3,10      r4      r4              r4               r4   |
4,11                      s4,11            s5,12        |  8,18  2,9    3,10
5,12      r6      r6              r6               r6   |
6,16                      s4,11            s5,12        |        13,19  3,10
7,17                      s4,11            s5,12        |               14,20
8,18      s6,16                   s15,21                |
13,19     r1      s7,17           r1               r1   |
14,20     r3      r3              r3               r3   |
15,21     r5      r5              r5               r5   |
