Amazon no longer offers textbook rentals. We do!

We're the #1 textbook rental company. Let us show you why.

Compilers: Principles, Techniques, and Tools

by Alfred V. Aho; Monica S. Lam; Ravi Sethi; Jeffrey D. Ullman
  • ISBN13: 9780321486813
  • ISBN10: 0321486811
  • Edition: 2nd
  • Format: Hardcover
  • Copyright: 2006-08-31
  • Publisher: Pearson

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free Shipping On Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, POs, Marketplace items, eBooks, and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks!
  • We Buy This Book Back!
    In-Store Credit: $32.81
    Check/Direct Deposit: $31.25
    PayPal: $31.25
  • Complimentary 7-Day eTextbook Access
    When you rent or buy this book, you will receive complimentary 7-day online access to the eTextbook version from your PC, Mac, tablet, or smartphone. Feature not included on Marketplace Items.
List Price: $206.65
Save up to $150.41
  • Buy Used
    $154.99
    Free Shipping

    USUALLY SHIPS IN 2-4 BUSINESS DAYS

    7-Day eTextbook Access


Summary

"This new edition of the classic "Dragon" book has been completely revised to include the most recent developments to compiling. The book provides a thorough introduction to compiler design and continues to emphasize the applicability of compiler technology to a broad range of problems in software design and development. The first hall of the book is designed for use in an undergraduate compilers course while the second half can be used in a graduate course stressing code optimization."--BOOK JACKET.

Author Biography

Alfred V. Aho is Lawrence Gussman Professor of Computer Science at Columbia University. Professor Aho has won several awards including the Great Teacher Award for 2003 from the Society of Columbia Graduates and the IEEE John von Neumann Medal.  He is a member of the National Academy of Engineering and a fellow of the ACM and IEEE.


Monica S. Lam is a Professor of Computer Science at Stanford University. She was the Chief Scientist at Tensilica and the founding CEO of moka5. She led the SUIF project, which produced one of the most popular research compilers, and pioneered numerous compiler techniques used in industry.


Ravi Sethi launched the research organization at Avaya and is president of Avaya Labs. Previously, he was a senior vice president at Bell Labs in Murray Hill and chief technical officer for communications software at Lucent Technologies. He has held teaching positions at the Pennsylvania State University and the University of Arizona, and has taught at Princeton University and Rutgers. He is a fellow of the ACM.


Jeffrey Ullman is CEO of Gradiance and the Stanford W. Ascherman Professor of Computer Science at Stanford University. His research interests include database theory, database integration, data mining, and education using the information infrastructure. He is a member of the National Academy of Engineering, a fellow of the ACM, and a winner of the Karlstrom Award and the Knuth Prize.


Table of Contents

1 Introduction
1(38)
1.1 Language Processors
1(3)
1.1.1 Exercises for Section 1.1
3(1)
1.2 The Structure of a Compiler
4(8)
1.2.1 Lexical Analysis
5(3)
1.2.2 Syntax Analysis
8(1)
1.2.3 Semantic Analysis
8(1)
1.2.4 Intermediate Code Generation
9(1)
1.2.5 Code Optimization
10(1)
1.2.6 Code Generation
10(1)
1.2.7 Symbol-Table Management
11(1)
1.2.8 The Grouping of Phases into Passes
11(1)
1.2.9 Compiler-Construction Tools
12(1)
1.3 Evolution of Programming Languages
12(3)
1.3.1 The Move to Higher-level Languages
13(1)
1.3.2 Impacts on Compilers
14(1)
1.3.3 Exercises for Section 1.3
14(1)
1.4 The Science of Building a Compiler
15(2)
1.4.1 Modeling in Compiler Design and Implementation
15(1)
1.4.2 The Science of Code Optimization
15(2)
1.5 Applications of Compiler Technology
17(8)
1.5.1 Implementation of High-Level Programming Languages
17(2)
1.5.2 Optimizations for Computer Architectures
19(2)
1.5.3 Design of New Computer Architectures
21(1)
1.5.4 Program Translations
22(1)
1.5.5 Software Productivity Tools
23(2)
1.6 Language Basics
25(11)
1.6.1 The Static/Dynamic Distinction
25(1)
1.6.2 Environments and States
26(2)
1.6.3 Static Scope and Block Structure
28(3)
1.6.4 Explicit Access Control
31(1)
1.6.5 Dynamic Scope
31(2)
1.6.6 Parameter Passing Mechanisms
33(2)
1.6.7 Aliasing
35(1)
1.6.8 Exercises for Section 1.6
35(1)
1.7 Summary of Chapter 1
36(2)
1.8 References for Chapter 1
38(1)
2 A Simple Syntax-Directed Translator
39(70)
2.1 Introduction
40(2)
2.2 Syntax Definition
42(10)
2.2.1 Definition of Grammars
42(2)
2.2.2 Derivations
44(1)
2.2.3 Parse Trees
45(2)
2.2.4 Ambiguity
47(1)
2.2.5 Associativity of Operators
48(1)
2.2.6 Precedence of Operators
48(3)
2.2.7 Exercises for Section 2.2
51(1)
2.3 Syntax-Directed Translation
52(8)
2.3.1 Postfix Notation
53(1)
2.3.2 Synthesized Attributes
54(2)
2.3.3 Simple Syntax-Directed Definitions
56(1)
2.3.4 Tree Traversals
56(1)
2.3.5 Translation Schemes
57(3)
2.3.6 Exercises for Section 2.3
60(1)
2.4 Parsing
60(8)
2.4.1 Top-Down Parsing
61(3)
2.4.2 Predictive Parsing
64(1)
2.4.3 When to Use ε-Productions
65(1)
2.4.4 Designing a Predictive Parser
66(1)
2.4.5 Left Recursion
67(1)
2.4.6 Exercises for Section 2.4
68(1)
2.5 A Translator for Simple Expressions
68(8)
2.5.1 Abstract and Concrete Syntax
69(1)
2.5.2 Adapting the Translation Scheme
70(2)
2.5.3 Procedures for the Nonterminals
72(1)
2.5.4 Simplifying the Translator
73(1)
2.5.5 The Complete Program
74(2)
2.6 Lexical Analysis
76(9)
2.6.1 Removal of White Space and Comments
77(1)
2.6.2 Reading Ahead
78(1)
2.6.3 Constants
78(1)
2.6.4 Recognizing Keywords and Identifiers
79(2)
2.6.5 A Lexical Analyzer
81(3)
2.6.6 Exercises for Section 2.6
84(1)
2.7 Symbol Tables
85(6)
2.7.1 Symbol Table Per Scope
86(3)
2.7.2 The Use of Symbol Tables
89(2)
2.8 Intermediate Code Generation
91(14)
2.8.1 Two Kinds of Intermediate Representations
91(1)
2.8.2 Construction of Syntax Trees
92(5)
2.8.3 Static Checking
97(2)
2.8.4 Three-Address Code
99(6)
2.8.5 Exercises for Section 2.8
105(1)
2.9 Summary of Chapter 2
105(4)
3 Lexical Analysis
109(82)
3.1 The Role of the Lexical Analyzer
109(6)
3.1.1 Lexical Analysis Versus Parsing
110(1)
3.1.2 Tokens, Patterns, and Lexemes
111(1)
3.1.3 Attributes for Tokens
112(1)
3.1.4 Lexical Errors
113(1)
3.1.5 Exercises for Section 3.1
114(1)
3.2 Input Buffering
115(1)
3.2.1 Buffer Pairs
115(1)
3.2.2 Sentinels
116(1)
3.3 Specification of Tokens
116(12)
3.3.1 Strings and Languages
117(2)
3.3.2 Operations on Languages
119(1)
3.3.3 Regular Expressions
120(3)
3.3.4 Regular Definitions
123(1)
3.3.5 Extensions of Regular Expressions
124(1)
3.3.6 Exercises for Section 3.3
125(3)
3.4 Recognition of Tokens
128(12)
3.4.1 Transition Diagrams
130(2)
3.4.2 Recognition of Reserved Words and Identifiers
132(1)
3.4.3 Completion of the Running Example
133(1)
3.4.4 Architecture of a Transition-Diagram-Based Lexical Analyzer
134(2)
3.4.5 Exercises for Section 3.4
136(4)
3.5 The Lexical-Analyzer Generator Lex
140(7)
3.5.1 Use of Lex
140(1)
3.5.2 Structure of Lex Programs
141(3)
3.5.3 Conflict Resolution in Lex
144(1)
3.5.4 The Lookahead Operator
144(2)
3.5.5 Exercises for Section 3.5
146(1)
3.6 Finite Automata
147(5)
3.6.1 Nondeterministic Finite Automata
147(1)
3.6.2 Transition Tables
148(1)
3.6.3 Acceptance of Input Strings by Automata
149(1)
3.6.4 Deterministic Finite Automata
149(2)
3.6.5 Exercises for Section 3.6
151(1)
3.7 From Regular Expressions to Automata
152(14)
3.7.1 Conversion of an NFA to a DFA
152(4)
3.7.2 Simulation of an NFA
156(1)
3.7.3 Efficiency of NFA Simulation
157(2)
3.7.4 Construction of an NFA from a Regular Expression
159(4)
3.7.5 Efficiency of String-Processing Algorithms
163(3)
3.7.6 Exercises for Section 3.7
166(1)
3.8 Design of a Lexical-Analyzer Generator
166(7)
3.8.1 The Structure of the Generated Analyzer
167(1)
3.8.2 Pattern Matching Based on NFA's
168(2)
3.8.3 DFA's for Lexical Analyzers
170(1)
3.8.4 Implementing the Lookahead Operator
171(1)
3.8.5 Exercises for Section 3.8
172(1)
3.9 Optimization of DFA-Based Pattern Matchers
173(14)
3.9.1 Important States of an NFA
173(2)
3.9.2 Functions Computed From the Syntax Tree
175(1)
3.9.3 Computing nullable, firstpos, and lastpos
176(1)
3.9.4 Computing followpos
177(2)
3.9.5 Converting a Regular Expression Directly to a DFA
179(1)
3.9.6 Minimizing the Number of States of a DFA
180(4)
3.9.7 State Minimization in Lexical Analyzers
184(1)
3.9.8 Trading Time for Space in DFA Simulation
185(1)
3.9.9 Exercises for Section 3.9
186(1)
3.10 Summary of Chapter 3
187(2)
3.11 References for Chapter 3
189(2)
4 Syntax Analysis
191(112)
4.1 Introduction
192(5)
4.1.1 The Role of the Parser
192(1)
4.1.2 Representative Grammars
193(1)
4.1.3 Syntax Error Handling
194(1)
4.1.4 Error-Recovery Strategies
195(2)
4.2 Context-Free Grammars
197(12)
4.2.1 The Formal Definition of a Context-Free Grammar
197(1)
4.2.2 Notational Conventions
198(1)
4.2.3 Derivations
199(2)
4.2.4 Parse Trees and Derivations
201(2)
4.2.5 Ambiguity
203(1)
4.2.6 Verifying the Language Generated by a Grammar
204(1)
4.2.7 Context-Free Grammars Versus Regular Expressions
205(1)
4.2.8 Exercises for Section 4.2
206(3)
4.3 Writing a Grammar
209(8)
4.3.1 Lexical Versus Syntactic Analysis
209(1)
4.3.2 Eliminating Ambiguity
210(2)
4.3.3 Elimination of Left Recursion
212(2)
4.3.4 Left Factoring
214(1)
4.3.5 Non-Context-Free Language Constructs
215(1)
4.3.6 Exercises for Section 4.3
216(1)
4.4 Top-Down Parsing
217(16)
4.4.1 Recursive-Descent Parsing
219(1)
4.4.2 FIRST and FOLLOW
220(2)
4.4.3 LL(1) Grammars
222(4)
4.4.4 Nonrecursive Predictive Parsing
226(2)
4.4.5 Error Recovery in Predictive Parsing
228(3)
4.4.6 Exercises for Section 4.4
231(2)
4.5 Bottom-Up Parsing
233(8)
4.5.1 Reductions
234(1)
4.5.2 Handle Pruning
235(1)
4.5.3 Shift-Reduce Parsing
236(2)
4.5.4 Conflicts During Shift-Reduce Parsing
238(2)
4.5.5 Exercises for Section 4.5
240(1)
4.6 Introduction to LR Parsing: Simple LR
241(18)
4.6.1 Why LR Parsers?
241(1)
4.6.2 Items and the LR(0) Automaton
242(6)
4.6.3 The LR-Parsing Algorithm
248(4)
4.6.4 Constructing SLR-Parsing Tables
252(4)
4.6.5 Viable Prefixes
256(1)
4.6.6 Exercises for Section 4.6
257(2)
4.7 More Powerful LR Parsers
259(19)
4.7.1 Canonical LR(1) Items
260(1)
4.7.2 Constructing LR(1) Sets of Items
261(4)
4.7.3 Canonical LR(1) Parsing Tables
265(1)
4.7.4 Constructing LALR Parsing Tables
266(4)
4.7.5 Efficient Construction of LALR Parsing Tables
270(5)
4.7.6 Compaction of LR Parsing Tables
275(2)
4.7.7 Exercises for Section 4.7
277(1)
4.8 Using Ambiguous Grammars
278(9)
4.8.1 Precedence and Associativity to Resolve Conflicts
279(2)
4.8.2 The "Dangling-Else" Ambiguity
281(2)
4.8.3 Error Recovery in LR Parsing
283(2)
4.8.4 Exercises for Section 4.8
285(2)
4.9 Parser Generators
287(10)
4.9.1 The Parser Generator Yacc
287(4)
4.9.2 Using Yacc with Ambiguous Grammars
291(3)
4.9.3 Creating Yacc Lexical Analyzers with Lex
294(1)
4.9.4 Error Recovery in Yacc
295(2)
4.9.5 Exercises for Section 4.9
297(1)
4.10 Summary of Chapter 4
297(3)
4.11 References for Chapter 4
300(3)
5 Syntax-Directed Translation
303(54)
5.1 Syntax-Directed Definitions
304(6)
5.1.1 Inherited and Synthesized Attributes
304(2)
5.1.2 Evaluating an SDD at the Nodes of a Parse Tree
306(3)
5.1.3 Exercises for Section 5.1
309(1)
5.2 Evaluation Orders for SDD's
310(8)
5.2.1 Dependency Graphs
310(2)
5.2.2 Ordering the Evaluation of Attributes
312(1)
5.2.3 S-Attributed Definitions
312(1)
5.2.4 L-Attributed Definitions
313(1)
5.2.5 Semantic Rules with Controlled Side Effects
314(3)
5.2.6 Exercises for Section 5.2
317(1)
5.3 Applications of Syntax-Directed Translation
318(6)
5.3.1 Construction of Syntax Trees
318(3)
5.3.2 The Structure of a Type
321(2)
5.3.3 Exercises for Section 5.3
323(1)
5.4 Syntax-Directed Translation Schemes
324(13)
5.4.1 Postfix Translation Schemes
324(1)
5.4.2 Parser-Stack Implementation of Postfix SDT's
325(2)
5.4.3 SDT's With Actions Inside Productions
327(1)
5.4.4 Eliminating Left Recursion From SDT's
328(3)
5.4.5 SDT's for L-Attributed Definitions
331(5)
5.4.6 Exercises for Section 5.4
336(1)
5.5 Implementing L-Attributed SDD's
337(16)
5.5.1 Translation During Recursive-Descent Parsing
338(2)
5.5.2 On-The-Fly Code Generation
340(3)
5.5.3 L-Attributed SDD's and LL Parsing
343(5)
5.5.4 Bottom-Up Parsing of L-Attributed SDD's
348(4)
5.5.5 Exercises for Section 5.5
352(1)
5.6 Summary of Chapter 5
353(1)
5.7 References for Chapter 5
354(3)
6 Intermediate-Code Generation
357(70)
6.1 Variants of Syntax Trees
358(5)
6.1.1 Directed Acyclic Graphs for Expressions
359(1)
6.1.2 The Value-Number Method for Constructing DAG's
360(2)
6.1.3 Exercises for Section 6.1
362(1)
6.2 Three-Address Code
363(7)
6.2.1 Addresses and Instructions
364(2)
6.2.2 Quadruples
366(1)
6.2.3 Triples
367(2)
6.2.4 Static Single-Assignment Form
369(1)
6.2.5 Exercises for Section 6.2
370(1)
6.3 Types and Declarations
370(8)
6.3.1 Type Expressions
371(1)
6.3.2 Type Equivalence
372(1)
6.3.3 Declarations
373(1)
6.3.4 Storage Layout for Local Names
373(3)
6.3.5 Sequences of Declarations
376(1)
6.3.6 Fields in Records and Classes
376(2)
6.3.7 Exercises for Section 6.3
378(1)
6.4 Translation of Expressions
378(8)
6.4.1 Operations Within Expressions
378(2)
6.4.2 Incremental Translation
380(1)
6.4.3 Addressing Array Elements
381(2)
6.4.4 Translation of Array References
383(1)
6.4.5 Exercises for Section 6.4
384(2)
6.5 Type Checking
386(13)
6.5.1 Rules for Type Checking
387(1)
6.5.2 Type Conversions
388(2)
6.5.3 Overloading of Functions and Operators
390(1)
6.5.4 Type Inference and Polymorphic Functions
391(4)
6.5.5 An Algorithm for Unification
395(3)
6.5.6 Exercises for Section 6.5
398(1)
6.6 Control Flow
399(11)
6.6.1 Boolean Expressions
399(1)
6.6.2 Short-Circuit Code
400(1)
6.6.3 Flow-of-Control Statements
401(2)
6.6.4 Control-Flow Translation of Boolean Expressions
403(2)
6.6.5 Avoiding Redundant Gotos
405(3)
6.6.6 Boolean Values and Jumping Code
408(1)
6.6.7 Exercises for Section 6.6
408(2)
6.7 Backpatching
410(8)
6.7.1 One-Pass Code Generation Using Backpatching
410(1)
6.7.2 Backpatching for Boolean Expressions
411(2)
6.7.3 Flow-of-Control Statements
413(3)
6.7.4 Break-, Continue-, and Goto-Statements
416(1)
6.7.5 Exercises for Section 6.7
417(1)
6.8 Switch-Statements
418(4)
6.8.1 Translation of Switch-Statements
419(1)
6.8.2 Syntax-Directed Translation of Switch-Statements
420(1)
6.8.3 Exercises for Section 6.8
421(1)
6.9 Intermediate Code for Procedures
422(2)
6.10 Summary of Chapter 6
424(1)
6.11 References for Chapter 6
425(2)
7 Run-Time Environments
427(78)
7.1 Storage Organization
427(3)
7.1.1 Static Versus Dynamic Storage Allocation
429(1)
7.2 Stack Allocation of Space
430(11)
7.2.1 Activation Trees
430(3)
7.2.2 Activation Records
433(3)
7.2.3 Calling Sequences
436(2)
7.2.4 Variable-Length Data on the Stack
438(2)
7.2.5 Exercises for Section 7.2
440(1)
7.3 Access to Nonlocal Data on the Stack
441(11)
7.3.1 Data Access Without Nested Procedures
442(1)
7.3.2 Issues With Nested Procedures
442(1)
7.3.3 A Language With Nested Procedure Declarations
443(1)
7.3.4 Nesting Depth
443(2)
7.3.5 Access Links
445(2)
7.3.6 Manipulating Access Links
447(1)
7.3.7 Access Links for Procedure Parameters
448(1)
7.3.8 Displays
449(2)
7.3.9 Exercises for Section 7.3
451(1)
7.4 Heap Management
452(11)
7.4.1 The Memory Manager
453(1)
7.4.2 The Memory Hierarchy of a Computer
454(1)
7.4.3 Locality in Programs
455(2)
7.4.4 Reducing Fragmentation
457(3)
7.4.5 Manual Deallocation Requests
460(3)
7.4.6 Exercises for Section 7.4
463(1)
7.5 Introduction to Garbage Collection
463(7)
7.5.1 Design Goals for Garbage Collectors
464(2)
7.5.2 Reachability
466(2)
7.5.3 Reference Counting Garbage Collectors
468(2)
7.5.4 Exercises for Section 7.5
470(1)
7.6 Introduction to Trace-Based Collection
470(13)
7.6.1 A Basic Mark-and-Sweep Collector
471(2)
7.6.2 Basic Abstraction
473(2)
7.6.3 Optimizing Mark-and-Sweep
475(1)
7.6.4 Mark-and-Compact Garbage Collectors
476(2)
7.6.5 Copying Collectors
478(4)
7.6.6 Comparing Costs
482(1)
7.6.7 Exercises for Section 7.6
482(1)
7.7 Short-Pause Garbage Collection
483(11)
7.7.1 Incremental Garbage Collection
483(2)
7.7.2 Incremental Reachability Analysis
485(2)
7.7.3 Partial-Collection Basics
487(1)
7.7.4 Generational Garbage Collection
488(2)
7.7.5 The Train Algorithm
490(3)
7.7.6 Exercises for Section 7.7
493(1)
7.8 Advanced Topics in Garbage Collection
494(6)
7.8.1 Parallel and Concurrent Garbage Collection
495(2)
7.8.2 Partial Object Relocation
497(1)
7.8.3 Conservative Collection for Unsafe Languages
498(1)
7.8.4 Weak References
498(1)
7.8.5 Exercises for Section 7.8
499(1)
7.9 Summary of Chapter 7
500(2)
7.10 References for Chapter 7
502(3)
8 Code Generation
505(78)
8.1 Issues in the Design of a Code Generator
506(6)
8.1.1 Input to the Code Generator
507(1)
8.1.2 The Target Program
507(1)
8.1.3 Instruction Selection
508(2)
8.1.4 Register Allocation
510(1)
8.1.5 Evaluation Order
511(1)
8.2 The Target Language
512(6)
8.2.1 A Simple Target Machine Model
512(3)
8.2.2 Program and Instruction Costs
515(1)
8.2.3 Exercises for Section 8.2
516(2)
8.3 Addresses in the Target Code
518(7)
8.3.1 Static Allocation
518(2)
8.3.2 Stack Allocation
520(2)
8.3.3 Run-Time Addresses for Names
522(2)
8.3.4 Exercises for Section 8.3
524(1)
8.4 Basic Blocks and Flow Graphs
525(8)
8.4.1 Basic Blocks
526(2)
8.4.2 Next-Use Information
528(1)
8.4.3 Flow Graphs
529(1)
8.4.4 Representation of Flow Graphs
530(1)
8.4.5 Loops
531(1)
8.4.6 Exercises for Section 8.4
531(2)
8.5 Optimization of Basic Blocks
533(9)
8.5.1 The DAG Representation of Basic Blocks
533(1)
8.5.2 Finding Local Common Subexpressions
534(1)
8.5.3 Dead Code Elimination
535(1)
8.5.4 The Use of Algebraic Identities
536(1)
8.5.5 Representation of Array References
537(2)
8.5.6 Pointer Assignments and Procedure Calls
539(1)
8.5.7 Reassembling Basic Blocks From DAG's
539(2)
8.5.8 Exercises for Section 8.5
541(1)
8.6 A Simple Code Generator
542(7)
8.6.1 Register and Address Descriptors
543(1)
8.6.2 The Code-Generation Algorithm
544(3)
8.6.3 Design of the Function getReg
547(1)
8.6.4 Exercises for Section 8.6
548(1)
8.7 Peephole Optimization
549(4)
8.7.1 Eliminating Redundant Loads and Stores
550(1)
8.7.2 Eliminating Unreachable Code
550(1)
8.7.3 Flow-of-Control Optimizations
551(1)
8.7.4 Algebraic Simplification and Reduction in Strength
552(1)
8.7.5 Use of Machine Idioms
552(1)
8.7.6 Exercises for Section 8.7
553(1)
8.8 Register Allocation and Assignment
553(5)
8.8.1 Global Register Allocation
553(1)
8.8.2 Usage Counts
554(2)
8.8.3 Register Assignment for Outer Loops
556(1)
8.8.4 Register Allocation by Graph Coloring
556(1)
8.8.5 Exercises for Section 8.8
557(1)
8.9 Instruction Selection by Tree Rewriting
558(9)
8.9.1 Tree-Translation Schemes
558(2)
8.9.2 Code Generation by Tiling an Input Tree
560(3)
8.9.3 Pattern Matching by Parsing
563(2)
8.9.4 Routines for Semantic Checking
565(1)
8.9.5 General Tree Matching
565(2)
8.9.6 Exercises for Section 8.9
567(1)
8.10 Optimal Code Generation for Expressions
567(6)
8.10.1 Ershov Numbers
567(1)
8.10.2 Generating Code From Labeled Expression Trees
568(2)
8.10.3 Evaluating Expressions with an Insufficient Supply of Registers
570(2)
8.10.4 Exercises for Section 8.10
572(1)
8.11 Dynamic Programming Code-Generation
573(5)
8.11.1 Contiguous Evaluation
574(1)
8.11.2 The Dynamic Programming Algorithm
575(2)
8.11.3 Exercises for Section 8.11
577(1)
8.12 Summary of Chapter 8
578(1)
8.13 References for Chapter 8
579(4)
9 Machine-Independent Optimizations
583(124)
9.1 The Principal Sources of Optimization
584(13)
9.1.1 Causes of Redundancy
584(1)
9.1.2 A Running Example: Quicksort
585(1)
9.1.3 Semantics-Preserving Transformations
586(2)
9.1.4 Global Common Subexpressions
588(2)
9.1.5 Copy Propagation
590(1)
9.1.6 Dead-Code Elimination
591(1)
9.1.7 Code Motion
592(1)
9.1.8 Induction Variables and Reduction in Strength
592(4)
9.1.9 Exercises for Section 9.1
596(1)
9.2 Introduction to Data-Flow Analysis
597(21)
9.2.1 The Data-Flow Abstraction
597(2)
9.2.2 The Data-Flow Analysis Schema
599(1)
9.2.3 Data-Flow Schemas on Basic Blocks
600(1)
9.2.4 Reaching Definitions
601(7)
9.2.5 Live-Variable Analysis
608(2)
9.2.6 Available Expressions
610(4)
9.2.7 Summary
614(1)
9.2.8 Exercises for Section 9.2
615(3)
9.3 Foundations of Data-Flow Analysis
618(14)
9.3.1 Semilattices
618(5)
9.3.2 Transfer Functions
623(3)
9.3.3 The Iterative Algorithm for General Frameworks
626(2)
9.3.4 Meaning of a Data-Flow Solution
628(3)
9.3.5 Exercises for Section 9.3
631(1)
9.4 Constant Propagation
632(7)
9.4.1 Data-Flow Values for the Constant-Propagation Framework
633(1)
9.4.2 The Meet for the Constant-Propagation Framework
633(1)
9.4.3 Transfer Functions for the Constant-Propagation Framework
634(1)
9.4.4 Monotonicity of the Constant-Propagation Framework
635(1)
9.4.5 Nondistributivity of the Constant-Propagation Framework
635(2)
9.4.6 Interpretation of the Results
637(1)
9.4.7 Exercises for Section 9.4
637(2)
9.5 Partial-Redundancy Elimination
639(16)
9.5.1 The Sources of Redundancy
639(3)
9.5.2 Can All Redundancy Be Eliminated?
642(2)
9.5.3 The Lazy-Code-Motion Problem
644(1)
9.5.4 Anticipation of Expressions
645(1)
9.5.5 The Lazy-Code-Motion Algorithm
646(9)
9.5.6 Exercises for Section 9.5
655(1)
9.6 Loops in Flow Graphs
655(17)
9.6.1 Dominators
656(4)
9.6.2 Depth-First Ordering
660(1)
9.6.3 Edges in a Depth-First Spanning Tree
661(1)
9.6.4 Back Edges and Reducibility
662(3)
9.6.5 Depth of a Flow Graph
665(1)
9.6.6 Natural Loops
665(2)
9.6.7 Speed of Convergence of Iterative Data-Flow Algorithms
667(2)
9.6.8 Exercises for Section 9.6
669(3)
9.7 Region-Based Analysis
672(14)
9.7.1 Regions
672(1)
9.7.2 Region Hierarchies for Reducible Flow Graphs
673(3)
9.7.3 Overview of a Region-Based Analysis
676(2)
9.7.4 Necessary Assumptions About Transfer Functions
678(2)
9.7.5 An Algorithm for Region-Based Analysis
680(4)
9.7.6 Handling Nonreducible Flow Graphs
684(2)
9.7.7 Exercises for Section 9.7
686(1)
9.8 Symbolic Analysis
686(14)
9.8.1 Affine Expressions of Reference Variables
687(2)
9.8.2 Data-Flow Problem Formulation
689(5)
9.8.3 Region-Based Symbolic Analysis
694(5)
9.8.4 Exercises for Section 9.8
699(1)
9.9 Summary of Chapter 9
700(3)
9.10 References for Chapter 9
703(4)
10 Instruction-Level Parallelism
707(62)
10.1 Processor Architectures
708(2)
10.1.1 Instruction Pipelines and Branch Delays
708(1)
10.1.2 Pipelined Execution
709(1)
10.1.3 Multiple Instruction Issue
710(1)
10.2 Code-Scheduling Constraints
710(11)
10.2.1 Data Dependence
711(1)
10.2.2 Finding Dependences Among Memory Accesses
712(1)
10.2.3 Tradeoff Between Register Usage and Parallelism
713(3)
10.2.4 Phase Ordering Between Register Allocation and Code Scheduling
716(1)
10.2.5 Control Dependence
716(1)
10.2.6 Speculative Execution Support
717(2)
10.2.7 A Basic Machine Model
719(1)
10.2.8 Exercises for Section 10.2
720(1)
10.3 Basic-Block Scheduling
721(6)
10.3.1 Data-Dependence Graphs
722(1)
10.3.2 List Scheduling of Basic Blocks
723(2)
10.3.3 Prioritized Topological Orders
725(1)
10.3.4 Exercises for Section 10.3
726(1)
10.4 Global Code Scheduling
727(11)
10.4.1 Primitive Code Motion
728(2)
10.4.2 Upward Code Motion
730(1)
10.4.3 Downward Code Motion
731(1)
10.4.4 Updating Data Dependences
732(1)
10.4.5 Global Scheduling Algorithms
732(4)
10.4.6 Advanced Code Motion Techniques
736(1)
10.4.7 Interaction with Dynamic Schedulers
737(1)
10.4.8 Exercises for Section 10.4
737(1)
10.5 Software Pipelining
738(27)
10.5.1 Introduction
738(2)
10.5.2 Software Pipelining of Loops
740(3)
10.5.3 Register Allocation and Code Generation
743(1)
10.5.4 Do-Across Loops
743(2)
10.5.5 Goals and Constraints of Software Pipelining
745(4)
10.5.6 A Software-Pipelining Algorithm
749(1)
10.5.7 Scheduling Acyclic Data-Dependence Graphs
749(2)
10.5.8 Scheduling Cyclic Dependence Graphs
751(7)
10.5.9 Improvements to the Pipelining Algorithms
758(1)
10.5.10 Modular Variable Expansion
758(3)
10.5.11 Conditional Statements
761(1)
10.5.12 Hardware Support for Software Pipelining
762(1)
10.5.13 Exercises for Section 10.5
763(2)
10.6 Summary of Chapter 10
765(1)
10.7 References for Chapter 10
766(3)
11 Optimizing for Parallelism and Locality
769(134)
11.1 Basic Concepts
771(11)
11.1.1 Multiprocessors
772(1)
11.1.2 Parallelism in Applications
773(2)
11.1.3 Loop-Level Parallelism
775(2)
11.1.4 Data Locality
777(1)
11.1.5 Introduction to Affine Transform Theory
778(4)
11.2 Matrix Multiply: An In-Depth Example
782(6)
11.2.1 The Matrix-Multiplication Algorithm
782(3)
11.2.2 Optimizations
785(3)
11.2.3 Cache Interference
788(1)
11.2.4 Exercises for Section 11.2
788(1)
11.3 Iteration Spaces
788(13)
11.3.1 Constructing Iteration Spaces from Loop Nests
788(3)
11.3.2 Execution Order for Loop Nests
791(1)
11.3.3 Matrix Formulation of Inequalities
791(2)
11.3.4 Incorporating Symbolic Constants
793(1)
11.3.5 Controlling the Order of Execution
793(5)
11.3.6 Changing Axes
798(1)
11.3.7 Exercises for Section 11.3
799(2)
11.4 Affine Array Indexes
801(3)
11.4.1 Affine Accesses
802(1)
11.4.2 Affine and Nonaffine Accesses in Practice
803(1)
11.4.3 Exercises for Section 11.4
804(1)
11.5 Data Reuse
804(11)
11.5.1 Types of Reuse
805(1)
11.5.2 Self Reuse
806(3)
11.5.3 Self-Spatial Reuse
809(2)
11.5.4 Group Reuse
811(3)
11.5.5 Exercises for Section 11.5
814(1)
11.6 Array Data-Dependence Analysis
815(13)
11.6.1 Definition of Data Dependence of Array Accesses
816(1)
11.6.2 Integer Linear Programming
817(1)
11.6.3 The GCD Test
818(2)
11.6.4 Heuristics for Solving Integer Linear Programs
820(3)
11.6.5 Solving General Integer Linear Programs
823(2)
11.6.6 Summary
825(1)
11.6.7 Exercises for Section 11.6
826(2)
11.7 Finding Synchronization-Free Parallelism
828(25)
11.7.1 An Introductory Example
828(2)
11.7.2 Affine Space Partitions
830(1)
11.7.3 Space-Partition Constraints
831(4)
11.7.4 Solving Space-Partition Constraints
835(3)
11.7.5 A Simple Code-Generation Algorithm
838(3)
11.7.6 Eliminating Empty Iterations
841(3)
11.7.7 Eliminating Tests from Innermost Loops
844(2)
11.7.8 Source-Code Transforms
846(5)
11.7.9 Exercises for Section 11.7
851(2)
11.8 Synchronization Between Parallel Loops
853(8)
11.8.1 A Constant Number of Synchronizations
853(1)
11.8.2 Program-Dependence Graphs
854(3)
11.8.3 Hierarchical Time
857(2)
11.8.4 The Parallelization Algorithm
859(1)
11.8.5 Exercises for Section 11.8
860(1)
11.9 Pipelining
861(23)
11.9.1 What is Pipelining?
861(2)
11.9.2 Successive Over-Relaxation (SOR): An Example
863(1)
11.9.3 Fully Permutable Loops
864(1)
11.9.4 Pipelining Fully Permutable Loops
864(3)
11.9.5 General Theory
867(1)
11.9.6 Time-Partition Constraints
868(4)
11.9.7 Solving Time-Partition Constraints by Farkas' Lemma
872(3)
11.9.8 Code Transformations
875(5)
11.9.9 Parallelism With Minimum Synchronization
880(2)
11.9.10 Exercises for Section 11.9
882(2)
11.10 Locality Optimizations
884(9)
11.10.1 Temporal Locality of Computed Data
885(1)
11.10.2 Array Contraction
885(2)
11.10.3 Partition Interleaving
887(3)
11.10.4 Putting it All Together
890(2)
11.10.5 Exercises for Section 11.10
892(1)
11.11 Other Uses of Affine Transforms
893(4)
11.11.1 Distributed Memory Machines
894(1)
11.11.2 Multi-Instruction-Issue Processors
895(1)
11.11.3 Vector and SIMD Instructions
895(1)
11.11.4 Prefetching
896(1)
11.12 Summary of Chapter 11
897(2)
11.13 References for Chapter 11
899(4)
12 Interprocedural Analysis
903(62)
12.1 Basic Concepts
904(12)
12.1.1 Call Graphs
904(2)
12.1.2 Context Sensitivity
906(2)
12.1.3 Call Strings
908(2)
12.1.4 Cloning-Based Context-Sensitive Analysis
910(1)
12.1.5 Summary-Based Context-Sensitive Analysis
911(3)
12.1.6 Exercises for Section 12.1
914(2)
12.2 Why Interprocedural Analysis?
916(5)
12.2.1 Virtual Method Invocation
916(1)
12.2.2 Pointer Alias Analysis
917(1)
12.2.3 Parallelization
917(1)
12.2.4 Detection of Software Errors and Vulnerabilities
917(1)
12.2.5 SQL Injection
918(2)
12.2.6 Buffer Overflow
920(1)
12.3 A Logical Representation of Data Flow
921(12)
12.3.1 Introduction to Datalog
921(1)
12.3.2 Datalog Rules
922(2)
12.3.3 Intensional and Extensional Predicates
924(3)
12.3.4 Execution of Datalog Programs
927(1)
12.3.5 Incremental Evaluation of Datalog Programs
928(2)
12.3.6 Problematic Datalog Rules
930(2)
12.3.7 Exercises for Section 12.3
932(1)
12.4 A Simple Pointer-Analysis Algorithm
933(8)
12.4.1 Why is Pointer Analysis Difficult?
934(1)
12.4.2 A Model for Pointers and References
935(1)
12.4.3 Flow Insensitivity
936(1)
12.4.4 The Formulation in Datalog
937(1)
12.4.5 Using Type Information
938(1)
12.4.6 Exercises for Section 12.4
939(2)
12.5 Context-Insensitive Interprocedural Analysis
941(4)
12.5.1 Effects of a Method Invocation
941(2)
12.5.2 Call Graph Discovery in Datalog
943(1)
12.5.3 Dynamic Loading and Reflection
944(1)
12.5.4 Exercises for Section 12.5
945(1)
12.6 Context-Sensitive Pointer Analysis
945(6)
12.6.1 Contexts and Call Strings
946(3)
12.6.2 Adding Context to Datalog Rules
949(1)
12.6.3 Additional Observations About Sensitivity
949(1)
12.6.4 Exercises for Section 12.6
950(1)
12.7 Datalog Implementation by BDD's
951(7)
12.7.1 Binary Decision Diagrams
951(2)
12.7.2 Transformations on BDD's
953(1)
12.7.3 Representing Relations by BDD's
954(1)
12.7.4 Relational Operations as BDD Operations
954(3)
12.7.5 Using BDD's for Points-to Analysis
957(1)
12.7.6 Exercises for Section 12.7
958(1)
12.8 Summary of Chapter 12
958(3)
12.9 References for Chapter 12
961(4)
A A Complete Front End
965(24)
A.1 The Source Language
965(1)
A.2 Main
966(1)
A.3 Lexical Analyzer
967(3)
A.4 Symbol Tables and Types
970(1)
A.5 Intermediate Code for Expressions
971(3)
A.6 Jumping Code for Boolean Expressions
974(4)
A.7 Intermediate Code for Statements
978(3)
A.8 Parser
981(5)
A.9 Creating the Front End
986(3)
B Finding Linearly Independent Solutions
989(4)
Index
993

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.
