Amazon no longer offers textbook rentals. We do!

We're the #1 textbook rental company. Let us show you why.

Essentials of Computer Architecture

by Douglas E. Comer

  • ISBN13: 9780131491793
  • ISBN10: 0131491792
  • Edition: 1st
  • Format: Hardcover
  • Copyright: 2004-08-13
  • Publisher: Pearson

Note: Supplemental materials are not guaranteed with Rental or Used book purchases.

Purchase Benefits

  • Free Shipping On Orders Over $35!
    Your order must be $35 or more to qualify for free economy shipping. Bulk sales, PO's, Marketplace items, eBooks and apparel do not qualify for this offer.
  • Get Rewarded for Ordering Your Textbooks! Enroll Now
List Price: $179.00 (save up to $82.34)
  • Rent Book $96.66 (Free Shipping)

    Usually ships in 24-48 hours.
    *This item is part of an exclusive publisher rental program and requires an additional convenience fee. This fee will be reflected in the shopping cart.

Summary

This primer introduces the fundamental concepts of computer architecture, providing a solid foundation for constructing programs that are more efficient and less prone to errors. Rather than focusing on engineering details, it approaches basic architectural concepts from the programmer's point of view. It covers the three essential areas of architecture: processors, memories, and I/O systems, helping programmers improve program efficiency by understanding the consequences of programming choices and allowing them to pinpoint the sources of bugs. It emphasizes essential programming concepts such as two's-complement arithmetic and the ranges of integer values, and includes high-level topics such as parallelism, pipelining, and performance. For anyone interested in learning more about computer architecture.
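
To illustrate the kind of material the summary refers to, here is a small C sketch (not taken from the book) showing the two's-complement ranges of fixed-width integers and how results wrap around when they exceed the range:

    /* Illustrative sketch only (not code from the book): two's-complement
     * ranges and wrap-around behavior of fixed-width integers. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* An n-bit two's-complement integer holds values from -2^(n-1) to 2^(n-1)-1. */
        printf("int8_t : %d to %d\n", INT8_MIN, INT8_MAX);              /* -128 to 127 */
        printf("int16_t: %d to %d\n", INT16_MIN, INT16_MAX);            /* -32768 to 32767 */
        printf("int32_t: %ld to %ld\n", (long)INT32_MIN, (long)INT32_MAX);

        /* Negation in two's complement: invert the bits and add one. */
        int8_t five = 5;
        printf("~5 + 1 = %d\n", (int8_t)(~five + 1));                   /* prints -5 */

        /* Results that exceed the range wrap around (shown with unsigned
         * arithmetic, since signed overflow is undefined behavior in C). */
        uint8_t u = 255;
        printf("255 + 1 wraps to %u in 8 bits\n", (unsigned)(uint8_t)(u + 1));  /* prints 0 */
        return 0;
    }

Compiled with any C99 compiler, the program prints the minimum and maximum of each type and shows that 255 + 1 wraps to 0 in eight bits.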

Author Biography

Douglas E. Comer is a Distinguished Professor of Computer Science at Purdue University.

Table of Contents

Preface xxi
Introduction And Overview
1(6)
The Importance Of Architecture
1(1)
Learning The Essentials
1(1)
Organization Of The Text
2(1)
What We Will Omit
3(1)
Terminology: Architecture And Design
3(1)
Summary
3(4)
PART I Basics
Fundamentals Of Digital Logic
7(22)
Introduction
7(1)
Electrical Terminology: Voltage And Current
7(1)
The Transistor
8(1)
Logic Gates
9(1)
Symbols Used For Gates
10(1)
Construction Of Gates From Transistors
11(1)
Example Interconnection Of Gates
12(2)
Multiple Gates Per Integrated Circuit
14(1)
The Need For More Than Combinatorial Circuits
15(1)
Circuits That Maintain State
15(1)
Transition Diagrams
16(1)
Binary Counters
17(1)
Clocks And Sequences
18(2)
The Important Concept Of Feedback
20(2)
Starting A Sequence
22(1)
Iteration In Software Vs. Replication In Hardware
22(1)
Gate And Chip Minimization
23(1)
Using Spare Gates
24(1)
Power Distribution And Heat Dissipation
24(1)
Timing
25(1)
Physical Size And Process Technologies
26(1)
Circuit Boards And Layers
27(1)
Levels Of Abstraction
27(1)
Summary
28(1)
Data And Program Representation
29(18)
Introduction
29(1)
Digital Logic And Abstraction
29(1)
Bits And Bytes
30(1)
Byte Size And Possible Values
30(1)
Binary Arithmetic
31(1)
Hexadecimal Notation
32(1)
Notation For Hexadecimal And Binary Constants
33(1)
Character Sets
34(1)
Unicode
35(1)
Unsigned Integers, Overflow, And Underflow
35(1)
Numbering Bits And Bytes
36(1)
Signed Integers
37(1)
An Example Of Two's Complement Numbers
38(1)
Sign Extension
39(1)
Floating Point
40(2)
Special Values
42(1)
Range Of IEEE Floating Point Values
42(1)
Data Aggregates
42(1)
Program Representation
43(1)
Summary
43(4)
PART II Processors
The Variety Of Processors And Computational Engines
47(14)
Introduction
47(1)
Von Neumann Architecture
47(1)
Definition Of A Processor
48(1)
The Range Of Processors
48(1)
Hierarchical Structure And Computational Engines
49(2)
Structure Of A Conventional Processor
51(1)
Definition Of An Arithmetic Logic Unit (ALU)
52(1)
Processor Categories And Roles
52(2)
Processor Technologies
54(1)
Stored Programs
54(1)
The Fetch-Execute Cycle
55(1)
Clock Rate And Instruction Rate
56(1)
Control: Getting Started And Stopping
57(1)
Starting The Fetch-Execute Cycle
57(1)
Summary
58(3)
Processor Types And Instruction Sets
61(22)
Introduction
61(1)
Mathematical Power, Convenience, And Cost
61(1)
Instruction Set And Representation
62(1)
Opcodes, Operands, And Results
63(1)
Typical Instruction Format
63(1)
Variable-Length Vs. Fixed-Length Instructions
63(1)
General-Purpose Registers
64(1)
Floating Point Registers And Register Identification
65(1)
Programming With Registers
65(1)
Register Banks
66(1)
Complex And Reduced Instruction Sets
67(1)
RISC Design And The Execution Pipeline
68(1)
Pipelines And Instruction Stalls
69(2)
Other Causes Of Pipeline Stalls
71(1)
Consequences For Programmers
71(1)
Programming, Stalls, And No-Op Instructions
72(1)
Forwarding
72(1)
Types Of Operations
73(1)
Program Counter, Fetch-Execute, And Branching
73(2)
Subroutine Calls, Arguments, And Register Windows
75(1)
An Example Instruction Set
76(2)
Minimalistic Instruction Set
78(1)
The Principle Of Orthogonality
79(1)
Condition Codes And Conditional Branching
80(1)
Summary
80(3)
Operand Addressing And Instruction Representation
83(12)
Introduction
83(1)
Zero, One, Two, Or Three Address Designs
83(1)
Zero Operands Per Instruction
84(1)
One Operand Per Instruction
85(1)
Two Operands Per Instruction
85(1)
Three Operands Per Instruction
86(1)
Operand Sources And Immediate Values
86(1)
The Von Neumann Bottleneck
87(1)
Explicit And Implicit Operand Encoding
88(1)
Operands That Combine Multiple Values
89(1)
Tradeoffs In The Choice Of Operands
90(1)
Values In Memory And Indirect Reference
91(1)
Operand Addressing Modes
92(1)
Summary
93(2)
CPUs: Microcode, Protection, And Processor Modes
95(20)
Introduction
95(1)
A Central Processor
95(1)
CPU Complexity
96(1)
Modes Of Execution
97(1)
Backward Compatibility
97(1)
Changing Modes
98(1)
Privilege And Protection
99(1)
Multiple Levels Of Protection
99(1)
Microcoded Instructions
100(2)
Microcode Variations
102(1)
The Advantage Of Microcode
102(1)
Making Microcode Visible To Programmers
103(1)
Vertical Microcode
103(1)
Horizontal Microcode
104(1)
Example Horizontal Microcode
105(2)
A Horizontal Microcode Example
107(1)
Operations That Require Multiple Cycles
108(1)
Horizontal Microcode And Parallel Execution
109(1)
Look-Ahead And High Performance Execution
110(1)
Parallelism And Execution Order
111(1)
Out-Of-Order Instruction Execution
111(1)
Conditional Branches And Branch Prediction
112(1)
Consequences For Programmers
113(1)
Summary
113(2)
Assembly Languages And Programming Paradigm
115(22)
Introduction
115(1)
Characteristics Of A High-level Programming Language
115(1)
Characteristics Of A Low-Level Programming Language
116(1)
Assembly Language
117(1)
Assembly Language Syntax And Opcodes
118(2)
Operand Order
120(1)
Register Names
121(1)
Operand Types
122(1)
Assembly Language Programming Paradigm And Idioms
122(1)
Assembly Code For Conditional Execution
123(1)
Assembly Code For A Conditional Alternative
124(1)
Assembly Code For Definite Iteration
124(1)
Assembly Code For Indefinite Iteration
125(1)
Assembly Code For Procedure Invocation
125(1)
Assembly Code For Parameterized Procedure Invocation
126(1)
Consequence For Programmers
127(1)
Assembly Code For Function Invocation
128(1)
Interaction Between Assembly And High-Level Languages
128(1)
Assembly Code For Variables And Storage
129(1)
Two-Pass Assembler
130(1)
Assembly Language Macros
131(3)
Summary
134(3)
PART III Memories
Memory And Storage
137(6)
Introduction
137(1)
Definition
137(1)
The Key Aspects Of Memory
138(1)
Characteristics Of Memory Technologies
138(2)
The Important Concept Of A Memory Hierarchy
140(1)
Instruction And Data Store
140(1)
The Fetch-Store Paradigm
141(1)
Summary
141(2)
Physical Memory And Physical Addressing
143(20)
Introduction
143(1)
Characteristics Of Computer Memory
143(1)
Static And Dynamic RAM Technologies
144(1)
Measures Of Memory Technology
145(1)
Density
146(1)
Separation Of Read And Write Performance
146(1)
Latency And Memory Controllers
146(1)
Synchronized Memory Technologies
147(1)
Multiple Data Rate Memory Technologies
148(1)
Examples Of Memory Technologies
148(1)
Memory Organization
148(1)
Memory Access And Memory Bus
149(1)
Memory Transfer Size
150(1)
Physical Addresses And Words
150(1)
Physical Memory Operations
150(1)
Word Size And Other Data Types
151(1)
An Extreme Case: Byte Addressing
151(1)
Byte Addressing With Word Transfers
152(1)
Using Powers Of Two
153(1)
Byte Alignment And Programming
154(1)
Memory Size And Address Space
154(1)
Programming With Word Addressing
155(1)
Measures Of Memory Size
155(1)
Pointers And Data Structures
156(1)
A Memory Dump
156(2)
Indirection And Indirect Operands
158(1)
Memory Banks And Interleaving
158(1)
Content Addressable Memory
159(1)
Ternary CAM
160(1)
Summary
160(3)
Virtual Memory Technologies And Virtual Addressing
163(22)
Introduction
163(1)
Definition
163(1)
A Virtual Example: Byte Addressing
164(1)
Virtual Memory Terminology
164(1)
An Interface To Multiple Physical Memory Systems
164(2)
Address Translation Or Address Mapping
166(1)
Avoiding Arithmetic Calculation
167(1)
Discontiguous Address Spaces
168(1)
Other Memory Organizations
169(1)
Motivation For Virtual Memory
169(1)
Multiple Virtual Spaces And Multiprogramming
170(1)
Multiple Levels Of Virtualization
171(1)
Creating Virtual Spaces Dynamically
171(1)
Base-Bound Registers
172(1)
Changing The Virtual Space
172(1)
Virtual Memory, Base-Bound, And Protection
173(1)
Segmentation
174(1)
Demand Paging
175(1)
Hardware And Software For Demand Paging
175(1)
Page Replacement
176(1)
Paging Terminology And Data Structures
176(1)
Address Translation In A Paging System
177(1)
Using Powers Of Two
178(1)
Presence, Use, And Modified Bits
179(1)
Page Table Storage
180(1)
Paging Efficiency And A Translation Lookaside Buffer
181(1)
Consequences For Programmers
182(1)
Summary
183(2)
Caches And Caching
185(22)
Introduction
185(1)
Definition
185(1)
Characteristics Of A Cache
186(1)
The Importance Of Caching
187(1)
Examples Of Caching
188(1)
Cache Terminology
188(1)
Best And Worst Case Cache Performance
189(1)
Cache Performance On A Typical Sequence
190(1)
Cache Replacement Policy
190(1)
LRU Replacement
191(1)
Multi-level Cache Hierarchy
191(1)
Preloading Caches
192(1)
Caches Used With Memory
192(1)
TLB As A Cache
193(1)
Demand Paging As A Form Of Caching
193(1)
Physical Memory Cache
194(1)
Write Through And Write Back
194(1)
Cache Coherence
195(1)
L1, L2, and L3 Caches
196(1)
Sizes Of L1, L2, And L3 Caches
197(1)
Instruction And Data Caches
197(1)
Virtual Memory Caching And A Cache Flush
198(1)
Implementation Of Memory Caching
199(1)
Direct Mapping Memory Cache
200(1)
Using Powers Of Two For Efficiency
201(1)
Set Associative Memory Cache
202(1)
Consequences For Programmers
203(1)
Summary
204(3)
PART IV I/O
Input/Output Concepts And Terminology
207(8)
Introduction
207(1)
Input And Output Devices
207(1)
Control Of An External Device
208(1)
Data Transfer
209(1)
Serial And Parallel Data Transfers
209(1)
Self-Clocking Data
210(1)
Full-Duplex And Half-Duplex Interaction
210(1)
Interface Latency And Throughput
211(1)
The Fundamental Idea Of Multiplexing
211(1)
Multiple Devices Per External Interface
212(1)
A Processor's View Of I/O
213(1)
Summary
213(2)
Buses And Bus Architectures
215(22)
Introduction
215(1)
Definition Of A Bus
215(1)
Processors, I/O Devices, And Buses
216(1)
Proprietary And Standardized Buses
216(1)
Shared Buses And An Access Protocol
217(1)
Multiple Buses
217(1)
A Parallel, Passive Mechanism
217(1)
Physical Connections
217(1)
Bus Interface
218(1)
Address, Control, And Data Lines
219(1)
The Fetch-Store Paradigm
220(1)
Fetch-Store Over A Bus
220(1)
The Width Of A Bus
220(1)
Multiplexing
221(1)
Bus Width And Size Of Data Items
222(1)
Bus Address Space
223(1)
Potential Errors
224(1)
Address Configuration And Sockets
225(1)
Many Buses Or One Bus
226(1)
Using Fetch-Store With Devices
226(1)
An Example Of Device Control Using Fetch-Store
226(1)
Operation Of An Interface
227(1)
Asymmetric Assignments
228(1)
Unified Memory And Device Addressing
228(2)
Holes In The Address Space
230(1)
Address Map
230(1)
Program Interface To A Bus
231(1)
Bridging Between Two Buses
232(1)
Main And Auxiliary Buses
232(2)
Consequences For Programmers
234(1)
Switching Fabrics
234(1)
Summary
235(2)
Programmed And Interrupt-Driven I/O
237(18)
Introduction
237(1)
I/O Paradigms
237(1)
Programmed I/O
238(1)
Synchronization
238(1)
Polling
239(1)
Code For Polling
239(2)
Control And Status Registers
241(1)
Processor Use And Polling
241(1)
First, Second, And Third Generation Computers
242(1)
Interrupt-Driven I/O
242(1)
A Hardware Interrupt Mechanism
243(1)
Interrupts And The Fetch-Execute Cycle
243(1)
Handling An Interrupt
244(1)
Interrupt Vectors
245(1)
Initialization And Enabling And Disabling Interrupts
246(1)
Preventing Interrupt Code From Being Interrupted
246(1)
Multiple Levels Of Interrupts
246(1)
Assignment Of Interrupt Vectors And Priorities
247(1)
Dynamic Bus Connections And Pluggable Devices
248(1)
The Advantage Of Interrupts
249(1)
Smart Devices And Improved I/O Performance
249(1)
Direct Memory Access (DMA)
250(1)
Buffer Chaining
251(1)
Scatter Read And Gather Write Operations
252(1)
Operation Chaining
252(1)
Summary
253(2)
A Programmer's View Of Devices, I/O, And Buffering
255(24)
Introduction
255(1)
Definition Of A Device Driver
256(1)
Device Independence, Encapsulation, And Hiding
256(1)
Conceptual Parts Of A Device Driver
257(1)
Two Types Of Devices
258(1)
Example Flow Through A Device Driver
258(1)
Queued Output Operations
259(2)
Forcing An Interrupt
261(1)
Queued Input Operations
261(1)
Devices That Support Bi-Directional Transfer
262(1)
Asynchronous Vs. Synchronous Programming Paradigm
263(1)
Asynchrony, Smart Devices, And Mutual Exclusion
264(1)
I/O As Viewed By An Application
264(1)
Run-Time I/O Libraries
265(1)
The Library/Operating System Dichotomy
266(1)
I/O Operations The OS Supports
267(1)
The Cost Of I/O Operations
268(1)
Reducing The System Call Overhead
268(1)
The Important Concept Of Buffering
269(1)
Implementation of Buffering
270(1)
Flushing A Buffer
271(1)
Buffering On Input
272(1)
Effectiveness Of Buffering
272(1)
Buffering In An Operating System
273(1)
Relation To Caching
274(1)
An Example: The Unix Standard I/O Library
274(1)
Summary
274(5)
PART V Advanced Topics
Parallelism
279(20)
Introduction
279(1)
Parallel And Pipelined Architectures
279(1)
Characterizations Of Parallelism
280(1)
Microscopic Vs. Macroscopic
280(1)
Examples Of Microscopic Parallelism
281(1)
Examples Of Macroscopic Parallelism
281(1)
Symmetric Vs. Asymmetric
282(1)
Fine-grain Vs. Coarse-grain Parallelism
282(1)
Explicit Vs. Implicit Parallelism
283(1)
Parallel Architectures
283(1)
Types Of Parallel Architectures (Flynn Classification)
283(1)
Single Instruction Single Data (SISD)
284(1)
Single Instruction Multiple Data (SIMD)
284(2)
Multiple Instructions Multiple Data (MIMD)
286(2)
Communication, Coordination, And Contention
288(2)
Performance Of Multiprocessors
290(2)
Consequences For Programmers
292(2)
Redundant Parallel Architectures
294(1)
Distributed And Cluster Computers
295(1)
Summary
296(3)
Pipelining
299(12)
Introduction
299(1)
The Concept Of Pipelining
299(2)
Software Pipelining
301(1)
Software Pipeline Performance And Overhead
302(1)
Hardware Pipelining
303(1)
How Hardware Pipelining Increases Performance
303(3)
When Pipelining Can Be Used
306(1)
The Conceptual Division Of Processing
307(1)
Pipeline Architectures
307(1)
Pipeline Setup, Stall, And Flush Times
308(1)
Definition Of Superpipeline Architecture
308(1)
Summary
309(2)
Assessing Performance
311(8)
Introduction
311(1)
Measuring Power And Performance
311(1)
Measures Of Computational Power
312(1)
Application Specific Instruction Counts
313(1)
Instruction Mix
314(1)
Standardized Benchmarks
315(1)
I/O And Memory Bottlenecks
316(1)
Boundary Between Hardware And Software
316(1)
Choosing Items To Optimize
317(1)
Amdahl's Law And Parallel Systems
317(1)
Summary
318(1)
Architecture Examples And Hierarchy
319(12)
Introduction
319(1)
Architectural Levels
319(1)
System-Level Architecture: A Personal Computer
320(1)
Bus Interconnection And Bridging
321(1)
Controller Chips And Physical Architecture
322(1)
Virtual Buses
323(2)
Connection Speeds
325(1)
Bridging Functionality And Virtual Buses
325(1)
Board-Level Architecture
325(2)
Chip-Level Architecture
327(1)
Structure Of Functional Units On A Chip
328(1)
Summary
329(1)
Hierarchy Beyond Computer Architectures
329(2)
Appendix 1 Lab Exercises For A Computer Architecture Course
331(28)
A1.1 Introduction
331(1)
A1.2 Digital Hardware For A Lab
332(1)
A1.3 Solderless Breadboard
332(1)
A1.4 Using A Solderless Breadboard
333(1)
A1.5 Testing
334(1)
A1.6 Power And Ground Connections
334(1)
A1.7 Lab Exercises
335(24)
1 Introduction and account configuration
337(2)
2 Digital Logic: Use of a breadboard
339(2)
3 Digital Logic: Building an adder from gates
341(2)
4 Digital Logic: Clocks and demultiplexing
343(2)
5 Representation: Testing big endian vs. little endian
345(2)
6 Representation: A hex dump program in C
347(2)
7 Processors: Learn a RISC assembly language
349(2)
8 Processors: Function that can be called from C
351(2)
9 Memory: row-major and column-major array storage
353(2)
10 Input/Output: a buffered I/O library
355(2)
11 A hex dump program in assembly language
357(2)
Bibliography 359(2)
Index 361

Supplemental Materials

What is included with this book?

The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any access cards, study guides, lab manuals, CDs, etc.

The Used, Rental and eBook copies of this book are not guaranteed to include any supplemental materials. Typically, only the book itself is included. This is true even if the title states it includes any access cards, study guides, lab manuals, CDs, etc.

Excerpts

This book began when I was assigned to help salvage an undergraduate computer organization course. The course had suffered years of neglect: it had been taught by a series of professors, mostly visitors, who had little or no interest or background in digital hardware, and the curriculum had deteriorated to a potpourri of topics that were only loosely related to hardware architectures. In some semesters, students spent the entire class studying Boolean Algebra, without even the slightest connection to actual hardware. In others, students learned the arcane details of one particular assembly language, without a notion of alternatives.

Is a computer organization course worth saving? Absolutely! In many Computer Science programs, the computer organization course is the only time students are exposed to fundamental concepts that explain the structure of the computer they are programming. Understanding the hardware makes it possible to construct programs that are more efficient and less prone to errors. In a broad sense, a basic knowledge of architecture helps programmers improve program efficiency by understanding the consequences of programming choices. Knowing how the hardware works can also improve the programming process by allowing programmers to pinpoint the source of bugs quickly. Finally, graduates need to understand basic architectural concepts to pass job application tests given by firms like Intel and Microsoft.

One of the steps in salvaging our architecture course consisted of looking at textbooks. We discovered the texts could be divided into roughly two types: texts aimed at beginning engineering students who would go on to design hardware, and texts written for CS students that attempt to include topics from compilers, operating systems, and (in at least one case) a complete explanation of how Internet protocols operate. Neither approach seemed appropriate for a single, introductory course on the subject. We wanted a book that (1) focused on the concepts rather than engineering details (because our students are not focused on hardware design); (2) explained the subject from a programmer's point of view, and emphasized consequences for programmers; and (3) did not try to cover several courses' worth of material. When no text was found, it seemed that the only solution was to create one.

The text is divided into five parts. Part 1 covers the basics of digital logic, gates, and data representation. We emphasize the representation chapter because notions of two's-complement arithmetic and ranges of integer values are essential in programming. Parts 2, 3, and 4 cover the three essential areas of architecture: processors, memories, and I/O systems. In each case, the chapters give students enough background to understand how the mechanisms operate and the consequences for programmers. Finally, Part 5 covers advanced topics like parallelism, pipelining, and performance. An Appendix describes an important aspect of the course: a hands-on lab where students can learn by doing. Although most lab problems focus on programming, students should spend the first few weeks in lab wiring a few gates on a breadboard. The equipment is inexpensive (we spent less than fifteen dollars per student on permanent equipment; students purchase their own set of chips for under twenty dollars).
We have set up a web site to accompany the book at: http://www.eca.cs.purdue.edu. Rajesh Subraman has agreed to manage the site, which contains a set of class presentation materials created by the author as well as a set created by Rajesh. We invite other instructors to contribute their materials.

The text and lab exercises have been used at Purdue; students have been extremely positive about both. We received notes of thanks for the text and course. For many students, the lab is their first experience with hardware, and they are enthusiastic.

My thanks to the many individuals who contributed to the book. Bernd Wolfinger provided extensive reviews and made several important suggestions about topics and direction. Dan Ardelean, James Cernak, and Tim Korb gave detailed comments on many chapters. Dave Capka reviewed early chapters. Rajesh Subraman taught from the book and provided his thoughts about the content. In the CS 250 class at Purdue, the following students each identified one or more typos in the manuscript: Nitin Alreja, Alex Cox, David Ehrmann, Roger Maurice Elion, Andrew Lee, Stan Luban, Andrew L. Soderstrom, and Brandon Wuest. Finally, I thank my wife, Chris, for her patient and careful editing and valuable suggestions that improve and polish each book.

Douglas E. Comer
June, 2004
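
The lab exercises listed in the appendix include a test of big-endian vs. little-endian byte order (Lab 5). As a rough, hypothetical sketch of what such a test involves (this is not code from the book), a C program can examine the first byte of a multi-byte integer in memory:

    /* Illustrative sketch only (not from the book): checking byte order by
     * examining how a multi-byte integer is laid out in memory. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t word = 0x01020304;
        /* The first byte in memory is the least-significant byte (0x04) on a
         * little-endian machine and the most-significant byte (0x01) on a
         * big-endian machine. */
        unsigned char first = *(unsigned char *)&word;
        printf("first byte: 0x%02x -> %s-endian\n",
               (unsigned)first, first == 0x04 ? "little" : "big");
        return 0;
    }

On a little-endian machine such as x86 the program reports 0x04; on a big-endian machine it reports 0x01.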
