Speculative Execution In High Performance Computer Architectures

Edited by David R. Kaeli and Pen-Chung Yew
  • ISBN-13: 9781584884477
  • ISBN-10: 1584884479
  • Format: Hardcover
  • Copyright: 2005-05-26
  • Publisher: Chapman & Hall/CRC


Summary

Until now, few textbooks have focused on the dynamic subject of speculative execution, a topic crucial to the development of high performance computer architectures. Speculative Execution in High Performance Computer Architectures describes many recent advances in speculative execution techniques. It covers cutting-edge research projects as well as numerous commercial implementations that demonstrate the value of this latency-hiding technique. The book begins with a review of control speculation techniques that use instruction cache prefetching, branch prediction and predication, and multi-path execution. It then examines dataflow speculation techniques, including data cache prefetching, address value and data value speculation, pre-computation, and coherence speculation. The book also explores multithreaded approaches, emphasizing profile-guided speculation, speculative microarchitectures, and compiler techniques.
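As an illustrative sketch only (not taken from the book), the kind of dynamic branch prediction surveyed in the control-speculation chapters can be demonstrated with the classic two-bit saturating-counter scheme: a table of small counters, indexed by branch address, that predicts "taken" once a branch has recently gone that way and resists flipping on a single contrary outcome. All names below are hypothetical.

```python
# Illustrative sketch: a two-bit saturating-counter branch predictor.
# Counter states 0-1 predict not-taken; states 2-3 predict taken.
class TwoBitPredictor:
    def __init__(self, table_size=1024):
        self.size = table_size
        # Start every entry at 1 ("weakly not-taken").
        self.counters = [1] * table_size

    def _index(self, pc):
        # Direct-mapped indexing by branch address (aliasing is possible).
        return pc % self.size

    def predict(self, pc):
        # Predict taken when the counter is in state 2 or 3.
        return self.counters[self._index(pc)] >= 2

    def update(self, pc, taken):
        # Saturating increment/decrement on the actual outcome.
        i = self._index(pc)
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)
```

The two-bit hysteresis is what makes this sketch useful for loops: after a few taken iterations the counter saturates at 3, so the single not-taken exit branch at the end of each loop run does not flip the next prediction back to not-taken.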

Table of Contents

Introduction, 1(8)
David R. Kaeli and Pen-Chung Yew
    Speculation for the Instruction Stream, 2(1)
    Speculation for the Data Stream, 3(1)
    Compiler and Multithreading-Based Speculation, 4(2)
    Speculative Microarchitectures, 6(3)
    References, 6(3)

Instruction Cache Prefetching, 9(20)
Glenn Reinman
    Introduction, 9(1)
    Scaling Trends, 9(1)
    Instruction Cache Design, 10(4)
    Instruction Cache Prefetching, 14(12)
    Future Challenges, 26(3)
    References, 27(2)

Branch Prediction, 29(58)
Philip G. Emma
    What Are Branch Instructions and Why Are They Important? Some History, 29(7)
    Why Are Branches Important to Performance?, 36(6)
    Three Dimensions of a Branch Instruction, and When to Predict Branches, 42(6)
    How to Predict Whether a Branch is Taken, 48(6)
    Predicting Branch Target Addresses, 54(4)
    The Subroutine Return - A Branch That Changes Its Target Address, 58(4)
    Putting It All Together, 62(6)
    Very High ILP Environments, 68(2)
    Summary, 70(17)
    References, 72(15)

Trace Caches, 87(22)
Eric Rotenberg
    Introduction, 87(2)
    Instruction Fetch Unit with Trace Cache, 89(8)
    Trace Cache Design Space, 97(7)
    Summary, 104(5)
    References, 104(5)

Branch Predication, 109(26)
David August
    Introduction, 109(3)
    A Generalized Predication Model, 112(3)
    Compilation for Predicated Execution, 115(4)
    Architectures with Predication, 119(4)
    In Depth: Predication in Intel IA-64, 123(4)
    Predication in Hardware Design, 127(2)
    The Future of Predication, 129(6)
    References, 129(6)

Multipath Execution, 135(26)
Augustus K. Uht
    Introduction, 135(1)
    Motivation and Essentials, 136(1)
    Taxonomy and Characterization, 137(11)
    Microarchitecture Examples, 148(6)
    Issues, 154(3)
    Status, Summary, and Predictions, 157(4)
    References, 157(4)

Data Cache Prefetching, 161(26)
Yan Solihin and Donald Yeung
    Introduction, 161(1)
    Software Prefetching, 162(11)
    Hardware Prefetching, 173(14)
    References, 181(6)

Address Prediction, 187(28)
Avi Mendelson
    Introduction, 187(2)
    Taxonomy of Address Calculation and Prediction, 189(5)
    Address Prediction, 194(6)
    Speculative Memory Disambiguation, 200(11)
    Summary and Remarks, 211(4)
    References, 212(3)

Data Speculation, 215(30)
Yiannakis Sazeides, Pedro Marcuello, James E. Smith, and Antonio Gonzalez
    Introduction, 215(1)
    Data Value Speculation, 216(8)
    Data Dependence Speculation, 224(6)
    Verification and Recovery, 230(4)
    Related Work and Applications, 234(1)
    Trends, Challenges, and the Future, 235(2)
    Acknowledgments, 237(8)
    References, 237(8)

Instruction Precomputation, 245(24)
Joshua J. Yi, Resit Sendag, and David J. Lilja
    Introduction, 245(1)
    Value Reuse, 246(1)
    Redundant Computations, 247(1)
    Instruction Precomputation, 248(5)
    Simulation Methodology, 253(1)
    Performance Results for Instruction Precomputation, 253(7)
    An Analytical Evaluation of Instruction Precomputation, 260(2)
    Extending Instruction Precomputation by Incorporating Speculation, 262(1)
    Related Work, 263(2)
    Conclusion, 265(1)
    Acknowledgment, 265(4)
    References, 266(3)

Profile-Based Speculation, 269(32)
Youfeng Wu and Jesse Fang
    Introduction, 269(1)
    Commonly Used Profiles, 270(2)
    Profile Collection, 272(4)
    Profile Usage Models, 276(2)
    Profile-Based ILP Speculations, 278(4)
    Profile-Based Data Prefetching, 282(5)
    Profile-Based Thread Level Speculations, 287(3)
    Issues with Profile-Based Speculation, 290(2)
    Summary, 292(9)
    References, 293(8)

Compilation and Speculation, 301(32)
Jin Lin, Wei-Chung Hsu, and Pen-Chung Yew
    Introduction, 301(2)
    Runtime Profiling, 303(7)
    Speculative Analysis Framework, 310(4)
    General Speculative Optimizations, 314(7)
    Recovery Code Generation, 321(8)
    Conclusion, 329(4)
    References, 330(3)

Multithreading and Speculation, 333(22)
Pedro Marcuello, Jesus Sanchez, and Antonio Gonzalez
    Introduction, 333(3)
    Speculative Multithreaded Architectures, 336(3)
    Helper Threads, 339(2)
    Speculative Architectural Threads, 341(8)
    Some Speculative Multithreaded Architectures, 349(1)
    Concluding Remarks, 350(5)
    References, 350(5)

Exploiting Load/Store Parallelism via Memory Dependence Prediction, 355(38)
Andreas Moshovos
    Exploiting Load/Store Parallelism, 357(4)
    Memory Dependence Speculation, 361(2)
    Memory Dependence Speculation Policies, 363(2)
    Mimicking Ideal Memory Dependence Speculation, 365(4)
    Implementation Framework, 369(5)
    Related Work, 374(1)
    Experimental Results, 374(13)
    Summary, 387(6)
    References, 389(4)

Resource Flow Microarchitectures, 393(28)
David A. Morano, David R. Kaeli, and Augustus K. Uht
    Introduction, 393(2)
    Motivation for More Parallelism, 395(1)
    Resource Flow Basic Concepts, 396(10)
    Representative Microarchitectures, 406(10)
    Summary, 416(5)
    References, 417(4)

Index, 421
