
An Introduction to Parallel and Vector Scientific Computation

by Ronald W. Shonkwiler and Lew Lefton

  • ISBN13: 9780521683371
  • ISBN10: 0521683378

  • Format: Paperback
  • Copyright: 2006-08-14
  • Publisher: Cambridge University Press


Summary

In this text, students of applied mathematics, science and engineering are introduced to fundamental ways of thinking about the broad context of parallelism. The authors begin by giving the reader a deeper understanding of the issues through a general examination of timing, data dependencies, and communication. These ideas are implemented with respect to shared memory, parallel and vector processing, and distributed memory cluster computing. Threads, OpenMP, and MPI are covered, along with code examples in Fortran, C, and Java. The principles of parallel computation are applied throughout as the authors cover traditional topics in a first course in scientific computing. Building on the fundamentals of floating point representation and numerical error, a thorough treatment of numerical linear algebra and eigenvector/eigenvalue problems is provided. By studying how these algorithms parallelize, the reader is able to explore parallelism inherent in other computations, such as Monte Carlo methods.

Author Biography

Ronald W. Shonkwiler is a professor in the School of Mathematics at the Georgia Institute of Technology. Lew Lefton is the Director of Information Technology for the School of Mathematics and the College of Sciences at the Georgia Institute of Technology.

Table of Contents

Preface xi
PART I MACHINES AND COMPUTATION 1
1 Introduction – The Nature of High-Performance Computation 3
1.1 Computing Hardware Has Undergone Vast Improvement 4
1.2 SIMD – Vector Computers 9
1.3 MIMD – True, Coarse-Grain Parallel Computers 14
1.4 Classification of Distributed Memory Computers 16
1.5 Amdahl's Law and Profiling 20
Exercises 23
2 Theoretical Considerations – Complexity 27
2.1 Directed Acyclic Graph Representation 27
2.2 Some Basic Complexity Calculations 33
2.3 Data Dependencies 37
2.4 Programming Aids 41
Exercises 41
3 Machine Implementations 44
3.1 Early Underpinnings of Parallelization – Multiple Processes 45
3.2 Threads Programming 50
3.3 Java Threads 56
3.4 SGI Threads and OpenMP 63
3.5 MPI 67
3.6 Vector Parallelization on the Cray 80
3.7 Quantum Computation 89
Exercises 97
PART II LINEAR SYSTEMS 101
4 Building Blocks – Floating Point Numbers and Basic Linear Algebra 103
4.1 Floating Point Numbers and Numerical Error 104
4.2 Round-off Error Propagation 110
4.3 Basic Matrix Arithmetic 113
4.4 Operations with Banded Matrices 120
Exercises 124
5 Direct Methods for Linear Systems and LU Decomposition 126
5.1 Triangular Systems 126
5.2 Gaussian Elimination 134
5.3 ijk-Forms for LU Decomposition 150
5.4 Bordering Algorithm for LU Decomposition 155
5.5 Algorithm for Matrix Inversion in log² n Time 156
Exercises 158
6 Direct Methods for Systems with Special Structure 162
6.1 Tridiagonal Systems – Thompson's Algorithm 162
6.2 Tridiagonal Systems – Odd–Even Reduction 163
6.3 Symmetric Systems – Cholesky Decomposition 166
Exercises 170
7 Error Analysis and QR Decomposition 172
7.1 Error and Residual – Matrix Norms 172
7.2 Givens Rotations 180
Exercises 184
8 Iterative Methods for Linear Systems 186
8.1 Jacobi Iteration or the Method of Simultaneous Displacements 186
8.2 Gauss–Seidel Iteration or the Method of Successive Displacements 189
8.3 Fixed-Point Iteration 191
8.4 Relaxation Methods 193
8.5 Application to Poisson's Equation 194
8.6 Parallelizing Gauss–Seidel Iteration 198
8.7 Conjugate Gradient Method 200
Exercises 204
9 Finding Eigenvalues and Eigenvectors 206
9.1 Eigenvalues and Eigenvectors 206
9.2 The Power Method 209
9.3 Jordan Canonical Form 210
9.4 Extensions of the Power Method 215
9.5 Parallelization of the Power Method 217
9.6 The QR Method for Eigenvalues 217
9.7 Householder Transformations 221
9.8 Hessenberg Form 226
Exercises 227
PART III MONTE CARLO METHODS 231
10 Monte Carlo Simulation 233
10.1 Quadrature (Numerical Integration) 233
Exercises 242
11 Monte Carlo Optimization 244
11.1 Monte Carlo Methods for Optimization 244
11.2 IIP Parallel Search 249
11.3 Simulated Annealing 251
11.4 Genetic Algorithms 255
11.5 Iterated Improvement Plus Random Restart 258
Exercises 262
APPENDIX: PROGRAMMING EXAMPLES 265
MPI Examples 267
Send a Message Sequentially to All Processors (C) 267
Send and Receive Messages (Fortran) 268
Fork Example 270
Polynomial Evaluation (C) 270
LAN Example 275
Distributed Execution on a LAN (C) 275
Threads Example 280
Dynamic Scheduling (C) 280
SGI Example 282
Backsub.f (Fortran) 282
References 285
Index 286
