MPI: The Complete Reference (Vol. 1) - 2nd Edition

by Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra
  • ISBN13: 9780262692151
  • ISBN10: 0262692155

  • Edition: 2nd
  • Format: Paperback
  • Copyright: 1998-09-19
  • Publisher: MIT Press
List Price: $10.75

Summary

Since its release in the summer of 1994, the Message Passing Interface (MPI) specification has become a standard for message-passing libraries for parallel computation. More than a dozen implementations exist, on platforms ranging from the IBM SP-2 supercomputer to PCs running Windows NT. The initial MPI standard, known as MPI-1, has been modified over the last two years. This volume, the definitive reference manual for the latest version of MPI-1, contains a complete specification of the standard. It is annotated with comments that clarify complicated issues, including why certain design choices were made, how users are intended to use the interface, and how implementors should construct their versions of MPI. The volume also provides many detailed, illustrative programming examples.
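
For readers new to the standard, here is a minimal sketch of the message-passing style this volume specifies: process 0 sends one integer to process 1 over MPI_COMM_WORLD, using only MPI-1 calls documented in the book (MPI_Init, MPI_Comm_rank, MPI_Send, MPI_Recv, MPI_Finalize). The compile and launch commands (mpicc, mpirun) are conventions of particular MPI implementations, not part of the standard itself.

    /* Minimal MPI-1 point-to-point example (see chapter 2).
       Build and run depend on your implementation, e.g.:
       mpicc hello.c -o hello && mpirun -np 2 ./hello */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);               /* startup (chapter 7) */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* my rank in the communicator */

        if (rank == 0) {
            value = 42;
            /* send one int to rank 1 with tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* blocking receive from rank 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            printf("process 1 received %d from process 0\n", value);
        }

        MPI_Finalize();                       /* exit (chapter 7) */
        return 0;
    }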

Table of Contents

Series Foreword
Preface
1 Introduction
  1.1 Message Passing and Standardization
  1.2 Who Should Use This Standard?
  1.3 What Platforms Are Targets for Implementation?
  1.4 What Is Included in MPI?
  1.5 Version of MPI
  1.6 MPI Conventions and Design Choices
    1.6.1 Document Notation
    1.6.2 Naming Conventions
    1.6.3 Procedure Specification
  1.7 Semantic Terms
  1.8 Function Argument Data Type
    1.8.1 Opaque Objects
    1.8.2 Array Arguments
    1.8.3 State
    1.8.4 Named Constants
    1.8.5 Choice
    1.8.6 Addresses
    1.8.7 File Offsets
  1.9 Language Binding
    1.9.1 Deprecated Names and Functions
    1.9.2 Fortran Binding Issues
    1.9.3 C Binding Issues
    1.9.4 C++ Binding Issues
  1.10 Processes
  1.11 Error Handling
  1.12 Implementation Issues
    1.12.1 Independence of Basic Runtime Routines
    1.12.2 Interaction with Signals
2 Point-to-Point Communication
  2.1 Introduction and Overview
  2.2 Blocking Send and Receive Operations
    2.2.1 Blocking Send
    2.2.2 Send Buffer and Message Data
    2.2.3 Message Envelope
    2.2.4 Comments on Send
    2.2.5 Blocking Receive
    2.2.6 Receive Buffer
    2.2.7 Message Selection
    2.2.8 Return Status
    2.2.9 Comments on Receive
  2.3 Datatype Matching and Data Conversion
    2.3.1 Type Matching Rules
    2.3.2 Data Conversion
    2.3.3 Comments on Data Conversion
  2.4 Semantics of Blocking Point-to-Point
    2.4.1 Buffering and Safety
    2.4.2 Order
    2.4.3 Progress
    2.4.4 Fairness
  2.5 Example--Jacobi Iteration
  2.6 Send-Receive
  2.7 Null Processes
  2.8 Nonblocking Communication
    2.8.1 Request Objects
    2.8.2 Posting Operations
    2.8.3 Completion Operations
    2.8.4 Examples
    2.8.5 Freeing Requests
    2.8.6 Nondestructive Test of status
    2.8.7 Semantics of Nonblocking Communications
    2.8.8 Comments on Semantics of Nonblocking Communications
  2.9 Multiple Completions
  2.10 Probe and Cancel
  2.11 Persistent Communication Requests
  2.12 Communication-Complete Calls with Null Request Handles
  2.13 Communication Modes
    2.13.1 Blocking Calls
    2.13.2 Nonblocking Calls
    2.13.3 Persistent Requests
    2.13.4 Buffer Allocation and Usage
    2.13.5 Model Implementation of Buffered Mode
    2.13.6 Comments on Communication Modes
3 User-Defined Datatypes and Packing
  3.1 Introduction
  3.2 Introduction to User-Defined Datatypes
  3.3 Datatype Accessors
  3.4 Datatype Constructors
    3.4.1 Dup
    3.4.2 Contiguous
    3.4.3 Vector
    3.4.4 Hvector
    3.4.5 Indexed
    3.4.6 Block Indexed
    3.4.7 Hindexed
    3.4.8 Struct
  3.5 Use of Derived Datatypes
    3.5.1 Commit
    3.5.2 Deallocation
    3.5.3 Relation to count
    3.5.4 Type Matching
    3.5.5 Message Length
  3.6 Address Function
  3.7 Datatype Resizing
    3.7.1 True Extent of Datatypes
  3.8 Absolute Addresses
  3.9 Array Datatype Constructors
  3.10 Portability of Datatypes
  3.11 Deprecated Functions
  3.12 Pack and Unpack
    3.12.1 Canonical MPI_PACK and MPI_UNPACK
    3.12.2 Derived Datatypes versus Pack/Unpack
4 Collective Communications
  4.1 Introduction and Overview
  4.2 Operational Details
  4.3 Communicator Argument
  4.4 Barrier Synchronization
  4.5 Global Communication Functions
  4.6 Broadcast
    4.6.1 An Example Using MPI_BCAST
  4.7 Gather
    4.7.1 Examples Using MPI_GATHER
    4.7.2 Gather, Vector Variant
    4.7.3 Examples Using MPI_GATHERV
  4.8 Scatter
    4.8.1 An Example Using MPI_SCATTER
    4.8.2 Scatter: Vector Variant
    4.8.3 Examples Using MPI_SCATTERV
  4.9 Gather to All
    4.9.1 An Example Using MPI_ALLGATHER
    4.9.2 Gather to All: Vector Variant
  4.10 All to All Scatter/Gather
    4.10.1 All to All: Vector Variant
    4.10.2 All to All: Generalized Function
    4.10.3 An Example Using MPI_ALLTOALLW
  4.11 Global Reduction Operations
    4.11.1 Reduce
    4.11.2 Predefined Reduce Operations
    4.11.3 MINLOC and MAXLOC
    4.11.4 All Reduce
    4.11.5 Reduce-Scatter
  4.12 Scan Operations
    4.12.1 Inclusive Scan
    4.12.2 Exclusive Scan
  4.13 User-Defined Operations for Reduce and Scan
  4.14 The Semantics of Collective Communications
5 Communicators
  5.1 Introduction
    5.1.1 Division of Processes
    5.1.2 Avoiding Message Conflicts between Modules
    5.1.3 Extensibility by Users
    5.1.4 Safety
  5.2 Overview
    5.2.1 Groups
    5.2.2 Communicator
    5.2.3 Communication Domains
    5.2.4 Compatibility with Previous Practice
  5.3 Group Management
    5.3.1 Group Accessors
    5.3.2 Group Constructors
    5.3.3 Group Destructors
  5.4 Communicator Management
    5.4.1 Communicator Accessors
    5.4.2 Communicator Constructors
    5.4.3 Communicator Destructor
  5.5 Safe Parallel Libraries
  5.6 Caching
    5.6.1 Introduction
    5.6.2 Caching Functions
    5.6.3 Deprecated Functions
  5.7 Intercommunication
    5.7.1 Introduction
    5.7.2 Intercommunicator Accessors
    5.7.3 Intercommunicator Constructors and Destructors
    5.7.4 Examples
6 Process Topologies
  6.1 Introduction
  6.2 Virtual Topologies
  6.3 Overlapping Topologies
  6.4 Embedding in MPI
  6.5 Cartesian Topology Functions
    6.5.1 Cartesian Constructor Function
    6.5.2 Cartesian Convenience Function: MPI_DIMS_CREATE
    6.5.3 Cartesian Inquiry Functions
    6.5.4 Cartesian Translator Functions
    6.5.5 Cartesian Shift Function
    6.5.6 Cartesian Partition Function
    6.5.7 Cartesian Low-Level Functions
  6.6 Graph Topology Functions
    6.6.1 Graph Constructor Function
    6.6.2 Graph Inquiry Functions
    6.6.3 Graph Information Functions
    6.6.4 Low-Level Graph Functions
  6.7 Topology Inquiry Functions
  6.8 An Application Example
7 Environmental Management
  7.1 MPI Process Startup
  7.2 Initialization and Exit
  7.3 Implementation Information
    7.3.1 MPI Version Number
    7.3.2 Environmental Inquiries
  7.4 Timers and Synchronization
  7.5 Error Handling
    7.5.1 Error Handlers
    7.5.2 Deprecated Functions
    7.5.3 Error Codes
  7.6 Interaction with Executing Environment
    7.6.1 Independence of Basic Runtime Routines
    7.6.2 Interaction with Signals in POSIX
8 The MPI Profiling Interface
  8.1 Requirements
  8.2 Discussion
  8.3 Logic of the Design
    8.3.1 Miscellaneous Control of Profiling
  8.4 Examples
    8.4.1 Profiler Implementation
    8.4.2 MPI Library Implementation
    8.4.3 Complications
  8.5 Multiple Levels of Interception
9 Conclusions
  9.1 Design Issues
    9.1.1 Why Is MPI So Big?
    9.1.2 Should We Be Concerned about the Size of MPI?
    9.1.3 Why Does MPI Not Guarantee Buffering?
  9.2 Portable Programming with MPI
    9.2.1 Dependency on Buffering
    9.2.2 Collective Communication and Synchronization
    9.2.3 Ambiguous Communications and Portability
  9.3 Heterogeneous Computing with MPI
  9.4 MPI Implementations
References
Constants Index
Function Index
Index
