- Paperback
A hands-on guide to writing a Message Passing Interface implementation, this book takes the reader on a tour across major MPI implementations, the best optimization techniques, application-relevant usage hints, and a historical retrospective of the MPI world, all based on a quarter of a century spent inside MPI. Readers will learn to write MPI implementations from scratch, and to design and optimize communication mechanisms using pragmatic subsetting as the guiding principle. Inside the Message Passing Interface also covers MPI quirks and tricks for achieving the best performance.
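To make the subject concrete for readers new to MPI, here is a minimal sketch of the kind of program any implementation built in this book must support. It uses only standard MPI calls; the program itself is illustrative and not taken from the book.

```c
/* Minimal two-process ping: rank 0 sends one integer to rank 1.
 * Build with an MPI compiler wrapper, e.g.: mpicc ping.c && mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 42;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}
```

In essence, the book is about everything that has to happen between that MPI_Send and that MPI_Recv.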
Other customers were also interested in
- Mark Beckner: SharePoint Online Development, Configuration, and Administration (27,99 €)
- Guillaume Pitron: The Dark Cloud (18,99 €)
- Jaron Lanier: Ten Arguments for Deleting Your Social Media Accounts Right Now (12,99 €)
- Modern Perspectives on Virtual Communications and Social Networking (147,99 €)
- Game Theory for Wireless Communications and Networking (90,99 €)
- Michael G. Solomon: Fundamentals of Communications and Networking with Cloud Labs Access [With eBook] (210,99 €)
- Michael G. Solomon: Fundamentals of Communications and Networking (112,99 €)
Dr. Alexander Supalov created the Intel Cluster Tools product line, including the Intel MPI Library, which he designed and led between 2003 and 2015. He invented the common MPICH ABI and also guided Intel efforts in the MPI Forum during the development of the MPI-2.1, MPI-2.2, and MPI-3 standards. Before that, Alexander designed new finite-element mesh-generation methods, contributing to the PARMACS and PARASOL interfaces, and developed the world's first full MPI-2 and IMPI implementations. He graduated from the Moscow Institute of Physics and Technology in 1990, and earned his PhD in applied mathematics at the Institute of Numerical Mathematics of the Russian Academy of Sciences in 1995. Alexander holds 26 patents (more pending worldwide).
Product details
- Publisher: De Gruyter
- Number of pages: 384
- Publication date: September 24, 2018
- Language: English
- Dimensions: 240 mm x 170 mm x 21 mm
- Weight: 716 g
- ISBN-13: 9781501515545
- ISBN-10: 1501515543
- Item no.: 48064515
Dr. Alexander Supalov, Supalov HPC, Germany
- Introduction - Learn what awaits you inside the book
- What this book is about
- Who should read this book
- Notation and conventions
- How to read this book
- Overview
- Parallel computer
- Intraprocessor parallelism
- Interprocessor parallelism
- Exercises
- MPI standard
- MPI history
- Related standards
- Exercises
- MPI subsetting (see the subset sketch after this chapter outline)
- Motivation
- Typical examples
- Implementation practice
- Exercises
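The subsetting theme above can be illustrated with a header. The selection of calls and the MPIsub_ names below are hypothetical, chosen here to show what a pragmatic first subset might expose; they are not the book's actual interface.

```c
/* mpisub.h - hypothetical minimal MPI subset: blocking point-to-point
 * plus a few blocking collectives, enough for many real applications. */
#ifndef MPISUB_H
#define MPISUB_H

typedef struct MPIsub_status { int source, tag, count; } MPIsub_status;

int MPIsub_Init(int *argc, char ***argv);
int MPIsub_Finalize(void);
int MPIsub_Comm_size(int *size);   /* one implicit "world" communicator */
int MPIsub_Comm_rank(int *rank);

int MPIsub_Send(const void *buf, int count, int dest, int tag);
int MPIsub_Recv(void *buf, int count, int source, int tag, MPIsub_status *st);

int MPIsub_Barrier(void);
int MPIsub_Bcast(void *buf, int count, int root);

#endif /* MPISUB_H */
```

Everything a full MPI library does beyond such a core can be viewed as an extension of this skeleton, which is what makes subsetting a useful guiding principle.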
- Shared memory - Learn how to create a simple MPI subset capable of basic blocking point-to-point and collective operations over shared memory (a protocol sketch follows this chapter outline)
- Subset definition
- General assumptions
- Blocking point-to-point communication
- Blocking collective operations
- Exercises
- Communication mechanisms
- Basic communication
- Intraprocess performance
- Interprocess performance
- Exercises
- Startup and termination
- Process creation
- Two processes
- More processes
- Connection establishment
- Process termination
- Exercises
- Blocking point-to-point communication
- Limited message length
- Blocking protocol
- Unlimited message length
- Double buffering
- Eager protocol
- Rendezvous protocol
- Exercises
- Blocking collective operations
- Naive algorithms
- Barrier
- Broadcast
- Reduce and Allreduce
- Exercises
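The eager and rendezvous protocols in the outline above differ in when the payload moves: eager copies small messages immediately into preallocated shared cells, while rendezvous first handshakes so that large payloads can be streamed in bounded chunks. Below is a sketch of the sender-side dispatch; the cell_* helpers and the cutoff value are assumptions for illustration, not the book's code.

```c
/* Hypothetical sender-side protocol dispatch for a shared-memory device.
 * The cell_* helpers are assumed primitives over double-buffered
 * shared-memory cells; they are illustrative, not book code. */
#include <stddef.h>

#define EAGER_LIMIT 16384   /* bytes; a tunable cutoff, value illustrative */

extern int cell_put_eager(int dest, int tag, const void *buf, size_t len);
extern int cell_put_rts(int dest, int tag, size_t len);   /* request to send */
extern int cell_wait_ack(int dest, int tag);              /* clear to send */
extern int cell_stream_payload(int dest, int tag, const void *buf, size_t len);

static int shm_send(const void *buf, size_t len, int dest, int tag)
{
    if (len <= EAGER_LIMIT)
        /* Eager: one copy into the receiver's preallocated cell; done. */
        return cell_put_eager(dest, tag, buf, len);

    /* Rendezvous: announce the message, wait until the receiver has
     * posted a matching receive, then stream the payload in chunks. */
    int rc = cell_put_rts(dest, tag, len);
    if (rc == 0)
        rc = cell_wait_ack(dest, tag);
    if (rc == 0)
        rc = cell_stream_payload(dest, tag, buf, len);
    return rc;
}
```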
- Sockets - Learn how to create an MPI subset capable of all point-to-point and blocking collective operations over Ethernet and other IP-capable networks (a matching sketch follows this chapter outline)
- Subset definition
- General assumptions
- Blocking point-to-point communication
- Nonblocking point-to-point operations
- Blocking collective operations
- Exercises
- Communication mechanisms
- Basic communication
- Intranode performance
- Internode performance
- Exercises
- Synchronous progress engine
- Communication establishment
- Data transfer
- Exercises
- Startup and termination
- Process creation
- Startup command
- Process daemon
- Out-of-band communication
- Host name resolution
- Connection establishment
- At startup (eager)
- On request (lazy)
- Process termination
- Exercises
- Blocking point-to-point communication
- Source and tag matching
- Unexpected messages
- Exercises
- Nonblocking point-to-point communication
- Request management
- Exercises
- Blocking collective operations
- Communication context
- Basic algorithms
- Tree-based algorithms
- Circular algorithms
- Hypercube algorithms
- Exercises
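Source and tag matching from the outline above is conventionally built on two queues: receives posted before a message arrives, and unexpected messages that arrive before a matching receive. Here is a sketch of the unexpected-queue lookup; the types and wildcard constants are hypothetical, mirroring the semantics of MPI_ANY_SOURCE and MPI_ANY_TAG.

```c
/* Hypothetical unexpected-message queue lookup. A receive first scans
 * this queue; only if nothing matches is it added to the posted queue. */
#include <stddef.h>

#define SUB_ANY_SOURCE (-1)
#define SUB_ANY_TAG    (-1)

struct msg {
    int source, tag;
    void *payload;
    size_t len;
    struct msg *next;
};

static int matches(const struct msg *m, int source, int tag)
{
    return (source == SUB_ANY_SOURCE || source == m->source) &&
           (tag == SUB_ANY_TAG || tag == m->tag);
}

/* Unlink and return the first matching message, preserving FIFO order
 * per (source, tag) pair as MPI ordering rules require; NULL if none. */
static struct msg *unexpected_find(struct msg **head, int source, int tag)
{
    for (struct msg **pp = head; *pp; pp = &(*pp)->next) {
        if (matches(*pp, source, tag)) {
            struct msg *m = *pp;
            *pp = m->next;
            m->next = NULL;
            return m;
        }
    }
    return NULL;
}
```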
- OFA libfabrics - Learn how to create an MPI subset capable of all point-to-point and collective operations over InfiniBand and upcoming networks (a provider-discovery sketch follows this chapter outline)
- Subset definition
- General assumptions
- Point-to-point operations
- Collective operations
- Exercises
- Communication mechanisms
- Basic communication
- Intranode performance
- Internode performance
- Exercises
- Startup and termination
- Process creation
- Credential exchange
- Connection establishment
- Process termination
- Exercises
- Point-to-point communication
- Blocking communication
- Nonblocking communication
- Exercises
- Collective operations
- Advanced algorithms
- Blocking operations
- Nonblocking operations
- Exercises
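Work against libfabric starts with provider discovery via fi_getinfo(). The sketch below uses documented libfabric calls; the requested API version and the capability bits are illustrative choices, not the book's.

```c
/* List libfabric providers offering reliable datagram endpoints with
 * tagged messaging - roughly what an MPI point-to-point device needs.
 * Build (assuming libfabric is installed): cc probe.c -lfabric */
#include <rdma/fabric.h>
#include <rdma/fi_errno.h>
#include <stdio.h>

int main(void)
{
    struct fi_info *info, *cur;
    struct fi_info *hints = fi_allocinfo();
    if (!hints)
        return 1;

    hints->ep_attr->type = FI_EP_RDM;       /* reliable, unconnected */
    hints->caps = FI_MSG | FI_TAGGED;       /* send/recv with tags */

    int ret = fi_getinfo(FI_VERSION(1, 5), NULL, NULL, 0, hints, &info);
    if (ret) {
        fprintf(stderr, "fi_getinfo: %s\n", fi_strerror(-ret));
        fi_freeinfo(hints);
        return 1;
    }
    for (cur = info; cur; cur = cur->next)
        printf("provider: %s, fabric: %s\n",
               cur->fabric_attr->prov_name, cur->fabric_attr->name);

    fi_freeinfo(info);
    fi_freeinfo(hints);
    return 0;
}
```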
- Advanced features - Learn how to add advanced MPI features, including but not limited to heterogeneity, one-sided communication, file I/O, and language bindings (a datatype sketch follows this chapter outline)
- Communication modes
- Standard
- Buffered
- Synchronous
- Heterogeneity
- Basic datatypes
- Simple datatypes
- Derived datatypes
- Exercises
- Groups, communicators, topologies
- Group management
- Communicator management
- Process topologies
- Exercises
- One-sided communication
- Mapped implementation
- Native implementation
- Exercises
- File I/O
- Standard I/O
- MPI file I/O
- Exercises
- Language bindings
- Fortran
- C++
- Java
- Python
- Exercises
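Derived datatypes, listed among the advanced features above, let an application describe non-contiguous data such as a matrix column so that the library can avoid manual packing. A standard-API illustration (not the book's code):

```c
/* Send one column of a row-major N x N matrix without packing it
 * manually: MPI_Type_vector describes N blocks of 1 double, stride N. */
#include <mpi.h>

#define N 4

void send_column(double matrix[N][N], int col, int dest)
{
    MPI_Datatype column;

    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column); /* count, blocklen, stride */
    MPI_Type_commit(&column);
    MPI_Send(&matrix[0][col], 1, column, dest, 0, MPI_COMM_WORLD);
    MPI_Type_free(&column);
}
```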
- Optimization - Learn how to optimize MPI internally by using advanced implementation techniques and available special hardware (a thread-level sketch follows this chapter outline)
- Direct data transfer
- Direct memory access
- Remote direct memory access
- Exercises
- Threads
- Thread support level
- Threads as MPI processes
- Shared memory extensions
- Exercises
- Multiple fabrics
- Synchronous progress engine
- Asynchronous progress engine
- Hybrid progress engine
- Exercises
- Dedicated hardware
- Synchronization
- Special memory
- Auxiliary networks
- Exercises
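The thread support level named in the outline above is negotiated at startup, and an implementation may legally grant less than was requested, which is why the granted level must be checked. A standard-API illustration:

```c
/* Request full thread support and check what the library granted. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "warning: only thread level %d provided\n", provided);
    /* ... application code ... */
    MPI_Finalize();
    return 0;
}
```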
- Look ahead - Learn to recognize MPI's advantages and drawbacks to better assess its future
- MPI axioms
- Reliable data transfer
- Ordered message delivery
- Dense process rank sequence
- Exercises
- MPI-4 en route
- Fault tolerance
- Exercises
- Beyond MPI
- Exascale challenge
- Exercises
- References - Learn about books that may further extend your knowledge
Appendices
- MPI Families - Learn about major MPI implementation families, their genesis, architecture and relative performance
- MPICH
- Genesis
- Architecture
- Details
- MPICH
- MVAPICH
- Intel MPI
- ...
- Exercises
- Open MPI
- Genesis
- Architecture
- Details
- Exercises
- Comparison
- Market
- Features
- Performance
- Exercises
- Alternative interfaces - Learn about other popular interfaces that are used to implement MPI
- DAPL
- ...
- Exercises
- SHMEM
- ...
- Exercises
- GASNet
- ...
- Exercises
- Portals
- ...
- Exercises
- Solutions to all exercises - Learn how to answer all those questions