To the outside world, a "supercomputer" appears to be a single system. In fact, it's a cluster of computers that share a local area network and have the ability to work together on a single problem as a team. Many businesses used to consider supercomputing beyond the reach of their budgets, but new Linux applications have made high-performance clusters more affordable than ever. These days, the promise of low-cost supercomputing is one of the main reasons many businesses choose Linux over other operating systems.

This new guide covers everything a newcomer to clustering will need to plan, build, and deploy a high-performance Linux cluster. The book focuses on clustering for high-performance computation, although much of its information also applies to clustering for high availability (failover and disaster recovery). The book discusses the key tools you'll need to get started, including good practices to use while exploring the tools and growing a system. You'll learn about planning, hardware choices, bulk installation of Linux on multiple systems, and other basic considerations. Then, you'll learn about software options that can save you hours, or even weeks, of deployment time.

Since a wide variety of options exist in each area of clustering software, the author discusses the pros and cons of the major free software projects and chooses those that are most likely to be helpful to new cluster administrators and programmers. A few of the projects introduced in the book include:

- MPI, the most popular programming library for clusters. The book offers simple but realistic introductory examples along with some pointers for advanced use (a minimal sketch in C follows this description).
- OSCAR and Rocks, two comprehensive installation and administrative systems.
- openMosix, a set of Linux kernel extensions that migrate processes transparently for load balancing, making it a convenient tool for distributing jobs.
- PVFS, one of the parallel filesystems that make clustering I/O easier.
- C3, a set of commands for administering multiple systems.

Ganglia, OpenPBS, and cloning tools (Kickstart, SIS, and G4U) are also covered. The book looks at cluster installation packages (OSCAR and Rocks) and then considers the core packages individually, for greater depth or for those who want to do a custom installation. Guidelines for debugging, profiling, performance tuning, and managing jobs from multiple users round out this immensely useful book.
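As a taste of the kind of introductory MPI material the book covers, here is a minimal MPI "hello world" in C. This is an illustrative sketch rather than an example taken from the book; it assumes an MPI implementation such as LAM/MPI or MPICH is installed, with the usual mpicc and mpirun wrappers available:

    /* hello.c -- a minimal MPI program (illustrative sketch, not from the book).
     * Compile:  mpicc hello.c -o hello
     * Run:      mpirun -np 4 ./hello
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's ID within the job */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                       /* shut MPI down cleanly */
        return 0;
    }

Each copy of the program started by mpirun prints its own rank; dividing work among processes by rank is the starting point for real parallel programs on a cluster.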
Joseph D. Sloan has been working with computers since the mid-1970s. He began using Unix as a graduate student in 1981, first as an applications programmer and later as a system programmer and system administrator. Since 1988 he has taught computer science, first at Lander University and more recently at Wofford College, where he can be found using the software described in this book.
Table of Contents
Preface
    Audience
    Organization
    Conventions
    How to Contact Us
    Using Code Examples
    Acknowledgments
Part I: An Introduction to Clusters
Chapter 1: Cluster Architecture
    1.1 Modern Computing and the Role of Clusters
    1.2 Types of Clusters
    1.3 Distributed Computing and Clusters
    1.4 Limitations
    1.5 My Biases
Chapter 2: Cluster Planning
    2.1 Design Steps
    2.2 Determining Your Cluster's Mission
    2.3 Architecture and Cluster Software
    2.4 Cluster Kits
    2.5 CD-ROM-Based Clusters
    2.6 Benchmarks
Chapter 3: Cluster Hardware
    3.1 Design Decisions
    3.2 Environment
Chapter 4: Linux for Clusters
    4.1 Installing Linux
    4.2 Configuring Services
    4.3 Cluster Security
Part II: Getting Started Quickly
Chapter 5: openMosix
    5.1 What Is openMosix?
    5.2 How openMosix Works
    5.3 Selecting an Installation Approach
    5.4 Installing a Precompiled Kernel
    5.5 Using openMosix
    5.6 Recompiling the Kernel
    5.7 Is openMosix Right for You?
Chapter 6: OSCAR
    6.1 Why OSCAR?
    6.2 What's in OSCAR
    6.3 Installing OSCAR
    6.4 Security and OSCAR
    6.5 Using switcher
    6.6 Using LAM/MPI with OSCAR
Chapter 7: Rocks
    7.1 Installing Rocks
    7.2 Managing Rocks
    7.3 Using MPICH with Rocks
Part III: Building Custom Clusters
Chapter 8: Cloning Systems
    8.1 Configuring Systems
    8.2 Automating Installations
    8.3 Notes for OSCAR and Rocks Users
Chapter 9: Programming Software
    9.1 Programming Languages
    9.2 Selecting a Library
    9.3 LAM/MPI
    9.4 MPICH
    9.5 Other Programming Software
    9.6 Notes for OSCAR Users
    9.7 Notes for Rocks Users
Chapter 10: Management Software
    10.1 C3
    10.2 Ganglia
    10.3 Notes for OSCAR and Rocks Users
Chapter 11: Scheduling Software
    11.1 OpenPBS
    11.2 Notes for OSCAR and Rocks Users
Chapter 12: Parallel Filesystems
    12.1 PVFS
    12.2 Using PVFS
    12.3 Notes for OSCAR and Rocks Users
Part IV: Cluster Programming
Chapter 13: Getting Started with MPI
    13.1 MPI
    13.2 A Simple Problem
    13.3 An MPI Solution
    13.4 I/O with MPI
    13.5 Broadcast Communications
Chapter 14: Additional MPI Features
    14.1 More on Point-to-Point Communication
    14.2 More on Collective Communication
    14.3 Managing Communicators
    14.4 Packaging Data
Chapter 15: Designing Parallel Programs
    15.1 Overview
    15.2 Problem Decomposition
    15.3 Mapping Tasks to Processors
    15.4 Other Considerations
Chapter 16: Debugging Parallel Programs
    16.1 Debugging and Parallel Programs
    16.2 Avoiding Problems
    16.3 Programming Tools
    16.4 Rereading Code
    16.5 Tracing with printf
    16.6 Symbolic Debuggers
    16.7 Using gdb and ddd with MPI
    16.8 Notes for OSCAR and Rocks Users
Chapter 17: Profiling Parallel Programs
    17.1 Why Profile?
    17.2 Writing and Optimizing Code
    17.3 Timing Complete Programs
    17.4 Timing C Code Segments
    17.5 Profilers
    17.6 MPE
    17.7 Customized MPE Logging
    17.8 Notes for OSCAR and Rocks Users
Part V: Appendix
Appendix A: References
    A.1 Books
    A.2 URLs
Colophon