Parallel Programming: Techniques and Applications Using Networked Workstations and Parallel Computers / Edition 2

ISBN-10: 0131405632
ISBN-13: 9780131405639
Pub. Date: 03/04/2004
Publisher: Pearson Education
Price: $213.32 (new; available online through Marketplace sellers)
Used price: $181.25 (Condition: Good; temporarily out of stock online)
Note: Access code and/or supplemental material are not guaranteed to be included with used textbooks.

Overview

This accessible text covers the techniques of parallel programming in a practical manner that enables readers to write and evaluate their own parallel programs. Supported by the National Science Foundation and exhaustively class-tested, it is the first text of its kind that does not require access to a special multiprocessor system, concentrating instead on parallel programs that can be executed on networked computers using freely available parallel software tools. KEY TOPICS: The book covers the timely topic of cluster programming, of interest to many programmers due to the availability of low-cost computers. It uses MPI pseudocode to describe algorithms, allowing them to be implemented with different programming tools, and provides thorough coverage of shared memory programming, including Pthreads and OpenMP. MARKET: Useful as a professional reference for programmers and system administrators.


Product Details

ISBN-13: 9780131405639
Publisher: Pearson Education
Publication date: 03/04/2004
Edition description: Subsequent
Pages: 496
Product dimensions: 6.90(w) x 9.20(h) x 1.20(d) inches

Read an Excerpt

Preface

The purpose of this text is to introduce parallel programming techniques. Parallel programming uses multiple computers, or computers with multiple internal processors, to solve a problem at greater computational speed than a single computer allows. It also offers the opportunity to tackle larger problems; that is, problems with more computational steps or larger memory requirements, the latter because multiple computers and multiprocessor systems often have more total memory than a single computer. In this text, we concentrate upon the use of multiple computers that communicate with one another by sending messages; hence the term message-passing parallel programming. The computers we use can be of different types (PC, SUN, SGI, etc.) but must be interconnected by a network, and a software environment must be present for intercomputer message passing. Suitable networked computers are very widely available as the basic computing platform for students, so the acquisition of specially designed multiprocessor systems can usually be avoided. Several software tools are available for message-passing parallel programming, including PVM and several implementations of MPI, all of which are freely available. Such software can also be used on specially designed multiprocessor systems should these be available. So far as practicable, we discuss techniques and applications in a system-independent fashion.
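
To make the message-passing model concrete, here is a minimal sketch of the style of program described, written in C with MPI (an illustrative example, not code from the book): one process sends an integer to another.

/* Minimal MPI message-passing sketch (illustrative only).
 * Compile with mpicc and run with at least two processes,
 * e.g.: mpirun -np 2 ./send_recv  (binary name is hypothetical)
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);               /* start the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which process am I? */

    if (rank == 0) {
        value = 42;                       /* arbitrary example data */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("Process 1 received %d from process 0\n", value);
    }

    MPI_Finalize();
    return 0;
}

The same source runs on every machine; each process learns its role from its rank, which is the essence of the single-program, multiple-data style commonly used with MPI.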

The text is divided into two parts, Part I and Part II. In Part I, the basic techniques of parallel programming are developed. The chapters of Part I cover all the essential aspects, using simple problems to demonstrate techniques. The techniques themselves, however, can be applied to a wide range of problems. Sample code is usually given first as sequential code and then as realistic parallel pseudocode. Often, the underlying algorithm is already parallel in nature, and the sequential version has "unnaturally" serialized it using loops. Of course, some algorithms have to be reformulated for efficient parallel solution, and the reformulation may not be immediately apparent. One chapter in Part I introduces a type of parallel programming not centered around message-passing multicomputers, but around specially designed shared memory multiprocessor systems. This chapter describes the use of Pthreads, an IEEE standard threads interface that is widely available and can be used on a single computer.
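
For readers unfamiliar with the shared memory style, the following is a minimal Pthreads sketch in C (illustrative only, not code from the book): two threads update a shared counter, with a mutex serializing access.

/* Minimal Pthreads shared memory sketch (illustrative only).
 * Compile with: cc -pthread counter.c
 */
#include <stdio.h>
#include <pthread.h>

static int counter = 0;                       /* shared variable */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);            /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);          /* leave critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("final counter = %d\n", counter);  /* 200000 with the mutex */
    return 0;
}

Unlike the message-passing example, the threads communicate through the shared variable itself; the mutex exists only to prevent the two increments from interleaving.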

The prerequisites for studying Part I are a knowledge of sequential programming, such as in the C language, and of associated data structures. Part I can be studied immediately after basic sequential programming has been mastered. Many assignments here can be attempted without specialized mathematical knowledge. If MPI or PVM is used for the assignments, programs are written in C with message-passing library calls. Descriptions of the specific library calls needed are given in the appendices.

Many parallel computing problems have specially developed algorithms, and in Part II problem-specific algorithms are studied in both non-numeric and numeric domains. Part II requires some mathematical concepts, such as matrices. Topics covered in Part II include sorting, matrix multiplication, linear equations, partial differential equations, image processing, and searching and optimization. Image processing is particularly suitable for parallelization and is included as an interesting application with significant potential for projects; a short sketch of why it parallelizes so well follows below. The fast Fourier transform is discussed in the context of image processing. This important transform is also used in many other areas, including signal processing and voice recognition.
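
As a hedged illustration of why image processing parallelizes so naturally (again, not code from the book): many pixel operations depend only on the pixel itself, so the image can be split into disjoint bands of rows with no communication between workers. A sketch in C with Pthreads, using arbitrary example dimensions:

/* Embarrassingly parallel image operation sketch (illustrative only).
 * Each thread inverts a disjoint band of rows; no coordination is
 * needed because no two threads touch the same pixel.
 * WIDTH, HEIGHT, and NTHREADS are arbitrary example values
 * (HEIGHT is chosen divisible by NTHREADS for simplicity).
 */
#include <stdio.h>
#include <pthread.h>

#define WIDTH    640
#define HEIGHT   480
#define NTHREADS   4

static unsigned char image[HEIGHT][WIDTH];   /* shared, but writes are disjoint */

static void *invert_band(void *arg)
{
    int id = *(int *)arg;
    int rows = HEIGHT / NTHREADS;

    for (int y = id * rows; y < (id + 1) * rows; y++)
        for (int x = 0; x < WIDTH; x++)
            image[y][x] = 255 - image[y][x]; /* each pixel is independent */
    return NULL;
}

int main(void)
{
    pthread_t threads[NTHREADS];
    int ids[NTHREADS];

    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, invert_band, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);

    printf("inverted %dx%d image using %d threads\n", WIDTH, HEIGHT, NTHREADS);
    return 0;
}

The same row-band decomposition carries over directly to the message-passing setting, with each process holding its own band and the results gathered at the end.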

A large selection of "real-life" problems drawn from practical situations is presented at the end of each chapter. These problems require no specialized mathematical knowledge and are a unique aspect of this text. They develop skills in applying parallel programming techniques rather than in solving specific problems such as sorting numbers or multiplying matrices.

Topics in Part I are suitable as additions to normal sequential programming classes. At the University of North Carolina at Charlotte (UNCC), we introduce our freshman students to parallel programming in this way. In that context, the text is a supplement to a sequential programming course text. The sequential programming language is assumed to be C or C++. Part I and Part II together are suitable for a more advanced undergraduate parallel programming/computing course, and at UNCC we use the text in that manner.

Full details of the UNCC environment and other site-specific information can be found at
...

Table of Contents

I. BASIC TECHNIQUES.

1. Parallel Computers.

2. Message-Passing Computing.

3. Embarrassingly Parallel Computations.

4. Partitioning and Divide-and-Conquer Strategies.

5. Pipelined Computations.

6. Synchronous Computations.

7. Load Balancing and Termination Detection.

8. Programming with Shared Memory.

9. Distributed Shared Memory Systems and Programming.

II. ALGORITHMS AND APPLICATIONS.

10. Sorting Algorithms.

11. Numerical Algorithms.

12. Image Processing.

13. Searching and Optimization.

Appendix A: Basic MPI Routines.

Appendix B: Basic Pthread Routines.

Appendix C: OpenMP Directives, Library Functions, and Environment Variables.

Index.
