discuss: HOWTO proposal



Subject: Howto proposal
From: ####@####.####
Date: 5 Apr 2001 14:40:25 -0000
Message-Id: <9786.200104051440@deimos.ex.ac.uk>

I would like to write a "parallel programming using MPI" howto, with a
working title of "Parallel programming with MPI: a practical guide".

It would be designed to fit alongside the existing clustering and
parallel programming HOWTOs (the latter has a brief section on MPI)
and would be unashamedly targeted at programmers with no parallel
programming experience who need to become reasonably competent as
quickly as possible. It would be largely platform-independent, with an
appendix on Linux clustering.

FWIW, I'm the systems manager and programmer for an IBM SP parallel
supercomputer at the University of Exeter, UK, and I also manage the
usual assortment of Linux boxes, including an ethernet cluster and an "I
thought it was meant to be here last week" dedicated Beowulf.

Hope you like the provisional outline below.

John

Dr John Rowe
University of Exeter
UK


* Introduction
** Purpose and audience

* Background
** PCs have got faster, supercomputers haven't
** Platform-independent parallelisation software
** Supercomputers still have some advantages

* What is an MPI program?

* Programs that parallelise
** Codes consisting of independent calculations parallelise well
** Data dependency inhibits parallelisation
** Interconnect performance is dominated by latency
** Will my program parallelise using ethernet?
** Summary
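
To illustrate the latency point, this section would give the usual rule
of thumb (an illustrative model, not a measurement): the time to pass a
message of n bytes is roughly

    t(n) = latency + n / bandwidth

On commodity interconnects the fixed latency term dominates for small
messages, so sending a few large messages is far cheaper than sending
many small ones.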

* Strategy of parallelising programs
** High level not low level: top down not bottom up
** Only parallelise the bits that take the time
** Use profilers to find the expensive routines
** The 80/20 rule and Amdahl's law 
** Think data as well as calculations
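
As an example of the sort of analysis this section would present: by
Amdahl's law, if a fraction p of the run time parallelises perfectly
over N processes, the best possible speed-up is

    S(N) = 1 / ((1 - p) + p/N)

so with p = 0.8 (the 80/20 rule) even arbitrarily many processes give a
speed-up of at most 1/0.2 = 5; effort spent parallelising the cheap 20%
is wasted.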

* MPI initialization and information
** mpif.h defines various MPI_* constants
** MPI_INIT initialises MPI
** MPI_COMM_SIZE returns the number of processes
** MPI_COMM_RANK returns the unique rank of each process
** MPI_FINALIZE exits MPI
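
A minimal example of the kind this section would build up to (the
outline uses the Fortran names; the C bindings are shown here for
brevity):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int size, rank;

        MPI_Init(&argc, &argv);               /* initialise MPI */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* how many processes? */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* which one am I? */

        printf("Hello from process %d of %d\n", rank, size);

        MPI_Finalize();                       /* exit MPI */
        return 0;
    }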

* Input/output
** The order of output from shared channels is unpredictable
** Files should be shared for reading only
** Per-process output files should have unique names 
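
For instance, embedding the rank in each file name keeps per-process
output separate (a C sketch; the "out.NNN" name pattern is just for
illustration):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank;
        char fname[32];
        FILE *fp;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* one file per process: out.000, out.001, ... */
        sprintf(fname, "out.%03d", rank);
        fp = fopen(fname, "w");
        fprintf(fp, "process %d has its own file\n", rank);
        fclose(fp);

        MPI_Finalize();
        return 0;
    }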

* Point-to-point communication
** Sends and receives are matched 
** MPI_RECV can wild-card the sender and the tag, MPI_SEND cannot
** MPI_ABORT tries to abort all the MPI processes 
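
A sketch of a matched send/receive pair in C (run with at least two
processes; note that only the receive may wild-card the sender and tag):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 42;
            /* the send must name an explicit destination and tag */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* the receive may wild-card both the sender and the tag */
            MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            printf("received %d from process %d\n",
                   value, status.MPI_SOURCE);
        }

        MPI_Finalize();
        return 0;
    }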

* Collective communication provides high-level data sharing and reduction
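
For example, MPI_REDUCE combines a partial result from every process in
a single call (C sketch; each rank's own number stands in for a real
partial calculation):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        double part, total;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        part = (double) rank;  /* stand-in for a real partial result */

        /* sum every process's contribution onto process 0 */
        MPI_Reduce(&part, &total, 1, MPI_DOUBLE, MPI_SUM,
                   0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d processes = %g\n", size, total);

        MPI_Finalize();
        return 0;
    }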

* Asymmetrical timings: master/slave configurations
** Processes may be divided into groups and subgroups
** Communicators let us choose each process's rank within its subgroup
** MPI_COMM_SPLIT creates new communicators
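
A sketch of the sort of example this section would use: splitting
MPI_COMM_WORLD into even and odd subgroups, each with its own ranks:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int world_rank, colour, sub_rank;
        MPI_Comm subcomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* the colour selects the subgroup; the key orders ranks
           within it */
        colour = world_rank % 2;
        MPI_Comm_split(MPI_COMM_WORLD, colour, world_rank, &subcomm);

        MPI_Comm_rank(subcomm, &sub_rank);
        printf("world rank %d -> subgroup %d, rank %d\n",
               world_rank, colour, sub_rank);

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }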

* Tips, problems and solutions

* Features of MPI not covered in this guide
** Non-blocking and buffered communication
** Process topologies

* Further reading

* Appendix: Using a cluster of workstations for parallel computing

* Appendix: Installing and using MPI

* Appendix: Further sources of information

* Appendix: MPI subroutine reference
