Learn how to use the Message Passing Interface (MPI), the standard framework for parallel programming on distributed-memory systems, to parallelize your scientific applications. You will learn the basic concepts of message passing, including domain decomposition, collective communications, and several MPI library functions. Participants will practice these concepts through a hands-on exercise: parallelizing a machine learning application on an XSEDE supercomputing cluster. Follow-up sessions will be offered to help participants complete the exercise.
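To give a flavor of the paradigm before the session, here is a minimal MPI program in C (an illustrative sketch, not part of the course materials): each process queries its rank within the `MPI_COMM_WORLD` communicator and prints a message.

```c
/* Minimal MPI sketch: every process reports its rank. Illustrative only. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id (rank)     */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes    */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down the MPI runtime    */
    return 0;
}
```

On a typical MPI installation this would be compiled with `mpicc hello.c -o hello` and launched with `mpirun -np 4 ./hello`, printing one line per process.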
You will learn:
- The message-passing interface paradigm
- Core components of an MPI message: body, envelope
- MPI processes and communicators
- Collective communications: broadcast messages and reductions (see the sketch after this list)
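As a concrete illustration of the last item, the sketch below (again illustrative, not course material; `MPI_Bcast` and `MPI_Reduce` are standard MPI calls, but the program itself is an assumption) broadcasts a value from rank 0 to all processes and then reduces each process's rank into a global sum on rank 0.

```c
/* Collective-communication sketch: a broadcast followed by a reduction.
 * Illustrative only; not taken from the course materials. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 0, sum = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0)
        value = 42; /* only the root holds the data initially */

    /* Broadcast: rank 0 sends 'value' to every process in the communicator */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Reduction: combine each process's rank into a sum, delivered to rank 0 */
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("broadcast value = %d, sum of ranks = %d\n", value, sum);

    MPI_Finalize();
    return 0;
}
```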
Prerequisites:
- Basic C/C++ or Fortran programming skills
- Basic knowledge of parallel computing concepts
- Familiarity with a text editor available on remote Linux servers, such as vi, nano, or emacs
Registration is required: use this form to register by April 11.