This workshop is a two-hour, entirely hands-on session in which participants will work through exercises to parallelize model scientific applications using OpenMP, a shared-memory application programming interface. It builds on the previous Introduction to Parallel Computing on High-Performance Systems workshop, which explored concepts and tools such as parallel loop scheduling, explicit data declarations, reduction clauses, and OpenMP library functions. Participants will be granted access to a supercomputing cluster through a supporting XSEDE allocation.
You will learn how to (short illustrative sketches of each topic follow the list):
- Balance workloads across threads in parallel loops using different schedules (static, dynamic, guided) and tune your loop parallelization for optimal performance
- Explicitly declare data-sharing attributes (private, shared) to avoid race conditions and improve the clarity of your code, facilitating collaboration
- Use reduction clauses (sum, maximum, minimum) to compute aggregate values without race conditions
- Use OpenMP library functions to spawn teams of threads, mix data and task parallelism, insert barriers, and much more
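
To give a flavor of the scheduling exercises, here is a minimal sketch (not taken from the workshop materials) of a loop whose iterations have uneven cost. The `schedule` clause controls how iterations are handed out to threads; the array size, chunk size, and synthetic workload are illustrative assumptions.

```c
/* Sketch: load balancing with schedule clauses (illustrative workload).
   Compile with: gcc -fopenmp sched.c -lm */
#include <stdio.h>
#include <math.h>
#include <omp.h>

#define N 100000

int main(void) {
    static double a[N];
    double t0 = omp_get_wtime();

    /* Iterations get more expensive as i grows, so a static schedule
       would leave the last threads with most of the work; dynamic (or
       guided) scheduling hands out chunks on demand to balance the load. */
    #pragma omp parallel for schedule(dynamic, 100)
    for (int i = 0; i < N; i++) {
        double x = 0.0;
        for (int j = 0; j < i % 1000; j++)
            x += sin((double)j);
        a[i] = x;
    }

    printf("elapsed: %f s\n", omp_get_wtime() - t0);
    return 0;
}
```

Swapping `dynamic` for `static` or `guided` and re-timing the loop is the kind of tuning experiment the first objective describes.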
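Explicit data-sharing clauses can be sketched as follows; `default(none)`, which forces every variable's sharing attribute to be stated, is a common defensive choice. The variable names are illustrative.

```c
/* Sketch: explicit private/shared declarations (illustrative names).
   Compile with: gcc -fopenmp data.c */
#include <stdio.h>
#include <omp.h>

#define N 8

int main(void) {
    int a[N];
    int scale = 10;  /* read-only inside the loop, so sharing it is safe */
    int tmp;         /* scratch variable: must be private to avoid a race */

    /* default(none) makes the compiler reject any variable whose sharing
       attribute is not declared; the loop index i is private automatically. */
    #pragma omp parallel for default(none) private(tmp) shared(a, scale)
    for (int i = 0; i < N; i++) {
        tmp = i * i;        /* each thread writes its own copy of tmp */
        a[i] = scale * tmp;
    }

    for (int i = 0; i < N; i++)
        printf("a[%d] = %d\n", i, a[i]);
    return 0;
}
```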
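A reduction sketch, assuming OpenMP 3.1 or later for the `max` operator; the data values are placeholders.

```c
/* Sketch: sum and max reductions without explicit locking.
   Compile with: gcc -fopenmp reduce.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void) {
    static double x[N];
    for (int i = 0; i < N; i++)
        x[i] = (double)(i % 97);   /* placeholder data */

    double sum = 0.0, maxval = x[0];

    /* Each thread accumulates into a private copy; OpenMP combines
       the partial results safely at the end of the loop. */
    #pragma omp parallel for reduction(+:sum) reduction(max:maxval)
    for (int i = 0; i < N; i++) {
        sum += x[i];
        if (x[i] > maxval) maxval = x[i];
    }

    printf("sum = %f, max = %f\n", sum, maxval);
    return 0;
}
```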
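Finally, a sketch of a few OpenMP runtime library calls together with an explicit barrier:

```c
/* Sketch: runtime library calls and an explicit barrier.
   Compile with: gcc -fopenmp runtime.c */
#include <stdio.h>
#include <omp.h>

int main(void) {
    omp_set_num_threads(4);   /* request a team of four threads */

    #pragma omp parallel
    {
        int tid = omp_get_thread_num();   /* this thread's id in the team */
        int nth = omp_get_num_threads();  /* actual team size */
        printf("thread %d of %d starting\n", tid, nth);

        /* No thread continues past this point until all have arrived. */
        #pragma omp barrier

        if (tid == 0)
            printf("all %d threads passed the barrier\n", nth);
    }
    return 0;
}
```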
Prerequisites:
- Basic C/C++ or Fortran programming skills
- Basic parallel programming knowledge
- Basic Linux skills (e.g. compiling code, navigating the file system)
- Familiarity with a text editor available on remote Linux servers, such as vi, nano, or emacs
Registration is required. Go to this form to register by February 6.