TL;DR: A repository of parallel algorithm examples.
The objective of this project is to help you write parallel algorithms using OpenMP, MPI, and OpenCL. Comprehensive documentation for these libraries is available online and won't be repeated here; the resources are linked at the bottom of this page.
Here we focus on the use-case side of things, which is best achieved by working through examples. For each algorithm, a serial version is provided first; we then modify the serial code to obtain parallelized versions using a combination of OpenMP, MPI, and OpenCL.
Please note that the code for the parallel algorithms isn't guaranteed to be the best or most efficient. Most language-level optimizations are also avoided, since they are not the objective here (they are often ugly and impair readability). For the sake of clarity and better understanding, the code is kept as simple as possible.
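As a taste of that workflow, here is a minimal serial sketch of the pi-calculation project listed in the table below: numerical integration of 4/(1+x²) over [0, 1] with the midpoint rule. This is an illustrative sketch, not necessarily the exact code in this repository; `num_steps` is an arbitrary value chosen for the example.

```c
#include <stdio.h>

/* Serial pi estimation: midpoint-rule integration of 4/(1+x^2) over [0,1].
 * num_steps is an arbitrary illustrative value. */
int main(void) {
    const long num_steps = 100000000;
    const double step = 1.0 / (double)num_steps;
    double sum = 0.0;

    for (long i = 0; i < num_steps; i++) {
        double x = ((double)i + 0.5) * step; /* midpoint of the i-th slice */
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi ~= %.15f\n", sum * step);
    return 0;
}
```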
- OpenMP (Open Multi-Processing)
- An Application Program Interface (API) that may be used to explicitly direct multi-threaded, shared memory parallelism.
- Composed of three primary API components:
- Compiler Directives
- Runtime Library Routines
- Environment Variables
- OpenMP Is Not:
- Meant for distributed memory parallel systems (by itself)
- Necessarily implemented identically by all vendors
- Guaranteed to make the most efficient use of shared memory
- Required to check for data dependencies, data conflicts, race conditions, deadlocks, or code sequences that cause a program to be classified as non-conforming
- Designed to handle parallel I/O; the programmer is responsible for synchronizing input and output.
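To make the directive-based model concrete, here is a hedged sketch of how the serial pi loop above could be parallelized with a single compiler directive: `parallel for` splits the iterations across threads, and the `reduction` clause gives each thread a private copy of `sum` so the accumulator isn't a race condition. This is an illustration, not necessarily the repository's code; compile with, e.g., `gcc -fopenmp`.

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    const long num_steps = 100000000;
    const double step = 1.0 / (double)num_steps;
    double sum = 0.0;

    /* Each thread accumulates a private partial sum; the reduction
     * clause combines them safely at the end of the parallel region. */
    #pragma omp parallel for reduction(+ : sum)
    for (long i = 0; i < num_steps; i++) {
        double x = ((double)i + 0.5) * step;
        sum += 4.0 / (1.0 + x * x);
    }

    printf("pi ~= %.15f\n", sum * step);
    return 0;
}
```

Note how little the serial code changed: OpenMP's shared-memory model lets the directive carry all the parallelization logic.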
- MPI (Message Passing Interface)
- MPI is a specification for the developers and users of message passing libraries. By itself, it is NOT a library - but rather the specification of what such a library should be.
- MPI primarily addresses the message-passing parallel programming model: data is moved from the address space of one process to that of another process through cooperative operations on each process.
- Simply stated, the goal of the Message Passing Interface is to provide a widely used standard for writing message passing programs. The interface attempts to be:
- Practical
- Portable
- Efficient
- Flexible
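As an illustrative sketch of the message-passing model (again, not necessarily the repository's code), the same pi loop can be distributed across processes: each rank sums a strided subset of the slices, and `MPI_Reduce` cooperatively combines the partial sums on rank 0. Build with `mpicc` and launch with `mpirun`.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long num_steps = 100000000;
    const double step = 1.0 / (double)num_steps;
    double local_sum = 0.0, global_sum = 0.0;

    /* Each rank handles slices i = rank, rank + size, rank + 2*size, ... */
    for (long i = rank; i < num_steps; i += size) {
        double x = ((double)i + 0.5) * step;
        local_sum += 4.0 / (1.0 + x * x);
    }

    /* Cooperative operation: every rank contributes; rank 0 gets the sum. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.15f\n", global_sum * step);

    MPI_Finalize();
    return 0;
}
```

Unlike the OpenMP version, each process has its own address space here, so the partial results must be moved explicitly between processes.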
- OpenCL (Open Computing Language)
- It is a framework for writing programs that execute across heterogeneous platforms consisting of central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), field-programmable gate arrays (FPGAs) and other processors or hardware accelerators.
- Well suited to SIMD-style data-parallel processing, where the same kernel executes across many work-items.
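For a flavour of the device side, here is a hedged sketch of an OpenCL C kernel for the same pi integral: each work-item computes one slice and writes its contribution to a buffer that the host then sums (or reduces on-device). The kernel name is an illustrative assumption, the host-side boilerplate (platform, device, queue, and buffer setup) is omitted, and double precision requires device support for the `cl_khr_fp64` extension.

```c
/* Enable double precision (requires device support for cl_khr_fp64). */
#pragma OPENCL EXTENSION cl_khr_fp64 : enable

/* One work-item per integration slice; partial[] is summed by the host. */
__kernel void pi_slice(const long num_steps,
                       const double step,
                       __global double *partial) {
    size_t i = get_global_id(0);
    if ((long)i < num_steps) {
        double x = ((double)i + 0.5) * step;
        partial[i] = 4.0 / (1.0 + x * x);
    }
}
```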
Project | Serial | OpenMP | MPI | MPI+OpenMP | OpenCL |
---|---|---|---|---|---|
Pi calculation | Yes | Yes | Yes | Yes | Yes |
Prime: find the largest prime number and the count of primes below a given upper limit | Yes | Yes | Yes | Yes | - |
Matrix Multiplication | Yes | Yes | - | - | Yes |
Image processing: Nearest neighbour interpolation image scaling (nni) | Yes | Yes | - | - | Yes |
Graph: Breadth first search (bfs) | Yes | Yes | - | - | Yes |
Graph: Single source shortest path (sssp) | Yes | - | - | - | Yes |
- Prerequisites:
- After finishing with the prerequisites, try running the `setup.sh` script. This will download and set up some graph datasets and other files required for testing.
- To build the projects, just run the `build.sh` script.
- To run the projects, use the `run.sh` script. Execute `run.sh` first to get help on how to use the script.
Resources:
- OpenMP
- MPI
- OpenCL
- HPC
- FAQs