This repository, "High_Performance_Computing," contains the coursework and assignments from the High-Performance Computing course offered at the University of Genoa (UniGe) as part of the Master's degree in Computer Science. It is structured around three primary assignments and a main project, all focused on parallelizing code with three technologies: OpenMP, CUDA, and MPI.
The first assignment uses OpenMP to parallelize a given piece of code. OpenMP (Open Multi-Processing) is an API for multi-platform shared-memory multiprocessing programming in C, C++, and Fortran.
The second assignment shifts focus to CUDA (Compute Unified Device Architecture), a parallel computing platform and programming model created by Nvidia. This assignment parallelizes a different code segment using CUDA, which enables general-purpose computing on Nvidia GPUs (GPGPU).
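The CUDA model maps one lightweight GPU thread to each data element. A minimal illustrative sketch (not the assignment code; requires an Nvidia GPU and `nvcc` to build):

```cuda
#include <cstdio>

// Kernel: squares each array element. Each thread computes its own
// global index from its block and thread IDs and handles one element.
__global__ void square(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

int main() {
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; i++) host[i] = (float)i;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    square<<<(n + 255) / 256, 256>>>(dev, n);

    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[3] = %.1f\n", host[3]);  // 3^2 = 9.0
    return 0;
}
```

Unlike OpenMP, data must be explicitly copied between host and device memory, which is often the dominant cost for small workloads.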
The third assignment involves the Message Passing Interface (MPI), a standardized, portable message-passing system designed to run on a wide variety of parallel computers. Students are tasked with parallelizing code using MPI, demonstrating an understanding of distributed-memory computing on the INFN institute's cluster.
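In the distributed-memory model there is no shared array: each process (rank) works on its own slice and results are combined with explicit communication. A minimal sketch (not the assignment code; build with `mpicc` and launch with `mpirun`):

```c
#include <mpi.h>
#include <stdio.h>

/* Each rank sums its own cyclic slice of 0..N-1; MPI_Reduce then
   combines the partial sums on rank 0. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const long N = 1000000;
    long local = 0;
    for (long i = rank; i < N; i += size)   /* cyclic distribution */
        local += i;

    long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %ld\n", total);     /* N*(N-1)/2 */

    MPI_Finalize();
    return 0;
}
```

The same binary runs unchanged on one laptop or across many cluster nodes; only the `mpirun -np` process count changes.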
The centerpiece of this repository is the main project: parallelizing the Mandelbrot set generation algorithm. The Mandelbrot set is a famous fractal, and rendering it at high resolution is computationally intensive; at the same time, each pixel's escape-time iteration is independent of every other pixel, making the algorithm embarrassingly parallel and an ideal benchmark. The project implements it using three different approaches:
- OpenMP: Implementing a shared-memory parallel version of the Mandelbrot set algorithm.
- CUDA: Leveraging the power of Nvidia GPUs to parallelize the computation of the Mandelbrot set.
- MPI: Using distributed memory parallelism to split the workload across multiple nodes in a computing cluster.
The goal is to compare the performance gains of the three implementations and understand the trade-offs of each parallelization technique.