Sandro295/SGEMM_CUDA

Fast CUDA SGEMM from Scratch

Step-by-step optimization of matrix multiplication, implemented in CUDA. For an explanation of each kernel, see siboehm.com/CUDA-MMM.

Overview

Running the kernels on an NVIDIA 3070 (Ampere):

GFLOPs at matrix size 4096x4096:

| Kernel | GFLOPs/s | Performance relative to cuBLAS |
|:--|--:|--:|
| 1: Naive | 172.8 | 1.3% |
| 2: GMEM Coalescing | 1226.4 | 9.5% |
| 3: SMEM Caching | 1701.4 | 13.2% |
| 4: 1D Blocktiling | 5071.8 | 39.2% |
| 9: Autotuning | 9575.8 | 74.0% |
| 7: Avoid Bank Conflicts (Linearize) | 10027.5 | 77.5% |
| 5: 2D Blocktiling | 10142.7 | 78.4% |
| 8: Avoid Bank Conflicts (Offset) | 10323.8 | 79.8% |
| 6: Vectorized Mem Access | 11558.7 | 89.3% |
| 10: Warptiling | 12550.6 | 97.0% |
| 0: cuBLAS | 12936.8 | 100.0% |
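As a rough illustration of the starting point in the table above, kernel 1 ("Naive") assigns one thread per output element and reads every operand straight from global memory. This is a hedged sketch, not the repository's actual code; the function name and signature are assumptions:

```cuda
// Sketch of a naive SGEMM kernel: one thread computes one element of C.
// All loads go to global memory, so the kernel is heavily memory-bound.
__global__ void sgemm_naive(int M, int N, int K, float alpha,
                            const float *A, const float *B,
                            float beta, float *C) {
  // Row and column of C handled by this thread (names are illustrative).
  const int row = blockIdx.x * blockDim.x + threadIdx.x;
  const int col = blockIdx.y * blockDim.y + threadIdx.y;
  if (row < M && col < N) {
    float acc = 0.0f;
    for (int i = 0; i < K; ++i)
      acc += A[row * K + i] * B[i * N + col];
    // GEMM contract: C = alpha * (A x B) + beta * C
    C[row * N + col] = alpha * acc + beta * C[row * N + col];
  }
}
```

Each later kernel in the table keeps this same contract while restructuring how threads load and reuse data (coalescing, shared-memory tiling, register blocking, vectorized loads, warptiling); siboehm.com/CUDA-MMM explains each step.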

Setup

  1. Install dependencies: CUDA toolkit 12, Python (+ Seaborn), CMake, Ninja. See environment.yml.
  2. Configure NVCC compilation parameters. Look up your GPU's compute capability here, then set it in CMakeLists.txt: `set(CUDA_COMPUTE_CAPABILITY 80)`
  3. Build: `mkdir build && cd build && cmake .. && cmake --build .`
  4. Run one of the kernels: `DEVICE=<device_id> ./sgemm <kernel number>`
  5. Profile via NVIDIA Nsight Compute (ncu): `make profile KERNEL=<kernel number>`

Credit goes to wangzyon/NVIDIA_SGEMM_PRACTICE for the benchmarking setup.
