gpu-mode / lectures
Material for gpu-mode lectures
⭐4,075 · Updated last month
Alternatives and similar repositories for lectures — libraries that users interested in lectures are comparing it to:
- GPU programming news and material links — ⭐1,412 · Updated 2 months ago
- 200+ Tensor/CUDA Core kernels: flash-attn-mma, hgemm with WMMA, MMA, and CuTe (98%–100% of cuBLAS/FA2 TFLOPS) — ⭐2,901 · Updated this week
- Puzzles for learning Triton — ⭐1,508 · Updated 4 months ago
- Tile primitives for speedy kernels — ⭐2,153 · Updated this week
- How to optimize some algorithms in CUDA — ⭐2,022 · Updated this week
- FlashInfer: kernel library for LLM serving — ⭐2,439 · Updated this week
- An ML systems onboarding list — ⭐730 · Updated last month
- A PyTorch-native library for large-model training — ⭐3,470 · Updated this week
- A curated list of awesome LLM/VLM inference papers with code: WINT8/4, Flash-Attention, Paged-Attention, parallelism, etc. — ⭐3,675 · Updated 2 weeks ago
- Fast CUDA matrix multiplication from scratch — ⭐663 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… — ⭐2,293 · Updated this week
- My learning notes/code for ML systems — ⭐1,481 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) — ⭐732 · Updated 2 months ago
- Efficient implementations of state-of-the-art linear attention models in Torch and Triton — ⭐2,111 · Updated this week
- A series of GPU optimization topics, introducing in detail how to optimize CUDA kernels, covering several… — ⭐960 · Updated last year
- PyTorch-native quantization and sparsity for training and inference — ⭐1,913 · Updated this week
- Learn CUDA Programming, published by Packt — ⭐1,120 · Updated last year
- Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/) — ⭐721 · Updated 7 months ago
- What would you do with 1000 H100s... — ⭐1,016 · Updated last year
- The full minitorch student suite — ⭐2,029 · Updated 7 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes — ⭐935 · Updated 2 weeks ago
- A self-learning tutorial for CUDA high-performance programming — ⭐465 · Updated 2 weeks ago
- Building blocks for foundation models — ⭐464 · Updated last year
- (no description) — ⭐960 · Updated 2 months ago
- Mirage: automatically generating fast GPU kernels without programming in Triton/CUDA — ⭐769 · Updated this week
- (no description) — ⭐2,366 · Updated last year
- UNet diffusion model in pure CUDA — ⭐600 · Updated 8 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration — ⭐2,861 · Updated this week
- NanoGPT (124M) in 3 minutes — ⭐2,403 · Updated this week