gpu-mode / lectures
Material for gpu-mode lectures
☆3,028 · Updated this week
Related projects
Alternatives and complementary repositories for lectures
- GPU programming related news and material links ☆1,237 · Updated last month
- Puzzles for learning Triton ☆1,135 · Updated this week
- 📚Modern CUDA Learn Notes with PyTorch: Tensor/CUDA Cores, 📖150+ CUDA Kernels with PyTorch bindings, 📖HGEMM/SGEMM (95%~99% cuBLAS perfo… ☆1,473 · Updated this week
- An ML Systems Onboarding list ☆545 · Updated this week
- How to optimize some algorithms in CUDA. ☆1,593 · Updated last week
- Tile primitives for speedy kernels ☆1,658 · Updated this week
- Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/) ☆615 · Updated 3 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆1,979 · Updated this week
- PyTorch native quantization and sparsity for training and inference ☆1,585 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆1,452 · Updated this week
- A native PyTorch Library for large model training ☆2,623 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆626 · Updated 7 months ago
- Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors a… ☆1,199 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,526 · Updated last month
- 📖A curated list of Awesome LLM Inference Paper with codes, TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, Continuous Batc… ☆2,845 · Updated this week
- UNet diffusion model in pure CUDA ☆584 · Updated 4 months ago
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton ☆1,339 · Updated this week
- Learn CUDA Programming, published by Packt ☆1,030 · Updated 10 months ago
- Building blocks for foundation models. ☆394 · Updated 10 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆483 · Updated 3 weeks ago
- nanoGPT style version of Llama 3.1 ☆1,246 · Updated 3 months ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆636 · Updated this week
- CUDA Templates for Linear Algebra Subroutines ☆5,679 · Updated this week
- ☆556 · Updated 3 weeks ago
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆5,669 · Updated last month
- NanoGPT (124M) quality in 7.8 8xH100-minutes ☆1,033 · Updated this week
- Slides, notes, and materials for the workshop ☆306 · Updated 5 months ago
- Fast CUDA matrix multiplication from scratch ☆479 · Updated 10 months ago
- This is a series of GPU optimization topics. Here we will introduce how to optimize the CUDA kernel in detail. I will introduce several… ☆827 · Updated last year
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆567 · Updated this week