gpu-mode / lectures
Material for gpu-mode lectures
☆5,049 Updated this week
Alternatives and similar repositories for lectures
Users interested in lectures are comparing it to the repositories listed below.
- GPU programming-related news and material links ☆1,695 Updated this week
- Puzzles for learning Triton ☆1,985 Updated 10 months ago
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆6,935 Updated this week
- An ML Systems Onboarding list ☆900 Updated 7 months ago
- How to optimize some algorithms in CUDA. ☆2,473 Updated this week
- 📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉 ☆4,518 Updated last month
- FlashInfer: Kernel Library for LLM Serving ☆3,761 Updated this week
- My learning notes/codes for ML SYS. ☆3,632 Updated this week
- ☆1,426 Updated 2 months ago
- Tile primitives for speedy kernels ☆2,704 Updated this week
- Training materials associated with NVIDIA's CUDA Training Series (www.olcf.ornl.gov/cuda-training-series/) ☆861 Updated last year
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆1,805 Updated this week
- Fast CUDA matrix multiplication from scratch ☆846 Updated 2 weeks ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,812 Updated 3 weeks ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆931 Updated 8 months ago
- CUDA Templates for Linear Algebra Subroutines ☆8,468 Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,729 Updated this week
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆3,281 Updated this week
- Large Language Model (LLM) Systems Paper List ☆1,503 Updated this week
- A series of GPU optimization topics introducing in detail how to optimize CUDA kernels. I will introduce several… ☆1,148 Updated 2 years ago
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆1,632 Updated this week
- A self-learning tutorial for CUDA high-performance programming. ☆735 Updated 2 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,258 Updated 2 months ago
- Efficient Triton Kernels for LLM Training ☆5,658 Updated last week
- PyTorch native quantization and sparsity for training and inference ☆2,361 Updated this week
- Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python. ☆6,093 Updated 3 weeks ago
- Solve puzzles. Improve your PyTorch. ☆3,714 Updated last year
- A PyTorch native platform for training generative AI models ☆4,395 Updated this week
- Learn CUDA Programming, published by Packt ☆1,189 Updated last year
- ☆2,556 Updated last year