Material for gpu-mode lectures
⭐ 6,012 · Apr 22, 2026 · Updated this week
Alternatives and similar repositories for lectures
Users interested in lectures are comparing it to the repositories listed below.
- LeetCUDA: modern CUDA learning notes with PyTorch for beginners; 200+ CUDA kernels, Tensor Cores, HGEMM, FA-2 MMA. (⭐ 10,736 · Apr 20, 2026 · Updated last week)
- How to optimize algorithms in CUDA. (⭐ 2,939 · Updated this week)
- GPU programming related news and material links (⭐ 2,114 · Mar 8, 2026 · Updated last month)
- FlashInfer: Kernel Library for LLM Serving (⭐ 5,498 · Updated this week)
- CUDA Templates and Python DSLs for High-Performance Linear Algebra (⭐ 9,638 · Updated this week)
- An ML Systems Onboarding list (⭐ 1,054 · Feb 19, 2026 · Updated 2 months ago)
- Development repository for the Triton language and compiler (⭐ 19,040 · Updated this week)
- Tile primitives for speedy kernels (⭐ 3,326 · Updated this week)
- A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. (⭐ 5,162 · Apr 20, 2026 · Updated last week)
- Puzzles for learning Triton (⭐ 2,404 · Apr 1, 2026 · Updated 3 weeks ago)
- My learning notes for ML systems. (⭐ 6,110 · Updated this week)
- Distributed compiler based on Triton for parallel systems (⭐ 1,414 · Updated this week)
- A series of GPU optimization topics introducing in detail how to optimize CUDA kernels. I will introduce several… (⭐ 1,286 · Jul 29, 2023 · Updated 2 years ago)
- Fast and memory-efficient exact attention (⭐ 23,563 · Updated this week)
- Efficient Triton Kernels for LLM Training (⭐ 6,298 · Apr 18, 2026 · Updated last week)
- Machine Learning Engineering Open Book (⭐ 17,765 · Mar 16, 2026 · Updated last month)
- Cataloging released Triton kernels. (⭐ 301 · Sep 9, 2025 · Updated 7 months ago)
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels (⭐ 5,632 · Updated this week)
- LLM training in simple, raw C/CUDA (⭐ 29,687 · Jun 26, 2025 · Updated 10 months ago)
- Flash Attention in ~100 lines of CUDA (forward pass only) (⭐ 1,125 · Dec 30, 2024 · Updated last year)
- A collection of compiler learning resources. (⭐ 2,714 · Mar 19, 2025 · Updated last year)
- SGLang is a high-performance serving framework for large language models and multimodal models. (⭐ 26,397 · Updated this week)
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… (⭐ 13,487 · Updated this week)
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… (⭐ 4,025 · Updated this week)
- Puzzles for learning Triton; play it with minimal environment configuration! (⭐ 679 · Mar 17, 2026 · Updated last month)
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS (⭐ 506 · Jan 20, 2026 · Updated 3 months ago)
- A PyTorch native platform for training generative AI models (⭐ 5,258 · Updated this week)
- PyTorch native quantization and sparsity for training and inference (⭐ 2,796 · Updated this week)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. (⭐ 5,186 · Updated this week)
- Efficient implementations for emerging model architectures (⭐ 4,999 · Updated this week)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… (⭐ 3,291 · Updated this week)
- Solve puzzles. Learn CUDA. (⭐ 12,067 · Sep 1, 2024 · Updated last year)
- An easy-to-understand TensorOp matmul tutorial (⭐ 428 · Mar 5, 2026 · Updated last month)
- Examples of CUDA implementations using CUTLASS CuTe (⭐ 274 · Jul 1, 2025 · Updated 9 months ago)
- Fast CUDA matrix multiplication from scratch (⭐ 1,147 · Sep 2, 2025 · Updated 7 months ago)
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (⭐ 6,949 · Updated this week)
- A high-throughput and memory-efficient inference and serving engine for LLMs (⭐ 78,385 · Updated this week)
- Mirage Persistent Kernel: compiling LLMs into a MegaKernel (⭐ 2,218 · Apr 19, 2026 · Updated last week)
- How to learn PyTorch and OneFlow (⭐ 495 · Mar 22, 2024 · Updated 2 years ago)