modal-labs / gpu-glossary
GPU documentation for humans
☆337 · Updated 2 weeks ago
Alternatives and similar repositories for gpu-glossary
Users interested in gpu-glossary are comparing it to the libraries listed below.
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆514 · Updated last month
- Simple MPI implementation for prototyping or learning ☆284 · Updated 2 months ago
- Learning about CUDA by writing PTX code. ☆143 · Updated last year
- ☆79 · Updated 3 weeks ago
- NVIDIA NVSHMEM is a parallel programming interface for NVIDIA GPUs based on OpenSHMEM. NVSHMEM can significantly reduce multi-process com… ☆338 · Updated last week
- Quantized LLM training in pure CUDA/C++. ☆198 · Updated last week
- TritonParse: A Compiler Tracer, Visualizer, and Reproducer for Triton Kernels ☆155 · Updated last week
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆421 · Updated 7 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆232 · Updated 5 months ago
- Perplexity GPU Kernels ☆488 · Updated 3 weeks ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆99 · Updated this week
- kernels, of the mega variety ☆586 · Updated 2 weeks ago
- ☆120 · Updated 7 months ago
- Tilus is a tile-level kernel programming language with explicit control over shared memory and registers. ☆377 · Updated last week
- High-Performance SGEMM on CUDA devices ☆107 · Updated 8 months ago
- Learnings and programs related to CUDA ☆420 · Updated 3 months ago
- Complete solutions to the Programming Massively Parallel Processors Edition 4 ☆547 · Updated 3 months ago
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆655 · Updated last week
- Fastest kernels written from scratch ☆374 · Updated 3 weeks ago
- CPU inference for the DeepSeek family of large language models in C++ ☆313 · Updated 2 weeks ago
- PTX-Tutorial Written Purely By AIs (Deep Research of OpenAI and Claude 3.7) ☆66 · Updated 6 months ago
- KV cache store for distributed LLM inference ☆341 · Updated last month
- In this repository, I'm going to implement increasingly complex LLM inference optimizations ☆68 · Updated 4 months ago
- Cataloging released Triton kernels. ☆261 · Updated last month
- CUDA tutorials for maths & ML, with examples covering multi-GPU, fused attention, Winograd convolution, and reinforcement learning. ☆196 · Updated 4 months ago
- Accelerated General (FP32) Matrix Multiplication from scratch in CUDA ☆161 · Updated 9 months ago
- ☆242 · Updated last week
- An ML Systems Onboarding list ☆914 · Updated 8 months ago
- ☆31 · Updated 5 months ago
- LLM training in simple, raw C/CUDA ☆105 · Updated last year