cfregly / ai-performance-engineering
☆401 · Updated last week
Alternatives and similar repositories for ai-performance-engineering
Users interested in ai-performance-engineering are comparing it to the libraries listed below.
- Slides, notes, and materials for the workshop ☆333 · Updated last year
- Some CUDA example code with READMEs. ☆176 · Updated 8 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆194 · Updated 5 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆425 · Updated 7 months ago
- Where GPUs get cooked 👩‍🍳🔥 ☆294 · Updated last month
- Contains hands-on example code for the [O'Reilly book "Deep Learning At Scale"](https://www.oreilly.com/library/view/deep-learning-at/9781098… ☆29 · Updated last year
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆233 · Updated 5 months ago
- ☆385 · Updated 6 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆441 · Updated last week
- GPU Kernels ☆203 · Updated 6 months ago
- Simple MPI implementation for prototyping or learning ☆287 · Updated 2 months ago
- NVIDIA curated collection of educational resources related to general purpose GPU programming. ☆800 · Updated this week
- An ML Systems Onboarding list ☆921 · Updated 9 months ago
- ☆193 · Updated last year
- Learnings and programs related to CUDA ☆422 · Updated 4 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆58 · Updated 3 weeks ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆288 · Updated last week
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆301 · Updated this week
- 100 days of building GPU kernels! ☆521 · Updated 6 months ago
- Making the official Triton tutorials actually comprehensible ☆57 · Updated 2 months ago
- Quantized LLM training in pure CUDA/C++. ☆209 · Updated this week
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆147 · Updated 2 years ago
 - Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs☆666Updated last week
 - ☆174Updated last year
 - Cataloging released Triton kernels.☆264Updated last month
 - Learn CUDA with PyTorch☆95Updated last month
 - ☆210Updated 10 months ago
 - ☆545Updated last year
 - This repository is a curated collection of resources, tutorials, and practical examples designed to guide you through the journey of mast…☆403Updated 8 months ago
 - ☆76Updated last year