cfregly / ai-performance-engineering
☆767 · Updated this week
Alternatives and similar repositories for ai-performance-engineering
Users interested in ai-performance-engineering are comparing it to the repositories listed below.
- Slides, notes, and materials for the workshop ☆336 · Updated last year
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆441 · Updated 9 months ago
- Some CUDA example code with READMEs. ☆179 · Updated last month
- An ML Systems Onboarding list ☆957 · Updated 10 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆195 · Updated 6 months ago
- ☆404 · Updated 8 months ago
- 100 days of building GPU kernels! ☆552 · Updated 7 months ago
- This repository is a curated collection of resources, tutorials, and practical examples designed to guide you through the journey of mast… ☆427 · Updated 9 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆244 · Updated 7 months ago
- ☆227 · Updated 11 months ago
- GPU Kernels ☆210 · Updated 7 months ago
- Home for "How To Scale Your Model", a short blog-style textbook about scaling LLMs on TPUs ☆760 · Updated this week
- Complete solutions to Programming Massively Parallel Processors, 4th Edition ☆608 · Updated 6 months ago
- NVIDIA-curated collection of educational resources related to general-purpose GPU programming. ☆998 · Updated last week
- Yet Another Language Model: LLM inference in C++/CUDA, no libraries except for I/O ☆539 · Updated 3 months ago
- ☆206 · Updated last year
- Simple MPI implementation for prototyping or learning ☆292 · Updated 4 months ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆456 · Updated 2 weeks ago
- Best practices & guides on how to write distributed PyTorch training code ☆552 · Updated last month
- Learnings and programs related to CUDA ☆428 · Updated 5 months ago
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA (+ more DSLs) ☆708 · Updated this week
- GPU documentation for humans ☆416 · Updated last week
- GPU programming related news and material links ☆1,874 · Updated 3 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆349 · Updated this week
- ☆547 · Updated last year
- Where GPUs get cooked 👩‍🍳🔥 ☆335 · Updated 3 months ago
- Official Problem Sets / Reference Kernels for the GPU MODE Leaderboard! ☆174 · Updated 2 weeks ago
- Learn CUDA with PyTorch ☆124 · Updated 3 weeks ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆321 · Updated last month
- ☆1,087 · Updated this week