MekkCyber / TritonAcademy
A repository to unravel the language of GPUs, making their kernel conversations easy to understand
☆193 · Updated 4 months ago
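For a taste of what TritonAcademy teaches, here is a minimal vector-add kernel in the style of the official Triton tutorials. This is an illustrative sketch, not code from the repository; the names `add_kernel` and `add` and the block size are arbitrary choices.

```python
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    # Mask out-of-bounds lanes so the last block is safe.
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n_elements = out.numel()
    # One program per BLOCK_SIZE chunk of the input.
    grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n_elements, BLOCK_SIZE=1024)
    return out


# Usage (requires a CUDA device):
# x = torch.rand(4096, device="cuda")
# y = torch.rand(4096, device="cuda")
# out = add(x, y)
```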
Alternatives and similar repositories for TritonAcademy
Users interested in TritonAcademy are comparing it to the libraries listed below.
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code. ☆417 · Updated 6 months ago
- Learn CUDA with PyTorch. ☆84 · Updated last week
- Load compute kernels from the Hub. ☆290 · Updated last week
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS. ☆226 · Updated 4 months ago
- ☆221 · Updated 7 months ago
- FlexAttention-based, minimal vLLM-style inference engine for fast Gemma 2 inference. ☆280 · Updated last month
- ☆173 · Updated last year
- ☆203 · Updated 9 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆142 · Updated last year
- A PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7). ☆66 · Updated 6 months ago
- An extension of the nanoGPT repository for training small MoE models. ☆195 · Updated 6 months ago
- Quantized LLM training in pure CUDA/C++. ☆32 · Updated this week
- Making the official Triton tutorials actually comprehensible. ☆54 · Updated last month
- 👷 Build compute kernels. ☆149 · Updated this week
- GPU Kernels. ☆198 · Updated 5 months ago
- ☆44 · Updated 4 months ago
- Simple MPI implementation for prototyping or learning. ☆280 · Updated last month
- PyTorch Single Controller. ☆425 · Updated this week
- Cataloging released Triton kernels. ☆261 · Updated 3 weeks ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and an SDPA implementation of Flash… ☆268 · Updated 2 months ago
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆57 · Updated this week
- A zero-to-one guide on scaling modern transformers with n-dimensional parallelism. ☆94 · Updated this week
- ☆89 · Updated last year
- ☆89 · Updated last year
- Learning about CUDA by writing PTX code. ☆137 · Updated last year
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo). ☆410 · Updated this week
- NanoGPT speedrunning for the poor T4 enjoyers. ☆72 · Updated 5 months ago
- Best practices & guides on how to write distributed PyTorch training code. ☆487 · Updated 7 months ago
- Ring-attention experiments. ☆152 · Updated 11 months ago
- Fast low-bit matmul kernels in Triton. ☆373 · Updated last week