evintunador / triton_docs_tutorials
Making the official Triton tutorials actually comprehensible
☆26 · Updated last month
Alternatives and similar repositories for triton_docs_tutorials:
Users who are interested in triton_docs_tutorials are comparing it to the repositories listed below.
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆178 · Updated this week
- Write a fast kernel and run it on Discord. See how you compare against the best! ☆40 · Updated this week
- A PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) ☆65 · Updated last month
- ☆153 · Updated last year
- NanoGPT speedrunning for the poor T4 enjoyers ☆62 · Updated this week
- ☆155 · Updated 3 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆165 · Updated last month
- Cataloging released Triton kernels ☆217 · Updated 3 months ago
- An extension of the nanoGPT repository for training small MoE models ☆131 · Updated last month
- ☆45 · Updated 3 weeks ago
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- Minimal GRPO implementation from scratch ☆85 · Updated last month
- Repo hosting code and materials related to speeding up LLM inference using token merging ☆36 · Updated 11 months ago
- ☆87 · Updated last year
- Collection of kernels written in the Triton language ☆119 · Updated 2 weeks ago
- ☆27 · Updated 9 months ago
- A curated list of resources for learning and exploring Triton, OpenAI's programming language for writing efficient GPU code ☆337 · Updated last month
- A single repo with all scripts and utilities to train / fine-tune the Mamba model, with or without FIM ☆54 · Updated last year
- Learn CUDA with PyTorch ☆20 · Updated 2 months ago
- Ring-attention experiments ☆130 · Updated 6 months ago
- Mixed-precision training from scratch with Tensors and CUDA ☆22 · Updated 11 months ago
- Fine-tune an LLM to perform batch inference and online serving ☆109 · Updated last week
- Train, tune, and run inference with the Bamba model ☆88 · Updated this week
- An implementation of the transformer architecture as an NVIDIA CUDA kernel ☆179 · Updated last year
- ML/DL math and method notes ☆60 · Updated last year
- Unofficial implementation of https://arxiv.org/pdf/2407.14679 ☆44 · Updated 7 months ago
- Fast low-bit matmul kernels in Triton ☆291 · Updated this week
- Prune transformer layers ☆68 · Updated 10 months ago
- Learning about CUDA by writing PTX code ☆128 · Updated last year
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI ☆130 · Updated last year