gpu-mode / triton-tutorials
☆10 · Updated 3 weeks ago
Alternatives and similar repositories for triton-tutorials
Users interested in triton-tutorials are comparing it to the libraries listed below.
- Make triton easier (☆47, updated 11 months ago)
- FlexAttention w/ FlashAttention3 Support (☆26, updated 8 months ago)
- Repository for CPU Kernel Generation for LLM Inference (☆26, updated last year)
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… (☆23, updated this week)
- DPO, but faster 🚀 (☆42, updated 6 months ago)
- Flash-Muon: An Efficient Implementation of Muon Optimizer (☆121, updated last week)
- Triton Implementation of HyperAttention Algorithm (☆48, updated last year)
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry (☆42, updated last year)
- JORA: JAX Tensor-Parallel LoRA Library (ACL 2024) (☆33, updated last year)
- Official implementation of "The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs" (☆32, updated last month)
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 (☆44, updated 10 months ago)
- ☆49 (updated last year)
- Using FlexAttention to compute attention with different masking patterns (☆43, updated 8 months ago)
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance (☆127, updated this week)
- ☆20 (updated last month)
- Odysseus: Playground of LLM Sequence Parallelism (☆70, updated 11 months ago)
- Awesome Triton Resources (☆30, updated last month)
- ☆26 (updated last year)
- ☆71 (updated 2 weeks ago)
- Personal solutions to the Triton Puzzles (☆18, updated 10 months ago)
- A place to store reusable transformer components of my own creation or found on the interwebs (☆56, updated 3 weeks ago)
- Accelerate LLM preference tuning via prefix sharing with a single line of code (☆41, updated last month)
- IntLLaMA: A fast and light quantization solution for LLaMA (☆18, updated last year)
- ☆73 (updated 4 months ago)
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts (☆39, updated last year)
- Hacks for PyTorch (☆19, updated 2 years ago)
- Work in progress (☆68, updated last week)
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" (☆37, updated last year)
- ☆31 (updated last year)
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024) (☆25, updated 11 months ago)