dame-cell / Triformer
Transformer components, but in Triton
☆34 · Updated 6 months ago
Alternatives and similar repositories for Triformer
Users interested in Triformer are comparing it to the libraries listed below.
- Quantized Attention on GPU ☆44 · Updated last year
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 5 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- ☆51 · Updated 6 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆88 · Updated 2 months ago
- ☆22 · Updated last year
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆41 · Updated this week
- Fast and memory-efficient exact attention ☆74 · Updated 9 months ago
- ☆132 · Updated 6 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆133 · Updated last year
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆126 · Updated 5 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆86 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆212 · Updated 5 months ago
- ☆113 · Updated 6 months ago
- Triton-based Symmetric Memory operators and examples ☆65 · Updated last month
- ☆154 · Updated 9 months ago
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆29 · Updated 9 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆52 · Updated last year
- Estimate MFU for DeepSeekV3 ☆26 · Updated 11 months ago
- Xmixers: A collection of SOTA efficient token/channel mixers ☆29 · Updated 3 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆91 · Updated 4 months ago
- ☆39 · Updated 3 months ago
- The official implementation of the paper SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction. ☆52 · Updated last year
- Awesome Triton Resources ☆38 · Updated 7 months ago
- ☆83 · Updated 10 months ago
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter ☆73 · Updated last week
- A bunch of kernels that might make stuff slower 😉 ☆65 · Updated this week
- ☆125 · Updated 3 months ago
- DeeperGEMM: crazy optimized version ☆73 · Updated 7 months ago
- Code for the paper [ICLR2025 Oral] FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference ☆155 · Updated last month