dame-cell / Triformer
Transformer components, but implemented in Triton
☆33 · Updated last month
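Triformer's own kernels are not reproduced here, but as a rough illustration of the kind of component such a repository contains, below is a minimal row-wise softmax kernel in Triton (a standard tutorial-style sketch, not code taken from Triformer; the names `softmax_kernel` and `softmax` are placeholders).

```python
import torch
import triton
import triton.language as tl


@triton.jit
def softmax_kernel(x_ptr, out_ptr, n_cols, x_row_stride, out_row_stride,
                   BLOCK_SIZE: tl.constexpr):
    # One program instance normalizes one row of the input matrix.
    row_idx = tl.program_id(axis=0)
    col_offsets = tl.arange(0, BLOCK_SIZE)
    mask = col_offsets < n_cols
    row = tl.load(x_ptr + row_idx * x_row_stride + col_offsets,
                  mask=mask, other=-float("inf"))
    # Numerically stable softmax: subtract the row max before exponentiating.
    row = row - tl.max(row, axis=0)
    num = tl.exp(row)
    denom = tl.sum(num, axis=0)
    tl.store(out_ptr + row_idx * out_row_stride + col_offsets,
             num / denom, mask=mask)


def softmax(x: torch.Tensor) -> torch.Tensor:
    # Launch one program per row; BLOCK_SIZE must be a power of two
    # large enough to cover the full row.
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)
    BLOCK_SIZE = triton.next_power_of_2(n_cols)
    softmax_kernel[(n_rows,)](x, out, n_cols, x.stride(0), out.stride(0),
                              BLOCK_SIZE=BLOCK_SIZE)
    return out
```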
Alternatives and similar repositories for Triformer:
Users interested in Triformer are comparing it to the libraries listed below.
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆40 · Updated last week
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆91 · Updated this week
- Quantized Attention on GPU ☆45 · Updated 5 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆69 · Updated 10 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆73 · Updated 8 months ago
- Estimate MFU for DeepSeekV3 ☆23 · Updated 4 months ago
- ☆22 · Updated last year
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆45 · Updated 6 months ago
- Vocabulary Parallelism ☆19 · Updated last month
- DeeperGEMM: crazy optimized version ☆68 · Updated this week
- ☆68 · Updated last week
- ☆20 · Updated 2 months ago
- Code for the paper [ICLR 2025 Oral] "FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference" ☆100 · Updated 3 weeks ago
- ☆44 · Updated 2 months ago
- Squeezed Attention: Accelerating Long Prompt LLM Inference ☆46 · Updated 5 months ago
- ☆126 · Updated 2 months ago
- Official implementation of "The Sparse Frontier: Sparse Attention Trade-offs in Transformer LLMs" ☆22 · Updated last week
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆115 · Updated 5 months ago
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆18 · Updated 9 months ago
- Code for data-aware compression of DeepSeek models ☆23 · Updated last month
- GPTQ inference TVM kernel ☆38 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments ☆70 · Updated this week
- ☆69 · Updated 2 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆88 · Updated last week
- ☆68 · Updated 3 months ago
- ☆30 · Updated 11 months ago
- Here we will test various linear attention designs ☆60 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Linear Attention Sequence Parallelism (LASP) ☆82 · Updated 11 months ago
- Best practices for testing advanced Mixtral, DeepSeek, and Qwen series MoE models using Megatron Core MoE ☆10 · Updated 2 weeks ago