fxmeng / TransMLA
TransMLA: Multi-Head Latent Attention Is All You Need
☆221 · Updated 3 weeks ago
Alternatives and similar repositories for TransMLA:
Users interested in TransMLA are comparing it to the repositories listed below.
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆149 · Updated last week
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆274 · Updated last month
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆168 · Updated last month
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆442 · Updated last month
- ☆143 · Updated 2 weeks ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆601 · Updated last week
- The official implementation of Tensor ProducT ATTenTion Transformer (T6) ☆345 · Updated last month
- Efficient Triton implementation of Native Sparse Attention. ☆127 · Updated this week
- 🔥 A minimal training framework for scaling FLA models ☆82 · Updated last week
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆231 · Updated last month
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆127 · Updated last week
- SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs ☆89 · Updated 2 weeks ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆122 · Updated 3 months ago
- ☆185 · Updated 5 months ago
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆161 · Updated last week
- DeepSeek Native Sparse Attention PyTorch implementation ☆54 · Updated 3 weeks ago
- (Unofficial) PyTorch implementation of grouped-query attention (GQA) from "GQA: Training Generalized Multi-Query Transformer Models from … ☆159 · Updated 10 months ago
- ☆108 · Updated this week
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆166 · Updated last week
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆151 · Updated 2 weeks ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆154 · Updated 9 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆130 · Updated 9 months ago
- Efficient LLM Inference over Long Sequences ☆365 · Updated last month
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆458 · Updated last week
- ☆233 · Updated 10 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆86 · Updated 2 weeks ago
- 📰 Must-read papers on KV Cache Compression (constantly updating 🤗). ☆355 · Updated this week
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆162 · Updated last week
- Spec-Bench: A Comprehensive Benchmark and Unified Evaluation Platform for Speculative Decoding (ACL 2024 Findings) ☆243 · Updated last week
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆65 · Updated 3 months ago