JT-Ushio / MHA2MLA
Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs
☆149 · Updated this week
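MHA2MLA's goal is migrating existing multi-head attention (MHA) checkpoints to DeepSeek-style Multi-Head Latent Attention (MLA), which caches a small per-token latent instead of full per-head keys and values. The snippet below is a minimal, hypothetical sketch of the MLA idea only (RoPE decoupling and query compression are omitted); it is not the repository's code, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplifiedMLA(nn.Module):
    """Toy Multi-Head Latent Attention: keys/values are reconstructed from a shared low-rank latent."""
    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.kv_down = nn.Linear(d_model, d_latent, bias=False)  # compress hidden states to latent c_kv
        self.k_up = nn.Linear(d_latent, d_model, bias=False)     # reconstruct keys from c_kv
        self.v_up = nn.Linear(d_latent, d_model, bias=False)     # reconstruct values from c_kv
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x):                        # x: (batch, seq, d_model)
        B, T, _ = x.shape
        c_kv = self.kv_down(x)                   # (B, T, d_latent) -- the only tensor a KV cache must store
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(c_kv).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(c_kv).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(B, T, -1))
```

At inference, only `c_kv` (`d_latent` values per token) would need to be cached, versus `2 * n_heads * d_head` per token for standard MHA; that cache reduction is the "economical inference" the paper targets.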
Alternatives and similar repositories for MHA2MLA:
Users interested in MHA2MLA are comparing it to the repositories listed below.
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆442 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆130 · Updated 9 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆127 · Updated this week
- TransMLA: Multi-Head Latent Attention Is All You Need ☆221 · Updated 3 weeks ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆162 · Updated last week
- ☆143 · Updated 2 weeks ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆166 · Updated last week
- OpenSeek aims to unite the global open-source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆127 · Updated last week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆216 · Updated this week
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆261 · Updated 5 months ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆121 · Updated 2 months ago
- ☆171 · Updated last month
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆161 · Updated last week
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆168 · Updated last month
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆151 · Updated 2 weeks ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (a minimal sketch follows this list). Conceptually, spars… ☆311 · Updated 3 months ago
- ☆233 · Updated 10 months ago
- Simple extension on vLLM to help you speed up reasoning models without training. ☆139 · Updated 3 weeks ago
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆129 · Updated 8 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆154 · Updated 9 months ago
- ☆125 · Updated 3 weeks ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆122 · Updated 3 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆131 · Updated last month
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆395 · Updated 5 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆140 · Updated this week
- Exploring Applications of GRPO ☆145 · Updated this week
- ☆262 · Updated 2 weeks ago
- Efficient LLM Inference over Long Sequences ☆365 · Updated last month
- 🔥 A minimal training framework for scaling FLA models ☆82 · Updated last week
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆274 · Updated last month
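The memory-layers entry above describes a trainable key-value lookup that adds parameters without increasing FLOPs, because only a few memory slots are activated per token. The sketch below is only an illustration of that general idea with made-up sizes, not the listed repository's code; real memory layers use product keys so the full query-key score matrix is never materialized.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMemoryLayer(nn.Module):
    """Toy trainable key-value memory: only the top-k slots contribute per token."""
    def __init__(self, d_model=512, n_slots=65536, topk=8):
        super().__init__()
        self.query_proj = nn.Linear(d_model, d_model, bias=False)
        self.keys = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)    # trainable keys
        self.values = nn.Parameter(torch.randn(n_slots, d_model) * 0.02)  # trainable values
        self.topk = topk

    def forward(self, x):                           # x: (batch, seq, d_model)
        q = self.query_proj(x)
        scores = q @ self.keys.t()                  # (B, T, n_slots); product keys avoid this full matmul in practice
        top_scores, top_idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(top_scores, dim=-1)     # (B, T, k) mixture weights over selected slots
        retrieved = (weights.unsqueeze(-1) * self.values[top_idx]).sum(dim=-2)  # (B, T, d_model)
        return x + retrieved                        # residual add of the retrieved memory
```

Because the gradient only touches the selected slots and their weights, parameter count can grow with `n_slots` while per-token compute stays roughly constant, which is the trade-off the memory-layers line above refers to.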