JT-Ushio / MHA2MLA
Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs
☆158 · Updated this week
Alternatives and similar repositories for MHA2MLA:
Users interested in MHA2MLA are comparing it to the libraries listed below.
- TransMLA: Multi-Head Latent Attention Is All You Need ☆231 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆131 · Updated 10 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆135 · Updated last week
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆180 · Updated 3 weeks ago
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training ☆182 · Updated last week
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆169 · Updated 3 weeks ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆135 · Updated 2 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆155 · Updated last month
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆157 · Updated 9 months ago
- Simple extension on vLLM to help you speed up reasoning models without training. ☆146 · Updated last month
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆130 · Updated 8 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆102 · Updated last week
- OpenSeek aims to unite the global open source community to drive collaborative innovation in algorithms, data and systems to develop next… ☆131 · Updated last week
- A highly capable 2.4B lightweight LLM using only 1T tokens of pre-training data, with all details released. ☆170 · Updated this week
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior. ☆222 · Updated last week
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆151 · Updated 4 months ago
- LongRoPE is a novel method that can extend the context window of pre-trained LLMs to an impressive 2048k tokens. ☆217 · Updated 7 months ago
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆236 · Updated 2 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆621 · Updated 3 weeks ago
- Efficient LLM Inference over Long Sequences ☆366 · Updated 2 months ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆447 · Updated 2 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆402 · Updated 6 months ago
- A Comprehensive Survey on Long Context Language Modeling ☆129 · Updated 3 weeks ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆123 · Updated 4 months ago