Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs
☆208 · Dec 4, 2025 · Updated 3 months ago
Alternatives and similar repositories for MHA2MLA
Users interested in MHA2MLA are comparing it to the repositories listed below.
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆435 · Feb 28, 2026 · Updated last month
- CutDiffusion: A Simple, Fast, Cheap, and Strong Diffusion Extrapolation Method ☆27 · Oct 9, 2025 · Updated 5 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Apr 13, 2025 · Updated 11 months ago
- Efficient Triton implementation of Native Sparse Attention ☆272 · May 23, 2025 · Updated 10 months ago
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference ☆969 · Feb 25, 2026 · Updated last month
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆111 · Mar 7, 2025 · Updated last year
- Code for Scaling Laws of RoPE-based Extrapolation ☆73 · Oct 16, 2023 · Updated 2 years ago
- ☆136 · May 29, 2025 · Updated 10 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆978 · Feb 5, 2026 · Updated last month
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated 11 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆110 · Oct 11, 2025 · Updated 5 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆50 · Oct 18, 2024 · Updated last year
- Code for the paper "Function-Space Learning Rates" ☆25 · Jun 3, 2025 · Updated 9 months ago
- ☆48 · Aug 29, 2024 · Updated last year
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆274 · Jul 6, 2025 · Updated 8 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆140 · Jun 12, 2024 · Updated last year
- Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding ☆13 · Nov 19, 2024 · Updated last year
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,086 · Apr 3, 2025 · Updated 11 months ago
- ☆23 · Sep 19, 2024 · Updated last year
- ☆124 · Feb 21, 2025 · Updated last year
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆247 · Jun 15, 2025 · Updated 9 months ago
- ☆27 · Nov 25, 2025 · Updated 4 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference via approximate, dynamic sparse computation of attention… ☆1,203 · Mar 9, 2026 · Updated 3 weeks ago
- The evaluation framework for training-free sparse attention in LLMs ☆122 · Jan 27, 2026 · Updated 2 months ago
- Muon is Scalable for LLM Training ☆1,450 · Aug 3, 2025 · Updated 7 months ago
- [NeurIPS 2024] Official repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆239 · Oct 14, 2025 · Updated 5 months ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆104 · Dec 15, 2025 · Updated 3 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆36 · Jun 7, 2024 · Updated last year
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆12,541 · Feb 6, 2026 · Updated last month
- The official code repo and data hub of the top_nsigma sampling strategy for LLMs ☆26 · Feb 11, 2025 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆180 · Jul 12, 2024 · Updated last year
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆145 · Dec 4, 2024 · Updated last year
- Fork of the Flame repo for training some new stuff in development ☆19 · Mar 17, 2026 · Updated last week
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆34 · May 28, 2025 · Updated 10 months ago
- ☆39 · May 20, 2025 · Updated 10 months ago
- Rectified Rotary Position Embeddings ☆388 · May 20, 2024 · Updated last year
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,692 · Updated this week
- ☆155 · Mar 4, 2025 · Updated last year
- (ICLR 2026) Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆39 · Sep 25, 2025 · Updated 6 months ago