Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs
☆211 · Dec 4, 2025 · Updated 5 months ago
Alternatives and similar repositories for MHA2MLA
Users interested in MHA2MLA are comparing it to the repositories listed below.
- Design hardware-friendly model architectures and migrate existing LLMs with minimal performance loss ☆461 · Apr 6, 2026 · Updated last month
- CutDiffusion: A Simple, Fast, Cheap, and Strong Diffusion Extrapolation Method ☆27 · Oct 9, 2025 · Updated 7 months ago
- Efficient Triton implementation of Native Sparse Attention ☆276 · May 23, 2025 · Updated 11 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆167 · Apr 13, 2025 · Updated last year
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference ☆990 · Feb 25, 2026 · Updated 2 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆111 · Mar 7, 2025 · Updated last year
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Oct 16, 2023 · Updated 2 years ago
- ☆139 · May 29, 2025 · Updated 11 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆995 · Feb 5, 2026 · Updated 3 months ago
- Layer-Condensed KV cache with 10× larger batch size, fewer parameters, and less computation; dramatic speedup with better task performance… ☆156 · Apr 7, 2025 · Updated last year
- [ICLR 2026 🔥] Dr.LLM: Dynamic Layer Routing in LLMs ☆48 · Apr 24, 2026 · Updated 2 weeks ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆112 · Oct 11, 2025 · Updated 6 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆50 · Oct 18, 2024 · Updated last year
- Code for the paper "Function-Space Learning Rates" ☆24 · Jun 3, 2025 · Updated 11 months ago
- ☆48 · Aug 29, 2024 · Updated last year
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆277 · Jul 6, 2025 · Updated 10 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆140 · Jun 12, 2024 · Updated last year
- Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding ☆13 · Nov 19, 2024 · Updated last year
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,109 · Apr 3, 2025 · Updated last year
- ☆23 · Sep 19, 2024 · Updated last year
- ☆124 · Feb 21, 2025 · Updated last year
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆248 · Jun 15, 2025 · Updated 10 months ago
- ☆27 · Nov 25, 2025 · Updated 5 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] Speeds up long-context LLM inference with approximate and dynamic sparse attention computation… ☆1,211 · Apr 8, 2026 · Updated last month
- The evaluation framework for training-free sparse attention in LLMs ☆122 · Jan 27, 2026 · Updated 3 months ago
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆241 · Oct 14, 2025 · Updated 6 months ago
- Muon is Scalable for LLM Training ☆1,473 · Aug 3, 2025 · Updated 9 months ago
- An unofficial implementation of "Mixture-of-Depths: Dynamically Allocating Compute in Transformer-Based Language Models" ☆36 · Jun 7, 2024 · Updated last year
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆113 · Dec 15, 2025 · Updated 4 months ago
- The official code repo and data hub for the top_nsigma sampling strategy for LLMs ☆26 · Feb 11, 2025 · Updated last year
- FlashMLA: Efficient Multi-head Latent Attention Kernels ☆12,631 · Apr 30, 2026 · Updated last week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs ☆183 · Jul 12, 2024 · Updated last year
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆146 · Dec 4, 2024 · Updated last year
- Fork of the Flame repo for training some new work in development ☆19 · Apr 24, 2026 · Updated 2 weeks ago
- (ACL 2025 oral) SCOPE: Optimizing KV Cache Compression in Long-context Generation ☆35 · May 28, 2025 · Updated 11 months ago
- ☆39 · May 20, 2025 · Updated 11 months ago
- Rectified Rotary Position Embeddings ☆395 · May 20, 2024 · Updated last year
- (ICLR 2026) Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆40 · Sep 25, 2025 · Updated 7 months ago
- 🚀 Efficient implementations for emerging model architectures ☆5,032 · May 1, 2026 · Updated last week