JT-Ushio / MHA2MLA
Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs
☆204 · Dec 4, 2025 · Updated 2 months ago
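The conversion target here is DeepSeek-style MLA, which caches one small low-rank latent per token and reconstructs per-head keys and values from it at attention time, instead of caching full per-head K/V. The sketch below illustrates that idea only: it is not code from MHA2MLA, the dimensions (`d_latent` etc.) are illustrative assumptions, and it omits causal masking and the decoupled RoPE path that a faithful MLA layer (and MHA2MLA's fine-tuning procedure) has to handle.

```python
import torch
import torch.nn as nn

class MLASketch(nn.Module):
    """Toy illustration of latent-compressed KV attention (not MHA2MLA code)."""

    def __init__(self, d_model=1024, n_heads=8, d_head=128, d_latent=64):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_head
        self.q_proj = nn.Linear(d_model, n_heads * d_head)
        # Down-project hidden states to one small latent per token; this
        # latent is what gets cached instead of full per-head K/V tensors.
        self.kv_down = nn.Linear(d_model, d_latent)
        # Up-projections reconstruct per-head keys/values from the latent.
        self.k_up = nn.Linear(d_latent, n_heads * d_head)
        self.v_up = nn.Linear(d_latent, n_heads * d_head)
        self.out = nn.Linear(n_heads * d_head, d_model)

    def forward(self, x, latent_cache=None):
        B, T, _ = x.shape
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        c_kv = self.kv_down(x)                       # (B, T, d_latent)
        if latent_cache is not None:                 # extend the compressed cache
            c_kv = torch.cat([latent_cache, c_kv], dim=1)
        S = c_kv.shape[1]
        k = self.k_up(c_kv).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(c_kv).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        # Plain softmax attention; causal masking and RoPE are omitted here.
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, T, self.n_heads * self.d_head)
        return self.out(y), c_kv                     # cache only the latent
```

With these toy sizes, the cache per token shrinks from 2 · n_heads · d_head = 2,048 values to d_latent = 64, a 32× reduction; that cache saving is the "economical inference" the title refers to.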
Alternatives and similar repositories for MHA2MLA
Users interested in MHA2MLA are comparing it to the repositories listed below.
- TransMLA: Multi-Head Latent Attention Is All You Need (NeurIPS 2025 Spotlight) ☆429 · Sep 23, 2025 · Updated 4 months ago
- ☆131 · May 29, 2025 · Updated 8 months ago
- Efficient Triton implementation of Native Sparse Attention. ☆262 · May 23, 2025 · Updated 8 months ago
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ☆927 · Dec 31, 2025 · Updated last month
- Fast and Slow Generating: An Empirical Study on Large and Small Language Models Collaborative Decoding. ☆13 · Nov 19, 2024 · Updated last year
- ☆19 · Jun 4, 2025 · Updated 8 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Apr 13, 2025 · Updated 10 months ago
- Code for "Scaling Laws of RoPE-based Extrapolation" ☆73 · Oct 16, 2023 · Updated 2 years ago
- The evaluation framework for training-free sparse attention in LLMs ☆119 · Jan 27, 2026 · Updated 2 weeks ago
- ☆23 · Sep 19, 2024 · Updated last year
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆235 · Jun 15, 2025 · Updated 8 months ago
- [ICML 2025] XAttention: Block Sparse Attention with Antidiagonal Scoring ☆269 · Jul 6, 2025 · Updated 7 months ago
- Muon is Scalable for LLM Training ☆1,426 · Aug 3, 2025 · Updated 6 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,044 · Apr 3, 2025 · Updated 10 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Jun 12, 2024 · Updated last year
- (ICLR 2026) Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆36 · Sep 25, 2025 · Updated 4 months ago
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆964 · Feb 5, 2026 · Updated last week
- Quantized Attention on GPU ☆44 · Nov 22, 2024 · Updated last year
- CutDiffusion: A Simple, Fast, Cheap, and Strong Diffusion Extrapolation Method ☆27 · Oct 9, 2025 · Updated 4 months ago
- ☆39 · May 20, 2025 · Updated 8 months ago
- [DAC'25] Official implementation of "HybriMoE: Hybrid CPU-GPU Scheduling and Cache Management for Efficient MoE Inference" ☆101 · Dec 15, 2025 · Updated last month
- A Comprehensive Dataset for Advanced Image Generation and Editing ☆31 · Oct 2, 2025 · Updated 4 months ago
- Fork of the Flame repo for training some new stuff in development ☆19 · Jan 5, 2026 · Updated last month
- ☆123 · Feb 21, 2025 · Updated 11 months ago
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆157 · Apr 7, 2025 · Updated 10 months ago
- [NeurIPS 2025] OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆71 · Sep 29, 2025 · Updated 4 months ago
- [AAAI 2026] LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs ☆52 · Dec 7, 2025 · Updated 2 months ago
- [ICML 2024] When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models ☆35 · Jun 12, 2024 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆109 · Oct 11, 2025 · Updated 4 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆110 · Mar 7, 2025 · Updated 11 months ago
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling ☆42 · Dec 29, 2025 · Updated last month
- ☆48 · Aug 29, 2024 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference: approximate and dynamic sparse calculation of the attention… ☆1,183 · Sep 30, 2025 · Updated 4 months ago
- ☆145 · Sep 12, 2025 · Updated 5 months ago
- ☆27 · Nov 25, 2025 · Updated 2 months ago
- The official implementation of the paper "SimLayerKV: A Simple Framework for Layer-Level KV Cache Reduction" ☆52 · Oct 18, 2024 · Updated last year
- Library to facilitate pruning of LLMs based on context ☆32 · Jan 31, 2024 · Updated 2 years ago
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆142 · Dec 4, 2024 · Updated last year
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆79 · Oct 16, 2024 · Updated last year