FlashMLA: Efficient Multi-head Latent Attention Kernels
☆12,505 · Feb 6, 2026 · Updated 3 weeks ago
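For context, FlashMLA provides decoding kernels for multi-head latent attention (MLA), where the KV cache stores one small shared latent vector per token that is up-projected into per-head keys and values at attention time. Below is a minimal NumPy sketch of that idea; all dimensions, weight names, and the `mla_decode_step` helper are illustrative assumptions, not FlashMLA's actual API or configuration.

```python
import numpy as np

# Illustrative MLA sketch: the cache holds one d_latent vector per token
# (here 64 floats) instead of per-head keys plus values
# (here 2 * 8 * 64 = 1024 floats), which is what shrinks the KV cache
# that MLA decoding kernels such as FlashMLA read.
d_model, n_heads, d_head, d_latent = 512, 8, 64, 64
rng = np.random.default_rng(0)

W_dkv = rng.standard_normal((d_model, d_latent)) * 0.02          # shared down-projection (cached side)
W_uk = rng.standard_normal((n_heads, d_latent, d_head)) * 0.02   # per-head key up-projection
W_uv = rng.standard_normal((n_heads, d_latent, d_head)) * 0.02   # per-head value up-projection
W_q = rng.standard_normal((n_heads, d_model, d_head)) * 0.02     # per-head query projection

def mla_decode_step(h_new, latent_cache):
    """Append one token's latent to the cache, then attend over the cache."""
    latent_cache = np.vstack([latent_cache, h_new @ W_dkv])       # (T, d_latent)
    q = np.einsum('d,hde->he', h_new, W_q)                        # (H, d_head)
    k = np.einsum('tc,hce->the', latent_cache, W_uk)              # (T, H, d_head)
    v = np.einsum('tc,hce->the', latent_cache, W_uv)              # (T, H, d_head)
    scores = np.einsum('he,the->th', q, k) / np.sqrt(d_head)      # (T, H)
    p = np.exp(scores - scores.max(axis=0))
    p /= p.sum(axis=0)                                            # softmax over the T axis
    out = np.einsum('th,the->he', p, v)                           # (H, d_head)
    return out, latent_cache

cache = np.zeros((0, d_latent))
out, cache = mla_decode_step(rng.standard_normal(d_model), cache)
print(out.shape, cache.shape)   # (8, 64) (1, 64): the cache grows by only d_latent per token
```

Production kernels typically avoid materializing `k` and `v` by absorbing the up-projections into the query and output projections; the sketch keeps them explicit for readability.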
Alternatives and similar repositories for FlashMLA
Users interested in FlashMLA are comparing it to the libraries listed below.
- DeepEP: an efficient expert-parallel communication library ☆9,005 · Feb 9, 2026 · Updated 3 weeks ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling (see the FP8 scaling sketch after this list) ☆6,206 · Updated this week
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training. ☆2,926 · Jan 14, 2026 · Updated last month
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,970 · May 15, 2025 · Updated 9 months ago
- Expert Parallelism Load Balancer ☆1,351 · Mar 24, 2025 · Updated 11 months ago
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. ☆9,730 · Updated this week
- Analyzes computation-communication overlap in DeepSeek V3/R1. ☆1,143 · Mar 21, 2025 · Updated 11 months ago
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆23,905 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,057 · Updated this week
- Fast and memory-efficient exact attention ☆22,361 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,843 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆71,234 · Updated this week
- A Flexible Framework for Experiencing Heterogeneous LLM Inference/Fine-tune Optimizations ☆16,649 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,519 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,938 · Updated this week
- Ongoing research training transformer models at scale ☆15,461 · Updated this week
- Fully open reproduction of DeepSeek-R1 ☆25,910 · Nov 24, 2025 · Updated 3 months ago
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,315 · Updated this week
- Development repository for the Triton language and compiler ☆18,501 · Updated this week
- A lightweight data processing framework built on DuckDB and 3FS. ☆4,931 · Mar 5, 2025 · Updated 11 months ago
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,706 · Updated this week
- Qwen3 is the large language model series developed by the Qwen team, Alibaba Cloud. ☆26,713 · Jan 9, 2026 · Updated last month
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆5,284 · Updated this week
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL) ☆9,037 · Feb 21, 2026 · Updated last week
- Muon is Scalable for LLM Training ☆1,440 · Aug 3, 2025 · Updated 7 months ago
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,261 · Aug 28, 2025 · Updated 6 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,073 · Apr 3, 2025 · Updated 11 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,618 · Feb 24, 2026 · Updated last week
- A Datacenter Scale Distributed Inference Serving Framework ☆6,154 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,371 · Feb 13, 2026 · Updated 2 weeks ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,428 · Updated this week
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆67,659 · Updated this week
- Transformer-related optimizations, including BERT and GPT ☆6,394 · Mar 27, 2024 · Updated last year
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,176 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,919 · Updated this week
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. ☆52,724 · Updated this week
- Minimal reproduction of DeepSeek R1-Zero ☆12,853 · Updated this week
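As a companion to the DeepGEMM entry above, here is a rough sketch of what per-block ("fine-grained") FP8 scaling means: each 1×BLOCK slice of a matrix gets its own scale, so a single outlier only inflates the scale of its own block rather than the whole row. The BLOCK size, the E4M3 maximum of 448, and the crude 3-mantissa-bit rounding are assumptions for illustration; this is not DeepGEMM's API.

```python
import numpy as np

# Hypothetical illustration of fine-grained FP8 scaling: quantize each
# 1 x BLOCK slice with its own scale, simulating FP8 E4M3 range and
# precision with regular floats. Not DeepGEMM's API.
FP8_MAX = 448.0   # largest finite E4M3 value
BLOCK = 128       # per-block scaling granularity (an assumption here)

def fp8_round_sim(v):
    # Crude E4M3 mantissa simulation: keep ~3 mantissa bits by rounding
    # at each value's binary exponent (ignores subnormals and overflow).
    mant, exp = np.frexp(v)
    return np.ldexp(np.round(mant * 16) / 16, exp)

def quantize_blockwise(x):
    """Return simulated-FP8 values plus one scale per 1 x BLOCK slice."""
    m, n = x.shape
    assert n % BLOCK == 0
    blocks = x.reshape(m, n // BLOCK, BLOCK)
    scales = np.maximum(np.abs(blocks).max(axis=-1, keepdims=True) / FP8_MAX, 1e-12)
    q = fp8_round_sim(np.clip(blocks / scales, -FP8_MAX, FP8_MAX))
    return q.reshape(m, n), scales.squeeze(-1)

def dequantize_blockwise(q, scales):
    m, n = q.shape
    return (q.reshape(m, n // BLOCK, BLOCK) * scales[..., None]).reshape(m, n)

x = np.random.default_rng(1).standard_normal((4, 256))
x[0, 0] = 1e4                                   # outlier inflates only its own block's scale
q, s = quantize_blockwise(x)
rel_err = np.abs(dequantize_blockwise(q, s) - x) / (np.abs(x) + 1e-12)
print(f"max relative round-trip error: {rel_err.max():.3f}")  # bounded by the ~3-bit mantissa
```

With coarser (per-tensor or per-row) scaling, the outlier at `x[0, 0]` would stretch the scale for every other value in its row; the per-block scales keep the rest of the matrix on a well-matched quantization grid.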