deepseek-ai / FlashMLA
FlashMLA: Efficient MLA decoding kernels
☆11,642 · Updated 2 months ago
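For context: MLA (multi-head latent attention, introduced in DeepSeek-V2) caches a single low-rank latent vector per token instead of full per-head keys and values, and decode kernels like FlashMLA attend directly against that latent cache. Below is a minimal pure-PyTorch sketch of the decode step these kernels accelerate, using "weight absorption"; it is not FlashMLA's API, and all names and sizes (`d_c`, `d_h`, `W_uk`, `W_uv`) are illustrative assumptions, with the decoupled RoPE branch omitted for brevity.

```python
import torch

torch.manual_seed(0)
B, H, T = 2, 8, 128      # batch, query heads, tokens already in the cache
d_c, d_h = 64, 32        # latent dim per token, per-head dim (assumed sizes)

# MLA's KV cache: one shared d_c-dim latent per cached token,
# instead of per-head keys and values.
c_kv = torch.randn(B, T, d_c)

# Per-head up-projections from the latent to keys and values.
W_uk = torch.randn(H, d_h, d_c)
W_uv = torch.randn(H, d_h, d_c)

# One new query token per sequence (a single decode step).
q = torch.randn(B, H, d_h)

# Weight absorption: fold W_uk into the query so attention scores are
# computed directly against the latent cache, never materializing keys.
q_lat = torch.einsum("bhd,hdc->bhc", q, W_uk)               # [B, H, d_c]
scores = torch.einsum("bhc,btc->bht", q_lat, c_kv) / d_h**0.5
attn = scores.softmax(dim=-1)                               # [B, H, T]

# Attend in latent space, then up-project the context to value space.
ctx_lat = torch.einsum("bht,btc->bhc", attn, c_kv)          # [B, H, d_c]
out = torch.einsum("bhc,hdc->bhd", ctx_lat, W_uv)           # [B, H, d_h]
print(out.shape)  # torch.Size([2, 8, 32])
```

The reason for absorbing `W_uk` into the query is that the cache then stays at `d_c` floats per token regardless of head count, which is what makes MLA decoding far lighter on memory bandwidth than a standard multi-head KV cache.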
Alternatives and similar repositories for FlashMLA
Users interested in FlashMLA are comparing it to the libraries listed below.
- DeepEP: an efficient expert-parallel communication library ☆8,265 · Updated this week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆5,517 · Updated last week
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in V3/R1 training. ☆2,827 · Updated 4 months ago
- Production-tested AI infrastructure tools for efficient AGI development and community-driven innovation ☆7,860 · Updated last month
- A high-performance distributed file system designed to address the challenges of AI training and inference workloads. ☆9,132 · Updated 3 weeks ago
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆3,505 · Updated last week
- Expert Parallelism Load Balancer ☆1,229 · Updated 3 months ago
- Analyze computation-communication overlap in V3/R1. ☆1,076 · Updated 3 months ago
- SGLang is a fast serving framework for large language models and vision language models. ☆15,747 · Updated this week
- A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations ☆14,519 · Updated last week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆10,431 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆3,306 · Updated this week
- DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model ☆4,920 · Updated 9 months ago
- High-performance inference framework for large language models, focusing on efficiency, flexibility, and availability. ☆1,154 · Updated this week
- Qwen3 is the large language model series developed by Qwen team, Alibaba Cloud. ☆22,363 · Updated 2 weeks ago
- Qwen2.5-Omni is an end-to-end multimodal model by Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… ☆3,271 · Updated last month
- Simple RL training for reasoning ☆3,676 · Updated 3 months ago
- Qwen2.5-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud. ☆11,442 · Updated last month
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆1,817 · Updated 3 months ago
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,672 · Updated this week
- An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & vLLM & Ray & Dynamic Sampling & Asy… ☆7,287 · Updated this week
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model & vision-language-model based on Linear Attention ☆3,016 · Updated this week
- Janus-Series: Unified Multimodal Understanding and Generation Models ☆17,428 · Updated 5 months ago
- 📚LeetCUDA: Modern CUDA Learn Notes with PyTorch for Beginners🐑, 200+ CUDA Kernels, Tensor Cores, HGEMM, FA-2 MMA.🎉 ☆5,430 · Updated last week
- My learning notes/codes for ML SYS. ☆2,854 · Updated this week
- s1: Simple test-time scaling ☆6,487 · Updated 2 weeks ago
- DeepSeek-VL2: Mixture-of-Experts Vision-Language Models for Advanced Multimodal Understanding ☆4,947 · Updated 4 months ago
- Reproduce R1 Zero on Logic Puzzle ☆2,374 · Updated 3 months ago
- Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 500+ LLMs (Qwen3, Qwen3-MoE, Llama4, InternLM3, DeepSeek-R1, ...) and 200+ MLLMs (Qwen2.5-… ☆8,609 · Updated this week
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,747 · Updated last year