★1,309 · Nov 17, 2025 · Updated 3 months ago
Alternatives and similar repositories for Kimi-Linear
Users interested in Kimi-Linear are comparing it to the libraries listed below.
- 🚀 Efficient implementations of state-of-the-art linear attention models · ★4,474 · Updated this week
- MoBA: Mixture of Block Attention for Long-Context LLMs · ★2,073 · Apr 3, 2025 · Updated 11 months ago
- Resilient multi-LLM orchestration with built-in failure handling, rate limits, retries, and a circuit breaker · ★29 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★969 · Feb 5, 2026 · Updated last month
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines · ★912 · Updated this week
- slime is an LLM post-training framework for RL scaling · ★4,536 · Updated this week
- [ICLR 2026] QeRL enables RL for 32B LLMs on a single H100 GPU · ★491 · Nov 27, 2025 · Updated 3 months ago
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model & vision-language-model based on Linear Attention · ★3,356 · Jul 7, 2025 · Updated 7 months ago
- ★26 · Jul 8, 2025 · Updated 7 months ago
- Efficient Triton implementation of Native Sparse Attention · ★268 · May 23, 2025 · Updated 9 months ago
- verl: Volcano Engine Reinforcement Learning for LLMs · ★19,519 · Updated this week
- Muon is an optimizer for hidden layers in neural networks · ★2,350 · Jan 19, 2026 · Updated last month
- Implementation of FP8/INT8 rollout for RL training without performance drop · ★293 · Nov 7, 2025 · Updated 3 months ago
- Kimi K2 is the large language model series developed by the Moonshot AI team · ★10,450 · Jan 21, 2026 · Updated last month
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… · ★1,551 · Jun 14, 2025 · Updated 8 months ago
- ★114 · Sep 13, 2025 · Updated 5 months ago
- ★813 · Jun 9, 2025 · Updated 8 months ago
- ★1,495 · Nov 18, 2025 · Updated 3 months ago
- [arXiv:2512.19673] Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies · ★61 · Feb 6, 2026 · Updated last month
- Understanding R1-Zero-Like Training: A Critical Perspective · ★1,219 · Aug 27, 2025 · Updated 6 months ago
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter · ★149 · Feb 27, 2026 · Updated last week
- A bidirectional pipeline parallelism algorithm for computation-communication overlap in DeepSeek V3/R1 training · ★2,926 · Jan 14, 2026 · Updated last month
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities · ★1,164 · Jul 15, 2025 · Updated 7 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ★659 · Updated this week
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling · ★6,206 · Feb 27, 2026 · Updated last week
- [AAAI 2026] UltraGen · ★77 · Feb 1, 2026 · Updated last month
- [CVPR 2026] 🔥🔥 Official Repo of USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning · ★1,207 · Sep 12, 2025 · Updated 5 months ago
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with block diffusion, mixed-CoT, unified RL) · ★1,591 · Feb 14, 2026 · Updated 2 weeks ago
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels · ★5,284 · Updated this week
- Efficient Triton Kernels for LLM Training · ★6,189 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculate the attention… · ★1,190 · Sep 30, 2025 · Updated 5 months ago
- One-shot and Few-shot 3D Editing without Per-Scene Optimization · ★164 · Aug 21, 2025 · Updated 6 months ago
- ★65 · Apr 26, 2025 · Updated 10 months ago
- SkyRL: A Modular Full-stack RL Library for LLMs · ★1,656 · Updated this week
- Muon is Scalable for LLM Training · ★1,440 · Aug 3, 2025 · Updated 7 months ago
- The LLM abstraction layer for modern AI agent applications · ★509 · Feb 24, 2026 · Updated last week
- torchcomms: a modern PyTorch communications API · ★344 · Updated this week
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ★129 · Jun 24, 2025 · Updated 8 months ago
- Fast and memory-efficient exact attention · ★22,460 · Updated this week