★ 1,335 · Nov 17, 2025 · Updated 4 months ago
Alternatives and similar repositories for Kimi-Linear
Users interested in Kimi-Linear are comparing it to the libraries listed below.
- 🚀 Efficient implementations of state-of-the-art linear attention models · ★ 4,692 · Updated this week
- MoBA: Mixture of Block Attention for Long-Context LLMs · ★ 2,083 · Apr 3, 2025 · Updated 11 months ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines · ★ 925 · Feb 28, 2026 · Updated 3 weeks ago
- [AAAI 2026] UltraGen · ★ 77 · Feb 1, 2026 · Updated last month
- slime is an LLM post-training framework for RL Scaling · ★ 4,906 · Updated this week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" · ★ 977 · Feb 5, 2026 · Updated last month
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, a large language model and vision-language model based on linear attention · ★ 3,375 · Jul 7, 2025 · Updated 8 months ago
- [arXiv:2512.19673] Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies · ★ 60 · Feb 6, 2026 · Updated last month
- Muon is an optimizer for the hidden layers of neural networks · ★ 2,428 · Jan 19, 2026 · Updated 2 months ago
- ★ 1,515 · Nov 18, 2025 · Updated 4 months ago
- Efficient Triton implementation of Native Sparse Attention · ★ 270 · May 23, 2025 · Updated 10 months ago
- ★ 110 · Feb 4, 2026 · Updated last month
- Code & data for our paper "RobustGEC: Robust Grammatical Error Correction Against Subtle Context Perturbation" (EMNLP 2023) · ★ 17 · Jan 23, 2024 · Updated 2 years ago
- Implementation of FP8/INT8 rollout for RL training without performance drop · ★ 300 · Nov 7, 2025 · Updated 4 months ago
- [ICLR 2026] QeRL enables RL for 32B LLMs on a single H100 GPU · ★ 493 · Nov 27, 2025 · Updated 3 months ago
- Kimi K2 is the large language model series developed by the Moonshot AI team · ★ 10,535 · Jan 21, 2026 · Updated 2 months ago
- verl: Volcano Engine Reinforcement Learning for LLMs · ★ 20,097 · Updated this week
- [CVPR 2026] 🔥🔥 Official repo of USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning · ★ 1,215 · Sep 12, 2025 · Updated 6 months ago
- Simple & Scalable Pretraining for Neural Architecture Research · ★ 309 · Dec 6, 2025 · Updated 3 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… · ★ 1,558 · Jun 14, 2025 · Updated 9 months ago
- Automated GPU Kernel Generation via Co-Evolving Intrinsic World Model · ★ 91 · Mar 2, 2026 · Updated 3 weeks ago
- ★ 811 · Jun 9, 2025 · Updated 9 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training · ★ 723 · Updated this week
- Muon is Scalable for LLM Training · ★ 1,446 · Aug 3, 2025 · Updated 7 months ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities · ★ 1,168 · Jul 15, 2025 · Updated 8 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective · ★ 1,232 · Aug 27, 2025 · Updated 6 months ago
- A domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels · ★ 5,403 · Mar 20, 2026 · Updated last week
- Official JAX implementation of End-to-End Test-Time Training for Long Context · ★ 568 · Feb 15, 2026 · Updated last month
- Resilient multi-LLM orchestration with built-in failure handling, rate limits, retries, and a circuit breaker · ★ 30 · Mar 20, 2026 · Updated last week
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter · ★ 157 · Feb 27, 2026 · Updated last month
- An efficient implementation of the NSA (Native Sparse Attention) kernel · ★ 132 · Jun 24, 2025 · Updated 9 months ago
- ★ 114 · Sep 13, 2025 · Updated 6 months ago
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible · ★ 4,855 · Mar 20, 2026 · Updated last week
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference · ★ 961 · Feb 25, 2026 · Updated last month
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculate the attention… · ★ 1,198 · Mar 9, 2026 · Updated 2 weeks ago
- Tutorial exercises and code for the GPU Communications Tutorial at Hot Interconnects 2025 · ★ 31 · Oct 22, 2025 · Updated 5 months ago
- Linear Attention Sequence Parallelism (LASP) · ★ 88 · Jun 4, 2024 · Updated last year
- ★ 65 · Apr 26, 2025 · Updated 11 months ago
- Efficient Triton Kernels for LLM Training · ★ 6,242 · Updated this week
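Several of the repositories above (flash-linear-attention, MiniMax-01, LASP, and Kimi-Linear itself) center on linear attention. As a rough NumPy sketch of the shared idea, assuming an `elu(x) + 1` feature map (one common choice; the actual kernels and feature maps vary across these libraries):

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    # Kernelized attention: softmax(q @ k.T) @ v is replaced by
    # phi(q) @ (phi(k).T @ v), cutting cost from O(n^2 d) to O(n d^2).
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, always > 0
    qp, kp = phi(q), phi(k)
    kv = kp.T @ v                    # (d, d_v) key/value summary, built once
    z = qp @ kp.sum(axis=0)          # per-query normalizer
    return (qp @ kv) / (z[:, None] + eps)

rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, 6, 4))  # three (6, 4) arrays: queries, keys, values
out = linear_attention(q, k, v)
print(out.shape)  # (6, 4)
```

Because the feature map is strictly positive, each output row is a convex combination of value rows, just as with softmax attention; the difference is that the key/value summary `kv` can be accumulated once (or recurrently, for the causal variants these repos implement) instead of materializing the full n×n attention matrix.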