⭐ 1,362 · Nov 17, 2025 · Updated 4 months ago
Alternatives and similar repositories for Kimi-Linear
Users interested in Kimi-Linear are comparing it to the libraries listed below.
- 🚀 Efficient implementations for emerging model architectures ⭐ 4,878 · Updated this week
- MoBA: Mixture of Block Attention for Long-Context LLMs ⭐ 2,090 · Apr 3, 2025 · Updated last year
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ⭐ 940 · Feb 28, 2026 · Updated last month
- [AAAI 2026] UltraGen ⭐ 78 · Feb 1, 2026 · Updated 2 months ago
- slime is an LLM post-training framework for RL Scaling. ⭐ 5,264 · Apr 9, 2026 · Updated last week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ⭐ 982 · Feb 5, 2026 · Updated 2 months ago
- The official repo of MiniMax-Text-01 and MiniMax-VL-01, large-language-model & vision-language-model based on Linear Attention ⭐ 3,402 · Jul 7, 2025 · Updated 9 months ago
- Muon is an optimizer for hidden layers in neural networks ⭐ 2,479 · Jan 19, 2026 · Updated 2 months ago
- [arxiv: 2512.19673] Bottom-up Policy Optimization: Your Language Model Policy Secretly Contains Internal Policies ⭐ 60 · Feb 6, 2026 · Updated 2 months ago
- ⭐ 1,553 · Nov 18, 2025 · Updated 4 months ago
- Efficient Triton implementation of Native Sparse Attention. ⭐ 274 · May 23, 2025 · Updated 10 months ago
- ⭐ 111 · Feb 4, 2026 · Updated 2 months ago
- Code & Data for our Paper "RobustGEC: Robust Grammatical Error Correction Against Subtle Context Perturbation" (EMNLP 2023) ⭐ 17 · Jan 23, 2024 · Updated 2 years ago
- Implementation of FP8/INT8 Rollout for RL training without performance drop. ⭐ 298 · Nov 7, 2025 · Updated 5 months ago
- [ICLR 2026] QeRL enables RL for 32B LLMs on a single H100 GPU. ⭐ 498 · Mar 30, 2026 · Updated 2 weeks ago
- Kimi K2 is the large language model series developed by the Moonshot AI team ⭐ 10,621 · Jan 21, 2026 · Updated 2 months ago
- verl: Volcano Engine Reinforcement Learning for LLMs ⭐ 20,603 · Updated this week
- [CVPR 2026] 🔥🔥 Official Repo of USO: Unified Style and Subject-Driven Generation via Disentangled and Reward Learning ⭐ 1,217 · Sep 12, 2025 · Updated 7 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ⭐ 1,570 · Jun 14, 2025 · Updated 10 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ⭐ 327 · Mar 31, 2026 · Updated 2 weeks ago
- ⭐ 814 · Jun 9, 2025 · Updated 10 months ago
- Muon is Scalable for LLM Training ⭐ 1,453 · Aug 3, 2025 · Updated 8 months ago
- A Distributed Attention Towards Linear Scalability for Ultra-Long Context, Heterogeneous Data Training ⭐ 781 · Apr 8, 2026 · Updated last week
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ⭐ 1,175 · Jul 15, 2025 · Updated 9 months ago
- Understanding R1-Zero-Like Training: A Critical Perspective ⭐ 1,241 · Aug 27, 2025 · Updated 7 months ago
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ⭐ 5,478 · Updated this week
- Resilient multi-LLM orchestration with built-in failure handling, rate limits, retries, and circuit breaker. ⭐ 31 · Mar 23, 2026 · Updated 3 weeks ago
- Official JAX implementation of End-to-End Test-Time Training for Long Context ⭐ 583 · Feb 15, 2026 · Updated 2 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ⭐ 133 · Jun 24, 2025 · Updated 9 months ago
- DeeperGEMM: crazy optimized version ⭐ 86 · May 5, 2025 · Updated 11 months ago
- [ASPLOS'26] Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter ⭐ 162 · Feb 27, 2026 · Updated last month
- Lightning-Fast RL for LLM Reasoning and Agents. Made Simple & Flexible. ⭐ 5,011 · Updated this week
- ⭐ 115 · Sep 13, 2025 · Updated 7 months ago
- Automated High-Performance GPU Kernel Generation ⭐ 95 · Updated this week
- [ICML 2025] SpargeAttention: A training-free sparse attention that accelerates any model inference. ⭐ 973 · Feb 25, 2026 · Updated last month
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… ⭐ 1,203 · Apr 8, 2026 · Updated last week
- Tutorial Exercises and Code for GPU Communications Tutorial at Hot Interconnects 2025 ⭐ 31 · Oct 22, 2025 · Updated 5 months ago
- Linear Attention Sequence Parallelism (LASP) ⭐ 88 · Jun 4, 2024 · Updated last year
- ⭐ 67 · Apr 26, 2025 · Updated 11 months ago