MoonshotAI / Kimi-Linear
☆1,272 · Updated 2 months ago
Alternatives and similar repositories for Kimi-Linear
Users interested in Kimi-Linear are comparing it to the repositories listed below.
- WeDLM: The fastest diffusion language model with standard causal attention and native KV cache compatibility, delivering real speedups ov… ☆588 · Updated last week
- ☆1,452 · Updated 2 months ago
- ☆862 · Updated 4 months ago
- OpenTinker is an RL-as-a-Service infrastructure for foundation models ☆599 · Updated this week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆467 · Updated 8 months ago
- Block Diffusion for Ultra-Fast Speculative Decoding ☆349 · Updated 3 weeks ago
- dLLM: Simple Diffusion Language Modeling ☆1,633 · Updated 2 weeks ago
- Checkpoint-engine is a simple middleware to update model weights in LLM inference engines ☆891 · Updated this week
- ToolOrchestra is an end-to-end RL training framework for orchestrating tools and agentic workflows. ☆604 · Updated last month
- Official implementation of "Continuous Autoregressive Language Models" ☆714 · Updated last month
- Code for R-Zero: Self-Evolving Reasoning LLM from Zero Data (https://www.arxiv.org/pdf/2508.05004) ☆736 · Updated last month
- ☆725 · Updated last month
- Official JAX implementation of End-to-End Test-Time Training for Long Context ☆445 · Updated last week
- dInfer: An Efficient Inference Framework for Diffusion Language Models ☆396 · Updated 2 weeks ago
- Official Repository for "Glyph: Scaling Context Windows via Visual-Text Compression" ☆552 · Updated 2 months ago
- Open-source release accompanying Gao et al. 2025 ☆498 · Updated last month
- Dream 7B, a large diffusion language model ☆1,150 · Updated 2 months ago
- Research code artifacts for Code World Model (CWM), including inference tools, reproducibility, and documentation. ☆799 · Updated last month
- GPU-optimized framework for training diffusion language models at any scale. The backend of Quokka, Super Data Learners, and OpenMoE 2 tr… ☆318 · Updated 2 months ago
- QeRL enables RL for 32B LLMs on a single H100 GPU. ☆474 · Updated last month
- Tiny Model, Big Logic: Diversity-Driven Optimization Elicits Large-Model Reasoning Ability in VibeThinker-1.5B ☆563 · Updated 2 months ago
- ☆1,533 · Updated last month
- The official repo for "Parallel-R1: Towards Parallel Thinking via Reinforcement Learning" ☆251 · Updated 2 months ago
- Simple & Scalable Pretraining for Neural Architecture Research ☆306 · Updated last month
- A Scientific Multimodal Foundation Model ☆627 · Updated 3 months ago
- DFloat11 [NeurIPS '25]: Lossless Compression of LLMs and DiTs for Efficient GPU Inference ☆594 · Updated 2 months ago
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models ☆389 · Updated 2 months ago
- Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation (NeurIPS 2025) ☆538 · Updated 4 months ago
- Scaling RL on advanced reasoning models ☆661 · Updated 3 months ago
- ☆813 · Updated 7 months ago