inclusionAI / Ling-V2
Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI.
☆249 · Updated 3 months ago
Alternatives and similar repositories for Ling-V2
Users interested in Ling-V2 are comparing it to the libraries listed below.
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆118 · Updated 7 months ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆246 · Updated 4 months ago
- ☆165 · Updated last week
- ☆472 · Updated 3 weeks ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- ☆84 · Updated 9 months ago
- ☆74 · Updated 7 months ago
- Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI. ☆87 · Updated 2 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆198 · Updated last month
- ☆99 · Updated 4 months ago
- PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning ☆249 · Updated 3 weeks ago
- Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI, derived from Ling. ☆107 · Updated 5 months ago
- ☆93 · Updated 7 months ago
- Efficient Agent Training for Computer Use ☆134 · Updated 4 months ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆239 · Updated 7 months ago
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆465 · Updated 7 months ago
- ☆109 · Updated 3 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆228 · Updated 2 months ago
- Implementation for OAgents: An Empirical Study of Building Effective Agents ☆299 · Updated 2 months ago
- ☆75 · Updated 6 months ago
- LIMI: Less is More for Agency ☆155 · Updated 2 months ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆188 · Updated 6 months ago
- The open-source code of MetaStone-S1. ☆106 · Updated 5 months ago
- MiroTrain is an efficient and algorithm-first framework for post-training large agentic models. ☆100 · Updated 4 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆81 · Updated last week
- ☆178 · Updated 8 months ago
- [NeurIPS'25 Oral] Query-agnostic KV cache eviction: 3–4× reduction in memory and 2× decrease in latency (Qwen3/2.5, Gemma3, LLaMA3) ☆176 · Updated last week
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS25] ☆210 · Updated last month
- A construction kit for reinforcement learning environment management. ☆273 · Updated this week
- The official repo for “Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem” [EMNLP25] ☆33 · Updated 4 months ago