inclusionAI / Ring
Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI, derived from Ling.
☆105 · Updated 3 months ago
Alternatives and similar repositories for Ring
Users interested in Ring are comparing it to the repositories listed below.
- ☆85 · Updated 7 months ago
- ☆86 · Updated 2 months ago
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆78 · Updated last month
- ☆101 · Updated 2 months ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆180 · Updated 4 months ago
- Towards a Unified View of Large Language Model Post-Training ☆183 · Updated 2 months ago
- [EMNLP'2025 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆66 · Updated 7 months ago
- Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI. ☆223 · Updated last month
- Efficient Agent Training for Computer Use ☆132 · Updated 2 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆180 · Updated 3 months ago
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆138 · Updated last year
- ☆74 · Updated 4 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆118 · Updated 5 months ago
- Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI. ☆75 · Updated 3 weeks ago
- ☆90 · Updated 5 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ☆126 · Updated 2 weeks ago
- MiroMind-M1 is a fully open-source series of reasoning language models built on Qwen-2.5, focused on advancing mathematical reasoning. ☆240 · Updated 3 months ago
- [NeurIPS 2025 Spotlight] ReasonFlux-Coder: Open-Source LLM Coders with Co-Evolving Reinforcement Learning ☆131 · Updated last month
- ☆98 · Updated 3 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆223 · Updated last week
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆231 · Updated 6 months ago
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆166 · Updated last week
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆89 · Updated last year
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 9 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS25] ☆198 · Updated 2 weeks ago
- ☆135 · Updated 2 months ago
- WideSearch: Benchmarking Agentic Broad Info-Seeking ☆100 · Updated last month
- A comprehensive collection of learning from rewards in the post-training and test-time scaling of LLMs, with a focus on both reward model… ☆58 · Updated 5 months ago
- The official repository of the paper "Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models" ☆99 · Updated 3 months ago
- SIFT: Grounding LLM Reasoning in Contexts via Stickers ☆58 · Updated 8 months ago