inclusionAI / Ring
Ring is a reasoning MoE LLM open-sourced by InclusionAI, derived from Ling.
☆95 · Updated last month
Alternatives and similar repositories for Ring
Users interested in Ring are comparing it to the repositories listed below.
- Klear-Reasoner: Advancing Reasoning Capability via Gradient-Preserving Clipping Policy Optimization ☆61 · Updated last week
- ☆89 · Updated last month
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆86 · Updated 11 months ago
- The official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆54 · Updated last year
- ☆103 · Updated 2 months ago
- ☆117 · Updated 2 months ago
- Implementation of FP8/INT8 rollout for RL training without performance drop. ☆184 · Updated this week
- [ICML 2024] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆123 · Updated 7 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆113 · Updated 3 months ago
- ☆100 · Updated 4 months ago
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆63 · Updated last month
- [ACL 2025] The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" ☆87 · Updated 4 months ago
- Scaling Preference Data Curation via Human-AI Synergy ☆105 · Updated 2 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆64 · Updated 4 months ago
- [ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches ☆54 · Updated 6 months ago
- ☆57 · Updated last month
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆169 · Updated last month
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models ☆149 · Updated 3 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆115 · Updated 10 months ago
- ☆51 · Updated 2 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel, and cost-effective data-synthesis method to unleash the reasoning capability of LLMs. ☆64 · Updated 10 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆119 · Updated last month
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆41 · Updated 6 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆93 · Updated 4 months ago
- A unified suite for generating elite reasoning problems and training high-performance LLMs, including pioneering attention-free architect… ☆65 · Updated 3 months ago
- ☆75 · Updated 2 weeks ago
- ☆147 · Updated 3 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆71 · Updated 5 months ago
- ARM: Adaptive Reasoning Model ☆47 · Updated last month
- MiroTrain is an efficient, algorithm-first framework for post-training large agentic models. ☆78 · Updated last week