inclusionAI / Ring
Ring is a reasoning MoE LLM open-sourced by inclusionAI, derived from Ling.
☆ 87 · Updated last month
Alternatives and similar repositories for Ring
Users interested in Ring are comparing it to the repositories listed below.
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆ 86 · Updated 9 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆ 127 · Updated this week
- LongSpec: Long-Context Lossless Speculative Decoding with Efficient Drafting and Verification ☆ 60 · Updated this week
- The official repo for "AceCoder: Acing Coder RL via Automated Test-Case Synthesis" [ACL 2025] ☆ 88 · Updated 3 months ago
- [ICLR 2025] LongPO: Long Context Self-Evolution of Large Language Models through Short-to-Long Preference Optimization ☆ 38 · Updated 4 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆ 63 · Updated 3 months ago
- Fira: Can We Achieve Full-rank Training of LLMs Under Low-rank Constraint? ☆ 112 · Updated 8 months ago
- The official repo of "QuickLLaMA: Query-aware Inference Acceleration for Large Language Models" ☆ 53 · Updated last year
- [NeurIPS 2024] A Novel Rank-Based Metric for Evaluating Large Language Models ☆ 50 · Updated last month
- Open-Source LLM Coders with Co-Evolving Reinforcement Learning ☆ 93 · Updated this week
- A repo for open research on building large reasoning models ☆ 71 · Updated this week
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆ 144 · Updated last week
- RL Scaling and Test-Time Scaling (ICML 2025) ☆ 109 · Updated 5 months ago
- Code for "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆ 75 · Updated 9 months ago
- The official repo of "SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond" ☆ 152 · Updated last week
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆ 110 · Updated 2 months ago
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆ 50 · Updated 7 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆ 91 · Updated 2 months ago
- Exploring whether LLMs perform case-based or rule-based reasoning ☆ 29 · Updated last year