inclusionAI / Ring-V2
Ring-V2 is a reasoning MoE LLM provided and open-sourced by InclusionAI.
☆81 · Updated last month
Alternatives and similar repositories for Ring-V2
Users interested in Ring-V2 are comparing it to the repositories listed below.
- ☆105 · Updated 3 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling · ☆181 · Updated 4 months ago
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" · ☆67 · Updated 8 months ago
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS25] · ☆209 · Updated 2 weeks ago
- SSRL: Self-Search Reinforcement Learning · ☆158 · Updated 3 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation · ☆118 · Updated 6 months ago
- Geometric-Mean Policy Optimization · ☆95 · Updated 3 weeks ago
- [NeurIPS 2025] The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond · ☆187 · Updated 5 months ago
- ☆85 · Updated 8 months ago
- MiroTrain is an efficient and algorithm-first framework for post-training large agentic models. · ☆99 · Updated 3 months ago
- Esoteric Language Models · ☆108 · Updated 2 weeks ago
- FastCuRL: Curriculum Reinforcement Learning with Stage-wise Context Scaling for Efficient LLM Reasoning · ☆53 · Updated 2 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) · ☆35 · Updated 9 months ago
- The official GitHub repo for "Diffusion Language Models are Super Data Learners". · ☆208 · Updated last month
- Easy and Efficient dLLM Fine-Tuning · ☆139 · Updated last week
- ☆105 · Updated 6 months ago
- PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning · ☆112 · Updated this week
- ☆342 · Updated last month
- Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI, derived from Ling. · ☆108 · Updated 4 months ago
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP25] · ☆33 · Updated 3 months ago
- Defeating the Training-Inference Mismatch via FP16 · ☆161 · Updated 3 weeks ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models · ☆224 · Updated last month
- ☆85 · Updated last month
- The official repository for SkyLadder: Better and Faster Pretraining via Context Window Scheduling · ☆40 · Updated last month
- The official repository of the paper "Pass@k Training for Adaptively Balancing Exploration and Exploitation of Large Reasoning Models" · ☆111 · Updated 3 months ago
- SIFT: Grounding LLM Reasoning in Contexts via Stickers · ☆57 · Updated 9 months ago
- QeRL enables RL for 32B LLMs on a single H100 GPU. · ☆466 · Updated 2 weeks ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs · ☆197 · Updated last week
- [NeurIPS'25 Spotlight] ARM: Adaptive Reasoning Model · ☆60 · Updated last month
- Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI. · ☆245 · Updated 2 months ago