WindyLee0822 / Process_Q_Model
Official implementation of the paper "Process Reward Model with Q-value Rankings"
☆60 · Updated 5 months ago
Alternatives and similar repositories for Process_Q_Model
Users interested in Process_Q_Model are comparing it to the repositories listed below.
- RL Scaling and Test-Time Scaling (ICML'25) ☆109 · Updated 6 months ago
- ☆114 · Updated 6 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 8 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆168 · Updated 3 weeks ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆160 · Updated 4 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆105 · Updated 2 months ago
- Revisiting Mid-training in the Era of Reinforcement Learning Scaling ☆159 · Updated last week
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆76 · Updated 4 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆61 · Updated 7 months ago
- Critique-out-Loud Reward Models ☆70 · Updated 9 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆146 · Updated 9 months ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆82 · Updated 2 months ago
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆125 · Updated 4 months ago
- ☆99 · Updated last year
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆101 · Updated last week
- ☆67 · Updated last month
- ☆42 · Updated 5 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆59 · Updated 5 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆38 · Updated last month
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI ☆112 · Updated last week
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆101 · Updated 2 weeks ago
- ☆71 · Updated 4 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 7 months ago
- ☆48 · Updated 9 months ago
- ☆85 · Updated 2 months ago
- ☆91 · Updated 8 months ago
- ☆52 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 10 months ago
- ☆47 · Updated 5 months ago
- [ACL-25] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆63 · Updated 9 months ago