WindyLee0822 / Process_Q_Model
Official implementation of the paper "Process Reward Model with Q-value Rankings"
☆56 · Updated 3 months ago
Alternatives and similar repositories for Process_Q_Model:
Users interested in Process_Q_Model are comparing it to the libraries listed below.
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 3 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆29 · Updated 3 weeks ago
- ☆109 · Updated 3 months ago
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples ☆85 · Updated last month
- Code for "Reasoning to Learn from Latent Thoughts" ☆93 · Updated last month
- Repo of the paper "Free Process Rewards without Process Labels" ☆145 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 7 months ago
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆62 · Updated last week
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 5 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆69 · Updated last month
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆51 · Updated 2 months ago
- Code for the paper "Teaching Language Models to Critique via Reinforcement Learning" ☆94 · Updated 3 weeks ago
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆80 · Updated last month
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆94 · Updated last month
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆141 · Updated 2 weeks ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆57 · Updated 5 months ago
- The official repository of the Omni-MATH benchmark ☆83 · Updated 4 months ago
- Critique-out-Loud Reward Models ☆63 · Updated 6 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆90 · Updated 2 weeks ago
- [ACL 2024] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- ☆97 · Updated 10 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆57 · Updated 6 months ago
- [ICML 2024] Official repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆55 · Updated 10 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆177 · Updated last month
- ☆72 · Updated 5 months ago
- ☆163 · Updated last month
- ☆111 · Updated this week
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆118 · Updated last month
- Agentic Reward Modeling: Integrating Human Preferences with Verifiable Correctness Signals for Reliable Reward Systems ☆90 · Updated last month
- Official repository of "Are Your LLMs Capable of Stable Reasoning?" ☆25 · Updated last month