zhangxy-2019 / critique-GRPO
☆34 · Updated last month
Alternatives and similar repositories for critique-GRPO
Users interested in critique-GRPO are comparing it to the repositories listed below.
- [ACL '25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models. ☆81 · Updated 7 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆111 · Updated 4 months ago
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆246 · Updated last month
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆128 · Updated 5 months ago
- The official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆69 · Updated 4 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆56 · Updated 9 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆163 · Updated 6 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆105 · Updated last month
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 10 months ago
- Official repository for the paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆87 · Updated 7 months ago
- ☆331 · Updated last month
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆127 · Updated 6 months ago
- Official codebase for "GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning". ☆81 · Updated 3 months ago
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆44 · Updated 3 months ago
- A comprehensive collection of process reward models. ☆108 · Updated last month
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆55 · Updated 3 months ago
- ☆206 · Updated 5 months ago
- The official GitHub repository for the survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆112 · Updated 4 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆326 · Updated 2 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆85 · Updated 7 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆61 · Updated 2 months ago
- [ACL '25] We introduce ScaleQuest, a scalable, novel, and cost-effective data synthesis method to unleash the reasoning capability of LLMs. ☆67 · Updated 10 months ago
- ☆287 · Updated 3 months ago
- Source code for the paper "ARIA: Training Language Agents with Intention-Driven Reward Aggregation". ☆22 · Updated last month
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆61 · Updated 9 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆79 · Updated 5 months ago
- End-to-End Reinforcement Learning for Multi-Turn Tool-Integrated Reasoning ☆273 · Updated this week
- Resources for the Enigmata Project. ☆70 · Updated last month
- ☆67 · Updated 3 months ago
- Extrapolating RLVR to General Domains without Verifiers ☆160 · Updated last month