kyegomez / Lets-Verify-Step-by-Step
"Improving Mathematical Reasoning with Process Supervision" by OPENAI
☆111 · Updated 3 weeks ago
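The paper this repository implements introduces process supervision: instead of rewarding only a final answer, a process reward model (PRM) scores each intermediate reasoning step, and a solution's overall score is the product of its per-step correctness probabilities. A minimal sketch of that scoring scheme, where `score_step` is a hypothetical stub standing in for a trained PRM classifier:

```python
# Sketch of process-supervised scoring: a process reward model (PRM) assigns
# each reasoning step a probability of being correct, and a whole solution is
# scored as the product of its per-step probabilities. `score_step` is a
# hypothetical stand-in for a trained PRM (in practice, a fine-tuned LLM).

def score_step(problem: str, steps_so_far: list[str], step: str) -> float:
    """Hypothetical PRM: P(step is correct | problem, previous steps)."""
    return 0.9  # stub value; a real PRM returns a learned probability

def solution_score(problem: str, steps: list[str]) -> float:
    """Aggregate per-step PRM scores into one solution-level score."""
    score = 1.0
    for i, step in enumerate(steps):
        score *= score_step(problem, steps[:i], step)
    return score

# Best-of-N selection: rank candidate solutions by their PRM score.
candidates = [["step 1", "step 2"], ["single step"]]
best = max(candidates, key=lambda s: solution_score("2+2=?", s))
```

With the constant stub, shorter solutions score higher (fewer factors below 1); a real PRM instead discriminates by step correctness.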
Alternatives and similar repositories for Lets-Verify-Step-by-Step
Users interested in Lets-Verify-Step-by-Step are comparing it to the repositories listed below:
- [ICML 2025] Flow of Reasoning: Training LLMs for Divergent Reasoning with Minimal Examples ☆110 · Updated 3 months ago
- RL Scaling and Test-Time Scaling (ICML'25) ☆112 · Updated 9 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆67 · Updated 8 months ago
- ☆139 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆125 · Updated last year
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022) ☆218 · Updated 2 years ago
- Self-Alignment with Principle-Following Reward Models ☆169 · Updated last month
- ☆103 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆143 · Updated last year
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆109 · Updated 8 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆149 · Updated last year
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆83 · Updated 7 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆116 · Updated 11 months ago
- Critique-out-Loud Reward Models ☆70 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆67 · Updated last year
- ☆76 · Updated 11 months ago
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆176 · Updated 5 months ago
- ☆197 · Updated 6 months ago
- Replicating o1 inference-time scaling laws ☆90 · Updated 11 months ago
- ☆116 · Updated 9 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆114 · Updated 3 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 5 months ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆54 · Updated last year
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆64 · Updated 9 months ago
- ☆100 · Updated last year
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆202 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆132 · Updated last year
- Repo for the paper "Free Process Rewards without Process Labels" ☆165 · Updated 8 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆178 · Updated 4 months ago
- [ICML 2025] Teaching Language Models to Critique via Reinforcement Learning ☆115 · Updated 6 months ago