kyegomez / Lets-Verify-Step-by-Step
"Improving Mathematical Reasoning with Process Supervision" by OpenAI
☆108 · Updated last week
Alternatives and similar repositories for Lets-Verify-Step-by-Step:
Users interested in Lets-Verify-Step-by-Step are comparing it to the repositories listed below.
- Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples ☆84 · Updated 3 weeks ago
- Implementation of the Quiet-STaR paper (https://arxiv.org/pdf/2403.09629.pdf) ☆53 · Updated 8 months ago
- ☆105 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆120 · Updated 7 months ago
- ☆119 · Updated 6 months ago
- ☆91 · Updated last month
- ☆65 · Updated last year
- Critique-out-Loud Reward Models ☆57 · Updated 6 months ago
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆90 · Updated last month
- ☆96 · Updated 9 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆101 · Updated 4 months ago
- Augmented LLM with self-reflection ☆118 · Updated last year
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆101 · Updated 2 months ago
- A dataset of LLM-generated chain-of-thought steps annotated with mistake locations ☆80 · Updated 8 months ago
- ☆148 · Updated 4 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆46 · Updated last month
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆48 · Updated 5 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆139 · Updated 5 months ago
- ☆118 · Updated 10 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆180 · Updated last month
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆135 · Updated 2 months ago
- [ACL'24] Code and data for the paper "When is Tree Search Useful for LLM Planning? It Depends on the Discriminator" ☆54 · Updated last year
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆98 · Updated last year
- Repo for the paper "Free Process Rewards without Process Labels" ☆140 · Updated last month
- ☆99 · Updated 2 weeks ago
- Reasoning with Language Model is Planning with World Model ☆163 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆131 · Updated 6 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆60 · Updated 4 months ago
- Self-Alignment with Principle-Following Reward Models ☆158 · Updated last year
- ☆126 · Updated 5 months ago