PRIME-RL / ImplicitPRM
Repository for the paper "Free Process Rewards without Process Labels"
☆120 Updated last month
Alternatives and similar repositories for ImplicitPRM:
Users interested in ImplicitPRM are comparing it to the libraries listed below.
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆153 Updated 2 months ago
- ☆130 Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆115 Updated 5 months ago
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆94 Updated 4 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆112 Updated 7 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆120 Updated 3 months ago
- ☆53 Updated 3 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 Updated 4 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO ☆289 Updated 6 months ago
- GenRM-CoT: Data release for verification rationales ☆46 Updated 4 months ago
- [NeurIPS'24] Official code for *🎯 DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆94 Updated 2 months ago
- Interpretable Contrastive Monte Carlo Tree Search Reasoning ☆42 Updated 3 months ago
- Official implementation of the paper "Process Reward Model with Q-value Rankings" ☆48 Updated 2 weeks ago
- ☆92 Updated 3 weeks ago
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆153 Updated last month
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆186 Updated last week
- ☆64 Updated 10 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆48 Updated 2 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆67 Updated last month
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆126 Updated this week
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks 🧮✨ ☆170 Updated 9 months ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆172 Updated 6 months ago
- Code and data used in the paper "Training on Incorrect Synthetic Data via RL Scales LLM Math Reasoning Eight-Fold" ☆29 Updated 8 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆72 Updated 8 months ago
- Official repository of the Omni-MATH benchmark ☆71 Updated last month
- The HELMET Benchmark ☆114 Updated last week