amazon-science / PrefEval
☆31 · Updated 8 months ago
Alternatives and similar repositories for PrefEval
Users interested in PrefEval are comparing it to the libraries listed below.
- ☆204 · Updated last month
- ☆223 · Updated 10 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆414 · Updated 6 months ago
- A brief and partial summary of RLHF algorithms. ☆143 · Updated 10 months ago
- Paper reproduction of Google's SCoRe (Training Language Models to Self-Correct via Reinforcement Learning) ☆142 · Updated last year
- ☆141 · Updated 10 months ago
- ☆352 · Updated 6 months ago
- Repo of the paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 · Updated 10 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆87 · Updated 10 months ago
- A Sober Look at Language Model Reasoning ☆92 · Updated 2 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆255 · Updated 8 months ago
- Official implementation of the paper "S²R: Teaching LLMs to Self-verify and Self-correct via Reinforcement Learning" ☆73 · Updated 9 months ago
- [NeurIPS 2024] Official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆134 · Updated 10 months ago
- A continually updated list of literature on Reinforcement Learning from AI Feedback (RLAIF) ☆194 · Updated 5 months ago
- A repo for open research on building large reasoning models ☆133 · Updated this week
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆95 · Updated last year
- ☆117 · Updated last year
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆116 · Updated 5 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆259 · Updated 8 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆79 · Updated 7 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆71 · Updated 11 months ago
- ☆108 · Updated last month
- ☆144 · Updated 4 months ago
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆70 · Updated 9 months ago
- ☆215 · Updated 11 months ago
- ☆73 · Updated 9 months ago
- ☆216 · Updated 7 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆55 · Updated last year
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆157 · Updated 3 months ago