PKU-Alignment / aligner
[NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct
☆191 · Updated last year
Alternatives and similar repositories for aligner
Users interested in aligner are comparing it to the libraries listed below.
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆134 · Updated 9 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆150 · Updated 2 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆151 · Updated 11 months ago
- ☆51 · Updated last year
- ☆220 · Updated 9 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆159 · Updated last year
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆94 · Updated last year
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆55 · Updated last year
- ☆213 · Updated 10 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆182 · Updated 7 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆150 · Updated last year
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆73 · Updated 5 months ago
- Repo of paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆83 · Updated last year
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆162 · Updated 8 months ago
- This is my attempt to create a Self-Correcting-LLM based on the paper Training Language Models to Self-Correct via Reinforcement Learning by g… ☆38 · Updated 6 months ago
- Codes and Data for Scaling Relationship on Learning Mathematical Reasoning with Large Language Models ☆269 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆65 · Updated last year
- A research repo for experiments about Reinforcement Finetuning ☆53 · Updated 9 months ago
- ☆47 · Updated 9 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆150 · Updated 2 months ago
- A comprehensive collection of process reward models. ☆131 · Updated 3 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆142 · Updated 2 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆119 · Updated last year
- Model merging is a highly efficient approach for long-to-short reasoning. ☆96 · Updated 3 months ago
- [NeurIPS 2024 D&B Track] GTA: A Benchmark for General Tool Agents ☆132 · Updated 9 months ago
- RM-R1: Unleashing the Reasoning Potential of Reward Models ☆155 · Updated 6 months ago
- Code for Paper (ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models) ☆199 · Updated 2 years ago
- [R]einforcement [L]earning from [M]odel-rewarded [T]hinking - code for the paper "Language Models That Think, Chat Better" ☆123 · Updated 2 months ago