PKU-Alignment / aligner
[NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct
☆186 · Updated 8 months ago
Alternatives and similar repositories for aligner
Users interested in aligner are comparing it to the repositories listed below
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆128 · Updated 6 months ago
- Trial and Error: Exploration-Based Trajectory Optimization of LLM Agents (ACL 2024 Main Conference) ☆151 · Updated 11 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆148 · Updated 7 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆110 · Updated last month
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆82 · Updated 8 months ago
- ☆50 · Updated 11 months ago
- ☆207 · Updated 6 months ago
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆56 · Updated 10 months ago
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆142 · Updated last year
- ☆211 · Updated 7 months ago
- [ICLR 25 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆62 · Updated 2 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆171 · Updated 4 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆88 · Updated last year
- Repo of paper "Free Process Rewards without Process Labels" ☆164 · Updated 6 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆158 · Updated last year
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆131 · Updated 5 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆61 · Updated 11 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆84 · Updated 4 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆37 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆62 · Updated 9 months ago
- ☆51 · Updated 4 months ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆69 · Updated 9 months ago
- This is the official GitHub repository for our survey paper "Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large Language … ☆115 · Updated 4 months ago
- A comprehensive collection of process reward models. ☆110 · Updated this week
- ☆69 · Updated last year
- A research repo for experiments about Reinforcement Finetuning ☆52 · Updated 6 months ago
- On Memorization of Large Language Models in Logical Reasoning ☆72 · Updated 6 months ago
- Official code for the paper, "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆136 · Updated 2 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆340 · Updated 2 months ago
- A curated reading list for large language model (LLM) alignment. Take a look at our new survey "Large Language Model Alignment: A Survey"… ☆80 · Updated 2 years ago