TU2021 / DPO-VP
Improving math reasoning through Direct Preference Optimization with Verifiable Pairs
☆14 · Updated 4 months ago
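The description names Direct Preference Optimization trained on pairs whose chosen/rejected labels come from verifiable answer checking (e.g., comparing a model's final math answer against ground truth). As rough orientation only, below is a minimal sketch of the standard DPO loss (Rafailov et al., 2023) applied to such pairs; the function name, arguments, and `beta` value are illustrative assumptions, not taken from the DPO-VP codebase.

```python
# Minimal sketch of the DPO objective over verifiable preference pairs.
# Assumes (prompt, chosen, rejected) pairs were built by verifying answers
# against ground truth; all names here are hypothetical, not from DPO-VP.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over a batch of preference pairs.

    Each argument is a tensor of summed token log-probabilities of the
    chosen/rejected responses under the trainable policy and a frozen
    reference model.
    """
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    logits = beta * (pi_logratios - ref_logratios)
    # -log sigmoid(.) increases the policy's margin for the
    # verified-correct response over the verified-incorrect one.
    return -F.logsigmoid(logits).mean()
```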
Alternatives and similar repositories for DPO-VP
Users interested in DPO-VP are comparing it to the repositories listed below.
- Official implementation of the NeurIPS 2024 paper CORY ☆17 · Updated 4 months ago
- [ACL'24, Outstanding Paper] Emulated Disalignment: Safety Alignment for Large Language Models May Backfire! ☆37 · Updated 11 months ago
- ☆20 · Updated last month
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆83 · Updated 11 months ago
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆129 · Updated this week
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆251 · Updated last week
- DistRL: An Asynchronous Distributed Reinforcement Learning Framework for On-Device Control Agents ☆25 · Updated 4 months ago
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆38 · Updated 5 months ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- [ACL 2025] Data and code for the paper "VLSBench: Unveiling Visual Leakage in Multimodal Safety" ☆48 · Updated 2 months ago
- Implementation of the MATRIX framework (ICML 2024) ☆56 · Updated last year
- [ICML 2025] "From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium" ☆12 · Updated last week
- Benchmarking LLMs' Gaming Ability in Multi-Agent Environments ☆83 · Updated 2 months ago
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and…" ☆64 · Updated 3 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆73 · Updated last month
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- SPA-Bench: A Comprehensive Benchmark for SmartPhone Agent Evaluation ☆38 · Updated last week
- [arXiv] Do Not Let Low-Probability Tokens Over-Dominate in RL for LLMs ☆34 · Updated 2 months ago
- Official implementation of Rewarded Soups ☆58 · Updated last year
- ☆242 · Updated 2 weeks ago
- My attempt to create a self-correcting LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆35 · Updated last week
- ☆21 · Updated last month
- Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts ☆25 · Updated last year
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆57 · Updated 4 months ago
- ☆12 · Updated 3 months ago
- ☆14 · Updated last month
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆188 · Updated last year
- Reinforced multi-LLM agent training ☆30 · Updated last month
- AdaRFT: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning ☆38 · Updated last month
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆180 · Updated 6 months ago