yinyueqin / relative-preference-optimization
Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts
☆24 · Updated last year
Alternatives and similar repositories for relative-preference-optimization:
Users interested in relative-preference-optimization are comparing it to the libraries listed below.
- A Survey on the Honesty of Large Language Models ☆56 · Updated 3 months ago
- Reproduction of "RLCD Reinforcement Learning from Contrast Distillation for Language Model Alignment☆67Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆55 · Updated 5 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆72 · Updated 7 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆60 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆43 · Updated 8 months ago
- ☆30 · Updated last week
- ☆49 · Updated last month
- ☆43 · Updated 5 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆119 · Updated 6 months ago
- [NeurIPS 2024] Official code of β-DPO: Direct Preference Optimization with Dynamic β ☆41 · Updated 5 months ago
- The official code repository for PRMBench. ☆68 · Updated last month
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆28 · Updated last week
- M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆56 · Updated 3 months ago
- Directional Preference Alignment ☆56 · Updated 6 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆130 · Updated last month
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆70 · Updated 4 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆73 · Updated 2 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆27 · Updated 6 months ago
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆37 · Updated 3 weeks ago
- Code for Merging Large Language Models ☆29 · Updated 7 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆77 · Updated this week
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆107 · Updated last year
- ☆59 · Updated 7 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆57 · Updated 5 months ago
- This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆24 · Updated 3 months ago
- [NeurIPS 2024] The official implementation of the paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆107 · Updated last week
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆49 · Updated 10 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆56 · Updated last month
- [ICLR 2025] Code & Data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · Updated 9 months ago