yinyueqin / relative-preference-optimization
Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts
☆25 · Updated last year
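The page itself carries no summary of the method, but the repo title suggests a DPO-style objective that contrasts chosen and rejected responses both within a prompt and across related prompts. Below is a minimal sketch of what such a loss could look like; the function name `rpo_style_loss`, the cosine-similarity weighting, and the hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a cross-prompt preference loss in the spirit of the
# repo title. NOT the authors' code: the weighting scheme and all names
# and defaults here are assumptions for illustration.
import torch
import torch.nn.functional as F

def rpo_style_loss(policy_chosen_logps, policy_rejected_logps,
                   ref_chosen_logps, ref_rejected_logps,
                   prompt_embs, beta=0.1, tau=1.0):
    """Contrast each chosen response against every rejected response in the
    batch, weighting cross-prompt pairs by prompt similarity.

    All log-prob arguments have shape (B,); prompt_embs has shape (B, D).
    """
    # DPO-style implicit rewards: beta * log(pi_theta / pi_ref)
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)        # (B,)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)  # (B,)

    # Pairwise margins: entry (i, j) contrasts chosen_i against rejected_j,
    # so the diagonal recovers the usual same-prompt DPO pairs.
    margins = chosen_rewards.unsqueeze(1) - rejected_rewards.unsqueeze(0)   # (B, B)

    # Weight pair (i, j) by prompt similarity, so identical prompts dominate
    # and unrelated prompts contribute little; tau controls the sharpness.
    # This weighting is an assumed stand-in for cross-prompt contrast.
    sim = F.cosine_similarity(prompt_embs.unsqueeze(1),
                              prompt_embs.unsqueeze(0), dim=-1)             # (B, B)
    weights = torch.softmax(sim / tau, dim=1)

    # Negative log-sigmoid of the weighted margins, as in DPO.
    return -(weights * F.logsigmoid(margins)).sum(dim=1).mean()
```

If `weights` were replaced by the identity matrix, this would reduce to standard DPO over same-prompt pairs only.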
Alternatives and similar repositories for relative-preference-optimization
Users interested in relative-preference-optimization are comparing it to the libraries listed below.
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆94 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- ☆46 · Updated 3 months ago
- GenRM-CoT: Data release for verification rationales ☆68 · Updated last year
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆38 · Updated last year
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆63 · Updated last year
- Discriminative Constrained Optimization for Reinforcing Large Reasoning Models ☆49 · Updated 2 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆146 · Updated 2 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆151 · Updated 10 months ago
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 9 months ago
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆69 · Updated 5 months ago
- ☆51 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆191 · Updated 11 months ago
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆46 · Updated 10 months ago
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆46 · Updated 8 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- This is my attempt to create a Self-Correcting-LLM, based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆38 · Updated 6 months ago
- A Sober Look at Language Model Reasoning ☆92 · Updated last month
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆78 · Updated 7 months ago
- Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆62 · Updated 7 months ago
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆119 · Updated last year
- ☆77 · Updated last year
- This is code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated last year
- ☆58 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆168 · Updated 9 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 · Updated 9 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆107 · Updated 2 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆58 · Updated last year
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 3 months ago