yinyueqin / relative-preference-optimization
Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts
☆25 · Updated last year
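As a rough orientation, the sketch below illustrates the cross-prompt contrastive idea suggested by the paper title: a DPO-style implicit reward contrasted across chosen/rejected responses that may come from different prompts, with pairs weighted by prompt similarity. This is an illustrative assumption about the method, not the repository's actual implementation; all function and tensor names are hypothetical.

```python
import torch
import torch.nn.functional as F

def rpo_style_loss(chosen_logps, rejected_logps,
                   ref_chosen_logps, ref_rejected_logps,
                   chosen_prompt_emb, rejected_prompt_emb, beta=0.1):
    # Implicit rewards as in DPO: beta * (policy logp - reference logp).
    chosen_rewards = beta * (chosen_logps - ref_chosen_logps)        # (N,)
    rejected_rewards = beta * (rejected_logps - ref_rejected_logps)  # (M,)

    # Reward margins for every (chosen, rejected) pair, including pairs
    # whose responses come from different prompts.
    margins = chosen_rewards.unsqueeze(1) - rejected_rewards.unsqueeze(0)  # (N, M)

    # Weight cross-prompt pairs by prompt similarity so contrasts between
    # related prompts contribute more (an illustrative choice, not confirmed
    # against the repository's code).
    sim = F.cosine_similarity(chosen_prompt_emb.unsqueeze(1),
                              rejected_prompt_emb.unsqueeze(0), dim=-1)    # (N, M)
    weights = torch.softmax(sim, dim=-1)

    # Similarity-weighted pairwise logistic loss.
    return -(weights * F.logsigmoid(margins)).sum(dim=-1).mean()
```

With a single chosen/rejected pair from the same prompt, the weight collapses to 1 and this reduces to a standard DPO-style objective.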
Alternatives and similar repositories for relative-preference-optimization
Users interested in relative-preference-optimization are comparing it to the libraries listed below.
- ☆46 · Updated 4 months ago
- Discriminative Constrained Optimization for Reinforcing Large Reasoning Models ☆50 · Updated 2 months ago
- Official implementation of the ICLR 2025 paper: Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and… ☆70 · Updated 10 months ago
- This is my attempt to create a Self-Correcting-LLM based on the paper Training Language Models to Self-Correct via Reinforcement Learning by g… ☆38 · Updated 6 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆95 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆68 · Updated last year
- A Sober Look at Language Model Reasoning ☆92 · Updated 2 months ago
- This is an official implementation of the Reward rAnked Fine-Tuning Algorithm (RAFT), also known as iterative best-of-n fine-tuning or re… ☆39 · Updated last year
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆110 · Updated 3 months ago
- Repo of paper "Free Process Rewards without Process Labels" ☆168 · Updated 10 months ago
- ☆78 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆151 · Updated 11 months ago
- [NeurIPS 2025] Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆157 · Updated 3 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆124 · Updated 10 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆79 · Updated 7 months ago
- [EMNLP 2025] LightThinker: Thinking Step-by-Step Compression ☆131 · Updated 9 months ago
- [ICLR 2026] Laser: Learn to Reason Efficiently with Adaptive Length-based Reward Shaping ☆62 · Updated 8 months ago
- Reproduction of "RLCD Reinforcement Learning from Contrast Distillation for Language Model Alignment☆69Updated 2 years ago
- RL with Experience Replay ☆54 · Updated 6 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆58 · Updated last year
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆134 · Updated 10 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆64 · Updated last year
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆70 · Updated 6 months ago
- Optimizing Anytime Reasoning via Budget Relative Policy Optimization ☆51 · Updated 6 months ago
- ☆33 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆152 · Updated 6 months ago
- [ACL'25] The official code repository for PRMBench: A Fine-grained and Challenging Benchmark for Process-Level Reward Models ☆87 · Updated 11 months ago
- The rule-based evaluation subset and code implementation of Omni-MATH ☆26 · Updated last year
- ☆73 · Updated 7 months ago