RLHFlow / Directional-Preference-Alignment
Directional Preference Alignment
☆57 · Updated last year
Alternatives and similar repositories for Directional-Preference-Alignment
Users who are interested in Directional-Preference-Alignment are comparing it to the libraries listed below.
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆110 · Updated 2 years ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆44 · Updated 6 months ago
- [ICML 2024] Official repository for "EXO: Towards Efficient Exact Optimization of Language Model Alignment" ☆57 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 5 months ago
- ☆103 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- Official implementation of the Reward rAnked Fine-Tuning algorithm (RAFT), also known as iterative best-of-n fine-tuning or rejection sampling fine-tuning ☆37 · Updated last year
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasoning… ☆51 · Updated last year
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆65 · Updated 7 months ago
- ☆101 · Updated last year
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆102 · Updated last week
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆25 · Updated last year
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆67 · Updated last year
- ☆45 · Updated last year
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆148 · Updated 8 months ago
- Official implementation of the ICLR 2025 paper "Rethinking Bradley-Terry Models in Preference-based Reward Modeling: Foundations, Theory, and Alternatives" ☆66 · Updated 6 months ago
- ☆69 · Updated last year
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆57 · Updated 11 months ago
- Self-Alignment with Principle-Following Reward Models ☆168 · Updated last month
- ☆50 · Updated 11 months ago
- Code for the paper "Preserving Diversity in Supervised Fine-tuning of Large Language Models" ☆40 · Updated 5 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆73 · Updated 2 weeks ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated last year
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆69 · Updated last year
- Self-Supervised Alignment with Mutual Information ☆21 · Updated last year
- Code for the ACL 2024 paper "Adversarial Preference Optimization" (APO) ☆57 · Updated last year
- B-STAR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners ☆85 · Updated 5 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆121 · Updated 6 months ago