facebookresearch / rlfh-gen-div
This is the code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity".
☆40 · Updated last year
Alternatives and similar repositories for rlfh-gen-div:
Users interested in rlfh-gen-div are comparing it to the libraries listed below.
- Rewarded soups official implementation ☆54 · Updated last year
- Official implementation of Bootstrapping Language Models via DPO Implicit Rewards ☆42 · Updated 7 months ago
- This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆20 · Updated 2 months ago
- Directional Preference Alignment ☆56 · Updated 5 months ago
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆26 · Updated last year
- ☆35 · Updated last year
- Dataset Reset Policy Optimization ☆30 · Updated 10 months ago
- ☆79 · Updated 8 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆118 · Updated 5 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆126 · Updated 3 months ago
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆66 · Updated last year
- This is the official repo for "Towards Uncertainty-Aware Language Agent" ☆24 · Updated 6 months ago
- Online Adaptation of Language Models with a Memory of Amortized Contexts (NeurIPS 2024) ☆61 · Updated 7 months ago
- Self-Supervised Alignment with Mutual Information ☆15 · Updated 9 months ago
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆24 · Updated 11 months ago
- GenRM-CoT: Data release for verification rationales ☆49 · Updated 4 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆70 · Updated 6 months ago
- ☆95 · Updated 8 months ago
- Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging ☆98 · Updated last year
- ☆81 · Updated 7 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆15 · Updated last month
- [ICML 2024] Official Repository for EXO: Towards Efficient Exact Optimization of Language Model Alignment ☆50 · Updated 8 months ago
- ☆80 · Updated 11 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆57 · Updated 2 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆46 · Updated 3 months ago
- [EMNLP Findings 2024 & ACL 2024 NLRSE Oral] Enhancing Mathematical Reasonin… ☆48 · Updated 9 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆51 · Updated 9 months ago
- Code for the paper "Dense Reward for Free in Reinforcement Learning from Human Feedback" (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… ☆22 · Updated 6 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆128 · Updated 2 weeks ago
- RL algorithm: Advantage-Induced Policy Alignment ☆64 · Updated last year