Cornell-RL / drpo
Dataset Reset Policy Optimization
☆28 · Updated 6 months ago
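For orientation, the core idea behind Dataset Reset Policy Optimization (DR-PO) is that some online rollouts restart ("reset") from states drawn from the offline preference dataset, rather than always generating from the bare prompt. The toy Python sketch below illustrates only that reset step under simplifying assumptions; every name in it (`toy_policy`, `collect_rollout`, `reset_prob`) is a hypothetical stub for illustration and not this repository's API.

```python
# Minimal sketch of the dataset-reset rollout idea: with some probability,
# a rollout starts from a random prefix of a dataset completion and the
# policy finishes it; otherwise it rolls out from the prompt alone.
# All functions here are illustrative stubs, not DRPO's actual code.
import random

def toy_policy(prefix_tokens):
    """Stub policy: appends random tokens until a stopping condition."""
    out = list(prefix_tokens)
    while len(out) < 12 and random.random() > 0.2:
        out.append(random.choice(["a", "b", "c"]))
    out.append("<eos>")
    return out

def collect_rollout(prompt_tokens, dataset_completion, reset_prob=0.5):
    """With probability reset_prob, reset generation to a random prefix of
    a dataset completion; otherwise start from the prompt."""
    if dataset_completion and random.random() < reset_prob:
        cut = random.randint(1, len(dataset_completion))
        start_state = list(prompt_tokens) + dataset_completion[:cut]
    else:
        start_state = list(prompt_tokens)
    return toy_policy(start_state)

if __name__ == "__main__":
    prompt = ["<prompt>"]
    demo = ["x", "y", "z", "w"]  # stand-in for a preferred response from the dataset
    for _ in range(3):
        print(collect_rollout(prompt, demo))
```

Rollouts collected this way would then feed a standard policy-optimization update; see the paper and this repository for the actual training loop.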
Related projects
Alternatives and complementary repositories for drpo
- ☆24 · Updated 6 months ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆38 · Updated 9 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆77 · Updated 2 weeks ago
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆102 · Updated 7 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆52 · Updated 2 months ago
- ☆112 · Updated 3 months ago
- [ICML 2024] Official repository for "EXO: Towards Efficient Exact Optimization of Language Model Alignment" ☆46 · Updated 4 months ago
- Official implementation of "Bootstrapping Language Models via DPO Implicit Rewards" ☆39 · Updated 3 months ago
- Official implementation of Rewarded Soups ☆49 · Updated last year
- Directional Preference Alignment ☆49 · Updated last month
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆49 · Updated 5 months ago
- Uni-RLHF platform for "Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback" (ICLR 2024… ☆30 · Updated 7 months ago
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients ☆24 · Updated last month
- Domain-specific preference (DSP) data and customized RM fine-tuning ☆24 · Updated 8 months ago
- ☆73 · Updated 4 months ago
- Reference implementation of Token-level Direct Preference Optimization (TDPO) ☆104 · Updated 4 months ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆72 · Updated 9 months ago
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆24 · Updated 6 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆95 · Updated 2 months ago
- Code accompanying the paper "Noise Contrastive Alignment of Language Models with Explicit Rewards" (NeurIPS 2024) ☆32 · Updated this week
- Code for the paper "Dense Reward for Free in Reinforcement Learning from Human Feedback" (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… ☆20 · Updated 2 months ago
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆32 · Updated 9 months ago
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023) ☆14 · Updated 11 months ago
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆23 · Updated 10 months ago
- Repository for NPHardEval, a quantified-dynamic benchmark of LLMs ☆48 · Updated 7 months ago
- Official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆15 · Updated this week
- Self-Supervised Alignment with Mutual Information ☆14 · Updated 5 months ago
- Code for the arXiv preprint "The Unreasonable Effectiveness of Easy Training Data" ☆44 · Updated 9 months ago
- [ACL 2024] Self-Training with Direct Preference Optimization Improves Chain-of-Thought Reasoning ☆30 · Updated 3 months ago
- ☆25 · Updated last week