alecwangcq / f-divergence-dpo
Direct preference optimization with f-divergences.
☆14 · Updated 9 months ago
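For context, the method this repository studies generalizes DPO's implicit reward from the reverse-KL case to an arbitrary f-divergence: with density ratio u = π_θ(y|x)/π_ref(y|x), the pairwise loss becomes -log σ(β[f′(u_w) − f′(u_l)]), and f(u) = u log u (reverse KL) recovers vanilla DPO. Below is a minimal sketch under that formulation; the function name, signature, and defaults are illustrative assumptions, not the repository's actual API.

```python
# Sketch of an f-DPO-style loss (illustrative names, not the repo's API).
# With u = pi_theta(y|x) / pi_ref(y|x), the pairwise loss is
#   -log sigmoid( beta * ( f'(u_chosen) - f'(u_rejected) ) ).
# For reverse KL, f(u) = u log u gives f'(u) = log u + 1; the constant
# cancels in the difference, recovering the standard DPO loss.
import torch
import torch.nn.functional as F

def f_dpo_loss(policy_chosen_logps, policy_rejected_logps,
               ref_chosen_logps, ref_rejected_logps,
               beta=0.1, f_prime_of_log_u=None):
    """*_logps are summed per-response log-probabilities.

    f_prime_of_log_u maps log u to f'(u); it takes log-ratios for
    numerical stability and defaults to reverse KL (vanilla DPO).
    """
    if f_prime_of_log_u is None:
        f_prime_of_log_u = lambda log_u: log_u + 1.0  # f'(u) = log u + 1

    # log u = log pi_theta - log pi_ref for each response
    log_u_chosen = policy_chosen_logps - ref_chosen_logps
    log_u_rejected = policy_rejected_logps - ref_rejected_logps

    margin = beta * (f_prime_of_log_u(log_u_chosen)
                     - f_prime_of_log_u(log_u_rejected))
    return -F.logsigmoid(margin).mean()
```

Swapping the divergence is then a one-line change, e.g. forward KL (f(u) = -log u, so f'(u) = -1/u) via `f_prime_of_log_u = lambda log_u: -torch.exp(-log_u)`.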
Alternatives and similar repositories for f-divergence-dpo
Users interested in f-divergence-dpo are comparing it to the libraries listed below.
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆85 · Updated last year
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆189 · Updated 4 months ago
- This is my attempt to create a Self-Correcting LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆35 · Updated last month
- This is the repository that contains the source code for the Self-Evaluation Guided MCTS for online DPO. ☆321 · Updated last year
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆72 · Updated 2 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 11 months ago
- A repo for RLHF training and BoN over LLMs, with support for reward model ensembles. ☆45 · Updated 7 months ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆191 · Updated last year
- Official code for the paper "Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation" ☆20 · Updated last year
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆43 · Updated last month
- Benchmarking LLMs' Gaming Ability in Multi-Agent Environments ☆87 · Updated 4 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆146 · Updated 6 months ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)". ☆56 · Updated last year
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆93 · Updated last year
- GenRM-CoT: Data release for verification rationales ☆65 · Updated 10 months ago
- [NeurIPS 2023] Large Language Models Are Semi-Parametric Reinforcement Learning Agents ☆34 · Updated last year
- Rewarded soups official implementation ☆60 · Updated last year
- Official code for the paper, "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆134 · Updated last month
- ☆43 · Updated 5 months ago
- ☆204 · Updated 5 months ago
- ☆68 · Updated last year
- Repo of the paper "Free Process Rewards without Process Labels" ☆162 · Updated 5 months ago
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆218 · Updated 2 weeks ago
- Implementation of the ICML 2024 paper "Training Large Language Models for Reasoning through Reverse Curriculum Reinforcement Learning" pr… ☆108 · Updated last year
- Implementation for the research paper "Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision". ☆56 · Updated 9 months ago
- This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆29 · Updated 8 months ago
- ☆22 · Updated 8 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning. ☆310 · Updated last month
- Source code for "Preference-grounded Token-level Guidance for Language Model Fine-tuning" (NeurIPS 2023). ☆16 · Updated 7 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆61 · Updated 10 months ago