alecwangcq / f-divergence-dpo
Direct preference optimization with f-divergences.
☆14 · Updated 9 months ago
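For context: f-divergence DPO generalizes direct preference optimization by replacing the implicit reverse-KL regularizer with an arbitrary f-divergence, so the log-ratio term in the DPO loss becomes f'(π_θ/π_ref). Below is a minimal sketch of that loss shape for a few common choices of f, based on the published formulation; it is an illustration, not this repository's actual code, and the function names, signatures, and divergence menu are assumptions.

```python
import torch
import torch.nn.functional as F

def f_prime(log_ratio: torch.Tensor, divergence: str = "reverse_kl",
            alpha: float = 0.5) -> torch.Tensor:
    """f'(u) evaluated at u = exp(log_ratio), for a few common f-divergences."""
    u = log_ratio.exp()
    if divergence == "reverse_kl":   # f(u) = u log u  ->  f'(u) = log u + 1
        return log_ratio + 1.0       # the +1 cancels in the pairwise margin below
    if divergence == "forward_kl":   # f(u) = -log u   ->  f'(u) = -1/u
        return -1.0 / u
    if divergence == "alpha":        # alpha-divergence -> f'(u) = (1 - u^-alpha) / alpha
        return (1.0 - u.pow(-alpha)) / alpha
    if divergence == "jsd":          # Jensen-Shannon  ->  f'(u) = 0.5 log(2u / (1 + u))
        return 0.5 * torch.log(2.0 * u / (1.0 + u))
    raise ValueError(f"unknown divergence: {divergence}")

def f_dpo_loss(policy_chosen_logps, policy_rejected_logps,
               ref_chosen_logps, ref_rejected_logps,
               beta: float = 0.1, divergence: str = "reverse_kl") -> torch.Tensor:
    """Pairwise preference loss with DPO's log-ratio replaced by f'(policy/ref).

    Inputs are per-sequence summed log-probs under the policy and a frozen
    reference model (illustrative names, not the repo's API).
    """
    chosen = f_prime(policy_chosen_logps - ref_chosen_logps, divergence)
    rejected = f_prime(policy_rejected_logps - ref_rejected_logps, divergence)
    margin = beta * (chosen - rejected)   # implicit reward margin under f-divergence
    return -F.logsigmoid(margin).mean()
```

With the reverse-KL choice, f'(u) = log u + 1 and the constant cancels in the pairwise margin, so this reduces to the standard DPO loss; constant offsets in the other f' variants cancel the same way.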
Alternatives and similar repositories for f-divergence-dpo
Users interested in f-divergence-dpo are comparing it to the repositories listed below.
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆185 · Updated 3 months ago
- [ACL'24] Beyond One-Preference-Fits-All Alignment: Multi-Objective Direct Preference Optimization ☆85 · Updated 11 months ago
- Code for the ICML 2024 paper "Rewards-in-Context: Multi-objective Alignment of Foundation Models with Dynamic Preference Adjustment" ☆72 · Updated last month
- Source code for Self-Evaluation Guided MCTS for online DPO ☆319 · Updated last year
- This is my attempt to create a self-correcting LLM based on the paper "Training Language Models to Self-Correct via Reinforcement Learning" by g… ☆35 · Updated last month
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 10 months ago
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆146 · Updated 5 months ago
- Official code for the paper "Understanding the Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation" ☆20 · Updated last year
- A repo for RLHF training and BoN over LLMs, with support for reward model ensembles ☆45 · Updated 6 months ago
- Repo for the paper "Free Process Rewards without Process Labels" ☆161 · Updated 4 months ago
- An index of algorithms for reinforcement learning from human feedback (RLHF) ☆92 · Updated last year
- ☆203 · Updated 4 months ago
- ☆43 · Updated 4 months ago
- GenRM-CoT: Data release for verification rationales ☆63 · Updated 9 months ago
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆38 · Updated 5 months ago
- Principled Data Selection for Alignment: The Hidden Risks of Difficult Examples ☆38 · Updated 3 weeks ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆189 · Updated last year
- Official code for the paper "Stop Summation: Min-Form Credit Assignment Is All Process Reward Model Needs for Reasoning" ☆132 · Updated 3 weeks ago
- ☆68 · Updated last year
- This is an official implementation of the paper "Building Math Agents with Multi-Turn Iterative Preference Learning" with multi-turn DP… ☆28 · Updated 8 months ago
- The Entropy Mechanism of Reinforcement Learning for Large Language Model Reasoning ☆282 · Updated 3 weeks ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆56 · Updated last year
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆167 · Updated 2 months ago
- ☆43 · Updated last year
- [NeurIPS 2023] Large Language Models Are Semi-Parametric Reinforcement Learning Agents ☆34 · Updated last year
- Official implementation of Rewarded Soups ☆58 · Updated last year
- ☆65 · Updated 3 months ago
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ☆278 · Updated last year
- Research Code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning" ☆100 · Updated 3 weeks ago
- A Framework for LLM-based Multi-Agent Reinforced Training and Inference ☆185 · Updated this week