morning9393 / ETPO
☆14 · Updated last year
Alternatives and similar repositories for ETPO
Users interested in ETPO are comparing it to the repositories listed below.
- [NeurIPS 2023] Large Language Models Are Semi-Parametric Reinforcement Learning Agents ☆38 · Updated last year
- ☆109 · Updated last year
- Research Code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆201 · Updated 9 months ago
- Implementation of ICLR 2025 paper "Q-Adapter: Customizing Pre-trained LLMs to New Preferences with Forgetting Mitigation" ☆18 · Updated last year
- Uni-RLHF platform for "Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback" (ICLR 2024… ☆42 · Updated last year
- ☆32 · Updated last year
- ☆40 · Updated 2 years ago
- SmartPlay is a benchmark for Large Language Models (LLMs). Uses a variety of games to test various important LLM capabilities as agents. … ☆144 · Updated last year
- Rewarded soups official implementation ☆62 · Updated 2 years ago
- ☆65 · Updated 10 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆184 · Updated 7 months ago
- Implementation of TWOSOME ☆82 · Updated last year
- ☆89 · Updated 2 years ago
- The official code release for Q#: Provably Optimal Distributional RL for LLM Post-Training ☆17 · Updated 10 months ago
- Code release for "Generating Code World Models with Large Language Models Guided by Monte Carlo Tree Search" published at NeurIPS '24 ☆18 · Updated 11 months ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated 2 years ago
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆28 · Updated 2 years ago
- Verlog: A Multi-turn RL framework for LLM agents ☆67 · Updated this week
- This is my attempt to create Self-Correcting-LLM based on the paper Training Language Models to Self-Correct via Reinforcement Learning by g… ☆38 · Updated 6 months ago
- We perform functional grounding of LLMs' knowledge in BabyAI-Text ☆275 · Updated 2 months ago
- Preference Transformer: Modeling Human Preferences using Transformers for RL (ICLR 2023) ☆166 · Updated 2 years ago
- Code for the paper "Dense Reward for Free in Reinforcement Learning from Human Feedback" (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… ☆38 · Updated last year
- Official implementation of "Direct Preference-based Policy Optimization without Reward Modeling" (NeurIPS 2023) ☆42 · Updated last year
- ☆17 · Updated 3 weeks ago
- Code for Contrastive Preference Learning (CPL) ☆178 · Updated last year
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆199 · Updated 2 years ago
- Code and data for the paper "Competing Large Language Models in Multi-Agent Gaming Environments" ☆93 · Updated last month
- Direct preference optimization with f-divergences ☆15 · Updated last year
- Code for the NeurIPS 2024 paper "Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs" ☆46 · Updated 11 months ago
- ☆118 · Updated 9 months ago