CarperAI / Algorithm-Distillation-RLHF
☆35 · Updated 2 years ago
Alternatives and similar repositories for Algorithm-Distillation-RLHF
Users interested in Algorithm-Distillation-RLHF are comparing it to the libraries listed below.
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆210 · Updated 2 years ago
- Official code for the paper "Context-Aware Language Modeling for Goal-Oriented Dialogue Systems" ☆34 · Updated 2 years ago
- CleanRL's implementation of DeepMind's Podracer Sebulba Architecture for Distributed DRL ☆117 · Updated last year
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients ☆26 · Updated last year
- Official code for "Can Wikipedia Help Offline Reinforcement Learning?" by Machel Reid, Yutaro Yamada and Shixiang Shane Gu ☆106 · Updated 3 years ago
- ☆105 · Updated last year
- ☆57 · Updated last year
- ☆45 · Updated last year
- ☆19 · Updated 2 years ago
- ☆15 · Updated last year
- Interpreting how transformers simulate agents performing RL tasks ☆88 · Updated 2 years ago
- Learn online intrinsic rewards from LLM feedback ☆45 · Updated 11 months ago
- Official implementation of "Direct Preference-based Policy Optimization without Reward Modeling" (NeurIPS 2023) ☆42 · Updated last year
- Platform to run interactive Reinforcement Learning agents in a Minecraft Server ☆54 · Updated last year
- [ICML 2024] Official code release accompanying the paper "diff History for Neural Language Agents" (Piterbarg, Pinto, Fergus) ☆20 · Updated last year
- A reinforcement learning environment for the IGLU 2022 competition at NeurIPS ☆34 · Updated 2 years ago
- Code and configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆67 · Updated 6 months ago
- ☆31 · Updated 3 years ago
- An implementation of PPO in PyTorch ☆98 · Updated 2 weeks ago
- Scaling scaling laws with board games ☆53 · Updated 2 years ago
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… ☆32 · Updated last year
- Supplementary data for Evolving Reinforcement Learning Algorithms ☆47 · Updated 4 years ago
- Intrinsic Motivation from Artificial Intelligence Feedback ☆132 · Updated 2 years ago
- ☆37 · Updated 2 years ago
- RL algorithm: Advantage induced policy alignment ☆65 · Updated 2 years ago
- Implementation of Direct Preference Optimization ☆17 · Updated 2 years ago
- Plug-and-play hydra sweepers for the EA-based multifidelity method DEHB and several population-based training variations, all proven to e… ☆84 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
- Code accompanying the paper Pretraining Language Models with Human Preferences ☆180 · Updated last year
- ☆221 · Updated 2 years ago