CarperAI / Algorithm-Distillation-RLHF
☆35 · Updated 2 years ago
Alternatives and similar repositories for Algorithm-Distillation-RLHF
Users interested in Algorithm-Distillation-RLHF are comparing it to the libraries listed below.
- Official code for the paper "Context-Aware Language Modeling for Goal-Oriented Dialogue Systems" ☆34 · Updated 2 years ago
- Official code for "Can Wikipedia Help Offline Reinforcement Learning?" by Machel Reid, Yutaro Yamada, and Shixiang Shane Gu ☆105 · Updated 3 years ago
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆209 · Updated 2 years ago
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients ☆26 · Updated last year
- A reinforcement learning environment for the IGLU 2022 competition at NeurIPS ☆34 · Updated 2 years ago
- Interpreting how transformers simulate agents performing RL tasks ☆87 · Updated last year
- ☆55 · Updated 10 months ago
- CleanRL's implementation of DeepMind's Podracer Sebulba Architecture for Distributed DRL ☆114 · Updated last year
- ☆101 · Updated last year
- ☆19 · Updated 2 years ago
- Scalable Opponent Shaping Experiments in JAX ☆24 · Updated last year
- ☆13 · Updated last year
- RL algorithm: Advantage-induced policy alignment ☆65 · Updated 2 years ago
- Repo to reproduce the results of the First-Explore paper ☆38 · Updated 8 months ago
- Intrinsic Motivation from Artificial Intelligence Feedback ☆131 · Updated last year
- Official implementation of "Direct Preference-based Policy Optimization without Reward Modeling" (NeurIPS 2023) ☆42 · Updated last year
- Code and configs for "Asynchronous RLHF: Faster and More Efficient RL for Language Models" ☆62 · Updated 4 months ago