CarperAI / Algorithm-Distillation-RLHF
☆34 · Updated 2 years ago
Alternatives and similar repositories for Algorithm-Distillation-RLHF
Users interested in Algorithm-Distillation-RLHF are comparing it to the libraries listed below.
- Official code for the paper "Context-Aware Language Modeling for Goal-Oriented Dialogue Systems" · ☆34 · Updated 2 years ago
- Learn online intrinsic rewards from LLM feedback · ☆41 · Updated 6 months ago
- ☆95 · Updated 11 months ago
- CleanRL's implementation of DeepMind's Podracer Sebulba Architecture for Distributed DRL · ☆113 · Updated 10 months ago
- Interpreting how transformers simulate agents performing RL tasks · ☆84 · Updated last year
- Advantage Leftover Lunch Reinforcement Learning (A-LoL RL): Improving Language Models with Advantage-based Offline Policy Gradients · ☆26 · Updated 9 months ago
- An OpenAI Gym environment to evaluate the ability of LLMs (e.g. GPT-4, Claude) in long-horizon reasoning and task planning in dynamic mult… · ☆69 · Updated 2 years ago
- Code and configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models · ☆57 · Updated 2 months ago
- [ICML 2024] Official code release accompanying the paper "diff History for Neural Language Agents" (Piterbarg, Pinto, Fergus) · ☆20 · Updated 10 months ago
- Official PyTorch implementation of "Discovering Hierarchical Achievements in Reinforcement Learning via Contrastive Learning" (NeurIPS 20… · ☆32 · Updated 4 months ago
- Scalable Opponent Shaping Experiments in JAX · ☆24 · Updated last year
- Official implementation of "Direct Preference-based Policy Optimization without Reward Modeling" (NeurIPS 2023) · ☆42 · Updated 11 months ago
- ☆17 · Updated last year
- Learning from preferences is a common paradigm for fine-tuning language models. Yet, many algorithmic design decisions come into play. Ou… · ☆29 · Updated last year
- Repo to reproduce the First-Explore paper results · ☆37 · Updated 6 months ago
- Minimal but scalable implementation of large language models in JAX · ☆35 · Updated 7 months ago
- [ICLR 2025] "Training LMs on Synthetic Edit Sequences Improves Code Synthesis" (Piterbarg, Pinto, Fergus) · ☆19 · Updated 4 months ago
- Code for Discovered Policy Optimisation (NeurIPS 2022) · ☆11 · Updated 2 years ago
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" · ☆208 · Updated last year
- Learning to Modulate pre-trained Models in RL (Decision Transformer, LoRA, Fine-tuning) · ☆58 · Updated 8 months ago
- [AutoML'22] Bayesian Generational Population-based Training (BG-PBT) · ☆28 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- ☆54 · Updated 7 months ago
- JAX implementation of Proximal Policy Optimization (PPO) specifically tuned for Procgen, with benchmarked results and saved model weights… · ☆56 · Updated 2 years ago
- ☆12 · Updated 11 months ago
- Official code for "Can Wikipedia Help Offline Reinforcement Learning?" by Machel Reid, Yutaro Yamada and Shixiang Shane Gu · ☆105 · Updated 2 years ago
- Drop-in environment replacements that make your RL algorithm train faster. · ☆21 · Updated last year
- Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models · ☆58 · Updated 4 months ago
- ☆56 · Updated 2 years ago
- Implements the Messenger environment and EMMA model. · ☆23 · Updated 2 years ago