jys5609 / MC-LAVE-RL
ICLR 2021: "Monte-Carlo Planning and Learning with Language Action Value Estimates"
☆33 · Updated 2 years ago
Alternatives and similar repositories for MC-LAVE-RL
Users interested in MC-LAVE-RL are comparing it to the repositories listed below.
- Official implementation of "Direct Preference-based Policy Optimization without Reward Modeling" (NeurIPS 2023) ☆42 · Updated last year
- Implementation of ICLR 2025 paper "Q-Adapter: Customizing Pre-trained LLMs to New Preferences with Forgetting Mitigation" ☆18 · Updated last year
- ☆110 · Updated last year
- Learning to Modulate pre-trained Models in RL (Decision Transformer, LoRA, Fine-tuning) ☆61 · Updated last year
- ☆41 · Updated 2 years ago
- Generalized Decision Transformer for Offline Hindsight Information Matching (ICLR 2022) ☆70 · Updated 3 years ago
- Code for the paper "Policy Optimization in RLHF: The Impact of Out-of-preference Data" ☆28 · Updated 2 years ago
- Code for the paper "Dense Reward for Free in Reinforcement Learning from Human Feedback" (ICML 2024) by Alex J. Chan, Hao Sun, Samuel Holt… ☆38 · Updated last year
- ☆14 · Updated last year
- The official code release for "Q#: Provably Optimal Distributional RL for LLM Post-Training" ☆18 · Updated 10 months ago
- Implements the Messenger environment and EMMA model. ☆25 · Updated 2 years ago
- Rewarded Soups official implementation ☆62 · Updated 2 years ago
- Code for Contrastive Preference Learning (CPL) ☆178 · Updated last year
- An OpenAI Gym environment to evaluate the ability of LLMs (e.g. GPT-4, Claude) in long-horizon reasoning and task planning in dynamic mult… ☆73 · Updated 2 years ago
- A repo for RLHF training and BoN over LLMs, with support for reward model ensembles. ☆46 · Updated last year
- Official PyTorch implementation of "Discovering Hierarchical Achievements in Reinforcement Learning via Contrastive Learning" (NeurIPS 20… ☆35 · Updated 11 months ago
- Implementation of ICML 2023 paper "Future-conditioned Unsupervised Pretraining for Decision Transformer" ☆29 · Updated 2 years ago
- Preference Transformer: Modeling Human Preferences using Transformers for RL (ICLR 2023) ☆167 · Updated 2 years ago
- [ICML 2024] Official code release accompanying the paper "diff History for Neural Language Agents" (Piterbarg, Pinto, Fergus) ☆20 · Updated last year
- Exploring techniques to generate diverse conventions in multi-agent settings ☆15 · Updated 2 years ago
- Official code for "Can Wikipedia Help Offline Reinforcement Learning?" by Machel Reid, Yutaro Yamada and Shixiang Shane Gu ☆106 · Updated 3 years ago
- Codebase for "Uni[MASK]: Unified Inference in Sequential Decision Problems" ☆57 · Updated last year
- Functional grounding of LLMs' knowledge in BabyAI-Text ☆275 · Updated 3 months ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆185 · Updated 8 months ago
- Code for most of the experiments in the paper "Understanding the Effects of RLHF on LLM Generalisation and Diversity" ☆47 · Updated 2 years ago
- Tracking literature and additional online resources on transformers for sequential decision making, including RL and beyond. ☆49 · Updated 3 years ago
- Code and data for the paper "Understanding Hidden Context in Preference Learning: Consequences for RLHF" ☆33 · Updated 2 years ago
- Lamorel is a Python library designed for RL practitioners eager to use Large Language Models (LLMs). ☆244 · Updated last month
- Code for "Efficient Offline Policy Optimization with a Learned Model" (ICLR 2023) ☆30 · Updated 2 years ago
- RL algorithm: Advantage-Induced Policy Alignment ☆66 · Updated 2 years ago