pokaxpoka / B_Pref
☆53 · Nov 10, 2022 · Updated 3 years ago
Alternatives and similar repositories for B_Pref
Users interested in B_Pref are comparing it to the repositories listed below.
- Official codebase for "B-Pref: Benchmarking Preference-Based Reinforcement Learning"; contains scripts to reproduce experiments. ☆133 · Nov 3, 2021 · Updated 4 years ago
- ☆37 · Apr 27, 2023 · Updated 2 years ago
- Code for the paper "Reward Uncertainty for Exploration in Preference-based Reinforcement Learning". ☆15 · May 26, 2022 · Updated 3 years ago
- Preference Transformer: Modeling Human Preferences using Transformers for RL (ICLR 2023). ☆167 · Oct 15, 2023 · Updated 2 years ago
- Evaluating different engineering tricks that make RL work. ☆15 · Jun 3, 2021 · Updated 4 years ago
- Guide Your Agent with Adaptive Multimodal Rewards (NeurIPS 2023). ☆33 · Sep 25, 2023 · Updated 2 years ago
- Official codebase for "Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings". ☆21 · Mar 5, 2021 · Updated 4 years ago
- Official code for "Pretraining Representations for Data-Efficient Reinforcement Learning" (NeurIPS 2021). ☆55 · Jul 27, 2021 · Updated 4 years ago
- Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning (NeurIPS 2020). ☆39 · Oct 27, 2020 · Updated 5 years ago
- ☆10 · Oct 11, 2022 · Updated 3 years ago
- Unofficial PyTorch implementation (replicating paper results) of Implicit Q-Learning (In-sample Q-Learning) for offline RL. ☆24 · Nov 4, 2024 · Updated last year
- ☆10 · Oct 3, 2023 · Updated 2 years ago
- Code for Evolving Plastic ANNs. ☆14 · Dec 18, 2022 · Updated 3 years ago
- Implementation of the ICML 2023 paper "Future-conditioned Unsupervised Pretraining for Decision Transformer". ☆29 · Jul 25, 2023 · Updated 2 years ago
- ☆12 · Apr 25, 2022 · Updated 3 years ago
- ☆13 · Jun 3, 2022 · Updated 3 years ago
- ☆13 · Feb 5, 2024 · Updated 2 years ago
- Dataset collection and training code for "Ask Your Humans: Using Human Instructions to Improve Generalization in Reinforcement Learning". ☆11 · Apr 8, 2025 · Updated 10 months ago
- ☆17 · Oct 12, 2023 · Updated 2 years ago
- The official repository of "Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration" (AAMAS 2022). ☆27 · Feb 3, 2022 · Updated 4 years ago
- Reproduction of OpenAI and DeepMind's "Deep Reinforcement Learning from Human Preferences". ☆31 · Jul 27, 2021 · Updated 4 years ago
- Multi-agent active perception with prediction rewards. ☆11 · Nov 13, 2020 · Updated 5 years ago
- Implementation of the MEPOL algorithm, a policy gradient method for task-agnostic exploration. ☆15 · Jul 6, 2023 · Updated 2 years ago
- PyTorch implementation of the UAI 2023 paper "A Trajectory is Worth Three Sentences: Multimodal Transformer for Offline Reinf…. ☆11 · Oct 9, 2023 · Updated 2 years ago
- Reproduction of OpenAI and DeepMind's "Deep Reinforcement Learning from Human Preferences". ☆333 · Nov 29, 2021 · Updated 4 years ago
- A Library for Active Preference-based Reward Learning Algorithms. ☆53 · Dec 16, 2023 · Updated 2 years ago
- Advantage-Weighted Actor-Critic for offline RL. ☆52 · Aug 27, 2022 · Updated 3 years ago
- Pre-Trained Language Models for Interactive Decision-Making (NeurIPS 2022). ☆130 · Jun 8, 2022 · Updated 3 years ago
- ☆15 · Sep 7, 2022 · Updated 3 years ago
- Jaehyung Kim et al.'s ACL 2023 paper "infoVerse: A Universal Framework for Dataset Characterization with Multidimensional Meta-informat…. ☆16 · Jun 28, 2023 · Updated 2 years ago
- ☆60 · Apr 16, 2023 · Updated 2 years ago
- ☆58 · Jun 30, 2022 · Updated 3 years ago
- ☆60 · Feb 3, 2023 · Updated 3 years ago
- Code for "World Model as a Graph: Learning Latent Landmarks for Planning" (ICML 2021, long presentation). ☆68 · Jul 17, 2021 · Updated 4 years ago
- Code for the paper "Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning". ☆14 · May 23, 2021 · Updated 4 years ago
- Companion code to the CoRL 2019 paper: E. Bıyık, M. Palan, N. C. Landolfi, D. P. Losey, D. Sadigh, "Asking Easy Questions: A User-Friendly Approach to…. ☆17 · Oct 13, 2020 · Updated 5 years ago
- ☆15 · Aug 9, 2021 · Updated 4 years ago
- ☆18 · Jun 8, 2023 · Updated 2 years ago
- Code for "Continual Learning of Control Primitives". ☆18 · Nov 11, 2020 · Updated 5 years ago
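Several of the repositories above (B-Pref itself, the reward-uncertainty work, and the two "Deep Reinforcement Learning from Human Preferences" reproductions) learn a reward model from pairwise human preferences over trajectory segments. As orientation for readers comparing these codebases, here is a minimal NumPy sketch of the Bradley-Terry preference loss such methods typically optimize; function names are illustrative and not taken from any of the listed repositories:

```python
import numpy as np

def preference_probability(r_hat_0, r_hat_1):
    """Bradley-Terry probability that segment 1 is preferred over
    segment 0, given per-step reward-model estimates for each segment."""
    s0, s1 = np.sum(r_hat_0), np.sum(r_hat_1)
    # Sigmoid of the return difference: P(sigma_1 > sigma_0)
    return 1.0 / (1.0 + np.exp(-(s1 - s0)))

def preference_loss(r_hat_0, r_hat_1, label):
    """Cross-entropy loss for one human preference label:
    label = 1 means segment 1 was preferred, label = 0 means segment 0."""
    p1 = preference_probability(r_hat_0, r_hat_1)
    eps = 1e-8  # guard against log(0)
    return -(label * np.log(p1 + eps) + (1 - label) * np.log(1 - p1 + eps))
```

In practice the per-step rewards come from a neural network and this loss is minimized over a buffer of queried segment pairs; the sketch only shows the objective itself.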