Preference Transformer: Modeling Human Preferences using Transformers for RL (ICLR 2023 Accepted)
☆167 · Oct 15, 2023 · Updated 2 years ago
Alternatives and similar repositories for PreferenceTransformer
Users interested in PreferenceTransformer are comparing it to the libraries listed below.
- Official codebase for "B-Pref: Benchmarking Preference-Based Reinforcement Learning"; contains scripts to reproduce experiments. ☆133 · Nov 3, 2021 · Updated 4 years ago
- PyTorch code accompanying the paper "Imitating Graph-Based Planning with Goal-Conditioned Policies" (ICLR 2023). ☆20 · Mar 4, 2023 · Updated 2 years ago
- Listwise Reward Estimation for Offline Preference-based Reinforcement Learning (ICML 2024). ☆17 · Jun 18, 2024 · Updated last year
- ☆43 · May 25, 2023 · Updated 2 years ago
- Jaehyung Kim et al.'s ACL 2023 paper "infoVerse: A Universal Framework for Dataset Characterization with Multidimensional Meta-informat…". ☆16 · Jun 28, 2023 · Updated 2 years ago
- ☆37 · Apr 27, 2023 · Updated 2 years ago
- Guide Your Agent with Adaptive Multimodal Rewards (NeurIPS 2023 Accepted). ☆33 · Sep 25, 2023 · Updated 2 years ago
- Meta-Learning with Self-Improving Momentum Target (NeurIPS 2022). ☆23 · Oct 12, 2022 · Updated 3 years ago
- Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning (NeurIPS 2020). ☆39 · Oct 27, 2020 · Updated 5 years ago
- Official code for ACT: Empowering Decision Transformer with Dynamic Programming via Advantage Conditioning (AAAI'24). ☆17 · Feb 10, 2024 · Updated 2 years ago
- Learning Large-scale Neural Fields via Context Pruned Meta-Learning (NeurIPS 2023). ☆28 · Sep 24, 2023 · Updated 2 years ago
- ☆53 · Nov 10, 2022 · Updated 3 years ago
- PyTorch implementations of Offline Preference-Based RL (PbRL) algorithms. ☆21 · Mar 24, 2025 · Updated 11 months ago
- ☆18 · Jun 8, 2023 · Updated 2 years ago
- CaDM: Context-aware Dynamics Model for Generalization in Model-based Reinforcement Learning. ☆63 · May 20, 2020 · Updated 5 years ago
- ☆10 · Mar 11, 2024 · Updated last year
- ☆317 · Jan 23, 2022 · Updated 4 years ago
- Implementation of ICML 2023 paper: Future-conditioned Unsupervised Pretraining for Decision Transformer. ☆29 · Jul 25, 2023 · Updated 2 years ago
- Reinforcement Learning via Supervised Learning. ☆72 · May 16, 2022 · Updated 3 years ago
- ☆60 · Apr 16, 2023 · Updated 2 years ago
- A collection of reference environments for offline reinforcement learning. ☆1,656 · Nov 18, 2024 · Updated last year
- ☆60 · Feb 3, 2023 · Updated 3 years ago
- Subtask-Aware Visual Reward Learning from Segmented Demonstrations (ICLR 2025 Accepted). ☆18 · Apr 11, 2025 · Updated 10 months ago
- Online Decision Transformer. ☆274 · Jan 22, 2024 · Updated 2 years ago
- RE3: State Entropy Maximization with Random Encoders for Efficient Exploration. ☆69 · Jul 29, 2021 · Updated 4 years ago
- ☆364 · May 1, 2023 · Updated 2 years ago
- Code for the paper "What Makes Better Augmentation Strategies? Augment Difficult but Not too Different" (ICLR 2022). ☆12 · Aug 28, 2023 · Updated 2 years ago
- Official codebase for "Benchmarks and Algorithms for Offline Preference-Based Reward Learning" (TMLR 2023). ☆20 · Dec 30, 2022 · Updated 3 years ago
- Reproduction of OpenAI and DeepMind's "Deep Reinforcement Learning from Human Preferences". ☆333 · Nov 29, 2021 · Updated 4 years ago
- HIQL: Offline Goal-Conditioned RL with Latent States as Actions (NeurIPS 2023). ☆93 · Dec 1, 2024 · Updated last year
- Extreme Q-Learning: Max Entropy RL without Entropy. ☆87 · Feb 14, 2023 · Updated 3 years ago
- Official codebase for Decision Transformer: Reinforcement Learning via Sequence Modeling. ☆2,773 · Apr 29, 2024 · Updated last year
- Scaling Pareto-Efficient Decision Making via Offline Multi-Objective RL (ICLR 2023). ☆33 · Dec 7, 2024 · Updated last year
- Official PyTorch implementation of Scalable Neural Video Representations with Learnable Positional Features (NeurIPS 2022). ☆78 · Apr 3, 2024 · Updated last year
- ☆27 · Apr 22, 2024 · Updated last year
- Code for the paper "Offline Reinforcement Learning as One Big Sequence Modeling Problem". ☆529 · Oct 6, 2022 · Updated 3 years ago
- Author's PyTorch implementation of TD7 for online and offline RL. ☆161 · Sep 12, 2023 · Updated 2 years ago
- Multi-task Multi-agent Soft Actor-Critic for SMAC. ☆15 · Jan 18, 2022 · Updated 4 years ago
- ☆17 · Mar 2, 2023 · Updated 3 years ago