david-lindner / idrl
Code accompanying the paper "Information Directed Reward Learning for Reinforcement Learning" (NeurIPS 2021).
☆13 · Updated 3 years ago
Alternatives and similar repositories for idrl
Users interested in idrl are comparing it to the repositories listed below.
- Estimating Q(s,s') with Deep Deterministic Dynamics Gradients ☆32 · Updated 5 years ago
- ☆30 · Updated last year
- Model-Based Reinforcement Learning via Latent-Space Collocation ☆33 · Updated 2 years ago
- IV-RL - Sample Efficient Deep Reinforcement Learning via Uncertainty Estimation ☆39 · Updated 8 months ago
- [ICLR 22] Value Gradient weighted Model-Based Reinforcement Learning ☆24 · Updated 2 years ago
- A simple and easy-to-use implementation of the soft actor-critic algorithm ☆15 · Updated 2 years ago
- Implementation of the Prioritized Option-Critic on the Four-Rooms Environment ☆16 · Updated 7 years ago
- Learning Off-Policy with Online Planning [CoRL 2021 Best Paper Finalist] ☆39 · Updated 2 years ago
- 🔍 Codebase for the ICML '20 paper "Ready Policy One: World Building Through Active Learning" (arXiv: 2002.02693) ☆18 · Updated last year
- ☆29 · Updated 4 years ago
- Code to accompany the paper "The Information Geometry of Unsupervised Reinforcement Learning" ☆20 · Updated 3 years ago
- ☆31 · Updated 2 years ago
- Implementation for the ICML 2019 paper "EMI: Exploration with Mutual Information" ☆36 · Updated 4 years ago
- CaDM: Context-aware Dynamics Model for Generalization in Model-based Reinforcement Learning ☆63 · Updated 5 years ago
- PyTorch code for "Learning Belief Representations for Imitation Learning in POMDPs" (UAI 2019) ☆19 · Updated 2 years ago
- Learning Action-Value Gradients in Model-based Policy Optimization ☆31 · Updated 3 years ago
- Authors' PyTorch implementation of Deep Homomorphic Policy Gradient (DHPG) - NeurIPS 2022 and JMLR 2024 ☆23 · Updated last year
- Efficient Exploration via State Marginal Matching (2019) ☆69 · Updated 5 years ago
- ☆13 · Updated 6 years ago
- Official pytorch implementation for our ICLR 2023 paper "Latent State Marginalization as a Low-cost Approach for Improving Exploration".