DavidJanz / successor_uncertainties_atari
Code for paper "Successor Uncertainties: Exploration and Uncertainty in Temporal Difference Learning" by David Janz*, Jiri Hron*, Przemysław Mazur, Katja Hofmann, José Miguel Hernández-Lobato, Sebastian Tschiatschek. NeurIPS 2019. *Equal contribution
☆21 · Updated 2 years ago
Alternatives and similar repositories for successor_uncertainties_atari
Users interested in successor_uncertainties_atari are comparing it to the libraries listed below.
- Estimating Q(s,s') with Deep Deterministic Dynamics Gradients ☆32 · Updated 5 years ago
- Open source demo for the paper Learning to Score Behaviors for Guided Policy Optimization ☆24 · Updated 4 years ago
- This repository contains code for the method and experiments of the paper "Learning with AMIGo: Adversarially Motivated Intrinsic Goals". ☆61 · Updated last year
- Continual Reinforcement Learning in 3D Non-stationary Environments ☆37 · Updated 5 years ago
- Implicit Normalizing Flows + Reinforcement Learning ☆61 · Updated 5 years ago
- 📴 OffCon^3: SOTA PyTorch SAC and TD3 Implementations (arXiv: 2101.11331) ☆24 · Updated 3 years ago
- Maximum Entropy-Regularized Multi-Goal Reinforcement Learning (ICML 2019) ☆23 · Updated 5 years ago
- E-MAML and RL-MAML baselines implemented in TensorFlow v1 ☆16 · Updated 5 years ago
- Official implementation of DynE, Dynamics-aware Embeddings for RL ☆43 · Updated 4 years ago
- Energy-Based Hindsight Experience Prioritization (CoRL 2018), oral presentation (7%) ☆33 · Updated 6 years ago
- Code for the CoRL 2019 paper AC-Teach: A Bayesian Actor-Critic Method for Policy Learning with an Ensemble of Suboptimal Teachers ☆24 · Updated 2 years ago
- Learning Action-Value Gradients in Model-based Policy Optimization ☆31 · Updated 3 years ago
- Repository for the paper "Long-Horizon Visual Planning with Goal-Conditioned Hierarchical Predictors" ☆44 · Updated 2 years ago
- Implementation of the Model-Based Meta-Policy-Optimization (MB-MPO) algorithm ☆44 · Updated 6 years ago
- Invariant Causal Prediction for Block MDPs ☆44 · Updated 4 years ago
- Code accompanying the paper "Better Exploration with Optimistic Actor Critic" (NeurIPS 2019) ☆70 · Updated last year
- Dead-ends and Secure Exploration in Reinforcement Learning ☆11 · Updated 5 years ago
- Revisiting Rainbow ☆74 · Updated 3 years ago
- Options of Interest: Temporal Abstraction with Interest Functions (AAAI 2020) ☆25 · Updated 4 years ago
- Code for Optimistic Exploration even with a Pessimistic Initialisation ☆14 · Updated 4 years ago
- ☆31 · Updated 5 years ago
- Easy MDPs and grid worlds with accessible transition dynamics to do exact calculations ☆49 · Updated 3 years ago
- Sparse environment for MuJoCo suite (v2 and v3) ☆8 · Updated 5 years ago
- Implementation of the Box-World environment from the paper "Relational Deep Reinforcement Learning" ☆46 · Updated last year
- Implementation of our paper "Meta Reinforcement Learning with Task Embedding and Shared Policy" ☆34 · Updated 6 years ago
- ☆98 · Updated 2 years ago
- Efficient Exploration via State Marginal Matching (2019) ☆68 · Updated 5 years ago
- Reinforcement Learning papers on exploration methods ☆19 · Updated 3 years ago
- Automatic Data-Regularized Actor-Critic (Auto-DrAC) ☆103 · Updated 2 years ago
- ☆25 · Updated 6 years ago