TakuyaHiraoka / Dropout-Q-Functions-for-Doubly-Efficient-Reinforcement-Learning
Source files to replicate the experiments in my ICLR 2022 paper, Dropout Q-Functions for Doubly Efficient Reinforcement Learning.
☆71 · Updated 11 months ago
Alternatives and similar repositories for Dropout-Q-Functions-for-Doubly-Efficient-Reinforcement-Learning
Users interested in Dropout-Q-Functions-for-Doubly-Efficient-Reinforcement-Learning are comparing it to the repositories listed below.
- HIQL: Offline Goal-Conditioned RL with Latent States as Actions (NeurIPS 2023) ☆85 · Updated 6 months ago
- Official code release for "CrossQ: Batch Normalization in Deep Reinforcement Learning for Greater Sample Efficiency and Simplicity" ☆75 · Updated last year
- ☆53 · Updated 3 years ago
- Skill-based Model-based Reinforcement Learning (CoRL 2022) ☆60 · Updated 2 years ago
- Advantage Weighted Actor Critic for Offline RL ☆50 · Updated 2 years ago
- Official implementation of the NeurIPS 2023 paper "Beyond Uniform Sampling: Offline Reinforcement Learning with Imbalanced Datasets" ☆26 · Updated last year
- ☆48 · Updated 7 months ago
- EARL: Environment for Autonomous Reinforcement Learning ☆37 · Updated 2 years ago
- Implementation of Jump-Start Reinforcement Learning (JSRL) with Stable Baselines3 ☆31 · Updated last year
- Official release of CompoSuite, a compositional RL benchmark ☆49 · Updated last year
- Code and project page for the D-REX algorithm from the paper "Better-than-Demonstrator Imitation Learning via Automatically-Ranked Demonstrations" ☆50 · Updated 2 years ago
- My Body Is A Cage ☆41 · Updated 4 years ago
- Code for "Planning Goals for Exploration" (ICLR 2023 Spotlight), an unsupervised RL agent for hard-exploration tasks ☆78 · Updated last year
- Official Codebase for Offline Reinforcement Learning from Images with Latent Space Models