sebascuri / hucrl
☆32 · Updated Nov 13, 2023
Alternatives and similar repositories for hucrl
Users interested in hucrl are comparing it to the libraries listed below.
- ☆66 · Updated Mar 11, 2024
- ☆19 · Updated Jan 9, 2025
- ☆19 · Updated Nov 13, 2023
- Batch and incremental Sparse Spectrum Gaussian Process for Regression · ☆10 · Updated Mar 16, 2021
- Implementation of BIMRL: Brain-Inspired Meta Reinforcement Learning, Roozbeh Razavi et al. (IROS 2022) · ☆10 · Updated Dec 1, 2022
- Experiment utility code, specifically designed for use with Compute Canada · ☆11 · Updated Jan 27, 2025
- Uncertainty sets for nonlinear dynamical systems · ☆10 · Updated Nov 7, 2020
- Google AI Research · ☆10 · Updated Mar 11, 2020
- Provides a heteroscedastic noise latent for a sparse variational Gaussian process using GPflow · ☆13 · Updated Nov 11, 2020
- Official implementation of the paper "Deep Reinforcement Learning with Task-Adaptive Retrieval via Hypernetwork" · ☆12 · Updated Feb 27, 2024
- Resilient Multi-Agent Reinforcement Learning · ☆10 · Updated Nov 4, 2022
- [AAMAS 2023] Code for the paper "Automatic Noise Filtering with Dynamic Sparse Training in Deep Reinforcement Learning" · ☆12 · Updated Feb 22, 2024
- Convergent Policy Optimization for Safe Reinforcement Learning · ☆11 · Updated Oct 26, 2019
- ☆29 · Updated May 27, 2024
- Code & experiments for "LILA: Language-Informed Latent Actions", presented at the Conference on Robot Learning (CoRL) 2021 · ☆14 · Updated Nov 4, 2021
- Open-source code for sparse continuous distributions and corresponding Fenchel-Young losses · ☆16 · Updated May 10, 2023
- Stochastic optimal control and reachability toolbox written in Python · ☆15 · Updated Jul 13, 2023
- Code for testing DCT plus Sparse (DCTpS) networks · ☆14 · Updated Jun 15, 2021
- Library for model-based RL in robotics · ☆37 · Updated Sep 10, 2018
- ☆15 · Updated Jul 1, 2021
- Code for the paper "Learning Multimodal Transition Dynamics for Model-Based Reinforcement Learning" · ☆35 · Updated May 24, 2018
- Representation learning in RL · ☆13 · Updated Jun 1, 2022
- ☆15 · Updated Apr 5, 2023
- Motion imitation with deep reinforcement learning · ☆13 · Updated Jul 24, 2019
- Code base for the paper "B-GAP: Behavior-Rich Simulation and Navigation for Autonomous Driving", published at RA-L … · ☆41 · Updated Oct 18, 2023
- 🔍 Codebase for the ICML 2020 paper "Ready Policy One: World Building Through Active Learning" (arXiv: 2002.02693) · ☆18 · Updated Jul 6, 2023
- Experiment code for "Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models" · ☆472 · Updated Jul 6, 2023
- Exploring the Dyna-Q reinforcement learning algorithm · ☆17 · Updated Feb 27, 2018
- Scalable Bayesian Inverse Reinforcement Learning (ICLR 2021), by Alex J. Chan and Mihaela van der Schaar · ☆46 · Updated Mar 12, 2021
- ☆49 · Updated Jul 23, 2021
- [NeurIPS 2020] Task-Agnostic Amortized Inference of Gaussian Process Hyperparameters (AHGP) · ☆23 · Updated Dec 9, 2020
- Modular-HER, revised from OpenAI Baselines, supporting many improvements to Hindsight Experience Replay as modules · ☆17 · Updated Jun 23, 2021
- (NeurIPS 2022) LISA: Learning Interpretable Skill Abstractions, a framework for unsupervised skill learning using imitation · ☆29 · Updated Feb 22, 2023
- [CVPR 2021] Official implementation of VAI: Unsupervised Visual Attention and Invariance for Reinforcement Learning · ☆27 · Updated May 3, 2022
- On the model-based stochastic value gradient for continuous reinforcement learning · ☆57 · Updated Jan 7, 2026
- Authors' PyTorch implementation of "Recomposing the Reinforcement Learning Building-Blocks with Hypernetworks" (HypeRL) · ☆26 · Updated Jun 9, 2021
- ☆30 · Updated Jun 3, 2022
- ☆31 · Updated Aug 25, 2022
- ☆26 · Updated Jun 17, 2022