Multi Task RL Baselines
☆262 · Updated Dec 31, 2021
Alternatives and similar repositories for mtrl
Users interested in mtrl are comparing it to the libraries listed below.
- MultiTask Environments for Reinforcement Learning. ☆79 · Updated Aug 18, 2022
- Code for "Multi-task Reinforcement Learning with Soft Modularization" ☆122 · Updated Dec 28, 2020
- Collections of robotics environments geared towards benchmarking multi-task and meta reinforcement learning ☆1,771 · Updated Jan 20, 2026
- ☆30 · Updated Jul 12, 2023
- Official PyTorch implementation of Conflict-Averse Gradient Descent (CAGrad) ☆146 · Updated Nov 9, 2023
- Invariant Causal Prediction for Block MDPs ☆44 · Updated Jun 11, 2020
- ☆26 · Updated Mar 16, 2023
- Library for Model-Based RL ☆1,058 · Updated Jul 12, 2024
- Code to accompany the paper "The Information Geometry of Unsupervised Reinforcement Learning" ☆20 · Updated Oct 6, 2021
- A list of papers on generalization in (deep) reinforcement learning ☆154 · Updated Aug 12, 2023
- ☆49 · Updated Jul 30, 2023
- Latent Dynamics Mixture, NeurIPS 2021 ☆18 · Updated Oct 25, 2022
- ☆33 · Updated Aug 30, 2024
- Learning Invariant Representations for Reinforcement Learning without Reconstruction ☆157 · Updated Aug 31, 2021
- Open-source code for the paper "Denoised MDPs: Learning World Models Better Than the World Itself" ☆137 · Updated Aug 15, 2023
- Implementation of VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning - Zintgraf et al. (ICLR 2020) ☆199 · Updated Mar 15, 2023
- Implementation of Efficient Off-policy Meta-learning via Probabilistic Context Variables (PEARL) ☆505 · Updated Dec 1, 2022
- ☆25 · Updated Jan 2, 2019
- Code for Environment Probing Interaction Policies [ICLR 2019] ☆29 · Updated Jun 17, 2019
- RLStructures is a library to facilitate the implementation of new reinforcement learning algorithms. It includes a library, a tutorial, a… ☆261 · Updated Mar 2, 2023
- ☆22 · Updated Sep 22, 2022
- ☆361 · Updated Oct 12, 2022
- CURL: Contrastive Unsupervised Representation Learning for Sample-Efficient Reinforcement Learning ☆599 · Updated Oct 28, 2020
- Proto-RL: Reinforcement Learning with Prototypical Representations ☆86 · Updated Jun 12, 2022
- Implementation of our paper "Meta Reinforcement Learning with Task Embedding and Shared Policy" ☆35 · Updated May 17, 2019
- Official code for "Pretraining Representations For Data-Efficient Reinforcement Learning" (NeurIPS 2021) ☆55 · Updated Jul 27, 2021
- Reinforcement Learning with Model-Agnostic Meta-Learning in PyTorch ☆877 · Updated Dec 27, 2022
- Decoupled Reward-free ExplorAtion and Execution for Meta-reinforcement learning ☆90 · Updated Feb 13, 2023
- A toolkit for reproducible reinforcement learning research ☆2,088 · Updated May 4, 2023
- ☆24 · Updated Aug 9, 2024
- Real-World RL Benchmark Suite ☆365 · Updated Aug 11, 2020
- Benchmarking RL generalization in an interpretable way ☆179 · Updated Nov 20, 2025
- Author's PyTorch implementation of BCQ for continuous and discrete actions ☆660 · Updated Apr 6, 2021
- ☆16 · Updated Aug 2, 2022
- Implementation of Proximal Meta-Policy Search (ProMP) as well as related Meta-RL algorithms. Includes a useful experiment framework for Me… ☆248 · Updated Sep 30, 2022
- Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022 ☆344 · Updated Aug 22, 2024
- Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5% ☆871 · Updated Oct 14, 2024
- A collection of reference environments for offline reinforcement learning ☆1,663 · Updated Nov 18, 2024
- [NeurIPS'21 Outstanding Paper] Library for reliable evaluation on RL and ML benchmarks, even with only a handful of seeds ☆868 · Updated Aug 12, 2024