Simplifying Model-based RL: Learning Representations, Latent-space Models and Policies with One Objective
☆82, updated Mar 9, 2023
Alternatives and similar repositories for alm
Users interested in alm are comparing it to the repositories listed below.
- Code to accompany the paper "Mismatched No More: Joint Model-Policy Optimization for Model-Based RL" (☆21, updated Oct 6, 2021)
- On the model-based stochastic value gradient for continuous reinforcement learning (☆57, updated Jan 7, 2026)
- ☆24, updated Jan 26, 2024
- Evaluation of TD-MPC2 (☆21, updated Jan 21, 2024)
- Dream to Control: Learning Behaviors by Latent Imagination, implemented in PyTorch (☆321, updated Jan 11, 2024)
- Fast reinforcement learning research (☆61, updated Dec 7, 2024)
- 🔍 Codebase for the ICML '20 paper "Ready Policy One: World Building Through Active Learning" (arXiv: 2002.02693) (☆18, updated Jul 6, 2023)
- Evaluating long-term memory of reinforcement learning algorithms (☆164, updated Jun 23, 2023)
- ☆59, updated Sep 22, 2022
- PyTorch implementation of Dreamer (model-based image RL algorithm) (☆169, updated Jan 19, 2025)
- [ICLR 22] Value Gradient weighted Model-Based Reinforcement Learning (☆25, updated Apr 15, 2023)
- Learning Robust Dynamics Through Variational Sparse Gating (☆20, updated Oct 19, 2022)
- Proto-RL: Reinforcement Learning with Prototypical Representations (☆86, updated Jun 12, 2022)
- Open-source code for the paper "Denoised MDPs: Learning World Models Better Than the World Itself" (☆137, updated Aug 15, 2023)
- DrQ-v2: Improved Data-Augmented Reinforcement Learning (☆431, updated May 31, 2022)
- Formulating model-based RL dynamics as continuous rather than one-step prediction (☆36, updated Aug 24, 2022)
- Discovering and Achieving Goals via World Models, NeurIPS 2021 (☆88, updated Jan 24, 2024)
- ☆52, updated Jan 20, 2023
- Code release for "Efficient Planning in a Compact Latent Action Space" (ICLR 2023), https://arxiv.org/abs/2208.10291 (☆113, updated May 12, 2023)
- Repository for the paper "Planning to Explore via Self-Supervised World Models" (☆234, updated Feb 10, 2023)
- Code base for the paper "Reparameterized Policy Learning for Multimodal Trajectory Optimization" (☆27, updated Jul 19, 2023)
- Source code for the paper "Policy Architectures for Compositional Generalization in Control" (☆30, updated May 19, 2022)
- Deep Hierarchical Planning from Pixels (☆115, updated Dec 21, 2022)
- Library for model-based RL (☆1,054, updated Jul 12, 2024)
- Official implementation for "Q-Ensemble for Offline RL: Don't Scale the Ensemble, Scale the Batch Size", NeurIPS 2022, Offline RL Worksho… (☆21, updated Feb 27, 2023)
- Code to accompany the paper "The Information Geometry of Unsupervised Reinforcement Learning" (☆20, updated Oct 6, 2021)
- ☆81, updated Jul 8, 2022
- Code for "Temporal Difference Learning for Model Predictive Control" (☆502, updated Nov 25, 2023)
- ☆46, updated Sep 24, 2024
- [ICML 2021] Learning Task Informed Abstractions -- a representation learning approach for model-based RL in complex visual domains (☆18, updated Jul 20, 2021)
- Bipedal Skills Benchmark for Reinforcement Learning (☆25, updated Oct 27, 2022)
- Code for "Powderworld: A Platform for Understanding Generalization via Rich Task Distributions" (☆73, updated Aug 31, 2024)
- PyTorch implementation of Dreamer-v2, a visual model-based RL algorithm (☆274, updated Jul 29, 2023)
- Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations (☆113, updated May 27, 2024)
- Predictable MDP Abstraction for Unsupervised Model-Based RL (ICML 2023) (☆32, updated Feb 6, 2023)
- MR.Q, a general-purpose model-free reinforcement learning algorithm (☆143, updated Jun 23, 2025)
- Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning (☆11, updated Jun 16, 2022)
- ☆12, updated Apr 25, 2022
- ☆122, updated Feb 25, 2025