mikelma / componet
Source code of the ICML 2024 paper "Self-Composing Policies for Scalable Continual Reinforcement Learning" (selected for oral presentation)
☆23 · Updated last year
Alternatives and similar repositories for componet
Users interested in componet are comparing it to the libraries listed below
- Code for Mildly Conservative Q-learning for Offline Reinforcement Learning (NeurIPS 2022) ☆59 · Updated last year
- ☆63 · Updated 10 months ago
- [NeurIPS 2024] Official Implementation of Meta-DT ☆45 · Updated 11 months ago
- Official code repository for Prompt-DT. ☆115 · Updated 3 years ago
- [ICML 2022] Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning ☆36 · Updated 3 years ago
- ☆43 · Updated 2 years ago
- [NeurIPS 2023] The official implementation of "Offline Multi-Agent Reinforcement Learning with Implicit Global-to-Local Value Regularizat… ☆38 · Updated last year
- [NeurIPS 2023] Implementation of Elastic Decision Transformer ☆35 · Updated last year
- [ICML 2024] The official implementation of A2PR, a simple way to achieve SOTA in offline reinforcement learning with an adaptive advantage… ☆32 · Updated last year
- ☆36 · Updated 2 years ago
- ☆23 · Updated last year
- Continual reinforcement learning baselines: experiment specifications, implementation of existing methods, and common metrics. Easily ext… ☆125 · Updated 2 years ago
- Implementation of the Decision-Pretrained Transformer (DPT) from the paper Supervised Pretraining Can Learn In-Context Reinforcement Learni… ☆71 · Updated last year
- [ICML 2025 oral] Network Sparsity Unlocks the Scaling Potential of Deep Reinforcement Learning ☆37 · Updated 3 months ago
- ☆83 · Updated 2 years ago
- Official PyTorch implementation of "Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble" (NeurIPS'21) ☆76 · Updated 3 years ago
- A PyTorch implementation of Implicit Q-Learning ☆86 · Updated 3 years ago
- Official code for "RIME: Robust Preference-based Reinforcement Learning with Noisy Preferences" (ICML 2024 Spotlight) ☆34 · Updated 11 months ago
- Re-implementation of the offline model-based RL algorithm MOPO in PyTorch ☆25 · Updated 3 years ago
- Official code for "Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning".☆53Updated last year
- rlplot is an easy-to-use, highly encapsulated RL plotting library (including basic error-bar line plots and a wrapper around "rliable"). ☆33 · Updated last year
- [NeurIPS 2023] Efficient Diffusion Policy ☆110 · Updated last year
- Preference Transformer: Modeling Human Preferences using Transformers for RL (ICLR 2023) ☆163 · Updated last year
- Author's PyTorch implementation of TD7 for online and offline RL ☆148 · Updated 2 years ago
- ☆13 · Updated 11 months ago
- Official code for the paper: Continual Task Allocation in Meta-Policy Network via Sparse Prompting ☆19 · Updated 7 months ago
- Official implementation for the paper "Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning" (NeurIPS 2023) ☆108 · Updated last year
- [NeurIPS 2022 Oral] The official implementation of POR in "A Policy-Guided Imitation Approach for Offline Reinforcement Learning" ☆57 · Updated 2 years ago
- ICML 2024: Q-value Regularized Transformer for Offline Reinforcement Learning ☆34 · Updated 8 months ago
- Official codebase for "B-Pref: Benchmarking Preference-Based Reinforcement Learning" contains scripts to reproduce experiments. ☆130 · Updated 3 years ago