polixir / NeoRL
Python interface for accessing the near real-world offline reinforcement learning (NeoRL) benchmark datasets
☆130 · Updated last year
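For orientation, dataset access in NeoRL goes through a small Python API. The sketch below shows the typical load-and-inspect flow as I recall it from the project's README; the task name "citylearn" and the `get_dataset` keyword arguments are assumptions to verify against the repository, not a definitive reference.

```python
import neorl  # assumed package name, installed from the NeoRL repository

# Create a benchmark environment by task name ("citylearn" is one of the
# NeoRL tasks; the full list is in the project README).
env = neorl.make("citylearn")

# Fetch the offline dataset. data_type selects the behavior-policy quality
# level and train_num the number of training trajectories; both argument
# names are assumptions based on the README.
train_data, val_data = env.get_dataset(data_type="high", train_num=99)

# The datasets are dicts of arrays keyed by transition fields,
# e.g. observations, actions, rewards, next observations, terminals.
print(train_data.keys())
```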
Alternatives and similar repositories for NeoRL
Users interested in NeoRL are comparing it to the libraries listed below.
- A collection of offline reinforcement learning algorithms. ☆207 · Updated last year
- Code for MOPO: Model-based Offline Policy Optimization ☆190 · Updated 3 years ago
- RLA is a tool for managing your RL experiments automatically ☆72 · Updated 2 years ago
- Model-Based Offline Reinforcement Learning ☆51 · Updated 4 years ago
- A PyTorch replication of the model-based reinforcement learning algorithm MBPO ☆182 · Updated 3 years ago
- Official PyTorch implementation of "Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble" (NeurIPS'21) ☆79 · Updated 3 years ago
- ☆201 · Updated 2 years ago
- Re-implementations of SOTA RL algorithms. ☆136 · Updated 2 years ago
- ☆115 · Updated 2 years ago
- Paper collection for batch RL with brief introductions. ☆84 · Updated 3 years ago
- Code for the FOCAL paper (ICLR 2021) ☆53 · Updated 2 years ago
- Code for Stabilizing Off-Policy RL via Bootstrapping Error Reduction ☆163 · Updated 5 years ago
- PyTorch implementation of the offline reinforcement learning algorithm CQL. Includes the versions DQN-CQL and SAC-CQL for discrete and continuous action spaces. ☆144 · Updated last year
- Benchmarked implementations of offline RL algorithms. ☆76 · Updated 9 months ago
- Conservative Q-Learning on top of SAC (a minimal sketch of the CQL penalty follows this list)
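Several entries above implement Conservative Q-Learning (CQL). As a rough illustration of the idea, and not code from any listed repository, the sketch below computes a simplified CQL(H) penalty for a continuous-action critic: Q-values on uniformly sampled out-of-distribution actions are pushed down via a logsumexp, while Q-values on dataset actions are pushed up. The `q_net` interface, action bounds, and sample count are assumptions, and the importance-sampling correction from the paper's full estimator is omitted.

```python
import torch

def cql_penalty(q_net, states, dataset_actions, num_random=10,
                action_low=-1.0, action_high=1.0):
    """Simplified CQL(H) regularizer: logsumexp of Q over random actions
    minus Q on dataset actions.

    q_net(states, actions) -> (batch, 1) is an assumed critic interface;
    the action bounds and num_random are illustrative choices.
    """
    batch, action_dim = dataset_actions.shape
    # Uniformly sample candidate actions to estimate the logsumexp term.
    rand_actions = torch.empty(batch, num_random, action_dim).uniform_(
        action_low, action_high)
    states_rep = states.unsqueeze(1).expand(-1, num_random, -1)
    q_rand = q_net(states_rep.reshape(batch * num_random, -1),
                   rand_actions.reshape(batch * num_random, -1))
    q_rand = q_rand.reshape(batch, num_random)
    # Push down Q on out-of-distribution actions, push up on dataset actions.
    q_data = q_net(states, dataset_actions).squeeze(-1)
    return (torch.logsumexp(q_rand, dim=1) - q_data).mean()
```

In a SAC-style critic update, this penalty is scaled by a coefficient α and added to the usual Bellman error before backpropagation.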