manish-pra / copg
This repository contains all code and experiments for the competitive policy gradient (CoPG) algorithm.
☆24 · Updated 5 years ago
Alternatives and similar repositories for copg
Users interested in copg are comparing it to the libraries listed below.
- ☆27 · Updated 5 years ago
- This code implements Prioritized Level Replay, a method for sampling training levels for reinforcement learning agents that exploits the … ☆92 · Updated 4 years ago
- Implementation of the Model-Based Meta-Policy-Optimization (MB-MPO) algorithm ☆44 · Updated 7 years ago
- ☆99 · Updated 2 years ago
- An OpenAI Gym environment for multi-agent car racing based on Gym's original car racing environment. ☆88 · Updated 4 years ago
- ☆18 · Updated 5 years ago
- Safe Policy Improvement with Baseline Bootstrapping ☆26 · Updated 5 years ago
- Safe Model-based Reinforcement Learning with Robust Cross-Entropy Method ☆67 · Updated 2 years ago
- ☆32 · Updated 2 years ago
- ☆30 · Updated 4 years ago
- Invariant Causal Prediction for Block MDPs ☆44 · Updated 5 years ago
- Learning Off-Policy with Online Planning [CoRL 2021 Best Paper Finalist] ☆41 · Updated 3 years ago
- ☆32 · Updated 4 years ago
- Implicit Normalizing Flows + Reinforcement Learning ☆61 · Updated 6 years ago
- IV-RL - Sample Efficient Deep Reinforcement Learning via Uncertainty Estimation ☆40 · Updated 4 months ago
- Estimating Q(s,s') with Deep Deterministic Dynamics Gradients ☆32 · Updated 5 years ago
- The official repository of "Decoupled Reinforcement Learning to Stabilise Intrinsically-Motivated Exploration" (AAMAS 2022) ☆27 · Updated 3 years ago
- On-policy optimization baselines for deep reinforcement learning ☆32 · Updated 5 years ago
- Disagreement-Regularized Imitation Learning ☆30 · Updated 4 years ago
- Code for "Calibrated Model-Based Deep Reinforcement Learning", ICML 2019 ☆55 · Updated 6 years ago
- Code for the demonstration example task in the RUDDER blog ☆24 · Updated 5 years ago
- On the model-based stochastic value gradient for continuous reinforcement learning ☆57 · Updated 2 years ago
- ☆17 · Updated last year
- Offline Risk-Averse Actor-Critic (O-RAAC). A model-free RL algorithm for risk-averse RL in a fully offline setting ☆35 · Updated 4 years ago
- ☆78 · Updated last year
- Implementation of the Option-Critic Architecture ☆40 · Updated 7 years ago
- Implementation of the Prioritized Option-Critic on the Four-Rooms Environment ☆17 · Updated 7 years ago
- Code for reproducing experiments in Model-Based Active Exploration, ICML 2019 ☆79 · Updated 6 years ago
- Implementation of Tactical Optimistic and Pessimistic value estimation ☆25 · Updated 2 years ago
- Safe Option-Critic: Learning Safety in the Option-Critic Architecture ☆20 · Updated 6 years ago