tuomaso / radial_rl_v2
This repository contains the official code for our NeurIPS 2021 publication "Robust Deep Reinforcement Learning through Adversarial Loss".
☆30 · Updated 3 years ago
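For context, a minimal sketch of the general idea behind training a value network against adversarially perturbed observations. This is not this repository's implementation; the toy Q-network, the single FGSM step, and the `eps` budget are illustrative assumptions only.

```python
# Generic sketch: evaluate the TD loss on an observation perturbed within an
# l_inf ball of radius `eps` (one FGSM step), then train on that worst-case-ish
# loss. Illustrative only -- not the RADIAL-RL code.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))  # toy CartPole-sized Q-network (assumption)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
eps = 0.05  # assumed perturbation budget

def adversarial_td_loss(obs, actions, td_targets):
    """TD loss evaluated on an FGSM-perturbed copy of the observations."""
    obs_adv = obs.clone().detach().requires_grad_(True)
    q = q_net(obs_adv).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q, td_targets)
    grad, = torch.autograd.grad(loss, obs_adv)            # gradient w.r.t. the observation
    obs_adv = (obs + eps * grad.sign()).detach()          # one FGSM step inside the l_inf budget
    q_adv = q_net(obs_adv).gather(1, actions.unsqueeze(1)).squeeze(1)
    return nn.functional.smooth_l1_loss(q_adv, td_targets)

# usage with a dummy batch
obs = torch.randn(32, 4)
actions = torch.randint(0, 2, (32,))
td_targets = torch.randn(32)
loss = adversarial_td_loss(obs, actions, td_targets)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```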
Alternatives and similar repositories for radial_rl_v2
Users interested in radial_rl_v2 are comparing it to the libraries listed below.
- Efficient Adversarial Training without Attacking: Worst-Case-Aware Robust Reinforcement Learning ☆26 · Updated 2 years ago
- Robust Reinforcement Learning with the Alternating Training of Learned Adversaries (ATLA) framework ☆66 · Updated 4 years ago
- Code accompanying the paper "Action Robust Reinforcement Learning and Applications in Continuous Control" https://arxiv.org/abs/1901.0918… ☆48 · Updated 6 years ago
- Code for "On the Robustness of Safe Reinforcement Learning under Observational Perturbations" (ICLR 2023) ☆45 · Updated 9 months ago
- [NeurIPS 2020, Spotlight] Code for "Robust Deep Reinforcement Learning against Adversarial Perturbations on Observations" ☆136 · Updated 3 years ago
- [S&P 2024] Replication Package for "Mind Your Data! Hiding Backdoors in Offline Reinforcement Learning Datasets". ☆28 · Updated 8 months ago
- ☆75 · Updated last year
- Code for the paper "WCSAC: Worst-Case Soft Actor Critic for Safety-Constrained Reinforcement Learning" ☆60 · Updated 2 years ago
- Implementations of safe reinforcement learning algorithms ☆28 · Updated last year
- An open-source framework to benchmark and assess safety specifications of Reinforcement Learning problems. ☆70 · Updated 2 years ago
- Pytorch implementation of Multi-Agent Generative Adversarial Imitation Learning ☆42 · Updated 3 years ago
- Implementations of SAILR, PDO, and CSC ☆31 · Updated last year
- Safe Reinforcement Learning in Constrained Markov Decision Processes ☆60 · Updated 5 years ago
- Code for "Constrained Variational Policy Optimization for Safe Reinforcement Learning" (ICML 2022) ☆80 · Updated 2 years ago
- Pytorch implementation of "Safe Exploration in Continuous Action Spaces" [Dalal et al.] ☆73 · Updated 6 years ago
- [NeurIPS 2020, Spotlight] State-Adversarial DQN (SA-DQN) for robust deep reinforcement learning ☆34 · Updated 4 years ago
- Deep Learning (FS 2020) ☆17 · Updated 2 years ago
- Code accompanying the paper "Off-Policy Primal-Dual Safe Reinforcement Learning" ☆20 · Updated last year
- Code for the NeurIPS 2021 paper "Safe Reinforcement Learning by Imagining the Near Future" ☆46 · Updated 3 years ago
- Pytorch implementation of InfoGAIL and WGAIL ☆19 · Updated 2 years ago
- Safe Model-based Reinforcement Learning with Robust Cross-Entropy Method ☆66 · Updated 2 years ago
- ☆13 · Updated last year
- An implementation of Constrained Policy Optimization (Achiam 2017) in PyTorch ☆26 · Updated 5 years ago
- Safe Multi-Agent MuJoCo benchmark for safe multi-agent reinforcement learning research. ☆65 · Updated last year
- Official open-source implementation of the ICML 2022 paper "Reachability Constrained Reinforcement Learning" ☆36 · Updated 3 years ago
- Pytorch Implementation for First Order Constrained Optimization in Policy Space (FOCOPS) ☆29 · Updated 3 years ago
- Official code for "RAMBO: Robust Adversarial Model-Based Offline RL", NeurIPS 2022 ☆30 · Updated 2 years ago
- Generate expert demonstrations; GAIL (Generative Adversarial Imitation Learning); IRL (Inverse Reinforcement Learning) ☆32 · Updated 4 years ago
- [NeurIPS 2020 Spotlight] State-adversarial PPO for robust deep reinforcement learning ☆31 · Updated 3 years ago
- Model-Free Safe Reinforcement Learning through Neural Barrier Certificate ☆42 · Updated last year