behzadanksu / rl-attack
Adversarial Example Attacks on Policy Learners
☆40 · Updated 5 years ago
Alternatives and similar repositories for rl-attack
Users interested in rl-attack are comparing it to the libraries listed below.
- Code for "Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight"☆79Updated 8 years ago
- ☆27Updated 2 years ago
- Code used in our paper "Robust Deep Reinforcement Learning through Adversarial Loss" · ☆33 · Updated 2 years ago
- Efficient Robustness Verification for ReLU networks (this repository is outdated, don't use it; check out our new implementation at https://g… · ☆30 · Updated 5 years ago
- [NeurIPS 2020, Spotlight] Code for "Robust Deep Reinforcement Learning against Adversarial Perturbations on Observations" · ☆137 · Updated 3 years ago
- Learning Backtracking Models, ICLR'19 · ☆10 · Updated 7 years ago
- ☆26 · Updated 2 years ago
- Benchmark for LP-relaxed robustness verification of ReLU networks · ☆42 · Updated 6 years ago
- This repository contains a simple implementation of Interval Bound Propagation (IBP) using TensorFlow: https://arxiv.org/abs/1810.12715 (see the IBP sketch after this list) · ☆162 · Updated 5 years ago
- Adversarial attacks on Deep Reinforcement Learning (RL) · ☆95 · Updated 4 years ago
- Code to train RL agents along with adversarial disturbance agents · ☆66 · Updated 8 years ago
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… · ☆98 · Updated 4 years ago
- ☆27 · Updated 4 years ago
- Open source implementation of the TrojDRL algorithm presented in TrojDRL: Evaluation of backdoor attacks on Deep Reinforcement Learning · ☆19 · Updated 5 years ago
- Robust Reinforcement Learning with the Alternating Training of Learned Adversaries (ATLA) framework · ☆68 · Updated 4 years ago
- Modular PyTorch implementation of policy gradient methods · ☆25 · Updated 6 years ago
- AAAI 2019 oral presentation · ☆53 · Updated 4 months ago
- PyTorch implementation of our paper Real-Time Reinforcement Learning (NeurIPS 2019) · ☆76 · Updated 5 years ago
- A method for training neural networks that are provably robust to adversarial attacks. · ☆390 · Updated 3 years ago
- Interval attacks (adversarial ML) · ☆21 · Updated 6 years ago
- A certifiable defense against adversarial examples by training neural networks to be provably robust · ☆222 · Updated last year
- Code for human intervention reinforcement learning · ☆35 · Updated 7 years ago
- Deep Variational Reinforcement Learning · ☆137 · Updated 3 years ago
- Machine Learning Course Project, Skoltech 2018 · ☆108 · Updated 6 years ago
- PyTorch implementation of LOLA (https://arxiv.org/abs/1709.04326) using DiCE (https://arxiv.org/abs/1802.05098) · ☆96 · Updated 7 years ago
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure. · ☆62 · Updated 6 years ago
- Safe Reinforcement Learning algorithms · ☆75 · Updated 3 years ago
- Certifying Geometric Robustness of Neural Networks · ☆16 · Updated 2 years ago
- This repository contains the code used in the paper Evaluating the Performance of Reinforcement Learning Algorithms · ☆28 · Updated 4 years ago
- Certifying Some Distributional Robustness with Principled Adversarial Training (https://arxiv.org/abs/1710.10571) · ☆45 · Updated 7 years ago
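
Several of the repositories above (the TensorFlow IBP implementation, the CROWN/IBP certified defense, and the LP-relaxation benchmark) revolve around Interval Bound Propagation. For orientation only, here is a minimal NumPy sketch of the IBP forward pass described in https://arxiv.org/abs/1810.12715; the layer shapes, weights, and function names are illustrative assumptions, not code from any listed repository.

```python
import numpy as np

def affine_bounds(lower, upper, W, b):
    # For y = W @ x + b with x in [lower, upper] elementwise,
    # propagate the box through the layer via its center and radius.
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case growth of the radius
    return new_center - new_radius, new_center + new_radius

def ibp_forward(lower, upper, layers):
    # layers: list of (W, b) pairs; ReLU applied between affine layers.
    for i, (W, b) in enumerate(layers):
        lower, upper = affine_bounds(lower, upper, W, b)
        if i < len(layers) - 1:
            # ReLU is monotone, so applying it to both bounds is sound.
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

# Hypothetical usage: bound the logits over an L-infinity ball of radius eps.
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((3, 8)), np.zeros(3))]
x, eps = rng.standard_normal(4), 0.1
lo, hi = ibp_forward(x - eps, x + eps, layers)
print(lo, hi)  # certified elementwise lower/upper bounds on each logit
```

If the lower bound of the true class's logit exceeds the upper bounds of all other logits, the prediction is certifiably robust within the ball; the certified training in these repositories optimizes exactly that margin.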