behzadanksu / rl-attack
Adversarial Example Attacks on Policy Learners
☆40 · Updated 4 years ago
Alternatives and similar repositories for rl-attack
Users interested in rl-attack are comparing it to the repositories listed below.
- Code for "Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight" ☆79 · Updated 7 years ago
- ☆27 · Updated 2 years ago
- [NeurIPS 2020, Spotlight] Code for "Robust Deep Reinforcement Learning against Adversarial Perturbations on Observations" ☆133 · Updated 3 years ago
- Efficient Robustness Verification for ReLU networks (this repository is outdated, don't use; check out our new implementation at https://g… ☆30 · Updated 5 years ago
- Adversarial attacks on Deep Reinforcement Learning (RL) ☆91 · Updated 4 years ago
- Code used in our paper "Robust Deep Reinforcement Learning through Adversarial Loss" ☆33 · Updated last year
- Code to train RL agents along with adversarial disturbance agents ☆65 · Updated 8 years ago
- Robust Reinforcement Learning with the Alternating Training of Learned Adversaries (ATLA) framework ☆67 · Updated 4 years ago
- A simple implementation of Interval Bound Propagation (IBP) using TensorFlow: https://arxiv.org/abs/1810.12715 ☆161 · Updated 5 years ago
- Learning Backtracking Models, ICLR'19 ☆10 · Updated 7 years ago
- ☆26 · Updated 2 years ago
- Open-source implementation of the TrojDRL algorithm presented in "TrojDRL: Evaluation of Backdoor Attacks on Deep Reinforcement Learning" ☆19 · Updated 4 years ago
- Benchmark for LP-relaxed robustness verification of ReLU networks ☆41 · Updated 6 years ago
- A method for training neural networks that are provably robust to adversarial attacks ☆390 · Updated 3 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020) ☆52 · Updated 4 years ago
- Certifying Some Distributional Robustness with Principled Adversarial Training (https://arxiv.org/abs/1710.10571) ☆45 · Updated 7 years ago
- Certified defense to adversarial examples using CROWN and IBP. Also includes a GPU implementation of the CROWN verification algorithm (in PyTor… ☆97 · Updated 4 years ago
- A certifiable defense against adversarial examples by training neural networks to be provably robust ☆221 · Updated 11 months ago
- ☆27 · Updated 4 years ago
- Code for "Black-box Adversarial Attacks with Limited Queries and Information" (http://arxiv.org/abs/1804.08598) ☆179 · Updated 3 years ago
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆38 · Updated 6 years ago
- Code for human intervention reinforcement learning ☆34 · Updated 7 years ago
- PyTorch implementation of LOLA (https://arxiv.org/abs/1709.04326) using DiCE (https://arxiv.org/abs/1802.05098) ☆95 · Updated 6 years ago
- Ensemble Adversarial Training on MNIST ☆121 · Updated 8 years ago
- Safe Reinforcement Learning algorithms ☆74 · Updated 2 years ago
- Simple grid-world environment compatible with OpenAI Gym ☆50 · Updated 5 years ago
- AAAI 2019 oral presentation ☆52 · Updated last month
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure ☆62 · Updated 6 years ago
- CROWN: A Neural Network Robustness Certification Algorithm for General Activation Functions (this repository is outdated; use https://git… ☆17 · Updated 6 years ago
- Interval attacks (adversarial ML) ☆21 · Updated 6 years ago
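
Many of the repositories above implement observation attacks in the FGSM family: perturb the agent's observation by a small signed-gradient step so the policy's preferred action becomes less likely. As a minimal sketch of that idea, the snippet below attacks a toy linear softmax policy with the gradient computed by hand; the policy, weights, and epsilon are hypothetical illustrations, not code from any repository listed here.

```python
import numpy as np

# Minimal FGSM-style observation attack on a toy linear softmax policy.
# Everything here is a hypothetical illustration of the general technique.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # 4-dim observation -> logits over 2 actions


def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()


def fgsm_perturb(obs, W, eps=0.5):
    """One signed-gradient step that increases the policy's
    cross-entropy loss on its own greedy action."""
    probs = softmax(obs @ W)
    a = int(np.argmax(probs))       # action the clean policy would take
    grad_logits = probs.copy()
    grad_logits[a] -= 1.0           # d(-log p_a) / d logits = p - onehot(a)
    grad_obs = W @ grad_logits      # chain rule through logits = obs @ W
    return obs + eps * np.sign(grad_obs)


obs = rng.normal(size=4)
adv = fgsm_perturb(obs, W)
```

Because the cross-entropy loss of a linear model is convex in the observation, this single signed step provably lowers the probability of the clean greedy action while staying inside an L-infinity ball of radius `eps`; deep policies need the gradient from autodiff instead, but the step itself is the same.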