ftramer/ad-versarial ☆44, updated 2 years ago
Alternatives and similar repositories for ad-versarial:
Users interested in ad-versarial are comparing it to the repositories listed below.
- ☆23, updated last year
- Interval attacks (adversarial ML) ☆21, updated 5 years ago
- Detecting Adversarial Examples in Deep Neural Networks ☆66, updated 6 years ago
- ☆120, updated 3 years ago
- ☆11, updated 5 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" (NeurIPS 2019) ☆47, updated 2 years ago
- Code for "Variational Model Inversion Attacks", Wang et al. (NeurIPS 2021) ☆20, updated 3 years ago
- EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples ☆39, updated 6 years ago
- Craft poisoned data using MetaPoison ☆49, updated 3 years ago
- Implementation demo of the IJCAI 2022 paper "Eliminating Backdoor Triggers for Deep Neural Networks Using Attention Relation …" ☆20, updated 3 months ago
- Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS '19) ☆55, updated 4 years ago
- ☆25, updated 6 years ago
- Implementation of the ICLR 2021 paper "Policy-Driven Attack: Learning to Query for Hard-Label Black-Box Adversarial Examples" ☆11, updated 3 years ago
- Towards Reverse-Engineering Black-Box Neural Networks (ICLR '18) ☆55, updated 5 years ago
- Implementation of membership inference and model inversion attacks, extracting training-data information from an ML model. Benchmarking … ☆102, updated 5 years ago
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆47, updated 4 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆44, updated 5 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks (NeurIPS 2019) ☆51, updated 4 years ago
- ☆26, updated 6 years ago
- Code for "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors" ☆63, updated 5 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21, updated 5 years ago
- Code for the AAAI 2018 paper "Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing the…" ☆55, updated 2 years ago
- AAAI 2019 oral presentation ☆50, updated 6 months ago
- A general method for training cost-sensitive robust classifiers ☆22, updated 5 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆38, updated 6 years ago
- Breaking Certifiable Defenses ☆17, updated 2 years ago
- Official implementation of the paper "A New Defense Against Adversarial Images: Turning a Weakness into a Strength" ☆38, updated 5 years ago
- ☆84, updated last year
- Code release for the ICML 2019 paper "Are Generative Classifiers More Robust to Adversarial Attacks?" ☆23, updated 5 years ago
- Code for the paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality" ☆122, updated 4 years ago