AngusG / cleverhans-attacking-bnns
Source code for the paper "Attacking Binarized Neural Networks"
☆23 · Updated 7 years ago
Alternatives and similar repositories for cleverhans-attacking-bnns
Users interested in cleverhans-attacking-bnns are comparing it to the repositories listed below.
- Analysis of Adversarial Logit Pairing ☆60 · Updated 7 years ago
- ☆18 · Updated 5 years ago
- Robustness vs Accuracy Survey on ImageNet ☆98 · Updated 4 years ago
- [NeurIPS'2019] Shupeng Gui, Haotao Wang, Haichuan Yang, Chen Yu, Zhangyang Wang, Ji Liu, "Model Compression with Adversarial Robustness: … ☆50 · Updated 3 years ago
- [ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference" ☆24 · Updated 3 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- Code release for "Adversarial Robustness vs Model Compression, or Both?" ☆91 · Updated 4 years ago
- Implementation of our NeurIPS 2019 paper: Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks ☆10 · Updated 5 years ago
- ☆26 · Updated 6 years ago
- AAAI 2019 oral presentation ☆52 · Updated 3 months ago
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆49 · Updated 5 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs ☆97 · Updated 4 years ago
- Code used in 'Exploring the Space of Black-box Attacks on Deep Neural Networks' (https://arxiv.org/abs/1712.09491) ☆61 · Updated 7 years ago
- Python implementation for the paper: Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples ☆11 · Updated 7 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆32 · Updated 5 years ago
- Official implementation for the paper: A New Defense Against Adversarial Images: Turning a Weakness into a Strength ☆38 · Updated 5 years ago
- ☆48 · Updated 4 years ago
- ☆29 · Updated 6 years ago
- Code and checkpoints of compressed networks for the paper "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆92 · Updated 2 years ago
- Code used in "Decision Boundary Analysis of Adversarial Examples" (https://openreview.net/forum?id=BkpiPMbA-) ☆28 · Updated 6 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated 6 years ago
- Code for the paper "MMA Training: Direct Input Space Margin Maximization through Adversarial Training" ☆34 · Updated 5 years ago
- A PyTorch baseline attack example for the NIPS 2017 adversarial competition ☆86 · Updated 8 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations ☆49 · Updated 5 years ago
- Further improve robustness of mixup-trained models in inference (ICLR 2020) ☆60 · Updated 5 years ago
- StrAttack, ICLR 2019 ☆33 · Updated 6 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 2 years ago
- This GitHub repository contains the official code for the paper "Evolving Robust Neural Architectures to Defend from Adversarial Attacks… ☆18 · Updated last year
- Code for Stability Training with Noise (STN) ☆22 · Updated 4 years ago
- Datasets for the paper "Adversarial Examples Are Not Bugs, They Are Features" ☆187 · Updated 4 years ago