google-research / preprocessor-aware-black-box-attack
☆20 · Updated last year
Alternatives and similar repositories for preprocessor-aware-black-box-attack:
Users interested in preprocessor-aware-black-box-attack are comparing it to the repositories listed below.
- Official code for "Efficient and Effective Augmentation Strategy for Adversarial Training" (NeurIPS 2022) ☆16 · Updated last year
- Certified Patch Robustness via Smoothed Vision Transformers ☆42 · Updated 3 years ago
- Code for the paper "Part-Based Models Improve Adversarial Robustness" (ICLR 2023) ☆22 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆25 · Updated 11 months ago
- ☆13 · Updated 2 years ago
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness", published at IC… ☆26 · Updated 4 years ago
- SEAT ☆20 · Updated last year
- Code for the paper "Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation" (CVPR 2023) ☆33 · Updated last year
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses (NeurIPS 2020 Spotlight) ☆26 · Updated 4 years ago
- Official repository for "On Improving Adversarial Transferability of Vision Transformers" (ICLR 2022 Spotlight) ☆70 · Updated 2 years ago
- Helper-based Adversarial Training: Reducing Excessive Margin to Achieve a Better Accuracy vs. Robustness Trade-off ☆29 · Updated 2 years ago
- ☆13 · Updated 6 months ago
- ☆29 · Updated 2 years ago
- ☆39 · Updated 11 months ago
- ☆29 · Updated 2 years ago
- ☆12 · Updated 2 years ago
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆14 · Updated last year
- ☆22 · Updated 2 years ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆42 · Updated 3 weeks ago
- Official implementation for PlugIn Inversion ☆15 · Updated 3 years ago
- [NeurIPS 2021] "When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning?" ☆48 · Updated 3 years ago
- Data-free knowledge distillation using Gaussian noise (NeurIPS paper) ☆15 · Updated last year
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆22 · Updated 6 months ago
- ☆14 · Updated last year
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated 2 years ago
- Implementation of the paper "Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing" ☆11 · Updated 11 months ago
- Robust Principles: Architectural Design Principles for Adversarially Robust CNNs ☆21 · Updated last year
- ☆53 · Updated last year
- PyTorch implementation of NPAttack ☆12 · Updated 4 years ago
- Code corresponding to the paper "On the Robustness of Vision Transformers" (https://arxiv.org/abs/2104.02610) ☆23 · Updated 9 months ago