PyTorch implementation of NPAttack (☆12, updated Jul 7, 2020)
Alternatives and similar repositories for NPAttack
Users interested in NPAttack are comparing it to the repositories listed below:
- Code for the ICML 2019 paper "On the Convergence and Robustness of Adversarial Training" (☆34, updated Apr 28, 2020)
- Official PyTorch implementation of "Query-Efficient and Scalable Black-Box Adversarial Attacks on Discrete Sequential Data via Bayesian O…" (☆25, updated Sep 26, 2023)
- ☆58, updated Jul 27, 2022
- ☆11, updated Apr 27, 2022
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… (☆27, updated Apr 29, 2020)
- Code for the NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" (☆42, updated Feb 10, 2023)
- Code for the ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" (☆18, updated May 31, 2023)
- Data-Efficient Backdoor Attacks (☆20, updated Jun 15, 2022)
- ☆20, updated May 6, 2022
- Code for the ECCV 2022 paper "Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal" (☆44, updated Oct 1, 2022)
- Implementation of the ICLR 2021 paper "Policy-Driven Attack: Learning to Query for Hard-label Black-box Adversarial Examples" (☆11, updated Mar 9, 2021)
- Official code for the paper "Prompt Injection: Parameterization of Fixed Inputs" (☆32, updated Sep 13, 2024)
- [NeurIPS 2021] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks (☆33, updated Jul 5, 2024)
- ICSE 2021 submission (☆13, updated Aug 28, 2022)
- Official implementation of the ICLR 2023 paper "Towards Robustness Certification Against Universal Perturbations." We calc… (☆12, updated Feb 14, 2023)
- Code for the CVPR 2018 paper "Iterative Learning with Open-set Noisy Labels" (☆12, updated Mar 12, 2021)
- ☆25, updated Mar 24, 2023
- Implementation of the Biased Boundary Attack for ImageNet (☆22, updated Aug 18, 2019)
- ☆48, updated Feb 9, 2021
- [ICLR 2025] REFINE: Inversion-Free Backdoor Defense via Model Reprogramming (☆13, updated Feb 13, 2025)
- Code for the ACM MM paper "Backdoor Attack on Crowd Counting" (☆17, updated Jul 10, 2022)
- [NeurIPS 2022] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong,… (☆15, updated Nov 27, 2023)
- ☆12, updated Mar 15, 2019
- [NeurIPS 2020] Official repository of "AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows" (☆49, updated Oct 3, 2023)
- Adversarial Robustness on In- and Out-Distribution Improves Explainability (☆12, updated Feb 10, 2022)
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients (☆15, updated Jan 18, 2023)
- [ICLR 2022, official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? (☆29, updated Mar 15, 2022)
- Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder (CVPR 2020) (☆12, updated Aug 25, 2020)
- Code for "Hard Label Black-box Adversarial Attacks in Low Query Budget Regimes" (☆15, updated Dec 20, 2020)
- Official PyTorch implementation of the GeoDA algorithm. GeoDA is a black-box attack to generate adversarial exam… (☆36, updated Mar 14, 2021)
- Code for the NeurIPS 2020 paper "Practical No-box Adversarial Attacks against DNNs" (☆34, updated Dec 5, 2020)
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images (☆42, updated Jan 25, 2024)
- Code for "Black-Box Adversarial Attack with Transferable Model-based Embedding" (☆58, updated Jun 3, 2020)
- Towards Defending against Adversarial Examples via Attack-Invariant Features (☆12, updated Oct 12, 2023)
- [ICML 2025] UDora: A Unified Red Teaming Framework against LLM Agents (☆32, updated Jun 24, 2025)
- Official repository for "On the Multi-modal Vulnerability of Diffusion Models" (☆16, updated Jul 15, 2024)
- Source code for "Neural Anisotropy Directions" (☆16, updated Nov 17, 2020)
- Implementation of the paper "MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation" (☆31, updated Dec 12, 2021)
- ☆19, updated Mar 26, 2022