Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" (ICLR 2020) ☆27 · Updated Apr 29, 2020
Alternatives and similar repositories for model-sanitization
Users interested in model-sanitization are comparing it to the repositories listed below.
- PyTorch implementation of NPAttack ☆12 · Updated Jul 7, 2020
- Implementation demo of the ICLR 2021 paper [Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks… ☆128 · Updated Jan 18, 2022
- competition ☆17 · Updated Aug 1, 2020
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆38 · Updated May 31, 2022
- Implementation of "Piracy Resistant Watermarks for Deep Neural Networks" in TensorFlow ☆12 · Updated Dec 5, 2020
- ☆12 · Updated May 27, 2022
- Official PyTorch implementation of the ACM MM 19 paper "MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks" ☆11 · Updated Jun 7, 2021
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong,… ☆15 · Updated Nov 27, 2023
- [CVPR 2022] "Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free" by Tianlong Chen*, Zhenyu Zhang*, Yihua Zhang*, Shiyu C… ☆27 · Updated Oct 5, 2022
- ☆26 · Updated Dec 1, 2022
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆36 · Updated Jul 3, 2021
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated Feb 27, 2020
- Official implementation of the paper "Black-box Dataset Ownership Verification via Backdoor Watermarking" ☆26 · Updated Jul 22, 2023
- Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder (CVPR 2020) ☆12 · Updated Aug 25, 2020
- Code for "Neuron Shapley: Discovering the Responsible Neurons" ☆27 · Updated May 1, 2024
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆38 · Updated Jul 22, 2024
- ☆25 · Updated Mar 24, 2023
- Attacking a dog-vs-fish classifier that uses transfer learning (InceptionV3) ☆74 · Updated Apr 12, 2018
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆63 · Updated May 8, 2023
- Official repository for the CVPR 2020 paper "Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs" ☆44 · Updated Oct 24, 2023
- ☆16 · Updated Dec 3, 2021
- Implementation of the Biased Boundary Attack for ImageNet ☆22 · Updated Aug 18, 2019
- ☆18 · Updated Jun 15, 2021
- ☆14 · Updated Jun 5, 2020
- Official code for "ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users" (NeurIPS 2024) ☆23 · Updated Oct 23, 2024
- ☆19 · Updated Mar 26, 2022
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" ☆35 · Updated Jan 9, 2023
- Code for the ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples" ☆18 · Updated May 31, 2023
- Implementation of "TABOR: A Highly Accurate Approach to Inspecting and Restoring Trojan Backdoors in AI Systems" (https://arxiv.org/pdf/190… ☆19 · Updated Apr 13, 2023
- Implementation of the ICLR 2021 work "Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits" ☆18 · Updated Jul 20, 2021
- ☆50 · Updated Aug 30, 2024
- Data-Efficient Backdoor Attacks ☆20 · Updated Jun 15, 2022
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated Jan 9, 2022
- ☆20 · Updated May 6, 2022
- Code for identifying natural backdoors in existing image datasets ☆15 · Updated Aug 24, 2022
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆21 · Updated Oct 1, 2022
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆20 · Updated Aug 9, 2023
- Defending against Model Stealing via Verifying Embedded External Features ☆38 · Updated Feb 19, 2022
- Official implementation of the CVPR 2023 paper "Backdoor Defense via Deconfounded Representation Learning" ☆25 · Updated Mar 13, 2023