inspire-group / OOD-Attacks
Attacks using out-of-distribution adversarial examples
☆12 · Updated 5 years ago

Alternatives and similar repositories for OOD-Attacks:
Users interested in OOD-Attacks are comparing it to the repositories listed below.
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆38 · Updated 3 years ago
- Repository for "Certified Defenses for Adversarial Patches" (ICLR 2020) ☆32 · Updated 4 years ago
- ReColorAdv and other attacks from the NeurIPS 2019 paper "Functional Adversarial Attacks" ☆37 · Updated 2 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Updated 2 years ago
- ☆35 · Updated 4 years ago
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆27 · Updated 4 years ago
- ☆16 · Updated 5 years ago
- Adversarially Robust Transfer Learning with LwF loss applied to the deep feature representation (penultimate) layer ☆18 · Updated 5 years ago
- Code for our NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆42 · Updated 2 years ago
- PyTorch implementation of NPAttack ☆12 · Updated 4 years ago
- Learnable Boundary Guided Adversarial Training (ICCV 2021) ☆36 · Updated 2 months ago
- Code for the CVPR 2020 paper "Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization" ☆13 · Updated 4 years ago
- ☆19 · Updated 3 years ago
- StrAttack (ICLR 2019) ☆32 · Updated 5 years ago
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan… ☆46 · Updated 3 years ago
- ConvexPolytopePosioning ☆34 · Updated 5 years ago
- Code for the ICLR 2021 oral paper "Geometry-aware Instance-reweighted Adversarial Training" ☆59 · Updated 3 years ago
- ☆13 · Updated 4 years ago
- ☆57 · Updated 2 years ago
- Code for the paper "(De)Randomized Smoothing for Certifiable Defense against Patch Attacks" by Alexander Levine and Soheil Feizi ☆17 · Updated 2 years ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆12 · Updated 2 years ago
- [Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness ☆17 · Updated 7 months ago
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- ☆25 · Updated 6 years ago
- Code for the CVPR 2020 paper "QEBA: Query-Efficient Boundary-Based Blackbox Attack" ☆30 · Updated 4 years ago
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆30 · Updated last year
- ☆22 · Updated last year
- ☆11 · Updated 5 years ago
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses (NeurIPS 2020 Spotlight) ☆26 · Updated 4 years ago
- Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness (MD attacks) ☆11 · Updated 4 years ago