IntelLabs / MART
Modular Adversarial Robustness Toolkit
☆18 · Updated 7 months ago
Alternatives and similar repositories for MART:
Users interested in MART are comparing it to the libraries listed below.
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, "Robustness Verification of Tree-based Models" (*equal contrib… ☆26 · Updated 5 years ago
- A unified toolbox for running major robustness verification approaches for DNNs. [S&P 2023] ☆88 · Updated last year
- Official implementation for the paper "A New Defense Against Adversarial Images: Turning a Weakness into a Strength" ☆38 · Updated 4 years ago
- Library for training globally-robust neural networks. ☆28 · Updated last year
- Code for reproducing the robustness evaluation scores in "Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approac…" ☆50 · Updated 6 years ago
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints". ☆25 · Updated 2 years ago
- [NeurIPS 2021] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks ☆34 · Updated 6 months ago
- Code for the paper "PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking" ☆64 · Updated 2 years ago
- [NeurIPS 2021] Fast Certified Robust Training with Short Warmup ☆23 · Updated last year
- Code for auditing DP-SGD ☆37 · Updated 2 years ago
- Repository for "Certified Defenses for Adversarial Patches" (ICLR 2020) ☆32 · Updated 4 years ago
- Code for the paper "Adversarial Vulnerability of Randomized Ensembles" (ICML 2022). ☆10 · Updated 2 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆48 · Updated 3 years ago
- Implementation of Wasserstein adversarial attacks. ☆23 · Updated 4 years ago
- Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples ☆18 · Updated 2 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆85 · Updated 3 years ago
- Defending Against Backdoor Attacks Using Robust Covariance Estimation ☆21 · Updated 3 years ago
- Code for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 ☆32 · Updated 4 years ago
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" ☆21 · Updated 5 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models". ☆55 · Updated 3 years ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆12 · Updated 2 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆51 · Updated 4 years ago
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆38 · Updated 3 years ago
- Code for "PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier" ☆37 · Updated last year
- Implementation for the CVPR 2022 oral paper "Better Trigger Inversion Optimization in Backdoor Scanning" ☆24 · Updated 2 years ago
- [NeurIPS 2021] "Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks" by Yon… ☆13 · Updated 2 years ago