sokcertifiedrobustness / VeriGauge-deprecated
☆11 · Updated 2 years ago
Alternatives and similar repositories for VeriGauge-deprecated
Users interested in VeriGauge-deprecated are comparing it to the libraries listed below.
- A united toolbox for running major robustness verification approaches for DNNs. [S&P 2023] ☆90 · Updated 2 years ago
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆98 · Updated 2 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models". ☆55 · Updated 3 years ago
- Library for training globally-robust neural networks. ☆28 · Updated last year
- Repository for Knowledge Enhanced Machine Learning Pipeline (KEMLP) ☆10 · Updated 4 years ago
- ☆11 · Updated 5 years ago
- Implementation of Confidence-Calibrated Adversarial Training (CCAT). ☆45 · Updated 4 years ago
- MACER: MAximizing CErtified Radius (ICLR 2020) ☆30 · Updated 5 years ago
- [CCS 2021] TSS: Transformation-specific smoothing for robustness certification ☆26 · Updated last year
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… ☆97 · Updated 4 years ago
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations", NeurIPS 2019 ☆47 · Updated 2 years ago
- Implementation for "Understanding Robust Overfitting of Adversarial Training and Beyond" in ICML'22. ☆12 · Updated 3 years ago
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆39 · Updated 4 years ago
- [NeurIPS 2021] Fast Certified Robust Training with Short Warmup ☆24 · Updated last month
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆87 · Updated 4 years ago
- Official implementation for Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds (NeurIPS, 2021). ☆24 · Updated 2 years ago
- The official repo for the GCP-CROWN paper ☆13 · Updated 2 years ago
- Code relative to "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆19 · Updated 2 years ago
- ☆18 · Updated 3 years ago
- Official TensorFlow implementation of "Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization" (ICML 2019) ☆40 · Updated 4 years ago
- [ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chan… ☆47 · Updated 3 years ago
- Foolbox implementation for NeurIPS 2021 Paper: "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints". ☆25 · Updated 3 years ago
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆38 · Updated 6 years ago
- Code for Stability Training with Noise (STN) ☆22 · Updated 4 years ago
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated 3 years ago
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆31 · Updated last year
- Feature Scattering Adversarial Training (NeurIPS 2019) ☆73 · Updated last year
- [NeurIPS 2021] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks ☆34 · Updated last year
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models. ☆26 · Updated 11 months ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Updated last year