KaidiXu / LiRPA_Verify
Code for paper "Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers"
☆17, updated last year
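The paper pairs a fast, GPU-parallel incomplete verifier from the LiRPA (linear relaxation based perturbation analysis) family with branch and bound to obtain complete verification. As a rough illustration of what the incomplete part computes, the sketch below implements plain interval bound propagation, the simplest LiRPA-style bound, for a small ReLU network. It is a generic example written for this note, not code from the LiRPA_Verify repository, and the function and variable names are made up for illustration.

```python
# Minimal sketch of interval bound propagation (IBP), the simplest of the
# LiRPA-style incomplete verifiers. Generic illustration only; not taken
# from LiRPA_Verify.
import numpy as np

def ibp_bounds(weights, biases, x_lower, x_upper):
    """Propagate element-wise input bounds through a feed-forward ReLU network."""
    lb, ub = x_lower, x_upper
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Affine layer: the lower bound takes positive weights on lb and
        # negative weights on ub; the upper bound does the reverse.
        new_lb = W_pos @ lb + W_neg @ ub + b
        new_ub = W_pos @ ub + W_neg @ lb + b
        lb, ub = new_lb, new_ub
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lb, ub = np.maximum(lb, 0.0), np.maximum(ub, 0.0)
    return lb, ub

# Toy example: a 2-16-2 network and an L_inf ball of radius 0.1 around x0.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(16, 2)), rng.normal(size=(2, 16))]
biases = [rng.normal(size=16), rng.normal(size=2)]
x0, eps = np.array([0.5, -0.3]), 0.1
lb, ub = ibp_bounds(weights, biases, x0 - eps, x0 + eps)
print("output lower bounds:", lb)
print("output upper bounds:", ub)
```

Tighter relaxations (e.g. CROWN-style linear bounds) and per-neuron splits shrink these intervals further, which is what the branch-and-bound loop in complete verifiers exploits.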
Related projects
Alternatives and complementary repositories for LiRPA_Verify
- The official repo for the GCP-CROWN paper (☆12, updated 2 years ago)
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions (☆38, updated 5 years ago)
- β-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Verification (☆30, updated 3 years ago)
- Fourth edition of VNN-COMP (2023) (☆16, updated last year)
- Certifying Geometric Robustness of Neural Networks (☆15, updated last year)
- kyleliang919 / Uncovering-the-Connections-Between-Adversarial-Transferability-and-Knowledge-Transferability: code for the ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability (☆17, updated last year)
- [NeurIPS 2021] Fast Certified Robust Training with Short Warmup (☆23, updated last year)
- Official implementation of Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds (NeurIPS 2021) (☆22, updated 2 years ago)
- A unified toolbox for running major robustness verification approaches for DNNs [S&P 2023] (☆87, updated last year)
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) (☆25, updated 2 years ago)
- VNN Neural Network Verification Competition 2021 (☆37, updated 3 years ago)
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models (☆25, updated 3 months ago)
- Code for the paper "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" (☆15, updated last year)
- [NeurIPS 2022] Code for the paper "Efficiently Computing Local Lipschitz Constants of Neural Networks via Bound Propagation" (☆22, updated 11 months ago)
- Tensorflow implementation of Meta Adversarial Training for Adversarial Patch Attacks on Tiny ImageNet (☆25, updated 3 years ago)
- Benchmark for LP-relaxed robustness verification of ReLU networks (☆40, updated 5 years ago)
- Code for the paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" (☆24, updated 2 years ago)
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models" (☆54, updated 2 years ago)
- Repo for the paper "Bounding Training Data Reconstruction in Private (Deep) Learning" (☆10, updated last year)
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… (☆93, updated 3 years ago)
- Implementation of Confidence-Calibrated Adversarial Training (CCAT) (☆45, updated 4 years ago)
- Certified Patch Robustness via Smoothed Vision Transformers (☆41, updated 2 years ago)
- CROWN: A Neural Network Robustness Certification Algorithm for General Activation Functions (This repository is outdated; use https://git… (☆16, updated 5 years ago)
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] (☆19, updated 6 months ago)
- Code for Stability Training with Noise (STN) (☆21, updated 3 years ago)
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] (☆35, updated 3 years ago)
- Fastened CROWN: Tightened Neural Network Robustness Certificates (☆10, updated 4 years ago)
- Code for the paper "(De)Randomized Smoothing for Certifiable Defense against Patch Attacks" by Alexander Levine and Soheil Feizi (☆16, updated 2 years ago)