IBM / CNN-Cert
Code for reproducing the experimental results in "CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks", published at AAAI 2019
☆27 · Updated 3 years ago
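CNN-Cert certifies robustness by propagating analytic bounds layer by layer through the network. As a rough illustration of that layerwise bound-propagation idea, here is a minimal sketch using plain interval bounds (a looser relative of the linear bounds CNN-Cert actually derives); the layer shapes and names are made up for the example, not taken from this repo:

```python
import numpy as np

# Illustrative interval bound propagation (IBP) through a tiny ReLU network.
# Not CNN-Cert's algorithm: CNN-Cert derives tighter linear upper/lower
# bounds per layer; intervals are the simplest instance of the same idea.

def affine_bounds(l, u, W, b):
    """Propagate elementwise bounds [l, u] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # hypothetical weights
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)

x0, eps = rng.normal(size=4), 0.1              # input point and L_inf radius
l, u = x0 - eps, x0 + eps                      # input interval

l, u = affine_bounds(l, u, W1, b1)             # first affine layer
l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)  # ReLU is monotone, so bounds pass through
l, u = affine_bounds(l, u, W2, b2)             # output layer

print("certified output lower bounds:", l)
print("certified output upper bounds:", u)
```

If the lower bound of the true class's logit exceeds the upper bounds of all other logits, the prediction is certified for every perturbation inside the L_inf ball.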
Related projects
Alternatives and complementary repositories for CNN-Cert
- Certified defense to adversarial examples using CROWN and IBP. Also includes a GPU implementation of the CROWN verification algorithm (in PyTorch) ☆93 · Updated 3 years ago
- The official repo for the GCP-CROWN paper ☆12 · Updated 2 years ago
- A unified toolbox for running major robustness verification approaches for DNNs [S&P 2023] ☆88 · Updated last year
- Benchmark for LP-relaxed robustness verification of ReLU networks ☆40 · Updated 5 years ago
- Convex Layerwise Adversarial Training (COLT) ☆29 · Updated 3 years ago
- Fastened CROWN: Tightened Neural Network Robustness Certificates ☆10 · Updated 4 years ago
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆46 · Updated 4 years ago
- AAAI 2019 oral presentation ☆50 · Updated 3 months ago
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆99 · Updated 2 years ago
- β-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Verification ☆30 · Updated 3 years ago
- CROWN: A Neural Network Robustness Certification Algorithm for General Activation Functions (this repository is outdated; use https://git…) ☆16 · Updated 5 years ago
- Code for Stability Training with Noise (STN) ☆21 · Updated 3 years ago
- A method for training neural networks that are provably robust to adversarial attacks [IJCAI 2019] ☆10 · Updated 5 years ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 6 years ago
- Efficient Robustness Verification for ReLU networks (this repository is outdated, don't use; check out our new implementation at https://g…) ☆30 · Updated 5 years ago
- Fourth edition of VNN COMP (2023) ☆16 · Updated last year
- Interval attacks (adversarial ML) ☆21 · Updated 5 years ago
- Source code for the paper "Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness" ☆26 · Updated 4 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 4 years ago
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆38 · Updated 5 years ago
- Official TensorFlow implementation of "Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization" (ICML 2019) ☆37 · Updated 3 years ago
- Certifying Geometric Robustness of Neural Networks ☆15 · Updated last year
- ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks ☆166 · Updated 3 years ago
- Code used in "Exploring the Space of Black-box Attacks on Deep Neural Networks" (https://arxiv.org/abs/1712.09491) ☆61 · Updated 6 years ago
- Library for training globally-robust neural networks ☆28 · Updated last year
- Code for the paper "Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers" ☆17 · Updated last year
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 6 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆93 · Updated last year
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models ☆25 · Updated 3 months ago