AkhilanB / CNN-Cert
☆22 · Updated 4 years ago
Alternatives and similar repositories for CNN-Cert
Users interested in CNN-Cert are comparing it to the libraries listed below.
- Certifying Geometric Robustness of Neural Networks ☆16 · Updated 2 years ago
- Efficient Robustness Verification for ReLU networks (this repository is outdated, don't use it; check out our new implementation at https://g… ☆30 · Updated 6 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 5 years ago
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆39 · Updated 6 years ago
- Benchmark for LP-relaxed robustness verification of ReLU networks ☆42 · Updated 6 years ago
- Geometric Certifications of Neural Nets ☆42 · Updated 2 years ago
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ☆67 · Updated 4 months ago
- ☆48 · Updated 5 years ago
- Code for reproducing the robustness evaluation scores in “Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approac… ☆53 · Updated 7 years ago
- Source code for the paper "Attacking Binarized Neural Networks" ☆23 · Updated 7 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated 5 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020) ☆51 · Updated 5 years ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 7 years ago
- All code for the paper "Piecewise Linear Neural Networks verification: A comparative study" ☆35 · Updated 7 years ago
- A certifiable defense against adversarial examples by training neural networks to be provably robust ☆221 · Updated last year
- The official repo for the GCP-CROWN paper ☆13 · Updated 3 years ago
- SyReNN: Symbolic Representations for Neural Networks ☆41 · Updated 2 years ago
- This GitHub repository contains the official code for the paper "Evolving Robust Neural Architectures to Defend from Adversarial Attacks… ☆20 · Updated last year
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… ☆12 · Updated 5 years ago
- Towards Reverse-Engineering Black-Box Neural Networks, ICLR'18 ☆55 · Updated 6 years ago
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated 6 years ago
- Learning perturbation sets for robust machine learning ☆65 · Updated 4 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 7 years ago
- A Closer Look at Accuracy vs. Robustness ☆88 · Updated 4 years ago
- A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆46 · Updated 5 years ago
- A community-run reference for state-of-the-art adversarial example defenses ☆50 · Updated last year
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆49 · Updated 5 years ago
- Investigating the robustness of state-of-the-art CNN architectures to simple spatial transformations ☆49 · Updated 6 years ago
- A fast and efficient way to compute a differentiable bound on the singular values of convolution layers ☆12 · Updated 5 years ago
- Reachability Analysis of Deep Neural Networks with Provable Guarantees ☆36 · Updated 5 years ago