AkhilanB / CNN-Cert
☆22 · Updated 3 years ago
Alternatives and similar repositories for CNN-Cert:
Users interested in CNN-Cert are comparing it to the repositories listed below.
- Efficient Robustness Verification for ReLU networks (this repository is outdated, don't use; check out our new implementation at https://g… ☆30 · Updated 5 years ago
- Benchmark for LP-relaxed robustness verification of ReLU networks ☆41 · Updated 5 years ago
- Certifying Geometric Robustness of Neural Networks ☆16 · Updated last year
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆51 · Updated 4 years ago
- ☆16 · Updated 3 years ago
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆38 · Updated 6 years ago
- CROWN: A Neural Network Robustness Certification Algorithm for General Activation Functions (this repository is outdated; use https://git… ☆16 · Updated 6 years ago
- Geometric Certifications of Neural Nets ☆41 · Updated 2 years ago
- Code for the paper "Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers" ☆17 · Updated 2 years ago
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆23 · Updated 5 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆32 · Updated 4 years ago
- A community-run reference for state-of-the-art adversarial example defenses ☆49 · Updated 4 months ago
- Code for reproducing the robustness evaluation scores in "Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approac… ☆51 · Updated 6 years ago
- On Intrinsic Dataset Properties for Adversarial Machine Learning ☆19 · Updated 4 years ago
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated 6 years ago
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… ☆11 · Updated 4 years ago
- Official implementation of "Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds" (NeurIPS 2021) ☆23 · Updated 2 years ago
- Official code for the paper "Evolving Robust Neural Architectures to Defend from Adversarial Attacks… ☆18 · Updated last year
- SyReNN: Symbolic Representations for Neural Networks ☆40 · Updated last year
- [JMLR] TRADES + random smoothing for certifiable robustness ☆14 · Updated 4 years ago
- ☆11 · Updated 5 years ago
- Interval attacks (adversarial ML) ☆21 · Updated 5 years ago
- Analysis of Adversarial Logit Pairing ☆60 · Updated 6 years ago
- Code for the paper "Piecewise Linear Neural Networks Verification: A Comparative Study" ☆35 · Updated 6 years ago
- Athena: A Framework for Defending Machine Learning Systems Against Adversarial Attacks ☆42 · Updated 3 years ago
- Source code for the paper "Attacking Binarized Neural Networks" ☆23 · Updated 6 years ago
- Code to reproduce experiments from "A Statistical Approach to Assessing Neural Network Robustness" ☆12 · Updated 6 years ago
- A PyTorch implementation of the LSTM experiments in the paper "Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆44 · Updated 5 years ago
- Official repository for the GCP-CROWN paper ☆13 · Updated 2 years ago
- Randomized Smoothing of All Shapes and Sizes (ICML 2020) ☆52 · Updated 4 years ago