oval-group / GNN_branching
Implementation of GNN ReLU branching strategies
☆10 · Updated 4 years ago
Alternatives and similar repositories for GNN_branching
Users interested in GNN_branching are comparing it to the repositories listed below.
- Benchmark for LP-relaxed robustness verification of ReLU-networks ☆42 · Updated 6 years ago
- This repository contains a simple implementation of Interval Bound Propagation (IBP) using TensorFlow: https://arxiv.org/abs/1810.12715 (a minimal IBP sketch appears after this list) ☆162 · Updated 5 years ago
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆49 · Updated 5 years ago
- Efficient Robustness Verification for ReLU networks (this repository is outdated, don't use; checkout our new implementation at https://g… ☆30 · Updated 5 years ago
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… ☆98 · Updated 4 years ago
- All code for the Piecewise Linear Neural Networks verification: A comparative study paper ☆35 · Updated 6 years ago
- Certifying Geometric Robustness of Neural Networks ☆16 · Updated 2 years ago
- Fastened CROWN: Tightened Neural Network Robustness Certificates ☆10 · Updated 5 years ago
- DL2 is a framework that allows training neural networks with logical constraints over numerical values in the network (e.g. inputs, out… ☆86 · Updated last year
- Official implementation for paper: A New Defense Against Adversarial Images: Turning a Weakness into a Strength ☆38 · Updated 5 years ago
- Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs ☆97 · Updated 4 years ago
- auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs (a usage sketch appears after this list) ☆327 · Updated 6 months ago
- A community-run reference for state-of-the-art adversarial example defenses. ☆50 · Updated 11 months ago
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆227 · Updated 5 years ago
- A certifiable defense against adversarial examples by training neural networks to be provably robust ☆221 · Updated last year
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆38 · Updated 6 years ago
- Reference implementations for RecurJac, CROWN, FastLin and FastLip (Neural Network verification and robustness certification algorithms)… ☆26 · Updated 5 years ago
- CROWN: A Neural Network Robustness Certification Algorithm for General Activation Functions (This repository is outdated; use https://git… ☆17 · Updated 6 years ago
- β-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Verification ☆29 · Updated 3 years ago
- ☆26 · Updated 2 years ago
- ☆88 · Updated last year
- Fourth edition of VNN COMP (2023) ☆16 · Updated 2 years ago
- Code for "Robustness May Be at Odds with Accuracy" ☆91 · Updated 2 years ago
- Targeted black-box adversarial attack using Bayesian Optimization ☆37 · Updated 5 years ago
- MACER: MAximizing CErtified Radius (ICLR 2020) ☆30 · Updated 5 years ago
- Safety Verification of Deep Neural Networks ☆50 · Updated 7 years ago
- ETH Robustness Analyzer for Deep Neural Networks ☆342 · Updated 2 years ago
- Public code for a paper "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks." ☆34 · Updated 6 years ago
- Code for Stability Training with Noise (STN) ☆22 · Updated 4 years ago
- The official repo for GCP-CROWN paper ☆13 · Updated 3 years ago
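Several repositories above certify robustness with Interval Bound Propagation (IBP), the method implemented in the TensorFlow repository linked in the list. As a rough illustration of the idea only (not code from any of the listed repositories; the layer sizes, weights, and epsilon below are made up), the sketch propagates elementwise lower/upper bounds through affine and ReLU layers in NumPy:

```python
import numpy as np

def ibp_affine(lb, ub, W, b):
    """Propagate interval bounds [lb, ub] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lb = W_pos @ lb + W_neg @ ub + b
    new_ub = W_pos @ ub + W_neg @ lb + b
    return new_lb, new_ub

def ibp_relu(lb, ub):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Toy 2-layer network with made-up weights (illustration only).
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)

x = rng.standard_normal(4)
eps = 0.1                      # assumed l_inf perturbation radius
lb, ub = x - eps, x + eps      # input interval

lb, ub = ibp_relu(*ibp_affine(lb, ub, W1, b1))
lb, ub = ibp_affine(lb, ub, W2, b2)
print("output lower bounds:", lb)
print("output upper bounds:", ub)
```

Any output whose lower bound stays above the upper bounds of the other outputs is certifiably the argmax over the whole input interval.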
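The auto_LiRPA entry above is the actively maintained library behind CROWN/β-CROWN-style bound propagation. The snippet below follows its documented BoundedModule / BoundedTensor / compute_bounds pattern as I understand it; the network, input, epsilon, and choice of method are placeholder assumptions, so check the library's own examples before relying on it:

```python
import torch
import torch.nn as nn
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Placeholder network and input; substitute your own model and data.
net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(1, 4)

# Wrap the model so bounds can be propagated through its computational graph.
model = BoundedModule(net, torch.empty_like(x))

# l_inf ball of radius 0.1 around x (assumed perturbation budget).
ptb = PerturbationLpNorm(norm=float("inf"), eps=0.1)
x_bounded = BoundedTensor(x, ptb)

# "IBP" gives cheap interval bounds; "CROWN" gives tighter linear-relaxation bounds.
lb, ub = model.compute_bounds(x=(x_bounded,), method="CROWN")
print(lb, ub)
```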