CertifiedReLURobustness: Efficient Robustness Verification for ReLU Networks (this repository is outdated; do not use it — check out our new implementation at https://github.com/Verified-Intelligence/auto_LiRPA instead)
☆30 · Last updated Nov 1, 2019
Alternatives and similar repositories for CertifiedReLURobustness
Users interested in CertifiedReLURobustness are comparing it to the libraries listed below.
- Reference implementations for RecurJac, CROWN, FastLin and FastLip (neural network verification and robustness certification algorithms)… (☆27, updated Nov 23, 2019)
- CROWN: A Neural Network Robustness Certification Algorithm for General Activation Functions (This repository is outdated; use https://git… (☆17, updated Nov 29, 2018)
- Fastened CROWN: Tightened Neural Network Robustness Certificates (☆10, updated Feb 10, 2020)
- Certified defense to adversarial examples using CROWN and IBP. Also includes a GPU implementation of the CROWN verification algorithm (in PyTor… (☆98, updated Jun 7, 2021)
- (no description; ☆26, updated Feb 15, 2023)
- PLANET: a Piece-wise LineAr feed-forward NEural network verification Tool (☆43, updated Feb 5, 2019)
- An Algorithm to Quantify Robustness of Recurrent Neural Networks (☆49, updated Apr 24, 2020)
- Reachability Analysis of Deep Neural Networks with Provable Guarantees (☆36, updated Feb 25, 2020)
- Documentation and scripts related to the .nnet file format. This file format specifies a simple text file to define feed-forward, fully-c… (☆40, updated Mar 24, 2025)
- A simple implementation of Interval Bound Propagation (IBP) using TensorFlow: https://arxiv.org/abs/1810.12715 (☆161, updated Dec 20, 2019)
- A method for training neural networks that are provably robust to adversarial attacks [IJCAI 2019] (☆10, updated Sep 3, 2019)
- (no description; ☆22, updated Jun 23, 2021)
- (no description; ☆27, updated Sep 27, 2024)
- Code for reproducing the experimental results in "CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Net… (☆27, updated Jun 23, 2021)
- The latest source code of the tool Flow* (☆28, updated Jan 15, 2023)
- Benchmarks for VNN-COMP 2023 (☆16, updated Jun 7, 2024)
- Safety Verification of Deep Neural Networks (☆50, updated Feb 5, 2018)
- The released code of ReluVal (USENIX Security 2018) (☆60, updated Mar 4, 2020)
- All code for the paper "Piecewise Linear Neural Networks Verification: A Comparative Study" (☆35, updated Nov 7, 2018)
- First-Order Adversarial Vulnerability of Neural Networks and Input Dimension (☆15, updated Sep 4, 2019)
- (no description; ☆19, updated Nov 11, 2019)
- Certifying Geometric Robustness of Neural Networks (☆16, updated Mar 24, 2023)
- Automated Controller Synthesis (☆15, updated Jun 27, 2018)
- Boundary analysis based Reachability analysis Toolbox for dynamic systems in Python (☆19, updated Jan 16, 2026)
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions (☆39, updated Dec 13, 2018)
- [ICML 2019, 20-min long talk] Robust Decision Trees Against Adversarial Examples (☆69, updated Jul 12, 2025)
- A method for training neural networks that are provably robust to adversarial attacks (☆391, updated Feb 16, 2022)
- ETH Robustness Analyzer for Deep Neural Networks (☆343, updated Jan 27, 2023)
- Now available! Cloud-based open source software (OSS) that enables infrastructure cooperation with automated driving technology through T… (☆25, updated Nov 18, 2025)
- Certifying Some Distributional Robustness with Principled Adversarial Training (https://arxiv.org/abs/1710.10571) (☆45, updated May 1, 2018)
- DL2 is a framework that allows training neural networks with logical constraints over numerical values in the network (e.g. inputs, out… (☆87, updated Jul 25, 2024)
- "Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers" (NeurIPS 2019; previously called "A Stratified Approach … (☆17, updated Nov 16, 2019)
- (no description; ☆48, updated Feb 9, 2021)
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] (☆50, updated Apr 25, 2020)
- A certifiable defense against adversarial examples by training neural networks to be provably robust (☆221, updated Jul 25, 2024)
- Learning Certified Individually Fair Representations (☆24, updated Nov 7, 2020)
- Code for reproducing the robustness evaluation scores in "Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approac… (☆52, updated Sep 18, 2018)
- Reachability and Safety of Nondeterministic Dynamical Systems (☆50, updated May 22, 2021)
- auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks and General Computational Graphs (☆338, updated Feb 3, 2026)
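Several of the repositories above (the IBP TensorFlow implementation, the CROWN/IBP certified-defense code) are built on interval bound propagation. As a minimal illustrative sketch — pure Python, not taken from any listed repository, with a made-up toy network — IBP pushes an input box through each layer: an affine layer splits each weight by sign, and ReLU, being monotone, maps interval endpoints directly:

```python
# Minimal IBP sketch (illustrative only): propagate an elementwise
# input box [lo, hi] through one affine layer y = Wx + b, then ReLU.

def affine_bounds(W, b, lo, hi):
    """Sound output bounds for y = Wx + b given x in [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # Lower bound: positive weights take lo, negative take hi.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        # Upper bound: the opposite choice.
        u = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, l) for l in lo], [max(0.0, u) for u in hi]

# Toy 2-input, 2-unit layer; input box is an L-infinity ball of radius eps.
W = [[1.0, -1.0], [0.5, 0.5]]
b = [0.0, -0.25]
x, eps = [1.0, 0.5], 0.1
lo = [v - eps for v in x]
hi = [v + eps for v in x]

lo, hi = affine_bounds(W, b, lo, hi)
lo, hi = relu_bounds(lo, hi)
print(lo, hi)
```

IBP is the loosest of the methods listed here (CROWN, Fast-Lin, etc. use linear relaxations that track correlations between neurons), but it is cheap and composes layer by layer, which is why it underpins certified training.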