chenhongge / treeVerification
[NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contribution)
☆27 · Updated 6 years ago
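The repository implements the verifier from the NeurIPS 2019 paper above, which checks whether any perturbation inside an ℓ∞ ball can change a tree ensemble's prediction. As a loose illustration of that decision problem only (hypothetical helper names, not the repository's graph-based algorithm, which handles whole ensembles), the sketch below verifies a single decision tree exactly by enumerating every leaf reachable within the ε-ball:

```python
# Loose sketch (hypothetical helpers, NOT the treeVerification algorithm):
# exact L-infinity robustness check for a SINGLE decision tree, assuming the
# usual "go left if x[feature] <= threshold" split convention. The paper and
# repository target ensembles, where per-tree enumeration alone is not enough.

class Node:
    """Internal node splits on `feature` at `threshold`; a leaf has `value` set."""
    def __init__(self, feature=None, threshold=None, left=None, right=None, value=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.value = left, right, value

def reachable_leaf_values(node, x, eps):
    """Leaf values reachable from x under some perturbation with ||delta||_inf <= eps."""
    if node.value is not None:                      # leaf
        return [node.value]
    vals = []
    if x[node.feature] - eps <= node.threshold:     # some perturbed input can go left
        vals += reachable_leaf_values(node.left, x, eps)
    if x[node.feature] + eps > node.threshold:      # some perturbed input can go right
        vals += reachable_leaf_values(node.right, x, eps)
    return vals

def is_robust_single_tree(root, x, eps):
    """Robust iff every reachable leaf keeps the sign of the unperturbed prediction."""
    clean = reachable_leaf_values(root, x, 0.0)[0]
    return all((v > 0) == (clean > 0) for v in reachable_leaf_values(root, x, eps))
```

For an ensemble, the reachable leaves of all trees interact through their summed scores, which is what makes exact verification hard and what the paper's graph-based formulation addresses.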
Alternatives and similar repositories for treeVerification
Users interested in treeVerification are comparing it to the repositories listed below.
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples · ☆67 · Updated 2 months ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] · ☆50 · Updated 5 years ago
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… · ☆98 · Updated 4 years ago
- Code of On L-p Robustness of Decision Stumps and Trees, ICML 2020 · ☆10 · Updated 5 years ago
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … · ☆98 · Updated 2 years ago
- Benchmark for LP-relaxed robustness verification of ReLU-networks · ☆42 · Updated 6 years ago
- Codes for reproducing the robustness evaluation scores in “Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approac… · ☆53 · Updated 7 years ago
- ☆26 · Updated 2 years ago
- A united toolbox for running major robustness verification approaches for DNNs. [S&P 2023] · ☆90 · Updated 2 years ago
- This repository contains a simple implementation of Interval Bound Propagation (IBP) using TensorFlow: https://arxiv.org/abs/1810.12715 (interval bound propagation is sketched after this list) · ☆161 · Updated 5 years ago
- Reference implementations for RecurJac, CROWN, FastLin and FastLip (Neural Network verification and robustness certification algorithms)… · ☆26 · Updated 5 years ago
- Code for paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality". · ☆125 · Updated 4 years ago
- Official implementation for Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds (NeurIPS, 2021). · ☆24 · Updated 3 years ago
- CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks · ☆63 · Updated 4 years ago
- Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017) · ☆111 · Updated 7 years ago
- Library for training globally-robust neural networks. · ☆29 · Updated 2 months ago
- Efficient Robustness Verification for ReLU networks (this repository is outdated, don't use; checkout our new implementation at https://g… · ☆30 · Updated 5 years ago
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" · ☆227 · Updated 5 years ago
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] · ☆31 · Updated 5 years ago
- Provable adversarial robustness at ImageNet scale · ☆400 · Updated 6 years ago
- ☆11 · Updated 2 years ago
- Fair Empirical Risk Minimization (FERM) · ☆37 · Updated 5 years ago
- The official repo for GCP-CROWN paper · ☆13 · Updated 3 years ago
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure. · ☆62 · Updated 6 years ago
- Official implementation for paper: A New Defense Against Adversarial Images: Turning a Weakness into a Strength · ☆38 · Updated 5 years ago
- ☆23 · Updated 3 years ago
- ☆157 · Updated 4 years ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" · ☆255 · Updated 4 years ago
- Interval attacks (adversarial ML) · ☆21 · Updated 6 years ago
- Code and data for the ICLR 2021 paper "Perceptual Adversarial Robustness: Defense Against Unseen Threat Models". · ☆56 · Updated 3 years ago
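Several of the listed repositories (e.g., the CROWN/IBP certified-defense code and the TensorFlow IBP implementation) certify neural networks rather than trees by propagating bounds layer by layer. As a rough NumPy-only illustration of interval bound propagation (assumed shapes and function names, not either repository's API):

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds l <= x <= u through x -> W @ x + b."""
    mu, r = (l + u) / 2.0, (u - l) / 2.0   # interval midpoint and radius
    center = W @ mu + b
    radius = np.abs(W) @ r                 # worst-case deviation from the midpoint
    return center - radius, center + radius

def ibp_relu(l, u):
    """ReLU is monotone, so it maps bounds elementwise."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Example: bound the logits of a tiny 2-layer net over an L-inf ball of radius
# eps around x. The prediction is certifiably robust if the true logit's lower
# bound exceeds every other logit's upper bound.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
x, eps = rng.normal(size=4), 0.1
l, u = ibp_affine(x - eps, x + eps, W1, b1)
l, u = ibp_relu(l, u)
l, u = ibp_affine(l, u, W2, b2)
print("logit bounds:", list(zip(l.round(3), u.round(3))))
```

Tighter verifiers such as CROWN replace the plain intervals with linear lower and upper bounds on each activation, which is why CROWN and IBP appear together in several entries above.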