[ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples
☆69 · Updated Jul 12, 2025
Alternatives and similar repositories for RobustTrees
Users interested in RobustTrees are comparing it to the libraries listed below.
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contrib… ☆27 · Updated Jun 15, 2019
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated Apr 25, 2020
- Adversarial learning by utilizing model interpretation ☆10 · Updated Oct 19, 2018
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated Apr 8, 2018
- ☆15 · Updated Dec 7, 2021
- OVAL framework for BaB-based Neural Network Verification ☆17 · Updated Dec 18, 2025
- Cost-Aware Robust Tree Ensembles for Security Applications (USENIX Security '21) https://arxiv.org/pdf/1912.01149.pdf ☆18 · Updated Mar 2, 2021
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… ☆97 · Updated Jun 7, 2021
- Benchmark for LP-relaxed robustness verification of ReLU networks ☆42 · Updated Apr 24, 2019
- Efficient Robustness Verification for ReLU networks (this repository is outdated, don't use; check out our new implementation at https://g… ☆30 · Updated Nov 1, 2019
- The library for symbolic interval ☆22 · Updated Jun 23, 2020
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated Jul 15, 2020
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆19 · Updated Nov 30, 2022
- The released code of ReluVal from USENIX Security 2018 ☆60 · Updated Mar 4, 2020
- [NeurIPS 2020] Code for "An Efficient Adversarial Attack for Tree Ensembles" ☆23 · Updated Jun 6, 2021
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated Dec 2, 2018
- Code release for the ICML 2019 paper "Are generative classifiers more robust to adversarial attacks?" ☆24 · Updated May 10, 2019
- "Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers" (NeurIPS 2019, previously called "A Stratified Approach … ☆17 · Updated Nov 16, 2019
- A certifiable defense against adversarial examples that trains neural networks to be provably robust ☆221 · Updated Jul 25, 2024
- Implementation of "Piracy Resistant Watermarks for Deep Neural Networks" in TensorFlow ☆12 · Updated Dec 5, 2020
- Deep Learning Library for R ☆12 · Updated May 6, 2018
- Run-time Trojan detection method exploiting STRong Intentional Perturbation of inputs; a multi-domain Trojan … ☆10 · Updated Mar 7, 2021
- SVM Abstract Verifier tool ☆12 · Updated Oct 13, 2022
- CROWN: A Neural Network Robustness Certification Algorithm for General Activation Functions (This repository is outdated; use https://git… ☆17 · Updated Nov 29, 2018
- This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on … ☆98 · Updated Oct 18, 2022
- ☆13 · Updated Aug 31, 2024
- Code for Stability Training with Noise (STN) ☆22 · Updated Dec 27, 2020
- An Algorithm to Quantify Robustness of Recurrent Neural Networks ☆49 · Updated Apr 24, 2020
- β-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Verification ☆31 · Updated Nov 9, 2021
- Code for the paper "Adversarial Training and Robustness for Multiple Perturbations" [NeurIPS 2019] ☆47 · Updated Dec 8, 2022
- ☆12 · Updated Dec 9, 2020
- Rethinking Bias-Variance Trade-off for Generalization of Neural Networks ☆50 · Updated Mar 12, 2021
- ☆11 · Updated Jan 21, 2021
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… ☆12 · Updated Aug 5, 2020
- Code repo for the NeurIPS 2021 paper "Online Adaptation to Label Distribution Shift" ☆15 · Updated Feb 15, 2023
- ☆15 · Updated Jul 25, 2023
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ☆29 · Updated Mar 15, 2022
- Code for our NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆228 · Updated Nov 9, 2019
- The official repo for the GCP-CROWN paper ☆13 · Updated Sep 26, 2022