Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017]
☆18 · Updated Apr 8, 2018
Alternatives and similar repositories for cross-lipschitz
Users interested in cross-lipschitz are comparing it to the libraries listed below.
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] ☆19 · Updated Dec 2, 2018
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] ☆31 · Updated Jul 15, 2020
- A library for symbolic interval analysis ☆22 · Updated Jun 23, 2020
- Interval attacks (adversarial ML) ☆21 · Updated Jun 17, 2019
- ☆13 · Updated Jun 23, 2022
- β-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Verification ☆31 · Updated Nov 9, 2021
- ☆15 · Updated Dec 7, 2021
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… ☆12 · Updated Aug 5, 2020
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated Apr 25, 2020
- ☆22 · Updated Oct 5, 2023
- Implementation of "Piracy Resistant Watermarks for Deep Neural Networks" in TensorFlow. ☆12 · Updated Dec 5, 2020
- ☆12 · Updated Feb 19, 2025
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ☆32 · Updated Aug 22, 2023
- A blanket execution framework based on the Unicorn engine ☆19 · Updated Jan 29, 2017
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models. ☆25 · Updated Jul 25, 2024
- Multiclass classification based on stochastic dual coordinate ascent ☆33 · Updated Nov 30, 2016
- Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTor… ☆97 · Updated Jun 7, 2021
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf ☆28 · Updated Dec 1, 2021
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆39 · Updated Dec 13, 2018
- Code implementing the SIMILE algorithm from the paper "Smooth Imitation Learning for Online Sequence Prediction" (ICML 2016) ☆13 · Updated May 25, 2016
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆44 · Updated Sep 11, 2023
- Adversarial Robustness on In- and Out-Distribution Improves Explainability ☆12 · Updated Feb 10, 2022
- Code for FAB-attack ☆33 · Updated Jul 10, 2020
- Fork of Microsoft/LightGBM to include support for the CEGB (Cost Efficient Gradient Boosting) algorithm. Original repository at https://g… ☆13 · Updated Jun 30, 2017
- SurFree: a fast surrogate-free black-box attack ☆44 · Updated Jun 27, 2024
- Scalable Multitask Representation Learning for Scene Classification ☆12 · Updated Jun 10, 2014
- ☆46 · Updated May 8, 2024
- A fast sparse attack on deep neural networks. ☆51 · Updated Sep 27, 2020
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples ☆69 · Updated Jul 12, 2025
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contrib… ☆27 · Updated Jun 15, 2019
- Cost-Aware Robust Tree Ensembles for Security Applications (Usenix Security'21) https://arxiv.org/pdf/1912.01149.pdf ☆18 · Updated Mar 2, 2021
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] ☆35 · Updated Jul 3, 2021
- Torch implementation for Robust convolutional neural networks under adversarial noise ☆13 · Updated Mar 8, 2016
- [ICML 2022] "Linearity Grafting: Relaxed Neuron Pruning Helps Certifiable Robustness" by Tianlong Chen*, Huan Zhang*, Zhenyu Zhang, Shiyu… ☆17 · Updated Jun 22, 2022
- Reference implementations for RecurJac, CROWN, FastLin and FastLip (Neural Network verification and robustness certification algorithms)… ☆27 · Updated Nov 23, 2019
- Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks ☆46 · Updated Feb 24, 2022
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure. ☆62 · Updated May 30, 2019
- ☆16 · Updated Dec 22, 2017
- ☆16 · Updated Dec 29, 2023
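Several of the repositories above (β-CROWN, the CROWN/IBP certified defense, the symbolic interval library) build on bound propagation. A minimal sketch of the basic idea, plain interval bound propagation (IBP) through a tiny ReLU network, is below; the network weights and the perturbation radius are made-up toy values, not taken from any listed repository:

```python
# Sketch of interval bound propagation (IBP): push an input box [lo, hi]
# through affine and ReLU layers to get guaranteed output bounds.
# Toy weights only; real tools (e.g. CROWN, beta-CROWN) use much tighter bounds.

def affine_bounds(W, b, lo, hi):
    """Bounds on y = W x + b when each x[j] lies in [lo[j], hi[j]]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # Positive weights pull from the lower end, negative from the upper.
        out_lo.append(bias + sum(w * (lo[j] if w >= 0 else hi[j])
                                 for j, w in enumerate(row)))
        out_hi.append(bias + sum(w * (hi[j] if w >= 0 else lo[j])
                                 for j, w in enumerate(row)))
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Toy 2-2-1 ReLU network, input perturbed by eps = 0.1 around (1, 1).
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
W2, b2 = [[1.0, 1.0]], [0.0]
lo, hi = [0.9, 0.9], [1.1, 1.1]
lo, hi = relu_bounds(*affine_bounds(W1, b1, lo, hi))
lo, hi = affine_bounds(W2, b2, lo, hi)
print(lo, hi)  # certified output interval for the whole input box
```

If the certified interval for the true class's logit margin stays positive over the entire box, no perturbation within eps can change the prediction; the verification frameworks listed above refine exactly this kind of bound.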