Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017]
☆18 · Updated Apr 8, 2018
Alternatives and similar repositories for cross-lipschitz
Users interested in cross-lipschitz are comparing it to the repositories listed below.
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] (☆19, updated Dec 2, 2018)
- Provable Robustness of ReLU networks via Maximization of Linear Regions [AISTATS 2019] (☆31, updated Jul 15, 2020)
- A library for symbolic interval analysis (☆22, updated Jun 23, 2020)
- Interval attacks (adversarial ML) (☆21, updated Jun 17, 2019)
- ☆13, updated Jun 23, 2022
- ☆15, updated Dec 7, 2021
- A powerful white-box adversarial attack that exploits knowledge about the geometry of neural networks to find minimal adversarial perturb… (☆12, updated Aug 5, 2020)
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] (☆50, updated Apr 25, 2020)
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" (☆19, updated Nov 30, 2022)
- Code for reproducing the experimental results in "CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Net… (☆28, updated Jun 23, 2021)
- Calculates the Hessian matrix and/or its spectrum for simple neural nets (☆11, updated May 7, 2018)
- Implementation of "Piracy Resistant Watermarks for Deep Neural Networks" in TensorFlow (☆12, updated Dec 5, 2020)
- ☆12, updated Feb 19, 2025
- Spurious Features Everywhere: Large-Scale Detection of Harmful Spurious Features in ImageNet (☆32, updated Aug 22, 2023)
- A blanket execution framework based on the Unicorn engine (☆19, updated Jan 29, 2017)
- Official implementation of the paper "ClusTR: Clustering Training for Robustness" (☆20, updated Oct 20, 2021)
- [ICML'20] Multi Steepest Descent (MSD) for robustness against the union of multiple perturbation models (☆25, updated Jul 25, 2024)
- Multiclass classification based on stochastic dual coordinate ascent (☆33, updated Nov 30, 2016)
- Certified defense against adversarial examples using CROWN and IBP. Also includes a GPU implementation of the CROWN verification algorithm (in PyTor… (☆97, updated Jun 7, 2021)
- Learning Security Classifiers with Verified Global Robustness Properties (CCS'21) https://arxiv.org/pdf/2105.11363.pdf (☆28, updated Dec 1, 2021)
- A modern look at the relationship between sharpness and generalization [ICML 2023] (☆44, updated Sep 11, 2023)
- Adversarial Robustness on In- and Out-Distribution Improves Explainability (☆12, updated Feb 10, 2022)
- Code for the FAB attack (☆33, updated Jul 10, 2020)
- Fork of Microsoft/LightGBM adding support for the CEGB (Cost Efficient Gradient Boosting) algorithm. Original repository at https://g… (☆13, updated Jun 30, 2017)
- A way to achieve uniform confidence far away from the training data (☆38, updated Apr 16, 2021)
- SurFree: a fast surrogate-free black-box attack (☆44, updated Jun 27, 2024)
- Scalable Multitask Representation Learning for Scene Classification (☆12, updated Jun 10, 2014)
- ☆46, updated May 8, 2024
- Code for reproducing the robustness evaluation scores in "Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approac… (☆51, updated Sep 18, 2018)
- ☆60, updated Dec 5, 2024
- [ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples (☆69, updated Jul 12, 2025)
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contrib… (☆27, updated Jun 15, 2019)
- On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them [NeurIPS 2020] (☆35, updated Jul 3, 2021)
- Torch implementation of "Robust Convolutional Neural Networks under Adversarial Noise" (☆13, updated Mar 8, 2016)
- Reference implementations of RecurJac, CROWN, FastLin and FastLip (neural network verification and robustness certification algorithms)… (☆27, updated Nov 23, 2019)
- Sparse-RS: a versatile framework for query-efficient sparse black-box adversarial attacks (☆46, updated Feb 24, 2022)
- Official implementation of the paper "Shallow Updates for Deep Reinforcement Learning" (☆18, updated Nov 2, 2017)
- Interfaces for defining robust ML models and precisely specifying the threat models under which they claim to be secure (☆62, updated May 30, 2019)
- ☆16, updated Dec 22, 2017
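The headline paper certifies robustness via local cross-Lipschitz bounds on class-score differences. As a hedged, self-contained sketch (not code from any repository listed here), the idea collapses to a closed form for a linear classifier: the Lipschitz constant of f_c − f_j is exactly ‖w_c − w_j‖₂, so the certified l2 radius is the smallest normalized margin. The function and toy weights below are illustrative assumptions, not any repo's API.

```python
# Minimal sketch: certified l2 robustness radius for a *linear*
# classifier f_j(x) = w_j . x + b_j. For linear models the local
# cross-Lipschitz constant of f_c - f_j is exactly ||w_c - w_j||_2,
# so the certified radius is the minimum normalized margin.
import numpy as np

def certified_l2_radius(W, b, x):
    """Largest eps such that no l2 perturbation of norm < eps can flip
    the predicted class of the linear classifier (W, b) at point x."""
    scores = W @ x + b
    c = int(np.argmax(scores))  # predicted class
    radii = []
    for j in range(len(b)):
        if j == c:
            continue
        # margin of class c over class j, divided by the Lipschitz
        # constant of f_c - f_j (exact for linear models)
        radii.append((scores[c] - scores[j]) / np.linalg.norm(W[c] - W[j]))
    return min(radii)

# Usage: a 3-class toy classifier in 2D (hypothetical weights).
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.zeros(3)
x = np.array([2.0, 0.0])
print(certified_l2_radius(W, b, x))  # distance to the nearest decision boundary
```

For deep networks the Lipschitz constant is not available in closed form, which is exactly what the certification tools listed above (CROWN, Fast-Lin, CNN-Cert, RecurJac, interval methods) bound in different ways.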