CLEVER (Cross-Lipschitz Extreme Value for nEtwork Robustness) is a robustness metric for deep neural networks
☆63 · Updated Aug 3, 2021 (4 years ago)
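The core idea behind the CLEVER score: estimate a local cross-Lipschitz constant of the classification margin from sampled gradient norms, then lower-bound the adversarial distortion needed to flip the prediction as margin / Lipschitz estimate (the actual method fits a reverse Weibull distribution to batch-maximum gradient norms via extreme value theory). Below is a minimal, hypothetical sketch using a toy linear classifier, numerical gradients, and the raw batch maximum instead of the Weibull fit; all function names are made up for illustration, not CLEVER's API.

```python
import numpy as np

def f(x):
    # Toy 2-class "classifier": logits from a fixed linear map (hypothetical).
    W = np.array([[1.0, -0.5], [-0.3, 0.8]])
    return W @ x

def grad_margin(x, c=0, j=1, eps=1e-5):
    # Central-difference gradient of the margin g(x) = f_c(x) - f_j(x).
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = ((f(x + d)[c] - f(x + d)[j])
                - (f(x - d)[c] - f(x - d)[j])) / (2 * eps)
    return g

def clever_style_bound(x, radius=0.5, n_batches=20, batch=50, rng=None):
    # Sample random points in a ball around x, record the maximum gradient
    # norm per batch, and use the overall maximum as a local Lipschitz
    # estimate. (CLEVER proper fits a reverse Weibull distribution to the
    # per-batch maxima instead of taking the raw max.)
    rng = rng or np.random.default_rng(0)
    maxima = []
    for _ in range(n_batches):
        pts = x + rng.uniform(-radius, radius, size=(batch, len(x)))
        maxima.append(max(np.linalg.norm(grad_margin(p)) for p in pts))
    lipschitz_est = max(maxima)
    margin = f(x)[0] - f(x)[1]
    # Lower bound on the distortion needed to flip class 0 into class 1.
    return margin / lipschitz_est

x0 = np.array([1.0, 0.2])
print(round(clever_style_bound(x0), 3))
```

Because the toy classifier is linear, the gradient is constant and the score reduces exactly to margin over gradient norm; for a real network the sampling radius and batch counts control how well the local maximum is explored.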
Alternatives and similar repositories for CLEVER
Users interested in CLEVER are comparing it to the repositories listed below.
- Codes for reproducing the robustness evaluation scores in “Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approac… ☆52 · Updated Sep 18, 2018 (7 years ago)
- CROWN: A Neural Network Verification Framework for Networks with General Activation Functions ☆39 · Updated Dec 13, 2018 (7 years ago)
- Code to reproduce experiments from "A Statistical Approach to Assessing Neural Network Robustness" ☆12 · Updated Feb 11, 2019 (7 years ago)
- CROWN: A Neural Network Robustness Certification Algorithm for General Activation Functions (This repository is outdated; use https://git… ☆18 · Updated Nov 29, 2018 (7 years ago)
- Official PyTorch implementation of the GeoDA algorithm, a black-box attack to generate adversarial exam… ☆35 · Updated Mar 14, 2021 (5 years ago)
- SmoothFool: An Efficient Framework for Computing Smooth Adversarial Perturbations ☆14 · Updated Jan 6, 2022 (4 years ago)
- Datasets for the paper "Adversarial Examples are not Bugs, They Are Features" ☆187 · Updated Sep 17, 2020 (5 years ago)
- Strongest attack against Feature Scatter and Adversarial Interpolation ☆24 · Updated Dec 26, 2019 (6 years ago)
- SurFree: a fast surrogate-free black-box attack ☆44 · Updated Jun 27, 2024 (last year)
- Code for the paper "A Light Recipe to Train Robust Vision Transformers" [SaTML 2023] ☆54 · Updated Feb 6, 2023 (3 years ago)
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] ☆18 · Updated Apr 8, 2018 (7 years ago)
- ☆22 · Updated Jun 23, 2021 (4 years ago)
- Efficient Robustness Verification for ReLU networks (this repository is outdated, don't use; check out our new implementation at https://g… ☆30 · Updated Nov 1, 2019 (6 years ago)
- Code for "Diversity can be Transferred: Output Diversification for White- and Black-box Attacks" ☆51 · Updated Nov 2, 2020 (5 years ago)
- Fine-grained ImageNet annotations ☆30 · Updated May 25, 2020 (5 years ago)
- On the effectiveness of adversarial training against common corruptions [UAI 2022] ☆30 · Updated May 16, 2022 (3 years ago)
- A unified toolbox for running major robustness verification approaches for DNNs [S&P 2023] ☆89 · Updated Mar 24, 2023 (3 years ago)
- First-Order Adversarial Vulnerability of Neural Networks and Input Dimension ☆15 · Updated Sep 4, 2019 (6 years ago)
- Certified defense to adversarial examples using CROWN and IBP. Also includes a GPU implementation of the CROWN verification algorithm (in PyTor… ☆97 · Updated Jun 7, 2021 (4 years ago)
- Source code for the paper "Exploiting Excessive Invariance caused by Norm-Bounded Adversarial Robustness" ☆25 · Updated Feb 12, 2020 (6 years ago)
- Certified robustness of deep neural networks ☆19 · Updated Aug 20, 2024 (last year)
- Official implementation of "When Adversarial Training Meets Vision Transformers: Recipes from Training to Architecture" published at Neur… ☆37 · Updated Sep 19, 2024 (last year)
- A library for experimenting with, training, and evaluating neural networks, with a focus on adversarial robustness. ☆943 · Updated Jan 11, 2024 (2 years ago)
- Code implementing the experiments described in the NeurIPS 2018 paper "With Friends Like These, Who Needs Adversaries?" ☆13 · Updated Sep 11, 2020 (5 years ago)
- Keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on popular da… ☆19 · Updated Jun 12, 2024 (last year)
- NIPS Adversarial Vision Challenge ☆41 · Updated Sep 17, 2018 (7 years ago)
- Official code for Efficient and Effective Augmentation Strategy for Adversarial Training (NeurIPS 2022) ☆17 · Updated Mar 29, 2023 (3 years ago)
- Fooling neural-based speech recognition systems ☆14 · Updated Jun 9, 2017 (8 years ago)
- Implementation of the paper "Certifiable Robustness and Robust Training for Graph Convolutional Networks" ☆43 · Updated Dec 7, 2020 (5 years ago)
- ☆26 · Updated Feb 15, 2023 (3 years ago)
- Code for the paper "Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality" ☆124 · Updated Nov 4, 2020 (5 years ago)
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated Jan 9, 2022 (4 years ago)
- A general method for training cost-sensitive robust classifiers ☆22 · Updated May 29, 2019 (6 years ago)
- ☆13 · Updated Mar 11, 2026 (2 weeks ago)
- Trained model weights, training and evaluation code from the paper "A simple way to make neural networks robust against diverse image cor… ☆62 · Updated May 24, 2023 (2 years ago)
- Connecting Interpretability and Robustness in Decision Trees through Separation ☆17 · Updated May 8, 2021 (4 years ago)
- Codes for the paper "Optimizing Mode Connectivity via Neuron Alignment" from NeurIPS 2020 ☆16 · Updated Dec 10, 2020 (5 years ago)
- A Closer Look at Accuracy vs. Robustness ☆87 · Updated May 17, 2021 (4 years ago)
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated Oct 17, 2022 (3 years ago)