MadryLab / robustness
A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.
☆937 · Updated last year
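For context on the headline library itself: its documented workflow wraps dataset loading and model creation behind a couple of helpers. Below is a minimal sketch assuming the `robustness` package is installed; the `datasets.CIFAR` and `model_utils.make_and_restore_model` calls follow the project's documented usage, but exact keyword arguments may have changed, so treat it as illustrative rather than authoritative.

```python
# Minimal sketch of the robustness library's documented workflow (illustrative;
# paths and keyword arguments are assumptions -- check the project's docs).
from robustness import model_utils, datasets

# Wrap CIFAR-10 with the library's dataset helper (path is a placeholder).
ds = datasets.CIFAR('/path/to/cifar')

# Build a ResNet-50; pass resume_path='checkpoint.pt' to restore robust weights.
model, _ = model_utils.make_and_restore_model(arch='resnet50', dataset=ds)
model.eval()

# The wrapped model can also generate adversarial examples on the fly, e.g.:
#   _, im_adv = model(im, label, make_adv=True,
#                     constraint='2', eps=0.5, step_size=0.1, iterations=20)
```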
Alternatives and similar repositories for robustness
Users interested in robustness are comparing it to the libraries listed below.
- Code for the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" ☆700 · Updated last year
- A Toolbox for Adversarial Robustness Research ☆1,338 · Updated last year
- Related papers for robust machine learning ☆568 · Updated 2 years ago
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] ☆713 · Updated 2 months ago
- TRADES (TRadeoff-inspired Adversarial DEfense via Surrogate-loss minimization) ☆537 · Updated 2 years ago
- A challenge to explore adversarial robustness of neural networks on CIFAR10. ☆496 · Updated 3 years ago
- A challenge to explore adversarial robustness of neural networks on MNIST. ☆752 · Updated 3 years ago
- Robust evasion attacks against neural networks to find adversarial examples ☆827 · Updated 4 years ago
- Provable adversarial robustness at ImageNet scale ☆390 · Updated 6 years ago
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM ☆443 · Updated 10 months ago
- Corruption and Perturbation Robustness (ICLR 2019) ☆1,076 · Updated 2 years ago
- PyTorch implementation of convolutional neural network adversarial attack techniques ☆358 · Updated 6 years ago
- PyTorch 1.0 implementation of adversarial training on MNIST/CIFAR-10, with visualization of classifier robustness ☆251 · Updated 4 years ago
- LaTeX source for the paper "On Evaluating Adversarial Robustness" ☆255 · Updated 4 years ago
- A Python library for adversarial machine learning focusing on benchmarking adversarial robustness ☆506 · Updated last year
- PyTorch implementation of adversarial attacks [torchattacks] (see the generic PGD sketch after this list) ☆2,029 · Updated 11 months ago
- Empirical tricks for training robust models (ICLR 2021) ☆253 · Updated 2 years ago
- Code for the NeurIPS 2019 *spotlight* "Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers" ☆225 · Updated 5 years ago
- A method for training neural networks that are provably robust to adversarial attacks ☆387 · Updated 3 years ago
- Implementation of papers on adversarial examples ☆397 · Updated 2 years ago
- Datasets for the paper "Adversarial Examples Are Not Bugs, They Are Features" ☆187 · Updated 4 years ago
- Simple PyTorch implementations of adversarial training methods on CIFAR-10 ☆167 · Updated 4 years ago
- PyTorch implementation of Adversarial Training for Free! ☆246 · Updated 3 years ago
- Pretrained TorchVision models on the CIFAR-10 dataset (with weights) ☆676 · Updated last year
- A Harder ImageNet Test Set (CVPR 2021) ☆608 · Updated last year
- A reading list on adversarial examples (attacks, defenses, etc.), kept and updated regularly ☆226 · Updated 5 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆315 · Updated 6 months ago
- A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX ☆2,864 · Updated last year
- A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models ☆565 · Updated last year
- AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty ☆989 · Updated last month
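Many of the attack-focused entries above (torchattacks, Foolbox, and the MNIST/CIFAR10 challenges) revolve around projected gradient descent (PGD). The sketch below is a generic L∞ PGD attack in plain PyTorch, not taken from any listed repository; the function name `pgd_linf` and the hyperparameters (eps = 8/255, 10 steps) are illustrative assumptions.

```python
# Generic L-infinity PGD attack in plain PyTorch -- a sketch of the technique the
# attack toolboxes above implement, not code from any listed repository.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, step_size=2/255, steps=10):
    """Return adversarial examples within an L-inf ball of radius eps around x."""
    # Random start inside the epsilon-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back onto the epsilon-ball around x.
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

# Usage (model in eval mode, inputs scaled to [0, 1]):
#   adv = pgd_linf(model, images, labels)
```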