Fantastic Robustness Measures: The Secrets of Robust Generalization [NeurIPS 2023]
☆44 · Jan 13, 2025 · Updated last year
Alternatives and similar repositories for MAIR
Users interested in MAIR are also comparing it to the repositories listed below.
- ICML 2024 paper "Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies" ☆17 · Jul 10, 2024 · Updated last year
- OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift (ICML 2024 and ICLRW-DMLR 2024) ☆23 · Jul 25, 2024 · Updated last year
- PyTorch implementation of adversarial attacks [torchattacks] ☆2,148 · Jun 29, 2024 · Updated last year
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] ☆772 · Mar 31, 2025 · Updated 11 months ago
- This repository contains the ViewFool and ImageNet-V proposed by the paper "ViewFool: Evaluating the Robustness of Visual Recognition to …" ☆33 · Dec 18, 2023 · Updated 2 years ago
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints" ☆24 · Mar 16, 2022 · Updated 4 years ago
- [NeurIPS 2023] Code for the paper "Revisiting Adversarial Training for ImageNet: Architectures, Training and Generalization across Threa…" ☆39 · Dec 3, 2024 · Updated last year
- Code for the paper "Better Diffusion Models Further Improve Adversarial Training" (ICML 2023) ☆145 · Jul 31, 2023 · Updated 2 years ago
- LISA Traffic Signs dataset loader for PyTorch (classification, 32x32 images), used to reproduce the Activation Clustering results ☆20 · Jan 12, 2021 · Updated 5 years ago
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆157 · Feb 19, 2026 · Updated last month
- (AAAI 2024) Step Vulnerability Guided Mean Fluctuation Adversarial Attack against Conditional Diffusion Models ☆11 · Oct 12, 2024 · Updated last year
- A PyTorch implementation of "Ensemble Adversarial Training: Attacks and Defenses" ☆10 · Sep 4, 2019 · Updated 6 years ago
- PyTorch adversarial attack baselines for ImageNet, CIFAR10, and MNIST (state-of-the-art attacks comparison) ☆20 · Mar 12, 2021 · Updated 5 years ago
- PDM-based Purifier ☆22 · Nov 5, 2024 · Updated last year
- Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks ☆18 · May 13, 2019 · Updated 6 years ago
- Brief info and loader for public rPPG (remote photoplethysmography) datasets ☆15 · Aug 16, 2021 · Updated 4 years ago
- [CVPR 2024] Official code for SimAC ☆21 · Jan 23, 2025 · Updated last year
- Python RobustRMC projects ☆10 · Apr 22, 2024 · Updated last year
- Host for the CIFAR-10.2 dataset ☆13 · Sep 22, 2021 · Updated 4 years ago
- Husky-LIO-SAM ☆11 · Feb 23, 2023 · Updated 3 years ago
- Repository implementing the lightweight split learning framework enabling edge devices to collaboratively train machine learning models w… ☆10 · Mar 27, 2024 · Updated last year
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Jan 9, 2022 · Updated 4 years ago
- The official repo for the paper "An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability" ☆43 · Oct 12, 2023 · Updated 2 years ago
- Code for LAS-AT: Adversarial Training with Learnable Attack Strategy (CVPR 2022) ☆118 · Mar 30, 2022 · Updated 3 years ago
- Procedural Image Programs for Representation Learning (NeurIPS 2022) ☆40 · Feb 4, 2026 · Updated last month
- Federated data-center power consumption prediction ☆16 · Jun 25, 2024 · Updated last year
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆58 · Dec 20, 2024 · Updated last year
- Implementation of the paper "Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing" ☆10 · Feb 6, 2024 · Updated 2 years ago
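Many of the repositories above (torchattacks, RobustBench, the adversarial-training papers) revolve around gradient-based adversarial attacks. As a minimal, self-contained sketch of the core idea, and not the implementation used by any repository listed here, the following illustrates the Fast Gradient Sign Method (FGSM) on a toy NumPy logistic-regression model; all names and parameter values are hypothetical:

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """FGSM on a binary logistic-regression model (toy sketch).

    x: input vector, y: label in {0, 1}, (w, b): fixed model parameters,
    eps: L-infinity perturbation budget.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid probability of class 1
    grad_x = (p - y) * w              # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # one signed step to increase the loss

# Toy example: the clean input is confidently classified as class 1.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, -0.5])                        # logit = 1.5
x_adv = fgsm_attack(x, y=1.0, w=w, b=b, eps=0.4)
# x_adv ≈ [0.1, -0.1]; its logit (0.3) is lower, i.e. the model is less
# confident in the true class after the perturbation.
```

Libraries such as torchattacks package this same pattern (plus iterative variants like PGD) behind a uniform attack interface for PyTorch models.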