☆22, updated Oct 5, 2023
Alternatives and similar repositories for evaluating-adaptive-test-time-defenses
Users interested in evaluating-adaptive-test-time-defenses are comparing it to the libraries listed below.
- PyTorch implementation of the BPDA+EOT attack to evaluate an adversarial defense with an EBM (☆27, updated Jun 30, 2020)
- [CVPR 2024] Official implementation of our paper "Revisiting Adversarial Training at Scale" (☆20, updated Apr 21, 2024)
- ☆13, updated Jun 23, 2022
- Official PyTorch implementation of the ACM MM 19 paper "MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks" (☆11, updated Jun 7, 2021)
- Official implementation of ContraNet (NDSS 2022) (☆22, updated Aug 31, 2023)
- Provable Robustness of ReLU Networks via Maximization of Linear Regions [AISTATS 2019] (☆31, updated Jul 15, 2020)
- Towards Efficient and Effective Adversarial Training [NeurIPS 2021] (☆16, updated Feb 15, 2022)
- Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation [NeurIPS 2017] (☆18, updated Apr 8, 2018)
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" (☆19, updated Nov 30, 2022)
- Code for "On Adaptive Attacks to Adversarial Example Defenses" (☆85, updated Feb 18, 2021)
- Implementation of Confidence-Calibrated Adversarial Training (CCAT) (☆45, updated Aug 3, 2020)
- ☆12, updated Feb 19, 2025
- ☆22, updated Jul 28, 2020
- Spurious Features Everywhere: Large-Scale Detection of Harmful Spurious Features in ImageNet (☆32, updated Aug 22, 2023)
- ☆64, updated Aug 9, 2023
- Generate custom text files for the dataloader within UDA methods (☆14, updated May 24, 2023)
- SC-Adagrad, SC-RMSProp, and RMSProp algorithms for training deep networks proposed in (☆14, updated Oct 5, 2018)
- Implementation of "Adversarial purification with Score-based generative models" [ICML 2021] (☆30, updated Oct 24, 2021)
- Code for the ICLR 2024 paper "Adversarial Training on Purification (AToP): Advancing Both Robustness and Generalization" (☆10, updated Nov 21, 2024)
- ☆53, updated Jan 7, 2022
- A modern look at the relationship between sharpness and generalization [ICML 2023] (☆44, updated Sep 11, 2023)
- Adversarial Robustness on In- and Out-Distribution Improves Explainability (☆12, updated Feb 10, 2022)
- [CVPRW '22] A privacy attack that exploits adversarial training models to compromise the privacy of federated learning systems (☆12, updated Jul 7, 2022)
- Code for the FAB attack (☆33, updated Jul 10, 2020)
- A way to achieve uniform confidence far away from the training data (☆38, updated Apr 16, 2021)
- ☆46, updated May 8, 2024
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks (☆38, updated May 25, 2021)
- Training vision models with full-batch gradient descent and regularization (☆40, updated Feb 14, 2023)
- TensorFlow implementation of an adversarial-learning-based adversarial example generator (☆10, updated Jan 31, 2018)
- [ICML 2024] "Improving Accuracy-Robustness Trade-off via Pixel Reweighted Adversarial Training" (☆17, updated Jun 4, 2024)
- Logit Pairing Methods Can Fool Gradient-Based Attacks [NeurIPS 2018 Workshop on Security in Machine Learning] (☆19, updated Dec 2, 2018)
- Keras implementation of the paper "The HSIC Bottleneck: Deep Learning without Back-Propagation" (https://arxiv.org/abs/1908.01580)