[Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness
☆16 · Updated Jul 5, 2024
Alternatives and similar repositories for MDAttack
Users that are interested in MDAttack are comparing it to the libraries listed below.
- [NeurIPS 2021] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks ☆33 · Updated Jul 5, 2024
- [ICLR 2023] Distilling Cognitive Backdoor Patterns within an Image ☆36 · Updated Oct 29, 2025
- ☆11 · Updated Jan 25, 2022
- Code for the ACM MM paper "Backdoor Attack on Crowd Counting" ☆17 · Updated Jul 10, 2022
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable ☆171 · Updated Jul 5, 2024
- Repository for the AI2019 tutorial on adversarial machine learning ☆17 · Updated Jul 20, 2020
- SaTML'23 paper "Backdoor Attacks on Time Series: A Generative Approach" by Yujing Jiang, Xingjun Ma, Sarah Monazam Erfani, and James Bail… ☆21 · Updated Feb 5, 2023
- ☆26 · Updated Feb 19, 2025
- [CVPR 2023] Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples ☆22 · Updated Apr 25, 2023
- Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness (MD attacks) ☆11 · Updated Aug 29, 2020
- Strongest attack against Feature Scatter and Adversarial Interpolation ☆24 · Updated Dec 26, 2019
- Unrestricted adversarial images via interpretable color transformations (TIFS 2023 & BMVC 2020) ☆32 · Updated Apr 25, 2023
- ⚖️ Code for the paper "Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning" ☆11 · Updated Dec 8, 2022
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning ☆31 · Updated Dec 2, 2023
- TensorFlow implementation of "Defense against Universal Adversarial Perturbations" ☆10 · Updated Apr 16, 2018
- On Intrinsic Dataset Properties for Adversarial Machine Learning ☆19 · Updated Jun 7, 2020
- ZOSVRG-BlackBox-Adv ☆13 · Updated Oct 30, 2018
- Code for Transferable Unlearnable Examples ☆22 · Updated Mar 11, 2023
- Code for the NeurIPS 2019 submission "Improving Black-box Adversarial Attacks with a Transfer-based Prior" ☆16 · Updated May 6, 2020
- Final project for COMP 551: a detailed tutorial on the various techniques employed for adversarial attacks on machine learning classifier… ☆12 · Updated May 16, 2017
- Code for the NeurIPS 2020 paper "Backpropagating Linearly Improves Transferability of Adversarial Examples" ☆41 · Updated Feb 10, 2023
- ☆53 · Updated Dec 4, 2024
- Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural trai… ☆177 · Updated May 3, 2024
- Adversarial machine learning and explainable machine learning for cyber security ☆13 · Updated Jun 21, 2022
- Code release for DeepJudge (S&P'22) ☆52 · Updated Mar 14, 2023
- Code for "Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks" (NeurIPS 2022) ☆10 · Updated Jul 20, 2023
- ☆19 · Updated Jun 26, 2021
- Bachelor's thesis on adversarial machine learning attacks and defences ☆17 · Updated Nov 18, 2022
- Learning perturbation sets for robust machine learning ☆64 · Updated Aug 23, 2021
- [NeurIPS 2021] Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training ☆32 · Updated Jan 9, 2022
- [NeurIPS 2021] "Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks" by Yon… ☆13 · Updated Feb 13, 2022
- ☆57 · Updated Jul 27, 2022
- Anti-Backdoor Learning (NeurIPS 2021) ☆84 · Updated Jul 20, 2023
- Black-box Adversarial Attacks on Video Recognition Models (VBAD) ☆27 · Updated Oct 28, 2019
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Updated Sep 11, 2025
- Reproduces results for the ICCV 2019 paper "Symmetric Cross Entropy for Robust Learning with Noisy Labels" (https://arxiv.org/abs/1908.06112) ☆191 · Updated Dec 27, 2020
- A repository for the query-efficient black-box attack SignHunter ☆22 · Updated Jan 15, 2020
- Repository for the NeurIPS 2018 spotlight paper "Attacks Meet Interpretability: Attribute-steered Detection of Adversarial Samples" ☆32 · Updated Apr 27, 2022
- One-Pixel Shortcut: On the Learning Preference of Deep Neural Networks (ICLR 2023 Spotlight) ☆14 · Updated Sep 28, 2025