hbaniecki / adversarial-explainable-ai
Adversarial attacks on explanations and how to defend them
☆334 · Nov 30, 2024 · Updated last year
Alternatives and similar repositories for adversarial-explainable-ai
Users interested in adversarial-explainable-ai are comparing it to the libraries listed below.
- Materials from seminars held in the MI^2 DataLab. · ☆33 · Feb 7, 2026 · Updated last week
- Code for our ICLR 2023 paper "Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples". · ☆18 · May 31, 2023 · Updated 2 years ago
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] · ☆768 · Mar 31, 2025 · Updated 10 months ago
- A collection of research materials on explainable AI/ML · ☆1,612 · Dec 11, 2025 · Updated 2 months ago
- eXplainable Machine Learning 2022 at MIM UW · ☆20 · Jul 1, 2023 · Updated 2 years ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods). · ☆212 · May 27, 2022 · Updated 3 years ago
- Library containing PyTorch implementations of various adversarial attacks and resources · ☆166 · Nov 20, 2025 · Updated 2 months ago
- Revisiting Transferable Adversarial Images (TPAMI 2025) · ☆140 · Sep 11, 2025 · Updated 5 months ago
- A Python library for adversarial machine learning focusing on benchmarking adversarial robustness. · ☆524 · Oct 15, 2023 · Updated 2 years ago
- ☆14 · Nov 3, 2025 · Updated 3 months ago
- ICLR 2023 paper "Exploring and Exploiting Decision Boundary Dynamics for Adversarial Robustness" by Yuancheng Xu, Yanchao Sun, Micah Gold… · ☆25 · May 2, 2023 · Updated 2 years ago
- Variable importance via oscillations · ☆14 · Sep 26, 2020 · Updated 5 years ago
- Model verification, validation, and error analysis · ☆59 · Jan 9, 2024 · Updated 2 years ago
- Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder (CVPR 2020) · ☆12 · Aug 25, 2020 · Updated 5 years ago
- White-box adversarial attack · ☆38 · Jan 30, 2021 · Updated 5 years ago
- A unified toolbox for running major robustness verification approaches for DNNs. [S&P 2023] · ☆90 · Mar 24, 2023 · Updated 2 years ago
- Reference implementation for "Explanations Can Be Manipulated and Geometry Is to Blame" · ☆37 · Jul 24, 2022 · Updated 3 years ago
- Surrogate Assisted Feature Extraction in R · ☆28 · Aug 13, 2022 · Updated 3 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) · ☆27 · Nov 18, 2024 · Updated last year
- PyTorch implementation of the BPDA+EOT attack to evaluate an adversarial defense with an EBM · ☆26 · Jun 30, 2020 · Updated 5 years ago
- A Toolbox for Adversarial Robustness Research · ☆1,363 · Sep 14, 2023 · Updated 2 years ago
- ☆25 · Mar 24, 2023 · Updated 2 years ago
- Code for an ICML 2021 paper exploring the relationship between adversarial transferability and knowledge transferability. · ☆17 · Dec 8, 2022 · Updated 3 years ago
- Interesting resources related to XAI (Explainable Artificial Intelligence) · ☆849 · May 31, 2022 · Updated 3 years ago
- Source of the ECCV 2022 paper "LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity" · ☆18 · Mar 12, 2025 · Updated 11 months ago
- [ACM MM 2023] Improving the Transferability of Adversarial Examples with Arbitrary Style Transfer. · ☆22 · Feb 23, 2024 · Updated last year
- [ICLR 2023, Spotlight] Indiscriminate Poisoning Attacks on Unsupervised Contrastive Learning · ☆33 · Dec 2, 2023 · Updated 2 years ago
- Source code for the ECCV 2022 poster "Data-free Backdoor Removal based on Channel Lipschitzness" · ☆35 · Jan 9, 2023 · Updated 3 years ago
- The official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferabili… · ☆20 · Aug 22, 2024 · Updated last year
- Official code of "Imperceptible Adversarial Attack via Invertible Neural Networks" · ☆24 · Jul 24, 2024 · Updated last year
- Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and… · ☆5,821 · Dec 12, 2025 · Updated 2 months ago
- A list of backdoor learning resources · ☆1,158 · Jul 31, 2024 · Updated last year
- A library for experimenting with, training, and evaluating neural networks, with a focus on adversarial robustness. · ☆944 · Jan 11, 2024 · Updated 2 years ago
- Official implementation of the ICLR 2023 paper "Towards Robustness Certification Against Universal Perturbations." We calc… · ☆12 · Feb 14, 2023 · Updated 3 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) · ☆84 · Dec 8, 2022 · Updated 3 years ago
- Code for "Feature Importance-aware Transferable Adversarial Attacks" · ☆87 · Jun 9, 2022 · Updated 3 years ago
- Interactive Studio for Explanatory Model Analysis · ☆332 · Aug 31, 2023 · Updated 2 years ago
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) · ☆33 · Dec 16, 2022 · Updated 3 years ago
- moDel Agnostic Language for Exploration and eXplanation · ☆1,455 · Jan 20, 2026 · Updated 3 weeks ago