hbaniecki / adversarial-explainable-ai
Adversarial attacks on explanations and how to defend them
★ 321 · Updated 8 months ago
Alternatives and similar repositories for adversarial-explainable-ai
Users interested in adversarial-explainable-ai are comparing it to the libraries listed below.
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ★ 83 · Updated 2 years ago
- A Python library for Secure and Explainable Machine Learning ★ 184 · Updated last month
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ★ 247 · Updated 11 months ago
- ★ 127 · Updated 3 years ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods) ★ 210 · Updated 3 years ago
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] ★ 727 · Updated 4 months ago
- A curated list of awesome Fairness in AI resources ★ 326 · Updated last year
- Related papers for robust machine learning ★ 566 · Updated 2 years ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models ★ 78 · Updated 2 years ago
- A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness ★ 946 · Updated last year
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ★ 36 · Updated 3 years ago
- Creating and defending against adversarial examples ★ 42 · Updated 6 years ago
- ★ 146 · Updated 9 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ★ 613 · Updated 2 weeks ago
- A unified benchmark problem for data poisoning attacks ★ 156 · Updated last year
- A curated list of trustworthy deep learning papers, updated daily ★ 371 · Updated last week
- Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data" presented at NeurIPS 2019 Workshop on Robust AI in Financial … ★ 15 · Updated 3 years ago
- Datasets derived from US census data ★ 268 · Updated last year
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ★ 74 · Updated 3 years ago
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contrib… ★ 27 · Updated 6 years ago
- Library containing PyTorch implementations of various adversarial attacks and resources ★ 161 · Updated last month
- Provable adversarial robustness at ImageNet scale ★ 396 · Updated 6 years ago
- ARMORY Adversarial Robustness Evaluation Test Bed ★ 182 · Updated last year
- Code for "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" ★ 712 · Updated last year
- All about explainable AI, algorithmic fairness and more ★ 110 · Updated last year
- An awesome list of papers on privacy attacks against machine learning ★ 616 · Updated last year
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ★ 87 · Updated 4 years ago
- This repository provides simple PyTorch implementations for adversarial training methods on CIFAR-10 ★ 170 · Updated 4 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ★ 73 · Updated 2 years ago
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM ★ 444 · Updated last year