hbaniecki / adversarial-explainable-ai
Adversarial attacks on explanations and how to defend them
⭐319 · Updated 7 months ago
Alternatives and similar repositories for adversarial-explainable-ai
Users interested in adversarial-explainable-ai are comparing it to the libraries listed below.
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) · ⭐82 · Updated 2 years ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations · ⭐247 · Updated 10 months ago
- A Python library for Secure and Explainable Machine Learning · ⭐183 · Updated 3 weeks ago
- A curated list of awesome Fairness in AI resources · ⭐324 · Updated last year
- ⭐127 · Updated 3 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… · ⭐73 · Updated 2 years ago
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] · ⭐726 · Updated 3 months ago
- Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data" presented at NeurIPS 2019 Workshop on Robust AI in Financial … · ⭐15 · Updated 3 years ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods). · ⭐210 · Updated 3 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems · ⭐74 · Updated 3 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" · ⭐36 · Updated 2 years ago
- Library containing PyTorch implementations of various adversarial attacks and resources · ⭐158 · Updated 3 weeks ago
- Related papers for robust machine learning · ⭐566 · Updated 2 years ago
- ⭐146 · Updated 9 months ago
- A repository to quickly generate synthetic data and associated trojaned deep learning models · ⭐78 · Updated 2 years ago
- All about explainable AI, algorithmic fairness and more · ⭐110 · Updated last year
- Code for "On Adaptive Attacks to Adversarial Example Defenses" · ⭐87 · Updated 4 years ago
- A library for experimenting with, training, and evaluating neural networks, with a focus on adversarial robustness. · ⭐943 · Updated last year
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contrib… · ⭐27 · Updated 6 years ago
- Papers and code of Explainable AI, esp. w.r.t. image classification · ⭐213 · Updated 3 years ago
- A toolbox for differentially private data generation · ⭐132 · Updated 2 years ago
- A unified benchmark problem for data poisoning attacks · ⭐156 · Updated last year
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations · ⭐607 · Updated this week
- List of relevant resources for machine learning from explanatory supervision · ⭐157 · Updated 5 months ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation · ⭐132 · Updated 2 months ago
- A curated list of data valuation (DV) to design your next data marketplace · ⭐122 · Updated 4 months ago
- Datasets derived from US census data · ⭐264 · Updated last year
- Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" · ⭐707 · Updated last year
- [ICLR 2020] A repository for extremely fast adversarial training using FGSM · ⭐444 · Updated 11 months ago
- A library for running membership inference attacks against ML models · ⭐149 · Updated 2 years ago