hbaniecki / adversarial-explainable-ai
Adversarial attacks on explanations and how to defend them
☆325 Updated 8 months ago
Alternatives and similar repositories for adversarial-explainable-ai
Users interested in adversarial-explainable-ai are comparing it to the libraries listed below.
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆83 Updated 2 years ago
- A Python library for Secure and Explainable Machine Learning ☆184 Updated 2 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 Updated last year
- ☆128 Updated 3 years ago
- A curated list of awesome Fairness in AI resources ☆327 Updated last year
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods). ☆210 Updated 3 years ago
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] ☆732 Updated 4 months ago
- A unified benchmark problem for data poisoning attacks ☆157 Updated last year
- A curated list of trustworthy deep learning papers. Daily updating... ☆373 Updated this week
- A repository to quickly generate synthetic data and associated trojaned deep learning models ☆78 Updated 2 years ago
- All about explainable AI, algorithmic fairness and more ☆110 Updated last year
- ☆146 Updated 10 months ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆74 Updated 2 years ago
- A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness. ☆944 Updated last year
- A toolbox for differentially private data generation ☆132 Updated 2 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 Updated 3 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆618 Updated last month
- Related papers for robust machine learning ☆566 Updated 2 years ago
- Library containing PyTorch implementations of various adversarial attacks and resources ☆161 Updated this week
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contrib… ☆27 Updated 6 years ago
- Code for "On Adaptive Attacks to Adversarial Example Defenses" ☆86 Updated 4 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆36 Updated 3 years ago
- Creating and defending against adversarial examples ☆41 Updated 6 years ago
- PhD/MSc course on Machine Learning Security (Univ. Cagliari) ☆211 Updated 2 months ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation ☆136 Updated 3 months ago
- Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data" presented at the NeurIPS 2019 Workshop on Robust AI in Financial… ☆16 Updated 3 years ago
- Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks" ☆714 Updated last year
- ☆193 Updated last year
- TabularBench: Adversarial robustness benchmark for tabular data ☆19 Updated 8 months ago
- Datasets derived from US census data ☆268 Updated last year