hbaniecki / adversarial-explainable-ai
Adversarial attacks on explanations and how to defend them
⭐328 · Updated 11 months ago
Alternatives and similar repositories for adversarial-explainable-ai
Users interested in adversarial-explainable-ai are comparing it to the libraries listed below.
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ⭐248 · Updated last year
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ⭐84 · Updated 2 years ago
- A curated list of awesome Fairness in AI resources ⭐328 · Updated 2 years ago
- RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track] ⭐752 · Updated 7 months ago
- A Python library for Secure and Explainable Machine Learning ⭐189 · Updated 4 months ago
- ⭐129 · Updated 3 years ago
- A curated list of papers on adversarial machine learning (adversarial examples and defense methods) ⭐212 · Updated 3 years ago
- A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness ⭐945 · Updated last year
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ⭐74 · Updated 3 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ⭐37 · Updated 3 years ago
- A toolbox for differentially private data generation ⭐131 · Updated 2 years ago
- Repository of the paper "Imperceptible Adversarial Attacks on Tabular Data" presented at NeurIPS 2019 Workshop on Robust AI in Financial … ⭐16 · Updated 4 years ago
- All about explainable AI, algorithmic fairness and more ⭐110 · Updated 2 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ⭐75 · Updated 3 years ago
- A unified benchmark problem for data poisoning attacks ⭐160 · Updated 2 years ago
- Related papers for robust machine learning ⭐567 · Updated 2 years ago
- ⭐149 · Updated last year
- Datasets derived from US census data ⭐272 · Updated last year
- List of relevant resources for machine learning from explanatory supervision ⭐160 · Updated 4 months ago
- Library containing PyTorch implementations of various adversarial attacks and resources ⭐165 · Updated this week
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ⭐70 · Updated 2 years ago
- [NeurIPS 2019] H. Chen*, H. Zhang*, S. Si, Y. Li, D. Boning and C.-J. Hsieh, Robustness Verification of Tree-based Models (*equal contrib… ⭐27 · Updated 6 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ⭐628 · Updated 3 months ago
- Creating and defending against adversarial examples ⭐41 · Updated 6 years ago
- TabularBench: Adversarial robustness benchmark for tabular data ⭐19 · Updated last month
- A repository to quickly generate synthetic data and associated trojaned deep learning models ⭐82 · Updated 2 years ago
- Provable adversarial robustness at ImageNet scale ⭐402 · Updated 6 years ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation ⭐139 · Updated 2 months ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ⭐295 · Updated 2 years ago
- A curated list of trustworthy deep learning papers, updated daily ⭐375 · Updated last week