Ighina / CERTIFAI
A Python implementation of the CERTIFAI framework for explaining machine learning models, as discussed in https://www.aies-conference.com/2020/wp-content/papers/099.pdf
☆9 · Updated 2 years ago
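At its core, the CERTIFAI paper generates counterfactual explanations with a custom genetic algorithm: it evolves a population of candidate points, keeps only those the black-box model classifies differently from the input, and selects for the candidates closest to that input. Below is a minimal sketch of that idea; the function names, parameters, and L2-distance fitness are illustrative assumptions, not this repository's actual API.

```python
import numpy as np

def fitness(x, candidates):
    # Closer candidates score higher: inverse L2 distance to the input x.
    return 1.0 / (np.linalg.norm(candidates - x, axis=1) + 1e-8)

def counterfactual_search(predict, x, low, high,
                          pop_size=100, generations=50, mut_rate=0.1, seed=0):
    """Genetic search for a nearby point the model classifies differently.

    predict: callable mapping an (n, d) float array to class labels.
    low/high: per-feature bounds for sampling and clipping candidates.
    Names and defaults are illustrative, not the repo's interface.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    original = predict(x[None, :])[0]
    pop = rng.uniform(low, high, size=(pop_size, d))
    best = None
    for _ in range(generations):
        # Keep only candidates whose predicted class differs from the input's.
        flipped = pop[predict(pop) != original]
        if len(flipped) == 0:
            pop = rng.uniform(low, high, size=(pop_size, d))  # resample
            continue
        fit = fitness(x, flipped)
        if best is None or fit.max() > fitness(x, best[None, :])[0]:
            best = flipped[np.argmax(fit)]
        # Fitness-proportional selection, uniform crossover, Gaussian mutation.
        parents = flipped[rng.choice(len(flipped), size=pop_size, p=fit / fit.sum())]
        mask = rng.random((pop_size, d)) < 0.5
        children = np.where(mask, parents, parents[rng.permutation(pop_size)])
        children += (rng.random((pop_size, d)) < mut_rate) * rng.normal(0, 0.1, (pop_size, d))
        pop = np.clip(children, low, high)
    return best
```

For example, with a toy classifier `predict = lambda X: (X.sum(axis=1) > 0).astype(int)`, an input `x = np.array([-0.5, -0.5])`, and bounds `low, high = -1.0, 1.0`, the search returns a nearby point on the positive side of the decision boundary.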
Related projects
Alternatives and complementary repositories for CERTIFAI
- Explore/examine/explain/expose your model with the explabox! ☆15 · Updated last month
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆80 · Updated last year
- Code for reproducing the experimental results in "Proper Network Interpretability Helps Adversarial Robustness in Classification", publi… ☆13 · Updated 4 years ago
- A Python framework for the quantitative evaluation of eXplainable AI methods ☆16 · Updated last year
- Adversarial detection and defense for deep learning systems using robust feature alignment ☆14 · Updated 4 years ago
- Reference implementation for "Explanations can be manipulated and geometry is to blame" ☆35 · Updated 2 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- ⚖️ Code for the paper "Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning" ☆11 · Updated last year
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆51 · Updated 2 years ago
- This repository provides a PyTorch implementation of "Fooling Neural Network Interpretations via Adversarial Model Manipulation". Our pap… ☆22 · Updated 3 years ago
- Implementation of Adversarial Debiasing in PyTorch to address gender bias ☆30 · Updated 4 years ago
- General fair regression subject to a demographic parity constraint; paper appeared in ICML 2019 ☆14 · Updated 4 years ago
- Code for "Neuron Shapley: Discovering the Responsible Neurons" ☆23 · Updated 6 months ago
- Bayesian LIME ☆16 · Updated 3 months ago
- Explain Neural Networks using Layer-Wise Relevance Propagation and evaluate the explanations using Pixel-Flipping and Area Under the Curv… ☆13 · Updated 2 years ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆11 · Updated 2 years ago
- Code for our paper ☆12 · Updated 2 years ago
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆37 · Updated 3 years ago
- Foolbox implementation for the NeurIPS 2021 paper "Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints" ☆25 · Updated 2 years ago
- This repository provides details of the experimental code in the paper: Instance-based Counterfactual Explanations for Time Series Classi… ☆18 · Updated 3 years ago