AgaMiko / GEBI
GEBI: Global Explanations for Bias Identification. Open-source code for discovering bias in data, demonstrated on a skin lesion dataset
☆18 · Updated 2 years ago
Related projects
Alternatives and complementary repositories for GEBI
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆73 · Updated 2 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆72 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆80 · Updated last year
- All about explainable AI, algorithmic fairness and more ☆107 · Updated last year
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) ☆9 · Updated last year
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆102 · Updated 7 months ago
- TensorFlow 2 implementation of the paper Generalized ODIN: Detecting Out-of-distribution Image without Learning from Out-of-distribution … ☆45 · Updated 3 years ago
- Drift Detection for your PyTorch Models ☆312 · Updated 2 years ago (a minimal two-sample drift-test sketch follows this list)
- Data Augmentation with Variational Autoencoders (TPAMI) ☆136 · Updated 2 years ago
- Meaningful Local Explanation for Machine Learning Models ☆41 · Updated last year
- Contains materials for workshops pertaining to adversarial robustness in deep learning. ☆86 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆137 · Updated 2 years ago (see the binning sketch after this list)
- Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019) ☆125 · Updated 3 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆75 · Updated last year
- A repo for transfer learning with deep tabular models ☆101 · Updated last year
- 💡 Adversarial attacks on explanations and how to defend them ☆299 · Updated 8 months ago
- A PyTorch implementation of the Explainable AI work 'Contrastive layerwise relevance propagation (CLRP)' ☆17 · Updated 2 years ago
- The official implementation of "The Shapley Value of Classifiers in Ensemble Games" (CIKM 2021). ☆218 · Updated last year
- A toolbox for differentially private data generation ☆129 · Updated last year
- Automatic data slicing ☆35 · Updated 3 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆42 · Updated 3 months ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆127 · Updated 3 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆30 · Updated 7 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆560 · Updated 2 weeks ago
- Code for reproducing the contrastive explanation in “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 6 years ago
- SPEAR: Programmatically label and build training data quickly. ☆103 · Updated 4 months ago
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ☆229 · Updated last year (a split-conformal calibration sketch follows this list)
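The drift-detection entry above flags when serving-time inputs stop matching the training distribution. As a rough illustration of the underlying idea (not the API of any repository listed here), below is a minimal per-feature two-sample Kolmogorov-Smirnov check; the `alpha` threshold and the synthetic data are illustrative assumptions.

```python
# Minimal drift check: per-feature two-sample Kolmogorov-Smirnov test.
# Sketches the idea behind drift detectors; NOT the API of the listed repo.
import numpy as np
from scipy.stats import ks_2samp

def drifted_features(reference, current, alpha=0.01):
    """Return (feature index, KS statistic, p-value) for drifted features."""
    hits = []
    for j in range(reference.shape[1]):
        stat, p = ks_2samp(reference[:, j], current[:, j])
        if p < alpha:  # reject "same distribution" at level alpha
            hits.append((j, stat, p))
    return hits

rng = np.random.default_rng(0)
ref = rng.normal(size=(2000, 4))   # features seen at training time
cur = rng.normal(size=(2000, 4))   # features seen in production
cur[:, 2] += 0.5                   # inject a mean shift into feature 2
print(drifted_features(ref, cur))  # feature 2 should be flagged
```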
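For the reliability-diagram entry, the plot is built by binning predictions by confidence and comparing each bin's mean confidence with its empirical accuracy; a calibrated model has the two roughly equal in every bin. A minimal numpy sketch, assuming the common equal-width 10-bin layout and the standard ECE weighting:

```python
# Reliability-diagram statistics: per-bin confidence vs. accuracy, plus
# expected calibration error (ECE). The 10-bin layout is an assumption.
import numpy as np

def reliability_bins(confidences, correct, n_bins=10):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows, ece = [], 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = float(correct[mask].mean())        # empirical accuracy in bin
            conf = float(confidences[mask].mean())   # mean predicted confidence
            rows.append((lo, hi, conf, acc, int(mask.sum())))
            ece += mask.mean() * abs(acc - conf)     # gap weighted by bin mass
    return rows, ece

# Overconfident toy model: 90% reported confidence, ~70% actual accuracy.
rng = np.random.default_rng(0)
conf = np.full(1000, 0.9)
correct = rng.random(1000) < 0.7
print(reliability_bins(conf, correct)[1])  # ECE close to 0.2
```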
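The last entry wraps a classifier so it outputs prediction sets with a coverage guarantee; the mechanism is split conformal prediction. A minimal numpy sketch of calibration and set construction, assuming softmax probabilities and the simple `1 - p(true class)` nonconformity score (the wrapped library's actual score may differ):

```python
# Split conformal prediction sets: calibrate a threshold qhat on held-out
# data so that {y : 1 - p_y(x) <= qhat} contains the true class with
# probability >= 1 - alpha. The 1 - softmax score is an assumption.
import numpy as np

def calibrate_qhat(cal_probs, cal_labels, alpha=0.1):
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]  # true-class nonconformity
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)     # finite-sample quantile index
    return np.sort(scores)[k - 1]

def prediction_sets(test_probs, qhat):
    return [np.flatnonzero(1.0 - p <= qhat) for p in test_probs]

# Toy calibration data: 5-class softmax outputs with matching labels.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=500)
cal_labels = cal_probs.argmax(axis=1)  # pretend the model is always right
qhat = calibrate_qhat(cal_probs, cal_labels, alpha=0.1)
print(prediction_sets(cal_probs[:3], qhat))
```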