Crisp-Unimib / ContrXT
A tool for comparing the predictions of any text classifier
☆25 · Updated 2 years ago
Alternatives and similar repositories for ContrXT:
Users interested in ContrXT are comparing it to the repositories listed below.
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆30 · Updated 2 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- Official code for NeurIPS 2022 paper https://arxiv.org/abs/2208.00780 Visual correspondence-based explanations improve AI robustness and … ☆43 · Updated last year
- Adversarial Black box Explainer generating Latent Exemplars ☆12 · Updated 2 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆43 · Updated 8 months ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆54 · Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated 11 months ago
- A benchmark to evaluate the quality of machine learning local explanations generated by any explainer for text and image data ☆30 · Updated 3 years ago
- Implementation of the Spotlight, a method for discovering systematic errors in deep learning models ☆11 · Updated 3 years ago
- Code for reproducing the contrastive explanation in “Explanations based on the Missing: Towards Contrastive Explanations with Pertinent… ☆54 · Updated 6 years ago
- A summary of recent empirical/human studies that measure human understanding with machine explanat… ☆13 · Updated 8 months ago
- ☆31 · Updated 3 years ago
- AQuA: A Benchmarking Tool for Label Quality Assessment ☆21 · Updated last year
- Fine-grained ImageNet annotations ☆29 · Updated 4 years ago
- Official repository for the AAAI-21 paper 'Explainable Models with Consistent Interpretations' ☆18 · Updated 2 years ago
- Logic Explained Networks, a Python repository implementing explainable-by-design deep learning models ☆49 · Updated last year
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- PyTorch reimplementation of computing Shapley values via Truncated Monte Carlo sampling from "What is your data worth? Equitable Valuatio… ☆27 · Updated 3 years ago
- Code repository for the ICML 2020 paper "Fairwashing explanations with off-manifold detergent" ☆12 · Updated 4 years ago
- Code for "Interpretable Image Recognition with Hierarchical Prototypes" ☆18 · Updated 5 years ago
- Statistical test for bias in unsupervised image representations ☆10 · Updated 4 years ago
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆31 · Updated 4 years ago
- quica, a tool to run inter-coder agreement pipelines in an easy and effective way. Multiple measures are run and results are collected… ☆23 · Updated 4 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 4 years ago
- Overlooked Factors in Concept-based Explanations: Dataset Choice, Concept Learnability, and Human Capability (CVPR 2023) ☆9 · Updated 2 years ago
- XAI Experiments on an Annotated Dataset of Wild Bee Images ☆19 · Updated 3 months ago
- How Can I Explain This to You? An Empirical Study of Deep Neural Network Explanation Methods ☆23 · Updated 4 years ago
- ☆15 · Updated 4 years ago
- Fast Axiomatic Attribution for Neural Networks (NeurIPS 2021) ☆15 · Updated last year
- Repository for the paper "On a Guided Nonnegative Matrix Factorization," published in IEEE ICASSP 2021 ☆10 · Updated 2 years ago