Crisp-Unimib / ContrXT
A tool for comparing the predictions of any text classifier
☆25 · Updated 2 years ago
Alternatives and similar repositories for ContrXT
Users who are interested in ContrXT are comparing it to the libraries listed below.
- Adversarial Black box Explainer generating Latent Exemplars ☆12 · Updated 3 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆44 · Updated last month
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Code repository for the ICML 2020 paper "Fairwashing explanations with off-manifold detergent" ☆12 · Updated 4 years ago
- Code accompanying the paper "Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers" ☆31 · Updated 2 years ago
- A benchmark to evaluate the quality of local machine learning explanations generated by any explainer for text and image data ☆30 · Updated 3 years ago
- This repository provides a summary of recent empirical/human studies that measure human understanding with machine explanat… ☆13 · Updated 9 months ago
- Fast Axiomatic Attribution for Neural Networks (NeurIPS 2021) ☆16 · Updated 2 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- Data-SUITE: Data-centric identification of in-distribution incongruous examples (ICML 2022) ☆10 · Updated 2 years ago
- XAI Experiments on an Annotated Dataset of Wild Bee Images ☆19 · Updated 5 months ago
- A PyTorch implementation of the explainable-AI work "Contrastive Layerwise Relevance Propagation (CLRP)" ☆17 · Updated 2 years ago
- Toolkit for explaining the detections of an object detector ☆14 · Updated 2 years ago
- PyTorch reimplementation of computing Shapley values via truncated Monte Carlo sampling from "What is your data worth? Equitable Valuatio… ☆27 · Updated 3 years ago
- Official implementation of "Meta Learning for Few-Shot One-class Classification", 2020 ☆13 · Updated 3 years ago
- Explore/examine/explain/expose your model with the explabox! ☆16 · Updated 2 weeks ago
- Research prototype of deletion-efficient k-means algorithms ☆23 · Updated 5 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- A toolkit for efficient computation of saliency maps for explainable-AI attribution. This tool was developed at Lawrence Livermore Nationa… ☆45 · Updated 4 years ago
- Simple reimplementation of Maximum Density Divergence for Unsupervised Domain Adaptation (https://arxiv.org/abs/2004.12615) in PyTorch Li… ☆26 · Updated 4 years ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated 4 years ago
- Implementation of the Spotlight: a method for discovering systematic errors in deep learning models ☆11 · Updated 3 years ago
- Code for the CVPR 2021 paper "Understanding Failures of Deep Networks via Robust Feature Extraction" ☆36 · Updated 2 years ago
- Federated Learning (FL) experiment simulation in Python ☆17 · Updated this week
- Multi-Objective Counterfactuals ☆41 · Updated 2 years ago
- AQuA: A Benchmarking Tool for Label Quality Assessment, NeurIPS'23 D&B ☆21 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated last year
- Official code for the NeurIPS 2022 paper https://arxiv.org/abs/2208.00780, "Visual correspondence-based explanations improve AI robustness and … ☆42 · Updated last year
- Code repo for the KDD'22 paper "RES: A Robust Framework for Guiding Visual Explanation" ☆32 · Updated 2 years ago
- NoiseGrad (and its extension NoiseGrad++) is a method to enhance explanations of artificial neural networks by adding noise to model weig… ☆22 · Updated 2 years ago