Crisp-Unimib / ContrXT
A tool for comparing the predictions of any two text classifiers
☆25 · Updated 2 years ago
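ContrXT contrasts the behaviour of two text classifiers, for example the same model before and after retraining. The sketch below illustrates the raw comparison such a tool starts from, using plain scikit-learn rather than ContrXT's own API; the dataset and model choices are illustrative assumptions, not ContrXT code.

```python
# A minimal sketch of the comparison a contrastive explainer starts from:
# train two text classifiers on the same corpus and find where they disagree.
# This is plain scikit-learn, NOT ContrXT's API; data and models are examples.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB

cats = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

vec = TfidfVectorizer(max_features=5000)
X_train = vec.fit_transform(train.data)
X_test = vec.transform(test.data)

# Two classifiers whose predictions we want to contrast.
clf_a = LogisticRegression(max_iter=1000).fit(X_train, train.target)
clf_b = MultinomialNB().fit(X_train, train.target)

pred_a = clf_a.predict(X_test)
pred_b = clf_b.predict(X_test)
disagree = pred_a != pred_b
print(f"Models disagree on {disagree.sum()} of {len(pred_a)} test documents")
```

ContrXT itself goes beyond this raw disagreement set: per its paper, it traces each classifier's decision logic with surrogate decision trees and derives contrastive, rule-based explanations of why the predictions differ.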
Alternatives and similar repositories for ContrXT
Users interested in ContrXT are comparing it to the libraries listed below.
- Adversarial Black box Explainer generating Latent Exemplars ☆11 · Updated 3 years ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆32 · Updated 6 months ago
- Code repository for the ICML 2020 paper "Fairwashing explanations with off-manifold detergent" ☆12 · Updated 4 years ago
- Code release for the paper "On Completeness-aware Concept-Based Explanations in Deep Neural Networks" ☆53 · Updated 3 years ago
- A collection of Italian benchmarks for LLM evaluation ☆30 · Updated last month
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- 👋 Xplique is a Neural Network Explainability Toolbox ☆689 · Updated 8 months ago
- A Python package for benchmarking interpretability techniques on Transformers. ☆213 · Updated 8 months ago
- LOcal Rule-based Explanations ☆51 · Updated last year
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆96 · Updated 3 months ago
- ☆39 · Updated 6 years ago
- A Python Library for Biquality Learning ☆14 · Updated 2 months ago
- Geolocation Inference for Reddit ☆12 · Updated last year
- Bayesian LIME ☆17 · Updated 10 months ago
- A library that implements fairness-aware machine learning algorithms ☆125 · Updated 4 years ago
- CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox ☆44 · Updated 3 weeks ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆74 · Updated 3 years ago
- Multi-Objective Counterfactuals ☆41 · Updated 2 years ago
- Minimal template for a Python library project ☆11 · Updated 2 years ago
- A Unified Approach to Evaluate and Compare Explainable AI methods ☆14 · Updated last year
- ☆83 · Updated 4 years ago
- [NeurIPS 2024] CoSy is an automatic evaluation framework for textual explanations of neurons. ☆16 · Updated last week
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 10 months ago
- A Python package to compute HONEST, a score to measure hurtful sentence completions in language models. Published at NAACL 2021. ☆21 · Updated 2 months ago
- A Python implementation of the CERTIFAI framework for machine learning models' explainability, as discussed in https://www.aies-conference.com… ☆11 · Updated 3 years ago
- REVISE: A Tool for Measuring and Mitigating Bias in Visual Datasets (https://arxiv.org/abs/2004.07999) ☆111 · Updated 2 years ago
- A fairness library in PyTorch. ☆29 · Updated 11 months ago
- ☆49 · Updated 2 years ago
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆73 · Updated 2 years ago
- ☆33 · Updated last year