Crisp-Unimib / ContrXT
A tool for comparing the predictions of any text classifiers
☆27 · Updated 3 years ago
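To give a rough sense of the task ContrXT targets (comparing what two text classifiers predict on the same data), here is a minimal, generic scikit-learn sketch. It is not ContrXT's API, and the toy corpus and variable names are hypothetical; consult the repository for the actual interface and workflow.

```python
# Generic illustration of comparing two text classifiers' predictions.
# NOT ContrXT's API; corpus and labels below are hypothetical toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus: label 1 = sports, 0 = politics.
texts = [
    "the team won the championship game",
    "the striker scored a late goal",
    "parliament passed the new budget bill",
    "the senate debated the election reform",
    "the coach praised the goalkeeper",
    "the president signed the trade agreement",
]
labels = [1, 1, 0, 0, 1, 0]

# Two different classifiers trained on the same corpus.
clf_a = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(texts, labels)
clf_b = make_pipeline(TfidfVectorizer(), MultinomialNB()).fit(texts, labels)

test = ["the minister discussed the budget", "the goalkeeper saved a penalty"]
pred_a, pred_b = clf_a.predict(test), clf_b.predict(test)

# Report where the two classifiers agree and where they diverge.
for doc, a, b in zip(test, pred_a, pred_b):
    status = "agree" if a == b else "DISAGREE"
    print(f"{status}: {doc!r} -> A={a}, B={b}")
```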
Alternatives and similar repositories for ContrXT
Users interested in ContrXT are comparing it to the libraries listed below:
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆84 · Updated 3 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- Adversarial Black box Explainer generating Latent Exemplars ☆11 · Updated 3 years ago
- GEBI: Global Explanations for Bias Identification. Open-source code for discovering bias in data with a skin lesion dataset ☆18 · Updated 3 years ago
- ☆33 · Updated last year
- 👋 Xplique is a Neural Networks Explainability Toolbox ☆732 · Updated this week
- All about explainable AI, algorithmic fairness and more ☆110 · Updated 2 years ago
- Meaningful Local Explanation for Machine Learning Models ☆42 · Updated 2 years ago
- A library that implements fairness-aware machine learning algorithms ☆127 · Updated 5 years ago
- Codebase for the blog post "24 Evaluation Metrics for Binary Classification (And When to Use Them)" ☆56 · Updated 6 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆639 · Updated 3 weeks ago
- Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰 ☆98 · Updated 2 years ago
- Hands-on tutorial on ML Fairness ☆74 · Updated 2 years ago
- 💡 Adversarial attacks on explanations and how to defend them ☆334 · Updated last year
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆75 · Updated 3 years ago
- A benchmark for evaluating the quality of machine learning local explanations generated by any explainer for text and image data ☆30 · Updated 4 years ago
- Comparing fairness-aware machine learning techniques ☆161 · Updated 3 years ago
- Data and Model-based approaches for Mitigating Bias in Machine Learning Applications ☆22 · Updated 6 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆77 · Updated 3 years ago
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ☆268 · Updated 4 months ago
- 🐦 Quickly annotate data from the comfort of your Jupyter notebook ☆281 · Updated 2 years ago
- Bias Auditing & Fair ML Toolkit ☆747 · Updated last week
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated last year
- Datasets derived from US census data ☆276 · Updated last year
- ☆290 · Updated 2 years ago
- In this work, we propose a deterministic version of Local Interpretable Model Agnostic Explanations (LIME) and the experimental results o… ☆29 · Updated 2 years ago
- Python tools to check recourse in linear classification ☆76 · Updated 5 years ago
- Multi-Objective Counterfactuals ☆43 · Updated 3 years ago
- A Python package for unwrapping ReLU DNNs ☆68 · Updated 2 years ago
- Interpret Community extends the Interpret repository with additional interpretability techniques and utility functions to handle real-world d… ☆440 · Updated last year