dbvis-ukon / explainer
The official repository containing the source code for the explAIner publication.
☆30Updated last year
Alternatives and similar repositories for explainer
Users interested in explainer are comparing it to the libraries listed below
- Preprint/draft article/blog on some explainable machine learning misconceptions. WIP!☆28Updated 5 years ago
- A Natural Language Interface to Explainable Boosting Machines☆67Updated 11 months ago
- TalkToModel gives anyone the power of XAI through natural language conversations 💬!☆120Updated last year
- A Python package for unwrapping ReLU DNNs☆70Updated last year
- Responsible AI knowledge base☆103Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)☆82Updated 2 years ago
- A collection of implementations of fair ML algorithms☆12Updated 7 years ago
- ☆20Updated 4 years ago
- ☆19Updated 4 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics☆77Updated last year
- A visual analytic system for fair data-driven decision making☆25Updated 2 years ago
- ☆33Updated 11 months ago
- Explainable Artificial Intelligence through Contextual Importance and Utility☆28Updated 9 months ago
- BERT Probe: A Python package for probing attention-based robustness to character- and word-based adversarial evaluation. Also, with recipe…☆18Updated 2 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible.☆42Updated 3 months ago
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data!☆22Updated last month
- Course for Interpreting ML Models☆52Updated 2 years ago
- ☆22Updated 3 years ago
- A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podc…☆74Updated last week
- ☆11Updated 4 years ago
- ☆23Updated 2 years ago
- PyTorch package to train and audit ML models for Individual Fairness☆66Updated last month
- Cross-field empirical trends analysis of XAI literature☆20Updated last year
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023.☆17Updated last year
- Ranking of fine-tuned HF models as base models.☆35Updated last month
- XAI-Bench is a library for benchmarking feature attribution explainability techniques☆66Updated 2 years ago
- Interpretable, intuitive outlier detector intended for categorical and numeric data.☆10Updated 11 months ago
- This repo accompanies the FF22 research cycle focused on unsupervised methods for detecting concept drift☆30Updated 3 years ago
- This repository provides a curated list of references about Machine Learning Model Governance, Ethics, and Responsible AI.☆114Updated last year
- Rule Extraction Methods for Interactive eXplainability☆43Updated 3 years ago